\section{Introduction} Exoplanet demographics provide one of the ultimate arbiters of theories of exoplanet formation and evolution. Nielsen et al. (2019) used the GPI exoplanet survey to search for planets with masses between 2 and 13 $M_{Jup}$ and semimajor axes between 3 and 100 au, finding that the peak occurrence distance of giant planets was in the range of 1 to 10 au. Fulton et al. (2021) found the same peak occurrence distance of 1 to 10 au for the California Legacy Doppler velocity survey. Vigan et al. (2021) showed that the VLT SPHERE direct imaging survey of 150 stars detected 13 sub-stellar companions with masses between 1 and 75 $M_{Jup}$ and semimajor axes between 5 and 300 au, finding that both core accretion (CA; Mizuno 1980) and disk instability (DI; Boss 1997) appeared necessary to explain the detections for the FGK stars in their sample. Gas giant planets with orbital distances as large as 980 au have been discovered and studied (Wu et al. 2022). Forming giant exoplanets at such large distances by CA within the $\sim$ 1 Myr lifetimes of the gaseous portion of protoplanetary disks is challenging (e.g., Chambers 2021), if not impossible. DI has the advantage of forming dense, self-gravitating clumps in a few orbital periods, relaxing the disk lifetime constraint for forming wide-orbit gas giants in situ (e.g., Boss 2011). Evidence for a gas giant protoplanet embedded in a spiral arm 93 au from AB Aurigae has been interpreted as an example of gas giant planet formation by DI (Currie et al. 2022; cf. Cadman et al. 2021; Zhou et al. 2022). DI has also been proposed as the source of the $\sim 10 M_{Jup}$ exoplanet that orbits $\sim$ 560 au from the massive binary b Centauri (Janson et al. 2021). Goda \& Matsuo (2019) examined the demographics of 485 planetary systems and concluded that a hybrid theory of planet formation, involving both CA and DI, was needed to explain the exoplanet detections. Miret-Roig et al. 
(2022) used a direct imaging survey coupled with Gaia and Hipparcos astrometry to search for unbound gas giant exoplanets in the Upper Scorpius and Ophiuchus young stellar associations. Their survey yielded between 70 and 170 free floating planets (FFPs), considerably more than might be expected to form as the tail end of the star formation process of molecular cloud core collapse and fragmentation, and suggested that ejection from unstable planetary systems might make a major contribution during the first 10 Myr. Gravitational microlensing has also found an abundance of likely FFPs, though these could also simply be bound planets with orbital distances greater than about 10 au (Mr\'oz et al. 2020; Ryu et al. 2021). Vorobyov (2016) performed numerical simulations that supported the hypothesis that FFPs might be the result of planets ejected from massive, marginally gravitationally unstable (MGU) disks. While exoplanet demographics reveal orbital characteristics at the present epoch, those orbital parameters are of limited usefulness in constraining initial orbital distances unless exoplanets undergo no significant orbital evolution or migration following their formation. CA is the favored mechanism closer to the host star, as a result of shorter orbital periods, higher gas disk temperatures, and higher surface densities of solids, to name a few factors, while DI may be more effective at larger distances in suitably massive and cool protoplanetary disks. For either CA or DI, a key question then becomes the extent to which protoplanets might migrate away from their birth orbits to their present epoch orbits. As noted by Boss (2013), CA and DI both require giant protoplanets to form in the presence of disk gas. Theoretical work on protoplanetary orbital migration (e.g., Kley \& Nelson 2012) usually focuses on protoplanets in disks where the disk mass is low enough that the disk self-gravity can be neglected, greatly simplifying the analysis. 
Protoplanet evolution in MGU disk models has been calculated by Boss (2005), Baruteau et al. (2011), and Michael et al. (2011). These studies each considered quite different initial conditions and found a wide range of outcomes, from large-scale inward orbital migration to relatively little orbital migration. Boss (2013) studied the evolution of protoplanets formed by either CA or DI in MGU disks, noting that while a MGU disk is essential for formation by DI, even a giant planet formed by CA in a quiescent, non-MGU disk can experience a later phase of MGU disk interactions during the periodic FU Orionis outbursts experienced by young solar-type protostars, which are thought to involve a phase of disk gravitational instability that dumps disk mass onto the protostar (e.g., Zhu et al. 2010; Kuffmeier et al. 2018). Dunhill (2018) similarly suggested that giant planets formed by CA might undergo orbital migration during FU Orionis outbursts. The Boss (2005, 2013) models were performed using the EDTONS three dimensional radiative hydrodynamics code, with a spherical coordinate grid that was fixed at moderate spatial resolution throughout the MGU disk evolutions. Virtual protoplanets were introduced at the beginning of each model to represent protoplanets as point sources of gravity, able to interact gravitationally with the disk and with each other and to accrete mass from the disk by Bondi-Hoyle accretion. Boss (2013) found that protoplanets with initial masses in the range from 0.01 $M_\oplus$ to 3 $M_{Jup}$ and initial orbital distances of 6 to 12 au in a MGU disk around a solar-mass protostar underwent chaotic orbital evolutions for $\sim$ 1000 yr without undergoing the monotonic inward or outward migration that typically characterizes the Type I or Type II behavior of non-self-gravitating disk models (e.g., Kley \& Nelson 2012). The present models of protoplanet orbital evolution employ the Enzo 2.5 hydrodynamics code. 
Enzo is also a three dimensional (3D) code and uses Adaptive Mesh Refinement (AMR) in Cartesian coordinates to ensure that sharp gradients in fluid quantities such as shock fronts can be handled accurately. Enzo is able to replace exceptionally dense disk clumps with sink particles representing newly formed (by DI) protoplanets, which thereafter interact with each other and the disk while accreting disk gas, as do the virtual protoplanets in the Boss (2013) models. We thus seek here to use a completely different 3D hydro code to learn more about the possible outcomes for protoplanet orbital evolution in MGU disks, and to compare the results with the latest advances in exoplanet demographics. \section{Numerical Hydrodynamics Code} As noted by Boss \& Keiser (2013), the Enzo 2.5 AMR code performs hydrodynamics (HD) using any one of three different algorithms (Collins et al. 2010; Bryan et al. 2014): (1) the piecewise parabolic method (PPM) of Colella \& Woodward (1984), (2) the ZEUS method of Stone \& Norman (1992), or (3) a Runge–Kutta third-order-based MUSCL (“monotone upstream-centered schemes for conservation laws”) algorithm based on the Godunov (1959) shock-handling HD method. Enzo is designed for handling strong shock fronts by solving the Riemann problem (e.g., Godunov 1959) for discontinuous solutions of a fluid quantity that should be conserved. The PPM option was used in the current models as a result of the testing on mass and angular momentum conservation performed with Enzo 2.0 by Boss \& Keiser (2013), who found that PPM was better able to conserve mass and angular momentum during the collapse of a rotating isothermal cloud core (Boss \& Bodenheimer 1979) than either ZEUS or MUSCL. Enzo is designed for parallel processing on high performance clusters (HPC), but when run on a single, dedicated 32-core node of the Carnegie memex HPC, a typical model still required 7 months of continuous computation to evolve for $\sim 10^3$ yrs of model time. 
The Enzo 2.5 models were initialized on a 3D Cartesian grid with 32 top grid points in each direction. A maximum of 7 levels of refinement was used, with a factor of two refinement occurring for each level, so that the maximum possible effective grid resolution was $2^7$ = 128 times higher than the initial resolution of $32^3$, i.e., $4096^3$. The models with 7 levels needed an increase in the number of cell buffer zones (NumberBufferZones) to 3 from the default value of 1, which was used for the lower levels of refinement, in order to maintain reasonable time steps. Grid refinement was performed whenever necessary to ensure that the Jeans length constraint (e.g., Truelove et al. 1997; Boss et al. 2000) was satisfied by a factor of 4 for cells with a density at least eight times the initial density. Periodic boundary conditions were applied on each face of the cubic grid box, with each side either 60 au or 120 au in length. A point source of external gravity was used to represent a 1 $M_\odot$ protostar at the center of the grid. The maximum number of Green's functions used to calculate the gravitational potential was 10. The time step typically used was 0.15 of the limiting Courant time step. The results were analyzed with the yt astrophysical analysis and visualization toolkit (Turk et al. 2011). Following Boss \& Keiser (2014), we used the Enzo 2.2 sink particle coding described by Wang et al. (2010). Sink particles are created in grid cells that have already been refined to the maximum extent permitted by the specified number of levels of grid refinement, but where the gas density still exceeds that consistent with the Jeans length criterion for avoiding spurious fragmentation (Truelove et al. 1997; Boss et al. 2000). As described by Boss \& Keiser (2014), sink particles accrete gas from their host cells at the modified Bondi-Hoyle accretion rate proposed by Ruffert (1994). 
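The Jeans-length refinement criterion described above can be sketched as follows. This is a minimal illustration, not Enzo's actual implementation: the function names, the cgs test values, and the per-cell test are assumptions made for clarity.

```python
import math

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def jeans_length(c_s, rho):
    """Local Jeans length lambda_J = sqrt(pi * c_s**2 / (G * rho)) [cm]."""
    return math.sqrt(math.pi * c_s ** 2 / (G * rho))

def needs_refinement(dx, c_s, rho, rho_init, jeans_factor=4.0, overdensity=8.0):
    """Flag a cell of width dx [cm] for refinement when its density is at
    least `overdensity` times the initial density and the cell no longer
    resolves the Jeans length by `jeans_factor` cells (Truelove et al. 1997)."""
    if rho < overdensity * rho_init:
        return False
    return dx > jeans_length(c_s, rho) / jeans_factor
```

For example, with a sound speed of $4 \times 10^4$ cm s$^{-1}$ and a cell width of 1 au, a cell at a density of $10^{-13}$ g cm$^{-3}$ still resolves the Jeans length, whereas one at $10^{-10}$ g cm$^{-3}$ would be flagged for further refinement.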
Two parameters control the conditions under which sink particles can be merged together: the merging mass (SinkMergeMass) and the merging distance (SinkMergeDistance). The former of these two parameters is used to divide the sink particles into either large or small particles. Particles with less mass than SinkMergeMass are first subjected to being combined with any large particles that are located within the SinkMergeDistance. Any surviving small particles after this first step are then merged with any other small particles within the SinkMergeDistance. The merging process is performed in such a way as to ensure conservation of mass and linear momentum. Boss \& Keiser (2014) found that their results for collapse and fragmentation of magnetic molecular cloud cores were not particularly sensitive to the choice of these two key parameters with regard to the tendency of the cores to undergo fragmentation into multiple protostar systems. The current paper uses the Wang et al. (2010) sink particle coding with the SinkMergeMass set equal to 0.01 $M_{Jup}$ and the SinkMergeDistance set equal to 0.1 au, appropriate values for studying gas giant protoplanets in a 120 au-size region. Sink creation was only allowed for cells with densities exceeding the values listed in Table 1 (DensThresh in code units in the $sink\_maker.C$ subroutine). These densities were chosen to be low enough that sinks do form in the models, as the point of the present models was to study the orbital evolution of sink particles representing protoplanets in MGU disks rather than to study the precise physics of DI-induced fragmentation and clump formation in such disks (e.g., Boss 2021a,b). 
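The two-pass merging logic can be sketched as follows. This is a schematic illustration assuming sinks are represented as simple dictionaries; the helper names `merge_pair` and `merge_sinks` are hypothetical and do not correspond to the actual $sink\_maker.C$ routines.

```python
import math

# Parameter values adopted in this paper (units: Jupiter masses and au)
SINK_MERGE_MASS = 0.01   # divides "small" from "large" sink particles
SINK_MERGE_DIST = 0.1    # maximum separation for a merger

def merge_pair(a, b):
    """Combine two sinks, conserving mass and linear momentum."""
    m = a["m"] + b["m"]
    x = tuple((a["m"] * ax + b["m"] * bx) / m for ax, bx in zip(a["x"], b["x"]))
    v = tuple((a["m"] * av + b["m"] * bv) / m for av, bv in zip(a["v"], b["v"]))
    return {"m": m, "x": x, "v": v}

def merge_sinks(sinks):
    """Pass 1: fold each small sink into a large sink within range.
    Pass 2: merge surviving small sinks with each other."""
    large = [dict(s) for s in sinks if s["m"] >= SINK_MERGE_MASS]
    small = [dict(s) for s in sinks if s["m"] < SINK_MERGE_MASS]
    leftovers = []
    for s in small:  # pass 1: small -> large
        host = next((g for g in large
                     if math.dist(g["x"], s["x"]) < SINK_MERGE_DIST), None)
        if host is not None:
            host.update(merge_pair(host, s))
        else:
            leftovers.append(s)
    merged = []
    for s in leftovers:  # pass 2: small -> small
        partner = next((p for p in merged
                        if math.dist(p["x"], s["x"]) < SINK_MERGE_DIST), None)
        if partner is not None:
            partner.update(merge_pair(partner, s))
        else:
            merged.append(s)
    return large + merged
```

Because `merge_pair` uses mass-weighted averages for position and velocity, total mass and linear momentum are conserved by construction, as in the Wang et al. (2010) implementation.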
The sink particles used in the Enzo models are similar to the virtual protoplanets (VPs) used in the EDTONS models: both are introduced in regions of density maxima and are intended to represent gravitationally bound clumps of disk gas that will contract to form gaseous protoplanets, as they orbit in the disk around the central protostar, interacting gravitationally with each other and the disk gas, even as they accrete more disk gas. There are several differences, however. Sink particles are created automatically by Enzo following the criteria noted above, sink particles with close encounters can be merged together, and sink particles that encounter a grid boundary reappear on the opposite boundary as a result of the periodic boundary conditions. VPs, on the other hand, are inserted when a density maximum exceeds the Jeans length or Toomre length criteria (Nelson 2006; Boss 2021a,b) for the current grid spatial resolution. VPs may undergo close encounters with each other but do not suffer mergers. VPs that strike either the inner or outer grid boundary are removed from the calculation. While it would be desirable to compare flux-limited diffusion (FLD) approximation radiative hydrodynamic models from the EDTONS code with FLD radiative hydrodynamic models calculated by Enzo, the FLD routines available in Enzo are limited to non-local thermodynamic equilibrium (non-LTE), as Enzo was developed primarily for cosmological simulations, whereas EDTONS assumes LTE. As a result, we are limited to using a simpler approach to handling the disk thermodynamics with the Enzo code. Boss (1998) showed that disk fragmentation could occur for strongly gravitationally unstable disks with either locally isothermal or locally adiabatic thermodynamics, using disk gas adiabatic exponents ranging from $\gamma = 1$ (purely isothermal) to $\gamma = 7/5$, which is appropriate for molecular hydrogen. 
Given that disks are subject to compressional heating, $\gamma = 1$ is not strictly correct, and given that disks that are optically thick at their midplanes can cool from their surfaces, $\gamma = 7/5$ is not strictly correct either. The physically correct behavior presumably lies somewhere in the middle of these two extremes. Radiative cooling in optically thin regions was employed in the Enzo models, with a critical density for cooling of $10^{-13}$ g cm$^{-3}$; regions with densities above this critical value had the cooling rate decreased proportionally. This critical density was chosen because that is the disk midplane density where the dust grain opacity produces optical depths of order unity (e.g., Boss 1986). The cooling rates were modified from the default values in $cool\_rates.in$ to rates consistent with molecular line cooling in optically thin regions (Boss et al. 2010; Neufeld \& Kaufman 1993). Because Enzo PPM hydrodynamics involves a Riemann solver that cannot be purely isothermal, i.e., $\gamma$ cannot equal unity, the adiabatic index for the disk gas was taken to be $\gamma = 1.001$, appropriate for a nearly isothermal, but still adiabatic equation of state for an ideal gas. Test runs were computed for 100 yrs of evolution with both $\gamma = 7/5$ and $\gamma = 5/3$, but in both cases Enzo produced midplane disk temperatures that were over $10^4$ K, whereas the initial disk had a maximum midplane temperature of 1500 K. The test runs with $\gamma = 1.001$ produced the expected maximum temperatures of $\sim 1500$ K, and hence $\gamma = 1.001$ was adopted for the models presented here. The resulting temperature distributions were also affected by the assumption of radiative cooling; spiral features in the midplane temperature distribution accompanied spiral features in the midplane density distribution, as is to be expected. 
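A back-of-the-envelope calculation illustrates why $\gamma = 1.001$ behaves nearly isothermally: for adiabatic compression of an ideal gas, $T \propto \rho^{\gamma - 1}$, so the temperature rise depends only weakly on compression when $\gamma$ is close to unity. The sketch below is illustrative only and is not part of the Enzo setup.

```python
def temperature_ratio(compression, gamma):
    """Adiabatic ideal gas: T scales as rho**(gamma - 1), so compressing
    a parcel by `compression` raises T by compression**(gamma - 1)."""
    return compression ** (gamma - 1.0)

# Compressing a gas parcel 10^4-fold in density:
near_iso = temperature_ratio(1.0e4, 1.001)     # ~1.009, i.e., <1% heating
diatomic = temperature_ratio(1.0e4, 7.0 / 5.0)  # ~40-fold heating
```

Even a $10^4$-fold density increase heats a $\gamma = 1.001$ gas by less than one percent, while a $\gamma = 7/5$ gas heats by a factor of $\sim 40$, consistent with the excessive midplane temperatures found in the $\gamma = 7/5$ and $\gamma = 5/3$ test runs.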
Finally, the mean molecular weight of the gas was effectively taken to be $\mu = 2.4$, appropriate for a solar composition mixture of molecular hydrogen and helium. \section{Initial Conditions} Table 1 lists the models with variations in the number of levels of grid refinement, the outer disk and envelope temperatures, initial minimum value of the Toomre (1964) $Q$ parameter, disk radius, calculational grid box size, and critical density for sink particle creation. A 60 au box size was used for the 20 au and 30 au radius disks, while a 120 au box size was used for 60 au radius disks, in order to give the disks sufficient room to evolve and expand by the outward transport of angular momentum through gravitational interactions with the spiral arms and clumps. In the number of levels column, ``34'' means the model was initially run with 3 levels and then a fourth level of refinement was added. The initial disks are based on the model HR disk from Boss (2001), with an outer disk temperature of 40 K and a disk envelope temperature of 50 K, which has been used as a standard initial model for many of the author's disk instability models (e.g., Boss 2021a,b). Model HR has an initial minimum Toomre $Q \approx 1.3$, implying marginal stability to the growth of rings and spiral arms. The model HR initial disk has a mass of 0.091 $M_\odot$ within an inner radius of 4 au and an outer radius of 20 au and orbits a 1 $M_\odot$ central protostar. The Enzo models have masses of 0.102 $M_\odot$ for 20 au outer radius disks, slightly higher than in model HR because the Enzo models extend inward to 1 au, 0.142 $M_\odot$ for 30 au outer radius disks, and 0.210 $M_\odot$ for 60 au outer radius disks. The same disk density power-law-like Keplerian structure as in Boss (2001) is used for all of the models, with the structure being terminated at 20 au, 30 au, or 60 au. 
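For reference, the Toomre (1964) stability parameter used throughout is $Q = c_s \kappa / (\pi G \Sigma)$, where $c_s$ is the sound speed, $\kappa$ is the epicyclic frequency (close to the angular frequency $\Omega$ for a near-Keplerian disk), and $\Sigma$ is the gas surface density. Values of $Q \lesssim 1$ imply instability to axisymmetric perturbations, while values of order unity, such as the initial minimum $Q \approx 1.3$ here, imply marginal stability to the growth of nonaxisymmetric perturbations, i.e., rings and spiral arms.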
Figures 1 and 2 show cross sections of the initial disk density distribution for the 20 au disks, both parallel and perpendicular (i.e., disk midplane) to the disk rotation axis. \section{Results} Figure 3 shows the intermediate results for two of the four models that have the identical initial disk configuration (20 au radius) as the Boss (2001) model HR, depicted at the same time (190 yrs of evolution) as the same initial disk model (fldA) in Boss (2021b, cf. Figure 2a). Figure 3 shows that both of these models (3-1K-20 and 6-1K-20) rapidly evolved into a configuration of multiple spiral arms interspersed with dense clumps, as expected for a marginally gravitationally unstable disk. Also as expected, the degree of fragmentation and clump formation increases as the numerical grid resolution increases from 3 to 6 levels. When sink particles are allowed to form, the number of sink particles similarly increases as the resolution is improved. While the background disk looks quite similar for model 3-1K-20 with or without sink particles (Figure 3a,c), there is a clear difference in the case of model 6-1K-20 (Figure 3b,d), where the background disk has become perturbed into a prolate configuration due to the formation of a massive ($\sim 20 M_{Jup}$) secondary companion (at one o'clock), with its own circumplanetary disk and tertiary companion, whose combined tidal forces have evidently distorted the disk's overall appearance. Model fldA of Boss (2021b) had fragmented into five clumps and three virtual protoplanets (i.e., sink particles) by 189 yrs, for a total of eight, considerably more than formed in the present model 3-1K-20, but not as many as in model 6-1K-20, suggesting that even with the quadrupled spatial resolution of the Boss (2021b) EDTONS models, the adaptive mesh refinement feature of Enzo results in significantly improved numerical spatial resolution of the disk instability and fragmentation. 
Confirmation of the formation of long-lived fragments in the model HR disk (Boss 2001, 2021b) with the completely different hydrodynamical method used here provides strong support for the viability of the disk instability mechanism for the formation of gas giant protoplanets and higher mass companions. Figure 4 displays the results after 2000 yrs for the Enzo models in Figure 3. The EDTONS model fldA in Boss (2021b) was stopped after only 189 yrs of evolution, yet still required over 4 years of computation on a single core of a node on memex, a Carnegie Institution computer cluster. EDTONS is based on code initially written in the late 1970s and is not parallelized. Enzo, in contrast, is a modern code designed to run on parallel processing systems like memex, and as a result the Enzo models can be computed much farther in time. Even so, model 3-1K-20 required one week to run for 2000 yrs of model time on a dedicated single memex node with 28 cores, while model 6-1K-20 required one year to run 2000 yrs on a dedicated 28-core node. Three dimensional hydrodynamics at high spatial resolution is computationally expensive, even when a parallelized code is employed. Figure 4 shows that the evolution of these two models diverged considerably following the early fragmentation phase depicted in Figure 3. The two sink particles evident in Figure 4a have masses of $\sim 2 M_{Jup}$ and $\sim 0.6 M_{Jup}$, with a total gas disk mass of $\sim 99 M_{Jup}$, while the 13-odd sink particles in Figure 4b have masses ranging from $\sim 0.2 M_{Jup}$ to $\sim 23 M_{Jup}$, for a total sink particle mass of $\sim 96 M_{Jup}$, leaving a disk gas mass of only $\sim 5 M_{Jup}$. 
Clearly, the final disk mass in model 3-1K-20 far outweighs the mass of the sink particles, and as a result the particles are unable to open gaps in the disk, though the disk has expanded outward to a radius of about 30 au as a result of the transport of disk mass and angular momentum outward, caused by the strong spiral arms evident in Figure 3a,c. The fact that sink particle formation has been so efficient in model 6-1K-20, with the total particle mass some 20 times larger than the disk mass, means that the particles dominate the evolution and are able to clear out a distinct inner gap, centered on about 5 au (Figure 4b). In model 6-1K-20, the sink particles gained the bulk of the disk's mass and angular momentum, so that the disk is not able to expand beyond its initial radius of 20 au. Three sink particles were accelerated to speeds high enough at their orbital locations to be ejected altogether from the system, but because of the periodic boundary conditions imposed on the calculations by the Enzo self-gravity solver, these particles were instead returned to the system and underwent further interactions with the other sink particles and disk gas. Table 2 gives the maximum number of sink particles formed for all the models, as well as the number surviving at the end of the run. Table 2 shows that the maximum number of sinks formed decreases as the initial disk gas temperature is increased, as this results in an increase in the Toomre minimum $Q$ value (Table 1), i.e., in greater stability to the growth of rings and spiral arms, and hence to fragmentation and sink particle formation. By the time that the disk temperature is increased to 160 K, disk fragmentation is completely stifled in the Enzo models, consistent with the flux-limited diffusion approximation models of Boss (2021b), where fragmentation ceased for a minimum Toomre $Q$ greater than 2.2. 
Models 6-2K-30 and 7-2K-30 did not form sink particles, unlike the otherwise identical, but lower-resolution models 3-2K-30, 4-2K-30, and 5-2K-30, because this sequence used a fixed critical density for sink particle formation of $10^{-9}$ g cm$^{-3}$. That choice meant that the dense clumps formed in the two higher resolution models could always be resolved with more grid levels and finer spatial resolution, thereby preventing the clumps from exceeding the critical density required for sink particle formation, at least during the limited amount of model time that the 6- and 7-level models were able to be evolved (326 and 340 yrs, respectively). Small time steps prevented these two models from being evolved farther in time. Figure 5 shows these two models at their final times, showing that the spiral arms and nascent clumps become more distinct as the number of grid levels is increased, as expected when approaching the continuum limit of infinite spatial resolution. Table 2 also lists the number of sink particle mergers, where $N_{merged-sinks} = N_{max-sinks} - N_{final-sinks}$, and the number of times that a sink particle would have been ejected if periodic boundary conditions were not required. Table 2 shows that mergers of sink particles are quite common in all of the models that formed sink particles, and evidently are responsible for much of the gain in mass of the particles, along with the ongoing accretion of disk gas, given that the number of mergers is usually comparable to, or far greater than, the final number of sink particles. The value $N_{escaped-sinks}$ can be quite large due to the sink particles' inability to escape the system; often the same particle bounces in and out in orbital radius and achieves escape velocity multiple times. Achieving the escape velocity usually occurs for particle orbital distances of 30 au to 40 au, but can also occur from 10 au to 20 au in the more unstable disks (e.g., 5-1K-20, 6-1K-20). 
The large numbers of escape episodes in the latter two models are clearly solely a result of the periodic boundary conditions, but they do indicate that ejected protoplanets are to be expected as a natural outcome of a phase of gas disk gravitational instability. Table 2 suggests that such a phase of protoplanetary disk evolution should result in the ejection of several gas giant protoplanets. Figure 6 presents all of the sink particle masses and distances from the central star at the final times for the models. These distances correspond to observed separations in the absence of any other knowledge of the orbital parameters, i.e., the semimajor axis and eccentricity. The final masses range from $\sim 0.1 M_{Jup}$ to $\sim 100 M_{Jup}$, i.e., sub-Jupiters to brown dwarfs and late M dwarf stars. Separations range from inside 1 au to over 30 au. Ejected particles would be at much larger distances, were ejection permitted. Figure 6 shows that the black dots, representing the 20 au radius disks, tend to have higher masses ($> 10 M_{Jup}$) inside 10 au than the blue dots, representing the 60 au radius disks, which tend to have lower masses ($< 1 M_{Jup}$) inside 10 au. This outcome is the result of the 20 au radius disks all starting their evolutions from considerably more gravitationally unstable initial states, i.e., Toomre $Q_{minimum} = 1.3$, than the 60 au radius disks, with initial Toomre $Q_{minimum} = 1.9$ or 2.2. The 20 au radius models thus generally form more massive sink particles, as would be expected. Figure 7 shows the sink particle masses as a function of the orbital semimajor axis at the final times for the models, while Figure 8 depicts these properties for the known exoplanets on the same scales. Figure 3b shows that fragmenting dense clumps appear between about 5 au and 20 au, which is the same distance range as most of the sink particles in Figure 7; only a few have migrated inside 1 au, and only a few orbit beyond about 20 au. 
Clearly the present models produce a goodly number of cool gas giants and brown dwarfs, but do not lend support to the formation and inward migration of the numerous hot and warm exoplanets evident in Figure 8: little evidence for monotonic inward orbital migration is seen. This result is consistent with the EDTONS models of Boss (2013). Finally, Figure 9 shows the sink particle masses as a function of the orbital eccentricity at the final times for the models, while Figure 10 depicts these properties for the known exoplanets on the same scales. The present models show that the processes studied here of fragmentation, mergers, chaotic orbits, and ejections result in the observed wide range of eccentricities, though not the presumably tidally damped, near-zero eccentricities of the hot Jupiters. \section{Discussion} Drass et al. (2016) showed that the initial mass function in the Orion nebula cloud has two peaks, one at 0.25 $M_\odot$ and another at 0.025 $M_\odot$, and suggested that the latter peak was composed of brown dwarfs and isolated planetary-mass objects that had been ejected from circumstellar disks or multiple star systems. The large number of attempted ejections in the Enzo models that are listed in Table 2 fully supports this hypothesis. Feng et al. (2022) combined high-precision Doppler velocity data with Gaia and Hipparcos astrometry to constrain the masses and orbital parameters of 167 sub-stellar companions to nearby stars. Their Figure 3 shows that these 167 companions fully populate a parameter space ranging from semimajor axes of $\sim$ 2 au to $\sim$ 40 au, with masses from $ \sim 4 M_{Jup}$ to $ \sim 100 M_{Jup}$, much like the upper right quadrant of Figure 7. Their Figure 3 also shows orbital eccentricities varying from 0 to 0.75, again in basic agreement with the range evident in the present models in Figure 9. These Enzo models suggest a unified formation mechanism of the 167 sub-stellar companions studied by Feng et al. 
(2022): fragmentation of MGU disks. Galvagni et al. (2012) used a smoothed particle hydrodynamics (SPH) code to study clumps formed at $\sim$ 100 au in a MGU disk, finding that the clumps could contract and heat up enough to begin molecular hydrogen dissociation, resulting in a dynamical collapse phase that can ensure their survival against tidal forces. Their results showed that this collapse phase could occur within $\sim 10^3$ yrs, shorter than the evolution times of the models considered here (Table 2), justifying the replacement of dense clumps with Enzo sink particles or EDTONS virtual protoplanets (e.g., Boss 2005, 2013). Lichtenberg \& Schleicher (2015) used Enzo to study fragments formed by the disk instability process in isothermal disks, but did not employ sink particles or radiative transfer effects, finding that the clumps formed were all lost by inward migration combined with the tidal force of the protostar. Stamatellos (2015) used an SPH code to study disks with radii of 100 au and high Toomre $Q$ values. Planets inserted at 50 au either migrated inward or outward over $2 \times 10^4$ yrs, depending on whether they were allowed to gain mass or not, respectively. Hall et al. (2017) used an SPH code to study the identification and interactions of disk fragments composed of clumps of SPH particles that formed from the fragmentation of a $0.25 M_\odot$ disk of radius 100 au around a $1 M_\odot$ protostar. Their models showed that fragment-fragment interactions early in the evolutions led to scattering of fragments to larger semi-major axes, as large as 250 au, and to increased eccentricities, as high as 0.7. While the periodic boundary conditions used in the present models preclude an assessment of the final semi-major axes after close encounters, the fact that the sink particle velocities were often sufficiently high to predict ejection from the system is consistent with the Hall et al. (2017) results showing efficient scattering outward (cf., Table 2). 
The eccentricity pumping found by Hall et al. (2017) is similarly consistent with that found in the present models (cf. Figure 9). Hall et al. (2017) also studied tidal downsizing and disruption of fragments that ventured too close to the central protostar, finding that more clumps were destroyed by tidal disruption than by disappearing in a merger event. Tidal downsizing was proposed by Nayakshin (2010, 2017) as a means for forming inner rocky worlds from gas giants formed in a disk instability, following the formation of rocky inner cores by the sedimentation of dust grains and pebbles to the center of the giant gaseous protoplanet (Boss 1997). Tidal downsizing remains a creative means to form inner rocky worlds as a result of a gravitationally unstable gas disk. The present sink particle models, as well as the virtual protoplanet models of the EDTONS code, do not allow tidal downsizing to occur, though implicitly the loss of virtual protoplanets that hit the inner disk boundary in EDTONS code calculations could be considered the equivalent of the loss of gas giant protoplanets by tidal disruption. Modeling the interior structure and thermal evolution of slowly contracting gas giant protoplanets is a future challenge for these types of models, and tidal disruption could result in the loss of sink particles that pass close to the central protostar, though it can be seen in Figure 6 that few sink particles passed inside 1 au. Fletcher et al. (2019) performed a code comparison study of the orbital migration of protoplanets inserted at 120 au in disks of 300 au radius, finding that protoplanets of 2 $M_{Jup}$ migrated inward to $\sim$ 40 au to $\sim$ 60 au within $\sim 10^4$ yr. These code comparisons differ considerably from the present models, as only single protoplanets were injected, the disks used a $\gamma = 7/5$ adiabatic index, and the disks were gravitationally stable everywhere, with Toomre $Q \ge 2$. 
As a result, those disks did not undergo the chaotic evolutions of the present models, where the MGU disk produces strong spiral arms that interact with the numerous protoplanets that formed near the outset. Finally, Rowther \& Meru (2020) used an SPH code to study planet survival in self-gravitating disks. They found that a fixed-mass planet, over a range of masses, would migrate inward in the cool outer regions of their disks, but that this migration was halted once the planet reached the warm inner disk. In their models, a single planet at a time is embedded in a disk with a mild spiral arm structure. Compared to the multiple clumps, sink particles, and strong spiral arms that form and interact in the present models (e.g., Figure 3), it is clear that the Rowther \& Meru (2020) planets do not undergo the chaotic orbital motions experienced by the Enzo models here (or the EDTONS models of Boss 2013), which prevent monotonic orbital migration. \section{Conclusions} The use of a completely different three-dimensional hydrodynamical code (Enzo 2.5), with a completely different method for handling nascent protoplanets (sink particles), has produced results in good agreement with those obtained by the EDTONS code and the virtual protoplanet method (Boss 2005). Both codes agree that with high spatial resolution, the standard model HR disk (Boss 2001) fragments rapidly into multiple dense clumps and strong spiral arms. Both codes agree that when these clumps are replaced with particles that can accrete mass from the disk, the particles grow in mass and can orbit chaotically for 1000 yrs to 2000 yrs without suffering monotonic inward or outward orbital migration. In addition, the Enzo models show that the protoplanets have a high probability of close encounters with each other, leading either to mergers, or to being ejected from the protoplanetary system.
Comparisons with the observational data on exoplanet demographics and FFPs suggest that gas disk gravitational instabilities have an important role to play in explaining the formation of sub-stellar companions with a wide range of masses and orbital distances. \acknowledgments I thank Sean Raymond for discussions about FFPs and Floyd Fayton for his invaluable assistance with the memex cluster. I also thank the reviewer for providing several suggestions for improving the manuscript. The computations were performed on the Carnegie Institution memex computer cluster (hpc.carnegiescience.edu) with the support of the Carnegie Scientific Computing Committee. The computations were performed using the Enzo code originally developed by the Laboratory for Computational Astrophysics at the University of California San Diego and now available at https://enzo-project.org/.
\section{Introduction} The natural representation for many sources of unstructured data is intuitive to us as humans: for images, a 2D pixel representation; for speech, a spectrogram or linear filter-bank features; and for text, letters and characters. All of these possess fixed, rigid structure in space, time, or sequential ordering which is immediately amenable to further learning. For other unstructured data sources such as point clouds, semantic graphs, and multi-agent trajectories, such an initial ordered structure does not naturally exist. These data sources are set or graph-like in nature and therefore the natural representation is unordered, posing a significant challenge for many machine-learning techniques. A domain where this is particularly pronounced is in the fine-grained multi-agent player motions of team sport. Access to player tracking data changed how we understand and analyze sport~\citep{miller2014factorized, spatialStructure, wei_largeScaleAnalysis, cervone2014pointwise, passingPaper, chalkboarding, sha_interface18, yue2014learning}. More relevantly, sport has become an increasingly important domain within the machine learning community as an application for expanding our understanding of adversarial multi-agent motion, interaction, and representation~\citep{lucey2013CVPR, le2017ghosting, felsen2018cvae, lt_traj_NIPS, zhan2018generative, Yeh_2019_CVPR, googleRLSoccer}. In sport there exists strong, complex group-structure which is less prevalent in other multi-agent systems such as pedestrian tracking. Specifically, the \textit{formation} of a team captures not only the global shape and structure of the group, but also enables the ordering of each agent according to a ``role'' within the group structure. In this regard, sport possesses relational structure similar to that of faces and bodies, which can be represented as a graph of key-points.
In those domains, representation based on a fixed key-point ordering has allowed for cutting-edge work across numerous tasks with a variety of approaches and architectures \citep{pictoralStructures, AAM, bilinearSpatioTemporal, openPose_data, hands_openPose}. Unlike for faces and bodies, the representation graph in sport is dynamic as players constantly move and switch positions. Thus dynamically discovering the appropriate representation of individual players according to their role in a formation affords us structural information while learning a useful representation for subsequent tasks. This challenge was addressed by the original role-based alignment of \citet{lucey2013CVPR} and subsequently by \citet{bialkowski2016IEEE} and \citet{sha_retrieval17}. Role-based alignment allows us to take unstructured multi-agent data and reformat it into a consistent vector format that enables subsequent machine learning (Fig.~\ref{fig:problem}). \begin{figure} \centering \includegraphics[width=0.8\linewidth, trim={0 3cm 0 1.4cm}]{./figures/Fig0.pdf} \caption{A structured representation enables machine learning of multi-agent data. (Left) Data-points are colored according to the agent identity (letters denote agents in a given frame). (Right) By learning and aligning data to a formation template, we represent agents in a consistent vector form conducive to learning. Agents are now ordered by the role to which they are assigned.} \label{fig:problem} \end{figure} Here we formulate the role-based alignment as consisting of the phases of formation discovery and role assignment. Formation discovery uses unaligned data to learn an optimal \textit{formation template}; during role assignment a bipartite mapping is applied between agents and roles in each frame to produce ``aligned data''. A major limitation in past approaches was the speed of the template discovery process.
In this work we propose an improved approach to the above alignment methods which provides faster and more optimal template discovery for role-based representation learning. Importantly, we seek to learn the same representation as \citet{lucey2013CVPR}, \citet{bialkowski2016IEEE}, and \citet{sha_retrieval17} by maximizing the same objective function of \citet{bialkowski2016IEEE} in a more effective manner. The reduced computational load enables on-the-fly discovery of the formation templates, new context-specific analysis, and rapid representation learning useful for modeling multi-agent spatiotemporal data. Our main contributions are: the formulation of this problem as a three-step approach (formation discovery, role assignment, template-clustering), the use of soft-assignment in the formation discovery phase thereby eliminating the costly hard-assignment step of the Hungarian Algorithm~\citep{hungarian}, a resetting training procedure based on the formation eigenvalues to prevent spurious optima, quantification of the impact of initialization on convergence and stability, a restriction of the training data to key-frames for faster training with minimal impact on the learned representation, and a multi-agent clustering framework which captures the covariances across agents during the template-clustering phase. \section{Background} \subsection{Representing Structured Multi-Agent Data} A collection of agents is by nature a set and therefore no defined ordering exists \textit{a priori}. Imposing an arbitrary ordering introduces significant entropy into the system through the possible permutations of agents in the imposed representation. To circumvent this, some approaches to representing multi-agent tracking data in sport have included sorting the players based on an ``anchor'' agent~\citep{hockeyTeamID}. This is limiting in that the optimal anchor is task-specific, making the representation less generalizable.
An ``image-based'' representation~\citep{yue2014learning, lt_traj_NIPS, miller2014factorized} eliminates the need for an ordering; however, this representation is lossy, sparse, and high-dimensional. The role-based alignment protocol for sport of \citet{lucey2013CVPR} used a codebook of hand-crafted formation templates against which frame-level\footnote{Throughout we use the term ``frame'' to indicate a single moment in time in reference to the data being obtained via optical tracking from video.} samples were aligned. This work was extended by \citet{bialkowski2016IEEE}, which learned the template directly from the data. \citet{sha_retrieval17} further employed a hierarchical template learning framework, useful in both retrieval \citep{sha_interface18} and trajectory prediction \citep{felsen2018cvae, Yeh_2019_CVPR}. \citet{le2017ghosting} similarly learned an agent-ordering directly from the data by learning separate role-assignment and motion-prediction policies in an iterative and alternating fashion. \subsection{Permutation-Equivariant Approaches} Permutation-equivariant approaches seek to leverage network architectures which are insensitive to the ordering of the input data. Approaches using graph neural networks (GNNs) \citep{Kipf_GCN16, neural_message_passing, gnn_review, battaglia_InteractionNetwork, VAIN} have become very popular and shown tremendous promise. These approaches are particularly valuable for tasks (e.g. pedestrian tracking) which lack the strong coherent group structure of sport and therefore cannot leverage methods such as role-alignment. Within sport, \citet{kipf2018neural} used a GNN to predict the trajectories of players while simultaneously learning the edge-weights of the graph. \citet{Yeh_2019_CVPR} demonstrated the advantages in using GNNs to forecast the future motion of players in sports, surpassing both the role~\citep{bialkowski2016IEEE} and tree-based approaches~\citep{sha_retrieval17} on most metrics.
The success of these approaches, however, does not negate the value of role-based alignment. The learned formation structure provides valuable insight into the high-level organization of the group. Furthermore, many traditional machine learning techniques and common deep architectures require an ordered-agent representation. This is again similar to what is seen in the modeling of faces and bodies: great success has been achieved using geometric deep learning~\citep{MoNet, kipf2018neural}, but approaches based on a fixed representation remain popular and effective \citep{taylor2017deep, Kanazawa_2019_CVPR, poseWild, walker17poseGeneration, rayat18pose}. Interesting work has also been done to learn permutations for self-supervised feature learning or ranking tasks \citep{adams2011_sinkhorn, mena2018sinkhorn, DeepPermNet}. Central to these approaches is the process of Sinkhorn normalization \citep{sinkhorn1967}, which allows for soft-assignment during the training process and therefore a flow of gradients. Exploring the application of Sinkhorn normalization to this task is beyond the scope of the current work; however, we provide additional context on this method in Section~\ref{sinkhorn}. \section{Approach} \begin{figure} \centering \includegraphics[width=0.95\linewidth, trim={0 0.5cm 0 1.3cm}]{./figures/fig1.pdf} \caption{An overview of the proposed method. The procedure consists of (1) Normalization, (2) Initialization, (3) Reshaping, (4) Formation Discovery, (5) Template Alignment, (6) Role Assignment, (7) Template Clustering.
In the role assignment step, the template distributions are shown as unfilled thick ellipses and the observed distributions of the role-aligned data are shown as the textured ellipses.} \label{fig:method} \end{figure} \label{approach} \subsection{Problem Formulation} Mathematically, the goal of the role-alignment procedure is to find the transformation ${\displaystyle A:\{{\bm{U}}_{1},{\bm{U}}_{2},\dots,{\bm{U}}_{N}\} \times {\bm{M}} \mapsto [{\bm{R}}_{1},{\bm{R}}_{2},\dots,{\bm{R}}_{K}]}$ which maps the unstructured set ${\bm{U}}$ of $N$ player trajectories to an ordered set (i.e. vector) of $K$ role-trajectories ${\bm{R}}$.~\footnote{Generically $N$ need not equal $K$ as a player may be sent off during a game, but for simplicity it is safe to assume $N=K$ in this work.} Each player trajectory is itself an ordered set of positions ${\bm{U}}_{n}= [x_{s,n}]_{s=1}^{S}$ for an agent $n\in[1,N]$ and a frame $s\in[1,S]$. We recognize ${\bm{M}}$ as the optimal permutation matrix which enables such an ordering. Thus our goal is to find the most probable set $\mathbfcal{{F}^*}$ of 2D probability density functions where \begin{equation} \label{eq:F*} \mathbfcal{{F}^*} = \argmaxS_{\mathbfcal{F}} P(\mathbfcal{F} | {\bm{R}}) \end{equation} \begin{equation} \label{eq:PSum} P({\bm{x}}) = \sum_{n=1}^{N} P({\bm{x}} | n)P(n) = \dfrac{1}{N} \sum_{n=1}^{N} P_{n}({\bm{x}}). \end{equation} \citet{bialkowski2016IEEE} transforms this equation into one of entropy minimization where the goal is to minimize the amount of overlap (i.e. the KL-Divergence) between each role. The final optimization equation in terms of the total entropy $H$ then becomes \begin{equation} \label{eq:F*Entropy} \mathbfcal{{F}^*} = \argminS_{\mathbfcal{F}}\sum_{n=1}^{N}H({\bm{x}} |n). \end{equation} See \ref{old_method} for additional details. The authors then use expectation maximization (EM) to approximate this solution and note similarity to k-means clustering. 
However, as they represent the data non-parametrically in terms of per-role heat maps, hard assignment must be applied at each iteration so the distributions may be updated. Instead, we note that \eqref{eq:PSum} describes the occupancy of space by any agent at any point in time as a mixture of conditional distributions across each of the $N$-roles. This is further equivalent to the sum over the $n$ generating distributions. Thus if we model these generating distributions as $d$-dimensional Gaussian distributions, this reduces the template-discovery process to that of a Gaussian Mixture Model. \subsection{Toy Problem Formulation} Understanding the notion of independence under the different formulations of this problem is key. This may be better understood by considering a toy problem: imagine we have three independent 1D Gaussian distributions which we wish to sample $S$ times each. It is known that we sample from each distribution in rounds, effectively generating the samples in ``triplets'', although the order within the triplets is random. We then seek to reassign the points back to their original distributions. Following the approach of \citet{bialkowski2016IEEE}, the ``structure'' imposed by the triplet sampling is enforced through the hard-assignment at each iteration. Recall, however, that the original distributions were statistically independent; the triplet structure we wish to respect is imposed by the assignment step, not the underlying distributions. Contrastingly, in our method the samples are treated as fully independent; had all samples been taken from the first distribution, followed by the second, followed by the third, the outcome at the ``distribution-discovery'' phase would be identical to that having sampled the data in rounds. Only after the three distributions are estimated would each point in every triplet be assigned to the distribution which maximizes the overall likelihood in that triplet.
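This toy problem can be simulated directly. The sketch below is illustrative only (not the paper's code; the distribution parameters and sample count are made up): it draws scrambled triplets from three known 1D Gaussians and compares the best exclusive assignment (one point per component, per triplet) against letting every point choose its most likely component.

```python
import math
import random
from itertools import permutations

def log_norm(x, mu, sigma):
    # log-density of N(mu, sigma^2) at x
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

random.seed(0)
mus, sigma = [-4.0, 0.0, 4.0], 1.5  # three known generating distributions

free_ll = hard_ll = 0.0
for _ in range(200):
    # one "triplet": a sample from each generating distribution, order scrambled
    pts = [random.gauss(mu, sigma) for mu in mus]
    random.shuffle(pts)
    # unconstrained: every point picks its most likely component
    free_ll += sum(max(log_norm(x, mu, sigma) for mu in mus) for x in pts)
    # exclusive: one point per component, best bipartite matching per triplet
    hard_ll += max(sum(log_norm(x, mus[j], sigma) for x, j in zip(pts, perm))
                   for perm in permutations(range(3)))

assert hard_ll <= free_ll  # exclusivity can only lower the achievable likelihood
```

Because the exclusive matching optimizes over a strict subset of the unconstrained assignments, `hard_ll` can never exceed `free_ll`, mirroring the likelihood argument above.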
Besides being more computationally efficient (see Section~\ref{run_complexity}), this allows us to find the true MLE of the distributions. Our method will always discover a more optimal estimate of Eq.~\ref{eq:PSum}. This can be understood by considering how the assignment is performed during optimization. For each triplet, the likelihood of assigning each point to each distribution is computed in both approaches. In our approach, this gives \textit{the} likelihood under each mixture component. In the hard-assignment approach, however, if two (or more) points in a triplet have their highest likelihood under the same component, the exclusionary assignment \textit{must} result in a lower likelihood than assigning each point to its preferred Gaussian. Furthermore, in our approach, each sample contributes to every component of the mixture, thus the data under the mixture remains ``fixed'' during the optimization process. In contrast, as the hard-assignments are made, the samples contributing to each distribution change each iteration. This, in combination with the sub-optimal likelihood above, effectively ``breaks'' the expectation maximization step and can cause solutions to diverge or oscillate, which is inconsistent with a maximum likelihood solution, whose likelihood must monotonically increase. Thus our approach is computationally efficient, more intuitively captures the independence of the generating distributions versus the structure of the sampling, and ensures a likelihood function that will converge under expectation maximization. \subsection{Formation-Discovery} Our procedure is presented visually in Figure~\ref{fig:method} and algorithmically in Algorithm~\ref{alg:alignment}. Data is normalized so all teams are attacking from left to right and have mean zero in each frame, thereby removing translational effects.
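The normalization step can be sketched as follows. This is a minimal illustration: the function name and the attack-direction flag are assumptions, and the paper's exact convention for flipping the attacking direction may differ.

```python
def normalize_frame(frame, attack_right=True):
    # centre one frame of (x, y) positions on the group mean, removing
    # translation; flip x when needed so every team attacks left to right
    mx = sum(x for x, _ in frame) / len(frame)
    my = sum(y for _, y in frame) / len(frame)
    sign = 1.0 if attack_right else -1.0
    return [(sign * (x - mx), y - my) for x, y in frame]

frame = [(10.0, 5.0), (20.0, -5.0), (30.0, 0.0)]
centred = normalize_frame(frame)
assert all(abs(sum(c)) < 1e-9 for c in zip(*centred))  # zero mean per axis
```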
Following the approach of \citet{bialkowski2016IEEE}, we initialize the cluster centers for formation-discovery with the average player positions. The impact of this choice of initialization is explored in Section~\ref{res:initialization}. We now structure all the data as a single $(SN)\times d$ matrix where $S$ is the total number of frames, $N$ is the total number of agents (10 outfielders in the case of soccer), and $d$ is the dimensionality of the data (2 here). The K-Means algorithm is initialized with the player means calculated above and run to convergence; we find that running K-Means to convergence produces better results than running a fixed number of iterations as is commonly done for initialization. The cluster-centers of the last iteration are then used to initialize the subsequent mixture of Gaussians. Mixtures of Gaussians are known to suffer from component collapse and becoming trapped in pathological solutions. To combat this, we monitor the eigenvalues $(\lambda_{i})$ of each of the components throughout the EM process. If the eigenvalue ratio of any component becomes too large or too small, the next iteration runs a Soft K-Means (i.e. a mixture of Gaussians with spherical covariance) update instead of the full-covariance update. We find that the range $\frac{1}{2}<\frac{\lambda_{1}}{\lambda_{2}}<2$ works well. In practice, we find this is often unnecessary when analyzing a single game as the player-initialization provides the necessary stabilization, but it becomes important for analysis over many teams/games where that initialization signal is weaker. We refer to this set of $K$ distributions which maximizes the likelihood of the data as the \textit{Formation}, which we denote $\mathbfcal{{F}^*}$. Note that the formation is a set of distributions. To enforce an ordering, we must align to a parent template, ${\bm{G}}^*$, which is an ordered set of distributions. The specific ordering of this template is unimportant so long as it is established and fixed.
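The eigenvalue-ratio guard described in this subsection can be sketched for 2D ($d=2$) covariances as follows; the function names are illustrative, while the range $\frac{1}{2}<\lambda_{1}/\lambda_{2}<2$ is taken from the text.

```python
import math

def eig2(cov):
    # eigenvalues of a symmetric 2x2 covariance [[a, b], [b, c]]
    (a, b), (_, c) = cov
    t, d = a + c, a * c - b * b        # trace and determinant
    r = math.sqrt(max(t * t / 4 - d, 0.0))
    return t / 2 + r, t / 2 - r        # lam1 >= lam2

def needs_spherical_update(cov, lo=0.5, hi=2.0):
    # true when the component is too elongated (or too flat), signalling that
    # the next EM iteration should fall back to a spherical Soft K-Means update
    lam1, lam2 = eig2(cov)
    ratio = lam1 / max(lam2, 1e-12)
    return not (lo < ratio < hi)

assert not needs_spherical_update([[1.0, 0.1], [0.1, 1.2]])  # well-conditioned
assert needs_spherical_update([[5.0, 0.0], [0.0, 1.0]])      # elongated -> fall back
```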
We align $\mathbfcal{{F}^*}$ to ${\bm{G}}^*$ by finding the Bhattacharyya distance~\citep{bhatt} between each distribution in $\mathbfcal{{F}^*}$ and ${\bm{G}}^*$, given by $ D_B=\frac{1}{8}(\mu_{\mathbfcal{{F}^*}_{i}}-\mu_{{\bm{G}}^*_{j}})^{T}\sigma^{-1}(\mu_{\mathbfcal{{F}^*}_{i}}-\mu_{{\bm{G}}^*_{j}}) + \frac{1}{2}\ln(\frac{\det\sigma}{\sqrt{\det\sigma_{\mathbfcal{{F}^*}_{i}}\det\sigma_{{\bm{G}}^*_{j}}}}) $ where $ \sigma = \frac{\sigma_{\mathbfcal{{F}^*}_{i}} + \sigma_{{\bm{G}}^*_{j}}}{2} $, to create a $K\times K$ cost matrix and then use the Hungarian algorithm to find the best assignment. We have now produced our \textit{Template}, $\mathbfcal{T^*}$, an ordered set of distributions with an established ordering that maximizes the likelihood of the data. \subsection{Role-Assignment} The process of role-assignment maps each player in each frame to a specific role with the restriction that only one player may occupy a role in a given frame. We find the likelihood that each agent belongs to each of the discovered distributions in each frame, which was already calculated during the formation-discovery step. This produces an $N\times K$ cost matrix in each frame; the Hungarian algorithm is again used to make the optimal assignment. Thus we have achieved the tasks of formation-discovery and role-assignment having had to apply the Hungarian algorithm on only a single pass of the data. We now represent the aligned data as a $S \times(dK)$ matrix ${\bm{R}}$. \subsection{Clustering Multi-Agent Data} With an established well-ordered representation, we are now able to cluster the multi-agent data to discover sub-templates and perform other analysis. Sub-templates may be found either through flat or hierarchical clustering.
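The template-alignment step above can be sketched as follows. This is an illustration, not the paper's implementation: for clarity it brute-forces the optimal bipartite matching over permutations rather than using the Hungarian algorithm, which is only practical for small $K$; the per-frame role-assignment step uses the same matching machinery with an $N\times K$ likelihood-based cost matrix.

```python
import math
from itertools import permutations

def bhattacharyya_2d(mu1, cov1, mu2, cov2):
    # D_B between two 2D Gaussians; covariances given as [[a, b], [b, c]]
    avg = [[(cov1[i][j] + cov2[i][j]) / 2 for j in range(2)] for i in range(2)]
    det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
    d = det(avg)
    inv = [[avg[1][1] / d, -avg[0][1] / d], [-avg[1][0] / d, avg[0][0] / d]]
    dm = [mu1[0] - mu2[0], mu1[1] - mu2[1]]
    quad = sum(dm[i] * inv[i][j] * dm[j] for i in range(2) for j in range(2))
    return quad / 8 + 0.5 * math.log(d / math.sqrt(det(cov1) * det(cov2)))

def align(formation, template):
    # each entry is a (mu, cov) pair; brute-force the optimal one-to-one
    # matching (the paper uses the Hungarian algorithm on this cost matrix)
    K = len(formation)
    cost = [[bhattacharyya_2d(*f, *g) for g in template] for f in formation]
    best = min(permutations(range(K)),
               key=lambda p: sum(cost[i][p[i]] for i in range(K)))
    return list(best)

I = [[1.0, 0.0], [0.0, 1.0]]
form = [((-2.0, 0.0), I), ((2.0, 0.0), I)]
templ = [((2.0, 0.0), I), ((-2.0, 0.0), I)]
print(align(form, templ))  # [1, 0]: each distribution maps to its twin
```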
Generically, we seek to find a set of clusters ${\bm{C}}$ which partitions the data into distinct states according to: \begin{equation} \label{eq:partition} \argminS_{{\bm{C}}} \sum_{C_{k}\in {\bm{C}}}\sum_{{\bm{R}}_{i},{\bm{R}}_{j}\in C_{k}} \|P({\bm{R}}_{i}) - P({\bm{R}}_{j})\|_{2} \end{equation} For flat clustering, a $dN$-dimensional K-Means model is fit to the data. To help initialize this clustering, we seed the model with the template means plus a small amount of noise. To determine the optimal number of clusters we use a measure similar to the Silhouette score~\citep{rousseeuw1987silhouettes}: \begin{equation} \label{eq:silhouette} \displaystyle \mathbb{E}({\bm{R}}) = \frac{1}{|{{\bm{R}}}|} \sum_{C_{k}\in {\bm{C}}}\sum_{{\bm{R}}_{i}\in C_{k}}\frac{\|P({\bm{R}}_{i})-\mu_{kn}\|_{2} - \|P({\bm{R}}_{i})-\mu_{k}\|_{2}}{\|P({\bm{R}}_{i})-\mu_{kn}\|_{2}} \end{equation} where $\mu_{k}$ is the mean of the cluster that example ${\bm{R}}_{i}$ belongs to and $\mu_{kn}$ is the mean of the closest neighbor cluster of example ${\bm{R}}_{i}$. Equation~\ref{eq:silhouette} measures the dissimilarity between neighboring clusters and the compactness of the data within each cluster. By maximizing $\mathbb{E}$ we seek to capture the most discriminative clusters. To learn a tree of templates through hierarchical clustering, we follow the method of \citet{sha_retrieval17} with minor modification of how the clusters and templates are initialized. Algorithm~\ref{alg:growTree} outlines this procedure. \section{Results} \subsection{Dataset} For this work, we used an entire season of player tracking data from a professional European soccer league, consisting of 380 games, 6 of which were omitted due to missing data. The data is collected from an in-venue optical tracking system which records the $(x,y)$ positions of the players at 10Hz. The data also contains single-frame event-labels (e.g.
pass, shot, cross) in associated frames; these events were used only to identify which frames contained the onset of an event, which we call \textit{event-frames}. Unless explicitly noted, the analysis used only event-frames for training, providing over 1.8 million samples across the season. \subsection{Run Complexity} \label{run_complexity} Finding the optimal solution to K-Means is NP-hard, even for 2 clusters. However, through standard methods K-Means clustering can achieve an average per-iteration complexity of $\mathcal{O}(samples\cdot clusters\cdot dimensions)$ while Gaussian mixture models have a complexity of $\mathcal{O}(samples\cdot clusters\cdot dimensions^{2})$ per iteration due to the additional calculation of the precision matrix \citep{lloyd1982least, gmm_complexity}. Note that for all algorithms, $samples$ becomes $SN$ since each agent in each frame contributes to the distributions. The Hungarian Algorithm has a complexity of $\mathcal{O}(elements^{3})$ per application. In the original algorithm of \citet{bialkowski2016IEEE}, the cost matrix per frame is calculated in a manner resembling that of the GMM, requiring the full distribution (i.e. mean and precision matrix) to be computed so the likelihoods may be calculated. However, the Hungarian Algorithm is then applied across the $N$-agents in each of the $S$-frames. This produces a per-iteration complexity of $\mathcal{O}(SNKd^{2}N^{3})$. With $K=N$, this simplifies to $\mathcal{O}(SN^{5}d^{2})$ (see Table~\ref{table-complexity} of Section~\ref{ap_run_complexity}). In contrast, K-Means and GMM have per-iteration complexities of $\mathcal{O}(SN^{2}d)$ and $\mathcal{O}(SN^{2}d^{2})$, respectively. Therefore, for a sport like soccer, $N=10$, causing the hard-assignment based algorithm to be $\sim$1000 times slower than the proposed approach. \subsection{Comparison of Discovered $\mathbfcal{{F}^*}$} \label{res:likelihood} As all of these methods are unsupervised, there is no notion of a ``more accurate'' formation.
However, as the goal is to find the $\mathbfcal{{F}^*}$ which maximizes the likelihood of the data, for each team-game-period (TGP), we computed the formation via hard-assignment and via our current method, and computed the per-sample average log-likelihood of the data under each method. Our method produced a higher (i.e. more likely) log-likelihood for \textit{every} TGP-formation, consistent with the theoretical guarantees (Figure~\ref{fig:reconstruction}A). In general, the difference between the two approaches was very small, an average difference in log-likelihood of 0.028. We compute the field area covered by a role as $A=\pi\sqrt{\lambda_1\lambda_2}$ where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the covariance matrix for that role. On average, the field area covered by a role under the current method is 0.021 m$^2$ smaller than the corresponding role under \citet{bialkowski2016IEEE}. This is consistent with the formations learned via the two methods being extremely similar; the average KL-Divergence~\citep{KL} between corresponding roles under the two methods is 0.14 nats. \begin{figure} \centering \includegraphics[width=0.95\linewidth, trim={1cm 4cm 0cm 0}]{./figures/results.pdf} \caption{(A) Difference in the per-sample log-likelihood between the templates learned via the current method and via hard-assignment. All of the values are positive, demonstrating that the formation learned under the current method always captures the data better. (B) Template-based alignment produces a more compressed representation of the data than an identity-based representation. Left: reconstruction error as a function of the number of clusters. Right: variance accounted for as a function of the number of eigenvectors used.
In both, \textit{Role\_current} corresponds to the method of the current work and \textit{Role\_hard} (often directly under Role\_current) corresponds to the hard-assignment approach.} \label{fig:reconstruction} \end{figure} \subsection{Compression Evaluation} Template-based alignment has been shown to produce a compressed representation of multi-agent spatiotemporal data~\citep{lucey2013CVPR}. We repeat this analysis here in Figure~\ref{fig:reconstruction}. Similar to the approach of \citet{sha_retrieval17}, we evaluate the compressibility of the approach using clustering and principal component analysis (PCA). We randomly selected 500,000 frames from the larger data set. Frames were aligned according to Algorithm~\ref{alg:alignment} (``Role\_current'' in Figure~\ref{fig:reconstruction}), via hard-assignment (``Role\_hard''), or left unordered (``Identity''). K-Means clustering was applied to both the original unaligned data and the aligned data for varying values of K. The average within-cluster-error (WCE) was calculated according to $ \text{WCE} = \frac{1}{|{\bm{R}}|} \sum_{C_{k}}\sum_{{\bm{R}}_{i}\in C_{k}}\|{\bm{R}}_{i}- \mu_{k}\|_{2} $ where we again use $C_{k}$ to indicate the $k^{th}$ cluster after K-Means clustering. Similarly, we run PCA on both the unaligned and aligned data and compute the variance explained by the eigenvectors: $ \text{Variance Explained} = \frac{\lambda_{k}}{\sum_{i=1}^{D}\lambda_{i}} $ where $\lambda_{i}$ is the $i^{th}$ eigenvalue, indicating the significance of the $i^{th}$ eigenvector. Role-based representation, regardless of the method used to compute it, is significantly more compressive than an identity-based representation. The representation computed via the current method is slightly more compressive than under hard assignment: the per-player reconstruction error over the range in Figure~\ref{fig:reconstruction}B is on average 0.76 m lower and the variance explained is on average 0.068 higher.
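The WCE metric can be sketched as follows (an illustrative helper operating directly on aligned frame vectors; the function name is an assumption):

```python
import math

def within_cluster_error(points, labels, centers):
    # WCE: mean Euclidean distance from each aligned frame vector to its
    # assigned cluster centre, averaged over all samples
    return sum(math.dist(p, centers[k]) for p, k in zip(points, labels)) / len(points)

# two tight, well-separated clusters give a small WCE
pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
print(within_cluster_error(pts, [0, 0, 1, 1], [(0.5, 0.0), (10.5, 0.0)]))  # 0.5
```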
\subsection{Impact of Initialization and Key-Frames} \label{res:initialization} The original template-learning procedure proposed initializing the algorithm with the distributions of each player, as players tend to spend much of their time in a specific role. In the subsequent work of \citet{sha_retrieval17}, a random initialization at each layer was proposed. To assess the impact of the player-mean initialization, we ran Alg.~\ref{alg:alignment} 20 times per TGP, each of which contains about 1500 frames, and recorded the reconstruction error during the K-Means initialization phase of the algorithm. While the exact reconstruction error is sample-specific, all samples showed the same trend as Figure~\ref{fig:initialization}A: player-mean initialization begins with a much lower reconstruction error and converges significantly more quickly, often within 10 steps. In contrast, the random initialization is much more variable, takes many more iterations to converge, and often does not converge to as good a solution. The use of event-only ``key-frames'' is also a key performance and stabilization measure. Limiting the data to event-only frames reduces the data by a factor of $\sim$10, producing a speed-up of the same factor. This has minimal impact on the learned template, as seen in Figure~\ref{fig:initialization}B. In most instances, the templates learned are almost identical: the average L2-distance between the centers of two corresponding role distributions is 0.24 m and the average Bhattacharyya distance~\citep{bhatt} between two corresponding role distributions is 0.078. \begin{figure} \centering \includegraphics[width=0.95\linewidth,trim={0 3.7cm 0 1.5cm}]{./figures/Initialization.pdf} \caption{Impact of initialization and key-frame selection. (A) Player-mean initialization (red) enables the K-Means initialization to run to convergence in fewer iterations than random (blue) initialization.
(B) Learning the formation on event-only ``key-frames'' (thick line, no hashing) results in formations which are very similar to the formations learned on all data (thin line, hashing), but runs significantly faster due to the reduced data size and is less prone to finding spurious optima. Left: An average example showing the formations learned on the two sets of data are very similar. Right: An unusual ``bad'' example showing more disagreement between the two data sets.} \label{fig:initialization} \end{figure} \subsection{Context-Specific Formations} Previously, due to the slowness of the hard-assignment approach, templates had to be learned as a part of a preprocessing step before storage/analysis/consumption. Usually this would be done at the TGP-level, generating a total of 4 ``specialist'' templates per game. In contrast, the proposed method allows templates to be computed ``on the fly''. For several thousand rows of data, the formation can be discovered and aligned in only a few seconds. This allows us to select data under interesting contexts and learn the template that best describes those scenarios across many games. Figure~\ref{fig:context} shows two such analyses that this method unlocks. On the left (A) we examine the formation of a team across an entire season when they are attacking in and defending against two very different ``styles'' of play (the very offensively aggressive counterattack, and a conservative ``hold the ball'' maintenance style) \citep{ruiz2017leicester}. Similarly, we can examine how a team positions itself when leading or trailing late in a game, both at home and away (B). In addition to learning the formation, we can add back in the overall group positioning to see where on the pitch the team attempts to position itself. Additionally, we can learn and align the unique formations of every team across an entire season in a matter of minutes (see Figure~\ref{fig:league} in \ref{leagueAnalysis}).
Other potential analyses could include computing the formation after substitutions or analyzing how teams perform when certain individuals occupy a given role; such analyses are left to future work. \begin{figure} \centering \includegraphics[width=0.9\linewidth,trim={0 2.6cm 0 0}, clip]{./figures/Context.pdf} \caption{Context-specific templates. (A) We trained distinct templates of a given team while attacking and defending against certain modes (a.k.a.~``styles'') of play. Data is aggregated over multiple games across the season. (B) We trained distinct templates of a given team while defending during the last 10 minutes of the games while trailing and leading, both home and away. Here we have added back in the average team (i.e. group) position to show the overall positioning on the pitch.} \label{fig:context} \end{figure} \section{Summary} For multi-agent systems with a high degree of structure such as that seen in team sport, we are able to learn a mapping which takes the set of agents to an ordered vector of agents without introducing undue entropy from permutation. In this work we have shown an improved method for learning the group representation of structured multi-agent data which is significantly faster. Additionally, the monotonically decreasing nature of its objective function provides stability. Our approach exploits the independence of the role-generating distributions during the template-learning phase and enforces the hard assignment of a single agent to a single role only during the final alignment step. This new approach, in combination with a smart choice of key-frame selection and initialization, allows for this representation to be learned over $n^3$ times faster, a factor of more than 1000 for a sport like soccer. By learning this representation, we are able to perform season-wide contextual and on-the-fly representation learning which was previously computationally prohibitive.
\section{Introduction} Dirac semi-metals, whose low-energy physics can be described by the three-dimensional (3D) pseudorelativistic Dirac equation with linear dispersion around the Fermi level \cite{Burkov2011}, have attracted much attention in recent years, owing to their exotic physical properties \cite{WangZJ-2012-Na3Bi,WangZJ-2013,li2010dynamical,potter2014quantum,Parameswaran2014} and their great potential for future applications \cite{Abrikosov1998,liang2014,He2014}. Current studies mainly focus on two types of Dirac semi-metals with both inversion symmetry and time-reversal (TR) symmetry. One is achieved at the critical point of a topological phase transition. This type of Dirac semi-metal is not protected by any topology and can be gapped easily via small perturbations \cite{sato2011-TlBiSSe,wu2013sudden,LiuJP2013}. In contrast, the other type is protected by a uniaxial rotation symmetry \cite{ChenF2012} and is therefore quite stable. According to the even or odd parity of the states on the axis of $C_n$ rotation, the symmetry-protected Dirac semi-metals can be further classified into two subclasses \cite{YangBJ2014}. The first subclass has a single Dirac point (DP) at a time-reversal invariant momentum (TRIM) point on the rotation axis protected by the lattice symmetry \cite{YoungSM2012,Steinberg2014}, while the second one possesses non-trivial band inversion and has a pair of DPs on the rotation axis away from the TRIM points. For materials of the second subclass (such as Na$_3$Bi \cite{WangZJ-2012-Na3Bi,liu2014discovery}, Cd$_3$As$_2$ \cite{WangZJ-2013,liu2014stable,Borisenko2014,yi2014evidence,liang2014,JeonSJ2014,He2014,Narayanan2015}, and some charge-balanced compounds \cite{gibson20143d,du2014dirac}), the non-zero $\mathbb{Z}_{2}$ number can be well defined on the corresponding two-dimensional (2D) plane of the Brillouin zone (BZ) \cite{Morimoto2014,Gorbar2015}.
And due to the non-trivial topology, these stable Dirac semi-metals are regarded as a copy of Weyl semi-metals \cite{YangBJ2014}. Thus, Fermi arcs are observed on specific surfaces \cite{xu2015}, and a quantum oscillation of the topological property is expected in thin films as the thickness is varied \cite{WangZJ-2013}. In spite of this successful progress, the 3D Dirac semi-metal materials either take uncommon lattice structures or contain heavy atoms, which are not compatible with the current semiconductor industry. On the other hand, the group \uppercase\expandafter{\romannumeral4} elements, including C, Si, Ge, Sn and Pb, have been widely used in electronics and microelectronics. Generally, for some of the group \uppercase\expandafter{\romannumeral4} elements, the diamond structure is one of the most stable 3D forms under ambient conditions. However, under specific experimental growth conditions, various allotropes with exotic physical and chemical properties have been discovered experimentally. For example, the new orthorhombic allotrope of silicon, Si$_{24}$, is found to be a semiconductor with a direct gap of 1.3 eV at the $\Gamma$ point \cite{kim2015Si24}; and the 2D forms of silicene \cite{Seymur2009,Seymur2013-Sil,Seymure-sil-2014}, germanene \cite{Daviala2014,Chensi-2014} and stanene \cite{TangPz2014-stanene,Yong2013,zhu2015epitaxial} have been theoretically predicted to exist or experimentally grown on different substrates; they can be 2D topological insulators (TIs) and used as 2D field-effect transistors \cite{tao2015silicene}. In this article, by using \emph{ab initio} density functional theory (DFT) with a hybrid functional \cite{heyd2003hybrid}, we predict new 3D metastable allotropes of Ge and Sn with a staggered layered dumbbell (SLD) structure, named germancite and stancite, and discover that they are stable Dirac semi-metals with a pair of gapless DPs on the $C_3$ rotation axis protected by the lattice symmetry.
Similar to the conventional Dirac semi-metals, such as Na$_3$Bi and Cd$_3$As$_2$, the topologically non-trivial Fermi arcs can be observed on the surfaces parallel to the rotation axis in germancite and stancite. Via tuning of the Fermi level, a Lifshitz transition can be observed in momentum space. More importantly for future applications, the thin film of germancite is found to be an intrinsic 2D TI, and ultrahigh mobility and giant magnetoresistance can be expected in these compounds due to the 3D linear dispersion. \section{Methods} The calculations were carried out by using DFT with the projector augmented wave method \cite{PhysRevB.50.17953,PhysRevB.59.1758}, as implemented in the Vienna \textit{ab initio} simulation package \cite{PhysRevB.54.11169}. Plane-wave basis sets with kinetic energy cutoffs of $\mathrm{250~eV}$ and $\mathrm{150~eV}$ were used for germancite and stancite, respectively. The structure is allowed to fully relax until the residual forces are less than $1\times 10^{-3}~\mathrm{eV/\AA}$. The Monkhorst--Pack $k$-point grid is $9\times 9\times 9$. With the relaxed structures, the electronic structure calculations of germancite and stancite using the hybrid functional HSE06 \cite{heyd2003hybrid} were performed with and without SOC. The maximally localized Wannier functions \cite{Mostofi2008685} are constructed to obtain the tight-binding Hamiltonian for the Green's function method \cite{0305-4608-15-4-009}, which is used to calculate the surface electronic spectrum and surface states. \section{Results} As shown in Fig. \ref{fig:1}, germancite and stancite share the same rhombohedral crystal structure with the space group $D_{3d}^6$ ($R\bar{3}c$) \cite{PhysRevB.90.085426}, which contains spatial inversion symmetry and $C_3$ rotation symmetry along the trigonal axis (defined as the $z$ axis). In one unit cell, fourteen atoms bond with each other to form six atomic layers; in each layer, one dumbbell site can be observed.
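The $C_3$ rotation symmetry about the trigonal axis, which is what protects the Dirac points discussed below, can be illustrated with a small numerical check: a $120^\circ$ rotation about the axis cyclically permutes a generic set of rhombohedral primitive vectors and hence maps the lattice onto itself. The lattice parameters in this sketch are placeholders, not the relaxed germancite/stancite values.

```python
import numpy as np

# Illustrative rhombohedral (trigonal) primitive vectors: equal length, equal
# pairwise angles, trigonal axis taken as z. r and c are placeholder values.
r, c = 1.0, 2.0
a = [np.array([r * np.cos(2 * np.pi * k / 3),
               r * np.sin(2 * np.pi * k / 3), c]) for k in range(3)]

# C3 rotation by 120 degrees about the z (trigonal) axis.
t = 2 * np.pi / 3
C3 = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])

# The rotation cyclically permutes the primitive vectors, so the Bravais
# lattice is invariant under C3.
for k in range(3):
    assert np.allclose(C3 @ a[k], a[(k + 1) % 3])
```

The same permutation argument holds for any rhombohedral lattice, independent of the placeholder values of $r$ and $c$.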
To clearly visualize the SLD structure of germancite and stancite, we plot the side view of the hexagonal lattice in Fig. \ref{fig:1}(b) and the top view from the (111) direction in Fig. \ref{fig:1}(c). As shown by the grey shadow, the layers containing dumbbell sites stack along the (111) direction in the order $\cdots B\bar{A}C\bar{B}A\bar{C}\cdots$. The interlayer interaction is covalent bonding between adjacent layers, whose bond lengths are almost equal to those of the intralayer bonds (the difference is about $0.03$~\AA). Meanwhile, different from the diamond structure, the tetrahedral symmetry is absent in the SLD structure and the coupling here is not the typical $sp^3$ hybridization. Furthermore, in order to test the structural stability, we calculate the phonon dispersion of germancite and stancite, shown in Fig. \ref{fig:1}(e). It can be seen that the frequencies of all modes are positive over the whole Brillouin zone, which indicates that the SLD structures are thermodynamically stable. Moreover, compared with the other experimentally discovered metastable allotropes of Ge and Sn \cite{guloy2006guest,kiefer2010synthesis,PhysRevLett.110.085502,ja304380p,Ceylan2015407,PhysRevB.34.362}, germancite and stancite have the same order of magnitude of mass density and cohesive energy (see Supplemental Information for details), so we expect that germancite and stancite could be synthesized in future experiments. \begin{figure} \centerline{ \includegraphics[clip,width=0.8\linewidth]{Figure1.eps}} \caption{(Color online) (a) The unit cell of the SLD structure with the three primitive lattice vectors denoted \textbf{a$_{1,2,3}$}. The balls in different colors stand for the same kind of atoms in different layers. (b) The side view and (c) top view of the SLD structure. The layers containing dumbbell (DB) structures are labelled.
The letters ($A,B,C$) denote the positions of DB sites and the bar is used to distinguish between the two trigonal lattices transformed into each other by inversion. As an example, the top view of two adjacent layers (marked by dashed blue lines) is shown. The DB structures are labeled by the grey shadow shown in the top view of a single layer, and the atoms in one DB structure are represented by grey balls. (d) The 3D Brillouin zone (BZ) of germancite and stancite. The four inequivalent TRIM points are $\Gamma$ (0,0,0), $L$ (0,$\pi$,0), $F$ ($\pi$,$\pi$,0) and $T$ ($\pi$,$\pi$,$\pi$). The hexagon and square, connected to $\Gamma$ by blue lines, show the 2D BZs projected onto the (111) and (2$\bar{1}\bar{1}$) surfaces respectively, and the high-symmetry $k$ points are labelled. (e) The phonon dispersion of germancite and stancite along high symmetry lines of the 3D BZ.} \label{fig:1} \end{figure} The calculated electronic structures of germancite and stancite around the Fermi level are shown in Fig. \ref{fig:2}(a), in which the solid lines and the yellow shadow stand for the bulk bands with and without spin-orbit coupling (SOC) respectively. It can be observed that, when the SOC effect is not included, germancite is a conventional semi-metal whose conduction-band bottom and valence-band top touch at the $\Gamma$ point with parabolic dispersions, while stancite is a metal whose band touching at the $\Gamma$ point lies above the Fermi level. When the SOC effect is fully considered, our calculations indicate that both germancite and stancite are 3D Dirac semi-metals with a pair of DPs on the trigonal rotation axis (DPs at (0,0,$\pm k_{z0}$)). Therefore, the low-energy physics of this kind of material can be described by the 3D Dirac-type Hamiltonian. The schematic band structure based on the effective $k\cdot p$ model (see Supplemental Information for details) for germancite and stancite is shown in Fig.
\ref{fig:2}(c), in which the pair of 3D DPs is clear. To understand the physical origin of the 3D gapless Dirac fermions in the SLD structure, we plot the schematic diagram of the band evolution for germancite and stancite in Fig. \ref{fig:2}(b). In contrast to the isotropic coupling in the diamond structure, the hybridizations in the layered SLD structure are anisotropic: the inter-layer couplings are relatively weaker than the intra-layer couplings, and the $p_z$ and $p_{x\pm iy}$ states are split. Furthermore, based on our calculations, this kind of anisotropic coupling further shifts down the anti-bonding state of the $s$ orbital, which becomes even lower than the bonding states of the $p_{x\pm iy}$ orbitals at the $\Gamma$ point. So the band inversion occurs at the $\Gamma$ point even without the SOC effect, and the SOC herein just removes the degeneracy of the $p_{x\pm iy}$ orbitals around the Fermi level. In the 2D BZ which contains the $\Gamma$ point and is perpendicular to the $\Gamma$-$\text{T}$ direction, the non-zero $\mathbb{Z}_{2}$ topological number can be well defined. On the other hand, the $C_{3v}$ symmetry along the $\Gamma$-$\text{T}$ line contains one 2D ($\Lambda_{4}$) and two degenerate 1D ($\Lambda_{5}$, $\Lambda_{6}$) irreducible representations for its double space group \cite{koster1963properties}. As shown in Fig. \ref{fig:2}(b), the two crossing bands at the Fermi level belong to $\Lambda_{5}+\Lambda_{6}$ and $\Lambda_{4}$ respectively. So there is no coupling between them, and a TR pair of 3D DPs can be observed at the Fermi level along the $\Gamma$-$\text{T}$ direction. \begin{figure} \centerline{ \includegraphics[clip,width=0.8\linewidth]{Figure2.eps}} \caption{(Color online) (a) The band structures of germancite (left) and stancite (right) along high symmetry lines with the corresponding DOS around the Fermi level (dashed horizontal line).
In the k-path $\text{T}$-$\Gamma$, the size of the red dots represents the contribution from the atomic $s$ and $p_z$ orbitals. The cyan dots are the Dirac points at (0,0,$k_{z0}$), where $k_{z0}\approx 0.08 $ \AA$^{-1}$ and $\approx 0.18 $ \AA$^{-1}$ respectively. Shaded regions denote the calculated energy spectrum without SOC. (b) Schematic diagrams of the lowest conduction bands and highest valence bands from the $\text{T}$ point to the $\Gamma$ point for germancite and stancite. The black lines present the SOC effect at the $\text{T}$ and $\Gamma$ point. Between them, the red and blue lines denote doubly degenerate bands belonging to different irreducible representations, where the solid/dashed red line is for germancite/stancite. And the crossing points (solid cyan dots) correspond to those gapless Dirac points in (a) respectively. (c) Schematic band dispersion based on the effective $k\cdot p$ model for germancite and stancite. The $k_{\perp}$ direction refers to any axis perpendicular to the $k_{z}$ direction in the momentum space and the color becomes warmer, as the energy increases.} \label{fig:2} \end{figure} Due to the non-trivial topology of 3D Dirac semi-metals, the projected 2D DPs and Fermi arcs are expected to be observed on some specific surfaces for the germancite and stancite. As shown in the Fig. \ref{fig:3}, by using the surface Green's function method \cite{0305-4608-15-4-009}, we study the electronic spectrum on the (111) and (2$\bar{1}\bar{1}$) surface whose BZs are perpendicular and parallel to the $\Gamma$-$\text{T}$ direction respectively. For the BZ of (111) surface, the pair of 3D DPs project to the $\widetilde{\Gamma}$ point as 2D Dirac cones (see Fig. \ref{fig:3}(a) and (d)); when the coupling between two projected 2D DPs is considered, a finite band gap could be easily obtained. Furthermore, besides the projected Dirac cones, we also observe the trivial surface states in the germancite and stancite ($\alpha_{1,2}$ states in the Fig. 
\ref{fig:3}(a) and (d)), which mainly originate from the dangling bonds on the (111) surface. \begin{figure} \centerline{ \includegraphics[clip,width=0.8\linewidth]{Figure3.eps}} \caption{(Color online) The electronic spectrum on the $(111)$ surface and its corresponding Fermi surface for (a) germancite and (d) stancite respectively. Two bulk DPs are projected to the $\widetilde{\Gamma}$ point. The electronic spectrum on the $(2\bar{1}\bar{1})$ surface and its corresponding Fermi surface for (b) germancite and (e) stancite respectively. The cyan dots label the projected DPs and the yellow dot represents the band crossing at the $\bar{\Gamma}$ point. On the Fermi surface, the Fermi arcs connect two projected DPs (cyan dots). For the stancite $(2\bar{1}\bar{1})$ surface, the constant-energy contour is at $\epsilon_f-5.2$ meV, slightly away from the Fermi level, to distinguish the Fermi arcs. Stacking plots of constant-energy contours at different energies on the $(2\bar{1}\bar{1})$ surface of (c) germancite and (f) stancite respectively. The Fermi level is set to zero.} \label{fig:3} \end{figure} For the (2$\bar{1}\bar{1}$) surface of germancite and stancite, the electronic structures are quite different. Because the BZ of the (2$\bar{1}\bar{1}$) surface is parallel to the $\Gamma$-$\text{T}$ direction, the pair of 3D DPs is projected to different points (0,0,$\bar{\pm k_{z0}}$), which are marked by the cyan dots in Fig. \ref{fig:3}(b) and (e). Between the projected DPs, a pair of Fermi arcs can be observed clearly; they share the helical spin-texture and are not continuous at the projected points. These Fermi arcs originate from the non-trivial $\mathbb{Z}_{2}$ topology of the Dirac semi-metals. On any 2D plane in the bulk whose BZ is perpendicular to the $\Gamma$-$\text{T}$ direction with $-k_{z0}<k_z<k_{z0}$, the $\mathbb{Z}_{2}$ number is +1. Thus, in real space, the corresponding ``edge state'' exists on the boundary.
In momentum space, the BZ of the ``edge state'' corresponds to the line parallel to $\bar{Y}$-$\bar{\Gamma}$-$\bar{Y}$ with $-\bar{k_{z0}}$$<$$\bar{k_z}$$<$$\bar{k_{z0}}$, and its Fermi surface should be two points. After summing all the contributions of the planes with $\mathbb{Z}_{2}$=1, the Fermi surface becomes a pair of Fermi arcs in the BZ of the (2$\bar{1}\bar{1}$) surface which connect the projected DPs. At the same time, on the (2$\bar{1}\bar{1}$) surface, the other surface states contributed by the dangling bonds also exist. Via tuning the Fermi level, we can observe the hybridization between the non-trivial surface states and the Fermi arcs (see Fig. \ref{fig:3}(c) and (f)), so a Lifshitz transition is found on the Fermi surface. Additionally, because the Fermi surface contours on the (2$\bar{1}\bar{1}$) surface contain roughly the same wave vector (see the yellow arrow in Fig. \ref{fig:3}(e)), a charge density wave or surface reconstruction may be observed here. However, since the surface coupling does not break the TR symmetry or change the bulk topology, the pair of Fermi arcs always exists. \section{Discussion and Conclusion} Because of the compatibility with traditional semiconductor devices and the dissipationless edge transport, the realization of the quantum spin Hall (QSH) effect in a thin film of Ge has attracted much attention recently. In a recent proposal \cite{Zhang2013-Ge-TI}, the non-trivial topology of the 2D thin film is induced by the large built-in electric field at the semiconductor interface, which may be difficult to control in real experiments. However, owing to the non-trivial topology of the Dirac semi-metal, the germancite (111) film may provide an opportunity for obtaining a QSH insulator. As discussed above, the topologically non-trivial band inversion occurs around the $\Gamma$ point in germancite.
So if we build a 2D film with the proper thickness along the (111) direction, the band inversion may be restored at the $\widetilde{\Gamma}$ point and this thin film would become a QSH insulator. Figure \ref{fig:4} shows the electronic structure of a germancite (111) film with a thickness of 14.5 nm (i.e., 16 layers). A small band gap (5.6 meV) opens at the $\widetilde{\Gamma}$ point due to the quantum confinement. To confirm our prediction, we calculate its $\mathbb{Z}_{2}$ number from the evolution of the Wannier charge centers (see Supplemental Information for details). It is found that the (111) thin film of germancite is a 2D TI without an applied external electric field. \begin{figure} \centerline{ \includegraphics[clip,width=0.8\linewidth]{Figure4.eps}} \caption{(Color online) (a) Band structure of the 16-layer germancite (111) film. The topologically nontrivial gap at the $\widetilde{\Gamma}$ point can be seen in the inset. (b) Schematic device consisting of three germancite thin films with different thicknesses. The middle one is a QSH insulator, whereas the other two are topologically trivial. In the lower panel, the purple and green vectors stand for the spin-polarized current at the interfaces.} \label{fig:4} \end{figure} In conclusion, from DFT calculations with the hybrid functional, we predict that germancite and stancite with the SLD structure are stable topological Dirac semi-metals protected by the rotation symmetry. It is found that the Fermi arcs coexist with the trivial surface states on the surface planes parallel to the $C_3$ rotation axis, and a Lifshitz transition is observed when the Fermi level is tuned. Furthermore, we discover that the (111) thin film of germancite is a 2D TI without an applied external electric field, which is important for future applications.
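As an illustration of the bulk physics described above, the pair of gapless Dirac points on the rotation axis can be reproduced in a minimal two-band block of an inverted-band $k\cdot p$ model. All parameter values in this sketch are illustrative and are not fitted to germancite or stancite.

```python
import numpy as np

# Minimal two-band block of an inverted-band Dirac Hamiltonian:
#   H(k) = [[M(k), A*k_-], [A*k_+, -M(k)]],
#   M(k) = M0 - M1*kz**2 - M2*(kx**2 + ky**2).
# Band inversion (M0*M1 > 0) forces the gap on the kz axis to close at a pair
# of points kz = +-sqrt(M0/M1). All parameters below are illustrative.
M0, M1, M2, A = 0.05, 1.0, 1.0, 1.0

def bands(kx, ky, kz):
    M = M0 - M1 * kz**2 - M2 * (kx**2 + ky**2)
    kplus = kx + 1j * ky
    H = np.array([[M, A * np.conj(kplus)],
                  [A * kplus, -M]])
    return np.linalg.eigvalsh(H)   # sorted eigenvalues

kz0 = np.sqrt(M0 / M1)             # Dirac points at (0, 0, +-kz0)
gap = lambda *k: np.diff(bands(*k))[0]

print(gap(0.0, 0.0, kz0))          # gap closes at the Dirac point
print(gap(0.0, 0.0, 0.0))          # inverted gap of 2*M0 at Gamma
```

Away from $\pm k_{z0}$ the dispersion is linear to leading order in all three directions, which is the 3D Dirac cone structure sketched in Fig.~\ref{fig:2}(c).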
Experimentally, metastable allotropes of germanium have been synthesized through the oxidation of Ge$_9^{4-}$ Zintl anions in ionic liquids under ambient conditions \cite{guloy2006guest}. Owing to the similar density and cohesive energy, we expect that germancite and stancite could be synthesized via similar methods in the future. \begin{acknowledgments} We would like to thank S. Cahangirov and L. Xian for useful discussions. W.C. and W.D. acknowledge support from the Ministry of Science and Technology of China (Grant Nos. 2011CB606405 and 2011CB921901) and the National Natural Science Foundation of China (Grant No. 11334006). A.R. acknowledges financial support from the European Research Council Grant DYNamo (ERC-2010-AdG No. 267374), Spanish Grant FIS2010-21282-C02-01, Grupos Consolidados UPV/EHU del Gobierno Vasco (IT578-13) and EC project CRONOS (280879-2 CRONOS CP-FP7). P.T. and S.-C.Z. acknowledge NSF under grant number DMR-1305677 and FAME, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. The calculations were done on the ``Explorer 100'' cluster system of Tsinghua University. \end{acknowledgments} W.C., P.T., W.D., S.-C.Z., and A.R. conceived and designed the project. W.C. and P.T. performed the $ab~initio$ calculations and theoretical analysis. P.T. and W.C. wrote the manuscript with help from the other authors. All authors contributed to the discussions. W.C. and P.T. contributed equally to this work.
\section{Motivation} F-theory \cite{Vafa:1996xn, Morrison:1996na, Morrison:1996pp} provides a beautiful connection between the physics of string compactifications and the geometry of elliptically fibred Calabi--Yau manifolds. One of the most basic relationships is the emergence of non-abelian gauge symmetries in F-theory via singular fibres in codimension one of the fibration. These are, according to Kodaira's and Neron's classifications \cite{MR0132556, MR0184257, MR0179172}, in one-to-one correspondence with the simple Lie algebras that furnish the gauge symmetries. By now, there exists a plethora of techniques to systematically engineer non-abelian gauge symmetries in F-theory \cite{Bershadsky:1995qy, Candelas:1997eh, Bouchard:2003bu, Katz:2011qp, Lawrie:2012gg, Braun:2013nqa, Kuntzler:2014ila, Lawrie:2014uya}. In comparison, the geometric origin of abelian gauge symmetries associated with the Mordell--Weil group of rational sections \cite{Morrison:1996na, Morrison:1996pp, Park:2011ji, Morrison:2012ei} is much less understood. This is in part due to the fact that sections are inherently global objects that can only be fully described within a globally defined geometry. Consequently, there are only a handful of concrete constructions of global F-theory models with abelian gauge symmetries explicitly realised \cite{Grimm:2010ez, Krause:2011xj, Morrison:2012ei, Borchmann:2013jwa, Cvetic:2013nia, Cvetic:2013uta, Borchmann:2013hta, Cvetic:2013jta, Cvetic:2013qsa, Cvetic:2015moa, Klevers:2014bqa, Cvetic:2015ioa}. An approach to construct models with both abelian and non-abelian gauge symmetries is to first pick a global fibration known to have sections, and then use the aforementioned techniques to introduce suitable singularities in codimension one.
This approach has been used throughout the literature to construct phenomenologically appealing models (in addition to the previous references, see also \cite{Dolan:2011iu, Mayrhofer:2012zy, Braun:2013yti, Krippendorf:2014xba, Lin:2014qga, Krippendorf:2015kta}). However, as is so often the case in geometry, the Mordell--Weil group of sections and codimension one singularities are not completely independent of each other. Indeed, it turns out that the existence of torsional sections\footnote{By the Mordell--Weil theorem, the Mordell--Weil group is a finitely generated abelian group, hence must be isomorphic to ${\mathbb{Z}}^m \times \prod_i {\mathbb{Z}}_{k_i}$. Sections lying in $\prod_i {\mathbb{Z}}_{k_i}$ are called torsional, as opposed to those in the free part ${\mathbb{Z}}^m$ that give rise to abelian symmetries in F-theory.} not only enforces specific codimension one singularities corresponding to a semi-simple Lie algebra ${\mathfrak{g}}$, but also restricts the possible matter representations \cite{Aspinwall:1998xj, Mayrhofer:2014opa} (see also \cite{Klevers:2014bqa, Oehlmann:2016wsb}). An equivalent formulation is to say that the gauge group is not $G$---the simply connected Lie group associated to ${\mathfrak{g}}$---but rather $G/{\cal Z}$, where ${\cal Z}$ is a subgroup of the centre of $G$. It is important to note that in this case, only representations transforming trivially under ${\cal Z}$ are allowed. Because, field theoretically, only non-local operators such as line operators are sensitive to this quotient structure \cite{Aharony:2013hda}, one often refers to $G/{\cal Z}$ as the structure of the global gauge group, in order to distinguish it from the gauge algebra that is seen by local operators. If ${\cal Z} \neq \{1\}$, we will refer to the global group structure as `non-trivial'. The analysis of \cite{Mayrhofer:2014opa} produced only models that have a non-trivial global structure in the non-abelian sector of the gauge group.
However, the central subgroup ${\cal Z}$ can also overlap with a subgroup of the abelian sector. The most prominent example of such a non-trivial gauge group structure is in fact presumed to be the Standard Model of particle physics. Indeed, the Standard Model spectrum is invariant under a ${\mathbb{Z}}_6$ subgroup that lies in the centre ${\mathbb{Z}}_3 \times {\mathbb{Z}}_2 \times U(1) \subset SU(3) \times SU(2) \times U(1)$ (for a review see, e.g., \cite{McCabe:2007zz}). Thus the global gauge group is expected to be $[SU(3) \times SU(2) \times U(1)]/{\mathbb{Z}}_6$.\footnote{To be precise, the quotient could be by any subgroup of ${\mathbb{Z}}_6$ from the field theory perspective. Line operators differentiating between the possibilities have been recently classified in \cite{Tong:2017oea}.} Sometimes, this global structure is seen as further evidence for an $SU(5)$ GUT, since it is a direct consequence of breaking $SU(5)$ to the Standard Model (see, e.g., \cite{Baez:2009dj}). It may therefore seem surprising that F-theory compactifications realising the Standard Model gauge algebra without an explicit GUT structure \cite{Lin:2014qga, Klevers:2014bqa, Cvetic:2015txa, Lin:2016vus} actually reproduce exactly those representations which are invariant under the ${\mathbb{Z}}_6$ centre of $SU(3) \times SU(2) \times U(1)_Y$. Furthermore, since there is no evidence for these models to have torsional sections, one might wonder if this agreement is purely coincidental, or if there is some further hidden structure in the geometry giving rise to the non-trivial global gauge group. In this paper, we will show that the latter is the case. In fact, we will present an argument---very similar to that for torsional sections in \cite{Mayrhofer:2014opa}---showing that generically, F-theory compactifications with abelian gauge factors exhibit a non-trivial gauge group structure.
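The ${\mathbb{Z}}_6$ invariance of the Standard Model spectrum mentioned above can be checked representation by representation. In the normalisation $Q = T_3 + Y$, a field with $SU(3)$ triality $t$, $SU(2)$ duality $d$ and hypercharge $Y$ picks up the ${\mathbb{Z}}_6$ phase $\exp[2\pi i\,(t/3 + d/2 + Y)]$, which is trivial precisely when $t/3 + d/2 + Y$ is an integer; the short sketch below verifies this for the standard field content (field names are for illustration only).

```python
from fractions import Fraction as F

# (triality t, duality d, hypercharge Y) for each Standard Model field,
# in the normalisation Q = T3 + Y.
fields = {
    "Q_L": (1, 1, F(1, 6)),    # left-handed quark doublet
    "u_R": (1, 0, F(2, 3)),
    "d_R": (1, 0, F(-1, 3)),
    "L_L": (0, 1, F(-1, 2)),   # left-handed lepton doublet
    "e_R": (0, 0, F(-1)),
    "H":   (0, 1, F(1, 2)),    # Higgs doublet
}

# The Z6 phase exp(2*pi*i*(t/3 + d/2 + Y)) is trivial iff the exponent
# t/3 + d/2 + Y is an integer -- true for every field above.
for name, (t, d, Y) in fields.items():
    exponent = F(t, 3) + F(d, 2) + Y
    assert exponent.denominator == 1, name
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point ambiguity in the integrality check.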
We will demonstrate in section \ref{sec:general_story} that the Shioda map \cite{Shioda:1989, Shioda:1990, Wazir:2001} of sections generating the Mordell--Weil group relates the ${\mathfrak{u}}(1)$ charges of matter non-trivially to their representations under the non-abelian part of the gauge algebra. This relationship, which leads to a non-trivial centre of the universal covering of the actual gauge group, can be equivalently understood as a refined charge quantisation condition, which has been previously observed throughout the literature. Examples hereof will be presented in section \ref{sec:examples}, including those leading to F-theory `Standard Models'. In section \ref{sec:higgsing}, we address the question of whether and how, in F-theory, such non-trivial gauge group structures can arise from the breaking, a.k.a.~higgsing, of a larger non-abelian gauge group, similar to breaking $SU(5)$ to the Standard Model. Because of the intricate geometric description of higgsing, we will content ourselves with the discussion of a concrete class of models with $\mathfrak{su}(2) \oplus {\mathfrak{u}}(1)$ gauge algebra. For these, we demonstrate explicitly how different gauge group structures arise from different breaking patterns that are captured beautifully in the geometry. An interesting implication of our findings is presented in section \ref{sec:swampland}, where we argue that the geometric properties leading to the non-trivial global gauge group structure can also be interpreted as a criterion for effective field theories to be in the F-theory `swampland'. This swampland criterion is formulated in terms of a charge constraint on matter representations of the non-abelian gauge algebra. In section \ref{sec:summary}, conclusions and an outlook for further investigations are presented.
\section{Shioda map and the centre of gauge groups}\label{sec:general_story} Because our main argument is based on the Shioda map, we will first present a brief review of its prominent role in F-theory, which will also help to set up the notation. Let $\pi: Y_{n+1} \rightarrow B_n$ be a smooth, elliptically fibred Calabi--Yau space of complex dimension $n+1$, with (singular) Kodaira fibres over a codimension one locus $\{\theta =0 \} \equiv \{\theta\} \subset B_n$, and Mordell--Weil rank $m$. In addition to the zero section $\sigma_0$, the Mordell--Weil (MW) group has independent sections $\sigma_k$, $1 \leq k \leq m$, which generate the free part (we will call $\sigma_k$ a `free' generator of the MW-group). In the following, we will denote the divisor classes of the (zero) sections by ($Z$) $S_k$. Furthermore, we have the exceptional divisors $E_i = ( {\mathbb{P}}^1_i \rightarrow \{\theta\} )$, $1 \leq i \leq r$, which are ${\mathbb{P}^1}$-fibrations over $\{\theta\}$. Note that by definition, the zero section does not intersect the exceptional divisors, $Z \cdot {\mathbb{P}}^1_i = 0$.\footnote{Put differently, one usually defines the affine node of the generic Kodaira fibre over (an irreducible component of) $\{\theta\}$ as the one that is intersected by the zero section.} In this set-up, the Shioda--Tate--Wazir theorem \cite{Wazir:2001} implies that the N\'{e}ron--Severi (NS) group (i.e., divisors modulo algebraic equivalence) of $Y_{n+1} \equiv Y$\footnote{Strictly speaking, the Shioda--Tate--Wazir theorem is only proven for threefolds. However, it is usually assumed in the F-theory literature that it also holds for four- and fivefolds.} satisfies \begin{align} \text{NS}(Y) \otimes {\mathbb{Q}} = \text{span}_{\mathbb{Q}} ( S_1, ..., S_m ) \oplus \underbrace{\text{span}_{\mathbb{Q}} (Z, E_1,..., E_r) \oplus (\text{NS}(B) \otimes {\mathbb{Q}})}_T \, . 
\end{align} The subspace $T$ is spanned by the zero section $Z$, the exceptional divisors $E_i$, and any divisor $D_B$ pulled back from the base $B$ with $\pi$. Finally, let us introduce the height pairing $\langle \, , \, \rangle : \text{NS}(Y) \times \text{NS}(Y) \rightarrow \text{NS}(B)$, given by the projection $\langle D_1, D_2 \rangle = \pi (D_1 \cap D_2)$ of the intersection. For $n \geq 2$, we know \cite{Morrison:1996na, Morrison:1996pp, Klemm:1996hh} that F-theory compactified on $Y_{n+1}$ gives rise to a gauge theory in $d=10-2n$ dimensions with gauge algebra ${\mathfrak{u}} (1)^{\oplus \, m} \oplus {\mathfrak{g}}$ and charged matter arising from singular fibres over codimension two loci of $B_n$. The semi-simple (non-abelian) algebra ${\mathfrak{g}}$ is determined by the singularity types over $\{\theta\}$. In particular, the exceptional divisors $E_i$, $i= 1, ..., \text{rank}({\mathfrak{g}}) = r$, being dual to harmonic $(1,1)$-forms $\omega_i$, give rise---via the standard expansion $C_3 = \sum_i A_i \wedge \omega_i$ of the M-theory 3-form---to gauge fields $A_i$ taking value in the Cartan subalgebra $\mathfrak{h}$ of ${\mathfrak{g}}$. The W-bosons, i.e., states forming the roots of ${\mathfrak{g}}$, originate from M2-branes wrapping the ${\mathbb{P}}^1_i$ fibres of $E_i$. On the other hand, the ${\mathfrak{u}}(1)$ gauge fields arise from expanding $C_3$ along the $(1,1)$-forms $\omega_{{\mathfrak{u}}(1)_k}$ which are Poincar\'{e}-dual (PD) to divisors $\varphi(\sigma_k)$ associated with the free generators $\sigma_k$ of the Mordell--Weil group. This so-called Shioda map \cite{Shioda:1989, Shioda:1990, Wazir:2001} is a homomorphism $\varphi: \text{MW} \rightarrow \text{NS}(Y) \otimes {\mathbb{Q}}$ with $\text{ker}(\varphi) = \text{MW}(Y)_\text{torsion}$, that satisfies $\langle \varphi (\sigma) , D \rangle = 0$ for any $D \in T$. 
These conditions can be recast in terms of intersection numbers: \begin{align} \langle \varphi(\sigma) , D_B \rangle = 0\quad & \Longleftrightarrow \quad \varphi(\sigma) \cdot {\mathfrak{f}} = 0 \, , \label{eq:no_charge_generic_fibre}\\ \langle \varphi(\sigma) , Z \rangle = 0 \quad & \Longleftrightarrow \quad \varphi(\sigma) \cdot {\cal C}_B = 0 \, , \label{eq:no_flux_through_base}\\ \langle \varphi(\sigma) , E_i \rangle = 0 \quad & \Longleftrightarrow \quad \varphi(\sigma) \cdot {\mathbb{P}}^1_i = 0 \, . \label{eq:no_charge_W_bosons} \end{align} The first two conditions ensure that the intersection product of $\varphi(\sigma)$ with the generic fibre ${\mathfrak{f}}$ and any curve ${\cal C}_B$ of the base (lifted by the zero section) vanishes. Physically, this is related to the requirement that the ${\mathfrak{u}}(1)$ gauge field lifts properly from $d-1$ to $d$ dimensions in the M-/F-theory duality. The last condition is nothing other than the statement that the gauge bosons of ${\mathfrak{g}}$ are uncharged under the ${\mathfrak{u}}(1)$. These conditions determine the Shioda map up to an overall scaling: Since $\varphi$ relates a section $\sigma$ to a divisor class, we expect that $\varphi(\sigma) \sim S + $(correction terms), where $S$ is the class of $\sigma$ itself. To satisfy the first condition \eqref{eq:no_charge_generic_fibre}, the correction terms must contain $-Z$.\footnote{We could in fact shift by any other section $S_k$ instead of $Z$ to satisfy \eqref{eq:no_charge_generic_fibre}; however, because $\varphi$ is a homomorphism, we need $\varphi(\sigma_0) \stackrel{!}{=} 0$. Therefore, the shift has to be the divisor class $Z$ of the zero section.} The second condition \eqref{eq:no_flux_through_base} introduces a term of the form $\pi^{-1}(D_B)$, where the exact divisor $D_B \in \text{NS}(B)$ depends on the concrete model. 
We will neglect the discussion of this term, since its intersection number with any fibral curve $\Gamma$ is zero and hence does not contribute to the ${\mathfrak{u}}(1)$ charge of localised matter. Finally, the last condition \eqref{eq:no_charge_W_bosons} gives rise to a correction term of the form $\sum_i l_i \, E_i$, where the coefficients $l_i \in {\mathbb{Q}}$ will be discussed in more detail momentarily. Thus the Shioda map for any section $\sigma$ reads \begin{align}\label{eq:shioda_image_with_factor} \varphi (\sigma) = \lambda \left(S - Z + \pi^{-1} (D_B) + \sum_i l_i \, E_i \right)\, , \end{align} where the overall factor $\lambda$ is not fixed by \eqref{eq:no_charge_generic_fibre} -- \eqref{eq:no_charge_W_bosons}; however, because $\varphi$ is a homomorphism, i.e., $\varphi(\sigma_1 + \sigma_2) = \varphi(\sigma_1) + \varphi(\sigma_2)$, the factor has to be the same for all sections. Accordingly, the ${\mathfrak{u}}(1)_k$ charges of matter states which arise as M2-branes wrapping fibral curves $\Gamma$---given by the intersection numbers $q_k(\Gamma) = \varphi(\sigma_k) \cdot \Gamma$---are only determined up to an overall scaling, which does not have a direct physical meaning. Therefore, we often find in the literature that the scaling is chosen such that all charges are integral. While there is in principle nothing wrong with such a rescaling, the factor can be misleading when we analyse the global gauge group structure. As we will see, by setting $\lambda=1$, we can read off the global gauge group directly from the coefficients $l_i$. Field theoretically, this points towards a `preferred' ${\mathfrak{u}}(1)$ charge normalisation, in which case the ${\mathfrak{u}}(1)$ charge lattice \textit{for each representation ${\cal R}$ of ${\mathfrak{g}}$} has lattice spacing 1. In this formulation, we can also interpret the non-trivial global gauge group as a relative shift by a fractional number of the charge lattice for different ${\mathfrak{g}}$-representations. 
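This relative fractional shift of the charge lattice can be made concrete with a small numerical sketch. The following plain-Python snippet uses hypothetical coefficients $l_i$ of the type that arise for ${\mathfrak{g}} = \mathfrak{su}(5)$ (the specific values here are an illustrative assumption, not derived from a particular fibration):

```python
from fractions import Fraction as F

# Hypothetical Shioda coefficients l_i; fractional values of this type
# arise e.g. for g = su(5) (illustrative assumption only).
l = [F(2, 5), F(4, 5), F(6, 5), F(3, 5)]

def lattice_offset(dynkin_labels):
    """Fractional part of sum_i l_i w_i for a weight with Dynkin labels w_i;
    the allowed u(1) charges of such matter sit at this offset plus Z."""
    return sum(li * wi for li, wi in zip(l, dynkin_labels)) % 1

# Highest weights (in Dynkin labels) of two su(5) representations:
print(lattice_offset([0, 1, 0, 0]))  # antisymmetric 10 -> offset 4/5
print(lattice_offset([1, 0, 0, 0]))  # fundamental 5    -> offset 2/5
```

Both lattices have unit spacing, but they are shifted against each other by a multiple of $1/5$, in line with the statement above.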
Of course, these restrictions on the ${\mathfrak{u}}(1)$ charges have been previously observed, e.g., they are quantified in the literature \cite{Braun:2013nqa, Kuntzler:2014ila, Lawrie:2014uya, Lawrie:2015hia} for ${\mathfrak{g}} = \mathfrak{su}(5)$, and derived more generally from the consistency of large gauge transformations in the circle reduction of F-theory \cite{Grimm:2015wda}. The novelty of this paper is the observation that these charge restrictions are explicitly tied to the global gauge group structure for any F-theory compactification with non-trivial Mordell--Weil group. \subsection{Fractional \textit{U}(1) charges in F-theory}\label{sec:fraction_U1_charges} In the following, we will focus the discussion on a rank one Mordell--Weil group with a single free generator $\sigma$. The generalisation to higher Mordell--Weil rank (and also the inclusion of torsion) is straightforward and will be presented in section \ref{sec:more_MW_generators} with explicit examples. For the purpose of this paper, let us fix the factor $\lambda$ in the Shioda map to 1: \begin{align}\label{eq:general_shioda_image} \varphi(\sigma) := S - Z + \pi^{-1}(D_B) + \sum_i l_i \, E_i \, . \end{align} Since the following discussion revolves around the fractional coefficients $l_i$, let us recall that they arise from requiring the intersection numbers of $\varphi(\sigma)$ with the ${\mathbb{P}}^1_i$ fibres of the exceptional divisors $E_i$ to vanish, see \eqref{eq:no_charge_W_bosons}. This imposes \begin{align}\label{eq:coefficients_Cartan_generators} l_i = \sum_j (C^{-1})_{ij} \, \left((S - Z + \pi^{-1}(D_B)) \cdot {\mathbb{P}}^1_j \right) = \sum_j (C^{-1})_{ij} \, \left((S - Z) \cdot {\mathbb{P}}^1_j \right)\, . 
\end{align} Here, $C^{-1}$ denotes the inverse of $C_{ij} = - E_i \cdot {\mathbb{P}}^1_j$, which is the Cartan matrix of the algebra ${\mathfrak{g}}$.\footnote{If ${\mathfrak{g}} = \bigoplus_a {\mathfrak{g}}_a$, where ${\mathfrak{g}}_a$ are simple Lie algebras, then $C$ is the block diagonal matrix formed by the Cartan matrices of ${\mathfrak{g}}_a$.} In general, the coefficients $l_i$ are fractional numbers that in particular depend on the intersection properties between the divisor $S-Z$ and the fibres ${\mathbb{P}}^1_i$ of the exceptional divisors $E_i$. However, there is always a positive integer $\kappa$ such that $\kappa\,l_i \in {\mathbb{Z}}$ for all $i$. For example, we know that the entries of the inverse Cartan matrix of $\mathfrak{su}(n_a)$ are $z/n_a$ with $z \in {\mathbb{Z}}$. Hence, if ${\mathfrak{g}} = \bigoplus_a \mathfrak{su}(n_a)$, then the smallest such $\kappa$ is the least common multiple of all $n_a$. Note that this immediately implies charge quantisation (i.e., we really have a compact abelian gauge factor): Since $\kappa\,\varphi(\sigma)$ is a manifestly integer class, its intersection number with fibral curves is always integral. So the ${\mathfrak{u}}(1)$ charges (measured with respect to $\varphi(\sigma)$) of \textit{all} states realised geometrically (i.e., as M2-branes on fibral curves) lie in a lattice of spacing $1/\kappa$. In fact, the Shioda map \eqref{eq:general_shioda_image} makes an even more refined statement. Because $S$ and $Z$ are divisor classes of sections, they are manifestly integer, i.e., their intersection product with fibral curves $\Gamma$ must be integer as well. 
But then, the charge of the matter state ${\bf w}$ associated with $\Gamma$ must satisfy \begin{align}\label{eq:integer_pairing_condition} \begin{split} & q^{\bf w} = \varphi(\sigma) \cdot \Gamma = \left( S - Z + \sum_i l_i \, E_i \right) \cdot \Gamma \\ \Longrightarrow \quad & q^{\bf w} - \sum_i l_i \, E_i \cdot \Gamma = q^{\bf w} - \sum_i l_i \, {\bf w}_i = (S - Z) \cdot \Gamma \in {\mathbb{Z}} \, , \end{split} \end{align} where, in the second line, we have used the standard result that the Dynkin labels of a weight ${\bf w}$ associated with a fibral curve $\Gamma$ are given by ${\bf w}_i = E_i \cdot \Gamma \in {\mathbb{Z}}$. Curves $\Gamma_{\bf w,v}$ localised at the same codimension two locus, but realising different states ${\bf w,v}$ of the same ${\mathfrak{g}}$-representation, differ by an integer linear combination $\sum_k \mu_k \, {\mathbb{P}}^1_k$, since these ${\mathbb{P}}^1$s correspond to the simple roots of the algebra ${\mathfrak{g}}$.\footnote{\label{footnote:same_charge_for_one_rep} By \eqref{eq:no_charge_W_bosons}, these states must have the same ${\mathfrak{u}}(1)$ charge, hence form a single representation $(q^{\cal R}, {\cal R}_{\mathfrak{g}})$ of the full algebra ${\mathfrak{u}}(1) \oplus {\mathfrak{g}}$. } For these, we have \begin{align} \begin{split} \sum_i l_i \, {\bf w}_i = & \sum_i l_i \, E_i \cdot \Gamma_{\bf w} = \sum_i l_i \, E_i \cdot (\Gamma_{\bf v} + \sum_k \mu_k \, {\mathbb{P}}^1_k) = \sum_i \left( l_i \, {\bf v}_i - \sum_k \mu_k \, l_i C_{ik} \right) \\ \stackrel{\text{\eqref{eq:coefficients_Cartan_generators}}}{=} & \sum_i l_i \, {\bf v}_i - \sum_k \underbrace{ \mu_k \, (S-Z)\cdot {\mathbb{P}}^1_k}_{\in {\mathbb{Z}}} \, . \end{split} \end{align} Thus, we can associate to each ${\mathfrak{g}}$-representation ${\cal R}_{\mathfrak{g}}$ a $\kappa$-fractional number between 0 and 1, \begin{align}\label{eq:L_of_rep} L({\cal R}_{\mathfrak{g}}) := \sum_i l_i \, {\bf w}_i \! 
\mod {\mathbb{Z}} \, , \end{align} which is independent of the choice of weight ${\bf w} \in {\cal R}_{\mathfrak{g}}$. For a representation $(q^{\cal R}, {\cal R}_{\mathfrak{g}})$ of ${\mathfrak{u}}(1) \oplus {\mathfrak{g}}$, this allows us to rewrite \eqref{eq:integer_pairing_condition} as a condition for the ${\mathfrak{u}}(1)$ charge, \begin{align}\label{eq:integer_condition_charges_rep} q^{\cal R} - L({\cal R}_{\mathfrak{g}}) \in {\mathbb{Z}} \, . \end{align} So for any matter with ${\mathfrak{g}}$-representation ${\cal R}_{\mathfrak{g}}$, the possible ${\mathfrak{u}}(1)$ charges arrange in a lattice of integer spacing. However, for different representations, the lattices do not in general align. In fact, from what we have seen above, they can differ by multiples of $1/\kappa$. The geometric origin of \eqref{eq:integer_condition_charges_rep} lies in the intersection properties of divisors and codimension one singular fibres over $\{\theta\}$. Indeed, the non-integrality of the coefficients $l_i$ \eqref{eq:coefficients_Cartan_generators}, which leads to the non-trivial integrality condition \eqref{eq:integer_pairing_condition}, stems from the zero section $Z$ and the generating section $S$ intersecting ${\mathbb{P}}^1$ fibres of possibly different exceptional divisors $E_i$. This so-called \textit{split} \cite{Braun:2013yti, Kuntzler:2014ila} of the fibre structure over $\{\theta\}$ by the section can be easily determined in concrete models, e.g., directly from the polytope in toric constructions \cite{Braun:2013nqa}. The analysis carried out to obtain \eqref{eq:integer_condition_charges_rep} is essentially equivalent to the study of the fibre splitting patterns in the presence of sections and the allowed ${\mathfrak{u}}(1)$ charges, e.g., as in \cite{Kuntzler:2014ila,Lawrie:2015hia} for classifying all possible ${\mathfrak{u}}(1)$ charges of $\mathfrak{su}(5)$ matter. 
Here, we have rephrased it in a way that allows for a more straightforward connection to the global structure of the gauge group. An alternative way of deriving \eqref{eq:integer_pairing_condition} is to consider circle compactifications of F-theory and require consistency of the large gauge transformations along the circle \cite{Grimm:2015wda}. Note that the above discussion, in particular the derivation of \eqref{eq:integer_condition_charges_rep} for matter localised in codimension two, is based purely on codimension one properties. Hence, all arguments and conclusions hold for F-theory compactifications to six, four and two dimensions. \subsection{Non-trivial central element from the Shioda map} To see how the above observation relates the Lie algebra ${\mathfrak{u}}(1) \oplus {\mathfrak{g}}$ to the global gauge group $G_\text{glob}$, first note that $G_\text{glob}$ has $U(1) \times \tilde{G}$ as a cover, where $\tilde{G}$ is the simply connected Lie group associated to ${\mathfrak{g}}$. We can now define an element of the centre ${\cal Z}(U(1) \times \tilde{G}) = U(1) \times {\cal Z}(\tilde{G})$, which has to act trivially on all geometrically realised weights. For that, we first define the element $\Xi := q - \sum_i l_i \, E_i$ of the Cartan subalgebra ${\mathfrak{u}}(1) \oplus \mathfrak{h} \subset {\mathfrak{u}}(1) \oplus {\mathfrak{g}}$, where $q$ is the generator of ${\mathfrak{u}}(1)$. Its action on the representation space of an irreducible representation ${{\cal R}} = (q^{{\cal R}}, {{\cal R}}_{\mathfrak{g}})$ of ${\mathfrak{u}}(1) \oplus {\mathfrak{g}}$ is then simply defined through its action on the weights ${\bf w} \in {\cal R}$. 
Explicitly, denoting by ${\bf w}_i$ the Dynkin labels of ${\bf w}$ under ${\mathfrak{g}}$, we have \begin{align} \Xi ({\bf w}) := q^{{\cal R}}\,{\bf w} - \left( \sum_i l_i \, {\bf w}_i \right) \times \mathds{1} \, {\bf w} \, , \end{align} where $\mathds{1}$ is the identity matrix in the representation ${{\cal R}}_{\mathfrak{g}}$. By exponentiating this equation, we obtain the action of a group element in $U(1) \times \tilde{G}$ on weights of ${\cal R}$, \begin{align}\label{eq:central_element_from_Xi} \begin{split} C \, {\bf w} := & \exp \left( 2\pi i \, \Xi \right) {\bf w} = \left[ \exp (2\pi i\, q^{{\cal R}} ) \otimes \left( \exp( - 2\pi i \, \sum_i l_i \, {\bf w}_i ) \times \mathds{1} \right) \right] {\bf w} \\ \stackrel{\text{\eqref{eq:L_of_rep}}}{=} & \left[ \exp (2\pi i\, q^{{\cal R}} ) \otimes \left( \exp( - 2\pi i \, L({\cal R}_{\mathfrak{g}}) ) \times \mathds{1} \right) \right] {\bf w} \, . \end{split} \end{align} Evidently, $C$---being proportional to the identity element of $\tilde{G}$---commutes with every element, i.e., $C$ is in the centre $U(1) \times {\cal Z}(\tilde{G})$. Let us now restrict the action to representations realised in the F-theory compactification, i.e., weights $\bf w$ that arise from fibral curves $\Gamma$. Because the tensor product\footnote{The tensor product arises, because any finite dimensional irreducible representation of a product group is a tensor product of irreducible representations of the factors.} is bilinear, the expression can also be written as \begin{align}\label{eq:trivial_action_of_Xi} C \, {\bf w} = \left[ \vphantom{\sum_i} \right. \exp [ 2\pi i \, (\underbrace{q^{{\cal R}} - \sum_i l_i \, {\bf w}_i}_{\in {\mathbb{Z}} \, \text{ from \eqref{eq:integer_pairing_condition}}} ) ] \otimes \mathds{1} \left. \vphantom{\sum_i} \right] {\bf w} = {\bf w} \, , \end{align} i.e., $C$ acts trivially on weights ${\bf w}$ arising from fibral curves $\Gamma$! 
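The triviality of $C$ on geometric states can be checked numerically. The sketch below (assumed $\mathfrak{su}(5)$-type coefficients $l_i$ and sample charges, chosen purely for illustration) tests whether the phase exponent $q^{\cal R} - \sum_i l_i {\bf w}_i$ is an integer:

```python
from fractions import Fraction as F

# Assumed fractional coefficients l_i of su(5) type (illustration only).
l = [F(2, 5), F(4, 5), F(6, 5), F(3, 5)]

def fixed_by_C(q, dynkin_labels):
    """C acts on a weight as the phase exp(2*pi*i*(q - sum_i l_i w_i));
    the action is trivial iff this exponent is an integer."""
    exponent = q - sum(li * wi for li, wi in zip(l, dynkin_labels))
    return exponent.denominator == 1

# Charges obeying the integrality condition are left invariant ...
assert fixed_by_C(F(-1, 5), [0, 1, 0, 0])   # a 10 with charge -1/5
assert fixed_by_C(F(2, 5),  [1, 0, 0, 0])   # a 5 with charge 2/5
# ... while a hypothetical integer-charged 5 would pick up a phase:
assert not fixed_by_C(F(1), [1, 0, 0, 0])
print("central element acts trivially on the allowed states")
```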
But on the other hand, we also know from the previous discussion that there is a positive integer $\kappa$ such that $\kappa \, l_i \in {\mathbb{Z}}$ for all $i$, see paragraph after \eqref{eq:coefficients_Cartan_generators}.\footnote{It is implicitly assumed that we choose $\kappa$ to be the smallest positive integer such that $\kappa\,l_i \in {\mathbb{Z}}$ for all $i$.} Going back to \eqref{eq:central_element_from_Xi}, for which we introduce the short-hand notation \begin{align}\label{eq:central_element_from_Xi_center_form} C \, {\bf w} = [Q^{{\cal R}} \otimes (\xi_{\bf w} \times \mathds{1})] \, {\bf w} \, , \end{align} we see that for weights in \textit{any} representation of $\tilde{G}$, we have \begin{align} \begin{split} & (\xi_{\bf w} \times \mathds{1})^\kappa \equiv \left(\exp\left( -2\pi i \, \sum_i l_i \, {\bf w}_i\right) \times \mathds{1}\right)^\kappa = \exp\left( -2\pi i \,\kappa \sum_i l_i \, {\bf w}_i\right) \times \mathds{1} = \mathds{1} \, . \end{split} \end{align} In other words, $\xi_{\bf w} \times \mathds{1}$ generates a ${\mathbb{Z}}_\kappa$ subgroup of the centre ${\cal Z}(\tilde{G})$. But because we have shown that all states in geometrically realised representations must be acted on trivially by $Q^{{\cal R}} \otimes (\xi_{\bf w} \times \mathds{1})$, we conclude that the global gauge group structure should be \begin{align}\label{eq:global_gauge_group_structure_general} G_\text{glob} = \frac{U(1) \times \tilde{G}}{ \langle C \rangle} \cong \frac{U(1) \times \tilde{G} }{{\mathbb{Z}}_\kappa} \, . \end{align} As mentioned before, the whole discussion applies to F-theory compactified to six, four and two dimensions. Note that, strictly speaking, the second equality in \eqref{eq:global_gauge_group_structure_general} is merely a definition of the notation $[U(1) \times \tilde{G}] / {\mathbb{Z}}_\kappa$. Indeed, from a purely representation theoretic point of view, we do not know that $(Q^{{\cal R}})^\kappa = 1$ for \textit{every} charged state (the charges could be quantised finer than $1/\kappa$). 
However, we have seen above that the geometry of the F-theory model actually dictates the charges to be quantised in units of $1/\kappa$, i.e., $C^\kappa = \text{id}$. In our discussion, both charge quantisation and the central element $C$ follow from the same observation, namely the integrality condition \eqref{eq:integer_pairing_condition}. Hence, the notation $(U(1) \times \tilde{G})/{\mathbb{Z}}_\kappa$ can also be seen as encoding the ${\mathfrak{u}}(1)$ charge quanta of an F-theory compactification. The reader might recognise the above argument, leading up to \eqref{eq:central_element_from_Xi}, from \cite{Mayrhofer:2014opa}, which related the presence of $\kappa$-torsional sections to the ${\mathbb{Z}}_\kappa$-centre of purely non-abelian groups (i.e., no $U(1)$ factor in the cover of $G_\text{glob}$). Indeed, there one arrives by the same logic at \eqref{eq:central_element_from_Xi_center_form} with $Q = 1$. In that case the conclusion is simply that $\xi_{\bf w} \times \mathds{1}$, which generates a subgroup of the centre ${\cal Z}(\tilde{G})$, must act trivially. Finally, we note that even though the above discussion has been limited to a single ${\mathfrak{u}}(1)$ factor, the analysis readily extends to multiple sections $\sigma_k$ (free or torsional). Because the Shioda map \eqref{eq:general_shioda_image} of any Mordell--Weil generator $\sigma$ (free or torsional) takes the form $S - Z +(\text{non-sectional divisors})$, one quickly realises that each Mordell--Weil generator $\sigma_k$ gives rise to an independent trivially acting central element $C_k$. Thus, the global gauge group structure is a quotient by a product of ${\mathbb{Z}}_{\kappa_k}$'s. We will come back to explicit examples hereof in section \ref{sec:more_MW_generators}. 
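As a computational summary of the preceding discussion, the following sketch solves $\sum_i C_{ik}\, l_i = (S-Z)\cdot{\mathbb{P}}^1_k$ exactly over the rationals and reads off the order $\kappa$ of the central element. The intersection data are an assumption: an $I_5$ fibre whose extra section passes through the third node (the Cartan matrix is symmetric, so this is equivalent to \eqref{eq:coefficients_Cartan_generators}):

```python
from fractions import Fraction as F
from math import lcm

def cartan_su(n):
    """(n-1) x (n-1) Cartan matrix of su(n)."""
    r = n - 1
    return [[F(2) if i == j else F(-1) if abs(i - j) == 1 else F(0)
             for j in range(r)] for i in range(r)]

def solve(C, b):
    """Solve C l = b exactly over the rationals (Gauss--Jordan)."""
    r = len(C)
    M = [row[:] + [b[i]] for i, row in enumerate(C)]
    for c in range(r):
        p = next(i for i in range(c, r) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for i in range(r):
            if i != c and M[i][c] != 0:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[c])]
    return [M[i][r] for i in range(r)]

# Assumed intersection data (S - Z) . P^1_j = delta_{j3} for an su(5) model
# whose extra section meets the third node of the I_5 fibre:
b = [F(0), F(0), F(1), F(0)]
l = solve(cartan_su(5), b)
kappa = lcm(*(x.denominator for x in l))
print(l, kappa)   # l = [2/5, 4/5, 6/5, 3/5], kappa = 5
```

With these assumptions one finds fractional $l_i$ with common denominator 5, i.e., a ${\mathbb{Z}}_5$ quotient in \eqref{eq:global_gauge_group_structure_general}.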
\subsection{Preferred charge normalisation in F-theory}\label{sec:preferred_charge_normalisation} Let us revisit the possible rescaling \eqref{eq:shioda_image_with_factor} of the Shioda map and the resulting normalisation of ${\mathfrak{u}}(1)$ charges in F-theory. In field theory, the overall scaling of the ${\mathfrak{u}}(1)$ charge is unphysical, and can be chosen to our convenience. Likewise, as mentioned before, the Shioda map is only defined up to a constant rescaling. However, in F-theory we have a preferred normalisation provided by the integer divisor classes of sections. Explicitly, given a free generator $\sigma$, we know that its divisor class $S$ must be integer and intersect the generic fibre once. Any rescaling of $S$ cannot preserve these properties. Furthermore, if we rescale $\varphi(\sigma) = S - Z + l_i\,E_i$ by an integer $\kappa$, then, depending on the non-abelian gauge algebra and the fibre split structure, $\kappa\,l_i$ could be integer, which makes the ${\mathfrak{u}}(1)$ generator potentially `blind' to the central element $C$. Indeed, if we were to repeat the analysis leading to \eqref{eq:integer_pairing_condition} with the divisor $\kappa \, \varphi(\sigma)$, then the equivalent expression becomes \begin{align*} q^{\bf w}_\kappa - \sum_i \underbrace{\kappa\,l_i}_{\in {\mathbb{Z}}} \,{\bf w}_i = \kappa \, (S - Z) \cdot \Gamma \in {\mathbb{Z}} \, , \end{align*} which does not provide any non-trivial relation between the ${\mathfrak{u}}(1)$ charge and the weight vectors ${\bf w}$. On the other hand, if we rescale the ${\mathfrak{u}}(1)$ charge by a fractional number $\lambda$, then it is no longer guaranteed that $\lambda \, (S - Z) \cdot \Gamma$ is always an integer. Therefore, it is only with the normalisation $\varphi(\sigma) = S - Z +...$ of the ${\mathfrak{u}}(1)$ generator, that we can make the non-trivial relation \eqref{eq:integer_pairing_condition} manifest in any F-theory compactification. 
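The `blindness' of a rescaled generator can be seen in a small numerical experiment (again with assumed $\mathfrak{su}(5)$-type coefficients; the values and Dynkin labels are purely illustrative):

```python
from fractions import Fraction as F

l = [F(2, 5), F(4, 5), F(6, 5), F(3, 5)]   # assumed su(5)-type coefficients
kappa = 5

def offset(coeffs, dynkin_labels):
    """Fractional part of sum_i coeffs_i w_i, i.e. the charge-lattice shift."""
    return sum(c * w for c, w in zip(coeffs, dynkin_labels)) % 1

reps = {"10": [0, 1, 0, 0], "5": [1, 0, 0, 0], "adjoint": [1, 0, 0, 1]}

# Preferred normalisation: the offsets distinguish g-representations.
assert offset(l, reps["10"]) == F(4, 5)
assert offset(l, reps["5"]) == F(2, 5)
assert offset(l, reps["adjoint"]) == 0   # roots are insensitive, as they must be

# After rescaling by kappa all offsets collapse to zero: the rescaled
# generator carries no information about the centre.
rescaled = [kappa * c for c in l]
assert all(offset(rescaled, w) == 0 for w in reps.values())
print("rescaled generator is blind to the centre")
```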
Comparing to the field theory perspective, where any rescaling of ${\mathfrak{u}}(1)$ charges has no physical meaning, we conclude that the appropriate field theoretic data associated with this preferred normalisation are the global gauge group structure and the charge quantisation of individual ${\mathfrak{g}}$-representations. Equivalently, by first establishing these data, one is then free to choose any normalisation for the ${\mathfrak{u}}(1)$ charge in the field theory. \section{The global gauge group of F-theory models}\label{sec:examples} In this section, we will apply the above analysis to concrete models with ${\mathfrak{u}}(1)$s that have been constructed over the last few years in the literature. \subsection[Models with \texorpdfstring{$\mathfrak{su}(5) \oplus {\mathfrak{u}}(1)$}{su(5)+u(1)} singularity]{Models with \boldmath{$\mathfrak{su}(5) \oplus {\mathfrak{u}}(1)$} singularity}\label{sec:su5+u1_examples} Let us begin with arguably one of the most studied F-theory models, namely the so-called $U(1)$-restricted Tate model \cite{Grimm:2010ez} given by the hypersurface \begin{align}\label{eq:restricted_Tate_model} y^2 + a_1\,x\,y\,z + a_3\,y\,z^3 = x^3 + a_2\,x^2\,z^2 + a_4\,x\,z^4 \, . \end{align} The origin of the ${\mathfrak{u}}(1)$ symmetry in the restricted Tate model can be traced to the appearance of a rational section \begin{align} \sigma : [x:y:z] = [0:0:1] \end{align} with divisor class $S$, in addition to the standard zero section $\sigma_0 : [x:y:z] = [1:1:0]$ with class $Z$ \cite{ Krause:2011xj}. By tuning the coefficients $a_i$ following Tate's algorithm \cite{MR0393039, Bershadsky:1996nh, Katz:2011qp}, $a_2 = a_{2,1}\, \theta , \, a_3 = a_{3,2} \,\theta^2, \, a_4 = a_{4,3}\,\theta^3$, the elliptic fibration \eqref{eq:restricted_Tate_model} develops an $\mathfrak{su}(5)$ singularity over $\{\theta\}$. 
The resolution of this singularity introduces four exceptional Cartan divisors $E_i$, of which only $E_3$ is intersected once by $S-Z$.\footnote{ Recall that the zero section $Z$ intersects the affine node of $I_5$ fibre over $\{\theta\}$. The affine node is separated from the ${\mathbb{P}}^1_3$ node by the fibre of $E_4$. In the notation of \cite{Kuntzler:2014ila}, this is a fibre split type $(0||1)$. } Inserting into \eqref{eq:coefficients_Cartan_generators} then yields $l_i = \frac{1}{5}(2,4,6,3)_i$. The corresponding central element \eqref{eq:central_element_from_Xi} generates a subgroup ${\mathbb{Z}}_5 \subset SU(5) \times U(1)$, which has to act trivially on representations realised geometrically. Hence, the global gauge group of the $U(1)$-restricted Tate model with $\mathfrak{su}(5)$ singularity must be $(SU(5) \times U(1))/{\mathbb{Z}}_5$. Note that for these values of $l_i$, \eqref{eq:L_of_rep} yields $L({\bf 10}) = \frac{4}{5}$ and $L({\bf 5}) = \frac{2}{5}$. Hence, by \eqref{eq:integer_condition_charges_rep}, any ${\bf 10}$ representation must have ${\mathfrak{u}}(1)$ charge $\frac{4}{5} \mod {\mathbb{Z}}$, while any ${\bf 5}$ representation has charge $\frac{2}{5} \mod {\mathbb{Z}}$. This is of course consistent with the spectrum, which in terms of the normalised ${\mathfrak{u}}(1)$ generator $\varphi(\sigma) = S - Z + l_i\,E_i$ reads \begin{align*} {\bf 10}_{-1/5} \, , \quad {\bf 5}_{-3/5} \, , \quad {\bf 5}_{2/5} \, , \quad {\bf 1}_1 \, . \end{align*} Note that there are also $\mathfrak{su}(5) \oplus {\mathfrak{u}}(1)$ models with a ${\mathbb{Z}}_5$ centre that is embedded differently into the $U(1)$, leading to different charge assignments. One such example can be constructed via `toric tops' \cite{Candelas:1996su, Bouchard:2003bu} in a $\text{Bl}_1 {\mathbb{P}}_{112}$-fibration \cite{Morrison:2012ei}. 
It is labelled `top 2' in the appendix of \cite{Borchmann:2013jwa}, which is equivalent to the model `${\cal Q}(4,2,1,1,0,0,2)$' in \cite{Kuntzler:2014ila}. Without going into the details of this model, we note that the sections $Z$ and $S$ intersect in neighbouring nodes of the $\mathfrak{su}(5)$ fibre (i.e., fibre split type $(0|1)$ in the notation of \cite{Kuntzler:2014ila}). The Shioda map is then $S - Z + \frac{1}{5} (1, 2, 3, 4)_i \, E_i$, which also leads to a ${\mathbb{Z}}_5$ centre. However, the ${\mathfrak{u}}(1)$ charges are constrained to be $\frac{1}{5} \! \mod {\mathbb{Z}}$ for ${\bf 5}$-matter and $\frac{2}{5} \! \mod {\mathbb{Z}}$ for ${\bf 10}$-matter. Correspondingly, the spectrum reads \begin{align*} {\bf 10}_{2/5} \, , \quad {\bf 5}_{6/5} \, , \quad {\bf 5}_{-4/5} \, , \quad {\bf 5}_{1/5} \, , \quad {\bf 1}_1 \, , \quad {\bf 1}_2 \, . \end{align*} Of course there are also models without a non-trivial global gauge group structure. An example is the model labelled `top 4' in the appendix of \cite{Borchmann:2013jwa}, or `${\cal Q}(3,2,2,2,0,0,1)$' in \cite{Kuntzler:2014ila}. Here, both sections $Z$ and $S$ intersect the affine node of the $\mathfrak{su}(5)$ fibre (i.e., fibre split type (01)). So the Shioda map is $\varphi(\sigma) = S-Z$, without any shifts by Cartan divisors. The central element \eqref{eq:central_element_from_Xi} then just imposes that all charges must be integral. Thus, the global gauge group is $SU(5) \times U(1)$, which of course is consistent with the spectrum \begin{align*} {\bf 10}_0 \, , \quad {\bf 5}_1 \, , \quad {\bf 5}_{-1} \, , \quad {\bf 5}_0 \, , \quad {\bf 1}_1 \, , \quad {\bf 1}_2. \end{align*} \subsection{Models with more Mordell--Weil generators}\label{sec:more_MW_generators} \subsubsection{Higher Mordell--Weil rank} We have mentioned in the previous section that a higher rank $m$ of the Mordell--Weil group implies that there are possibly $m$ independent non-trivial central elements acting trivially on representations. 
We will illustrate this now with concrete examples, in which the Mordell--Weil rank is 2. The simplest fibration that has two independent free sections arises from a generic cubic in a $\text{Bl}_2 {\mathbb{P}}^2 = dP_2$ fibration \cite{Borchmann:2013jwa,Cvetic:2013nia}.\footnote{A more general model with MW rank 2 has been recently constructed in \cite{Cvetic:2015ioa}; as shown there, the $\text{Bl}_2 {\mathbb{P}}^2$-fibration arises as a specialisation of this general construction.} We will denote the divisor classes of the two sections $\sigma_{1,2}$ generating the Mordell--Weil group by $S_{1,2}$ and stick with the above notation of $Z$ being the zero section. For simplicity, we focus on models with non-abelian gauge algebra $\mathfrak{su}(2)$, and label the single Cartan divisor by $E_1$. All three such models arising from toric tops have been constructed in \cite{Lin:2014qga}. Dubbed tops I, II and III, each of them turns out to have a different global gauge group structure, so it is instructive to analyse each individually. In top I, the Shioda map $\varphi$ takes the sections to \begin{align} \begin{split} & \varphi(\sigma_1) = S_1 - Z + \pi^{-1}(D_B) + \frac{1}{2} \, E_1 \, , \\ & \varphi(\sigma_2) = S_2 - Z + \pi^{-1}(D_B') \, . \end{split} \end{align} Therefore, the ${\mathfrak{u}}(1)$ charges $(q_1, q_2)$ of $\mathfrak{su}(2)$ matter must satisfy $q_1 - \frac{1}{2} w \in {\mathbb{Z}}$ and $q_2 \in {\mathbb{Z}}$, where $w$ is the Dynkin label of $\mathfrak{su}(2)$ states. Only the first condition leads to a central element acting non-trivially on $\mathfrak{su}(2)$ states. Clearly, it is an element of order 2, because $2\,q_1 - w \in {\mathbb{Z}}$. We can also translate the second condition into a central element $C_2 = e^{2\pi i \, q_2} \in U(1)_2$. However, this element evidently just imposes charge quantisation $q_2 \in {\mathbb{Z}}$. 
So the global gauge group structure is \begin{align}\label{eq:global_gauge_group_su2I} G^\text{I} = \frac{SU(2) \times U(1)_1}{{\mathbb{Z}}_2} \times \frac{U(1)_2}{\langle C_2 \rangle} \cong \frac{SU(2) \times U(1)_1}{{\mathbb{Z}}_2} \times U(1)_2 \, . \end{align} The non-abelian part of the spectrum arranges consistently into \begin{align*} {\bf 2}_{(\frac{1}{2}, -1)} \, , \quad {\bf 2}_{(\frac{1}{2}, 1)} \, , \quad {\bf 2}_{(\frac{1}{2}, 0)} \, . \end{align*} In the top II model, the Shioda maps of the sections are \begin{align} \begin{split} & \varphi(\sigma_1) = S_1 - Z + \pi^{-1}(D_B) + \frac{1}{2} \, E_1 \, , \\ & \varphi(\sigma_2) = S_2 - Z + \pi^{-1}(D_B') + \frac{1}{2} \, E_1 \, . \end{split} \end{align} Now both ${\mathfrak{u}}(1)$ charges must satisfy $q_i - \frac{1}{2}\, w \in {\mathbb{Z}}$. Put differently, there are now two central elements of order 2, \begin{align}\label{eq:central_elements_su2II} \begin{split} C^\text{II}_1 = (\xi \times \mathds{1}) \otimes e^{2\pi i \, q_1} \otimes 1 \, , \\ C^\text{II}_2 = (\xi \times \mathds{1}) \otimes 1 \otimes e^{2\pi i \, q_2} \, , \end{split} \end{align} of the covering group $\tilde{G} = SU(2) \times U(1)_1 \times U(1)_2$ that have to act trivially on all representations. Each therefore generates a separate ${\mathbb{Z}}_2$ subgroup of $\tilde{G}$, leading to the actual gauge group \begin{align}\label{eq:global_gauge_group_su2II} G^\text{II} = \frac{ SU(2) \times U(1)_1 \times U(1)_2}{ {\mathbb{Z}}^{(1)}_2 \times {\mathbb{Z}}^{(2)}_2} \, , \end{align} where it needs to be understood that ${\mathbb{Z}}_2^{(i)}$ lies in the centre of $SU(2) \times U(1)_i$. Consequently, the $\mathfrak{su}(2)$ matter is charged as \begin{align*} {\bf 2}_{(\frac{1}{2}, \frac{3}{2})} \, , \quad {\bf 2}_{(\frac{1}{2}, -\frac{1}{2})} \, , \quad {\bf 2}_{(\frac{1}{2}, \frac{1}{2})} \, . 
\end{align*} Finally, there is also the top III with Shioda map \begin{align} \begin{split} & \varphi(\sigma_1) = S_1 - Z + \pi^{-1}(D_B)\, , \\ & \varphi(\sigma_2) = S_2 - Z + \pi^{-1}(D_B') \, , \end{split} \end{align} which clearly leads to trivial central elements. Hence, the gauge group in this case is just $G^\text{III} = SU(2) \times U(1)_1 \times U(1)_2$. The spectrum in this case is\footnote{ Note that we have included a completely uncharged doublet here that was previously missed in \cite{Lin:2014qga}. In fact, the codimension two locus of $I_3$ fibres corresponding to this matter was noticed. However, the monodromy around a codimension 3 sublocus interchanging two of the fibre components was misinterpreted as projecting out the matter states. But due to the vanishing charges, this doublet is actually a real representation, i.e., the two fibre components are homologically equivalent. Thus the monodromy in higher codimension exchanging them is not surprising and actually expected geometrically. A similar observation holds for singlets charged under a discrete ${\mathbb{Z}}_2$ symmetry \cite{Klevers:2014bqa, Mayrhofer:2014haa}. } \begin{align*} {\bf 2}_{(1,0)} \, , \quad {\bf 2}_{(1,1)} \, , \quad {\bf 2}_{(0,1)} \, , \quad {\bf 2}_{(0,0)}\, . \end{align*} Before we move on, let us briefly comment on a peculiar behaviour of the centre when we rotate the ${\mathfrak{u}}(1)$s. Concretely, it was noted in \cite{Lin:2014qga} that, if we redefine the ${\mathfrak{u}}(1)$ charges $(q_a, q_b) = (-q_1 , q_2 - q_1)$ in top II, the spectrum is identical to that of top I. In fact, this is a consequence of a toric symmetry relating tops I and II. How is it compatible with the seemingly different gauge group structures \eqref{eq:global_gauge_group_su2I} and \eqref{eq:global_gauge_group_su2II}? To understand this, let us rewrite the central elements \eqref{eq:central_elements_su2II} in terms of the rotated ${\mathfrak{u}}(1)$ charges. 
Explicitly, we have $e^{2 \pi i \, q_1} \otimes 1 = e^{-2 \pi i \, q_a} \otimes 1$ and $1 \otimes e^{2 \pi i \, q_2} = 1 \otimes e^{2 \pi i \, (q_b - q_a)} = e^{-2\pi i \, q_a} \otimes e^{2 \pi i \, q_b}$. So the central elements are \begin{align} \begin{split} & C^\text{II}_1 = (\xi \times \mathds{1}) \otimes e^{-2 \pi i \, q_a} \otimes 1 \, , \\ & C^\text{II}_2 = (\xi \times \mathds{1}) \otimes e^{-2 \pi i \, q_a} \otimes e^{2 \pi i \, q_b} = C^\text{II}_1 \circ (\mathds{1} \otimes 1 \otimes e^{2 \pi i \, q_b})\equiv C_1^\text{II} \circ \tilde{C}^\text{II}_2\, , \end{split} \end{align} where here we use $\circ$ to denote the group multiplication in $SU(2) \times U(1)^2$. Note that we are dealing with central elements, hence they all commute. In the gauge group \eqref{eq:global_gauge_group_su2II} of top II, both $C^\text{II}_{1,2}$ must act trivially on all states. The above equation implies that this is equivalent to $C^\text{II}_1$ and $\tilde{C}^\text{II}_2$ acting trivially. But since $\tilde{C}^\text{II}_2$ lies in $U(1)_b$, we have \begin{align}\label{eq:global_gauge_group_su2II_rotated} G^\text{II} = \frac{SU(2) \times U(1)_a}{\langle C^\text{II}_1 \rangle} \times \frac{U(1)_b}{ \langle \tilde{C}^\text{II}_2 \rangle} \, . \end{align} Now the second quotient structure just imposes that the $U(1)_b$ charges are integer for all states, which is also implemented in \eqref{eq:global_gauge_group_su2I} by the second quotient. Therefore, we have shown that by rotating the ${\mathfrak{u}}(1)$s in top II, the global gauge group structure \eqref{eq:global_gauge_group_su2II} (including charge quantisation) turns out to be equivalent to that of top I \eqref{eq:global_gauge_group_su2I}. \subsubsection{Inclusion of Mordell--Weil torsion} Let us now look at a model with Mordell--Weil group ${\mathbb{Z}} \oplus {\mathbb{Z}}_2$. 
This example---studied extensively in \cite{Mayrhofer:2014opa} (and also appearing in a slightly different fashion in \cite{Klevers:2014bqa})---has, in addition to the zero section, a section $\sigma_f$ generating the free part and a section $\sigma_r$ generating the 2-torsional part of the Mordell--Weil group. The fibration has two $\mathfrak{su}(2)$ factors with Cartan divisors $C$ and $D$, i.e., the covering gauge group is $SU(2)_C \times SU(2)_D \times U(1)$. Under the Shioda map, the free section with divisor class $S$ maps onto \begin{align} \varphi(\sigma_f) = S - Z + \pi^{-1}(D_B) + \frac{1}{2}\,C \, , \end{align} giving rise to the central element $(\xi \times \mathds{1}) \otimes \mathds{1} \otimes e^{2\pi i \, q} \in {\mathbb{Z}}^{(f)}_2 \subset SU(2)_C \times U(1)$. For the torsional section, one may determine the Shioda map analogously \cite{Mayrhofer:2014opa} through the conditions \eqref{eq:no_charge_generic_fibre} to \eqref{eq:no_charge_W_bosons}. This yields \begin{align} \varphi(\sigma_r) = V - Z + \pi^{-1}(D'_B) + \frac{1}{2}\, (C - D) \, . \end{align} Because $\sigma_r$ is 2-torsional and $\varphi$ is a homomorphism, we know that $\varphi(\sigma_r) = 0$. Analogously to the derivation of \eqref{eq:integer_pairing_condition} (with $q^{\bf w}=0$), this means that $\frac{1}{2}\,(w_C-w_D)$, with $w_{C,D}$ being the weights of $SU(2)_C \times SU(2)_D$ irreps, must be integral. This condition defines another central element $\exp(\pi i (w_C - w_D))$ generating the `diagonal' ${\mathbb{Z}}_2^{(r)}$ of the ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ centre of $SU(2)_C \times SU(2)_D$, which has to act trivially on representations of the F-theory compactification.
Therefore, the global gauge group structure is \begin{align} \frac{SU(2)_C \times SU(2)_D \times U(1)}{{\mathbb{Z}}_2^{(f)} \times {\mathbb{Z}}_2^{(r)}} \, , \end{align} where, in order for the notation to make sense, we need to clarify that ${\mathbb{Z}}_2^{(f)}$ acts only on $SU(2)_C \times U(1)$ representations, whereas ${\mathbb{Z}}_2^{(r)}$ acts on $SU(2)_C \times SU(2)_D$ representations. Note that the ${\mathbb{Z}}_2^{(r)}$ quotient forbids any matter transforming as the fundamental representation under a single $SU(2)$ factor, irrespective of the $U(1)$ charge. On the other hand, any matter transforming in the fundamental representation of $SU(2)_C$ must have $U(1)$ charge $\frac{1}{2} \mod {\mathbb{Z}}$ due to the ${\mathbb{Z}}_2^{(f)}$ quotient. Thus, it is not surprising that the spectrum of the model contains, in addition to a charge 1 singlet, only bifundamental matter with charge $1/2$. \subsection{F-theory Standard Models}\label{sec:examples_Standard_Models} We now come to a class of somewhat more phenomenologically interesting models, namely elliptic fibrations realising the Standard Model gauge algebra ${\mathfrak{g}}_\text{SM} = \mathfrak{su}(3) \oplus \mathfrak{su}(2) \oplus \mathfrak{u}(1)$ in F-theory. The first model, presented in \cite{Klevers:2014bqa} (and labelled there as $X_{F_{11}}$), has ${\mathfrak{g}}_\text{SM}$ as the full gauge algebra and the exact Standard Model spectrum (at the level of representations). The inclusion of fluxes in \cite{Cvetic:2015txa} resulted in the first globally consistent three-chiral-family Standard Model construction in F-theory. As mentioned in the introduction, the Standard Model spectrum is consistent with a global gauge group structure $(SU(3)\times SU(2) \times U(1))/{\mathbb{Z}}_6$. With the new insights from section \ref{sec:general_story}, we can now explicitly show that in F-theory, we can indeed construct such a global structure.
In the $X_{F_{11}}$ model, the Shioda map of the free section is \begin{align}\label{eq:shioda_map_XF11} \varphi(\sigma) = S - Z + \pi^{-1}(D_B) + \frac{1}{2} \, E_1^{\mathfrak{su}(2)} + \frac{1}{3} ( 2\,E_1^{\mathfrak{su}(3)} + E_2^{\mathfrak{su}(3)} ) \, , \end{align} where $E_i^{\mathfrak{h}}$ denotes the Cartan generator(s) of the corresponding subalgebra $\mathfrak{h}$.\footnote{ Compared to \cite{Klevers:2014bqa}, we have switched the order of the $\mathfrak{su}(3)$ Dynkin labels by exchanging $E_1^{\mathfrak{su}(3)}$ and $E_2^{\mathfrak{su}(3)}$. This exchanges the notion of ${\bf 3}$ and $\overline{\bf 3}$, making the charges identical to those of the Standard Model. } If we denote the weight vectors of $\mathfrak{su}(3)$ resp.~$\mathfrak{su}(2)$ by $(w_1, w_2)$ resp.~$\omega$ and the ${\mathfrak{u}}(1)$ charge by $q$, then the integrality condition \eqref{eq:integer_pairing_condition} for $X_{F_{11}}$ reads \begin{align}\label{eq:integer_pairing_condition_XF11} q - \frac{1}{2}\omega - \frac{1}{3} (2\,w_1 + w_2) \in {\mathbb{Z}} \, . \end{align} Because $\omega, w_i \in {\mathbb{Z}}$, the smallest positive integer $\kappa$ such that $\kappa\,q \in {\mathbb{Z}}$ \textit{for all} possible charges $q$ is $\kappa = 6$. Thus, the central element \begin{align} C_{X_{F_{11}}} = \left[ e^{- 2\pi i \, \frac{2w_1 + w_2}{3} } \times \mathds{1}_{SU(3)} \right] \otimes \left[ e^{- 2\pi i \, \frac{\omega}{2} } \times \mathds{1}_{SU(2)} \right] \otimes e^{2\pi i \, q} \end{align} acting on $SU(3) \times SU(2) \times U(1)$ representations has order 6, so it defines a ${\mathbb{Z}}_6$ subgroup of the centre, i.e., the global gauge group is \begin{align} G_{F_{11}} = \frac{SU(3) \times SU(2) \times U(1)}{{\mathbb{Z}}_6} \, .
\end{align} The condition \eqref{eq:integer_pairing_condition_XF11} implies that fundamental matter charged only under $\mathfrak{su}(3)$ must have charge $\frac{1}{3} \mod {\mathbb{Z}}$, while pure fundamentals of $\mathfrak{su}(2)$ have $q = \frac{1}{2} \mod {\mathbb{Z}}$. Inspecting the highest weight of the bifundamental, $\omega = 1, w_1 = 1, w_2 = 0$, we see that this representation must have charge $\frac{1}{6} \mod {\mathbb{Z}}$. Correspondingly, the geometric spectrum, \begin{align} ({\bf 3,2})_{1/6} \, , \quad ({\bf 1, 2})_{-1/2} \, , \quad ({\bf 3,1})_{2/3} \, , \quad ({\bf 3, 1})_{-1/3} \, , \quad ({\bf 1,1})_1 \, , \end{align} agrees with that of the Standard Model. A different class of Standard-Model-like models was constructed in \cite{Lin:2014qga}, of which we have examined the $\mathfrak{su}(2)$ sector already above. The $\mathfrak{su}(3)$ sector is constructed in the analogous fashion with tops, which can then be combined with any $\mathfrak{su}(2)$ top to yield the non-abelian part of ${\mathfrak{g}}_\text{SM}$. Due to the rank 2 Mordell--Weil group, these models have an additional ${\mathfrak{u}}(1)$ symmetry, which can be used to implement certain selection rules. As elaborated on in \cite{Lin:2014qga}, for each combination of the tops, there are again multiple ways of identifying the hypercharge ${\mathfrak{u}}(1)$ as a linear combination of the geometric ${\mathfrak{u}}(1)$s; the choice is tied to the role of the selection rule and the identification of the geometric spectrum with that of the Standard Model. For definiteness, we focus on one particular choice of tops and identification, for which there also exists an extensive analysis including $G_4$-fluxes \cite{Lin:2016vus}.
In this case, the Shioda map, which for the first section also yields the hypercharge ${\mathfrak{u}}(1)$ generator, reads \begin{align} \begin{split} {\mathfrak{u}}(1)_{Y} : \quad & \varphi(\sigma_1) = S_1 - Z + \pi^{-1}(D_B) + \frac{1}{2} E^{\mathfrak{su}(2)}_1 + \frac{1}{3} (2E_1^{\mathfrak{su}(3)} + E^{\mathfrak{su}(3)}_2 ) \, , \\ & \varphi(\sigma_2) = S_2 - Z + \pi^{-1}(D'_B) + \frac{1}{3} (2E_1^{\mathfrak{su}(3)} + E^{\mathfrak{su}(3)}_2 ) \, . \end{split} \end{align} Analogously to the $X_{F_{11}}$ model, the first section leads to a central element of $SU(3) \times SU(2) \times U(1)_Y$ generating a ${\mathbb{Z}}^{(Y)}_6$ subgroup. Meanwhile, the second section clearly generates a ${\mathbb{Z}}^{(2)}_3 \subset SU(3) \times U(1)_2$. Hence, the global gauge group is \begin{align} \frac{SU(3) \times SU(2) \times U(1)_Y \times U(1)_2}{{\mathbb{Z}}^{(Y)}_6 \times {\mathbb{Z}}^{(2)}_3} \, . \end{align} Consistently, the $( U(1)_Y , U(1)_2)$ charges of the fundamental representations arrange as follows: \begin{itemize} \item $({\bf 3, 2})$ must have charge $(\frac{1}{6} \! \mod {\mathbb{Z}}, \frac{1}{3} \! \mod {\mathbb{Z}})$. \item $({\bf 3,1})$ must have charge $(\frac{1}{3} \! \mod {\mathbb{Z}}, \frac{1}{3} \! \mod {\mathbb{Z}})$. \item $({\bf 1,2})$ must have charge $(\frac{1}{2} \! \mod {\mathbb{Z}}, 0 \! \mod {\mathbb{Z}})$. \end{itemize} Geometrically, the representations of the $\mathfrak{su}(3) \oplus \mathfrak{su}(2) \oplus {\mathfrak{u}}(1)_Y$ subalgebra agree with those of the Standard Model (with additional singlets with no hypercharge). However, the ${\mathfrak{u}}(1)_2$ charge discriminates between states that would be otherwise indistinguishable under the Standard Model algebra. Note that, in order to make contact with the actual Standard Model, one ultimately needs to lift the second ${\mathfrak{u}}(1)$ from the massless spectrum, which would require further investigation; see \cite{Lin:2016vus}.
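Purely as an arithmetic cross-check (an illustrative sketch of our own, not part of the geometric analysis), the integrality condition \eqref{eq:integer_pairing_condition_XF11} and the value $\kappa = 6$ can be verified directly from the $X_{F_{11}}$ spectrum; the weights and charges below are exactly those quoted above.

```python
from fractions import Fraction as F
from math import lcm

# Spectrum of the X_F11 model: (name, su(3) highest weight (w1, w2),
# su(2) weight omega, u(1) charge q), copied from the text.
spectrum = [
    ("(3,2)_{1/6}",  (1, 0), 1, F(1, 6)),
    ("(1,2)_{-1/2}", (0, 0), 1, F(-1, 2)),
    ("(3,1)_{2/3}",  (1, 0), 0, F(2, 3)),
    ("(3,1)_{-1/3}", (1, 0), 0, F(-1, 3)),
    ("(1,1)_{1}",    (0, 0), 0, F(1)),
]

def pairing_defect(w, omega, q):
    """q - omega/2 - (2 w1 + w2)/3, which must be an integer."""
    w1, w2 = w
    return q - F(omega, 2) - F(2 * w1 + w2, 3)

# Every representation satisfies the integrality condition ...
assert all(pairing_defect(w, o, q).denominator == 1
           for _, w, o, q in spectrum)

# ... and the minimal kappa with kappa*q integral for all charges is
# the lcm of the charge denominators: the order of the Z_6 quotient.
kappa = lcm(*(q.denominator for _, _, _, q in spectrum))
print(kappa)  # 6
```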
\section{Relationship to (un-)higgsing}\label{sec:higgsing} In this section, we want to explore the origin of the non-trivial global gauge group structure in higgsing processes. In F-theory, ${\mathfrak{u}}(1)$s can often be unhiggsed into the Cartan of non-abelian gauge algebras \cite{Morrison:2012ei, Morrison:2014era, Klevers:2014bqa, Cvetic:2015ioa, Klevers:2016jsz}. Given that the Cartan charges of non-abelian matter are naturally integrally quantised, one might wonder if and how this is related to the restrictions on the ${\mathfrak{u}}(1)$ charges after breaking the non-abelian symmetry. In fact, the F-theory Standard Model fibration $X_{F_{11}}$ we discussed in section \ref{sec:examples_Standard_Models} is shown in \cite{Klevers:2014bqa} to geometrically unhiggs into a Pati-Salam-like theory with $[SU(4) \times SU(2)^2]/{\mathbb{Z}}_2$ gauge group. In this case, the ${\mathbb{Z}}_6$ centre of the Standard Model is known to arise from the representation theory of $[SU(4) \times SU(2)^2]/{\mathbb{Z}}_2$.\footnote{Note that the unhiggsed non-abelian group does not necessarily have to have a non-trivial global structure in order to induce one after breaking, cf.~$SU(5) \rightarrow [SU(3) \times SU(2) \times U(1)]/{\mathbb{Z}}_6$.} One may ask whether it is possible to also unhiggs $X_{F_{11}}$ to an $SU(5)$ fibration. However, the geometric description of unhiggsing is in general quite involved, since one does not a priori know the deformation corresponding to the specific unhiggsing process. In order to gain some further intuition, we therefore restrict our analysis to a specific class of models, for which we have a good handle on the geometry. For these, we show explicitly that the restrictions on the ${\mathfrak{u}}(1)$ charges leading to the global gauge group structure arise from a larger, purely non-abelian gauge theory. As we will see, the unhiggsed non-abelian gauge algebra depends on the fibre split structure induced by the section. 
\subsection[Unhiggsing the \texorpdfstring{${\mathfrak{u}}(1)$}{u(1)} in a \texorpdfstring{$\text{Bl}_1 {\mathbb{P}}_{112}$}{Bl1P112} fibration]{Unhiggsing the \boldmath{${\mathfrak{u}}(1)$} in a \boldmath{$\text{Bl}_1{\mathbb{P}}_{112}$} fibration} The class of models we analyse has non-abelian gauge algebras engineered in a $\text{Bl}_1 {\mathbb{P}}_{112}$ fibration, a.k.a.~the Morrison--Park model \cite{Morrison:2012ei}. Such models have a gauge algebra of the form ${\mathfrak{g}} \oplus {\mathfrak{u}}(1)$. Note that a broad class of such constructions has been classified through an analogue of Tate's algorithm in \cite{Kuntzler:2014ila}. They can be realised as a toric hypersurface defined by the vanishing of the polynomial \begin{align}\label{eq:general_MorrisonPark_hypersurface} P := w^2 \, s + b_0 \, w \, u^2 \, s+ b_1\,u\,v\,w\,s + b_2\,v^2\,w + c_0\,u^4 + c_1\,u^3\,v + c_2\,u^2\,v^2 + c_3\,u\,v^3 \, , \end{align} where $[u:v:w]$ are the projective ${\mathbb{P}}_{112}$ coordinates.\footnote{ The authors of \cite{Morrison:2012ei} showed that, due to the constant coefficient in the $w^2\,s$-term, one can in fact absorb the terms with $b_0$ and $b_1$ through a coordinate redefinition, effectively setting them to 0. The inclusion of these terms allows for a more straightforward construction of non-abelian algebras, either via Tate's algorithm or via tops. Here, we have adopted the notation set in appendix B of \cite{Morrison:2012ei}, which is related to the notation of \cite{Kuntzler:2014ila} by exchanging $b_0 \leftrightarrow b_2$; also, the coordinates $(u,v,w)$ are labelled $(w,x,y)$ in \cite{Kuntzler:2014ila}. } Furthermore, $s$ is the blow-up coordinate whose vanishing defines the additional rational section generating the Mordell--Weil group.
As already discussed in \cite{Morrison:2012ei}, this fibration has a complex structure deformation, $b_2 \rightarrow 0$, which enhances the gauge algebra from ${\mathfrak{u}}(1)$ to an $\mathfrak{su}(2)_b$ localised over $c_3 = 0$. In the absence of any additional non-abelian singularities, this enhancement can be understood as the inverse, i.e., unhiggsing, of breaking the $\mathfrak{su}(2)_b$ to ${\mathfrak{u}}(1)$ with its adjoint representation. Under this breaking, the resulting singlets of the Morrison--Park model with charges 1 and 2 are remnants of fundamentals and adjoints, respectively, of the $\mathfrak{su}(2)_b$ theory. By tuning the coefficients $b_i,c_j$ to vanish to certain powers along a divisor $\{\theta\}$ of the base, i.e., $b_i = b_{i,k} \, \theta^k$ and similarly for $c_j$, the fibres over $\{\theta\}$ develop Kodaira singularities corresponding to a certain simple gauge algebra ${\mathfrak{g}}$. The above deformation still exists for these gauge-enhanced models in the form of $b_{2,k} \rightarrow 0$ (we will abusively write $b_{2,k} \equiv b_2$). This deformation will then modify the fibres over $c_{3,k'} \equiv c_3$ to have an $\mathfrak{su}(2)$ singularity, since a generic choice for $\{\theta\}$ will not affect this codimension one locus. Likewise, the codimension two enhancement leading to the fundamentals of $\mathfrak{su}(2)$ will still persist in the presence of the divisor $\{\theta\}$, as it will generically not contain this codimension two locus. \subsection[Charge constraints on \texorpdfstring{$\mathfrak{su}(2)$}{su(2)} matter from higgsing]{Charge constraints on \boldmath{$\mathfrak{su}(2)$} matter from higgsing} To keep things simple, we will focus on the easiest example with ${\mathfrak{g}} = \mathfrak{su}(2)_a$ from $I_2$-singularities over $\{\theta\}$, which we construct using `toric tops'.
The subscript $a$ is to distinguish it from the $\mathfrak{su}(2)_b$ gauge algebra, which arises from unhiggsing the ${\mathfrak{u}}(1)$. The spectrum of these models consists of singlets with charges 1 and 2, and fundamentals of $\mathfrak{su}(2)_a$. The charges of these fundamentals depend on the fibre split, which for ${\mathfrak{g}} = \mathfrak{su}(2)_a$ can only be of type $(01)$ or $(0|1)$ (see figure \ref{fig:fibre_split}). \begin{figure}[ht] \centering \def\svgwidth{.4\hsize} \input{fibre_split_types.pdf_tex} \caption{The split types of an $I_2$-fibre by two sections (red dots): $(01)$ on the left and $(0|1)$ on the right.} \label{fig:fibre_split} \end{figure} With the techniques of \cite{Bouchard:2003bu}, one finds two different $\text{Bl}_1 {\mathbb{P}}_{112}$ fibrations with additional $\mathfrak{su}(2)_a$ singularities, corresponding to the $(01)$ and $(0|1)$ split type, respectively. In the $(01)$ split model, whose resolved geometry is given by the vanishing of the polynomial \begin{align} c_0\,e_0^2\,s^3\,u^4 + c_1\,e_0\,s^2\,u^3\,v + c_2\,s\,u^2\,v^2 + c_3\,e_1\,u\,v^3 + b_0\,e_0\,s^2\,u^2\,w + b_1\,s\,u\,v\,w + b_2\,e_1\,v^2\,w + s\,w^2 \, , \end{align} the $\mathfrak{su}(2)$ fibre over $\{\theta\}$ is formed by the ${\mathbb{P}}^1$ fibres of the divisors $E_0 = \{e_0\}$ and $E_1 = \{e_1\}$. Both the zero section $Z = [\{u\}]$ and the additional section $S = [\{s\}]$ intersect the component $\{e_0\}$. Therefore the Shioda map of the section $S$ simply yields $S - Z$; correspondingly, we find $\mathfrak{su}(2)_a$ fundamentals with charges 1 and 0. Their loci can be read off from the discriminant of this fibration: \begin{align}\label{eq:discriminant_su2_01} \Delta^{(01)} \sim \underbrace{(b_1^2 - 4\,c_2)^2}_{\text{type $III$, no matter}} \, \underbrace{(b_2^2\,c_2 - b_1\,b_2\,c_3 + c_3^2)}_{{\bf 2}_1} \, \underbrace{[b_1^2\,c_0 - b_0\,b_1\,c_1 + c_1^2 + (b_0^2 - 4\,c_0)\,c_2]}_{{\bf 2}_0}\, \theta^2 + {\cal O}( \theta^3) .
\end{align} By tuning $b_2 \rightarrow 0$, we see from the discriminant \eqref{eq:discriminant_su2_01} that the locus of the ${\bf 2}_1$ curve now becomes $c_3^2$ --- precisely the locus of the $\mathfrak{su}(2)_b$ singularity: \begin{align}\label{eq:discriminant_su2_01_unhiggsed} \tilde{\Delta}^{(01)} \sim \underbrace{(b_1^2 - 4\,c_2)^2}_{\text{type $III$, no matter}} \, \underbrace{[b_1^2\,c_0 - b_0\,b_1\,c_1 + c_1^2 + (b_0^2 - 4\,c_0)\,c_2]}_{P}\, \theta^2 \, c_3^2 + {\cal O}( \theta^3 , c_3^3) \, . \end{align} Again, the first curve, $\{b_1^2 - 4\,c_2\}$, intersects both $\mathfrak{su}(2)_{a,b}$ divisors at codimension two loci of type $III$ enhancement, indicating the absence of any matter. The intersections of the curve $\{P\}$ with $\{\theta\}$ and $\{c_3\}$ give rise to fundamentals of each $\mathfrak{su}(2)$ factor. Finally, at the intersection $\{\theta\} \cap \{c_3\}$, we have bifundamentals of $\mathfrak{su}(2)_a \oplus \mathfrak{su}(2)_b$. Since there are fundamentals of each $\mathfrak{su}(2)$ factor present, the global gauge group of the unhiggsed model must be $SU(2)_a \times SU(2)_b$. Clearly, the higgsing in this case proceeds via adjoint breaking of $\mathfrak{su}(2)_b$, which preserves $\mathfrak{su}(2)_a$. Geometrically, we immediately see that the uncharged ${\bf 2}_0$ matter states in \eqref{eq:discriminant_su2_01} are completely unaffected by the \mbox{(un-)}higgsing process, as they arise from the $\mathfrak{su}(2)_a$ fundamentals along $\{\theta\} \cap \{P\}$ in \eqref{eq:discriminant_su2_01_unhiggsed}. On the other hand, the charged fundamentals ${\bf 2}_1$ in \eqref{eq:discriminant_su2_01} arise from the bifundamentals sitting at $\{c_3\} \cap \{\theta\}$ before the higgsing; by the deformation that turns on $b_2$, they are localised at $\{b_2^2\,c_2 - b_1\,b_2\,c_3 + c_3^2\} \cap \{\theta\}$ after higgsing. Hence, we can directly interpret the ${\mathfrak{u}}(1)$ after higgsing as the remnant $\mathfrak{su}(2)_b$ Cartan generator.
This explicitly identifies the ${\mathfrak{u}}(1)$ charge lattice with the weight lattice of $\mathfrak{su}(2)_b$. In the `preferred normalisation', in which the singlets arising from higgsed remnants of $\mathfrak{su}(2)_b$ fundamentals and adjoints have charges 1 and 2, this implies that all ${\mathfrak{u}}(1)$ charges must be integer---which is equivalent to the statement of the integrality condition \eqref{eq:integer_pairing_condition} applied to the Shioda map $S-Z$. In terms of the global gauge group structure, we have $G_\text{glob} = SU(2) \times U(1)$. The toric $(0|1)$ split $\mathfrak{su}(2)_a \oplus {\mathfrak{u}}(1)$ model is given by the vanishing of the polynomial \begin{align} c_0\,e_0\,s^3\,u^4 + c_1\,e_0\,s^2\,u^3\,v + c_2\,e_0\,s\,u^2\,v^2 + c_3\,e_0\,u\,v^3 + b_0\,s^2\,u^2\,w + b_1\,s\,u\,v\,w + b_2\,v^2\,w + e_1\,s\,w^2 \, . \end{align} The $\mathfrak{su}(2)_a$ doublets have charges $3/2$ and $1/2$, consistent with the integrality condition \eqref{eq:integer_pairing_condition} for the Shioda map $S - Z + E_1/2$. Therefore, the global gauge group structure is $[SU(2) \times U(1)]/{\mathbb{Z}}_2$. The discriminant of this fibration is \begin{align}\label{eq:discriminant_su2_0|1} \Delta^{(0|1)} \sim \underbrace{(b_1^2 - 4 \,b_0\,b_2)^2}_{\text{type }III} \, \underbrace{b_2}_{{\bf 2}_{3/2}} \, \underbrace{Q}_{{\bf 2}_{1/2}} \, \theta^2 + {\cal O}(\theta^3) \, , \end{align} where $Q$ is a lengthy polynomial in the coefficients $b_i, c_j$. However, since $b_2$ is now an explicit factor of the $\theta^2$ term of the discriminant, tuning $b_2 \rightarrow 0$ will clearly enhance the vanishing order of the discriminant in $\theta$.
Indeed, we find that after tuning, the discriminant becomes \begin{align}\label{eq:discriminant_su2_0|1_unhiggsed} \begin{split} \tilde{\Delta}^{(0|1)} & \, = \Delta^{(0|1)}|_{b_2=0} = c_3^2 \, \theta^3 \, \Delta_\text{res} \\ & \, = - \underbrace{\theta^3}_{({\bf 3,2})} \,\underbrace{(b_1^2 - 4\,c_2\,\theta)^2}_{\text{type }III}\, \underbrace{(b_1^2\,c_0 - b_0\,b_1\,c_1 + b_0^2\,c_2 + c_1^2\,\theta - 4\,c_0\,c_2\,\theta)}_{P_1 \rightarrow ({\bf 1,2})}\,c_3^2 + {\cal O}(c_3^3) \\ & \, = -\underbrace{c_3^2}_{({\bf 3,2})}\, \underbrace{b_1^3}_{\text{type }IV}\, \underbrace{(b_1^3\,c_0 - b_0\,b_1^2\,c_1 + b_0^2\,b_1\,c_2 - b_0^3\,c_3)}_{P_2 \rightarrow ({\bf 3,1})}\, \theta^3 + {\cal O}(\theta^4) \, , \end{split} \end{align} indicating an enhancement to $\mathfrak{su}(3)_a \oplus \mathfrak{su}(2)_b$. At the intersection of the $\mathfrak{su}(3)_a$ divisor $\{\theta\}$ and the $\mathfrak{su}(2)_b$ divisor $\{c_3\}$ we naturally find bifundamentals ${\bf (3,2)}$. Furthermore, the codimension two locus $\{P_1\} \cap \{c_3\}$ now supports fundamentals ${\bf (1,2)}$ of $\mathfrak{su}(2)_b$, and the locus $\{P_2\} \cap \{\theta\}$ supports fundamentals $\bf (3,1)$ of $\mathfrak{su}(3)_a$. The deformation process of turning on the $b_2$ term now corresponds to bifundamental higgsing. To see this, let us first look at the group theory. The fundamental and adjoint representations decompose as \begin{align} \begin{split} \mathfrak{su}(3) \rightarrow \mathfrak{su}(2) \oplus {\mathfrak{u}}(1)_3 : \quad & {\bf 3} \rightarrow {\bf 2}_1 \oplus {\bf 1}_{-2} \, , \quad {\bf 8} \rightarrow {\bf 3}_0 \oplus {\bf 2}_3 \oplus {\bf 2}_{-3} \oplus {\bf 1}_0 \, ,\\ \mathfrak{su}(2) \rightarrow {\mathfrak{u}}(1)_2 : \quad & {\bf 2} \rightarrow {\bf 1}_{-1} \oplus {\bf 1}_{1} \, , \quad {\bf 3} \rightarrow {\bf 1}_2 \oplus {\bf 1}_0 \oplus {\bf 1}_{-2} \, .
\end{split} \end{align} Hence, the representations of the product algebra decompose according to \begin{align}\label{eq:higgsing_spectrum_intermediate} \begin{split} \mathfrak{su}(3) \oplus \mathfrak{su}(2) \rightarrow \mathfrak{su}(2) \oplus {\mathfrak{u}}(1)_3 \oplus {\mathfrak{u}}(1)_2: \quad & ({\bf 3,2}) \rightarrow {\bf 2}_{(1,-1)} \oplus {\bf 2}_{(1,1)} \oplus {\bf 1}_{(-2,-1)} \oplus {\bf 1}_{(-2,1)} \, ,\\ & ({\bf 3,1}) \rightarrow {\bf 2}_{(1,0)} \oplus {\bf 1}_{(-2,0)} \, ,\\ & ({\bf 1,2}) \rightarrow {\bf 1}_{(0,-1)} \oplus {\bf 1}_{(0,1)} \, ,\\ & ({\bf 8,1}) \rightarrow {\bf 3}_{(0,0)} \oplus {\bf 2}_{(3,0)} \oplus {\bf 2}_{(-3,0)} \oplus {\bf 1}_{(0,0)} \, ,\\ & ({\bf 1,3}) \rightarrow {\bf 1}_{(0,2)} \oplus {\bf 1}_{(0,0)} \oplus {\bf 1}_{(0,-2)} \, . \end{split} \end{align} Therefore, by giving a vev to one of the singlets under the decomposition of the bifundamental, one breaks $\mathfrak{su}(3)_a \oplus \mathfrak{su}(2)_b$ to $\mathfrak{su}(2)_a \oplus {\mathfrak{u}}(1)$, where the ${\mathfrak{u}}(1)$ is a linear combination of the Cartans ${\mathfrak{u}}(1)_3$ and ${\mathfrak{u}}(1)_2$ such that the singlet receiving the vev is neutral under it. This leaves the possibilities ${\mathfrak{u}}(1) = {\mathfrak{u}}(1)_2 \pm ({\mathfrak{u}}(1)_3/2)$, where we have chosen the normalisation such that the singlets after higgsing have charges 1 and 2. It can easily be checked that the two possibilities will in the end lead to the same $\mathfrak{su}(2)_a \oplus {\mathfrak{u}}(1)$ spectrum up to a sign for the ${\mathfrak{u}}(1)$ charge.
So by fixing ${\mathfrak{u}}(1) = {\mathfrak{u}}(1)_2 + ({\mathfrak{u}}(1)_3/2)$, we find \begin{align}\label{eq:higgsing_spectrum_su3+su2_to_su2+u1} \begin{split} \mathfrak{su}(3) \oplus \mathfrak{su}(2) \rightarrow \mathfrak{su}(2) \oplus {\mathfrak{u}}(1) : \quad & ({\bf 3,2}) \rightarrow {\bf 2}_{-1/2} \oplus {\bf 2}_{3/2} \oplus {\bf 1}_{-2} \oplus {\bf 1}_0 \, ,\\ & ({\bf 3,1}) \rightarrow {\bf 2}_{1/2} \oplus {\bf 1}_{-1} \, , \\ & ({\bf 1,2}) \rightarrow {\bf 1}_{-1} \oplus {\bf 1}_{1} \, , \\ & ({\bf 8,1}) \rightarrow {\bf 3}_0 \oplus {\bf 2}_{3/2} \oplus {\bf 2}_{-3/2} \oplus {\bf 1}_0 \, ,\\ & ({\bf 1,3}) \rightarrow {\bf 1}_{2} \oplus {\bf 1}_0 \oplus {\bf 1}_{-2} \, . \end{split} \end{align} First, note that the charges agree with the spectrum of the toric $(0|1)$ split $\mathfrak{su}(2)_a \oplus {\mathfrak{u}}(1)$ model. Furthermore, comparing the matter loci \eqref{eq:discriminant_su2_0|1} to those of the unhiggsed $\mathfrak{su}(3) \oplus \mathfrak{su}(2)$ theory \eqref{eq:discriminant_su2_0|1_unhiggsed}, one can explicitly verify that the locus $\{Q\} \cap \{\theta\}$, supporting the ${\bf 2}_{1/2}$ matter of the $(0|1)$ model, decomposes upon unhiggsing into \begin{align} \{Q\} \cap \{\theta\} \stackrel{b_2 \rightarrow 0}{\longrightarrow} \underbrace{\{c_3\} \cap \{\theta\}}_{(\bf 3,2)} \, \cup \, \underbrace{\{P_2\} \cap \{\theta\}}_{(\bf 3,1)} \, . \end{align} This confirms the geometric origin of the ${\bf 2}_{1/2}$ matter states we expected from the group theoretic higgsing process \eqref{eq:higgsing_spectrum_su3+su2_to_su2+u1}. On the other hand, the codimension two locus of the ${\bf 2}_{3/2}$ matter, $\{b_2\} \cap \{\theta\}$, is promoted to the $\mathfrak{su}(3)$ divisor $\{\theta\}$, so, as expected, the adjoints of $\mathfrak{su}(3)$ contribute to ${\bf 2}_{3/2}$ upon higgsing. The additional states originating from the bifundamentals are accounted for by explicitly checking the multiplicities.
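The charge assignments in \eqref{eq:higgsing_spectrum_su3+su2_to_su2+u1} can be reproduced mechanically from \eqref{eq:higgsing_spectrum_intermediate}; the following sketch (our own cross-check, with the branching data copied from the two decompositions above) simply applies $q = q_2 + q_3/2$ to each intermediate state.

```python
from fractions import Fraction as F

# Intermediate spectrum of su(3)+su(2) -> su(2)+u(1)_3+u(1)_2:
# each entry is (su(2) dimension, q3, q2), copied from the text.
intermediate = {
    "(3,2)": [(2, 1, -1), (2, 1, 1), (1, -2, -1), (1, -2, 1)],
    "(3,1)": [(2, 1, 0), (1, -2, 0)],
    "(1,2)": [(1, 0, -1), (1, 0, 1)],
    "(8,1)": [(3, 0, 0), (2, 3, 0), (2, -3, 0), (1, 0, 0)],
    "(1,3)": [(1, 0, 2), (1, 0, 0), (1, 0, -2)],
}

def final_charges(states):
    # u(1) = u(1)_2 + u(1)_3 / 2, the combination fixed in the text.
    return [(dim, q2 + F(q3, 2)) for dim, q3, q2 in states]

for rep, states in intermediate.items():
    print(rep, "->", final_charges(states))

# e.g. the bifundamental yields the doublets 2_{-1/2} and 2_{3/2}.
assert final_charges(intermediate["(3,2)"])[:2] == [(2, F(-1, 2)), (2, F(3, 2))]
```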
Note that the unhiggsed theory also contains pure fundamentals of each gauge factor; hence the unhiggsed global gauge group must be $SU(3) \times SU(2)$. So we conclude that the global gauge group structure, $[SU(2) \times U(1)]/{\mathbb{Z}}_2$, of the $(0|1)$ split model is a direct consequence of the bifundamental higgsing process of an $SU(3) \times SU(2)$ model, both field theoretically and geometrically in F-theory. \subsection{Higher rank gauge algebras} We have also repeated the above analysis for $(01)$ and $(0|1)$ split types with higher rank gauge algebras that appear in the classification of `canonical' Tate-like models in \cite{Kuntzler:2014ila}. This contains all A- and D-type algebras up to rank 5 as well as all exceptional algebras and $\mathfrak{so}(7)$. We find that for any algebra ${\mathfrak{g}}$ along $\{\theta\}$ with $(01)$ split, the tuning $b_2 \rightarrow 0$ never affects ${\mathfrak{g}}$, i.e., the vanishing order of the discriminant along $\theta$ does not enhance further with this tuning. It only leads to an $\mathfrak{su}(2)$ singularity along $\{c_3\}$, as we have seen before. This is consistent with the tuning corresponding to adjoint (un-)higgsing of ${\mathfrak{u}}(1) \leftrightarrow \mathfrak{su}(2)$, as the Shioda map in these cases will always be $S-Z$, and hence the global gauge group is $G \times U(1)$. For $(0|1)$ split type $\mathfrak{su}(n) \oplus {\mathfrak{u}}(1)$ models arising from $I_n$ singularities in codimension one, the tuning $b_2 \rightarrow 0$ unhiggses the model to $\mathfrak{su}(n+1) \oplus \mathfrak{su}(2)$.
A more general treatment of the group theoretic decomposition \eqref{eq:higgsing_spectrum_intermediate} and \eqref{eq:higgsing_spectrum_su3+su2_to_su2+u1} including 2-index anti-symmetric representations\footnote{ For $\mathfrak{su}(6) \oplus \mathfrak{su}(2) \rightarrow \mathfrak{su}(5) \oplus {\mathfrak{u}}(1)$, the inclusion of three-index anti-symmetric representations of $\mathfrak{su}(6)$ produces ${\bf 10}_{-3/5}$ states, in addition to the ${\bf 10}_{2/5}$ states that arise from two-index anti-symmetrics of $\mathfrak{su}(6)$. Those states, which also fit into the charge distribution of $(0|1)$ models (see \cite{Lawrie:2015hia}), arise in non-canonical models \cite{Mayrhofer:2012zy, Kuntzler:2014ila}. } confirms that the charges and global gauge group structure are consistent with bifundamental breaking. For the other singularity types with $(0|1)$ split, we verified that the tuning always enhances the singularity, i.e., increases the rank of the gauge algebra along $\{\theta\}$ while still producing an $\mathfrak{su}(2)$ along $\{c_3\}$. This suggests that the higgsing that produces the non-trivial global gauge group structure for the $(0|1)$ type ${\mathfrak{g}} \oplus {\mathfrak{u}}(1)$ model is not achieved with adjoints. However, determining the exact matter content of the (un-)higgsed model requires a more detailed analysis, which we postpone to future work. We also performed the same analysis for $(0||1)$ split types, i.e., where the section $S$ and zero-section $Z$ intersect next-to-neighbouring nodes of ${\mathfrak{g}}$'s affine Dynkin diagram. Naively, one finds an even higher enhancement along $\{\theta\}$ upon setting $b_2 \rightarrow 0$ (e.g., for $I_n$ singularities we find $\mathfrak{su}(n) \rightarrow \mathfrak{su}(n+2)$).
However, these tuned geometries always exhibit non-minimal codimension two loci (i.e., where the Weierstrass functions $(f,g,\Delta)$ vanish to orders $(4,6,12)$), indicating that there is---at least in F-theory compactifications to 6D---hidden strongly coupled superconformal physics. We hope to return to this issue in the future. \section{A Criterion for the F-theory swampland}\label{sec:swampland} We have seen that the geometry of elliptic fibrations imposes very stringent constraints on the ${\mathfrak{u}}(1)$ charges of matter states in F-theory compactifications. A natural question that arises is whether these constraints go beyond consistency conditions from a (supersymmetric) effective field theory (EFT) perspective. Put differently, do they give rise to criteria for an EFT to be in the `swampland' \cite{Vafa:2005ui, ArkaniHamed:2006dz} of F-theory? Given that in an EFT description, the global gauge group structure is often very obscure (e.g., because the spectrum of line operators is difficult to determine), it would be advantageous to have a criterion based solely on the gauge algebra ${\mathfrak{g}} \oplus \bigoplus_k {\mathfrak{u}}(1)_k$ and the particle spectrum, which are usually directly accessible. However, from the field theory perspective, there is no physically preferred normalisation for the ${\mathfrak{u}}(1)$ charge, whereas the integrality constraints appearing in F-theory models are only manifest in the geometrically preferred normalisation discussed in section \ref{sec:preferred_charge_normalisation}. Thus, to formulate a swampland criterion based on charges, we first need to establish a method to fix the charge normalisation from the field theory perspective. As we would like to argue now, singlet states (i.e., states uncharged under any non-abelian gauge symmetries) should provide such a reference for the normalisation.
\subsection{Singlet charges as measuring sticks} Recall the geometrically preferred charge normalisation of F-theory discussed in section \ref{sec:preferred_charge_normalisation}. This normalisation corresponds to having ${\mathfrak{u}}(1)$ generators $\omega_k = \varphi(\sigma_k)$ that arise from the normalised Shioda map $\varphi$ \eqref{eq:general_shioda_image} for free Mordell--Weil generators $\sigma_i$. For simplicity, let us first look at the single ${\mathfrak{u}}(1)$ case. In the preferred normalisation, singlets of ${\mathfrak{g}}$ have integral charges, because their associated fibral curves satisfy $\Gamma \cdot E_i =0$, so $q^{\bf 1} = (S - Z) \cdot \Gamma \in {\mathbb{Z}}$. In fact, we observe that in all ${\mathfrak{u}}(1)$ models with matter constructed so far in the literature, the charges of singlets computed with $\varphi(\sigma_k)$ for a free generator $\sigma_k$ of the Mordell--Weil group are mutually relatively prime (there is at least one pair of coprime numbers). This was expected to hold in \cite{Morrison:2012ei}, and viewed as a geometric incarnation of charge minimality \cite{Polchinski:2003bq, Banks:2010zn, Hellerman:2010fv, Seiberg:2011dr}, though a precise proof of this statement is not available to date. If this statement holds in general, then singlets uniquely determine the preferred normalisation, since no non-trivial rescaling of the ${\mathfrak{u}}(1)$ generator can keep the charges both integral and mutually relatively prime. Observe that for a single ${\mathfrak{u}}(1)$, integer linear combinations of the singlet charges span ${\mathbb{Z}}$ if and only if the singlet charges are integer and mutually relatively prime. This follows straightforwardly from elementary number theory, which says that $x,y$ are coprime integers if and only if there are integers $a,b$ such that $a\,x + b\,y = 1$. 
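As an elementary illustration of this last statement, coprimality of a set of singlet charges can be tested directly. The following Python sketch (the function name and example charges are ours, purely for illustration) checks whether integer linear combinations of the given charges generate all of ${\mathbb{Z}}$:

```python
from math import gcd
from functools import reduce

def charges_span_Z(charges):
    """Integer linear combinations of `charges` fill all of Z exactly
    when their collective gcd is 1 (Bezout's identity)."""
    return reduce(gcd, (abs(q) for q in charges)) == 1

# Charges such as (1, 2) are mutually relatively prime:
# (-1)*1 + 1*2 = 1, so they generate Z.
print(charges_span_Z([1, 2]))   # True
# A rescaled set like (2, 4) only generates the sublattice 2Z.
print(charges_span_Z([2, 4]))   # False
```

This makes explicit why no non-trivial rescaling can preserve both integrality and coprimality: rescaling multiplies the gcd.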
The obvious generalisation to $m$ ${\mathfrak{u}}(1)$s is to declare a basis of ${\mathfrak{u}}(1)$ generators $\omega_k$ to be in the preferred normalisation (i.e., to arise as Shioda maps $\omega_k = \varphi(\sigma_k)$ of free Mordell--Weil generators $\sigma_k$) if and only if the corresponding singlet charges are all integer and span the full integer lattice ${\mathbb{Z}}^m$. Geometrically, this requirement is equivalent to saying that the (Shioda-mapped) Mordell--Weil lattice is dual, with respect to the intersection pairing, to the lattice spanned by the fibral curves corresponding to singlets. Similar to the case of a single ${\mathfrak{u}}(1)$, this condition is not proven in general. For the purpose of this discussion, we will assume its validity, noting that it is true in all F-theory models with multiple ${\mathfrak{u}}(1)$s and charged matter constructed so far in the literature \cite{Borchmann:2013jwa, Borchmann:2013hta, Cvetic:2013nia, Cvetic:2013jta, Cvetic:2013uta, Cvetic:2013qsa, Cvetic:2015ioa, Krippendorf:2015kta}. One may worry about cases where there are no singlets present at all, e.g., in F-theory models with non-higgsable ${\mathfrak{u}}(1)$s \cite{Martini:2014iza, Morrison:2016lix, Wang:2016urs}. However, since we are interested in the interplay of non-abelian matter with the ${\mathfrak{u}}(1)$s, these particular models are not of concern, because the tuning required for an additional non-abelian algebra is expected to enhance the non-higgsable ${\mathfrak{u}}(1)$s into non-abelian symmetries.\footnote{We thank Wati Taylor and Yi-Nan Wang for pointing this out.} Whether this phenomenon persists in all non-higgsable F-theory models, or whether there are (higgsable) ${\mathfrak{u}}(1)$s with non-mutually relatively prime singlet charges, requires a more in-depth geometric analysis beyond the scope of this work. 
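The condition that a set of integer charge vectors spans the full lattice ${\mathbb{Z}}^m$ can also be tested concretely: by the theory of the Smith normal form, integer vectors span ${\mathbb{Z}}^m$ precisely when the gcd of all maximal ($m \times m$) minors of the charge matrix equals 1. A Python sketch, with illustrative charge vectors of our own choosing:

```python
from itertools import combinations
from math import gcd
from functools import reduce
from sympy import Matrix

def spans_full_lattice(charge_vectors):
    """True iff the integer span of the given vectors is all of Z^m,
    i.e., iff the gcd of all maximal minors of the charge matrix is 1."""
    M = Matrix(charge_vectors)            # rows = charge vectors
    m = M.shape[1]
    minors = [int(abs(M[list(rows), :].det()))
              for rows in combinations(range(M.shape[0]), m)]
    minors = [d for d in minors if d != 0]
    if not minors:
        return False                      # vectors do not even span Q^m
    return reduce(gcd, minors) == 1

# (1,0) and (0,1) generate Z^2 ...
print(spans_full_lattice([[1, 0], [0, 1]]))     # True
# ... while (1,1) and (1,-1) generate only an index-2 sublattice.
print(spans_full_lattice([[1, 1], [1, -1]]))    # False
```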
Before we turn to the actual swampland conjecture, we would like to discuss how the normalisation condition carries over in cases where ${\mathfrak{u}}(1)$s are broken by either a higgsing or a flux-induced St\"uckelberg mechanism. In the case of higgsing, i.e., giving a vacuum expectation value to a collection of massless states (in a D-flat manner), the breaking mechanism can be described geometrically by a complex structure deformation of the elliptic fibration. This deformation yields another F-theory compactification, where the massless ${\mathfrak{u}}(1)$s are again realised by a non-trivial Mordell--Weil group.\footnote{ If there are remnant ${\mathbb{Z}}_k$ symmetries, then the complex structure deformation could yield a genus-one fibration without rational sections \cite{Braun:2014oya, Anderson:2014yva, Mayrhofer:2014haa, Mayrhofer:2014laa, Morrison:2014era}. However, it is generally believed that there is always an elliptic fibration---the Jacobian fibration---with well-defined rational sections, giving the same F-theory. } Thus, the above formulation of the normalisation condition carries over directly. On the other hand, a $G_4$-flux-induced breaking mechanism (which in F-theory is only possible in 4D and 2D) is not geometrised. Hence, we have to understand field theoretically how the singlet charges behave in such a situation. In the following, we will restrict our attention to four-dimensional compactifications, noting that the 2D case proceeds analogously \cite{Schafer-Nameki:2016cfr}. In 4D, the field theoretic description of flux-induced breaking of ${\mathfrak{u}}(1)$s has been worked out in detail in type II (see \cite{MarchesanoBuznego:2003axu, Blumenhagen:2006ci, Plauschinn:2008yd} for a review) and subsequently in F-theory \cite{Cvetic:2012xn}. 
In the latter setting, a non-zero flux induces a mass matrix \begin{align}\label{eq:mass_matrix_flux_induced} M_{kl} = \sum_\alpha \xi_{k,\alpha} \, \xi_{l,\alpha} \quad \text{with} \quad \xi_{k,\alpha} = \int_{Y} G_4 \wedge \omega_k \wedge \pi^*J_\alpha \, , \end{align} for the ${\mathfrak{u}}(1)$ gauge fields dual to the generators $\omega_k = \varphi(\sigma_k)$, $k=1,..., m$. The flux-induced Fayet--Iliopoulos terms $\xi_{k,\alpha}$ are labelled by a basis $J_\alpha$ of $H^{1,1}(B)$, i.e., divisors on the base of the elliptic fourfold $\pi: Y \rightarrow B$. Massless ${\mathfrak{u}}(1)$s are now precisely those linear combinations $\tilde{\omega}_s = \sum_k \lambda^s_{k} \, \omega_k$, which lie in the kernel of $M_{kl}$. Due to \eqref{eq:mass_matrix_flux_induced}, this is equivalent to requiring \begin{align}\label{eq:masslessness_condition_flux} \forall \alpha \, : \quad \sum_k \xi_{k,\alpha} \, \lambda^s_k = 0 \, , \end{align} which, depending on the $G_4$-flux, may or may not have non-trivial solutions. Crucially, one can show that the FI-terms $\xi_{k,\alpha}$ can be taken to be integers due to the quantisation condition of $G_4$ (see appendix \ref{sec:app}). Therefore, a non-trivial solution space $V$ of \eqref{eq:masslessness_condition_flux} can be generated by integer vectors $\lambda_k^s$, $s=1,..., \tilde{m} = \dim V$. In other words, the massless ${\mathfrak{u}}(1)$s are generated by integer linear combinations $\tilde{\omega}_s = \lambda^s_k \, \omega_k = \varphi(\lambda^s_k \, \sigma_k)$ of the Mordell--Weil generators. Hence, there must exist $\tilde{m}$ free Mordell--Weil generators $\tilde{\sigma}_s$ that span $V$. Since the full singlet charge lattice was by assumption dual to the Mordell--Weil lattice, it must also contain the sublattice dual to $\Lambda = \text{span}_{\mathbb{Z}} (\tilde{\sigma}_s)$. 
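The passage from a rational to an integer basis of the solution space of \eqref{eq:masslessness_condition_flux} can be made explicit. The following Python sketch (the FI matrix shown is a hypothetical example of our own, not taken from a specific geometry) clears denominators of a rational nullspace basis to produce primitive integer solution vectors $\lambda^s_k$:

```python
from sympy import Matrix, lcm, gcd

def integer_kernel_basis(xi):
    """Integer basis of the solution space of
    sum_k xi[alpha][k] * lam[k] = 0 (the massless-u(1) condition),
    obtained by clearing denominators of a rational nullspace basis."""
    basis = []
    for v in Matrix(xi).nullspace():
        denom = lcm([entry.q for entry in v])    # entry.q = denominator
        w = [int(entry * denom) for entry in v]
        g = gcd(w)                               # make the vector primitive
        basis.append([c // g for c in w])
    return basis

# A hypothetical FI matrix xi_{alpha,k} for m = 3 u(1)s and two base
# divisors; a single massless combination survives.
xi = [[1, 0, -1],
      [0, 2, -2]]
print(integer_kernel_basis(xi))   # [[1, 1, 1]]
```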
In other words, there is a basis $\tilde{\omega}_s = \varphi(\tilde{\sigma}_s)$ for the massless ${\mathfrak{u}}(1)$ generators, in which the singlet charges are all integer and their integer linear span fills out every lattice site of ${\mathbb{Z}}^{\tilde{m}}$. So we have established that in an F-theory compactification with $m$ massless ${\mathfrak{u}}(1)$s, there are always $m$ free Mordell--Weil generators $\sigma_k$, such that the associated ${\mathfrak{u}}(1)$s are dual to the Shioda-divisors $\omega_k = \varphi(\sigma_k)$ given by \eqref{eq:general_shioda_image}. In this geometrically preferred normalisation, the singlet charges are all integer and span the full ${\mathbb{Z}}^m$ lattice. From the field theoretic point of view, which only has direct access to the singlet charges, it is crucial to realise that this normalisation is unique up to a unimodular transformation of the ${\mathfrak{u}}(1)$ generators, i.e., a change of basis for the Mordell--Weil (sub-)group. To see that, let us denote by $q_k$, $k=1,...,m$ the charges of singlet states $Q_k$, which form a basis dual to $\omega_k$, i.e., $(q_k)_i = \delta_{ki}$. Now suppose that we picked a different basis $\omega'_l$ for the ${\mathfrak{u}}(1)$ generators, in which the singlet charges again span ${\mathbb{Z}}^m$, with basis $q'_k$ dual to the $\omega'_k$. While the $q_k$ and $q'_k$ correspond to different physical states $Q_k$ and $Q'_k$, their charge vectors both span ${\mathbb{Z}}^m$, so there must exist a change of basis, i.e., a unimodular matrix $U$, such that $Q'_k = U_{kl}\,Q_l$. For the dual generators $\omega_k$ and $\omega'_k$, the corresponding transformation $\omega'_k = U^{-1}_{kl} \, \omega_l$ is then again unimodular. Therefore, the sections $\sigma'_k = U^{-1}_{kl} \, \sigma_l$ generate the same lattice as $\sigma_k$, i.e., they are also Mordell--Weil generators. 
Thus, the ${\mathfrak{u}}(1)$ generators $\omega'_k = \varphi(\sigma'_k)$ are also in the geometrically preferred normalisation. \subsection{The swampland criterion} Having established that the singlet charges provide a measuring stick for determining the preferred normalisation, we are now in a position to formulate the criterion which needs to be satisfied by EFTs arising from an F-theory compactification. Given a theory with an unbroken ${\mathfrak{u}}(1)^{\oplus m} \oplus {\mathfrak{g}}$ gauge symmetry, normalise the ${\mathfrak{u}}(1)$s such that the singlet charges are integer and span ${\mathbb{Z}}^m$. As argued above, we assume that this is always possible in F-theory. In this case, the corresponding ${\mathfrak{u}}(1)$ generators $\omega_k$ are given by the Shioda-map \eqref{eq:general_shioda_image} of some free Mordell--Weil generators $\sigma_k$. Then, due to \eqref{eq:L_of_rep} and \eqref{eq:integer_condition_charges_rep}, the difference $q_k^{(1)} - q_k^{(2)}$ of the ${\mathfrak{u}}(1)_k$ charges for any two representations ${{\cal R}}^{(i)} = (q_k^{(i)} \, , {{\cal R}}^{(i)}_{\mathfrak{g}})$ must be integer if ${{\cal R}}^{(1)}_{\mathfrak{g}} = {{\cal R}}^{(2)}_{\mathfrak{g}}$. In other words, the condition states that singlets under the non-abelian gauge algebra provide a reference for the spacing of ${\mathfrak{u}}(1)$ charges, which has to be respected by all non-abelian matter in a given ${\mathfrak{g}}$-representation, even if this matter may have fractional charges. Any EFT that does not satisfy this criterion must lie in the F-theory `swampland', i.e., cannot be an F-theory compactification. Note that this condition goes beyond anomaly cancellation. An example in 6D is given by a tensorless $\mathfrak{su}(2) \oplus {\mathfrak{u}}(1)$ theory with 10 uncharged adjoint hypers, 64 fundamental hypers with charge 1/2, 8 fundamental hypers with charge 1, 24 singlet hypers with charge 1, and 79 uncharged hypers. 
Clearly, the ${\mathfrak{u}}(1)$ is properly normalised according to our condition above, since there is only one type of charged singlet, with charge 1. However, despite the anomalies \cite{Park:2011wv} being cancelled with Green--Schwarz coefficients $a = -3, b_{{\mathfrak{u}}(1)} = 4, b_{\mathfrak{su}(2)} = 6$, the presence of $\mathfrak{su}(2)$ fundamentals with both charge 1 and charge 1/2 does not meet our `swampland' criterion. In 4D, the constraints are even weaker, since a completely vector-like spectrum is always gauge-anomaly-free, independent of charges or representations. In summary, our necessary condition for an effective field theory with gauge algebra ${\mathfrak{u}}(1)^{\oplus m} \oplus {\mathfrak{g}}$ to be an F-theory compactification requires one to first establish a `preferred' normalisation. This normalisation is fixed by requiring that all singlet charges are integer and that their integer span generates ${\mathbb{Z}}^m$. Then, the difference of charges for matter in the same ${\mathfrak{g}}$-representation must be integer. Any field theory not satisfying this condition must lie in the F-theory swampland. We re-emphasise that this criterion relies on two key assumptions consistent with the current literature: (a) F-theory models with gauge algebra ${\mathfrak{u}}(1)^{\oplus m} \oplus {\mathfrak{g}}$ and charged matter always have singlets, and (b) their charges in the preferred normalisation span the full integer lattice. Were it not for these assumptions, we could always rescale the ${\mathfrak{u}}(1)$s so that all charges are integer, and the above conditions would be trivially satisfied. To sharpen our criterion will eventually require a rigorous proof of both assumptions. Finally, we point out that our arguments are based on intersection properties of the Shioda map divisors with fibral curves, which in the F-theory compactification give rise to massless states in the effective field theory. 
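The claims about the 6D example above can be checked by machine. The following Python sketch verifies the 6D anomaly conditions of \cite{Park:2011wv} for the stated spectrum (using the standard $\mathfrak{su}(2)$ group theory coefficients, with the Green--Schwarz inner products reducing to ordinary products since $T=0$), and then exhibits the violation of our criterion:

```python
from fractions import Fraction as F

# 6D spectrum from the text: tensorless su(2) + u(1) with
# GS coefficients a = -3, b_su2 = 6, b_u1 = 4 (scalars since T = 0).
a, b2, b1, T = -3, 6, 4, 0

# hypers: (multiplicity, su(2) rep dimension, u(1) charge)
hypers = [(10, 3, F(0)),      # uncharged adjoints
          (64, 2, F(1, 2)),   # fundamentals of charge 1/2
          (8,  2, F(1)),      # fundamentals of charge 1
          (24, 1, F(1)),      # charged singlets
          (79, 1, F(0))]      # neutral singlets

# su(2) coefficients: tr_R F^2 = A_R tr F^2, tr_R F^4 = C_R (tr F^2)^2
A = {1: 0, 2: 1, 3: 4}
C = {1: 0, 2: F(1, 2), 3: 8}

H = sum(n * d for n, d, q in hypers)
V = 3 + 1                                   # su(2) + u(1) vector multiplets
assert H - V + 29 * T == 273                # gravitational anomaly
assert a * a == 9 - T
assert -a * b2 == F(sum(n * A[d] for n, d, q in hypers) - A[3], 6)
assert b2 * b2 == F(sum(n * C[d] for n, d, q in hypers) - C[3], 3)
assert -a * b1 == sum(n * d * q**2 for n, d, q in hypers) / 6
assert b1 * b1 == sum(n * d * q**4 for n, d, q in hypers) / 3
assert b2 * b1 == sum(n * A[d] * q**2 for n, d, q in hypers)
print("all 6D anomaly conditions hold")

# ... yet the swampland criterion fails: the two fundamental charges
# differ by 1/2, not by an integer.
assert (F(1) - F(1, 2)) % 1 != 0
print("criterion violated: charge difference", F(1) - F(1, 2))
```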
On the other hand, there are also massive states in string compactifications coming from higher Kaluza--Klein states or KK reductions along non-harmonic forms. In light of the recent development of the weak gravity conjecture (see \cite{ArkaniHamed:2006dz, Cheung:2014vva, Montero:2015ofa, Heidenreich:2015nta, Heidenreich:2016aqi, Montero:2016tif, Palti:2017elp} for an incomplete list) which puts constraints on ${\mathfrak{u}}(1)$ charges of states relative to their masses, an interesting question is whether the global gauge group structure is also respected by massive states in F-theory. If so, it would certainly be interesting to explore if and how our F-theory swampland arguments fit into more general quantum gravity concepts. \section{Summary and outlook}\label{sec:summary} In this work, we have shown that F-theory compactifications with an abelian gauge factor generically come equipped with a non-trivial gauge group structure $[G \times U(1)]/{\cal Z}$. The finite subgroup ${\cal Z} = {\mathbb{Z}}_\kappa \subset {\cal Z}(G) \times U(1)$ of the centre is generated by an element $C$, which we have constructed explicitly from the Shioda map of the Mordell--Weil generator. Geometrically, different centres ${\cal Z}$ arise from different fibre split types, i.e., relative configurations of the zero and the generating sections on the codimension one singular fibres determining $G$. At the level of representations, the construction---generalising that for torsional sections \cite{Mayrhofer:2014opa}---imposes specific constraints on the allowed ${\mathfrak{u}}(1)$ charges $q_{\cal R}$ of each $G$-representation ${\cal R}$, such that ${\cal Z}$ acts trivially on $(q_{\cal R}, {\cal R})$. 
These constraints can be equivalently viewed as a refined charge quantisation condition: There is a normalisation \eqref{eq:general_shioda_image} of the ${\mathfrak{u}}(1)$ such that all charges of matter in a given $G$-representation $\cal R$ span a (one-dimensional) lattice with integer spacing. A non-trivial gauge group structure is then reflected in a relative shift between charge lattices of different $G$-representations by multiples of $1/\kappa$. We have exemplified our findings in several concrete models that have been constructed throughout the literature. Using these examples, we have also demonstrated that the argument straightforwardly generalises to multiple ${\mathfrak{u}}(1)$ factors, i.e., higher rank Mordell--Weil groups, and also to cases with both free and torsional sections. Each generator (free or torsional) leads to an independent central element (possibly trivial), such that in general, ${\cal Z}$ is a product of ${\mathbb{Z}}_{\kappa_k}$ factors. In particular, when applied to the `F-theory Standard Models' \cite{Lin:2014qga, Klevers:2014bqa, Cvetic:2015txa, Lin:2016vus}, we found that these models realise the physical Standard Model gauge group $[SU(3) \times SU(2) \times U(1)_Y]/{\mathbb{Z}}_6$. Correspondingly, the geometric spectrum completely agrees with the physical representations of the $\mathfrak{su}(3) \oplus \mathfrak{su}(2) \oplus {\mathfrak{u}}(1)_Y$ algebra. We have also explored the connections of ${\mathfrak{u}}(1)$ charge restrictions to the process of unhiggsing into a larger non-abelian group. Relying on a simple class of geometries with $\mathfrak{su}(2) \oplus {\mathfrak{u}}(1)$ gauge algebra, we have shown explicitly that the two different global gauge group structures unhiggs, both geometrically and field theoretically, into different non-abelian gauge groups. 
Concretely, geometries with gauge group $SU(2) \times U(1)$ arise as adjoint higgsing of $SU(2) \times SU(2)$, whereas $[SU(2) \times U(1)]/{\mathbb{Z}}_2$ is a result of bifundamental higgsing of $SU(3) \times SU(2)$. Note that the non-abelian gauge groups into which we unhiggs do not have any non-trivial structure, i.e., there are no torsional sections. In general, models in the same class with gauge group $[G \times U(1)]/{\cal Z}$ unhiggs under the same complex structure deformation into $G' \times SU(2)$ with $G \subseteq G'$; equality holds only if ${\cal Z} = \{1\}$. However, for $[G \times U(1)]/{\cal Z}$ models, where the zero section and the Mordell--Weil generator do not intersect the same or neighbouring fibre components in codimension one, the unhiggsing procedure introduces codimension two non-minimal loci. In compactifications on a threefold, one would usually associate such a non-minimal locus with the existence of tensionless strings and interpret it as a superconformal sector of the 6D field theory. Clearly, it would be exciting to investigate how superconformal physics enters the global gauge group structure, and also gain insight into 4D compactifications, where non-minimal loci are less understood. It is worth pointing out that the centre also plays a crucial role in the higgsing of ${\mathfrak{u}}(1)$s to discrete symmetries \cite{Garcia-Etxebarria:2014qua, Klevers:2014bqa}. With the explicit description of the centre laid out in these notes, we look forward to applying our new insights to phenomenologically more appealing models with discrete symmetries \cite{SM_Z2}. We have also studied how the non-trivial global gauge group structures give rise to an F-theory `swampland' criterion. Formulated in terms of ${\mathfrak{u}}(1)$ charges, a field theory with gauge algebra ${\mathfrak{g}} \oplus {\mathfrak{u}}(1)$ that arises from F-theory must satisfy the following condition. 
After normalising the ${\mathfrak{u}}(1)$ such that all singlet charges are integer and span the full integer lattice, the charges of matter in the same non-abelian ${\mathfrak{g}}$-representation, which individually can be fractional, must differ from each other by integers. Our analysis also shows that this criterion generalises to the case of multiple ${\mathfrak{u}}(1)$s, which remain massless in the low energy effective theory after a higgsing and/or turning on $G_4$-flux. While this condition is stronger than cancellation of field theory anomalies and hence can be used to rule out swampland theories, its validity is based on the assumption that in F-theory compactifications, the singlets serve as a `measuring stick' for the ${\mathfrak{u}}(1)$ charges. Geometrically, it is based on the observation that any F-theory model with ${\mathfrak{u}}(1)$s and charged matter has in particular charged singlets, whose corresponding fibral curves in the elliptic fibration span a lattice that is dual to the (free) Mordell--Weil lattice under the intersection pairing. Note that this observation extends the conjecture \cite{Morrison:2012ei} made for a single ${\mathfrak{u}}(1)$, that singlet charges computed with respect to the normalised Shioda-map \eqref{eq:general_shioda_image} are integer and mutually relatively prime. To make the swampland criterion precise will therefore require a careful analysis of the intersection structures between sections and codimension two fibres in elliptically fibred Calabi--Yau manifolds. Nevertheless, it would be interesting to study the connection of this swampland criterion to other quantum gravity conditions such as the weak gravity conjecture and extensions thereof. \subsection*{Acknowledgement} We are indebted to Timo Weigand, Craig Lawrie, Thomas Grimm, Wati Taylor, Yi-Nan Wang, and Cumrun Vafa for valuable discussions and comments on the draft. 
Furthermore, we would also like to thank Paul Oehlmann, Eran Palti, Riccardo Penco and Fabian Ruehle for useful communications. This work was supported in part by DOE Award DE-SC0013528 (M.C.~, L.L.), and by the Fay R.~and Eugene L.~Langberg Endowed Chair (M.C.) and the Slovenian Research Agency (M.C.).
\section{Introduction} In recent years, there has been a considerable amount of interest in supersymmetric (SUSY) models involving odd (anticommuting) Grassmann variables and superalgebras. Supersymmetry was introduced in the theory of elementary particles and their interactions and forms an essential component of attempts to obtain a unification of all physical forces \cite{Aitchison}. A number of supersymmetric extensions have been formulated for both classical and quantum mechanical systems. In particular, such supersymmetric generalizations have been constructed for hydrodynamic-type systems (e.g. the Korteweg-de Vries equation \cite{Mathieu,Labelle,LiuManas,BarcelosNeto,HusAyaWin}, the Sawada--Kotera equation \cite{TianLiu}, polytropic gas dynamics \cite{Das,Polytropic} and a Gaussian irrotational compressible fluid \cite{Gaussian}) as well as other nonlinear wave equations (e.g. the Schr\"{o}dinger equation \cite{Unterberger} and the sine/sinh-Gordon equation \cite{Grammaticos,Gomes,SSG,SKG}). Parametrizations of strings and Nambu-Goto membranes have been used to supersymmetrize the Chaplygin gas in $(1+1)$ and $(2+1)$ dimensions respectively \cite{Jackiw,Bergner,Polychronakos}. In addition, it was proposed that non-Abelian fluid mechanics and color magnetohydrodynamics could be used to describe a quark-gluon plasma \cite{Jackiw,Pi}. \noindent In this paper, we formulate a supersymmetric extension of the minimal surface equation and investigate its group-theoretical properties. The concept of minimal surfaces was originally devised by Joseph-Louis Lagrange in the mid-eighteenth century \cite{Bianchi} and still remains an active subject of research and applications. We consider a smooth orientable conformally parametrized surface ${\mathcal F}$ defined by the immersion $\vec{F}:{\mathcal R}\rightarrow \mathbb{R}^3$ of a complex domain ${\mathcal R}\subset\mathbb{C}$ into three-dimensional Euclidean space $\mathbb{R}^3$. 
We consider a variation of ${\mathcal F}$ along a vector field $\vec{v}$ which vanishes on the boundary of ${\mathcal F}$: $\vec{v}\mid_{\partial{\mathcal F}}=0$. The corresponding variation of the area of ${\mathcal F}$ in the small parameter $\varepsilon$ (where $\varepsilon\ll 1$) is, up to higher-order terms in $\varepsilon$, \begin{equation} A(\vec{F}+\varepsilon\vec{v})-A(\vec{F})=-2\varepsilon\int_{\mathcal F} \vec{v}\cdot\vec{H} dA+\ldots, \label{intr01} \end{equation} where $\vec{H}$ is the mean curvature vector on ${\mathcal F}$. Surfaces with vanishing mean curvature ($\vec{H}=0$) are called minimal surfaces. The conformal metric associated with the surface ${\mathcal F}$ is $\Omega=e^{u}dzd{\bar{z}}$, where $z$ and $\bar{z}$ are coordinates on ${\mathcal R}$ and $u$ is a real-valued function of $z$ and $\bar{z}$. If we re-label the variables $z$ and $\bar{z}$ as $x$ and $y$ respectively, then the real-valued function $u$ satisfies the partial differential equation (PDE) \cite{Spivak} \begin{equation} (1+(u_x)^2)u_{yy}-2u_xu_yu_{xy}+(1+(u_y)^2)u_{xx}=0, \label{minimals} \end{equation} which is called the minimal surface (MS) equation. Equation (\ref{minimals}) can be written in the form of the following conservation law: \begin{equation} \partial_x\left({u_x\over \sqrt{1+(u_x)^2+(u_y)^2}}\right)+\partial_y\left({u_y\over \sqrt{1+(u_x)^2+(u_y)^2}}\right)=0, \label{CL} \end{equation} which can be derived from the variational principle for the Lagrangian density \begin{equation} {\cal L}=\sqrt{1+(u_x)^2+(u_y)^2}. \label{Lag} \end{equation} A conformally parametrized surface is minimal if and only if it can be locally expressed as the graph of a solution of equation (\ref{minimals}). 
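As a concrete check of equation (\ref{minimals}), one can verify symbolically that a classical minimal surface satisfies it. The following Python sketch uses Scherk's surface $u=\ln(\cos x/\cos y)$, a well-known solution:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Scherk's classical minimal surface
u = sp.log(sp.cos(x) / sp.cos(y))

ux, uy = sp.diff(u, x), sp.diff(u, y)
uxx, uyy, uxy = sp.diff(u, x, 2), sp.diff(u, y, 2), sp.diff(ux, y)

# left-hand side of the minimal surface equation
ms = (1 + ux**2) * uyy - 2 * ux * uy * uxy + (1 + uy**2) * uxx
print(sp.simplify(ms))   # 0
```

Here $u_{xy}=0$ and $(1+u_x^2)u_{yy}=\sec^2 x \sec^2 y=-(1+u_y^2)u_{xx}$, so the three terms cancel identically.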
The minimal surface and its related equations appear in many areas of physics and mathematics, such as fluid dynamics \cite{Lamb,Roz}, continuum mechanics \cite{Chandrasekhar,Hill}, nonlinear field theory \cite{Boilat,Nelson,David}, plasma physics \cite{Jackiw,Luban,Chen}, nonlinear optics \cite{Sommerfeld,Luneburg} and the theory of fluid membranes \cite{Davidov,Ou,Safran}. Using the Wick rotation $y=it$, one can transform equation (\ref{minimals}) to the scalar Born-Infeld equation \cite{Arik,Whitham} \begin{equation} (1+(u_x)^2)u_{tt}-2u_xu_tu_{xt}-(1-(u_t)^2)u_{xx}=0. \label{Born} \end{equation} \noindent In this paper, we construct a supersymmetric extension of the minimal surface equation (\ref{minimals}) using a superspace and superfield formalism. The space $\{(x,y)\}$ of independent variables is extended to the superspace $\{(x,y,\theta_1,\theta_2)\}$ while the bosonic surface function $u(x,y)$ is replaced by the bosonic superfield $\Phi(x,y,\theta_1,\theta_2)$ defined in terms of bosonic and fermionic-valued fields of $x$ and $y$. Following the construction of our supersymmetric extension, we determine a Lie superalgebra of infinitesimal symmetries of our extended equation. We then classify the one-dimensional subalgebras of this Lie superalgebra into conjugation classes with respect to action by the Lie supergroup generated by the Lie superalgebra, and we use the symmetry reduction method to obtain invariant solutions of the SUSY equation. The advantage of using such group-theoretical methods to analyze our supersymmetrized equation is that these methods are systematic and involve regular algorithms which, in theory, can be used without having to make additional assumptions. Finally, we revisit and expand the group-theoretical analysis of the classical minimal surface equation and compare the obtained results to those found for the supersymmetric extension of the minimal surface equation. 
\section{Supersymmetric version of the minimal surface equation} Grassmann variables are elements of a Grassmann algebra $\Lambda$ involving a finite number of Grassmann generators $\zeta_1,\zeta_2,\ldots,\zeta_k$ which obey the rules \begin{equation} \begin{split} &\zeta_i\zeta_j=-\zeta_j\zeta_i\mbox{ if }i\neq j,\\ &\zeta_i^2=0\mbox{ for all }i. \end{split} \label{grasscond} \end{equation} The Grassmann algebra can be decomposed into even and odd parts: $\Lambda=\Lambda_{even}+\Lambda_{odd}$, where $\Lambda_{even}$ consists of all terms involving the product of an even number of generators $1,\zeta_1\zeta_2,\zeta_1\zeta_3,\ldots,\zeta_1\zeta_2\zeta_3\zeta_4,\ldots$, while $\Lambda_{odd}$ consists of all terms involving the product of an odd number of generators $\zeta_1,\zeta_2,\zeta_3,\ldots,\zeta_1\zeta_2\zeta_3,\zeta_1\zeta_2\zeta_4,\ldots$. A Grassmann variable $\kappa$ is called even (or bosonic) if it is a linear combination of terms involving an even number of generators, while it is called odd (or fermionic) if it is a linear combination of terms involving an odd number of generators.\\\\ We now construct a Grassmann-valued extension of the minimal surface equation (\ref{minimals}). The space of independent variables, $\{(x,y)\}$, is extended to a superspace $\{(x,y,\theta_1,\theta_2)\}$ involving two fermionic Grassmann-valued variables $\theta_1$ and $\theta_2$. Also, the bosonic function $u(x,y)$ is generalized to a bosonic-valued superfield $\Phi$ defined as \begin{equation} \Phi(x,y,\theta_1,\theta_2)=v(x,y)+\theta_1\phi(x,y)+\theta_2\psi(x,y)+\theta_1\theta_2u(x,y), \label{superfield} \end{equation} where $v(x,y)$ is a bosonic-valued field while $\phi(x,y)$ and $\psi(x,y)$ are fermionic-valued fields. 
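The multiplication rules (\ref{grasscond}) can be realised concretely on a computer. The following Python sketch (a minimal implementation of our own, not a standard library) represents Grassmann elements by their coefficients on ordered products of generators and verifies the defining relations:

```python
from itertools import product

class Grassmann:
    """Minimal Grassmann algebra: elements are dicts mapping a sorted
    tuple of generator indices to a real coefficient."""
    def __init__(self, terms):
        self.terms = {k: v for k, v in terms.items() if v != 0}

    def __mul__(self, other):
        out = {}
        for (k1, c1), (k2, c2) in product(self.terms.items(),
                                          other.terms.items()):
            if set(k1) & set(k2):
                continue                 # zeta_i^2 = 0 kills the term
            merged, sign = list(k1 + k2), 1
            # bubble-sort the generators, flipping the sign per swap
            for i in range(len(merged)):
                for j in range(len(merged) - 1 - i):
                    if merged[j] > merged[j + 1]:
                        merged[j], merged[j + 1] = merged[j + 1], merged[j]
                        sign = -sign
            key = tuple(merged)
            out[key] = out.get(key, 0) + sign * c1 * c2
        return Grassmann(out)

    def __eq__(self, other):
        return self.terms == other.terms

z1 = Grassmann({(1,): 1})
z2 = Grassmann({(2,): 1})

assert z1 * z2 == Grassmann({(1, 2): 1})      # zeta1 zeta2
assert z2 * z1 == Grassmann({(1, 2): -1})     # = -zeta1 zeta2
assert z1 * z1 == Grassmann({})               # zeta1^2 = 0
print("anticommutation rules verified")
```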
We construct our extension in such a way that it is invariant under the supersymmetry transformations \begin{equation} x\longrightarrow x-\underline{\eta_1}\theta_1,\hspace{5mm}\theta_1\longrightarrow \theta_1+\underline{\eta_1}, \label{trQ1} \end{equation} and \begin{equation} y\longrightarrow y-\underline{\eta_2}\theta_2,\hspace{5mm}\theta_2\longrightarrow \theta_2+\underline{\eta_2}, \label{trQ2} \end{equation} where $\underline{\eta_1}$ and $\underline{\eta_2}$ are odd-valued parameters. Throughout this paper, we use the convention that underlined constants are fermionic-valued. The transformations (\ref{trQ1}) and (\ref{trQ2}) are generated by the infinitesimal supersymmetry generators \begin{equation} Q_1=\partial_{\theta_1}-\theta_1\partial_{x}\hspace{1cm}\mbox{and}\hspace{1cm}Q_2=\partial_{\theta_2}-\theta_2\partial_{y}, \label{supersymmetry} \end{equation} respectively. These generators satisfy the anticommutation relations \begin{equation} \{Q_1,Q_1\}=-2\partial_x,\hspace{1cm}\{Q_2,Q_2\}=-2\partial_y,\hspace{1cm}\{Q_1,Q_2\}=0. \label{anticommutators} \end{equation} To make the superfield model invariant under the transformations generated by $Q_1$ and $Q_2$, we construct the equation in terms of the following covariant derivatives: \begin{equation} D_1=\partial_{\theta_1}+\theta_1\partial_{x}\hspace{1cm}\mbox{and}\hspace{1cm}D_2=\partial_{\theta_2}+\theta_2\partial_{y}. \label{covariant} \end{equation} These covariant derivative operators possess the following properties \begin{equation} \begin{split} & D_1^2=\partial_x,\hspace{1cm}D_2^2=\partial_y,\hspace{1cm}\{D_1,D_2\}=0,\hspace{1cm}\{D_1,Q_1\}=0,\hspace{1cm}\\ & \{D_1,Q_2\}=0,\hspace{1cm}\{D_2,Q_1\}=0,\hspace{1cm}\{D_2,Q_2\}=0. 
\end{split} \label{covprop} \end{equation} Combining different covariant derivatives $D_1^m$ and $D_2^n$ of the superfield $\Phi$ of various orders, where $m$ and $n$ are positive integers, we obtain the most general form of the supersymmetric extension of equation (\ref{minimals}). Since this expression is very involved, we instead present the following sub-case as our supersymmetric extension of the MS equation, and will refer to it as such. We obtain the equation \begin{equation} \begin{split} &D_2^4\Phi+(D_1^2\Phi)(D_1^3D_2\Phi)(D_1D_2^5\Phi)-2(D_1^2\Phi)(D_1D_2^3\Phi)(D_1^3D_2^3\Phi)+D_1^4\Phi\\ &+(D_2^2\Phi)(D_1D_2^3\Phi)(D_1^5D_2\Phi)=0. \label{eqmotion1} \end{split} \end{equation} In terms of derivatives with respect to $x$, $y$, $\theta_1$ and $\theta_2$, equation (\ref{eqmotion1}) can be written in the form \begin{equation} \begin{split} & \Phi_{yy}+\Phi_{xx}\\ &+\Phi_{x}(-\Phi_{x\theta_1\theta_2}+\theta_1\Phi_{xx\theta_2}-\theta_2\Phi_{xy\theta_1}+\theta_1\theta_2\Phi_{xxy})\times\\ & \hspace{5mm} (-\Phi_{yy\theta_1\theta_2}+\theta_1\Phi_{xyy\theta_2}-\theta_2\Phi_{yyy\theta_1}+\theta_1\theta_2\Phi_{xyyy})\\ & -2\Phi_{x}(-\Phi_{y\theta_1\theta_2}+\theta_1\Phi_{xy\theta_2}-\theta_2\Phi_{yy\theta_1}+\theta_1\theta_2\Phi_{xyy})\times\\ & \hspace{5mm} (-\Phi_{xy\theta_1\theta_2}+\theta_1\Phi_{xxy\theta_2}-\theta_2\Phi_{xyy\theta_1}+\theta_1\theta_2\Phi_{xxyy})\\ &+\Phi_{y}(-\Phi_{y\theta_1\theta_2}+\theta_1\Phi_{xy\theta_2}-\theta_2\Phi_{yy\theta_1}+\theta_1\theta_2\Phi_{xyy})\times\\ & \hspace{5mm} (-\Phi_{xx\theta_1\theta_2}+\theta_1\Phi_{xxx\theta_2}-\theta_2\Phi_{xxy\theta_1}+\theta_1\theta_2\Phi_{xxxy})\\ &=0. 
\end{split} \label{eqmotion2} \end{equation} In what follows, we will refer to equation (\ref{eqmotion2}) as the supersymmetric minimal surface equation (SUSY MS equation).\\\\ The partial derivatives satisfy the generalized Leibniz rule \begin{equation} \partial_{\theta_i}(fg)=(\partial_{\theta_i}f)g+(-1)^{\mbox{deg}(f)}f(\partial_{\theta_i}g), \end{equation} if $\theta_i$ is a fermionic variable and we define \begin{equation} \mbox{deg}(f)=\begin{cases} 0 & \mbox{ if } f \mbox{ is even},\\ 1 & \mbox{ if } f \mbox{ is odd}. \end{cases} \end{equation} The partial derivatives with respect to the odd coordinates satisfy $\partial_{\theta_i}(\theta_j)=\delta_j^i$, where the indices $i$ and $j$ each stand for 1 or 2 and $\delta_j^i$ is the Kronecker delta function. The operators $\partial_{\theta_1}$, $\partial_{\theta_2}$, $Q_1$, $Q_2$, $D_1$ and $D_2$ change the parity of a bosonic function to that of a fermionic function and vice-versa.\\\\ When dealing with higher-order derivatives, the symbol $f_{x_1x_2x_3\ldots x_{k-1}x_k}$ denotes the derivative $\partial_{x_k}\partial_{x_{k-1}}\ldots\partial_{x_3}\partial_{x_2}\partial_{x_1}(f)$ where the order must be preserved for the sake of consistency. Throughout this paper, we use the convention that if $f(g(x))$ is a composite function, then \begin{equation} \dfrac{\partial f}{\partial x}=\dfrac{\partial g}{\partial x}\cdot\dfrac{\partial f}{\partial g}. \label{composite} \end{equation} The interchange of mixed derivatives with proper respect for the ordering of odd variables is assumed throughout. For a review of recent developments in this subject see e.g. Freed \cite{Freed} and Varadarajan \cite{Varadarajan}. 
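With the conventions above, the algebraic relations (\ref{anticommutators}) and the properties (\ref{covprop}) can be verified symbolically by acting on a general superfield. The following Python sketch (our own construction for this check, restricted to bosonic component functions) encodes a super-function $f=f_0+\theta_1 f_1+\theta_2 f_2+\theta_1\theta_2 f_3$ as its component 4-tuple:

```python
import sympy as sp

x, y = sp.symbols('x y')
# generic bosonic components of f = f0 + th1*f1 + th2*f2 + th1*th2*f3
f0, f1, f2, f3 = [sp.Function(n)(x, y) for n in ('f0', 'f1', 'f2', 'f3')]
f = (f0, f1, f2, f3)

def dth1(f):   # partial_theta1 with the generalised Leibniz signs
    return (f[1], 0, f[3], 0)
def dth2(f):   # partial_theta2 must anticommute past theta1
    return (f[2], -f[3], 0, 0)
def th1(f):    # multiplication by theta1
    return (0, f[0], 0, f[2])
def th2(f):    # multiplication by theta2: theta2*theta1 = -theta1*theta2
    return (0, 0, f[0], -f[1])
def dx(f):
    return tuple(sp.diff(c, x) for c in f)
def dy(f):
    return tuple(sp.diff(c, y) for c in f)
def add(a, b):
    return tuple(ca + cb for ca, cb in zip(a, b))
def sub(a, b):
    return tuple(ca - cb for ca, cb in zip(a, b))

Q1 = lambda f: sub(dth1(f), th1(dx(f)))
Q2 = lambda f: sub(dth2(f), th2(dy(f)))
D1 = lambda f: add(dth1(f), th1(dx(f)))

# {Q1, Q1} = -2 d_x,  D1^2 = d_x,  {Q1, Q2} = 0
assert add(Q1(Q1(f)), Q1(Q1(f))) == tuple(-2 * c for c in dx(f))
assert D1(D1(f)) == dx(f)
assert add(Q1(Q2(f)), Q2(Q1(f))) == (0, 0, 0, 0)
print("superalgebra relations verified")
```

The same component calculus reproduces $D_2^2=\partial_y$ and the vanishing of the mixed anticommutators in (\ref{covprop}).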
\section{Lie symmetries of the supersymmetric minimal surface equation} A symmetry supergroup $G$ of a supersymmetric system is a local supergroup of transformations acting on the Cartesian product of supermanifolds ${\mathcal X}\times{\mathcal U}$, where ${\mathcal X}$ is the space of independent variables $\{(x,y,\theta_1,\theta_2)\}$ and ${\mathcal U}$ is the space of dependent superfields, which in this case involves only the superfield $\Phi$. To find symmetries of the SUSY MS equation, we use the theory described in the book by Olver \cite{Olver} to determine superalgebras of infinitesimal symmetries.\\\\ To determine the Lie point superalgebra of infinitesimal symmetries, we look for a bosonic vector field of the form \begin{equation} \begin{split} {\mathbf v}=&\xi_1(x,y,\theta_1,\theta_2)\partial_x+\xi_2(x,y,\theta_1,\theta_2)\partial_y+\rho_1(x,y,\theta_1,\theta_2)\partial_{\theta_1}\\ &+\rho_2(x,y,\theta_1,\theta_2)\partial_{\theta_2}+\Lambda(x,y,\theta_1,\theta_2)\partial_{\Phi}, \label{vectorfield} \end{split} \end{equation} where $\xi_1$, $\xi_2$ and $\Lambda$ are bosonic-valued functions, while $\rho_1$ and $\rho_2$ are fermionic-valued functions. The prolongation formulas that allow us to find the symmetries are very involved and will not be presented here.
Moreover, it should be noted that the symmetry criterion has not yet been conclusively demonstrated for the case of equations involving Grassmann variables.\\\\ The following infinitesimal transformations were found to be symmetry generators of the SUSY MS equation \begin{equation} \begin{split} & P_1=\partial_x,\hspace{1cm}P_2=\partial_y,\hspace{1cm}P_3=\partial_{\theta_1},\hspace{1cm}P_4=\partial_{\theta_2},\hspace{1cm}P_5=\partial_{\Phi},\hspace{1cm}\\ & D=2x\partial_x+2y\partial_y+\theta_1\partial_{\theta_1}+\theta_2\partial_{\theta_2}+4\Phi\partial_{\Phi}, \\ & Q_1=\partial_{\theta_1}-\theta_1\partial_{x},\hspace{1cm} Q_2=\partial_{\theta_2}-\theta_2\partial_{y}. \end{split} \label{symmetries} \end{equation} These eight generators span a Lie superalgebra ${\mathcal G}$ of infinitesimal symmetries of the SUSY MS equation. Here, $P_1$, $P_2$, $P_3$ and $P_4$ generate translations in the $x$, $y$, $\theta_1$ and $\theta_2$ directions respectively, while $P_5$ generates a shift in the superfield $\Phi$. The vector field $D$ corresponds to a dilation involving both bosonic and fermionic variables as well as the superfield $\Phi$. Finally, the fermionic vector fields $Q_1$ and $Q_2$ are simply the supersymmetry transformations identified in (\ref{supersymmetry}). The supercommutation relations involving the generators of the superalgebra ${\mathcal G}$ are listed in Table 1. \begin{table}[htbp] \caption{Supercommutation table for the Lie superalgebra ${\mathcal G}$ generated by the vector fields (\ref{symmetries}). 
Here, for each pair of generators $X$ and $Y$, we calculate either the commutator $[X,Y]=XY-YX$ if either $X$ or $Y$ is bosonic, or the anticommutator $\{X,Y\}=XY+YX$ if both $X$ and $Y$ are fermionic.} \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|}\hline & $\mathbf{D}$ & $\mathbf{P_1}$ & $\mathbf{P_3}$ & $\mathbf{Q_1}$ & $\mathbf{P_2}$ & $\mathbf{P_4}$ & $\mathbf{Q_2}$ & $\mathbf{P_5}$ \\\hline\hline $\mathbf{D}$ & $0$ & $-2P_1$ & $-P_3$ & $-Q_1$ & $-2P_2$ & $-P_4$ & $-Q_2$ & $-4P_5$ \\\hline $\mathbf{P_1}$ & $2P_1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\\hline $\mathbf{P_3}$ & $P_3$ & $0$ & $0$ & $-P_1$ & $0$ & $0$ & $0$ & $0$ \\\hline $\mathbf{Q_1}$ & $Q_1$ & $0$ & $-P_1$ & $-2P_1$ & $0$ & $0$ & $0$ & $0$ \\\hline $\mathbf{P_2}$ & $2P_2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\\hline $\mathbf{P_4}$ & $P_4$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-P_2$ & $0$ \\\hline $\mathbf{Q_2}$ & $Q_2$ & $0$ & $0$ & $0$ & $0$ & $-P_2$ & $-2P_2$ & $0$ \\\hline $\mathbf{P_5}$ & $4P_5$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\\hline \end{tabular} \end{center} \end{table} \noindent The Lie superalgebra ${\mathcal G}$ can be decomposed into the following combination of semidirect and direct sums: \begin{equation} {\mathcal G}=\{D\}\sdir\{\{P_1,P_3,Q_1\}\oplus\{P_2,P_4,Q_2\}\oplus\{P_5\}\}. \label{decomposed} \end{equation} It should be noted that the symmetries found for the SUSY MS equation (\ref{eqmotion2}) are qualitatively different from those found previously for the SUSY version of the equations of conformally parametrized surfaces with non-zero mean curvature \cite{Bertrand}. \section{Classification of Subalgebras for the Lie Superalgebra} We proceed to classify the one-dimensional Lie subalgebras of the superalgebra ${\mathcal G}$ generated by (\ref{symmetries}) into conjugacy classes under the action of the Lie supergroup $G=\mbox{exp}({\mathcal G})$.
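The entries of Table 1 can be verified by direct computation. As a sanity check, the following sketch (illustrative Python, not part of the original text) realizes the generators as linear operators on monomials $x^ay^b\theta_1^{e_1}\theta_2^{e_2}$ and confirms, for instance, that $\{Q_1,Q_1\}=-2P_1$, $\{P_3,Q_1\}=-P_1$ and $[D,Q_1]=-Q_1$; the $4\Phi\partial_\Phi$ part of $D$ acts trivially on functions of the coordinates alone and is omitted.

```python
# Monomials x^a y^b theta1^e1 theta2^e2 (e1, e2 in {0,1}) are stored as
# dicts mapping (a, b, e1, e2) -> coefficient; generators act linearly.

def add(f, g, s=1):
    """Return f + s*g, dropping zero coefficients."""
    out = dict(f)
    for k, c in g.items():
        out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c}

def P1(f):  # d/dx
    return {(a - 1, b, e1, e2): a * c for (a, b, e1, e2), c in f.items() if a}

def P2(f):  # d/dy
    return {(a, b - 1, e1, e2): b * c for (a, b, e1, e2), c in f.items() if b}

def P3(f):  # left derivative d/dtheta1
    return {(a, b, 0, e2): c for (a, b, e1, e2), c in f.items() if e1}

def P4(f):  # left derivative d/dtheta2 (passes over theta1: factor (-1)^e1)
    return {(a, b, e1, 0): (-1) ** e1 * c for (a, b, e1, e2), c in f.items() if e2}

def t1(f):  # left multiplication by theta1
    return {(a, b, 1, e2): c for (a, b, e1, e2), c in f.items() if not e1}

def t2(f):  # left multiplication by theta2 (passes over theta1)
    return {(a, b, e1, 1): (-1) ** e1 * c for (a, b, e1, e2), c in f.items() if not e2}

def Q1(f):  # Q1 = d/dtheta1 - theta1 d/dx
    return add(P3(f), t1(P1(f)), -1)

def Q2(f):  # Q2 = d/dtheta2 - theta2 d/dy
    return add(P4(f), t2(P2(f)), -1)

def D(f):   # 2x d/dx + 2y d/dy + theta1 d/dtheta1 + theta2 d/dtheta2
    return {k: (2 * k[0] + 2 * k[1] + k[2] + k[3]) * c
            for k, c in f.items() if 2 * k[0] + 2 * k[1] + k[2] + k[3]}

def com(A, B, f):   # commutator [A, B] applied to f
    return add(A(B(f)), B(A(f)), -1)

def acom(A, B, f):  # anticommutator {A, B} applied to f
    return add(A(B(f)), B(A(f)), 1)

basis = [{(a, b, e1, e2): 1} for a in range(3) for b in range(3)
         for e1 in (0, 1) for e2 in (0, 1)]

def equal(op1, op2):
    return all(op1(m) == op2(m) for m in basis)

# Spot checks against Table 1.
assert equal(lambda f: acom(Q1, Q1, f), lambda f: add({}, P1(f), -2))
assert equal(lambda f: acom(P3, Q1, f), lambda f: add({}, P1(f), -1))
assert equal(lambda f: com(D, Q1, f), lambda f: add({}, Q1(f), -1))
```

Running the same comparison over all pairs of generators should reproduce Table 1 entry by entry.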
We construct our list of representative subalgebras in such a way that each one-dimensional subalgebra of ${\mathcal G}$ is conjugate to one and only one element of the list. Such a classification is useful because subalgebras that are conjugate to each other lead to invariant solutions that are equivalent in the sense that one can be transformed to the other by a suitable symmetry. Therefore, it is not necessary to perform symmetry reduction on two different subalgebras that are conjugate to each other.\\\\ In order to classify the Lie superalgebra ${\mathcal G}$ given in (\ref{decomposed}) we make use of the procedures given in \cite{Winternitz}. In what follows, $\alpha$, $r$, $k$ and $\ell$ are bosonic constants, $\underline{\mu}$, $\underline{\nu}$, $\underline{\eta}$, $\underline{\lambda}$, $\underline{\rho}$ and $\underline{\sigma}$ are fermionic constants, and $\varepsilon=\pm 1$. We begin by considering the subalgebra ${\mathcal S}_1=\{P_1,P_3,Q_1\}$. Consider a general element of ${\mathcal S}_1$, which can be written as the linear combination $X=\alpha P_1+\underline{\mu}P_3+\underline{\nu}Q_1$, and examine how this element changes under the action of the one-parameter group generated by $Y=rP_1+\underline{\eta}P_3+\underline{\lambda}Q_1$.
This action is performed through the Baker-Campbell-Hausdorff formula \begin{equation} X\longrightarrow \mbox{Ad}_{\mbox{exp}(Y)}X=X+[Y,X]+\frac{1}{2!}[Y,[Y,X]]+\frac{1}{3!}[Y,[Y,[Y,X]]]+\ldots \label{BCHformula} \end{equation} We obtain \begin{equation} \begin{split} [Y,X]&=[rP_1+\underline{\eta}P_3+\underline{\lambda}Q_1,\alpha P_1+\underline{\mu}P_3+\underline{\nu}Q_1]\\ &=[\underline{\eta}P_3,\underline{\nu}Q_1]+[\underline{\lambda}Q_1,\underline{\mu}P_3]+[\underline{\lambda}Q_1,\underline{\nu}Q_1]\\ &=(\underline{\eta}\underline{\nu}+\underline{\lambda}\underline{\mu}+2\underline{\lambda}\underline{\nu})P_1, \end{split} \end{equation} \begin{equation} [Y,[Y,X]]=[rP_1+\underline{\eta}P_3+\underline{\lambda}Q_1,(\underline{\eta}\underline{\nu}+\underline{\lambda}\underline{\mu}+2\underline{\lambda}\underline{\nu})P_1]=0. \end{equation} So we have \begin{equation} \{\alpha P_1+\underline{\mu}P_3+\underline{\nu}Q_1\}\longrightarrow \{(\alpha+\underline{\eta}\underline{\nu}+\underline{\lambda}\underline{\mu}+2\underline{\lambda}\underline{\nu}) P_1+\underline{\mu}P_3+\underline{\nu}Q_1\}. \end{equation} Therefore, aside from a change in the $P_1$ coefficient, each element of the form $\{\alpha P_1+\underline{\mu}P_3+\underline{\nu}Q_1\}$ is conjugate only to itself. This gives us the subalgebras \begin{equation} \begin{split} &{\mathcal G}_1=\{P_1\}, \hspace{5mm} {\mathcal G}_2=\{\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_3=\{\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_4=\{P_1+\underline{\mu}P_3\}, \\ & {\mathcal G}_5=\{P_1+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_6=\{\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_7=\{P_1+\underline{\mu}P_3+\underline{\nu}Q_1\}.
\end{split} \label{list1} \end{equation} An analogous classification is performed for the subalgebra ${\mathcal S}_2=\{P_2,P_4,Q_2\}$, from which we obtain the subalgebras \begin{equation} \begin{split} &{\mathcal G}_8=\{P_2\}, \hspace{5mm} {\mathcal G}_9=\{\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{10}=\{\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{11}=\{P_2+\underline{\mu}P_4\}, \\ & {\mathcal G}_{12}=\{P_2+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{13}=\{\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{14}=\{P_2+\underline{\mu}P_4+\underline{\nu}Q_2\}. \end{split} \label{list2} \end{equation} \\ The next step is to classify the direct sum of the algebras ${\mathcal S}_1$ and ${\mathcal S}_2$, that is, to classify \begin{equation} {\mathcal S}={\mathcal S}_1\oplus {\mathcal S}_2=\{P_1,P_3,Q_1\}\oplus \{P_2,P_4,Q_2\}, \end{equation} using the Goursat method of subalgebra classification \cite{Goursat1,Goursat2}. Each non-twisted subalgebra of ${\mathcal S}$ is constructed by selecting one subalgebra of ${\mathcal S}_1$ and finding its direct sum with a subalgebra of ${\mathcal S}_2$. The non-twisted one-dimensional subalgebras of ${\mathcal S}$ are the combined subalgebras ${\mathcal G}_{1}$ to ${\mathcal G}_{14}$ listed in (\ref{list1}) and (\ref{list2}). The twisted subalgebras of ${\mathcal S}$ are formed as follows. If $A\in {\mathcal S}_1$ and $B\in {\mathcal S}_2$, then $A$ and $B$ can be twisted together if there exists a homomorphism from $A$ to $B$, say $\tau(A)=B$. The twisted subalgebra is then obtained by taking $\{A+\tau(A)\}$.
This results in the additional subalgebras \begin{displaymath} \begin{split} &{\mathcal G}_{15}=\{P_1+kP_2\}, \hspace{5mm} {\mathcal G}_{16}=\{P_1+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{17}=\{P_1+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{18}=\{P_1+kP_2+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{19}=\{P_1+kP_2+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{20}=\{P_1+\underline{\mu}P_4+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{21}=\{P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{22}=\{P_2+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{23}=\{\underline{\mu}P_3+\underline{\nu}P_4\}, \\ & {\mathcal G}_{24}=\{\underline{\mu}P_3+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{25}=\{P_2+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{26}=\{P_2+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{27}=\{\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{28}=\{P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{29}=\{P_2+\underline{\mu}Q_1\}, \hspace{1cm} \\ & {\mathcal G}_{30}=\{\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{31}=\{\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{32}=\{P_2+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{33}=\{P_2+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{34}=\{\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{35}=\{P_2+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{36}=\{P_1+kP_2+\underline{\mu}P_3\}, \\ & {\mathcal G}_{37}=\{P_1+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{38}=\{P_1+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{39}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{40}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & \end{split} \end{displaymath} 
\begin{equation} \begin{split} &{\mathcal G}_{41}=\{P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{42}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{43}=\{P_1+kP_2+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{44}=\{P_1+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{45}=\{P_1+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{46}=\{P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{47}=\{P_1+kP_2+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{48}=\{P_1+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{49}=\{P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{50}=\{P_2+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{51}=\{\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \hspace{5mm} {\mathcal G}_{52}=\{\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{53}=\{P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \hspace{5mm} {\mathcal G}_{54}=\{P_2+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{55}=\{\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \hspace{5mm} {\mathcal G}_{56}=\{P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{57}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{58}=\{P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{59}=\{P_1+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{60}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{61}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal 
G}_{62}=\{P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{63}=\{P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}. \end{split} \label{list3} \end{equation} \noindent Next, we consider the one-dimensional subalgebras of the direct sum \begin{equation} \tilde{\mathcal S}={\mathcal S}\oplus\{P_5\}=\{\{P_1,P_3,Q_1\}\oplus \{P_2,P_4,Q_2\}\}\oplus\{P_5\}. \end{equation} Using the Goursat method as described above, we obtain, in addition to the subalgebras already listed in (\ref{list1}), (\ref{list2}) and (\ref{list3}), the subalgebras \begin{displaymath} \begin{split} &{\mathcal G}_{64}=\{P_5\}, \hspace{5mm} {\mathcal G}_{65}=\{P_1+kP_5\}, \hspace{5mm} {\mathcal G}_{66}=\{P_5+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{67}=\{P_5+\underline{\mu}Q_1\}, \hspace{5cm} \\ & {\mathcal G}_{68}=\{P_1+kP_5+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{69}=\{P_1+kP_5+\underline{\mu}Q_1\}, \\ & {\mathcal G}_{70}=\{P_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{71}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{72}=\{P_2+kP_5\}, \hspace{5mm} {\mathcal G}_{73}=\{P_5+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{74}=\{P_5+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{75}=\{P_2+kP_5+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{76}=\{P_2+kP_5+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{77}=\{P_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{78}=\{P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_2\},\\ & {\mathcal G}_{79}=\{P_1+kP_2+\ell P_5\}, \hspace{5mm} {\mathcal G}_{80}=\{P_1+kP_5+\underline{\mu}P_4\}, \\ & {\mathcal G}_{81}=\{P_1+kP_5+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{82}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_4\}, \\ & {\mathcal G}_{83}=\{P_1+kP_2+\ell P_5+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{84}=\{P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \\ & {\mathcal
G}_{85}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{86}=\{P_2+kP_5+\underline{\mu}P_3\}, \\ & {\mathcal G}_{87}=\{P_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{88}=\{P_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \end{split} \end{displaymath} \begin{equation} \begin{split} &{\mathcal G}_{89}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{90}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{91}=\{P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{92}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{93}=\{P_2+kP_5+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{94}=\{P_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{95}=\{P_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{96}=\{P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{97}=\{P_2+kP_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{98}=\{P_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{99}=\{P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{100}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3\}, \\ & {\mathcal G}_{101}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{102}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{103}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \\ & {\mathcal G}_{104}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{105}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{106}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{107}=\{P_1+kP_2+\ell P_5+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{108}=\{P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, 
\\ & {\mathcal G}_{109}=\{P_1+kP_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{110}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{1.1cm} \\ & {\mathcal G}_{111}=\{P_1+kP_2+\ell P_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{112}=\{P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{113}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{114}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{115}=\{P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{116}=\{P_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{117}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{118}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{119}=\{P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{120}=\{P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{121}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{122}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{123}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{124}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{125}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{126}=\{P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{127}=\{P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}. 
\end{split} \label{list4} \end{equation} \ \\ Finally, we classify the complete semidirect sum superalgebra ${\mathcal G}=\{D\}\sdir\tilde{\mathcal S}$ using the method of splitting and non-splitting subalgebras \cite{Splitting1,Splitting2}. The splitting subalgebras of ${\mathcal G}$ are formed by combining the dilation $\{D\}$ or the trivial element $\{0\}$ with each of the subalgebras of $\tilde{\mathcal S}$ in a semidirect sum of the form $F\sdir N$, where $F=\{D\}$ or $F=\{0\}$ and $N$ is a subalgebra of the classification of $\tilde{\mathcal S}$. The splitting one-dimensional subalgebras of ${\mathcal G}$ are the combined subalgebras ${\mathcal G}_{1}$ to ${\mathcal G}_{127}$ listed in (\ref{list1}), (\ref{list2}), (\ref{list3}) and (\ref{list4}) together with the subalgebra ${\mathcal G}_{128}=\{D\}$. For non-splitting subalgebras, we consider spaces of the form \begin{equation} V=\{D+\sum\limits_{i=1}^{s}c_iZ_i\}, \end{equation} where the $Z_i$ form a basis of $\tilde{\mathcal S}$. The resulting possibilities are further classified by observing which are conjugate to each other under the action of the complete group generated by ${\mathcal G}$.
This analysis provides us with the additional subalgebras \begin{displaymath} \begin{split} &{\mathcal G}_{129}=\{D+\varepsilon P_1\}, \hspace{5mm} {\mathcal G}_{130}=\{D+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{131}=\{D+\underline{\mu}Q_1\}, \\ &{\mathcal G}_{132}=\{D+\varepsilon P_1+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{133}=\{D+\varepsilon P_1+\underline{\mu}Q_1\}, \\ &{\mathcal G}_{134}=\{D+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{135}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{136}=\{D+\varepsilon P_2\}, \hspace{5mm} {\mathcal G}_{137}=\{D+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{138}=\{D+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{139}=\{D+\varepsilon P_2+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{140}=\{D+\varepsilon P_2+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{141}=\{D+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{142}=\{D+\varepsilon P_2+\underline{\mu}P_4+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{143}=\{D+\varepsilon P_1+kP_2\}, \hspace{5mm} {\mathcal G}_{144}=\{D+\varepsilon P_1+\underline{\mu}P_4\}, \\ & {\mathcal G}_{145}=\{D+\varepsilon P_1+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{146}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_4\}, \\ & {\mathcal G}_{147}=\{D+\varepsilon P_1+kP_2+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{148}=\{D+\varepsilon P_1+\underline{\mu}P_4+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{149}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{150}=\{D+\varepsilon P_2+\underline{\mu}P_3\}, \\ & {\mathcal G}_{151}=\{D+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{152}=\{D+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{153}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{154}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal 
G}_{155}=\{D+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{156}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5cm} \\ & {\mathcal G}_{157}=\{D+\varepsilon P_2+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{158}=\{D+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{159}=\{D+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{160}=\{D+\varepsilon P_2+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{161}=\{D+\varepsilon P_2+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{162}=\{D+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{163}=\{D+\varepsilon P_2+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{164}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3\}, \end{split} \end{displaymath} \begin{displaymath} \begin{split} &{\mathcal G}_{165}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{166}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{167}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{168}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{169}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{170}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{171}=\{D+\varepsilon P_1+kP_2+\underline{\mu}Q_1\}, \\ & {\mathcal G}_{172}=\{D+\varepsilon P_1+\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{173}=\{D+\varepsilon P_1+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{174}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{175}=\{D+\varepsilon P_1+kP_2+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{176}=\{D+\varepsilon 
P_1+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{177}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{178}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{179}=\{D+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \hspace{5mm} {\mathcal G}_{180}=\{D+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{181}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \hspace{5mm} {\mathcal G}_{182}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{183}=\{D+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{184}=\{D+\varepsilon P_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{185}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{186}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \hspace{5cm} \\ & {\mathcal G}_{187}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{188}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{189}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{190}=\{D+\varepsilon P_1+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{191}=\{D+\varepsilon P_1+kP_2+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \hspace{5mm} {\mathcal G}_{192}=\{D+\varepsilon P_5\}, \\ & {\mathcal G}_{193}=\{D+\varepsilon P_1+kP_5\}, \hspace{5mm} {\mathcal G}_{194}=\{D+\varepsilon P_5+\underline{\mu}P_3\}, \\ & {\mathcal G}_{195}=\{D+\varepsilon P_5+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal 
G}_{196}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3\}, \\ & {\mathcal G}_{197}=\{D+\varepsilon P_1+kP_5+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{198}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{199}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{200}=\{D+\varepsilon P_2+kP_5\}, \\ & {\mathcal G}_{201}=\{D+\varepsilon P_5+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{202}=\{D+\varepsilon P_5+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{203}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_4\}, \hspace{5mm} {\mathcal G}_{204}=\{D+\varepsilon P_2+kP_5+\underline{\mu}Q_2\}, \\ & {\mathcal G}_{205}=\{D+\varepsilon P_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{206}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_2\},\\ & {\mathcal G}_{207}=\{D+\varepsilon P_1+kP_2+\ell P_5\}, \hspace{5mm} {\mathcal G}_{208}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_4\}, \\ & {\mathcal G}_{209}=\{D+\varepsilon P_1+kP_5+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{210}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_4\}, \\ & {\mathcal G}_{211}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}Q_2\}, \hspace{5mm} {\mathcal G}_{212}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{213}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_2\}, \hspace{5mm} {\mathcal G}_{214}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3\}, \end{split} \end{displaymath} \begin{displaymath} \begin{split} &{\mathcal G}_{215}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{216}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{217}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \hspace{5mm} {\mathcal G}_{218}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{219}=\{D+\varepsilon 
P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{220}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \hspace{5mm} {\mathcal G}_{221}=\{D+\varepsilon P_2+kP_5+\underline{\mu}Q_1\}, \\ & {\mathcal G}_{222}=\{D+\varepsilon P_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{223}=\{D+\varepsilon P_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{224}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{225}=\{D+\varepsilon P_2+kP_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \hspace{5cm} \\ & {\mathcal G}_{226}=\{D+\varepsilon P_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{227}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{228}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3\}, \hspace{5mm} {\mathcal G}_{229}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \\ & {\mathcal G}_{230}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{231}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4\}, \\ & {\mathcal G}_{232}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{233}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{234}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{235}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}Q_1\}, \hspace{5mm} {\mathcal G}_{236}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{237}=\{D+\varepsilon P_1+kP_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{238}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{239}=\{D+\varepsilon P_1+kP_2+\ell 
P_5+\underline{\mu}Q_1+\underline{\nu}Q_2\}, \\ & {\mathcal G}_{240}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{241}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_4+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{242}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \hspace{5mm} {\mathcal G}_{243}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{244}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{245}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{246}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{247}=\{D+\varepsilon P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{248}=\{D+\varepsilon P_2+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{249}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_1\}, \\ & {\mathcal G}_{250}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{251}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \end{split} \end{displaymath} \begin{equation} \begin{split} &{\mathcal G}_{252}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1\}, \\ & {\mathcal G}_{253}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}Q_1+\underline{\rho}Q_2\}, \\ & {\mathcal G}_{254}=\{D+\varepsilon P_1+kP_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}, \\ & {\mathcal G}_{255}=\{D+\varepsilon P_1+kP_2+\ell P_5+\underline{\mu}P_3+\underline{\nu}P_4+\underline{\rho}Q_1+\underline{\sigma}Q_2\}. 
\hspace{4cm} \end{split} \label{list5} \end{equation} \noindent Therefore, the classification includes the 255 non-equivalent subalgebras listed above. It should be noted that we obtain far more subalgebras for this supersymmetric extension than were obtained for the scalar Born-Infeld supersymmetric extension described in \cite{Hariton}. \noindent It is worth noting that the minimal surface equation (\ref{eqmotion2}) is invariant under the discrete reflection transformation \begin{equation} x\rightarrow y,\quad y\rightarrow x,\quad \theta_1\rightarrow \theta_2,\quad \theta_2\rightarrow \theta_1. \label{discreter} \end{equation} Hence, identifying each subalgebra of the classification with its partner equivalent under the transformation (\ref{discreter}), the subalgebra classification of ${\mathcal G}$ can be simplified from 255 to 143 subalgebras. These 143 subalgebras, labeled ${\mathcal L}_1$ to ${\mathcal L}_{143}$, are listed in the Appendix. \section{Symmetry group reductions and solutions of the SUSY minimal surface equation} Each subalgebra given in the Appendix can be used to perform a symmetry reduction of the supersymmetric minimal surface equation (\ref{eqmotion2}) which, in most cases, allows us to determine invariant solutions of the SUSY MS equation. Once a solution of the equation is known, new solutions can be found by acting on the given solution with the supergroup of symmetries. Since both the equation and the list of subalgebras are very involved, we do not consider all possible cases. Instead we present certain interesting examples of nontrivial solutions which illustrate the symmetry reduction method. In each case, we begin by constructing a complete set of invariants (functions which are preserved by the symmetry subgroup action). Next, we find the group orbits of the corresponding subgroups as well as the associated reduced systems of equations. 
Each reduced system can be solved in order to construct an invariant solution of the SUSY MS equation (\ref{eqmotion2}). It should be noted that, as has been observed for other similar supersymmetric extensions \cite{Polytropic}, some of the subalgebras listed in the Appendix have a non-standard invariant structure in the sense that they do not reduce the system to ordinary differential equations in the usual sense. These are the 9 subalgebras: ${\mathcal L}_{2}$, ${\mathcal L}_{3}$, ${\mathcal L}_{6}$, ${\mathcal L}_{15}$, ${\mathcal L}_{16}$, ${\mathcal L}_{19}$, ${\mathcal L}_{21}$, ${\mathcal L}_{24}$, ${\mathcal L}_{33}$. This leaves 134 subalgebras that lead to standard symmetry reductions, of which we illustrate several examples. \subsection{Translation-invariant solutions} We first construct the following three polynomial translation-invariant solutions. For each of these examples, $K$, $C_1$, $C_2$, $C_7$ and $C_8$ are bosonic constants, while $\underline{C}$, $\underline{C_3}$, $\underline{C_4}$, $\underline{C_5}$ and $\underline{C_6}$ are fermionic constants. \noindent{\bf 1.} For the subalgebra ${\mathcal L}_1=\{\partial_x\}$, the set of invariants is $y$, $\theta_1$, $\theta_2$, $\Phi$, which leads to the group orbit $\Phi=\Phi(y,\theta_1,\theta_2)$. Substituting into equation (\ref{eqmotion2}), we obtain the quadratic solution \begin{equation} \Phi(y,\theta_1,\theta_2)=C_1+C_2y+\underline{C_3}\theta_1+\underline{C_4}y\theta_1+\underline{C_5}\theta_2+\underline{C_6}y\theta_2+C_7\theta_1\theta_2+C_8y\theta_1\theta_2. 
\label{solutionG1} \end{equation} \noindent{\bf 2.} For the subalgebra ${\mathcal L}_4=\{\partial_x+\underline{\mu}\partial_{\theta_1}\}$, we obtain the invariants $y$, $\eta=\theta_1-\underline{\mu}x$, $\theta_2$, $\Phi$, so $\Phi=\Phi(y,\eta,\theta_2)$ is the group orbit and we get the translationally invariant solution \begin{equation} \begin{split} \Phi(x,y,\theta_1,\theta_2)=&C_1+C_2y+\underline{C_3}(\theta_1-\underline{\mu}x)+\underline{C_4}y(\theta_1-\underline{\mu}x)+\underline{C_5}\theta_2+\underline{C_6}y\theta_2\\ &+C_7(\theta_1-\underline{\mu}x)\theta_2+C_8y(\theta_1-\underline{\mu}x)\theta_2. \label{solutionG4} \end{split} \end{equation} This solution constitutes an analogue travelling wave involving both the bosonic variable $x$ and the fermionic variable $\theta_1$. Along any curve $\theta_1-\underline{\mu}x=\underline{C}$, solution (\ref{solutionG4}) depends only on $y$ and $\theta_2$, which constitutes a subcase of (\ref{solutionG1}). \noindent{\bf 3.} The subalgebra ${\mathcal L}_{8}=\{\partial_x+k\partial_{y}\}$, $k\neq 0$, has invariants $\xi=y-kx$, $\theta_1$, $\theta_2$, $\Phi$, so we have the group orbit $\Phi=\Phi(\xi,\theta_1,\theta_2)$. We obtain the following stationary wave solution \begin{equation} \begin{split} \Phi(x,y,\theta_1,\theta_2)=&C_1+C_2(y-kx)+\underline{C_3}\theta_1+\underline{C_4}\theta_1(y-kx)+\underline{C_5}\theta_2+\underline{C_6}\theta_2(y-kx)\\ &+C_7\theta_1\theta_2+C_8\theta_1\theta_2(y-kx). \label{solutionG15} \end{split} \end{equation} This is an analogue travelling wave involving the bosonic spatial variables $x$ and $y$. Along any straight line $y-kx=K$, the dependence of solution (\ref{solutionG15}) is purely fermionic. \subsection*{Scaling-invariant solutions} We first present two subalgebra reductions involving combinations of dilations and translations.
\noindent {\bf 4.} The subalgebra ${\mathcal L}_{74}=\{2x\partial_x+2y\partial_y+(\theta_1+\underline{\mu})\partial_{\theta_1}+\theta_2\partial_{\theta_2}+4\Phi\partial_{\Phi}\}$ involves a linear combination of the dilation $D$ and the fermionic translation $P_3$. This subalgebra has invariants \begin{equation} \xi=\dfrac{y}{x},\quad \eta_1=\dfrac{\theta_1+\underline{\mu}}{\sqrt{x}},\quad \eta_2=\dfrac{\theta_2}{\sqrt{x}},\quad \Psi=\dfrac{\Phi}{x^2}, \end{equation} which leads to the group orbit $\Phi=x^2\Psi(\xi,\eta_1,\eta_2)$. If we make the assumption that the bosonic superfield $\Psi$ is of the particular bodiless form \begin{equation} \Psi=\omega(\xi)\eta_1\eta_2, \label{bodiless} \end{equation} where $\omega(\xi)$ is an arbitrary bosonic function of $\xi$, equation (\ref{eqmotion2}) reduces to the ordinary differential equation \begin{equation} (\omega^2+\xi^2+1)\omega_{\xi\xi}\eta_1\eta_2=0, \end{equation} from which we obtain the following two solutions for $\omega$: \begin{equation} \omega(\xi)=\varepsilon_1 i\sqrt{\xi^2+1},\hspace{2cm} \omega(\xi)=A\xi+B, \end{equation} where $\varepsilon_1=\pm 1$ and $A$ and $B$ are complex-valued constants. This leads to the following radical and algebraic invariant solutions, respectively: \begin{equation} \mbox{(i) }\hspace{2mm} \Phi=\varepsilon_1 i\sqrt{x^2+y^2}(\theta_1+\underline{\mu})\theta_2,\hspace{1.5cm} \mbox{(ii) }\hspace{2mm} \Phi=(Ay+Bx)(\theta_1+\underline{\mu})\theta_2. \label{theg66solution} \end{equation} Solutions (\ref{theg66solution}) consist of (i) a radially dependent solution and (ii) a centered wave whose level curves are lines intersecting at the origin. Both solutions involve the fermionic variables $\theta_1$ and $\theta_2$.
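The two branches of $\omega$ arise because the reduced equation factorizes; the following brief check (our addition, not part of the original derivation) makes this explicit:

```latex
% The reduced ODE factorizes, so either factor may vanish:
\begin{displaymath}
(\omega^2+\xi^2+1)\,\omega_{\xi\xi}=0
\quad\Longrightarrow\quad
\omega^2=-(\xi^2+1)\ \ \mbox{or}\ \ \omega_{\xi\xi}=0,
\end{displaymath}
% yielding the radical branch $\omega=\varepsilon_1 i\sqrt{\xi^2+1}$
% and the linear branch $\omega=A\xi+B$, respectively.
```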
\noindent {\bf 5.} The subalgebra ${\mathcal G}_{136}=\{2x\partial_x+(2y+\varepsilon)\partial_y+\theta_1\partial_{\theta_1}+\theta_2\partial_{\theta_2}+4\Phi\partial_{\Phi}\}$ (which is equivalent to ${\mathcal L}_{73}$ under the discrete transformation (\ref{discreter})) involves a linear combination of the dilation $D$ and the bosonic translation $P_2$. This subalgebra has invariants \begin{equation} \xi=\dfrac{2y+\varepsilon}{x},\quad \eta_1=\dfrac{\theta_1}{\sqrt{x}},\quad \eta_2=\dfrac{\theta_2}{\sqrt{x}},\quad \Psi=\dfrac{\Phi}{x^2}, \end{equation} which leads to the group orbit $\Phi=x^2\Psi(\xi,\eta_1,\eta_2)$. Under the assumption (\ref{bodiless}), equation (\ref{eqmotion2}) reduces to the ordinary differential equation \begin{equation} (2\xi\omega\omega_{\xi}+6\omega^2+\xi^2+4)\omega_{\xi\xi}\eta_1\eta_2=0, \end{equation} from which we obtain the following two solutions for $\omega$: \begin{equation} \omega(\xi)=\varepsilon_1 \sqrt{-\dfrac{1}{8}\xi^2-\dfrac{2}{3}+\dfrac{K}{\xi^6}},\hspace{2cm} \omega(\xi)=A\xi+B, \end{equation} where $\varepsilon_1=\pm 1$ and $A$, $B$ and $K$ are complex-valued constants. This leads to the following radical and algebraic invariant solutions, respectively: \begin{equation} \mbox{(i) }\hspace{2mm} \Phi=\varepsilon_1 \theta_1\theta_2\sqrt{-\dfrac{(2y+\varepsilon)^2}{8}-\dfrac{2x^2}{3}+\dfrac{Kx^8}{(2y+\varepsilon)^6}},\hspace{1.5cm} \mbox{(ii) }\hspace{2mm} \Phi=\theta_1\theta_2(2Ay+Bx+\varepsilon A). \label{theg72solution} \end{equation} In (\ref{theg72solution}), solution (i) is a radical solution which admits two sixth-order poles in $y$ for $\varepsilon=\pm 1$. In contrast, solution (ii) is a cubic polynomial solution which does not have poles. It is a subcase of (\ref{solutionG15}). \noindent {\bf 6.} We now construct a scaling-invariant solution corresponding to the subalgebra ${\mathcal L}_{72}=\{2x\partial_x+2y\partial_y+\theta_1\partial_{\theta_1}+\theta_2\partial_{\theta_2}+4\Phi\partial_{\Phi}\}$.
This subalgebra has invariants \begin{equation} \xi=\dfrac{y}{x},\quad \eta_1=\dfrac{\theta_1}{\sqrt{x}},\quad \eta_2=\dfrac{\theta_2}{\sqrt{x}},\quad \Psi=\dfrac{\Phi}{x^2}, \end{equation} which leads to the group orbit $\Phi=x^2\Psi(\xi,\eta_1,\eta_2)$. Since the general case is very involved, we make various assumptions concerning the form of the bosonic function $\Psi$ in order to obtain particular solutions. From the hypothesis \begin{equation} \Psi=f(\xi)+g(\eta_1,\eta_2)+\underline{A}\eta_1+\underline{B}\eta_2+C, \end{equation} where $f$ and $g$ are bosonic functions, $\underline{A}$ and $\underline{B}$ are fermionic constants, and $C$ is a bosonic constant, we obtain the solutions \begin{equation} \Phi(x,y,\theta_1,\theta_2)=C_1x\theta_1\theta_2+C_2y^2+C_3xy+C_4x^2, \label{solutionG64a} \end{equation} where $C_1$, $C_2$, $C_3$ and $C_4$ are arbitrary bosonic constants, and \begin{equation} \Phi(x,y,\theta_1,\theta_2)=ay^2+Cxy-Mx^2+N\theta_1\theta_2, \label{solutionG64b} \end{equation} where $C$, $M$ and $N$ are arbitrary bosonic constants.
Under the assumption (\ref{bodiless}), we obtain the following doubly periodic solution of equation (\ref{eqmotion2}) \begin{equation} \begin{split} &\Phi(x,y,\theta_1,\theta_2)=\dfrac{i\theta_1\theta_2\sqrt{x}}{x^2+1}\bigg{[}2\left(-i(x+i)\right)^{1/2}2^{1/2}\left(-i(-x+i)\right)^{1/2}(xi)^{1/2}\cdot\\ &\left(x(x^2+1)\right)^{1/2}\mbox{E}\left(\left(-i(x+i)\right)^{1/2},2^{-1/2}\right) -\left(-i(x+i)\right)^{1/2}2^{1/2}\left(-i(-x+i)\right)^{1/2}\cdot\\ &(xi)^{1/2}\left(x(x^2+1)\right)^{1/2}\mbox{F}\left(\left(-i(x+i)\right)^{1/2},2^{-1/2}\right) -(x^3+x)^{1/2}x^2-(x^3+x)^{1/2}\bigg{]}, \end{split} \label{solutionG64c} \end{equation} where $F(\varphi,k)$ and $E(\varphi,k)$ are the standard elliptic integrals of the first and second kind respectively, \begin{equation} \begin{split} &F(\varphi,k)=\int\limits_0^{\varphi}{d\theta\over \sqrt{1-k^2\sin^2{\theta}}}=\int\limits_0^{x}{dt\over \sqrt{(1-t^2)(1-k^2t^2)}},\\ &E(\varphi,k)=\int\limits_0^{\varphi}\sqrt{1-k^2\sin^2{\theta}}d\theta=\int\limits_0^{x}\sqrt{1-k^2t^2\over 1-t^2}dt, \end{split} \label{ellipticintegrals} \end{equation} where $x=\sin{\varphi}$, and the modulus $k=2^{-1/2}$ is such that $k^2<1$. This ensures that the elliptic solutions each possess one real and one purely imaginary period and that for real-valued arguments $\varphi$ we have real-valued solutions \cite{Byrd}. The solutions are doubly periodic multiwaves. \noindent It should be noted that the solutions found for the subalgebras ${\mathcal L}_{74}$ and ${\mathcal G}_{136}$ involving combinations of dilations and translations are fundamentally different from the solutions found for the subalgebra ${\mathcal L}_{72}$ involving a dilation alone. It should also be noted that, in the limit where $\theta_1$ and $\theta_2$ approach zero, the solutions (\ref{theg66solution}), (\ref{theg72solution}) and (\ref{solutionG64c}) vanish. These solutions therefore have no counterpart for the classical MS equation.
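As a consistency check (our addition, not part of the original derivation), the radical branch found for ${\mathcal G}_{136}$ indeed annihilates the bracket in its reduced equation: with $\omega^2=-\tfrac{1}{8}\xi^2-\tfrac{2}{3}+K\xi^{-6}$, one has

```latex
\begin{displaymath}
2\xi\omega\omega_{\xi}=\xi\,\frac{d}{d\xi}\big(\omega^{2}\big)
=-\frac{\xi^{2}}{4}-\frac{6K}{\xi^{6}},
\end{displaymath}
\begin{displaymath}
2\xi\omega\omega_{\xi}+6\omega^{2}+\xi^{2}+4
=-\frac{\xi^{2}}{4}-\frac{6K}{\xi^{6}}
-\frac{3\xi^{2}}{4}-4+\frac{6K}{\xi^{6}}+\xi^{2}+4=0.
\end{displaymath}
```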
\section{Group Analysis of the Classical Minimal Surface Equation} In this section, we review previous group-theoretical results concerning the classical minimal surface equation (\ref{minimals}). In reference \cite{Bila}, the infinitesimal Lie point symmetries of (\ref{minimals}) were determined to be \begin{equation} \begin{split} & e_1=\partial_x,\hspace{1cm}e_2=\partial_y,\hspace{1cm}e_3=\partial_u,\hspace{1cm}e_4=-y\partial_x+x\partial_y, \\ & e_5=-u\partial_y+y\partial_u,\hspace{1cm} \hspace{1cm} e_6=-x\partial_u+u\partial_x,\hspace{1cm}e_7=x\partial_x+y\partial_y+u\partial_u. \end{split} \label{symmetriesclassical} \end{equation} The non-zero commutation relations of the generators (\ref{symmetriesclassical}) are given by \begin{equation} \begin{split} & [e_1,e_4]=e_2,\hspace{1cm}[e_1,e_6]=-e_3,\hspace{1cm}[e_1,e_7]=e_1,\hspace{1cm}[e_2,e_4]=-e_1,\hspace{1cm} \\ & [e_2,e_5]=e_3,\hspace{1cm}[e_2,e_7]=e_2,\hspace{1cm}[e_3,e_5]=-e_2,\hspace{1cm}[e_3,e_6]=e_1,\hspace{1cm} \\ & [e_3,e_7]=e_3,\hspace{1cm}[e_4,e_5]=-e_6,\hspace{1cm}[e_4,e_6]=e_5,\hspace{1cm}[e_5,e_6]=-e_4. \end{split} \label{commutationclassical} \end{equation} The seven-dimensional Lie algebra ${\mathcal E}$ generated by the vector fields (\ref{symmetriesclassical}) can be decomposed as the following combination of semi-direct sums: \begin{equation} {\mathcal E}=\{\{e_4,e_5,e_6\}\sdir\{e_1,e_2,e_3\}\}\sdir\{e_7\} \label{classicaldecomposition} \end{equation} Using the methods described in section 4, we perform a classification of the one-dimensional subalgebras of the Lie algebra ${\mathcal E}$. We briefly summarize the obtained results. We begin with the subalgebra ${\mathcal A}=\{e_4,e_5,e_6\}$. This subalgebra is isomorphic to $A_{3,9}\hspace{5mm} (su(2))$ as listed in \cite{Patera77}, whose subalgebras are all conjugate with $\{e_4\}$. 
Next, we use the methods of splitting and non-splitting subalgebras to determine the subalgebras of \begin{equation} {\mathcal A}\sdir{\mathcal B}=\{e_4,e_5,e_6\}\sdir\{e_1,e_2,e_3\}. \label{classicalaplusb} \end{equation} Through the Baker-Campbell-Hausdorff formula (\ref{BCHformula}), we find that all subalgebras of ${\mathcal A}\sdir{\mathcal B}$ are conjugate to an element of the list $\{e_1\}, \{e_4\}, \{e_4+me_3\}$, where $m$ is any real number. Finally, if we consider the full Lie algebra ${\mathcal E}$, we also obtain the subalgebra $\{e_7\}$. Thus, the full classification of the one-dimensional subalgebras of ${\mathcal E}$ is given by the list \begin{equation} \{e_1\}, \hspace{5mm} \{e_4\}, \hspace{5mm} \{e_4+me_3\}, \hspace{5mm} \{e_7\}. \label{classificlist} \end{equation} This result differs from the one obtained in \cite{Bila}, in which twelve different conjugation classes were obtained for the classification. \noindent We perform symmetry reduction of the classical minimal surface equation for each of the four one-dimensional subalgebras given in (\ref{classificlist}). The results for subalgebras $\{e_1\}$ and $\{e_7\}$ are the same as those found in \cite{Bila}, that is, planar solutions. However, for subalgebras $\{e_4\}$ and $\{e_4+me_3\}$, we obtain the following results which are not given in \cite{Bila}. In both cases, the reduced equations lead to instances of Abel's equation of the first kind. \noindent For subalgebra $\{e_4\}$, the invariants are $\xi=x^2+y^2$ and $u$, and so $u$ is a function of $\xi$ only. Equation (\ref{minimals}) then reduces to \begin{equation} v_{\xi}=-\dfrac{1}{\xi}v-2v^3,\hspace{5mm} v=u_{\xi}. \label{thesolutionfore4} \end{equation} Solving equation (\ref{thesolutionfore4}) leads to the invariant solution \begin{equation} u(x,y)=\dfrac{1}{\sqrt{2s_0}}\ln{\Big{|}4\sqrt{s_0}\sqrt{s_0(x^2+y^2)^2-2(x^2+y^2)}+4s_0(x^2+y^2)-4\Big{|}}+k_0, \label{e4solution} \end{equation} where $s_0$ and $k_0$ are real constants.
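Equation (\ref{thesolutionfore4}) is also of Bernoulli type, which makes the quadrature transparent; the following sketch is our reconstruction (the integration constant is written as $2s_0$ to match (\ref{e4solution})):

```latex
% Substituting w=v^{-2} linearizes (\ref{thesolutionfore4}):
\begin{displaymath}
w_{\xi}=-2v^{-3}v_{\xi}=\frac{2}{\xi}\,w+4,
\qquad
\left(\xi^{-2}w\right)_{\xi}=\frac{4}{\xi^{2}}
\ \Longrightarrow\ w(\xi)=2s_{0}\xi^{2}-4\xi,
\end{displaymath}
\begin{displaymath}
u_{\xi}=v=\frac{1}{\sqrt{2s_{0}\xi^{2}-4\xi}},
\end{displaymath}
% and integrating this expression with \xi=x^{2}+y^{2} reproduces the
% logarithmic solution (\ref{e4solution}).
```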
\noindent For subalgebra $\{e_4+me_3\}$, the invariants are \begin{displaymath} \xi=x^2+y^2 \hspace{1cm}\mbox{ and }\hspace{1cm} \phi=u+m\hspace{1mm}\arcsin{\left(\dfrac{x}{\sqrt{x^2+y^2}}\right)} \end{displaymath} Therefore \begin{displaymath} u=\phi(\xi)-m\hspace{1mm}\arcsin{\left(\dfrac{x}{\sqrt{x^2+y^2}}\right)}. \end{displaymath} Equation (\ref{minimals}) then reduces to \begin{equation} v_{\xi}=-\dfrac{2\xi}{\xi+m^2}v^3-\dfrac{2\xi+3m^2}{2\xi(\xi+m^2)}v,\hspace{5mm} v=\phi_{\xi}. \label{thesolutionfore4plusme3} \end{equation} Solving equation (\ref{thesolutionfore4plusme3}) leads to the invariant solution \begin{equation} \begin{split} u(x,y)=&\dfrac{im}{2}\ln{\Bigg{|}\dfrac{2\sqrt{2}im(s_0\xi-2)^{1/2}(m^2+\xi)^{1/2}+(s_0m^2-2)\xi-4m^2}{\xi}\Bigg{|}}\\ &+\dfrac{1}{\sqrt{2s_0}}\ln{\Bigg{|}\dfrac{2\sqrt{s_0}(s_0\xi-2)^{1/2}(m^2+\xi)^{1/2}+(2\xi+m^2)s_0-2}{\sqrt{s_0}}\Bigg{|}}+k_0 \end{split} \label{e4me3solution} \end{equation} where $s_0$ and $k_0$ are real constants. This completes the symmetry reduction analysis of the classical minimal surface equation (\ref{minimals}) for one-dimensional Lie subalgebras. \section{Final Remarks} In this paper, we have formulated a supersymmetric extension of the minimal surface equation using a superspace involving two fermionic Grassmann variables and a bosonic-valued superfield. A Lie superalgebra of symmetries was determined which included translations and a dilation. The one-dimensional subalgebras of this superalgebra were classified into a large number of conjugation classes under the action of the corresponding supergroup. A number of these subalgebras were found to possess a non-standard invariant structure. For certain subalgebras, the symmetry reduction method was used to obtain invariant solutions of the SUSY MS equation. These solutions include algebraic solutions, radical solutions and doubly periodic multiwave solutions expressed in terms of elliptic integrals. 
In addition, we have also performed a Lie symmetry analysis of the classical minimal surface equation and compared the results with those obtained in \cite{Bila}. We found fewer one-dimensional subalgebras in the subalgebra classification by conjugation classes than were obtained in \cite{Bila}. Finally, we have completed the symmetry reduction analysis for this equation. In contrast with the supersymmetric case, where 143 representative subalgebras were found, only four such subalgebras were found for the classical case. In both the classical and supersymmetric cases, a dilation symmetry was found, together with translations in all independent and dependent variables.\\\\ In the future, it would be interesting to expand our analysis in several directions. One such possibility would be to apply the above supersymmetric extension methods to the MS equation in higher dimensions. Due to the complexity of the calculations involved, this would require the development of a computer Lie symmetry package capable of handling odd and even Grassmann variables. To the best of our knowledge, such a package does not presently exist. A conservation law (\ref{CL}) is well-established for the classical minimal surface equation; which quantities are conserved by the supersymmetric minimal surface model remains an open question. We could also consider conditional symmetries of the SUSY MS equation, which could allow us to enlarge the class of solutions and corresponding surfaces. Finally, it would be of interest to develop the theory of boundary conditions for equations involving Grassmann variables and analyze the existence and uniqueness of solutions.\\\\ \noindent {\bf Acknowledgements}\\ AMG's work was supported by a research grant from NSERC of Canada. AJH wishes to thank the Mathematical Physics Laboratory of the Centre de Recherches Math\'{e}matiques, Universit\'{e} de Montr\'{e}al, for the opportunity to participate in this research.
\section*{The problem: a Cambrian explosion} Until relatively recently, statisticians had a monopoly on data analysis. They were, and are, highly trained to appreciate the intricate relationships and biases in data and to use relatively \emph{simple} methods (in the best sense of the word) to analyze the data and fit models to it. Data collection was often done under their guidance to ensure biases were understood, documented, and mitigated. Nowadays, data is ubiquitous and often claimed to be the new oil. However, real-world datasets often resemble more of an oil spill, containing a plethora of unknown (and often unknowable) biases. Without sufficient statistics and SE skills, the development of a DSS tends to lead to the following implications: \begin{align*} \text{Big Data} &\Rightarrow \text{Messy Data} \Rightarrow \text{Big Code}\\ &\Rightarrow \text{Messy Code} \Rightarrow \text{Incorrect Conclusions} \end{align*} The radical increase in the scale and availability of data has led to an equally radical paradigm shift in its use. Data scientists build complex systems on top of complex, biased, and generally incomprehensible data. To do this, they are the consumers of many more software tools than classical statisticians. As users of many tools, data scientists naturally find it more vital to know how to interface with them and less feasible to understand their internal workings. Hence the underlying software must be trustworthy; one has to assume it is almost bug-free, with any remaining bugs being insignificant to any conclusions. Expressing and structuring an analysis plan in code is the bedrock of all data science projects, and, due to these many tools, modern data scientists must write increasing amounts of custom `glue code' when developing DSSs. However, SE is a challenging discipline, and building on vast unfamiliar codebases often leads to unexpected consequences.
From \textit{both} the data's and the algorithm's perspectives, this paradigm shift resembles a Cambrian explosion in the quantity and intrinsic complexity of data and code. \section*{Why is the problem challenging?} In this section, we want to discuss some significant challenges data scientists face when developing a correct and effective DSS. Some of these challenges are due to human nature, whereas others are of a technical nature. \subsection*{Challenge 1: Missing SE skills} Most data scientists only learn to write small codebases, whereas SE focuses on working with large codebases. As mentioned above, code is the interface to many data science tools, and SE is the discipline of organizing interfaces methodically. For this paper, \textbf{we define SE as the discipline of managing the complexity of code and data with interfaces as one of its primary tools}~\cite{parnas1972criteria}. While many SE practices focus on enterprise software and do not trivially apply to all components of DSSs, it is our conviction that SE methodologies must play a more prominent role in future data science projects. \subsection*{Challenge 2: Correctness and efficacy} A DSS must work correctly, i.e., it does what you think it does. It also must be efficacious, i.e., produce relevant and usable predictions. Without SE, following earlier arguments, this tends to lead to the following implications: \begin{align*} &\text{Multiple Experiments} \Rightarrow \\ &\ \text{Messy Code} \Rightarrow \text{Incorrect Conclusions} \end{align*} So why do we truly need correctness \textit{and} efficacy for a trustworthy high-performing model? Firstly, as mentioned, published, executable code can provide computational reproducibility, but repeatability requires correctness. Secondly, while an incorrect DSS can be efficacious due to a lucky bug, it is uninterpretable and hard to modify.
\textbf{Without correctness, it is impossible to understand, interpret, or trust the outputs of, and conclusions based on, a DSS.} See Figure~\ref{fig:table} for a visualization of why we need correctness and efficacy. \begin{figure*} \centering \begin{tabular}{c||c|c} & Not correct & Correct \\ [0.5ex] \hline\hline Not efficacious & \makecell{You do not know whether\\ your idea is bad.\\ Try to achieve correctness,\\ it might give you\\ efficacy too.} & You need a new idea. \\ \hline Efficacious & \makecell{You do not know whether\\ your idea works.\\ Try to make the system\\ correct or analyze why\\ your system is effective.} & \makecell{\includegraphics{figures/party-popper.pdf}} \end{tabular} \caption{You need both: correctness \& efficacy.} \label{fig:table} \end{figure*} \subsection*{Challenge 3: Perverse incentives in academia} Software engineers, industrial data scientists, and academic data scientists produce different products within wildly different incentive structures. Software engineers are rewarded for creating high-performing, well-documented, and reusable codebases; data scientists are rewarded based on their DSS outputs. Like software engineers, industrial data scientists are rewarded based on the system's usefulness to the company. Academic data scientists, however, aim to use their results to write marketable papers to further their field, apply for grants, and enhance their reputation. For academia, there is a conflict between short-term and long-term incentives. Academic careers are peripatetic in nature and most positions are temporary for early career researchers, who tend to be those developing DSSs. Therefore, in the short term it is rewarding to publish papers quickly and give less attention to the reusability of the codebase, as careful reusable development leads to delayed gratification.
The short-term academic incentive structure might even discourage producing and publishing code comprehensible to a broad audience, to avoid getting `scooped' by competitors. In the long term, however, a clear incentive to develop reusable DSSs is that this increases the probability that the paper will become influential and be well cited. For example, if two similar papers are published, but only one provides good code, it is almost certain that future papers will compare directly to this one. Over time, this will dramatically (and multiplicatively) separate the popularity of the two papers. Failing to incentivise this, however, leads to enormous value destruction for society. The grant system offers one potential mechanism to resolve these perverse incentives and encourage the realisation of the long-term ones: proposals that involve the development of DSSs could be required to include resources for the construction of reusable and deployable codebases. Interestingly, this is not a new phenomenon; Knuth~\cite{knuth1984literate} discussed it in the 1980s when he was advised to publish \TeX's source code. However, if a field's incentive structure and goals are misaligned, see, e.g., the positive publication bias~\cite{barnett2019examination,van2021significance}, the path of least resistance easily gains the upper hand. \subsection*{Challenge 4: Short-circuits} The democratization of powerful data analysis and machine learning tools allows for short-circuits as keen amateurs can develop complex DSSs relatively quickly. This is not to say that using powerful publicly available tools or short-circuits is inherently bad. On the contrary, if every practitioner were writing private versions of common tool kits, this would be a major source of bugs. However, powerful tools reduce the accidental complexity, not the intrinsic complexity of DSSs. Thus, they make it easier to build complex systems with high intrinsic complexity.
This intrinsic complexity is extremely hard to manage, especially because it often has hidden subtleties. \subsection*{Challenge 5: Teams vs. individual work} Working in a team, e.g., on a codebase, can be extremely powerful, but without the proper training or organizational structure, it can also produce massive inefficiencies and errors -- teams being complex systems themselves. Software engineers are often highly trained in agile teamwork methodologies, e.g., SCRUM~\cite{sutherland2014scrum, fowler2001agile}. They also know how to harness the benefits of infrastructure, such as version control, continuous integration pipelines, and pair programming. Academic data scientists tend only to possess informal training in these teamwork-enabling tools. \subsection*{Challenge 6: Bridging the academia-industry gap} Data science projects in industry and academia have many similarities. However, besides the already discussed incentive differences, there are also key distinctions in the DSS development environment. Due to a larger SE culture, industry embraces the idea that high-quality code is obligatory for maintainable DSSs; academia is often simply not interested in, or rewarded for, maintainability. In academia, the incentives promote a strong throwaway mentality towards code. Many DSSs never break out from research groups and, usually, there is no incentive for long-term maintenance. Finally, academia has virtually no feedback loop for code quality. High-quality code is neither a pre-requisite for most publications nor utilized for assessing career performance. \subsection*{Challenge 7: Training a DSS is costly} A change in a DSS can require costly and lengthy retraining, either to check how it changes the outcome or to check that it does not change the outcome. For this reason, seemingly minor fixes and improvements, as well as code cleanups, might not happen at all. 
\subsection*{Challenge 8: Long-term maintenance} Even a small DSS is often sufficiently complex that its dependencies on other packages or codes can number in the dozens or hundreds. As complex systems are inherently fragile, a minor change in one of the dependencies can lead to a (potentially silent) failure of the entire DSS. This is one of many reasons long-term code maintenance is costly or simply not possible. While there are many countermeasures to facilitate computational reproducibility, e.g., publishing Python/Anaconda environments and test suites, they do not ensure future reusability within larger DSSs. \subsection*{Summary of the challenges} Many researchers have a systemic lack of awareness that SE is integral to modern data science. This translates not only into a lack of formal training but also into a perverse incentive structure. Considering human nature, it is somewhat surprising that there is good academic data science code at all. These perverse incentives cause a colossal loss of opportunity to create value! DSSs must be \emph{both} correct and efficacious. The potential unleashed by the usability of modern data science tools has enabled significant progress but also the development of many seemingly efficacious but incorrect systems. Industry is inherently better at developing high-quality code as their code must integrate with infrastructure, teams, and deployment platforms. Academia lacks such guard rails; code development is often myopic. \section*{Using software engineering to grow complex systems} Every programmer can write small codebases, but larger ones require SE to both perform correctly and remain maintainable~\cite{farley2021}. So, why is it generally hard to build complex systems from scratch? Complex systems tend to consist of many highly interconnected components which are fragile to small perturbations. In the case of a codebase, these perturbations could be simple typos that, with luck, produce a syntax error.
Otherwise, a simple typo can subtly alter the outcome in unknown ways and lead to dramatic and unexpected consequences. Gall's law~\cite{gall1975general} states that complex systems cannot be built; they can only be grown, i.e., we should not plan the entire DSS in advance, implement it and then evaluate the code. Instead, one should use small incremental steps following a not-too-detailed plan, never deviating far from a working system. Gall's law should be of great value to data scientists as we argue that growing an $n$-component system can reduce the maximal build complexity from $O(n^2)$ to $O(n)$. Although one can often decompose complex systems into predominantly simple components, the sheer number of them quickly turns the interacting simple components into a complex whole. If one wants to build a system with $n$ components, there are up to $O(n^2)$ interactions between them, giving $O(n^2)$ potential failure points (assuming that each component works correctly). SE has developed two leading solutions to this ``$O(n^{2})$-problem'': software architecture and agile development~\cite{bass2003software, farley2021}. \textbf{Software Architecture.} Well-established code development principles are a critical component of SE. One key principle is the separation of concerns, which splits the software into different components, each handling a single isolated concern and possessing a simple, complexity-hiding interface~\cite{parnas1972criteria}. These components are, in turn, formed by connecting lower-level isolated components. Designing the software architecture in this manner reduces the graph spanned by the different components from a potentially densely connected graph with $O(n^{2})$ connections to a sparse graph with fewer connections, i.e., far fewer potential failure points. It is also advantageous to have a sense of locality in the code and graph such that components are preferably locally connected; see Figure~\ref{fig:graph_network} for a visualization.
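As a toy illustration of this principle (our sketch; all component names are hypothetical and not from any particular library), a small DSS can be organized so that each component owns one isolated concern and exposes one narrow interface, keeping the interaction graph a sparse, local chain rather than an $O(n^2)$ tangle:

```python
# A minimal sketch of separation of concerns for a data science pipeline.
# Each stage handles one concern behind a single function interface, so
# components interact only with their immediate neighbours.

def load(raw):
    """Concern 1: ingestion only -- no cleaning, no modelling."""
    return list(raw)

def clean(rows):
    """Concern 2: drop records with missing values."""
    return [r for r in rows if all(v is not None for v in r)]

def featurize(rows):
    """Concern 3: turn each record into a single numeric feature."""
    return [sum(r) for r in rows]

def fit_mean(features):
    """Concern 4: a deliberately trivial 'model' -- predict the mean."""
    return sum(features) / len(features)

def pipeline(raw):
    # The only place the components meet: a chain of n-1 local connections.
    return fit_mean(featurize(clean(load(raw))))

print(pipeline([(1, 2), (3, None), (4, 5)]))  # the incomplete record is dropped
```

Swapping the model or the cleaning rule then touches exactly one component, so each incremental change confronts $O(1)$ interactions instead of $O(n)$.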
\begin{figure*} \centering \includegraphics[trim=25 25 25 25,clip,width=\textwidth]{figures/graph_net.pdf} \caption{The graph on the left is a fully connected graph illustrating a bad software architecture. The graph on the right is sparsely, and mostly locally, connected, demonstrating a better architecture~\cite{watts1998collective,valverde2003hierarchical}.} \label{fig:graph_network} \end{figure*} \textbf{Agile Development.} Modern SE tends to follow an agile approach, where one grows software incrementally, adding or changing one component at a time, so there is always a working system. One only has to consider how this new component interacts with the existing $n$ components. This reduces the $O(n^{2})$ potential failure points to $O(n)$ possible failure points at each step. In the end, this reduces the ``build complexity'' from $O(n^{2})$ to $O(n)$ when the complex system is grown. See Figure~\ref{fig:delayed_gratification} for a visualization. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/delayed_gratification.png} \caption{Some SE practices require delayed gratification, as one has to sacrifice in the short term for long-term progress.} \label{fig:delayed_gratification} \end{figure*} Another crucial component of SE is the infrastructure used, e.g., continuous integration pipelines and version control systems, along with test suites for codebases. There is empirical evidence that following these principles leads to improvements in SE~\cite{48455, radziwill2020accelerate}. While one might not need a software engineer to develop a DSS, the same principles apply, and one certainly needs software engineering. \section*{Corrective action is required} We now want to discuss how we can combine and generalize these principles into concrete advice. We hope to enable data scientists to write correct and effective code in less time.
\subsection*{Do not build complex systems; grow them} The approach of planning an entire system and then building it does not work for complex systems, and \textbf{we must grow DSSs} to keep the complexity at each incremental stage on the order of at most $O(n)$, and ideally, enabled by good software architecture, of $O(1)$. Planning is still required to ensure we continually grow our systems towards the desired goal, but it should be highly iterative and alternate with incremental implementation steps. \textbf{Planning not only orients the iterations but also helps to avoid local optima} during the evolution of the DSS. It is valuable to recognize that the future evolution of a complex system is increasingly fuzzy. Planning should follow a multiscale approach with a discount factor on future details. An excellent example is the comparison of SpaceX's rocket development process against the classical approach~\cite{reddy2018spacex,vance2015elon,smith1979shuttle}. The rocket's design was grown by testing it over many iteratively adapted instantiations, each being a little bit less of a failure than the last one. Perkel~\cite{perkel2022fix} argues that this iterative process, embracing the inevitability of errors, must become deeply appreciated during the development of a DSS, and of complex (software) systems in general. \subsection*{Testing for data science systems} Developing a suite of tests alongside a DSS provides a comprehensive correctness feedback loop. \textbf{A good test suite mitigates the fragility} that DSSs -- as complex systems -- inevitably have, e.g., a typo can destroy everything without ever being noticed. However, software engineers often base tests on known example input-output pairs, and for non-trivial code, e.g., complex numerical routines, such pairs may be impossible to know. \textbf{Property-based testing} can help.
While we often do not know a priori the correct output of a function for a given input, in many cases, mathematics can tell us some properties that the function, or its output, should have. With property-based testing, we use that knowledge, e.g., by creating random inputs to the function and checking whether the property holds for all of them. Python libraries such as \texttt{pytest}~\cite{pytest} and \texttt{hypothesis}~\cite{Hypothesis} can be used for general and property-based testing, respectively. One critical question is: which tests do I have to write to be confident in the correctness of my system? We propose that focusing on the functionality the system will need to provide when deployed is key. This allows one to think recursively -- similar to dynamic programming -- about which components of the system one must test, and to what degree, to have confidence in the system's functionality when deployed externally. Within the code, we recommend performing as many integrity checks on the data as possible by implementing tests that check whether the inputs of your system fulfill the assumptions you make about them before the data enters the model~\cite{recipe}. This is hard, as we are often unaware of the assumptions we make and forget those we have made, so it makes sense to hard-code them as tests whenever we notice them. One can also do this with dedicated Python libraries like \texttt{pandera}~\cite{pandera}. If we expect a variable to be in a particular format, we should write a check that generates an error or warning in case the format is wrong. This approach minimizes the uncertainties in the code by converting assumptions into certainties. \subsection*{The nature and necessity of feedback loops} Earlier, we discussed how growing a DSS can reduce the build complexity from $O(n^2)$ to $O(n)$. Incremental, iterative development relies on feedback loops.
When establishing a feedback loop that relies on a test function, it is helpful to consider two of its properties: alignment and cycle time, which we visualize in Figure~\ref{fig:feedback}. \begin{figure*} \centering \includegraphics[width=.5\textwidth]{figures/feedback.png} \caption{All feedback signals (blue) have an alignment with the goal we have in mind (red) and a cycle time. Often one has to pay for high alignment with high cycle time.} \label{fig:feedback} \end{figure*} \textbf{Alignment.} How many assumptions about a given code component are measured by the test function? Aligning the objectives of the test function with our expectations of the code is crucial. If the alignment is not good, the feedback loop cannot give us confidence in the trustworthiness of the component. \textbf{Cycle time.} How much time (or cost) is required to get the feedback? Even a 100\% aligned feedback loop is not very helpful if it takes an unreasonable amount of time to run. Ideally, we want a short cycle time to allow for high-frequency feedback. Writing a test suite is extremely powerful for establishing a feedback loop. Each test in the suite returns a measurement of the code, providing a feedback signal about one particular aspect of it. Responding to the signal of a test suite that has high alignment and low cycle time (running it with high frequency) establishes a strong feedback loop. Readable code is quicker to understand and more reliable, accelerating the cycle time. As discussed previously, data scientists should care about \textit{both} the models' efficacy and the code's correctness. Therefore, we need feedback loops that measure both. We measure a DSS's efficacy by evaluating our model on a test set and measure its correctness (or trustworthiness) with a test suite and by making the code as readable as possible.
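As a concrete sketch of such a correctness feedback loop, the following hand-rolled property-based test checks a hypothetical normalization component using only the standard library; in practice, \texttt{hypothesis} would generate the inputs and \texttt{pytest} would run the suite:

```python
# Sketch of property-based testing: instead of asserting exact outputs, we
# generate random inputs and check properties that must always hold.
# `normalize` is a hypothetical pipeline component used for illustration.
import random

def normalize(xs):
    """Min-max scale a non-empty list of floats to the interval [0, 1]."""
    lo, hi = min(xs), max(xs)
    if lo == hi:  # constant input: define the output as all zeros
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

def test_normalize_properties(trials=1000):
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
        ys = normalize(xs)
        # Properties known a priori, even without knowing the exact outputs:
        assert len(ys) == len(xs)
        assert all(0.0 <= y <= 1.0 for y in ys)

test_normalize_properties()
```

Running such a suite with high frequency, e.g., in a continuous integration pipeline, keeps the cycle time of this correctness feedback loop short.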
\subsection*{Software architecture for data science systems} Good software architecture helps reduce the number of components connected to an incremental addition to the system, reducing the complexity of building the system. An additional advantage of good architecture is that it dramatically improves the readability of code and keeps the codebase flexible for future developments~\cite{lakshmanan2020machine}. We argue that the crucial architecture concept for DSSs is the idea of horizontal layers, as shown in Figure~\ref{fig:cake}. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/cakes.png} \caption{The growth of a DSS over time. You must start by building all layers of your DSS as thin as possible to support the cherry on top as early as possible. This ``steel thread'' is necessary to establish an efficacy feedback loop.} \label{fig:cake} \end{figure*} \textbf{Horizontal software layers} are the different components of an analysis pipeline, e.g., data loading, preprocessing, model training, and evaluation. Each of these layers can be seen as a component in the software and should have as few external connections as possible. Each layer is a pocket of complexity, hiding its internal complexity from the other components in the system. Feature and model engineering are two of the most important tasks in machine learning, and we can interpret both as asking questions about the data. The ultimate question we often ask is: how well can I predict some labels with given features, particular preprocessing, and a specific model? Answering this requires coding the whole pipeline and then testing how well it works. This is the opposite of the agile approach and forces us to wait a long time for an efficacy feedback signal. Therefore, we recommend organizing the pipeline in horizontal layers but building each layer in a minimalistic, incomplete fashion so that the basic connections between the layers are established early on in the project (Figure~\ref{fig:cake}).
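A thin end-to-end version of such a pipeline might look as follows; this is a hypothetical sketch with synthetic data, in which every layer exists from day one but is deliberately minimal:

```python
# Hypothetical "steel thread": all horizontal layers (data loading,
# preprocessing, model, evaluation) exist from the start, each as thin as
# possible, so the end-to-end efficacy feedback loop is available immediately.
import random

def load(n=200, seed=0):
    """Data layer: synthetic stand-in for real data loading."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, 10.0) for _ in range(n)]
    ys = [2.0 * x + rng.gauss(0.0, 1.0) for x in xs]
    return xs, ys

def preprocess(xs):
    """Preprocessing layer: the identity for now, to be thickened later."""
    return xs

def fit(xs, ys):
    """Model layer: least-squares slope of a through-origin linear model."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def evaluate(slope, xs, ys):
    """Evaluation layer: mean squared error, our efficacy feedback signal."""
    return sum((y - slope * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = load()
slope = fit(preprocess(xs), ys)
print(f"slope={slope:.2f}, mse={evaluate(slope, xs, ys):.2f}")
```

Each layer can later be replaced independently, e.g., by real data loading, engineered features, or a stronger model, without touching the others, while the evaluation layer provides the efficacy signal from day one.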
\textit{We think this point is crucial to quickly establish a tight, highly aligned feedback loop.} Without this feedback loop, even feature preprocessing becomes a potential `fishing expedition', as you cannot be sure whether it improves the outcome. This also justifies why using a simple method like linear regression first is a good idea when fitting a predictive model. It is often claimed that you should do this to avoid overfitting, but this is clearly not the case. The key reason is that linear regression is easy to implement and fast to run, enabling you to rapidly establish the first feedback loop with a short cycle time. In SE terms, this could be called a minimum viable product. For data pipelines, this is often called a steel thread~\cite{playbook}, as it bootstraps a stable path that one can gradually extend to build a more complete pipeline. \subsection*{Data $*$ Code = Complexity$^2$} Data scientists must be tolerant not only of the complexities of code but also of the complexity of data and the additional complexities introduced by applying code to data. However well designed the code, the old adage of garbage-in-garbage-out still holds. SE allows for mastery of the complexities of code, but combining code with data results in complexity on top of complexity, and constructing a DSS can resemble balancing a stick on top of a stick. Implementing tests not only for code but also for data~\cite{baumgartner2021, recipe} is clearly a crucial and powerful tool for a data scientist; we want to emphasize again that \textbf{tests turn assumptions into lasting certainties}. It is also crucial to plot the data as often as possible. Looking at plots is a feedback loop with high throughput that can give you high alignment. However, looking at plots is time-consuming, i.e., it has a high cycle time. We recommend extracting the relevant information from what you see in the plots and writing tests based on that.
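For example, an observation from a plot, e.g., that ages cluster between 0 and 120 and incomes are never negative, can be hard-coded as a data test. The column names and ranges below are hypothetical; dedicated libraries such as \texttt{pandera} provide a declarative version of the same idea:

```python
# Sketch of turning plot observations into lasting data tests. The checks
# below encode hypothetical assumptions about a tabular dataset; if future
# data violates them, the DSS fails loudly instead of silently.
import math

def check_assumptions(records):
    """Raise AssertionError if the data violates our hard-coded assumptions."""
    assert records, "dataset must not be empty"
    for r in records:
        age, income = r["age"], r["income"]
        assert not math.isnan(age) and not math.isnan(income), "missing value"
        assert 0 <= age <= 120, f"age out of plausible range: {age}"
        assert income >= 0, f"income must be non-negative: {income}"
    return True

# Passes silently on data that matches the assumptions:
check_assumptions([{"age": 34.0, "income": 52000.0}])
```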
It is critical to appreciate that DSSs often fail silently~\cite{recipe}, as we do not know what the data knows. They should not just churn data but also highlight issues and violated assumptions to the developer. It can seem like everything is working, but in reality, the code is buggy, with machine learning code in general, and deep learning code in particular, failing silently in many unexpected ways. For example, one likely would not notice if early layers of a convolutional neural network's architecture were buggy and their weights not updated during training. Questions are asked of the data by running experiments. Like in a good conversation, you must listen to the answers carefully and adapt your future responses and questions accordingly. That does not mean your question-generating strategy has to be greedy, but it does have to be iterative. On the one hand, iterative work unlocks the power of feedback loops, which we need when working with complex/real-world data. On the other hand, this requires agility in how you interact with the data, i.e., in your software. \section*{Conclusion} We now highlight the most important points. One has to \textbf{grow DSSs} incrementally. Feedback loops are a prerequisite for feature engineering, model development, and everything else. \textbf{Feedback loops allow one to move faster, further, and more confidently.} \textbf{Correctness and efficacy are different things that require different feedback loops.} The most critical feedback loops for correctness are writing and running a \textbf{test suite} and writing the most \textbf{comprehensible code} possible. The most important point for building a feedback loop for efficacy is to establish it early by growing the \textbf{entire data pipeline as early and thin as possible}. We note that (almost) no feedback loop is perfectly aligned; still, feedback loops are essential.
We want to warn that a subtle problem can arise when iterating on misaligned feedback loops: overfitting, also known as Goodhart's law~\cite{goodhart1984problems, hoskin1996awful}, which states that every measure that becomes a target ceases to be a good measure. Overfitting is predominantly a problem for efficacy feedback loops. As discussed in~\cite{muller2019tyranny}, people and processes optimizing perverse incentives and misaligned feedback tend to (un-)consciously ``play the system.'' This overfitting, e.g., to the validation set, can happen to the entire DSS, not just the model. While researchers might be aware of this problem when training models, they are often unaware of it for DSSs in general. The same countermeasures to model overfitting apply to DSSs, e.g., the use of holdout test sets that are not touched during development. Also, we want to reemphasize that this is a \textbf{socio-technical problem}: despite endeavors like Zenodo or SoftwareX, academia often lacks, or even reverses, incentive structures to create and publish high-quality DSSs. We must, therefore, improve the alignment of incentive structures in academia. \subsection*{Acknowledgements} There is no direct funding for this study, but the authors are grateful for the EU/EFPIA Innovative Medicines Initiative project DRAGON (101005122) (S.D., M.R., AIX-COVNET, C.-B.S.), the Trinity Challenge BloodCounts! project (M.R., J.G., C.-B.S.), the EPSRC Cambridge Mathematics of Information in Healthcare Hub EP/T017961/1 (M.R., J.H.F.R., J.A.D.A, C.-B.S.), the Cantab Capital Institute for the Mathematics of Information (C.-B.S.), the European Research Council under the European Union’s Horizon 2020 research and innovation programme, grant agreement no.
777826 (C.-B.S.), the Alan Turing Institute (C.-B.S.), Wellcome Trust (J.H.F.R.), Cancer Research UK Cambridge Centre (C9685/A25177) (C.-B.S.), British Heart Foundation (J.H.F.R.), the NIHR Cambridge Biomedical Research Centre (J.H.F.R.), HEFCE (J.H.F.R.). In addition, C.-B.S. acknowledges support from the Leverhulme Trust project on ‘Breaking the non-convexity barrier’, the Philip Leverhulme Prize, the EPSRC grants EP/S026045/1 and EP/T003553/1 and the Wellcome Innovator Award RG98755. Finally, the AIX-COVNET collaboration is also grateful to Intel for financial support. We also want to thank Jan-Christoph Lohmann, Shaun Griffith, and Jeremy Tang for their helpful comments and discussions. \subsection*{AIX-COVNET} Michael Roberts$^{1}$, S{\"{o}}ren Dittmer$^{1,6}$, Ian Selby$^{7}$, Anna Breger$^{1,8}$, Matthew Thorpe$^{9}$, Julian Gilbey$^{1}$, Jonathan R. Weir-McCall$^{7,10}$, Effrossyni Gkrania-Klotsas$^{3}$, Anna Korhonen$^{11}$, Emily Jefferson$^{12}$, Georg Langs$^{13}$, Guang Yang$^{14}$, Helmut Prosch$^{13}$, Jacobus Preller$^{3}$, Jan Stanczuk$^{1}$, Jing Tang$^{15}$, Judith Babar$^{3}$, Lorena Escudero Sánchez$^{7}$, Philip Teare$^{16}$, Mishal Patel$^{16,17}$, Marcel Wassin$^{18}$, Markus Holzer$^{18}$, Nicholas Walton$^{19}$, Pietro Li{\'{o}}$^{20}$, Tolou Shadbahr$^{15}$, James H. F. Rudd$^{4}$, John A.D. Aston$^{5}$, Evis Sala$^{7}$ and Carola-Bibiane Schönlieb$^{1}$.\\ \noindent ${}^{7}$ Department of Radiology, University of Cambridge, Cambridge, UK. ${}^{8}$ Faculty of Mathematics, University of Vienna, Austria. ${}^{9}$ Department of Mathematics, University of Manchester, Manchester, UK. ${}^{10}$ Royal Papworth Hospital, Royal Papworth Hospital NHS Foundation Trust, Cambridge, UK. ${}^{11}$ Language Technology Laboratory, University of Cambridge, Cambridge, UK. ${}^{12}$ Population Health and Genomics, School of Medicine, University of Dundee, Dundee, UK.
${}^{13}$ Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Vienna, Austria. ${}^{14}$ National Heart and Lung Institute, Imperial College London, London, UK. ${}^{15}$ Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland. ${}^{16}$ Data Science \& Artificial Intelligence, AstraZeneca, Cambridge, UK. ${}^{17}$ Clinical Pharmacology \& Safety Sciences, AstraZeneca, Cambridge, UK. ${}^{18}$ contextflow GmbH, Vienna, Austria. ${}^{19}$ Institute of Astronomy, University of Cambridge, Cambridge, UK. ${}^{20}$ Department of Computer Science and Technology, University of Cambridge, Cambridge, UK. \end{multicols} \medskip \printbibliography \end{document}
\section{Model configuration} \label{app:architecture} \subsection{Architecture} We picked the hyperparameters and chose the model architecture for all three models by considering only their performance (NMI) on the Computer Science\xspace dataset. No additional tuning was done for the other datasets. \textbf{GNN-based model. (\Eqref{eq:gcn})} We use a 2-layer graph convolutional neural network with a hidden size of 128 and an output (second) layer of size $C$ (the number of communities to detect). We apply batch normalization after the first graph convolution layer. Dropout with 50\% keep probability is applied before every layer. We add weight decay to both weight matrices with regularization strength $\lambda = 10^{-2}$. The feature matrix ${\bm{X}}$ (or ${\bm{A}}$, in case we are working without attributes) is normalized such that every row has unit $L_2$-norm. We also experimented with the Jumping Knowledge Network \cite{xu2018jknet} and GraphSAGE \cite{hamilton2017graphsage} architectures, but they led to lower NMI scores on the Computer Science\xspace dataset. \textbf{MLP-based model. (\Eqref{eq:mlp})} We found the MLP model to perform best with the same configuration as described above for the GCN model (i.e., same regularization strength, hidden size, dropout, and batch norm). \textbf{Free variable model. (\Eqref{eq:free-var})} We considered two initialization strategies for the free variable model: (1) locally minimal neighborhoods \cite{gleich2012neighborhoods} --- the strategy used by the BigCLAM\xspace and CESNA\xspace models --- and (2) initializing ${\bm{F}}$ to the output of an untrained GCN. We found strategy (1) to consistently provide better results. \subsection{Training} \textbf{GNN- and MLP-based models. } We train both models using the Adam optimizer \cite{kingma2014adam} with default parameters. The learning rate is set to $10^{-3}$. We use the following early stopping strategy: every 50 epochs, we compute the full training loss (\Eqref{eq:loss}).
We stop optimization if there has been no improvement in the loss for the last $10 \times 50 = 500$ iterations, or after 5000 epochs, whichever happens first. \textbf{Free variable model. } We use the Adam optimizer with a learning rate of $5 \cdot 10^{-2}$. After every gradient step, we project the ${\bm{F}}$ matrix to ensure that it stays non-negative: $F_{uc} = \max \{0, F_{uc}\}$. We use the same early stopping strategy as for the GNN and MLP models. \section{Baselines} \label{app:baselines} \begin{table}[h] \caption{Overview of the baselines. See text for the discussion of the scalability of CESNA\xspace.} \label{tab:baselines} \begin{center} \begin{tabular}{llcc} \toprule Method & Model type & Attributed & Scalable\\ \midrule BigCLAM\xspace \cite{yang2013bigclam} & Probabilistic & & {\large \checkmark} \\ CESNA\xspace \cite{yang2013cesna} & Probabilistic & {\large \checkmark} & ${\large \checkmark}^{*}$\\ SNetOC\xspace \cite{todeschini2016exchangeable} & Probabilistic & & \\ EPM\xspace \cite{zhou2015bplink} & Probabilistic & & \\ CDE\xspace \cite{li2018cde} & NMF & {\large \checkmark} & \\ SNMF\xspace \cite{wang2011nmf} & NMF & & {\large \checkmark}\\ DW/NEO\xspace \cite{perozzi2014deepwalk,whang2015neo} & Deep learning & & \\ G2G/NEO\xspace \cite{bojchevski2018g2g,whang2015neo} & Deep learning & {\large \checkmark} & \\ \midrule NOCD\xspace & \begin{tabular}[x]{@{}l@{}}Deep learning + \\probabilistic\end{tabular} & {\large \checkmark} & {\large \checkmark}\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{itemize} \item We used the reference C++ implementations of BigCLAM\xspace and CESNA\xspace that were provided by the authors (\url{https://github.com/snap-stanford/snap}). Models were used with the default parameter settings for step size, backtracking line search constants, and balancing terms. Since CESNA\xspace can only handle binary attributes, we binarize the original attributes (set the nonzero entries to 1) if they have a different type.
\item We implemented SNMF\xspace ourselves using Python. The ${\bm{F}}$ matrix is initialized by sampling from the $\operatorname{Uniform}[0, 1]$ distribution. We run optimization until the improvement in the reconstruction loss goes below $10^{-4}$ per iteration, or for 300 epochs, whichever happens first. The results for SNMF\xspace are averaged over 50 random initializations. \item We use the Matlab implementation of CDE\xspace provided by the authors. We set the hyperparameters to $\alpha = 1$, $\beta = 2$, $\kappa = 5$, as recommended in the paper, and run optimization for 20 iterations. \item For SNetOC\xspace and EPM\xspace, we use the Matlab implementations provided by the authors with the default hyperparameter settings. The implementation of EPM\xspace provides two options: EPM and HEPM. We found EPM to produce better NMI scores, so we used it for all the experiments. \item We use the TensorFlow implementation of Graph2Gauss\xspace provided by the authors. We set the dimension of the embeddings to 128, and only use the ${\bm{\mu}}$ matrix as embeddings. \item We implemented DeepWalk\xspace ourselves: we sample 10 random walks of length 80 from each node and use the Word2Vec implementation from Gensim (\url{https://radimrehurek.com/gensim/}) to generate the embeddings. The dimension of the embeddings is set to 128. \item For NEO-K-Means\xspace, we use the Matlab code provided by the authors. We let the parameters $\alpha$ and $\beta$ be selected automatically using the built-in procedure. \end{itemize} \section{Convergence of the stochastic sampling procedure} \label{app:convergence} Instead of using all pairs $u, v \in V$ when computing the gradients $\nabla_{{\bm{\theta}}} {\mathcal{L}}$ at every iteration, we sample $S$ edges and $S$ non-edges uniformly at random. We perform the following experiment to ensure that our training procedure converges to the same result as when using the full objective. \textbf{Experimental setup.
} We train the model on the Computer Science\xspace dataset and compare the full-batch optimization procedure with stochastic gradient descent for different choices of the batch size $S$. Starting from the same initialization, we measure the full loss (\Eqref{eq:loss}) over the iterations. \textbf{Results.} \Figref{fig:stochastic} shows training curves for different batch sizes $S \in \{1000, 2500, 5000, 10000, 20000\}$, as well as for full-batch training. The horizontal axis of the plot displays the number of entries of the adjacency matrix accessed. One iteration of stochastic training accesses $2S$ entries $A_{ij}$, and one iteration of full-batch training accesses $2N + 2M$ entries, since we are using the caching trick from \cite{yang2013bigclam}. As we see, the stochastic training procedure is stable: for batch sizes $S=10K$ and $S=20K$, the loss converges very closely to the value achieved by full-batch training. \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{figs/sgd.png} \end{center} \caption{Convergence of the stochastic sampling procedure.} \label{fig:stochastic} \end{figure} \section{Datasets} \label{app:datasets} \begin{table}[h] \caption{Dataset statistics.
$K$ stands for 1000.} \label{tab:datasets} \begin{center} \scalebox{0.9}{ \begin{tabular}{llrrrr} \textbf{Dataset} & \textbf{Network type} & $N$ & $M$ & $D$ & $C$ \\ \midrule Facebook 348 & Social & $224$ & $3.2K$ & $21$ & 14 \\ Facebook 414 & Social & $150$ & $1.7K$ & $16$ & 7 \\ Facebook 686 & Social & $168$ & $1.6K$ & $9$ & 14 \\ Facebook 698 & Social & $61$ & $270$ & $6$ & 13 \\ Facebook 1684 & Social & $786$ & $14.0K$ & $15$ & 17 \\ Facebook 1912 & Social & $747$ & $30.0K$ & $29$ & 46 \\ Computer Science\xspace & Co-authorship & $22.0K$ & $96.8K$ & $7.8K$ & 18 \\ Chemistry\xspace & Co-authorship & $35.4K$ & $157.4K$ & $4.9K$ & 14 \\ Medicine\xspace & Co-authorship & $63.3K$ & $810.3K$ & $5.5K$ & 17 \\ Engineering\xspace & Co-authorship & $14.9K$ & $49.3K$ & $4.8K$ & 16 \\ \end{tabular} } \end{center} \end{table} \section{Hardware and software} \label{app:hardware} The experiments were performed on a computer running Ubuntu 16.04LTS with 2x Intel(R) Xeon(R) E5-2630 v4 @ 2.20GHz CPUs, 256GB of RAM, and 4x GTX1080Ti GPUs. Note that training and inference were done using only a single GPU at a time for all models. The NOCD\xspace model was implemented using TensorFlow v1.1.2~\cite{abadi2016tensorflow}. \section{Quantifying agreement between overlapping communities} \label{app:metrics} A popular choice for quantifying the agreement between true and predicted overlapping communities is the symmetric agreement score \cite{yang2013bigclam,yang2013cesna,li2018cde}.
Given the ground-truth communities ${\mathcal{S}}^{*} = \{S_i^*\}_i$ and the predicted communities ${\mathcal{S}} = \{S_j\}_j$, the symmetric score is defined as \begin{align} \label{eq:sym-score} \frac{1}{2 |{\mathcal{S}}^*|} \sum_{S_i^* \in {\mathcal{S}}^*} \max_{S_j \in {\mathcal{S}}} \delta(S_i^*, S_j) + \frac{1}{2 |{\mathcal{S}}|} \sum_{S_j \in {\mathcal{S}}} \max_{S_i^* \in {\mathcal{S}}^*} \delta(S_i^*, S_j) \end{align} where $\delta(S_i^*, S_j)$ is a similarity measure between sets, such as the $F_1$-score or the Jaccard similarity. We discovered that these frequently used measures can assign arbitrarily high scores to completely uninformative community assignments, as you can see in the following simple example. Let the ground truth communities be $S_1^* = \{v_1, ..., v_K\}$ and $S_2^* = \{v_{N - K + 1}, ..., v_N\}$ ($K$ nodes in each community), and let the algorithm assign all the nodes to a single community $S_1 = V = \{v_1, ..., v_N\}$. While this predicted community assignment is completely uninformative, it will achieve a symmetric $F_1$-score of $\frac{2K}{N + K}$ and a symmetric Jaccard similarity of $\frac{K}{N}$ (e.g., if $K = 600$ and $N = 1000$, the scores will be $75\%$ and $60\%$, respectively). These high numbers might give a false impression that the algorithm has learned something useful, while that is clearly not the case. As an alternative, we suggest using overlapping normalized mutual information (NMI), as defined in \cite{mcdaid2011normalized}. NMI correctly handles degenerate cases, like the one above, and assigns them a score of 0. \section*{Acknowledgments} This research was supported by the German Research Foundation, Emmy Noether grant GU 1409/2-1. \section{Background} \label{sec:community-detection} Assume that we are given an undirected, unweighted graph ${\bm{G}}$, represented as a binary adjacency matrix ${\bm{A}} \in \{0, 1\}^{N \times N}$.
We denote as $N$ the number of nodes $V = \{1, ..., N\}$, and as $M$ the number of edges $E = \{(u, v) \in V \times V : A_{uv} = 1\}$. Every node might be associated with a $D$-dimensional attribute vector, which can be represented as an attribute matrix ${\bm{X}} \in \mathbb{R}^{N \times D}$. The goal of overlapping community detection is to assign nodes into $C$ communities. Such an assignment can be represented as a non-negative community affiliation matrix ${\bm{F}} \in \mathbb{R}_{\ge 0}^{N \times C}$, where $F_{uc}$ denotes the strength of node $u$'s membership in community $c$ (with the notable special case of binary assignment ${\bm{F}} \in \{0, 1\}^{N \times C}$). Some nodes may be assigned to no communities, while others may belong to multiple. Even though the notion of ``community'' seems rather intuitive, there is no universally agreed-upon definition of it in the literature. However, most recent works tend to agree with the statement that a community is a group of nodes that have a higher probability of forming edges with each other than with other nodes in the graph \citep{fortunato2016community}. This way, the problem of community detection can be cast in the probabilistic inference framework. Once we posit a community-based generative model $p({\bm{G}} | {\bm{F}})$ for the graph, detecting communities boils down to inferring the unobserved affiliation matrix ${\bm{F}}$ given the observed graph ${\bm{G}}$. Besides the traditional probabilistic view, one can also view community detection through the lens of representation learning. The community affiliation matrix ${\bm{F}}$ can be considered as an embedding of the nodes into $\mathbb{R}_{\ge 0}^C$, with the aim of preserving the graph structure. Given the recent success of representation learning for graphs \citep{cai2018embeddingsurvey}, a question arises: ``Can the advances in deep learning for graphs be used to design better community detection algorithms?''
As we show in \Secref{sec:exp-recovery}, simply combining existing node embedding approaches with overlapping K-means does not lead to satisfactory results. Instead, we propose to combine the probabilistic and representation-learning points of view, and learn the community affiliations in an end-to-end manner using a graph neural network. \section{Discussion \& Future work} We proposed NOCD\xspace --- a graph neural network model for overlapping community detection. The experimental evaluation confirms that the model is accurate, flexible, and scalable. Besides strong empirical results, our work opens interesting follow-up questions. We plan to investigate how the two versions of our model (NOCD-X\xspace and NOCD-G\xspace) can be used to quantify the relevance of attributes to the community structure. Moreover, we plan to assess the inductive performance of NOCD\xspace \cite{hamilton2017graphsage}. To summarize, the results obtained in this paper provide strong evidence that deep learning for graphs deserves more attention as a framework for overlapping community detection. \subsection{Attributes} \label{sec:exp-attributes} \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{figs/histograms.png} \end{center} \caption{Histograms of the reconstruction loss values (upper row) and NMI scores (lower row) achieved by the NOCD-G\xspace (blue) and NOCD-X\xspace (orange) models. The reconstruction loss is highly indicative of the community recovery quality, as measured by NMI.} \label{fig:attributes} \end{figure*} As discussed in \Secref{sec:model-attributes}, since the loss of our model only measures the reconstruction quality, we can directly compare its value between different variants of the model that use different input features. Such a comparison allows us to test whether the node features carry relevant information for community detection, or whether we should rely only on the graph structure.
We are interested in testing the following claim: if NOCD-X\xspace achieves lower reconstruction loss than NOCD-G\xspace on average, then NOCD-X\xspace also achieves better recovery of communities, as measured by NMI, and vice versa. Of course, this only makes sense if the ground truth community labels can actually explain the network structure. \textbf{Experimental setup. } We fit the two variants of our model: NOCD-X\xspace, which uses the node attributes ${\bm{X}}$ as input, and NOCD-G\xspace, which uses ${\bm{A}}$ as input features. We train each model 50 times (with different random initializations), and compute the values of the full reconstruction loss (\Eqref{eq:loss}) and the NMI between predicted and ground truth communities. \textbf{Results. } We visualize the histograms of reconstruction loss values and the respective NMI scores for the two models in Figure \ref{fig:attributes}. We observe that in the case when NOCD-G\xspace achieves significantly better reconstruction loss (Facebook 1684 dataset), it also achieves much better NMI scores. The same is true in the reverse direction --- relatively lower loss values achieved by the attribute-based NOCD-X\xspace translate into higher NMI compared to NOCD-G\xspace (Chemistry\xspace dataset). For the Facebook 1912 dataset, both variants of the model produce very similar values of the reconstruction loss (NOCD-G\xspace is better by a 0.01 margin), which translates into a very similar distribution of NMI scores (with NOCD-G\xspace better on average by only 0.8 percentage points). A similar pattern holds for the baselines as well: as can be seen in Table \ref{tab:recovery}, the datasets for which the reconstruction loss and NMI are much better for NOCD-X\xspace also happen to be the ones where the attribute-driven baselines (CESNA\xspace, G2G/NEO\xspace) mostly outperform their structure-only counterparts (BigCLAM\xspace and DW/NEO\xspace, respectively). This highlights the fact that we can use the two versions of our model to detect when one source of information is more reliable than the other. \subsection{Convergence of the stochastic sampling} \label{sec:exp-convergence} As mentioned in \Secref{sec:model-scalability}, we perform stochastic optimization of the objective function (\Eqref{eq:loss}) to speed up training and lower the memory footprint. Instead of using all pairs $u, v \in V$ when computing the gradients $\nabla_{{\bm{\theta}}} {\mathcal{L}}$ at every iteration, we sample $S$ edges and $S$ non-edges uniformly at random. We perform the following experiment to ensure that our training procedure converges to the same result as when using the full objective. As a side note, we also tried variants of random node sampling \cite{gopalan2012scalable} and node-anchored sampling \cite{bojchevski2018g2g}, but neither of these strategies resulted in faster convergence or sufficiently lower gradient variance to justify their higher computational cost. \textbf{Experimental setup. } We train the model on the Computer Science\xspace dataset and compare the full-batch optimization procedure with stochastic gradient descent for different choices of the batch size $S$. Starting from the same initialization, we measure the full loss (\Eqref{eq:loss}) over the iterations. \textbf{Results.} \Figref{fig:stochastic} shows training curves for different batch sizes $S \in \{1000, 2500, 5000, 10000, 20000\}$, as well as for full-batch training. The horizontal axis of the plot displays the number of entries of the adjacency matrix accessed.
One iteration of stochastic training accesses $2S$ entries $A_{ij}$, and one iteration of full-batch training accesses $2N + 2M$ entries, since we are using the caching trick (\Eqref{eq:bigclam-caching}). As we see, the stochastic training procedure is stable: for batch sizes $S=10K$ and $S=20K$, the loss converges very closely to the value achieved by full-batch training. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{figs/sgd.png} \end{center} \caption{Convergence of the stochastic sampling procedure.} \label{fig:stochastic} \end{figure} \subsection{Inductive community detection} \label{sec:exp-inductive} So far, we have observed that the NOCD\xspace model is able to recover communities with high precision within the graphs it is trained on. Since our model learns a mapping from node features to communities, it should also be able to predict communities inductively for nodes not seen during training. We would like to evaluate this property of the model, and therefore conduct the following experiment. \textbf{Experimental setup. } We use $t \in \{10, ..., 90\}$ percent of the nodes of the graph, chosen at random, as the test set. The NOCD-X\xspace and NOCD-G\xspace models are trained using only the remaining nodes. Once the parameters ${\bm{\theta}}$ are learned, we perform a forward pass of each model using the full adjacency and attribute matrices (\Secref{sec:model-inductiveness}). We then compute the agreement between the predicted communities for the nodes in the test set and the ground truth, using NMI as the metric. We compare with the MLP model (\Eqref{eq:mlp}) as a baseline. The results are averaged over 50 random seeds (which includes 50 different samples of test nodes for each $t$). \textbf{Results. } \Figref{fig:inductive} shows the results for the Computer Science\xspace, Medicine\xspace and Facebook 1912 datasets.
We notice that when using node attributes, the performance of NOCD-X\xspace remains stable, even as the test size increases and the training graph becomes smaller. This means that we could train the model using only a fraction of all nodes and then generate predictions for the entire network, while preserving the high quality of community detection. This opens up the possibility of scaling the already efficient NOCD\xspace model to even larger graphs. In general, the inductive performance of the different variants of the model strongly correlates with their respective transductive NMI scores. Surprisingly, on the Facebook 1912 dataset, the model can achieve an even higher score on the test set than is possible in the full-graph setting. This is explained by the relatively small size of the graph ($N = 747$), which means that the test set contains only a few nodes. \begin{figure*}[t] \begin{center} \includegraphics[width=1\textwidth]{figs/inductive.png} \end{center} \caption{Inductive community detection results for four model variants: NOCD-X\xspace (orange), NOCD-G\xspace (blue), MLP with ${\bm{X}}$ as input (red) and MLP with ${\bm{A}}$ as input (green). The dashed line indicates the average transductive NMI score of the NOCD-X\xspace model.} \label{fig:inductive} \end{figure*} \subsection{Recovery of ground-truth communities} \label{sec:exp-recovery} We evaluate the NOCD\xspace model by checking how well it recovers communities in graphs with known ground-truth communities. \textbf{Baselines. } In our selection of baselines, we chose methods that are based on different paradigms for overlapping community detection: probabilistic inference, non-negative matrix factorization (NMF) and deep learning. Some methods incorporate the attributes, while others rely solely on the graph structure. BigCLAM\xspace \citep{yang2013bigclam}, EPM\xspace \cite{zhou2015bplink} and SNetOC\xspace \cite{todeschini2016exchangeable} are based on the Bernoulli--Poisson model.
BigCLAM\xspace learns ${\bm{F}}$ using coordinate ascent, while EPM\xspace and SNetOC\xspace perform inference with Markov chain Monte Carlo (MCMC). CESNA\xspace \citep{yang2013cesna} is an extension of BigCLAM\xspace that additionally models node attributes. SNMF\xspace \citep{wang2011nmf} and CDE\xspace \citep{li2018cde} are NMF approaches for overlapping community detection. We additionally implemented two methods based on neural graph embedding. First, we compute embeddings for all the nodes in the given graph using two established approaches -- DeepWalk\xspace \cite{perozzi2014deepwalk} and Graph2Gauss\xspace \cite{bojchevski2018g2g}. Graph2Gauss\xspace takes into account both the node features and the graph structure, while DeepWalk\xspace only uses the structure. Then, we cluster the nodes using Non-exhaustive Overlapping (NEO) K-Means \cite{whang2015neo}, which allows assigning them to overlapping communities. We denote the methods based on DeepWalk\xspace and Graph2Gauss\xspace as DW/NEO\xspace and G2G/NEO\xspace, respectively. To ensure a fair comparison, all methods were given the true number of communities $C$. Other hyperparameters were set to their recommended values. An overview of all baseline methods, as well as their configurations, is provided in Appendix \ref{app:baselines}. \begin{table*}[t] \caption{Comparison of the GNN-based model against simpler baselines.
The Multilayer perceptron (MLP) and Free Variable (FV) models optimize the same objective (\Eqref{eq:loss}), but represent the community affiliations ${\bm{F}}$ differently.} \label{tab:simple} \begin{center} \vspace{-3mm} \begin{tabular}{lccccc} {} & \multicolumn{2}{c}{Attributes} & \multicolumn{2}{c}{Adjacency} & \\ Dataset & GNN & MLP & GNN & MLP & Free variable \\ \toprule Facebook 348 & \textbf{36.4 $\pm$ 2.0} & 11.7 $\pm$ 2.7 & 34.7 $\pm$ 1.5 & 27.7 $\pm$ 1.6 & 25.7 $\pm$ 1.3 \\ Facebook 414 & \textbf{59.8 $\pm$ 1.8} & 22.1 $\pm$ 3.1 & 56.3 $\pm$ 2.4 & 48.2 $\pm$ 1.7 & 49.2 $\pm$ 0.4 \\ Facebook 686 & \textbf{21.0 $\pm$ 0.9} & 1.5 $\pm$ 0.7 & 20.6 $\pm$ 1.4 & 19.8 $\pm$ 1.1 & 13.5 $\pm$ 0.9 \\ Facebook 698 & 41.7 $\pm$ 3.6 & 1.4 $\pm$ 1.3 & \textbf{49.3 $\pm$ 3.4} & 42.2 $\pm$ 2.7 & 41.5 $\pm$ 1.5 \\ Facebook 1684 & 26.1 $\pm$ 1.3 & 17.1 $\pm$ 2.0 & \textbf{34.7 $\pm$ 2.6} & 31.9 $\pm$ 2.2 & 22.3 $\pm$ 1.4 \\ Facebook 1912 & 35.6 $\pm$ 1.3 & 17.5 $\pm$ 1.9 & \textbf{36.8 $\pm$ 1.6} & 33.3 $\pm$ 1.4 & 18.3 $\pm$ 1.2 \\ Chemistry\xspace & 45.3 $\pm$ 2.3 & \textbf{46.6 $\pm$ 2.9} & 22.6 $\pm$ 3.0 & 12.1 $\pm$ 4.0 & 5.2 $\pm$ 2.3 \\ Computer Science\xspace & \textbf{50.2 $\pm$ 2.0} & 49.2 $\pm$ 2.0 & 34.2 $\pm$ 2.3 & 31.9 $\pm$ 3.8 & 15.1 $\pm$ 2.2 \\ Engineering\xspace & 39.1 $\pm$ 4.5 & \textbf{44.5 $\pm$ 3.2} & 18.4 $\pm$ 1.9 & 15.8 $\pm$ 2.1 & 7.6 $\pm$ 2.2 \\ Medicine\xspace & \textbf{37.8 $\pm$ 2.8} & 31.8 $\pm$ 2.1 & 27.4 $\pm$ 2.5 & 23.6 $\pm$ 2.1 & 9.4 $\pm$ 2.3 \\ \end{tabular} \vspace{-4mm} \end{center} \end{table*} \textbf{Results: Recovery. } Table \ref{tab:recovery} shows how well the different methods recover the ground-truth communities. Either NOCD-X\xspace or NOCD-G\xspace achieves the highest score for 9 out of 10 datasets.
We found that the NMI of both methods is strongly correlated with the reconstruction loss (\Eqref{eq:loss}): NOCD-G\xspace outperforms NOCD-X\xspace in terms of NMI exactly in those cases when NOCD-G\xspace achieves a lower reconstruction loss. This means that we can pick the better performing of the two methods in a completely unsupervised fashion by only considering the loss values. \textbf{Results: Hyperparameter sensitivity. } It is worth noting again that both NOCD\xspace models use the same hyperparameter configuration, which was tuned only on the Computer Science\xspace dataset ($N=22K, M=96.8K, D=7.8K$). Nevertheless, both models achieve excellent results on datasets with dramatically different characteristics (e.g. Facebook 414 with $N=150, M=1.7K, D=16$). \textbf{Results: Scalability. } In addition to displaying excellent recovery results, NOCD\xspace is highly scalable. NOCD\xspace is trained on the Medicine\xspace dataset (63K nodes, 810K edges) using a single GTX1080Ti GPU in 3 minutes, while using only 750MB of GPU RAM (out of 11GB available). See Appendix \ref{app:hardware} for more details on hardware. EPM\xspace, SNetOC\xspace and CDE\xspace do not scale to larger datasets, since they instantiate very large dense matrices during computations. SNMF\xspace and BigCLAM\xspace, while being the most scalable methods with lower runtimes than NOCD\xspace, achieved relatively low recovery scores. Generating the embeddings with DeepWalk\xspace and Graph2Gauss\xspace can be done very efficiently. However, overlapping clustering of the embeddings with NEO-K-Means\xspace was the bottleneck, leading to runtimes exceeding several hours for the large datasets. As the authors of CESNA\xspace point out \cite{yang2013cesna}, the method scales to large graphs if the number of attributes $D$ is low. However, as $D$ increases, which is common for modern datasets, the method scales rather poorly.
This is confirmed by our findings --- on the Medicine\xspace dataset, CESNA\xspace (parallel version with 18 threads) took 2 hours to converge. \subsection{Do we really need a graph neural network?} \label{sec:exp-simple} Our GNN-based model achieved superior performance in community recovery. Intuitively, it makes sense to use a GNN for the reasons laid out in \Secref{sec:model-architecture}. Nevertheless, we should ask whether it is possible to achieve comparable results with a simpler model. To answer this question, we consider the following two baselines. \textbf{Multilayer perceptron (MLP):} Instead of a GCN (\Eqref{eq:gcn}), we use a simple fully-connected neural network to generate ${\bm{F}}$. \begin{equation} {\bm{F}} = \operatorname{MLP}_{\bm{\theta}}({\bm{X}}) = \textnormal{ReLU}(\textnormal{ReLU}({\bm{X}} {\bm{W}}^{(1)}) {\bm{W}}^{(2)}) \label{eq:mlp} \end{equation} This is related to the model proposed by \cite{hu2017deep}. As for the GCN-based model, we optimize the weights of the MLP, ${\bm{\theta}} = \{{\bm{W}}^{(1)}, {\bm{W}}^{(2)}\}$, to minimize the objective \Eqref{eq:loss}. \begin{equation} \min_{{\bm{\theta}}} {\mathcal{L}}(\operatorname{MLP}_{\theta}({\bm{X}})) \label{eq:mlp-obj} \end{equation} \textbf{Free variable (FV): } As an even simpler baseline, we consider treating ${\bm{F}}$ as a free variable in the optimization and solve \begin{equation} \label{eq:free-var} \min_{{\bm{F}} \ge 0} {\mathcal{L}}({\bm{F}}) \end{equation} We optimize the objective using projected gradient descent with Adam \cite{kingma2014adam}, and update all entries of ${\bm{F}}$ at each iteration. This can be seen as an improved version of the BigCLAM\xspace model: the original BigCLAM\xspace uses the imbalanced objective (\Eqref{eq:bp-nll}) and optimizes ${\bm{F}}$ using coordinate ascent with backtracking line search. \textbf{Setup.
} For both the MLP and FV models, we tuned the hyperparameters on the Computer Science dataset (just as we did for the GNN model), and used the same configuration for all datasets. Details about the configuration of both models are provided in Appendix \ref{app:architecture}. As before, we consider the variants of the GNN-based and MLP-based models that use either ${\bm{X}}$ or ${\bm{A}}$ as input features. We compare the NMI scores obtained by the models on all datasets. \textbf{Results. } The results for all models are shown in Table \ref{tab:simple}. The two neural-network-based models consistently outperform the free variable model. When node attributes ${\bm{X}}$ are used, the MLP-based model outperforms the GNN version on the Chemistry\xspace and Engineering\xspace datasets, where the node features alone provide a strong signal. However, the MLP achieves extremely low scores on the Facebook 686 and Facebook 698 datasets, where the attributes are not as reliable. On the other hand, when ${\bm{A}}$ is used as input, the GNN-based model always outperforms the MLP. Combined, these findings confirm our hypothesis that a graph-based neural network architecture is indeed beneficial for the community detection task. \section{Evaluation} \label{sec:evaluation} \input{sections/exp_preliminaries} \input{sections/exp_recovery} \input{sections/exp_simple} \section{Introduction} \label{sec:introduction} Graphs provide a natural way of representing complex real-world systems. Community detection methods are an essential tool for understanding the structure and behavior of these systems. Detecting communities allows us to analyze social networks \citep{girvan2002social}, detect fraud \citep{pinheiro2012fraud}, discover functional units of the brain \citep{garcia2018brain}, and predict functions of proteins \citep{song2009protein}.
The problem of community detection has attracted significant attention from the research community, and numerous models and algorithms have been proposed \citep{xie2013survey}. In recent years, the emerging field of deep learning for graphs has shown great promise in designing more accurate and more scalable algorithms. While deep learning approaches have achieved unprecedented results in graph-related tasks like link prediction and node classification \cite{cai2018embeddingsurvey}, relatively little attention has been dedicated to their application to unsupervised community detection. Several methods have been proposed \citep{yang2016deepmodularity, choong2018vgaecd, cavallari2017come}, but they all have a common drawback: they only focus on the special case of disjoint (non-overlapping) communities. However, it is well known that communities in real networks are overlapping \cite{yang2014structure}. Handling overlapping communities is a requirement not yet met by existing deep learning approaches for community detection. In this paper we address this research gap and propose an end-to-end deep learning model capable of detecting overlapping communities. To summarize, our main contributions are: \begin{itemize} \item Model: We introduce a graph neural network (GNN) based model for overlapping community detection. \item Data: We introduce 4 new datasets for overlapping community detection that can act as a benchmark and stimulate future research in this area. \item Experiments: We perform a thorough evaluation of our model and show its superior performance compared to established methods for overlapping community detection, both in terms of speed and accuracy. We highlight the importance of the GNN component of our model through an ablation study. \end{itemize} \section{The NOCD\xspace model} \label{sec:model} Here, we present the Neural Overlapping Community Detection (NOCD\xspace) model.
The core idea of our approach is to combine the power of GNNs with the Bernoulli--Poisson probabilistic model. \subsection{Bernoulli--Poisson model} The Bernoulli--Poisson (BP) model \cite{yang2013bigclam,zhou2015bplink,todeschini2016exchangeable} is a graph generative model that allows for overlapping communities. According to the BP model, the graph is generated as follows. Given the affiliations ${\bm{F}} \in \mathbb{R}_{\ge 0}^{N \times C}$, adjacency matrix entries $A_{uv}$ are sampled i.i.d. as \begin{align} \label{eq:bigclam} A_{uv} \sim \operatorname{Bernoulli}(1 - \exp(-{\bm{F}}_u {\bm{F}}_v^T)) \end{align} where ${\bm{F}}_u$ is the row vector of community affiliations of node $u$ (the $u$-th row of the matrix ${\bm{F}}$). Intuitively, the more communities nodes $u$ and $v$ have in common (i.e. the higher the dot product ${\bm{F}}_u {\bm{F}}_v^T$), the more likely they are to be connected by an edge. This model has a number of desirable properties: it can produce various community topologies (e.g. nested, hierarchical), leads to dense overlaps between communities \cite{yang2014structure}, and is computationally efficient (\Secref{sec:model-scalability}). Existing works propose to perform inference in the BP model using maximum likelihood estimation with coordinate ascent \cite{yang2013bigclam,yang2013cesna} or Markov chain Monte Carlo \cite{zhou2015bplink,todeschini2016exchangeable}. \subsection{Model definition} \label{sec:model-architecture} Instead of treating the affiliation matrix ${\bm{F}}$ as a free variable over which optimization is performed, we generate ${\bm{F}}$ with a GNN: \begin{align} \label{eq:model} {\bm{F}} := \operatorname{GNN}_{{\bm{\theta}}} ({\bm{A}}, {\bm{X}}) \end{align} A ReLU nonlinearity is applied element-wise to the output layer to ensure non-negativity of ${\bm{F}}$. See \Secref{sec:evaluation} and Appendix \ref{app:architecture} for details about the GNN architecture.
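To make the generative process in \Eqref{eq:bigclam} concrete, the following minimal sketch samples an adjacency matrix from given affiliations (illustrative only: the affiliation matrix below is a hand-picked toy example, not one produced by the GNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-picked toy affiliation matrix F (N = 4 nodes, C = 2 communities).
F = np.array([[1.5, 0.0],
              [1.2, 0.0],
              [0.8, 0.9],
              [0.0, 1.4]])

# Edge probabilities under the Bernoulli--Poisson model:
#   p(A_uv = 1) = 1 - exp(-F_u . F_v)
P = 1.0 - np.exp(-F @ F.T)
np.fill_diagonal(P, 0.0)  # no self-loops

# Sample a symmetric 0/1 adjacency matrix.
upper = np.triu(rng.random((4, 4)) < P, k=1)
A = (upper | upper.T).astype(int)
```

Here nodes 0 and 1 share a community and connect with probability $1 - e^{-1.8} \approx 0.83$, while nodes 0 and 3 share no community and are never connected, illustrating how overlapping affiliations translate into edge probabilities.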
The negative log-likelihood of the Bernoulli--Poisson model is \begin{align} \label{eq:bp-nll} -\log p({\bm{A}} | {\bm{F}}) &= -\hspace{-3mm}\sum_{(u, v) \in E} \log (1 - \exp(-{\bm{F}}_u {\bm{F}}_v^T)) + \hspace{-2mm}\sum_{(u, v) \notin E} {\bm{F}}_u {\bm{F}}_v^T \end{align} Real-world graphs are usually extremely sparse, which means that the second term in \Eqref{eq:bp-nll} provides a much larger contribution to the loss. We counteract this by balancing the two terms, which is a standard technique in imbalanced classification \cite{he2008imbalanced}: \begin{align} \label{eq:loss} \hspace{-1mm}{\mathcal{L}}({\bm{F}}) &= \hspace{-0.5mm}-\mathbb{E}_{(u, v) \sim P_E} \hspace{-1mm}\left[\log (1 \hspace{-0.5mm}- \hspace{-0.5mm}\exp(-{\bm{F}}_u {\bm{F}}_v^T))\right] \hspace{-0.5mm}+ \hspace{-0.5mm} \mathbb{E}_{(u, v) \sim P_N} \hspace{-1mm}\left[{\bm{F}}_u {\bm{F}}_v^T\right] \end{align} where $P_E$ and $P_N$ denote uniform distributions over edges and non-edges, respectively. Instead of directly optimizing the affiliation matrix ${\bm{F}}$, as done by traditional approaches \cite{yang2013bigclam,yang2013cesna}, we search for neural network parameters ${\bm{\theta}}^{\star}$ that minimize the (balanced) negative log-likelihood \begin{align} {\bm{\theta}}^\star = \argmin_{{\bm{\theta}}} {\mathcal{L}}(\operatorname{GNN}_{{\bm{\theta}}} ({\bm{A}}, {\bm{X}})) \end{align} Using a GNN for community prediction has several advantages. First, due to an appropriate inductive bias, the GNN outputs similar community affiliation vectors for neighboring nodes, which improves the quality of predictions compared to simpler models (\Secref{sec:exp-simple}). Also, this formulation allows us to seamlessly incorporate node features into the model. If the node attributes ${\bm{X}}$ are not available, we can simply use ${\bm{A}}$ as node features \cite{kipf2016gcn}.
Finally, with the formulation from \Eqref{eq:model}, it is even possible to predict communities inductively for nodes not seen at training time. \subsection{Scalability} \label{sec:model-scalability} One advantage of the BP model is that it allows us to efficiently evaluate the loss ${\mathcal{L}}({\bm{F}})$ and its gradients w.r.t. ${\bm{F}}$. By using a caching trick \cite{yang2013bigclam}, we can reduce the computational complexity of these operations from $O(N^2)$ to $O(N + M)$. While this already leads to large speed-ups due to the sparsity of real-world networks (typically $M \ll N^2$), we can speed it up even further. Instead of using all entries of ${\bm{A}}$ when computing the loss (\Eqref{eq:loss}), we sample a mini-batch of $S$ edges and non-edges at each training epoch, thus approximately computing $\nabla {\mathcal{L}}$ in $O(S)$. In Appendix \ref{app:convergence} we show that this stochastic optimization strategy converges to the same solution as the full-batch approach, while keeping the computational cost and memory footprint low. While we subsample the graph to efficiently evaluate the training objective ${\mathcal{L}}({\bm{F}})$, we use the full adjacency matrix inside the GNN. This does not limit the scalability of our model: NOCD\xspace is trained on a graph with 800K+ edges in 3 minutes on a single GPU (see \Secref{sec:exp-recovery}). It is straightforward to make the GNN component even more scalable by applying techniques such as those of \cite{chen2018stochastic,ying2018pinsage}. \section{Related work} \label{sec:related-work} The problem of community detection in graphs is well-established in the research literature. However, most works study the detection of non-overlapping communities \cite{abbe2018sbm,von2007tutorial}.
Algorithms for overlapping community detection can be broadly divided into methods based on non-negative matrix factorization \cite{li2018cde,wang2011nmf,kuang2012nmf}, probabilistic inference \cite{yang2013bigclam,zhou2015bplink,todeschini2016exchangeable,latouche2011osbm}, and heuristics \cite{gleich2012neighborhoods,galbrun2014overlapping,ruan2013codicil,li2015ospectral}. Deep learning for graphs can be broadly divided into two categories: graph neural networks and node embeddings. GNNs \citep{kipf2016gcn, hamilton2017graphsage, xu2018jknet} are specialized neural network architectures that can operate on graph-structured data. The goal of embedding approaches \citep{perozzi2014deepwalk, kipf2016gae, grover2016node2vec, bojchevski2018g2g} is to learn vector representations of nodes in a graph that can then be used for downstream tasks. While embedding approaches work well for detecting disjoint communities \cite{cavallari2017come,tsitsulin2018verse}, they are not well-suited for overlapping community detection, as we showed in our experiments. This is caused by the lack of reliable and scalable approaches for overlapping clustering of vector data. Several works have proposed deep learning methods for community detection. \cite{yang2016deepmodularity} and \cite{cao2018incorporating} use neural networks to factorize the modularity matrix, while \cite{cavallari2017come} jointly learns embeddings for nodes and communities. However, none of these methods can handle overlapping communities. Also related to our model is the approach of \cite{hu2017deep}, which uses a deep belief network to learn community affiliations. However, their neural network architecture does not use the graph, which we have shown to be crucial in \Secref{sec:exp-simple}; and, just like EPM\xspace and SNetOC\xspace, it relies on MCMC, which heavily limits the scalability of their approach.
Lastly, \cite{chen2017supervisedcd} designed a GNN for \emph{supervised} community detection, which is a very different setting.
\section{Introduction} \label{sec:introduction} Actuator systems are embedded in our daily lives to such an extent that it is hard to find an example of an electronic system that does not contain actuators in some form. They are found in anything from household devices like smart locks and robotic vacuum cleaners to transportation, production, and defense systems. Actuators are everywhere. These devices interact with the physical world by converting an electric signal into some other form of energy, typically movement, but a heater or a light source can also be considered an actuator. It is well known that the wires used to feed electrical signals to such devices can act as antennas, unintentionally picking up electromagnetic interference (EMI)~\cite{kune2013ghost,leone1999coupling,goedbloed1992electromagnetic} from the environment, or indeed from an attacker. This inherent vulnerability allows an attacker to \emph{wirelessly} inject attacking signals into the wires, disturbing or changing the original signals. Since an actuator is simply an energy transducer, it cannot authenticate its input signals and will respond to whatever it receives, in the worst case allowing the adversary to fully control the state of the actuator. It is easy to see how such an attack can be used, e.g., to rotate the motor in a smart lock to unlock a door, or to force a fuel injection valve in a car closed to stop the car's engine. When the target system is complex and important, these attacks can be incredibly powerful and dangerous. For example, imagine the potential harm if an adversary could control critical industrial applications (e.g., robotic arms) or medical devices (e.g., pacemakers), or, say, move the control surfaces of an airplane without pilot input. Even though such attacks are complicated to perform in practice, and as a result are still rare, we need to find effective detection and mitigation strategies to deal with them before they become common.
In the last few years, there has been work on detecting such attacks on sensors~\cite{sp20Zhang,shoukry2015pycra,kohler2021signal,ruotsalainen2021watermarking, fang2022detection}. In this paper, we focus on detecting attacks on actuators, which is considerably harder. The reason is that when a sensor is attacked, the receiving device is a microcontroller that can run filters and algorithms, or use redundant measurements for added security. For actuators, that is not as easy. When actuators are attacked, the receiving device is the actuator itself, and since actuators are ``dumb devices'' (an actuator might be just a coil of wire, as in a motor), they cannot ignore malicious signals, even if such signals deviate from some usual pattern. In this paper we provide a novel detection method that uses common and inexpensive electrical components, making it possible to apply our method at scale. The basic idea is to compare the signal to be protected with a reference, in order to identify when any external interference is present. However, this is not as easy as it sounds. First of all, an adversarial signal will affect any reference signal as well, and there are challenges with the sampling rate, bandwidth limits, and signal processing effort that can make a trivial scheme unusable in practice. Our detection method solves all of these problems, and we are able to provide strong detection guarantees for (almost) any actuator system. The contributions of this work are summarized as follows: \begin{itemize} \item We create a universal and flexible system model that fits most (if not all) actuator systems. It allows us to capture the specific needs of any system by tuning the parameters of the model (Section~\ref{sec:actuator_system}).
\item We propose a general and lightweight detection method that uses differential amplifiers to detect electromagnetic signal injection attacks, and we show that it can provide the actuator system with a strong security guarantee (Section~\ref{sec:attack_detection} and Section~\ref{sec:protection_outofthe_operational_band}). \item We implement the proposed detection method on a speaker system and a motor control system, and we demonstrate the generality, feasibility, and robustness of our detection method (Section~\ref{sec:implementation}). \end{itemize} In the remainder of this work, we present background on electromagnetic signal injection attacks in Section~\ref{sec:background}. Furthermore, additional important issues are discussed in Section~\ref{sec:discussion}, and related work is summarized in Section~\ref{sec:related_work_existing_defenses}. Finally, we conclude in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} Before introducing our detection method, we first provide a brief background on electromagnetic signal injection attacks. We first explain how electromagnetic waves are injected into a victim system; next, we explain how the injected signals influence the actuator, and we present successful attacks from previous studies. \subsection{Electromagnetic Signal Injection} Electromagnetic fields can affect a metal conductor by inducing voltage changes in it; this has been thoroughly studied in the field of electromagnetism. Besides antennas for wireless communications, metal conductors also exist in devices in the form of wires (or traces) that connect electronic components. These wires can likewise act as antennas that capture environmental electromagnetic waves.
Many researchers have exploited such ``antenna-like'' behavior of wires to radiate electromagnetic waves and wirelessly inject them into circuits~\cite{kasmi2015iemi,kune2013ghost, Kasper2009PACf,selvaraj2018electromagnetic, Markettos2009Tfia, osuka2018information,giechaskiel2019framework, shin2016sampling, tu2019trick, sp20Zhang,wang2022ghosttouch, dayanikli2020senact, dayanikli2021electromagnetic,ware2017effects,selvaraj2018intentional}. Many factors affect the injection process, but the attack power and the attack frequency are the basic ones that an attacker tunes, as they determine the effectiveness and efficiency of the injection~\cite{yan2020sok}. To have an effective impact on the circuits, the injected voltage needs to be strong enough. Since the injected voltage is proportional to the attack power~\cite{friis1946note}, the more powerful the attacking signal is, the higher the injected voltage will be, and the more likely the attack is to be effective. In addition, in order to maximize the injected voltage, the attack frequency must be the resonant frequency of the wire; at other frequencies, it costs more attack power to achieve the same injected voltage~\cite{kune2013ghost}. By properly tuning the attack power and the attack frequency, the attacker can inject arbitrary signals into the wires. \subsection{Circuit Response to Injected Signal} After the injection, a successful attack depends on how the circuits respond to the injected signals. On the one hand, the injected signal can be within the frequency band in which the circuits are designed to operate, namely the operational band (in-band). Since the malicious voltage changes are within the operational band, the circuits respond to them directly. This subsequently influences the signal that drives the actuator, further impacting the actuator's response. On the other hand, the injected signal can also be out of the operational band (out-of-band).
However, in order to affect the circuits in an effective and predictable way, it is essential to cause voltage changes within the operational band. A well-studied method of transferring the out-of-band changes into the operational band is exploiting the nonlinear properties of electronic components: the attacker first exerts an in-band malicious signal onto an out-of-band radio-frequency (RF) carrier to form the attacking signal; next, after the signal injection, the malicious signal is extracted from the attacking signal due to nonlinearities of electronic components such as amplifiers~\cite{kune2013ghost, kasmi2015iemi, tu2019trick}, electro-static discharge (ESD) circuits~\cite{selvaraj2018electromagnetic, dayanikli2020senact}, and analog-to-digital converters (ADCs)~\cite{giechaskiel2019framework, giechaskiel2019sok}; as a result, the in-band malicious signal appears in the operational band, further affecting the actuator. Here are some examples of manipulating actuators. Selvaraj et al.~\cite{selvaraj2018electromagnetic} demonstrated how to inject fine-tuned attacking signals into the target wire to precisely manipulate servo angles. They exploited the nonlinearities of the ESD circuits to toggle the voltage level of the signal so that they could precisely control the signal pulse width that determines the servo angle. Dayanikli et al.~\cite{dayanikli2020senact} demonstrated similar attacks: they could remotely manipulate the signal that controls switches in an AC-to-DC power converter, which regulates power delivery to electric vehicles. The attack can forcibly toggle the on/off state of the switch. In the worst case, the attack causes a short circuit that irreparably burns the converter.
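This demodulation effect can be sketched numerically. The following toy simulation (all parameter values, such as the 200 kHz carrier and the quadratic coefficient, are our own illustrative assumptions, not figures from the cited attacks) passes an amplitude-modulated attacking signal through a square-law nonlinearity followed by a crude low-pass filter, and the in-band malicious tone reappears at the output:

```python
import math

# Toy numeric sketch of nonlinearity-based demodulation; all parameter
# values here are our own illustrative choices, not from the cited work.
fs = 1_000_000          # sample rate, Hz
f_carrier = 200_000     # out-of-band RF carrier, Hz
f_baseband = 1_000      # in-band malicious tone, Hz
n_samples = 4000        # 4 ms of signal

t = [i / fs for i in range(n_samples)]
# AM attacking signal: the malicious tone rides in the carrier envelope.
attack = [(1 + 0.5 * math.sin(2 * math.pi * f_baseband * ti))
          * math.sin(2 * math.pi * f_carrier * ti) for ti in t]

# Square-law nonlinearity, a stand-in for the quadratic term of a
# transistor or amplifier characteristic: y = x + 0.5 * x^2.
nonlinear = [x + 0.5 * x * x for x in attack]

# Crude low-pass filter (moving average over one carrier period) models
# the limited bandwidth of the downstream circuit.
win = fs // f_carrier
lowpass = [sum(nonlinear[i:i + win]) / win
           for i in range(n_samples - win)]

# The envelope-squared term survives the filter, so the filtered output
# swings at the malicious baseband frequency.
swing = max(lowpass) - min(lowpass)
print(f"recovered baseband swing: {swing:.3f}")
```

Without the quadratic term, the moving average over one carrier period would leave essentially nothing; with it, the envelope of the carrier, i.e., the malicious baseband tone, survives at the output.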
In this work, we will demonstrate that an arbitrary audio signal can be injected into a speaker system by exploiting the nonlinearities of the audio amplifier (see details in Section~\ref{sec:speaker}); furthermore, we will show that a motor control system can be precisely controlled by exploiting the imperfections of transistors~\cite{bona2010new, bona2009eaa, FioriFranco2014SoSP} (see details in Section~\ref{sec:motor}). All of these show that it is not difficult to use electromagnetic signal injection attacks to manipulate actuator behaviors precisely. \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/block/}{transfer_functions.pdf_tex} \caption{The actuator system consists of a microcontroller, a signal conditioner, and an actuator. The transfer functions $T_{C}$ and $T_{D}$ explain the control signal wire and the drive signal wire capturing the attacking signals, respectively.} \label{fig:system_model} \end{figure} \section{System Model and Adversary Model} \label{sec:actuator_system} In this section, we introduce a general and flexible system model that fits most actuator systems. This allows us to capture the needs of any specific system by tuning the model parameters. We also present a comprehensive attacker model that, together with the system model, forms a flexible tool to describe signal injection attacks and defense mechanisms on actuator systems. \subsection{System Model} \label{sec:system_model} We call a system that controls an actuator an ``actuator system''. In an actuator system, a microcontroller is the device used to regulate, command, and manage the behaviors of the actuator. Between the microcontroller and the actuator, there are circuits transforming the microcontroller output signal into a suitable signal to drive the actuator. For instance, such circuits may perform signal amplification or waveform conversion.
To capture all the characteristics of the circuits, we define a new device, called the signal conditioner. How this device works will differ from circuit to circuit, but we treat it as a black box. Therefore, our system model consists of three devices: a microcontroller, a signal conditioner, and an actuator; a block diagram of the system model is presented in Figure~\ref{fig:system_model}. In the system model, wires are used to connect these devices: as shown in Figure~\ref{fig:system_model}, one wire is used to transmit the microcontroller output signal, which we call the \emph{control~signal}, to the signal conditioner; the other transmits the signal conditioner output signal, which we call the \emph{drive~signal}, to the actuator. Note that, in practice, the control signal wire often carries comparatively little power, as the voltage and the current of the microcontroller outputs are constrained to a few volts and milliamperes, respectively, whereas the drive signal wire can carry high-power signals because some actuators consume significant power while working. The operational frequency of the control signal and drive signal can vary significantly from system to system. Still, it is generally possible to define a normal operating range to which signals are confined in normal operation. This is important because while low/high pass filters can filter out adversarial injections at extreme frequencies, it is more difficult to filter out attacks in the operational range without affecting the valid control/drive signal. Our solution assumes that such an operational range can be defined, and we call the upper limit of this range $f_{max}$. Note that we make no assumptions about what the value of this limit is, only that it exists. In Section~\ref{sec:protection_outofthe_operational_band}, we discuss ways of extending this range well beyond the design limits of the electrical components.
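Why in-band attacks are hard to filter out can be illustrated with a small numeric sketch (all frequencies and amplitudes below are our own hypothetical choices): a crude low-pass filter sized for the operational band suppresses an out-of-band injected tone, while an in-band injected tone passes through almost unattenuated and could not be removed without also distorting the valid control signal:

```python
import math

# Hypothetical numbers: a 1 kHz control signal with f_max assumed at
# 5 kHz, plus an in-band (3 kHz) and an out-of-band (40 kHz) injection.
fs, n_samples = 100_000, 2000
f_valid, f_inband, f_outband = 1_000, 3_000, 40_000

received = [math.sin(2 * math.pi * f_valid * i / fs)
            + 0.3 * math.sin(2 * math.pi * f_inband * i / fs)
            + 0.3 * math.sin(2 * math.pi * f_outband * i / fs)
            for i in range(n_samples)]

# Crude low-pass filter: a 10-sample moving average nulls the 40 kHz
# tone but passes everything well below its ~10 kHz cutoff.
win = 10
filtered = [sum(received[i:i + win]) / win
            for i in range(n_samples - win)]

def tone_amplitude(x, f):
    """Single-bin DFT estimate of the amplitude of frequency f in x."""
    c = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
    return 2 * math.hypot(c, s) / len(x)

x = filtered[:1000]  # a whole number of cycles for all three tones
amp_out = tone_amplitude(x, f_outband)
amp_in = tone_amplitude(x, f_inband)
print(f"out-of-band tone after filtering: {amp_out:.3f}")
print(f"in-band tone after filtering:     {amp_in:.3f}")
```

The out-of-band tone is essentially removed, while most of the in-band injection remains, which is exactly why a detection mechanism, rather than filtering alone, is needed inside the operational band.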
\subsection{Adversary Model} \label{sec:adversary_model} The attacker's goal is to affect the actuator by electromagnetic interference, i.e., to inject an attacking signal into the system. The attacker can inject attacking signals into the actuator system remotely but cannot physically access or modify the actuator system. We grant the attacker full knowledge of the actuator system; specifically, the attacker can predict the waveform and timing (phase) of signals in the actuator system. The attacker can also craft any (physically possible) signal she wants. In practice, signal injection can be rather complicated, especially from far away, but we deliberately grant the adversary extremely strong power to make sure our detection method works in every case. We manage this complexity using a transfer function that encapsulates any changes to the attacking signal caused by the injection process, e.g., frequency selectivity, attack distance, attenuation, spreading and convolution, etc.,\ as shown in Figure~\ref{fig:system_model}. We do not limit the power available to the adversary. However, we do assume that there exists a lower limit, below which any injected signal no longer has a meaningful effect on the target system. We call this lower limit $P_{min}$. This power limit is set by the system designer to make sure that any injected signal above this limit is detected. It can be set arbitrarily low, but in order to successfully attack the system, the attacker must inject a signal with power higher than $P_{min}$. We grant the attacker such ideally strong abilities because if even this attacker cannot avoid being detected by our proposed detection method, then no weaker attacker can bypass it either.
\subsection{Two Injection Points} \label{sec:two_injection_points} In a particular physical system, there could exist multiple injection points through which attacking signals enter the system, and many electronic components may be affected by the injections. However, the attacker only successfully manipulates the system when these injections affect signals that directly determine the system's responses, as has also been considered and shown in previous studies~\cite{tu2019trick, esteves2018remote}. Therefore, regardless of where the signal injection happens in the system, even if currents are induced in many places at once, it is possible to find an input signal that, when applied to one of the two wires in our model, produces the same effect. This means that, without loss of generality, we can model any signal injection as if the attacking signal were injected into the control or drive signal wire through an appropriate transfer function. In practice, signal injection does in fact almost exclusively happen via these wires, because they are the most efficient ``antennas'' in the system, and thus where most of the energy is transferred. There is an important difference between these two injection points. As mentioned previously, the power of the control signal is comparatively weak, so the adversary can more easily overshadow any valid signal on the control signal wire, and it will generally take less power to make changes that affect the actuator through this injection point. We define such an injection as a \emph{control~signal~injection}. The second injection point is the drive signal wire. This wire will generally carry signals with higher power and more specialized waveforms. For some actuators, e.g., brushless electric motors, the drive signals are not only high-powered but also somewhat complex, and the timing between the different phases of the signal is very important to the operation of the motor.
This means that injection into this wire is more difficult and requires much more power from the adversary. We define such an injection as a \emph{drive~signal~injection}. Despite this difference in attacker capabilities between the two injection points, our detection system, described in the following sections, works for both the control signal wire and the drive signal wire. \section{Attack Detection} \label{sec:attack_detection} \begin{figure}[t] \def1.5{0.72} \centering \import{Figures/block/}{add_comparator.pdf_tex} \caption{A differential amplifier compares the primary signal that is transmitted from the sender to the receiver with the reference signal. A comparator circuit further compares the differential amplifier output with a threshold $\epsilon$, and the microcontroller determines whether attacks happen according to the binary results of the comparator.} \label{fig:add_comparator} \end{figure} As mentioned previously, our detection approach works for both the control signal wire and the drive signal wire. Rather than choose one of the two injection points for our description, we instead treat each wire as having a ``sender'' and a ``receiver''. The sender device is thus either the microcontroller or the signal conditioner, with the corresponding receiver being the signal conditioner or the actuator, respectively. We discuss any minor differences between the injection points in Section~\ref{sec:differences_injection_points}. In order to detect a signal injected into the wire, we make the sender generate two identical signals, one primary signal (sent to the receiver) and one reference signal (used for detection). This idea is illustrated in Figure~\ref{fig:add_comparator}. A differential amplifier is then used to amplify any differences between the primary and reference signals. In the absence of an attack, these two signals should be identical and thus produce no output from the amplifier. 
However, if the primary wire is affected by an external signal, the difference will be amplified and can be detected using a comparator and a microcontroller. A very important requirement is that the reference wire is sufficiently different from the primary wire to make sure that the adversary cannot modify both in the same way. This can be easily accomplished by simply making the wires different lengths~\cite{balanis2016antenna} in order to make them sensitive to different frequencies, but more significant differences can be achieved, e.g., with additional Radio Frequency (RF) shielding materials on the reference wire. When no attack signal is present, the two input signals of the differential amplifier (the primary and the reference) are the same, and the differential amplifier output is zero. In reality, there will be a non-zero amount of noise, but the output is essentially zero. When an attack happens, the primary wire and the reference wire both pick up the attacking signal. But, because the two wires cannot be modified in the same way, captured by the two different transfer functions $T_C$ and $T_R$ shown in Figure~\ref{fig:add_comparator}, the two inputs to the differential amplifier will be different. This results in a non-zero signal on the output of the differential amplifier, allowing the microcontroller to detect the attack. It is essential to emphasize that the differential amplifier is used in a novel way that differs from how it is commonly used in analog electronics, where it reduces equal interference (common-mode interference) on its two inputs~\cite{razavi2005design}. In our detection method, by contrast, the two inputs are deliberately crafted such that the differential amplifier captures the attack interference rather than mitigating it. \subsection{Modeling Differential Amplifier Output} \label{sec:comparator_response} The differential amplifier amplifies the difference of its input signals.
We model this as the difference between the primary and the reference $\delta(t)$, plus additive white Gaussian noise $n(t)$, amplified by a constant gain $G$. To simplify the notation, we omit $t$ hereafter. The output of the differential amplifier becomes: \[ o = G(\delta + n) \] Given an attacking signal $s$, the signal that is injected into the primary wire is $T_{C}(s)$, and the signal that is injected into the reference wire is $T_{R}(s)$. In order to obtain a simple relationship between these two injected signals, we make the simplifying assumption that $T_{R}$ can be expressed as being $K$ times weaker than $T_{C}$. Therefore, we can write: \[ \delta = T_{C}(s) - T_{R}(s) = T_{C}(s) - \frac{1}{K}T_{C}(s) = \frac{K-1}{K}T_{C}(s) \] Thus, the output of the differential amplifier becomes \begin{align} \label{eq:o_tc} o = G\left(\frac{K-1}{K}T_{C}(s) + n\right) \end{align} Finally, we use the fact that the power absorbed by the receiving antenna (the primary wire) is proportional to the attack power $P$~\cite{friis1946note} to arrive at the final model for our detection system: \begin{align} o = G\left(\frac{K-1}{K} P + n\right) \label{eq:o_p} \end{align} This equation gives us the output of the differential amplifier as a function of the main parameters of our detection system, namely the noise $n$, the gain of the differential amplifier $G$, the ``difference'' of the primary and reference wires $K$, and the power of the adversarial signal $P$. \subsection{Detection Rule and Choice of Parameters} \label{sec:detection_procedures} According to Equation~\eqref{eq:o_p}, when no attack signal is present, i.e., the attacker's power is 0, we have ${o=Gn}$. To make sure that small amounts of noise do not cause false positives, we define a threshold $\epsilon$ that the output of the amplifier must exceed in order to be detected as an attack. The actual detection is done by a comparator whose output is high when $o\ge\epsilon$ and low otherwise.
This allows the output of the comparator to be fed into an interrupt pin of the microcontroller, as shown in Figure~\ref{fig:add_comparator}, and ensures that even attack signals with a very short duration can be detected efficiently without requiring the microcontroller to sample at a high rate. \begin{figure}[tb] \centering \includegraphics[width=.5\linewidth]{Figures/block/injected_vs_k_low.pdf} \caption{The minimum detectable attack power is expressed as a function of $K$. The detection method can detect attacks on and above the curve (green), but it cannot detect attacks below the curve (red). By decreasing (increasing) $\epsilon$ or increasing (decreasing) $G$, the horizontal asymptote can be moved down (up).} \label{fig:injected_vs_k} \end{figure} Since the attack is detected if: \[ G\left(\frac{K-1}{K} P + n\right) \ge \epsilon \] we can rearrange this to see that the detection method can detect any attack with power that fulfills the following requirement: \begin{align} P \geq \left(\frac{\epsilon}{G}-n\right) \cdot \frac{K}{K-1} \label{eqn:min_power} \end{align} From Inequality~\eqref{eqn:min_power}, we see that the minimum detectable power can be made arbitrarily small with appropriate choices of $K$, $G$, and $\epsilon$. In the following, we describe the procedures for choosing these values. The choice of $K$ is relatively simple: bigger is better. A large $K$ means that the difference between the two transfer functions $T_C$ and $T_R$, which govern how an attacking signal affects the primary and reference wires, is as big as possible. To get a sense of how the choice of $K$ affects the detection performance, we plot the attacker's power $P$ as a function of $K$ in Figure~\ref{fig:injected_vs_k}. The detection method detects attacks on or above the curve (green region), while the attacks below the curve (red region) are not detected.
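The boundary curve in Figure~\ref{fig:injected_vs_k} follows directly from Inequality~\eqref{eqn:min_power} and can be reproduced with a few lines of code; the parameter values below are illustrative stand-ins, not measurements from any real system:

```python
# Numerical sketch of Inequality (3); G, eps, and n are illustrative
# assumptions chosen only to show the shape of the curve.
G = 150        # differential amplifier gain
eps = 2.4e-3   # detection threshold (volts)
n = 1.0e-5     # noise term, in the same normalized units as P

def min_detectable_power(K):
    """Smallest attack power the comparator flags, per Inequality (3)."""
    assert K > 1, "the primary and reference wires must differ (K > 1)"
    return (eps / G - n) * K / (K - 1)

for K in (1.1, 2.0, 5.0, 10.0, 100.0):
    print(f"K = {K:6}: weakest detectable attack = "
          f"{min_detectable_power(K):.2e}")
```

As $K$ grows, $K/(K-1)$ approaches 1 and the minimum detectable power flattens out at $\epsilon/G - n$, matching the horizontal asymptote in the figure.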
We see that $K$ does not have to be very high for the detection method to be effective, but it does have to be above 1, i.e., the primary and reference wires do have to differ. The amplification of the differential amplifier $G$ is dictated by the choice of amplifier used. Different amplifiers have different maximum gains, and typical values range from 100 to 300. Generally, $G$ should be chosen as high as possible, although in noisy environments, it may be beneficial to reduce the amplification to reduce the sensitivity to noise. As for the detection threshold $\epsilon$, it needs to be chosen such that environmental noise does not cause a detection event. Therefore, $\epsilon$ is chosen just high enough to make sure that false positives from noise are kept to a minimum; we show an example of this in our implementation in Section~\ref{sec:implementation}. Moreover, because noise environments are often complicated and change significantly over time, we emphasize that $\epsilon$ does not have to be constant. It can, for example, be adaptively adjusted to accommodate lower levels of ambient electromagnetic noise during the night. We further discuss this in Section~\ref{sec:adaptive_threshold}. \subsection{Security Analysis} \label{sec:security_analysis} Recall from the adversary model that the goal of the attacker is to affect the actuator. In order to achieve this goal, the attacker must inject a signal with power of at least $P_{min}$. We prove that such attacks are always detected by our detection method as follows. Substituting $P_{min}$ into Inequality~\eqref{eqn:min_power}, we see that the attack is detected if the following inequality holds: \[ P_{min} \ge \left(\frac{\epsilon}{G}-n\right) \cdot \frac{K}{K-1}\] To show that it is always possible to find values of $K$, $\epsilon$, and $G$ that make this inequality true for any value of $n$, we first observe that $K$ can be made arbitrarily high independently of the noise.
Since $K/(K-1)$ approaches 1 for high enough values of $K$, we can pick a high value and reduce the above inequality to \[ G(P_{min} +n) \ge \epsilon\] In addition, as mentioned in the previous subsection, it is a functional requirement that the detection threshold must not be triggered by the noise alone, i.e., the following must hold: \[ Gn < \epsilon\] Both inequalities above must be true in order to have a functional detection system. That gives the following constraint: \begin{align*} G(P_{min} +n) &\ge \epsilon > Gn \\ P_{min} +n &> n\\ P_{min} &> 0 \end{align*} Thus, for any value of $P_{min} > 0$, it is possible to find values of $K$, $G$, and $\epsilon$ that allow the detection system to detect any adversarial signal with power above $P_{min}$ while not triggering false positives from noise. \subsection{Differences Between Injection Points} \label{sec:differences_injection_points} Recall that one injection point is the control signal wire, and the other is the drive signal wire. The first difference is between the differential amplifiers at these two injection points. Recall that the control signal has a low voltage level, and as such, it is sufficient for the differential amplifier to have an input voltage range of several volts. However, the drive signal's voltage can go up to hundreds of volts, e.g., in $\SI{380}{\volt}$ industrial motors. Thus, a differential amplifier with a large enough input voltage range is needed such that the tapped signal will not cause any damage to the differential amplifier. It is not hard to find such a differential amplifier on the market. Note that since the differential amplifier has a much higher impedance than the actuator, the tapping only draws a tiny portion of the control/drive signal, causing negligible impact on the signal conditioner/actuator.
Another difference between these two injection points is that the drive signal can be much more complex than the control signal, and thus deploying our approach for the drive signal may be more complicated. In the previously mentioned example of a brushless electric motor, the microcontroller produces a single control signal, while the signal conditioner needs to convert this solitary control signal into three different signals to drive the motor. In general, it is essential to deploy our approach for each signal to guarantee security, which means one detector for the control signal and three for the drive signals. However, in many cases where the physical properties of the multiple wires are the same or very similar and the wires are placed close to each other, protecting one wire is sufficient, and doing so can significantly reduce the complexity of deploying our approach. This is because the attacker cannot selectively choose a wire to affect in these cases; in other words, all of these identical or similar wires will be impacted by the attack. In the example of the brushless DC motor, its three drive signal wires are almost identical and placed very close to each other. Therefore, protecting any one of the wires with our approach is equivalent to protecting all three wires. \subsection{Attacks on Detection Circuit} \label{sec:attacks_on_detection_circuit} Our defense mechanism has added circuitry to the system that could itself be the target of an injection attack. In this section, we demonstrate that this circuitry cannot be exploited by the attacker to achieve the injection. First, we note that there is no path from our detection circuit to the actuator, so the only malicious action we have to consider is whether an adversary could inject a signal that would be hidden from detection because of interference in the detection circuit itself.
To analyze this, we define a new transfer function $T$ for the main wire in the detection circuit. The resulting signal that is injected into the detection wire is then $T(s)$ when the adversary sends $s$. Note that there may also be multiple injection points, as discussed in Section~\ref{sec:two_injection_points}, but they can all be modeled on the main wire, as $o$ directly determines whether an attack is reported or not. Therefore, the injected signal is superimposed onto $o$, described in Equation~\eqref{eq:o_tc}, making the modified differential amplifier output~$o'$: \begin{align*} o' &= T(s) + G\left(\frac{K-1}{K}T_{C}(s) + n\right) \\ &= \frac{G(K-1)}{K}\left( \frac{K}{G(K-1)}T(s) + T_{C}(s)\right) + Gn \end{align*} If the attacker wants to avoid detection, $o'$ must be zero (technically just less than~$\epsilon$, but basically zero). That means that the value in the parentheses must be zero, which in turn requires that the following equation hold: \begin{align} \label{eq:ts_tcs} {\frac{K}{G(K-1)}T(s) = -T_{C}(s)} \end{align} The negative sign in Equation~\eqref{eq:ts_tcs} implies that $T(s)$ and $T_{C}(s)$ must be 180 degrees out of phase, which requires that the physical distance between the two corresponding wires be exactly half of the wavelength of the attacking signal $s$~\cite{balanis2016antenna}. This is already a good argument for why an attacker cannot inject a signal that affects the actuator and simultaneously cancel it out in the detection system, since the frequency would have to be in the tens to hundreds of GHz to get a half wavelength short enough. Such a high-frequency signal is far above what affects most actuators. However, just to make the point extra clear, let us assume that the attacker could in fact send a signal with a high enough frequency to make this work.
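As a quick numeric check on this frequency claim, assume the cancellation requires the spacing $d$ between the two wires to equal half the attacking signal's wavelength, so $f = c/(2d)$; the spacings below are hypothetical examples:

```python
# Sanity check on the phase-cancellation argument: 180-degree opposition
# needs the wire spacing d to be half the attacking wavelength, so the
# required frequency is f = c / (2 * d). Spacings are hypothetical.
C = 299_792_458  # speed of light, m/s

required = {d_mm: C / (2 * d_mm / 1000) for d_mm in (1, 2, 5, 10)}
for d_mm, f in required.items():
    print(f"wire spacing {d_mm:2d} mm -> required frequency {f / 1e9:6.1f} GHz")
```

Even a generous 10 mm spacing already demands an attacking signal around 15 GHz, and millimeter spacings push the requirement well past 100 GHz.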
Even then, the constant $\frac{K}{G(K - 1)}$ in Equation~\eqref{eq:ts_tcs} means that the signal injected into the detection system, $T(s)$, must be hundreds of times stronger than $T_{C}(s)$, the signal injected into the actuator control signal itself. However, given that the two wires are so close to each other, and that the smaller of the two needs more power injected into it, achieving such a $T(s)$ is impossible in practice. As a result, Equation~\eqref{eq:ts_tcs} can never hold in practice. For those two reasons (phase difference and relative power), no adversarial signal $s$ can ever prevent its own detection through interference with the detection circuitry itself. \section{Extended Maximum Detectable Frequency} \label{sec:protection_outofthe_operational_band} \begin{figure}[t] \def1.5{0.65} \centering \import{Figures/block/}{comparator_vs_attacking_signal.pdf_tex} \caption{With constant attack power, the peak amplitude (dashed line) and the rectified DC term (dash-dotted line) of the differential amplifier output signal change along with the frequency. The maximum detectable frequency is extended to $f_{p}$.} \label{fig:comparator_vs_attacking_signal} \end{figure} Our detection method relies on a differential amplifier to help detect injected signals. Like all electronic components, a differential amplifier is designed to work within a particular frequency range. When choosing parameters for the detection system, a suitable differential amplifier should be chosen that covers the entire range in which adversarial signals are likely to be able to affect the actuator. However, on rare occasions, e.g., for very high frequency applications or if cost is a significant concern, it might be difficult to get a differential amplifier that fully covers the desired frequency range. For such cases, we have developed a method to extend the maximum detectable frequency $f_{max}$ beyond the normal upper bound of the differential amplifier.
Many previous studies~\cite{wu2018characterization, richardson1979modeling,forcier1979microwave,larson1979modified} have shown that a differential amplifier will still respond beyond its normal operational band, although the response is entirely different from the normal amplification within its design parameters. As the frequency increases beyond $f_{max}$, the peak amplitude of the differential amplifier output starts to decline, as the gain plummets to almost zero~\cite{kitchin2006designer, mancini2003op}. This is shown in Figure~\ref{fig:comparator_vs_attacking_signal}, where the dashed curve depicts the change of the peak amplitude. Although the peak amplitude decreases to almost nothing, the output gains a DC offset with respect to the normal ground state, shown in Figure~\ref{fig:comparator_vs_attacking_signal} as the Rectified DC Term. This happens because the differential amplifier rectifies the high-frequency signals~\cite{devices2009rfi,wu2018characterization,FioriFranco2014SoSP}. The phenomenon is also known as radio-frequency (RF) rectification, and it is attributed to the nonlinear voltage-current characteristic of the transistors that make up the differential amplifier~\cite{devices2009rfi}. Further increasing the frequency will eventually decrease the rectified DC term, which ultimately becomes negligible when the frequency is high enough~\cite{larson1979modified, richardson1979modeling,forcier1979microwave}. While this effect does eventually disappear, it allows us to extend detection to frequencies hundreds or thousands of times higher than the upper bound of the operational band. It is important to note that this phenomenon is not limited to a specific differential amplifier but holds for many different designs, as has been experimentally verified in the literature~\cite{wu2018characterization, oapm2013specification}. For our detection system to provide firm guarantees, it is essential to ensure that there is no gap in the protected frequency band.
Therefore, we have to ensure that the DC offset rises enough to be detected before the normal peak amplitude of the differential amplifier goes to zero. In Figure~\ref{fig:comparator_vs_attacking_signal}, the frequency at which the magnitude of the rectified DC term exceeds $\epsilon$ is denoted as $f_{dc\epsilon}$, and the frequency at which the peak amplitude falls below $\epsilon$ is $f_{pk\epsilon}$. We show in Section~\ref{sec:implementation} that we can easily achieve $f_{dc\epsilon} < f_{pk\epsilon}$ in practice. \section{Implementation} \label{sec:implementation} We implement our detection method on two practical and distinct actuator systems: a speaker system (in Section~\ref{sec:speaker}) and a motor control system (in Section~\ref{sec:motor}). The objective of the implementation is to validate the feasibility of our detection method in practice. One of the reasons why we choose these two systems is that they are widely deployed in many critical applications: the speaker system can be found in applications in which sound information needs to be broadcast, such as mobile phones and car satellite navigation; the motor control system can be found in applications that need to drive mechanical structures, such as smart locks and insulin pumps. Implementing the detection method on these two systems also verifies its capability to handle different actuator systems regardless of the type of signal: sinusoidal signals (analog) are used in the speaker system, while pulses (digital) are used in the motor control system. We first introduce how to build our own actuator systems on which we can quickly implement our detection method. Then, we show how to detect various attacking signals in each actuator system. We only demonstrate the control signal injection, as the drive signal injection is power-consuming and difficult to achieve with our equipment (please see the detailed discussion in Section~\ref{sec:difficulty_of_drive signal_injection}).
Finally, a summary of the implementation of these two actuator systems is given in Section~\ref{sec:implementation_summary}. \subsection{Setup} \label{sec:setup} \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/block/}{speaker_system.pdf_tex} \caption{A setup of the actuator system. Devices in the dotted squares differ from system to system, and others are the same.} \label{fig:speaker_system} \end{figure} Based on the system model, we build a setup that can be easily configured into a speaker system or a motor control system, as shown in Figure~\ref{fig:speaker_system}. We use a signal generator to produce the control signal and the reference signal. The signal generator is functionally equivalent to the microcontroller. The benefit of using the signal generator is easier control of signal frequencies, amplitudes, synchronization, etc. The control signal is fed into a signal conditioner. The signal conditioner is different in these two systems: an audio power amplifier LM386 is used in the speaker system, and a brushed DC motor driver chip DRV8833 is used in the motor control system. Regarding the actuator (either a loudspeaker or a motor), since its responses are deterministic and its input signal (i.e., the drive signal) sufficiently reflects the responses, we simply omit the actuator from the setup and use an oscilloscope to monitor and record the drive signal. An advantage of doing so is that different actuator systems can be quickly tested without the extra work of using different methods to sense and process the actuator responses (e.g., a microphone to measure sound played by the speaker, or a hall-effect sensor to measure the speed of the motor). Moreover, a computer is used to process the data that is recorded by the oscilloscope. We then deploy our detection method on this actuator system. The control signal and the reference signal are fed into a differential amplifier, as shown in Figure~\ref{fig:speaker_system}.
In the speaker system, we choose an AD623 with a gain of around 150 as the differential amplifier because it is specifically designed to amplify small differences between its two inputs. For the motor control system, a unity-gain differential amplifier AD629 is selected, as it can handle high-voltage inputs. The output of the differential amplifier is monitored and recorded by the oscilloscope, and the recorded data are sent to the computer for attack detection. To achieve a large $K$, i.e., a large difference between the transfer functions of the control signal wire and the reference wire, we form a loop on the control signal wire to make it easier to pick up the attacking signal, and choose a short cable as the reference wire. Thus, the control signal wire is much more sensitive to the attacking signal than the reference wire. Note that it does not matter which wire is more sensitive, because our detection method only requires the transfer functions to be different. Moreover, to guarantee that the control signal and the reference signal arrive at the differential amplifier at the same time, the tapping point is carefully chosen so that the paths that feed these two signals into the differential amplifier have the same length. Our setup is extremely flexible and allows us to easily experiment with different actuator types without having to build a dedicated system for each one. Despite being a lab setup, we believe that our results accurately reflect the behavior of real commercial products. \subsection{Speaker System} \label{sec:speaker} In a speaker system, an audio signal is amplified and then broadcast. The objective of the attack is to maliciously manipulate the waveform of the audio signal; in the extreme case, the attacker can make the speaker system broadcast any message she wishes. \subsubsection{Determining Threshold} The differential amplifier output is measured when no attack happens.
The measurements show that the amplitude of the differential amplifier output signal is always below $\SI{2.4}{\milli\volt}$. Since this value already includes all noise sources in our experimental environment, it is chosen as the threshold. The benefit of choosing this value as the threshold is twofold. On the one hand, it significantly reduces the possibility that noise accidentally triggers the detection; the false-positive rate remains at $0\%$ as calculated from the measurements. On the other hand, this threshold is small enough to guarantee that the weakest attack that effectively impacts the actuator system is successfully detected, as the following experimental results show. \subsubsection{Direct Power Injection Attacks} \label{sec:speaker_in-band_detection} \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/plot/}{speaker_comparator_output_dpi.pdf_tex} \caption{The peak amplitude (left y-axis) of the differential amplifier output drops to zero when the frequency of the attacking signal is far beyond the operational band of the audio amplifier; the DC offset (right y-axis) rises as the frequency of the attacking signal increases. The peak-to-peak voltage of the attacking signal is from $\SI{10}{\milli\volt}$ to $\SI{100}{\milli\volt}$.} \label{fig:speaker_comparator_output_dpi} \end{figure} The normal operational band of an audio amplifier is below the megahertz level, so low-frequency attacking signals are needed for in-band attacks. Due to the practical difficulty of injecting low-frequency attacking signals into the circuit wirelessly, we first demonstrate that the detection method can handle in-band attacks using direct power injection (DPI)~\cite{giechaskiel2019sok}. Note that in the following sections (Section~\ref{sec:speaker_out-of-band_detection} and Section~\ref{sec:motor_detection}), the attacking signals are injected wirelessly.
To show that any malicious frequency can be injected into the audio signal, the attack frequency is swept from $\SI{1}{\hertz}$ to $\SI{10}{\mega\hertz}$, and the peak-to-peak voltage of the attacking signal ranges from $\SI{10}{\milli\volt}$ to $\SI{100}{\milli\volt}$. The highest attack frequency is set to $\SI{10}{\mega\hertz}$, which is beyond the operational band of the audio amplifier, to verify that no gap (as described in Section~\ref{sec:protection_outofthe_operational_band}) exists in the frequency band. The value of $\SI{10}{\milli\volt}$ is chosen as the weakest peak-to-peak amplitude of the attacking signal because the malicious change caused by an attack at this voltage is already around $\SI{49}{\dB}$ weaker than the audio signal; weaker attacking signals have little to no impact on the speaker system. To demonstrate the impact of the attack on the differential amplifier in detail, we show both the peak amplitude and the DC offset in Figure~\ref{fig:speaker_comparator_output_dpi}. Each point in the figure represents the averaged peak amplitude or the averaged DC offset with its standard deviation. The first observation from the experimental results concerns the attack power: the peak amplitude and the DC offset increase (decrease) as the attack power increases (decreases). Concerning the attack frequency, when it is lower than $\SI{1}{\kilo\hertz}$, the peak amplitude is significantly larger than the threshold, which reveals the existence of the attacking signal. When the frequency is between $\SI{1}{\kilo\hertz}$ and $\SI{1}{\mega\hertz}$, the peak amplitude plummets but remains above the threshold; meanwhile, the DC offset rises above the threshold. When the frequency of the attacking signal reaches $\SI{1}{\mega\hertz}$ and beyond, the DC offset is well above the threshold, indicating the existence of the attack.
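The detection logic these results rely on, flagging an attack when either the peak (AC) amplitude or the DC offset of the amplifier output exceeds the threshold, could be formalized roughly as follows. The helper function and the synthetic signals are hypothetical illustrations, not our implementation:

```python
import numpy as np

def attack_detected(samples, threshold):
    """Flag an attack if either the peak (AC) amplitude or the DC offset of
    the differential amplifier output exceeds the threshold."""
    dc_offset = np.mean(samples)
    peak = np.max(np.abs(samples - dc_offset))
    return peak > threshold or abs(dc_offset) > threshold

threshold = 2.4e-3                      # the speaker system's threshold (2.4 mV)
t = np.linspace(0, 0.01, 10_000)

quiet = 0.5e-3 * np.sin(2 * np.pi * 50 * t)             # benign residual hum
in_band = 20e-3 * np.sin(2 * np.pi * 500 * t)           # low-frequency attack: large peak
rectified = 0.2e-3 * np.sin(2 * np.pi * 60 * t) + 5e-3  # high-frequency attack, rectified
                                                        # into a DC offset

print(attack_detected(quiet, threshold),
      attack_detected(in_band, threshold),
      attack_detected(rectified, threshold))  # False True True
```

This mirrors the observed behavior: in-band attacks show up in the peak amplitude, while out-of-band attacks show up as a DC offset, so checking both covers the whole sweep.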
The experimental results validate the capability of the differential amplifier to detect attacks in the entire frequency range from DC to $\SI{10}{\mega\hertz}$. We perform this experiment 240 times, and all (240 out of 240) attacking signals are detected, making the true-positive rate $100\%$. This shows that even for practical systems, the detection method provides strong protection against both in-band and out-of-band attacks. \subsubsection{Wireless Attacks} \label{sec:speaker_out-of-band_detection} \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/plot/}{speaker_amp_output_DPI.pdf_tex} \caption{A $\SI{6}{\kilo\hertz}$ malicious signal is successfully injected into the $\SI{5}{\kilo\hertz}$ audio signal. The $\SI{6}{\kilo\hertz}$ spike is highlighted by a red point in the frequency domain of the audio amplifier output. The power ratio between the $\SI{6}{\kilo\hertz}$ frequency component and the $\SI{5}{\kilo\hertz}$ component is around $\SI{-30}{\dB}$.} \label{fig:speaker_amp_output_DPI} \end{figure} To test high-frequency attacks in a more realistic setting, we modulate a high-frequency carrier with an audio signal and inject it wirelessly into the control and reference wires. An RF signal generator is used to produce the attacking signals, which are radiated by a coil antenna, as shown in Figure~\ref{fig:speaker_system}. The antenna is placed around $\SI{2}{\centi\meter}$ above the control signal wire for the best possible energy transfer; that way, we can use less power to achieve the wireless attack in our experiments. An attacker who is further away from the victim system needs more powerful attacking signals to succeed. To present a concrete attack, we choose to inject a $\SI{6}{\kilo\hertz}$ malicious frequency into a $\SI{5}{\kilo\hertz}$ audio signal.
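The roughly $\SI{-30}{\dB}$ power ratio in Figure~\ref{fig:speaker_amp_output_DPI} can be reproduced on synthetic data; the sample rate and the injected amplitude below are illustrative assumptions:

```python
import numpy as np

fs = 100_000                                       # sample rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)                      # 0.1 s of amplifier output

legit = 1.0 * np.sin(2 * np.pi * 5_000 * t)        # 5 kHz audio signal
injected = 0.0316 * np.sin(2 * np.pi * 6_000 * t)  # malicious 6 kHz component
signal = legit + injected

# Power spectrum of the amplifier output, and the bin closest to each tone.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
p_legitimate = spectrum[np.argmin(np.abs(freqs - 5_000))]
p_malicious = spectrum[np.argmin(np.abs(freqs - 6_000))]

impact = 10 * np.log10(p_malicious / p_legitimate)
print(round(impact, 1))  # -30.0
```

An amplitude ratio of 0.0316 corresponds to a power ratio of about $10^{-3}$, i.e., $\SI{-30}{\dB}$, matching the figure.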
In Figure~\ref{fig:speaker_amp_output_DPI}, an attack result is shown: in the frequency domain of the audio amplifier output, besides the legitimate $\SI{5}{\kilo\hertz}$ frequency component, a malicious spike can be observed at $\SI{6}{\kilo\hertz}$. To quantify the impact of the attack, the power ratio between the malicious frequency component and the legitimate frequency component is measured, which can be expressed as the following equation: \begin{equation*} \mathit{impact} = 10\times\log_{10}\Big(\frac{P_{malicious}}{P_{legitimate}}\Big) \end{equation*} where $P$ represents the power. The bigger the ratio is, the stronger the injected signal is, and the larger the impact of the attack is. When no attacking signal is present, our measurements show that the $\mathit{impact}$ remains at around~$\SI{-52.7}{\dB}$. Different attacking signals are generated to test the performance of the detection method: the peak-to-peak voltage of the attacking signal is varied from $\SI{100}{\milli\volt}$ to $\SI{700}{\milli\volt}$, and the carrier frequency of the RF signal is varied from $\SI{100}{\mega\hertz}$ to $\SI{1000}{\mega\hertz}$. The impacts of the attacks are represented numerically in Figure~\ref{fig:speaker_amp_output_attack_ratio}. When the attack frequency reaches $\SI{800}{\mega\hertz}$, the $\mathit{impact}$ is close to $\SI{-52.7}{\dB}$, which means that attacks beyond this frequency have little practical significance. We did conduct experiments beyond $\SI{1000}{\mega\hertz}$, but the impact of attacking signals beyond $\SI{1000}{\mega\hertz}$ is even smaller, and hence we focus only on the frequency range up to $\SI{1000}{\mega\hertz}$. \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/plot/}{speaker_amp_output_attack_ratio.pdf_tex} \caption{The power ratio between the malicious signal and the legitimate signal gradually decreases as the frequency of the attacking signal increases.
The peak-to-peak voltage of the attacking signal is from $\SI{100}{\milli\volt}$ to $\SI{700}{\milli\volt}$.} \label{fig:speaker_amp_output_attack_ratio} \end{figure} Regarding the attack detection, the peak amplitudes of all measurements of the differential amplifier output are below the threshold. This is because the frequency of the attacking signal is already far beyond the operational band of the differential amplifier, as explained in Section~\ref{sec:protection_outofthe_operational_band}. However, as shown in Figure~\ref{fig:speaker_comparator_output}, the DC offset of the differential amplifier output is well above the threshold throughout the range for all attacking signals other than the $\SI{100}{\milli\volt}$ one, indicating the existence of the attack. We see that the DC offset increases when the attack power is increased, so for attacking signals with peak-to-peak voltages of $\SI{300}{\milli\volt}$, $\SI{500}{\milli\volt}$, and $\SI{700}{\milli\volt}$, the DC offsets are always above the threshold (solid blue line) regardless of the frequency. When the attacking signal is $\SI{100}{\milli\volt}$, a few attacks fall below the threshold when the carrier frequencies reach $\SI{800}{\mega\hertz}$ and $\SI{900}{\mega\hertz}$. Referring back to the impact of these two attacking signals in Figure~\ref{fig:speaker_amp_output_attack_ratio}, the ratios indicate that the impacts are so tiny that they are unlikely to have any significance for a practical system. Since our detection method successfully detects 389 out of 400 attacking signals, the true-positive rate is $97.25\%$. In Figure~\ref{fig:speaker_comparator_output}, the DC-offset curves fluctuate with the attack frequency. This is because the attacking signal is injected wirelessly instead of through DPI.
The transfer function of the wire accounts for the ups and downs of the curves: the attacking signal is injected into the wire efficiently at the specific frequencies where the DC offset reaches its local maxima, and less efficiently at other frequencies. \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/plot/}{speaker_comparator_output.pdf_tex} \caption{The DC offset of the differential amplifier output varies with the voltage level and the frequency of the attacking signal. The peak-to-peak voltage of the attacking signal is from $\SI{100}{\milli\volt}$ to $\SI{700}{\milli\volt}$.} \label{fig:speaker_comparator_output} \end{figure} The experimental results show that the frequency range covered by the differential amplifier is easily large enough to protect the frequency band that the speaker system is vulnerable to. Our detection method demonstrates the feasibility of detecting attacking signals with frequencies from DC to far beyond the speaker system's operational band. Moreover, given the wireless injections, our detection method demonstrates its capability of handling real attack scenarios. We present concrete attacking signals that can precisely manipulate the audio frequencies, but this does not mean that our detection method can only handle these specific attacking signals: any attack that causes voltage changes of the differential amplifier output signal beyond the pre-determined threshold is spotted immediately. \subsection{Motor Control System} \label{sec:motor} \begin{figure}[t] \def1.5{0.60} \centering \import{Figures/block/}{experiment_setup.pdf_tex} \caption{A motor driver is used to amplify a control signal to drive a motor.} \label{fig:experiment_setup} \end{figure} In the motor control system, a pulse signal is used to control the rotating speed of the motor. The duty cycle of the pulse signal is the fraction of each cycle during which the signal is at the high voltage level.
The larger the duty cycle, the faster the motor rotates. As mentioned in the setup, a motor driver is used as a signal conditioner to amplify the control signal into a powerful drive signal that energizes the motor. The motor driver is made of transistors; for simplicity, as shown in Figure~\ref{fig:experiment_setup}, they can be regarded as two switches that are connected in series and are controlled by the pulse signal. Since these two switches work in opposite ways, the output signal toggles between $V_{CC}$ and the ground in the same pattern as the input signal. The attacker's objective is to manipulate the duty cycle and thereby impact the functionality of the motor. \subsubsection{Determining Threshold} \label{sec:motor_setup} When no attack is present, the differential amplifier output signal is recorded, and the threshold is set to $\SI{0.17}{\milli\volt}$. This threshold value is chosen because it brings the false-positive rate to its minimum ($0\%$) in our experimental environment; it is also small enough to spot the weakest attacks, as shown below. \subsubsection{Detection of Attacks} \label{sec:motor_detection} \begin{figure}[t] \def1.5{0.55} \centering \import{Figures/plot/}{motor_comparator_output.pdf_tex} \caption{When an attack happens, the DC offset of the differential amplifier output is always above the threshold, indicating that the attack is detected.} \label{fig:motor_comparator_output} \end{figure} Since the differential amplifier is specifically designed to handle input differences in its operational band, it is not difficult to detect in-band attacks. We therefore do not repeat the in-band attacks here but focus on the out-of-band attacks, which are realized wirelessly. In the experiments, the frequency of the attacking signal ranges from $\SI{30}{\mega\hertz}$ to $\SI{90}{\mega\hertz}$, and the peak-to-peak voltage ranges from $\SI{900}{\milli\volt}$ to $\SI{1300}{\milli\volt}$.
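The attacker's objective, shrinking the duty cycle by forcing the driver output low during part of each period, can be illustrated with a toy pulse train. The numbers are illustrative and do not model the RF coupling itself:

```python
import numpy as np

samples_per_period = 1_000
periods = 10
phase = np.arange(samples_per_period * periods) % samples_per_period

# A 75% duty-cycle control signal: high for the first 750 samples of each period.
pwm = (phase < 750).astype(float)

# The attack forces the driver output low for the first 20% of each period,
# even though the control signal is high there.
attacked = pwm.copy()
attacked[phase < 200] = 0.0

print(np.mean(pwm), np.mean(attacked))  # 0.75 0.55
```

A smaller duty cycle translates directly into a slower motor, so precisely gating the attack on and off gives the attacker a degree of speed control.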
The frequency of the attacking signal is kept below $\SI{90}{\mega\hertz}$ because beyond this frequency, the motor driver never responds to the attacking signal, even when the peak-to-peak voltage of the attacking signal reaches the signal generator's upper limit. The peak-to-peak voltage is kept above $\SI{900}{\milli\volt}$ because below this voltage level, the attacking signal is too weak to affect the motor driver. In our experiment, the RF signals can cause the motor driver to output a low voltage level when a high voltage level should be output. In other words, the duty cycle of the pulses can be reduced. We can precisely control when to start and stop radiating the attacking signals, and hence the duty cycle of the control signal can be precisely manipulated, further controlling the motor speed. Note that other types of attacking signals can also increase the duty cycle~\cite{selvaraj2018electromagnetic}; however, since this experiment focuses on attack detection, we do not further discuss how to control the motor speed. Regarding the attack detection, both the peak amplitude and the DC offset of the differential amplifier output signal are checked. Under these out-of-band attacks, the peak amplitude is always below the threshold. However, as shown in Figure~\ref{fig:motor_comparator_output}, the DC offset is always above the threshold, indicating an attack. All (210 out of 210) DC offsets are above the threshold, meaning that all attacking signals are detected; the true-positive rate is therefore $100\%$. \subsection{Summary of Implementation} \label{sec:implementation_summary} The implementation of our detection method on the speaker system and the motor control system shows the generality of our detection method regardless of the type of signal. The deployments also demonstrate the simplicity of implementing the detection method in practice.
The high true-positive rates and low false-positive rates in the speaker system and the motor control system show the robustness of the detection method across different actuator systems. \section{Discussion} \label{sec:discussion} \subsection{Different Detection Strategies} \label{sec:strategies} \begin{figure}[t] \def1.5{0.60} \centering \import{Figures/block/}{timeline.pdf_tex} \caption{The timeline of an attack, from the radiation of an attacking signal to the actuator misbehaving. At different moments, the attack can be detected in different ways.} \label{fig:timeline} \end{figure} An electromagnetic signal injection attack has several distinct phases, each of which gives rise to different detection strategies. In the attack timeline shown in Figure~\ref{fig:timeline}, three moments are highlighted: the first moment is when the attacking signal is radiated; the second is when the attacking signal is injected into a wire in the target system; the third is when the actuator starts deviating from its intended activity. Each of the three moments marks the start of a new phase of the attack. The three phases should not be thought of as sequential, since an attack of any meaningful duration will be in all three phases at once; rather, they are three opportunities to detect the attack. In the first phase, when attacking signals are being radiated, the electromagnetic radiation can be detected in the environment using an antenna. Thus, the detection strategy is to monitor the environmental electromagnetic power level: if the power level is above a pre-defined threshold, or perhaps outside a known noise profile, the attack is detected. This strategy has the potential to detect an attack early; however, it requires a monitoring device that can reliably detect adversarial interference at the frequencies that would harm the target system, which is not always easy to achieve.
Examples of this detection strategy are some of the anomaly detectors that will be introduced in Section~\ref{sec:related_work_detection_anomaly}. In the second phase, when the attacking signals are successfully injected into the actuator system, the signals on the actuator system's wires are changed. Since the attacking signals are not supposed to exist in the actuator system, these changes are a reliable indicator of an attack, if they can be measured. Our detection method uses this second form of detection, i.e., we detect signals that are successfully injected into the actuator system. In the final phase, the actions of the actuator will deviate from what the system expects, assuming the attack is powerful enough to cause a measurable change. If the system can detect this behavior change, it can be used to detect the attack. This might be an attractive detection strategy since no effort is wasted on attacks that do not have a measurable effect on the target actuator. However, detecting such attacks typically requires extra sensors, and by its nature, this strategy only detects attacks after they have already affected the system. An example of this detection method is Muniraj and Farhood's work~\cite{muniraj2019detection} that will be introduced in Section~\ref{sec:related_work_detection_actuator}. \subsection{Adaptive Threshold} \label{sec:adaptive_threshold} In our implementation of the detection method, we find a proper threshold by experiment and keep it constant while testing the performance of our detection method. The advantage of using a constant threshold is that once a proper threshold is found, it remains effective, and the designer never has to adjust it again. However, in cases where the environmental noise varies significantly and unpredictably over time, the designer can program the actuator system to adjust its threshold adaptively when necessary, giving the system more flexibility.
Imagine a simple case: during the daytime, the noise is intense because of human activities (e.g., wireless communications, transportation), but at midnight when people sleep, the noise becomes relatively weak. During the daytime, the threshold can be slightly increased to tolerate more noise and thus avoid the noise frequently triggering the detection. At midnight, to restore the detection method's sensitivity to attacks, the designer can program the actuator system to lower the threshold. No matter how the designer adjusts the threshold adaptively, it is still essential to guarantee that the detection method meets the requirements mentioned in previous sections: first, no noise triggers the detection accidentally; second, no attack that effectively impacts the actuator is missed. \subsection{Difficulty of Canceling Attacking Signals} One idea for mitigating the influence of attacks is to generate an ``anti-attack'' signal that cancels out the attacking signal. The anti-attack signal and the attacking signal have the same frequency and amplitude but are 180 degrees out of phase; when they meet, they cancel each other. This idea is similar to the active noise cancellation technology used in headphones. However, such cancellation is hard to realize for electromagnetic interference. In the air, an electromagnetic signal propagates at close to the speed of light; in a circuit, the speed roughly halves. In addition, it takes time for the actuator system to capture the attacking signal and then generate the anti-attack signal for the cancellation. This means that the anti-attack signal always lags behind the attacking signal, and it is difficult to synchronize the two unless the microcontroller can predict the attacking signal.
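The synchronization argument can be made concrete with a back-of-the-envelope calculation; the latency figure below is a hypothetical, optimistic guess:

```python
attack_freq = 1e9            # a 1 GHz attacking signal (illustrative)
period = 1 / attack_freq     # one carrier period: 1 ns
latency = 1e-6               # assumed time to capture the attack and emit the
                             # anti-attack signal: 1 microsecond (optimistic)

# Even this optimistic latency spans a thousand carrier periods, so the
# anti-attack signal's phase is effectively unrelated to the attack's.
print(round(latency / period))  # 1000
```

Unless the attacking waveform can be predicted ahead of time, a lag of many carrier periods makes phase-aligned cancellation infeasible.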
\subsection{Difficulty of Drive Signal Injection} \label{sec:difficulty_of_drive signal_injection} As mentioned previously, compared with control signal injection, drive signal injection may require much more power if the actuator is power-hungry. We estimate the power of a drive signal injection as follows. According to its datasheet, an off-the-shelf motor needs a drive signal of around $\SI{4.5}{\watt}$, whereas a microcontroller such as an Arduino Uno outputs a control signal of only $\SI{0.1}{\watt}$. For simplicity, we suppose that the attenuation of the attacking signals is the same in the two injections. Then, the attacker needs to radiate at least $\frac{\SI{4.5}{\watt}}{\SI{0.1}{\watt}}=45$ times more power to realize the drive signal injection than the control signal injection. This implies that it is much more difficult and costly to conduct a drive signal injection than a control signal injection in practice. Further evidence that drive signal injection is hard to achieve comes from regarding the injection as wireless power transmission~\cite{shinohara2014wireless}: in wireless power transmission techniques, both the transmitter and receiver antennas are specifically designed to achieve the power transfer. Given that the wire in the actuator system works as a low-gain antenna, delivering enough power into the drive signal wire is much more challenging. \section{Related Work} \label{sec:related_work_existing_defenses} Many countermeasures against electromagnetic signal injection attacks have been proposed and developed; however, it should be noted that protecting sensors has been studied much more extensively than protecting actuators. The countermeasures can be categorized into two types: attenuation, which aims to reduce attack impacts, and detection, which aims to spot the existence of attacks.
\subsection{Attenuation} \label{sec:related_work_attenuation} Wrapping components with proper RF shielding materials is a common method to attenuate attacking signals~\cite{kasmi2015iemi,kune2013ghost, Kasper2009PACf,selvaraj2018electromagnetic, Markettos2009Tfia, osuka2018information,giechaskiel2019framework, shin2016sampling, tu2019trick, sp20Zhang,wang2022ghosttouch}. However, shielding materials provide only finite attenuation~\cite{tesche1996emc}, and a powerful attacker may breach the protection by increasing her attack power. Although thicker shielding materials increase the attenuation level, they also increase the weight and size of the device, which is problematic for applications such as implantable medical devices and aviation. In addition to shielding materials, for traces on a printed circuit board (PCB), researchers have suggested that via-fenced striplines can also attenuate attacking signals by a finite amount~\cite{dayanikli2020senact, dayanikli2021electromagnetic}. Filtering is another prevalent solution to mitigate attacking signals. Low-pass filters can significantly attenuate out-of-band attacking signals~\cite{giechaskiel2019framework, selvaraj2018electromagnetic, kune2013ghost, tu2019trick,osuka2018information, sp20Zhang}. However, in-band attacking signals can still pass through the low-pass filters. Researchers have also pointed out that the parasitics in surface-mount components can turn a low-pass filter into a band-stop filter, which allows out-of-band attacking signals to pass~\cite{ryanhurley2007}. Besides, Kune et al.~\cite{kune2013ghost} proposed deploying an adaptive filtering mechanism~\cite{proakis2001digital} that makes use of knowledge about ambient electromagnetic emissions to attenuate the interference in sensor measurements. Crovetti and Musolino~\cite{crovetti2021digital} also proposed a digital way to suppress EMI-induced errors in sensor measurements.
However, it is challenging to apply such digital methods to an actuator, because the actuator has no computational capabilities. Furthermore, Kune et al.~\cite{kune2013ghost} also recommended using differential rather than single-ended comparators to attenuate the attacking signals in a finite frequency band, thereby raising the bar for attackers. \subsection{Detection} \label{sec:related_work_detection} \subsubsection{Anomaly Detection} \label{sec:related_work_detection_anomaly} One idea for detecting attacks is to add a dedicated channel that monitors whether abnormal electromagnetic signals or activities appear. Note that although some of the following approaches were initially designed for sensors, similar ideas may work in actuator systems, too. Researchers have developed standalone detection systems that capture electromagnetic waves with dedicated antennas and then use intricate circuits to process the captured signals for detection~\cite{adami2014hpm,adami2011hpm,dawson2014cost}. Kune et al.~\cite{kune2013ghost} investigated using extra antennas or conductors to capture and measure attacking signals for detection, and the measurements can then be used by their adaptive filtering mechanism as mentioned previously. In a similar vein, Tu et al.~\cite{tu2021transduction} proposed adding a dummy sensor for detection and correction. In another work, Tu et al.~\cite{tu2019trick} proposed leveraging the superheterodyne technique to create an anomaly detector that checks whether sensor measurements carry malicious frequency components. Note that these approaches rely on knowledge of the waveforms of the attacking signals, which are usually high-frequency (e.g., MHz or GHz). Thus, they require electronic components (e.g., high-speed ADCs) that can properly handle high-frequency signals, as well as extra computing resources to process the captured signals for detection purposes, implying significant implementation overheads in both hardware and software.
In comparison, our approach relies on the signal strength difference between the primary and reference signals to detect attacks, rather than on waveforms. This gives our approach advantages over the others: first, a simple detection circuit made of differential amplifiers is used to catch the difference, and such a detection circuit incurs lower hardware overhead; second, an interrupt pin of the microcontroller is configured to handle the output of the detection circuit and determine whether an attack happens, which needs fewer computing resources. Besides, our approach does not require any RF interface to capture the attacking signals, thus avoiding the trouble of crafting dedicated RF interfaces, as well as preventing extra attack power from entering the victim devices and causing other unwanted effects. \subsubsection{Detection Methods for Sensor Systems} \label{sec:related_work_detection_sensor} For sensor systems specifically, Zhang and Rasmussen~\cite{sp20Zhang} proposed a generalized detection method that selectively turns off the sensor in a secret way to observe whether attacks alter the sensor measurements. Shoukry et al.~\cite{shoukry2015pycra} proposed similar detection methods, but they were designed for specific types of sensors and require significant computational overheads. Succeeding studies~\cite{kohler2021signal, ruotsalainen2021watermarking} further adapted these detection methods to more practical applications. Fang et al.~\cite{fang2022detection} proposed adding unique noise (fingerprints) to sensor measurements and using machine learning techniques to detect the attacks. In addition, in specific devices such as cardiac implantable electrical devices (CIED)~\cite{kune2013ghost} and smartphones~\cite{wang2022ghosttouch}, researchers utilized users' reactions or behaviors while using these devices to identify the existence of attacks on the sensors.
A few works have mentioned that multiple built-in sensors of a device can react to variations in the electromagnetic environment, and these characteristics can be exploited to detect abnormal electromagnetic activities~\cite{kasmi2015iemi, kasmi2015automated}. Such a detection approach is also known as sensor fusion, which has been widely studied to detect signal injections that use other types of attacking signals such as ultrasonics and lasers~\cite{giechaskiel2019sok, yan2020sok}. These detection methods work well for sensors because the computational capabilities of the receiver (the microcontroller) make authentication possible. However, it is not easy to apply similar ideas to actuator systems, because the receiver (the actuator) lacks the computational capabilities to authenticate its input signals. \subsubsection{Detection Methods for Actuator Systems} \label{sec:related_work_detection_actuator} Reliable sensor measurements can be used to indicate whether actuators are under attack. In unmanned aircraft systems, Muniraj and Farhood~\cite{muniraj2019detection} proposed to artificially cause minor disturbances to the actuators at random times and use sensors to capture the disturbances; if unexpected disturbances are detected, an attack is found. However, this method trades off the stability of the whole system against its security. The same authors proposed another detection method that casts the actuator attack detection problem as an unknown input estimation problem and uses a two-stage extended Kalman filter to estimate actuator attacks from sensor measurements, requiring additional computational power. In addition to these two detection methods, the authors also proposed a method that adds randomness to control signals to improve the resilience of the actuator against malicious attacks. Our approach surpasses these detection methods in three aspects.
First, they require a complex model of the specific actuator system, which makes them difficult to apply to other applications, whereas our approach generalizes across actuator systems. Second, they need extra computing resources to run the detection algorithms, whereas we can use the interrupt mechanism of the microcontroller for detection, which is more efficient. Third, their detection methods always spot attacks after the actuator misbehaves; in contrast, our approach detects the attacks earlier, thus possibly allowing the actuator systems to take proper measures to stop or mitigate the attacks. \section{Conclusion} \label{sec:conclusion} In this paper, we have proposed a novel detection method that can detect electromagnetic signal injection attacks on actuator systems. This class of systems previously had to rely on physical security measures and signal decay, and had no meaningful security guarantees against a determined adversary. Our detection system fills this critical gap and provides strong detection guarantees to any actuator system. The core idea of our detection method is straightforward: any difference caused by external attacks between two identical signals (the primary signal and the reference signal) indicates the attacks. Our detection method provides provable guarantees against attacks, and can be tuned to any attack power and any amount of environmental noise. We have shown that our detection method provides the actuator system with a strong security guarantee, and an attacker who attempts to effectively manipulate the actuator system will always be detected by our detection method. Despite this, our detection method requires only a few cheap off-the-shelf electronic components and does not add any significant weight to the system it protects. This is important in many contexts, such as aviation and implantable medical devices.
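The core comparison can be illustrated in software, although the actual detector works with differential amplifiers and a microcontroller interrupt pin rather than sampled arrays; the function name, threshold value and toy signals below are illustrative only, not the paper's implementation.

```python
import numpy as np

def detect_injection(primary, reference, threshold):
    """Flag an attack when the primary and reference signals diverge by
    more than a tunable threshold (a software stand-in for the comparator
    trip level of the hardware detection circuit)."""
    difference = np.abs(np.asarray(primary) - np.asarray(reference))
    return bool(np.any(difference > threshold))

t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2.0 * np.pi * 5.0 * t)

# Identical signals: no alarm, regardless of their common content.
assert detect_injection(clean, clean, threshold=0.1) is False

# An injected component couples into the primary path only: alarm.
attacked = clean + 0.5 * np.sin(2.0 * np.pi * 50.0 * t)
assert detect_injection(attacked, clean, threshold=0.1) is True
```

Because the decision depends only on the difference of the two paths, the common signal content cancels out, which mirrors why the hardware needs no knowledge of the attacking waveform.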
Moreover, implementing the detection method on a speaker system and a motor control system demonstrates its generality across actuator systems, as well as its effectiveness and robustness in a practical setting. \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Public authorities currently play a significant role in encouraging sustainable development policies, giving impetus to sustainable urban mobility practices that aim to reduce the use of private cars and increase the use of sustainable transport modes, such as public transport. In urban and peri-urban areas, this transport strategy faces a number of challenges, including the regularity, quality of service and congestion of public transport. One of the major goals of stakeholders (operators and authorities) is to adapt the schedules as accurately as possible to the passenger demand during each specific period (e.g., normal period, period under events, disturbed period, special day, and so on). According to transport operators, another goal is to anticipate the demand for disposable tickets or passes (non-rechargeable smart cards) to match ticket availability to passenger demand during a specific period, particularly event periods (e.g., concerts, sports games, shows, exhibitions, and so forth). Furthermore, information on the number of passengers per type of ticket or pass per quarter hour can be used to provide mobility services adapted to the different types of passengers (regular/occasional). For example, a larger number of agents may be made available in the event of a high number of occasional passengers to help manage the extra passenger flow. To address these issues, we propose a generic data shaping of contextual data, allowing the use of well-known regression models for long-term forecasting of passenger demand with fine-grained temporal resolution. In this study, we forecast the number of passengers entering each of the 68 metro stations in the city of Montréal, Canada, until one year ahead by taking calendar information and planned events into account. We also predict passenger demand per type of ticket or pass used to travel to address the problem of adapting ticket availability to passenger demand.
The aggregation time window for the number of passengers has been chosen as 15 minutes, which permits the precise analysis of the impact of events on passenger demand and is relevant to adapting transport supply. We compare several well-known forecasting models, including basic, statistical and machine learning models. In this context, we analyse the use of contextual data such as information about the day and an event database provided by the public transportation authority of Montréal (Société de transport de Montréal, STM). This methodology aims to be reproducible to forecast the passenger demand for other transport networks around the world (depending on the availability of equivalent data sets in the other cities). The main objective of this study is to determine whether it is possible to predict the number of passengers using the calendar and event information available in advance (in this case, available one year in advance), with the following innovative aspects: \begin{itemize} \itemsep=0pt \item Predict the number of incoming passengers at each station of a transportation network (68 stations in the Montréal metro network in Canada) \item Propose a generic data shaping of contextual data \item Carry out the study over a long period of time (2 years for the learning set and 1 year for the test set) \item Predict the number of passengers aggregated with a fine temporal resolution (15 minutes) \item Perform a detailed analysis of the forecasting results during different periods (e.g. event periods and periods without events) \item Forecast the number of passengers and perform an analysis of this forecast based on the type of ticket or pass used to travel. 
\item Compare several forecasting methods, including basic methods, statistical methods and machine learning methods \end{itemize} First, early forecasting of the number of aggregated passengers per quarter hour at each station is useful to transport operators to help them improve the planning of the transport supply schedule (e.g., the number of subways per quarter hour, when planning to increase the supply of related transport systems such as buses) to match it as closely as possible to passenger demand. In addition, this demand forecast can be used to plan the presence of agents, secure stations in the case of excessive passenger traffic and allow passengers to avoid overcrowded situations. Note that this approach can only be applied to fixed networks. To be effective, the approach requires a historical data set that includes the occurrence of events at a station; otherwise, the forecasting model will not be able to take into account the event information at a station that never hosted an event in the historical database. The remainder of this paper is organised as follows. Section~\ref{sec:related_work} details the related work. The case study is presented in Section~\ref{sec:casestudy}. Section~\ref{sec:forecasting} details the forecasting methods and the data shaping that we have developed. Section~\ref{subsec:forecast_results_allpass} describes the forecasting results on the global aggregation of type of ticket or pass used to travel, while Section~\ref{subsec:forecast_results_perpass} provides an analysis of the forecasting performance per type of ticket or pass used to travel. Finally, some possibilities for future research and conclusions are outlined in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} Since 2004, the use of smart card data to analyse mobility in public transportation has received substantial attention from researchers.
More recently, studies on mobility analysis have revolved around passenger demand forecasting. A distinction can be made between research that relates to forecasting OD matrices and research that attempts to forecast passenger flows at a specific point. Knowledge about these two factors is indeed essential for planning, operation and management in any transportation network, but each of these areas uses different types of data. The passenger demand goals differ depending on the forecasting time horizon. For long-term forecasting, the aim is to forecast demand with data available long in advance (e.g., time features and planned events), which can be very useful for improving transport supply scheduling. In contrast, the forecasting process can also account for the last observations, in which case it is generally referred to as short-term forecasting. Going forward, in the case of an atypical situation, the main goal for transport operators is to use the forecasted passenger demand to optimise transport system operation to match transport supply to the atypical demand or propose to the passenger an alternative way to reach their destination. \subsection{Short-term Forecasting of Passenger Demand in Public Transport} Short-term forecasting, i.e., forecasting a few time steps ahead, has been studied with different models. \cite{Li2017} used multiscale RBF networks to forecast the number of alighting passengers at different Beijing subway stations multiple time steps ahead (t+15 and t+30 minutes) by taking the number of boarding passengers at the other stations of the subway network into account. In this study, the authors performed an in-depth analysis of the results obtained under special event scenarios. Other examples of subway passenger flow forecasting include the work of \cite{Roos2016}, where the authors predicted passenger flows of the next time step (t+2 minutes).
The authors used a Bayesian network model and predicted multiple passenger flows (entry and exit) at all the stations of a subway line of the Paris network. In the study of~\cite{Cui2016}, the authors created a fuzzy nonlinear autoregressive exogenous model to predict the number of passengers at the next time step (t+1 hour). In addition to forecasting, \cite{Ding2016} conducted an in-depth analysis of the influence of subway predictor variables, such as bus transfer activities, and temporal features on the forecasting results and showed that the most important short-term forecasting features are the past observation of the metro ($\sim82.0$\%), the past observation of the bus ($\sim10.4$\%) and the prediction time step ($\sim3.6$\%). This study predicted the next time step (t+15 minutes) of passenger flows at 3 stations of the Beijing subway network. \subsection{Short-term Mobility Forecasting with Spatiotemporal Focus} A closer examination of the most recent studies about short-term forecasting in the transportation field reveals high spatial and temporal values in such prediction problems. For example, in ride-sharing demand forecasting, a research team from Uber (\cite{Laptev2017}) studied Uber ride-sharing demand data with a focus on the temporal values for extreme event forecasting. \cite{ke2017} focused on capturing knowledge from the spatiotemporal information of the ride network via a deep learning approach. Similar approaches have been performed by~\cite{Zhang2017} to predict citywide crowd flows and by~\cite{Yao2018} to predict taxi demand. Studies that spotlight the spatiotemporal aspect of traffic forecasting have also been conducted by~\cite{Wu2016,cheng2017} with a combination of convolutional and recurrent neural network models and by~\cite{yu2017} with a graph convolutional neural network model. 
\subsection{Event Data Usage in Short-term Forecasting} Some studies have shown the importance of external data, especially event data, for improving the prediction accuracy of forecasting models. Events such as concerts, shows, and sports games are sources of disturbance regarding human mobility. \cite{Ni2017} developed short-term prediction approaches to forecast subway passenger flows for the next 4 hours using social media data. The authors focused on predicting the total number of passengers (sum of entry and exit) of one subway station of the New York City network. They proposed a two-step methodology: hashtag-based event detection followed by the combined use of linear regression and a seasonal autoregressive moving average model. More recent studies conducted by \cite{Markou2018, Rodrigues2019} involved automatic event data collection, where the authors worked on the short-term forecasting of taxi demand in two distinct locations in New York city by using deep learning methods. In these studies, the model comparison showed that event categorisation could significantly help forecasting models obtain better results. As shown in Table~\ref{tab:relatedwork_shorttermforecasting}, numerous studies consider short-term forecasting with various methods and forecasting horizons. 
\begin{table}[!htb] \centering \caption{Related work on short-term forecasting}\label{tab:relatedwork_shorttermforecasting} \begin{threeparttable} \begin{tabularx}{1.\textwidth}{p{38mm}Xp{18mm}p{24mm}XX} \hline Reference & Method & Mode & Aggregation&Horizon & Event \\ \hline \cite{Li2017} &RBF&Subway&15 min&1,2& No\\ \cite{Roos2016} &Bayesian&Subway&2 min&1&No\\ \cite{Cui2016} &AR&Subway&1 h&1&No\\ \cite{Ding2016} &MLP&Subway&15 min&1&No\\ \cite{Laptev2017} &LSTM&Taxi&1 day&1&No\\ \cite{ke2017} &CRNN&Taxi&1 h&1&No\\ \cite{Zhang2017} &CRNN&Taxi\&Bike&1~h \& 30~min&1\&1&No\\ \cite{Yao2018} &CRNN&Taxi&30 min&1&No\\ \cite{Wu2016} &CRNN&Traffic&5 min&1&No\\ \cite{cheng2017} &CRNN&Traffic&15 min&1,2,3,4&No\\ \cite{yu2017} &GCNN&Traffic&15 min&1,2,3&No\\ \cite{Markou2018} &GP &Taxi&1 h&1&Yes\\ \cite{Rodrigues2019} &LSTM&Taxi&1 day&1&Yes\\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item RBF represents radial basis function network. AR represents autoregressive method. MLP represents multilayer perceptron. LSTM represents long short-term memory introduced by \cite{hochreiter1997}. CRNN represents a different architecture of neural network with convolution and recurrent neural network. GP represents Gaussian process. GCNN represents graph convolutional network. \end{tablenotes} \end{threeparttable} \end{table} \subsection{Long-term Passenger Demand Forecasting} To the best of our knowledge, only a few resources related to long-term forecasting with fine-grained resolution are available in the literature, unlike short-term forecasting. The study most related to our work is the study of \cite{Pereira2015}. The authors worked on long-term forecasting approaches using event data extracted from the web as features to forecast the aggregated number of passengers per half hour of tap in/out of 3 subway and 11 bus stops assigned to 5 venues in the city of Singapore. Their study was performed on a data set with a total period of 16 days. 
They demonstrated that using event information (online information) combined with public transport data can improve the quality of transport prediction under special events. In this study, we investigate the problem of long-term (one year ahead) passenger demand forecasting, represented as the number of tap ins aggregated by 15-minute intervals of all the subway stations (68 stations) in the city of Montréal, Canada, and the use of an event database provided by the public transport authority of Montréal. The real data set spans a long period (3 years). We propose a data shaping method that allows the use of well-known regression models for long-term forecasting of passenger demand. Moreover, we study the forecasting of the passenger demand per type of transit fare to provide an in-depth analysis of the passenger demand forecasting, thus helping transit operators adapt the availability of specific pricing during special events. \section{Case Study} \label{sec:casestudy} The forecasting of transport demand at each station of a public transport network is a challenging task, mainly due to the influence of several well-known factors introduced by~\cite{Zhang2017} on crowd flows and on transport demand. These factors can be summarised as follows: temporal factors, including time interval and the type of day, i.e., Monday, Tuesday, ..., Sunday; public or school holidays; and extra day off. Spatial factors include the type of area where the station is located (e.g., residential, office, shopping, and areas of interest). Predictable factors include weather, events, transport operator strikes and renovations. Unpredictable factors include transport network disruption that could be induced by a technical problem (rail problem, fire accident), a passenger problem or another factor that could severely impact the transport supply. In this study, we aim to perform one-year-ahead forecasting by taking the temporal, spatial and contextual factors into account.
To this end, temporal and contextual data that are available one year ahead will be used as inputs of the forecasting models. In the following sections, we detail the smart card entry logs, the time features and the event database. \subsection{Smart Card Entry Logs} \label{subsec:smartcardentrylogs} The real data set used to evaluate the proposed methodology was provided by the public transport authority of Montréal, Canada (Société de transport de Montréal, STM). The ticketing logs used in our study are generated by the validation of passes and tickets on automated fare collection systems at each user's entry into the transport network. We address all 68 subway stations in the city. The data set consists of ticketing logs aggregated by 15-minute intervals during 2015, 2016 and 2017. The studied subway network handles more than 670k passengers every day. We also forecast the number of passengers by the type of ticket or pass used to travel. We have aggregated the passengers according to their type of ticket or pass: STM monthly pass, regional monthly pass, book tickets and occasional passes. Disposable tickets include tickets used occasionally (occasional passes), 1- or 2-way tickets, 1- or 3-day passes, weekend passes, special event tickets, etc. From Figure~\ref{pie_chart_pass_use_daystation-event}, we can see the percentage of passengers entering the subway network according to their type of ticket or pass during the global period from 2015-2017 and during the event period (pairs of days and stations with events). During the global period, the most used pass is the STM monthly pass, with approximately 140M entries per year, which represents 58\% of the passenger demand. On the other hand, during event periods, the percentage of occasional passes increases significantly to 29.2\%, versus 15.7\% during the global period.
\begin{figure}[!htb] \centering \includegraphics[height=10em]{pictures/pie_chart_pass_use_1.png} \hspace{-1em} \includegraphics[height=10em]{pictures/pie_chart_pass_use_daystation-event_1.png} \caption{Use of the type of ticket or pass in percentage (2015-2017): global passenger demand on the left, passenger demand during event period on the right (pairs of days and stations with events)} \label{pie_chart_pass_use_daystation-event} \end{figure} \subsection{Detailed Calendar Information} \label{subsec:dayinformation} Passenger demand mainly depends on the day type; therefore, we created a list of nine day features, as follows: \begin{itemize} \itemsep=0pt \item Name of the day of the week (e.g., Monday, Tuesday, and so on) \item Month (e.g., January, February, and so on) \item Holiday (e.g., Christmas day, New Year's day and so on.) \item 24th of December \item 31st of December \item Christmas holiday \item Summer university holiday part 1 (intensive session, Université de Montréal) \item Summer university holiday part 2 (regular session, Université de Montréal) \item Renovation period that took place at the Beaubien station over 4 months in 2015 \end{itemize} This list of features is certainly specific for this transport network and this city, but it could easily be modified to suit another transport network and city. \subsection{Event Data} \label{subsec:eventdata} Passenger demand strongly depends on different contextual factors. Some factors cannot be planned far in advance (e.g., weather, transport network disruptions, and so forth), whereas others can be planned in advance, such as the presence of large events in a city (sports games, festivals, concerts, and so on). We could manually create or even automatically extract such event databases, as shown in previous studies conducted by \cite{Moreira2016, Markou2018, Rodrigues2019}. 
In our case, a real data set of events was provided by the STM operator, who manually built a calendar of events in the city of Montréal occurring during the three years of the study. Each event is characterised by a location (the nearest subway station), the event start and end times (format is "Y-m-d H:M:S", approximately 80\% of the event end times are available) and a manually built short text description of the event (description does not follow the same construction pattern, e.g., the same event could have different descriptions). We manually defined 10 topics as event categories so that the forecasting models can take event categorisation into account. Taking these types of data into account is a challenge because it involves a large and sparse encoding that is difficult to handle with regression models. Moreover, the end time of the event is not available for each event, which makes the interpretation of the event difficult. Figure~\ref{fig:numberofevents_station} shows the number of events per station and per category. We identify an event by the presence of a start time in the database (e.g., if the same event occurs on 4 consecutive days and is represented by 4 start times in the database, it will be counted as 4 events). As shown, most of the events occur near three stations: Lucien-L'Allier, Jean-Drapeau and Bonaventure. \begin{figure}[!htb] \centering \includegraphics[width=0.80\textwidth]{pictures/nb_events_perstation.png} \caption{Number of events per metro station that host the event and per category.} \label{fig:numberofevents_station} \end{figure} To provide an overview of the smart card data set and the differences in passenger demand that could occur between the same type of day with or without the presence of an event, the numbers of passengers on three different Mondays of the same month (April 2017) at the station named "Lucien-L'Allier" are depicted in Figure~\ref{fig:observation_3monday}.
Monday, April 3, 2017, is depicted by the green line and could be considered a normal Monday. We can observe the typical morning and evening peaks of passenger demand. Monday, April 10, 2017, is coloured in orange, and this day is considered special because an event (Def Leppard concert that finished at 11:00 p.m.) occurred on this day near this station. Finally, Monday, April 17, 2017, which is a holiday (Easter Monday), is depicted by the blue line. We can observe a decrease in passenger demand throughout Easter Monday (blue) compared to the normal Monday (green). However, we can see a highly concentrated increase in passenger demand due to the end of the concert during the Monday with the event (orange). \begin{figure}[!htb] \centering \includegraphics[width=0.80\textwidth]{pictures/observation_3days.png} \caption{Number of passengers on three different Mondays. April 3, 2017, corresponds to a normal Monday; April 10, 2017, is a Monday with an event (Def Leppard concert that finished at 11:00 p.m.); and April 17, 2017, is a holiday (Easter Monday).} \label{fig:observation_3monday} \end{figure} \section{Forecasting Workflow} \label{sec:forecasting} We aim to forecast the number of passengers entering each station of a transport network at each time step of a day until one year ahead. Here, we forecast the passenger demand of the 68 metro stations in the city of Montréal, Canada, at each quarter hour of a day (96 time steps per day) by taking planned events into account. We have compared the use of different sets of features as inputs of the forecasting models and different types of forecasting models. Section~\ref{subsec:configuration} details the data shaping and the compared set of features. The general description of the compared models is given in Section~\ref{subsec:methodology} and the evaluation metrics are described in Section~\ref{subsec:evaluationmetrics}.
\subsection{Data Configuration} \label{subsec:configuration} To evaluate the importance of each contextual data set, we trained the forecasting models with four input data sets (D1, D2, D3 and D4). Each of these input data sets corresponds to a specific concatenation of the following 4 sets of features: \begin{itemize} \itemsep=0pt \item A: Month and name of the day of the week, encoded as one-hot vectors. \item B: Holiday, 24th of December and 31st of December, Christmas school holiday, university holidays part 1 and part 2, and Beaubien station renovation period. These features are encoded as one-hot vectors. \item C: Start event, end event and period event at each station that hosts an event. For each station that hosts an event (29 stations), at each time step of the day (vector size 96), we counted the number of time steps related to event schedule information (3 features), namely, the start time, end time and event period. For example, if we encode this information for one station that hosts an event from 00:00 a.m. to 00:45 a.m., we obtain the following three vectors of size 96: (i) start time $[1, 0, ..., 0]$, (ii) end time $[0, 0, 1, 0, ..., 0]$ and (iii) event period $[1, 1, 1, 0, ..., 0]$. \item D: Category of the event (10 event categories). This has the same encoding as C, but the difference is that we counted the number of events per category. For each station that hosts an event (29 stations), at each time step of the day (vector size 96), for each category of event (10 categories), we counted the number of time steps related to event schedule information (3 features), namely, the start time, end time and event period. \end{itemize} The input data sets D1, D2, D3 and D4 are depicted in Table~\ref{tab:setoffeatures}.
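The event-feature encoding of set C can be sketched as follows (the function name and slot-index interface are ours, for illustration; the worked example reproduces the 00:00 a.m. to 00:45 a.m. event from the text):

```python
import numpy as np

STEPS_PER_DAY = 96  # number of 15-minute time steps in 24 hours

def encode_event(start_step, end_step):
    """Build the three per-station vectors of feature set C for one event:
    start time, end time and event period, each of size 96."""
    start = np.zeros(STEPS_PER_DAY, dtype=int)
    end = np.zeros(STEPS_PER_DAY, dtype=int)
    period = np.zeros(STEPS_PER_DAY, dtype=int)
    start[start_step] += 1                 # count events starting in this slot
    end[end_step] += 1                     # count events ending in this slot
    period[start_step:end_step + 1] += 1   # slots covered by the event
    return start, end, period

# The example from the text: an event from 00:00 a.m. to 00:45 a.m.
# occupies slots 0-2 and ends in slot 2 (00:30-00:45).
s, e, p = encode_event(start_step=0, end_step=2)
assert list(s[:4]) == [1, 0, 0, 0]
assert list(e[:4]) == [0, 0, 1, 0]
assert list(p[:4]) == [1, 1, 1, 0]
```

Using `+=` rather than assignment means several events at the same station on the same day accumulate into the same three vectors, consistent with counting the number of events per time step.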
\begin{table}[!htb] \centering \caption{Input data sets D1, D2, D3 and D4.}\label{tab:setoffeatures} \begin{threeparttable} \begin{tabularx}{0.98\textwidth}{cXXXXp{55mm}} \hline Data & A & B & C & D & Size\\ {D1}& \checkmark & & & & $11+6=17$ \\ {D2}& \checkmark&\checkmark & & & $17+7=24$\\ {D3}& \checkmark & \checkmark & \checkmark & & $24+96\times3\times29 = 8376$\\ {D4}& \checkmark &\checkmark & \checkmark & \checkmark & $8376 + 96\times3\times29\times10 = 91896$ \\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The set of features: A corresponds to the month and name of the day of the week, B corresponds to the detailed day features, C corresponds to the event features, and D corresponds to the category of the event features. \end{tablenotes} \end{threeparttable} \end{table} We have trained a specific forecasting model per station (total of 68 models) with daily multi-time-step output forecasting. This means that for each day, we perform a unique prediction that corresponds to a vector containing the forecasting of the number of passengers per quarter-hour intervals (output vector size is equal to 96, the number of 15-minute time steps in 24 hours). All the forecasting models (one model per station) have the same inputs and outputs, which are depicted in Figure~\ref{fig:input_output}. This figure depicts one input sample ($x_i \in X$) composed of features \{A,B\} and \{C,D\} corresponding to the features available until one year in advance of the forecasted $day_i$. Features A and B are detailed in Section~\ref{subsec:dayinformation} (e.g., day of the week, holiday, school holiday, and so forth). Meanwhile, features C and D are encoded per time step (96 quarters of hour per day) and correspond to the event features. 
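The daily multi-time-step output setup just described (one model per station producing a 96-value prediction per day) can be reproduced with any multi-output regressor; the toy features, targets and hyperparameter values below are illustrative, not the paper's data or tuned settings.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Toy stand-ins: 200 "days" described by 24 contextual features (the size
# of input data set D2) and, per day, 96 quarter-hour passenger counts.
X = rng.random((200, 24))
Y = rng.poisson(50.0, size=(200, 96)).astype(float)

# scikit-learn's ElasticNet accepts an (n_samples, n_targets) target
# matrix, so a single model per station predicts all 96 time steps of a
# day at once; alpha and l1_ratio are the hyperparameters tuned in the
# paper (values here are arbitrary).
model = ElasticNet(alpha=1.0, l1_ratio=0.5, max_iter=5000)
model.fit(X, Y)

prediction = model.predict(X[:1])  # one row of inputs -> one forecast day
assert prediction.shape == (1, 96)
```

The same `fit(X, Y)`/`predict` shape convention applies to the other compared regressors, which is what makes the data shaping generic across models.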
\begin{figure}[!htb] \centering \includegraphics[width=0.65\textwidth]{pictures/article_revue1_input_output-v2.png} \caption{Data shaping of one input sample ($x_i \in X$) composed of features \{A,B\} (day features) and \{C,D\} (event features) and of one output sample ($y_i \in Y$) that corresponds to the forecasting of the number of passengers entering a station at each of the 96 time steps. } \label{fig:input_output} \end{figure} \subsection{General description of the compared models} \label{subsec:methodology} We aim to forecast the passenger demand until one year ahead with a fine-grained temporal resolution (quarter-hour aggregation). In this context, it is not possible to use and optimise the parameters of autoregressive time series forecasting models (ARIMA, SARIMAX, and so forth) because of the size of the training data set and the multi-time-step-ahead prediction. Therefore, we compared different well-known models that can be used for regression problems. We computed a baseline model based on a historical average, a linear regression model, machine learning models such as random forest, gradient boosting decision trees and an artificial neural network, and kernel-based models including a support vector regressor and a Gaussian process. \subsubsection{Historical Average} The historical average model is a baseline model that aims to predict passenger flows based on the average value of historical observations by type of day in the corresponding time step. The type of day could be defined by a set of features, such as those depicted in Section~\ref{subsec:dayinformation}. For example, one may take the most basic feature, which is the name of the day of the week. In this case, the prediction for the time step of 8:00 a.m.-8:15 a.m. on a Monday corresponds to the average of all the historical values for Monday at 8:00 a.m.-8:15 a.m. The model computes the average number of passengers from the available historical data set, representing two years in our case.
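A minimal version of this baseline, using only the day-of-week example above (column names and the toy history are ours, for illustration):

```python
import pandas as pd

def historical_average(history, keys=("day_of_week", "time_step")):
    """Average the historical counts per (day type, time step) pair;
    predicting then amounts to a lookup in the resulting table."""
    return history.groupby(list(keys))["passengers"].mean()

# Toy history for one station: two past Mondays at the 8:00-8:15 a.m.
# slot (time step 32, since 32 * 15 minutes = 8 hours).
history = pd.DataFrame({
    "day_of_week": ["Monday", "Monday", "Tuesday"],
    "time_step": [32, 32, 32],
    "passengers": [100, 120, 80],
})
table = historical_average(history)

# Forecast for any future Monday at 8:00-8:15 a.m.: the historical mean.
assert table.loc[("Monday", 32)] == 110.0
```

Richer day types (holidays, school holidays, and so on) amount to adding the corresponding columns to `keys`.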
This approach is the baseline of the long-term forecasting models. \subsubsection{Linear Regression} Linear regression is a statistical model that assumes a linear relationship between the dependent variable and one or more explanatory variables (or independent variables). We use as input more than one explanatory variable, and we predict more than one dependent variable (total of 96 dependent variables). In this context, we choose a multivariate linear regression. To prevent the collinearity phenomenon with categorical features, such as the name of the day of the week, we formatted the data by deleting one of the categories. To avoid overfitting, we computed the linear regression with the elastic net regularisation introduced by \cite{Zou2005} that linearly combines the L1 and L2 penalties of the lasso and ridge methods. We optimised the hyperparameters alpha and l1\_ratio, where alpha is a constant that multiplies the penalty terms L1 and L2, and l1\_ratio corresponds to the penalty term associated with the L1 method and (1-l1\_ratio) to the L2 method. \subsubsection{Random Forest (RF)} \label{subsubsection:randomforest} Random forest is a well-known machine learning model whose effectiveness for performing regression or classification problems has been widely proven for many real-world applications. The model introduced by \cite{breiman2001} is an ensemble learning algorithm based on the average prediction of different decision trees (forest). Each tree is fit on different parts of the data, which were created by applying two sampling methods: random sampling with replacement, which is also known as the bootstrap aggregation or bagging method, and random selection of features, which is called feature bagging. The bagging methods and the averaging of the results obtained by the different trees make the RF more robust and accurate than a simple decision tree. 
We optimised the hyperparameters n\_estimators, which corresponds to the number of trees used by the model; min\_samples\_split, which corresponds to the minimum number of samples required to split an internal node; min\_samples\_leaf, which is the minimum number of samples required to be at a leaf node; and max\_features, which is the number of features to consider when searching for the best split. \subsubsection{Gradient Boosting Decision Tree (GBDT)} Gradient boosting, introduced by \cite{Friedman2001}, is a machine learning model for regression or classification tasks that uses an ensemble of weak prediction models, such as decision trees in our case, to create a prediction model. Similar to most of the other boosting methods, GBDT builds weak learners (decision trees) one at a time, where each new tree helps to correct the errors made by previously trained trees. After a tree is added, the next tree is fit to the residual errors (the negative gradient of the loss function) of the ensemble built so far. This technique helps successive trees focus on the input data that the previous trees predicted poorly. We optimised the same hyperparameters as the random forest model detailed in Section~\ref{subsubsection:randomforest}. \subsubsection{Artificial Neural Network (ANN)} An artificial neural network, also known as a neural network, is a computational model based on the structure and functions of biological neural networks. Each neuron receives inputs and biases, multiplies them by their weights, sums them and combines that sum with their internal state (activation function) to produce an output. In our case, we used the rectified linear unit (relu) function as the activation function of the hidden layer neurons, and the identity function for the neurons of the last layer (used as default by the scikit-learn library for regression problems).
We optimised the number of layers and the number of neurons per layer, and we used the early stopping technique to stop the training of the model automatically. \subsubsection{Gaussian Process Regressor (GP)} \cite{Rasmussen2003} describe the Gaussian process, a generic supervised learning method; more specifically, it is a kernel method designed to solve regression problems. The prediction is probabilistic (Gaussian) and interpolates the observations. One of the advantages of this model is that it can compute confidence intervals in addition to the prediction. The main disadvantages of Gaussian processes are that they lose efficiency in high-dimensional spaces and that they use the entire sample of feature information to perform the prediction, which could lead to overfitting. We optimised the hyperparameter alpha, which specifies the noise level in the targets. \subsubsection{Support Vector Regressor (SVR)} The support vector regressor is a supervised machine learning model based on the kernel method introduced by \cite{Drucker1997}. This model can efficiently perform a nonlinear regression using the kernel trick, implicitly transforming the data into a high-dimensional feature space in which a linear regression can be performed. The implementation is based on support vector machines, which are effective in high-dimensional spaces. We optimised the hyperparameters kernel; gamma, which corresponds to the kernel coefficient; and C, which is the penalty parameter of the error term. \subsubsection{Trend Factor} \label{subsec:Trend factor} The main disadvantage of the forecasting method described in this study is that the models do not take into account the global trend of the number of passengers from year to year.
The heatmap in Figure~\ref{fig:heatmap_trend} shows the percentage increase between the years 2015 and 2016 and between 2016 and 2017 of the average number of passengers per time step and per station (we do not take into account the Beaubien and Rosemont stations, which were severely impacted by renovations in 2016). As shown, for 60\% of the stations, the increase is of the same sign (positive or negative) between 2015-2016 and 2016-2017. \begin{figure}[!htb] \includegraphics[width=0.98\textwidth]{pictures/heatmap_increasepercentage_peryear.png} \caption{Trend factors between 2015-2016 and 2016-2017 per station. Note that 60\% of the stations have the same sign of trend factor between 2015-2016 and 2016-2017.} \label{fig:heatmap_trend} \end{figure} To take this trend into account in the forecast, we multiplied the forecasted passenger demand at each time step by the trend factor given in Equation~\ref{eq:trend_factor}, computed between 2015 and 2016 (training set). We set the trend factor of the Beaubien and Rosemont stations to 1 so that their trend is not taken into account. To adjust the forecast of the number of passengers per type of ticket or pass, we calculated a specific trend factor between 2015 and 2016 for each type of ticket or pass used to travel. \begin{ceqn} \begin{align}\label{eq:trend_factor} trend\_factor_{2015-2016}(s) = \frac{\frac{1}{T_{2016}} \cdot \sum_{t_1=0}^{T_{2016}} x^{t_1}_{2016}(s)}{\frac{1}{T_{2015}} \cdot \sum_{t_2=0}^{T_{2015}} x^{t_2}_{2015}(s)} \end{align} \end{ceqn} where \begin{conditions} x & Number of passengers\\ T_y & Number of time steps in year $y$, with $t \in \{0,\dots,T_y\}$\\ s & Station $s$ \end{conditions} This first attempt to introduce a trend factor in the forecasting model is basic. Further investigations are needed to improve the forecasting capability of the models.
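The trend-factor equation above reduces to the ratio of the two yearly means of passengers per time step; a minimal sketch, with hypothetical per-time-step counts for one station:

```python
import numpy as np

def trend_factor(x_prev, x_curr):
    """Ratio of the mean passengers per time step between two years,
    as in the trend-factor equation above."""
    return np.mean(x_curr) / np.mean(x_prev)

# Toy per-time-step counts for one station (hypothetical values).
x_2015 = np.array([100.0, 120.0, 80.0])
x_2016 = np.array([110.0, 126.0, 88.0])

factor = trend_factor(x_2015, x_2016)           # 108 / 100 = 1.08
# The forecast for each time step is scaled by the factor.
forecast_2017 = factor * np.array([105.0, 125.0, 90.0])
```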
\subsection{Evaluation Methods} \label{subsec:evaluationmetrics} To evaluate the models, we split the entire dataset into two parts: (i) a training dataset used to fit the models, with data spanning the years 2015 and 2016, and (ii) a testing dataset used to compare the performance of the models, with data from the year 2017. We evaluate the results obtained by the different forecasting models with several well-known metrics. To obtain a better understanding of the errors, three measures of prediction accuracy were used, namely, the root mean square error (RMSE), the mean absolute error (MAE) and the mean absolute percentage error at threshold $v$ (MAPE@v). These errors can be expressed as follows: \begin{ceqn} \begin{equation} \operatorname{RMSE}=\sqrt{\frac{\sum_{s=1}^S\sum_{t=1}^T (\hat{y}_s(t) - y_s(t))^2}{T\times S}} \end{equation} \end{ceqn} \begin{ceqn} \begin{equation} \operatorname{MAE}={\frac {1}{T\times S}}\sum_{s=1}^S\sum_{{t=1}}^{T}\left|\hat{y}_s(t) - y_s(t)\right| \end{equation} \end{ceqn} \begin{ceqn} \begin{equation} \operatorname{MAPE@v}={\frac {100}{T\times S}} \sum _{s=1}^{S}\sum _{t=1}^{T}\left|{\frac {y_s(t)-\hat{y}_s(t)}{y_s(t)}}\right|, \quad \forall\, y_s(t) > v \end{equation} \end{ceqn} where $\hat{y}_s(t)$ is the forecast value of station $s$ at time step $t$, $y_s(t)$ is the observed value, and $S$ is the number of stations; for MAPE@v, the sums include only the time steps for which $y_s(t) > v$. \subsection{Implementation and Optimisation of the Models} In this section, we detail the setups of the different forecasting models. We discuss the optimisation of the hyperparameters and the library and resources used to build the models. \subsubsection{Implementation} We used Scikit-Learn, developed by \cite{scikit-learn}, a widely used Python library, to compute the following models: elastic net regression, Gaussian process regressor, random forest, gradient boosting decision tree, artificial neural network and support vector regressor.
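The three evaluation metrics defined above can be sketched as follows for flattened station-by-time-step arrays; the sample observations and forecasts are illustrative values only.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error over all stations and time steps.
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error over all stations and time steps.
    return np.mean(np.abs(y_pred - y_true))

def mape_at_v(y_true, y_pred, v):
    # Only time steps whose observed demand exceeds the threshold v
    # are scored, as in the MAPE@v definition above.
    mask = y_true > v
    return 100.0 * np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask]))

y_true = np.array([200.0, 100.0, 400.0])
y_pred = np.array([180.0, 120.0, 380.0])

# With v=150, only the observations 200 and 400 are scored.
errors = (rmse(y_true, y_pred), mae(y_true, y_pred), mape_at_v(y_true, y_pred, 150))
```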
We used the MultiOutputRegressor class of Scikit-Learn to perform multi-output forecasting with the SVR and GBDT models, which are single-output regressors. \subsubsection{Optimisation} We performed a grid search with 5-fold cross-validation (random splits) to optimise all the statistical and machine learning models. We fixed the computation time for the optimisation of each model to a maximum of 2 days. We used the Scikit-learn default hyperparameters for the GBDT model with input data sets D3 and D4 because the computation time was prohibitive. The tested hyperparameters are presented in Table~\ref{tab:hyperparameters}. The experiments were conducted in parallel on 20 cores. \begin{table}[!htb] \centering \caption{Grid search hyperparameters of the forecasting models.}\label{tab:hyperparameters} \begin{threeparttable} \begin{tabularx}{0.98\textwidth}{Xll} \hline Model & Hyperparameter& Tested values \\ \hline \multirow{3}{*} {LR} & alpha &0.1, 1, 10 \\ & l1\_ratio & 0.25, 0.5, 0.75, 1 \\ & normalise & True, False \\ \hline \multirow{2}{*} {GP} & alpha &0.1, 0.5, 1 \\ & normalise\_y & False, True \\ \hline \multirow{4}{*} {RF} & n\_estimators &100, 150, 200 \\ & min\_samples\_split & 2, 5, 10 \\ & min\_samples\_leaf & 1, 5, 10 \\ & max\_features & 'auto' \\ \hline \multirow{4}{*} {GBDT}& n\_estimators &100, 150, 200 \\ & min\_samples\_split & 2, 5, 10 \\ & min\_samples\_leaf & 1, 5, 10 \\ & max\_features & 'auto' \\ \hline \multirow{3}{*} {SVR} & kernel &'rbf', 'linear' \\ & gamma & 1, 0.1, 0.01, 0.001 \\ & C & 0.001, 0.01, 0.1, 1.0, 10 \\ \hline \multirow{5}{*} {ANN} & solver & 'adam' \\ & batch\_size & 16 \\ & max\_iteration & 5000 \\ & early\_stopping & True \\ & hidden\_layer\_sizes & (10), (100), (300), (10,~10), (100,~10), (100,~100), (300,~100) \\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The value 'auto' of the hyperparameter max\_features of the RF and GBDT models corresponds to the total number of features.
The kernel 'rbf' of the SVR model corresponds to the radial basis function kernel. These two values correspond to the default values in the scikit-learn library. Concerning the hyperparameter hidden\_layer\_sizes of the ANN model, the $i$th element represents the number of neurons in the $i$th hidden layer. \end{tablenotes} \end{threeparttable} \end{table} \section{Forecasting Results and Discussion} \label{sec:forecast_results} First, the results of the passenger demand forecasting with an overall aggregation are presented in Section~\ref{subsec:forecast_results_allpass}. We compare the forecasting results and present some forecast and observation examples during two different periods: a normal period and an event period. Regarding the methodological context, we show which model performs the best in terms of forecast accuracy and the importance of each feature in the forecasting. In a more transport-focused context, we show the forecasting results per station. Then, we detail the results obtained for each type of ticket or pass used to travel in Section~\ref{subsec:forecast_results_perpass}. All the results correspond to the forecasts obtained by the different models combined with the trend factor method explained in Section~\ref{subsec:Trend factor}. The trend factor method improves the results by approximately 0.80\%. \subsection{Forecasting Result Aggregation of All Types of Ticket or Pass} \label{subsec:forecast_results_allpass} \subsubsection{Global Forecast Analysis} \label{sec:globalanalysis} To obtain a global comprehension of the results obtained by the models using different sets of features as model inputs, we studied the aggregated errors defined in Section~\ref{subsec:evaluationmetrics} over all the stations during the entire training and testing periods. Table~\ref{tab:results_allpass_globalperiod} depicts the RMSE, MAE and MAPE@150 errors on the training and test sets obtained by the forecasting models described in Section~\ref{sec:forecasting}.
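The grid search with 5-fold cross-validation described in the optimisation subsection above can be sketched as follows; the data are toy values and the grid is a deliberately small subset of the table's values so that the sketch runs quickly.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((100, 8))   # toy calendar/event features
y = rng.random((100, 4))   # a few of the 96 time-step targets

# Small subset of the RF grid from the table above.
param_grid = {"n_estimators": [10, 20],
              "min_samples_split": [2, 10]}

# 5-fold cross-validated grid search scored by (negated) RMSE.
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
best = search.best_params_  # best hyperparameters found by the grid search
```

In the study, one such search is run per model, with `n_jobs` set to exploit the 20 available cores.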
The models are used to forecast the number of ticketing logs aggregated per 15 minutes. Each model has been computed with the different input data sets (D1, D2, D3 and D4) detailed in Section~\ref{subsec:configuration}. We can observe that the best results are obtained with models using the combination of all the input data sets (D4), except for the ANN model, which is not able to capture the event information because of the excessively low number of training samples due to the special formatting of the data, and the Gaussian process, which overfits because of the additional event and event-category features, which are large and sparse. The best predictions are obtained with the RF model, with an RMSE of 38.53 and a MAPE@150 of 13.13\%. The historical average (HA) model is a basic method in terms of its implementation (it calculates the average number of passengers based on the day type). Unlike the SVR and LR models, where the number of parameters corresponds to the number of features, the number of parameters of the HA model corresponds to all possible combinations of features. For example, for the data set D1, the HA model contains 84 parameters (7 days $\times$ 12 months). This explains why this model succeeds in obtaining better results than the LR and SVR models. On the other hand, it becomes more difficult to predict with this model when the number of features increases (data set D2) and even impossible to predict if the number of features is too large (data sets D3 and D4). We have seen that the random forest model obtains the best forecast results using the D4 data set over the global period. We will see in Table~\ref{tab:error_allpass_eventperiod} that despite the difference in the number of features between D3 and D4, it is preferable to predict the number of passengers with the D4 data set, since the difference in performance may increase depending on the forecast period.
\begin{table}[!htb] \caption{Errors on the training and test sets of the different forecasting models with different input data sets (D1, D2, D3 and D4).}\label{tab:results_allpass_globalperiod} \begin{threeparttable} \begin{tabularx}{0.98\textwidth}{XXXXXXXX} \hline &&\multicolumn{3}{c}{Train (2015-2016)} & \multicolumn{3}{c}{Test (2017)}\\ \hline { Data} & { Model} & { RMSE} & { MAE} & { MAPE } & { RMSE} &{ MAE} & { MAPE} \\ \hline \multirow{7}{*} {D1}&HA & 45.41 & 18.07 & 12.69 & 50.36 & 21.47 & 15.28 \\ &LR& 49.71 & 20.51 & 13.87 & 52.04 & 22.46 & 15.51\\ &RF& 46.97 & 18.76 & 13.01 & 50.39 & 21.32 & 15.17\\ &\textbf{GP}& 46.09 & 18.62 & 12.85 & \textbf{50.32} & 21.56 & 15.17\\ &SVR& 55.98 & 23.72 & 14.12 & 57.54 & 25.40 & 15.58\\ &GBDT& 48.21 & 19.44 & 13.35 & 51.19 & 21.77 & 15.82\\ &ANN&49.93 & 20.57 & 14.51 & 53.13 & 22.76 & 16.18\\ \hline \multirow{7}{*} {D2}&HA & 32.24 & 13.04 & 9.73 & 44.31 & 19.16 & 13.84\\ &LR& 41.15 & 17.99 & 12.44 & 44.96 & 20.13 & 13.99\\ &\textbf{RF}& 35.12 & 14.66 & 10.65 & \textbf{41.35} & 18.19 & 13.20\\ &GP& 33.68 & 14.04 & 10.30 & 41.42 & 18.54 & 13.42\\ &SVR& 45.67 & 20.42 & 12.59 & 49.15 & 22.47 & 14.04\\ &GBDT& 37.62 & 16.18 & 11.56 & 42.33 & 18.84 & 13.91\\ &ANN& 40.2 & 16.6 & 12.08 & 43.83 & 18.75 & 13.63\\ \hline \multirow{6}{*} {D3} &LR& 34.56 & 16.59 & 11.57 & 43.74 & 20.37 & 14.21\\ &\textbf{RF}& 26.79 & 12.67 & 9.29 & \textbf{39.66} & 17.99 & 13.16\\ &GP& 17.13 & 7.00 & 4.91 & 79.71 & 36.39 & 22.66\\ &SVR& 36.83 & 18.93 & 12.32 & 51.11 & 24.98 & 16.18\\ &GBDT& 26.38 & 13.39 & 9.82 & 42.75 & 18.90 & 14.04\\ &ANN& 40.01 & 18.52 & 13.62 & 55.2 & 25.43 & 18.61\\ \hline \multirow{6}{*} {\textbf{D4}}&LR& 33.79 & 16.62 & 11.65 & 42.62 & 20.27 & 14.18\\ &\textbf{RF}& 26.60 & 12.63 & 9.29 & \textbf{38.53} & 17.88 & 13.13\\ &GP& 16.90 & 6.96 & 4.85 & 80.71 & 36.98 & 23.06\\ &SVR& 37.04 & 19.20 & 12.36 & 51.14 & 25.35 & 16.37\\ &GBDT& 26.10 & 13.33 & 9.79 & 40.79 & 18.77 & 14.01\\ &ANN& 41.44 & 19.80 & 14.52 & 63.57 & 29.62 & 
21.55\\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The data are represented by different sets of features (D1, D2, D3 and D4) described in Section~\ref{subsec:configuration}. The different models are described in Section~\ref{sec:forecasting}. The evaluation metrics RMSE, MAE and MAPE@150 are defined in Section~\ref{subsec:evaluationmetrics}. \end{tablenotes} \end{threeparttable} \end{table} As shown in Figure~\ref{fig:mapeatv_globaltestperiod}, the MAPE@v error depends strongly on the threshold value. For example, with the best input data (D4), the RF model has a MAPE@5 (MAPE considering all observed passenger numbers greater than 5) of approximately 20\% and a MAPE@150 of 13\%. We choose a MAPE threshold value of 150 to obtain a better estimation of the performance of passenger number forecasting when there is a considerable amount of demand that could impact ticketing demand and transport supply. \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{pictures/mapeatv_global_test_period.png} \caption{MAPE of random forest models with input data sets (D1, D2, D3, D4) in terms of the MAPE threshold value.} \label{fig:mapeatv_globaltestperiod} \end{figure} The observations and forecasts of the random forest model with all the sets of input data (D1, D2, D3 and D4) at Guy-Concordia station are depicted in Figure~\ref{fig:obsvspred_allpass_guyconcordia}. The passenger demand for this station is largely related to the activity of students at Concordia University. Indeed, we can see that on Monday, September 18, 2017, the passenger demand appears to follow a regular pattern with activity peaks corresponding to the end of classes at the university. In this case, the model with input data set D4 succeeds in accurately predicting passenger demand and is slightly better than the models with the other input data.
\begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{pictures/obsvspred_allpass_2017_09_18_Guy-Concordia-factor.png} \caption{Observation and forecasting of the passenger demand at Guy-Concordia station, Monday, September 18, 2017.} \label{fig:obsvspred_allpass_guyconcordia} \end{figure} Because events could impact the forecasting results, we analysed the forecasting results of the best forecasting model (the RF model) over periods with and without events (see Table~\ref{tab:error_allpass_eventperiod}). Seventeen stations with events in 2017 (the test set period) were extracted. The filtering of the event period is performed by selecting the day/station pairs with events; the period without events comprises the remaining data in the considered period. We can observe that the choice of input data significantly impacts the RMSE error during the event period. Indeed, the RMSE is only slightly improved during the period without events when input data set D4 is used instead of D2 (RMSE of 48.83 against 50.71), whereas this error is largely improved during the event period when D4 is used (RMSE of 124.72 against 153.34). The model with input D1 is too basic for a meaningful comparison with the model with input D4 during periods without events. \begin{table}[!htb] \centering \caption{Errors of the random forest model applied to the test set period, 2017 (event period and the period without event), on the 17 stations that host events in 2017.
}\label{tab:error_allpass_eventperiod} \begin{threeparttable} \begin{tabularx}{0.85\textwidth}{XXXXXXXX} \hline &\multicolumn{3}{c}{Period without event} & \multicolumn{3}{c}{Event period}\\ \hline {Data} &{ RMSE} &{ MAE} &{ MAPE }&{ RMSE} &{ MAE} &{ MAPE} \\ \hline {D1}& 61.54 & 28.48 & 14.95 & 159.13 & 46.96 & 23.59\\ {D2}& 50.71 & 24.73 & 13.18 & 153.34 & 43.44 & 22.20\\ {D3}& 49.13 & 24.10 & 13.19 & 137.69 & 43.51 & 21.37\\ {\textbf{D4}}& \textbf{48.83 }& 23.98 & 13.12 & \textbf{124.72} & 40.70 & 21.07\\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The data are represented by different sets of features (D1, D2, D3 and D4) described in Section~\ref{subsec:configuration}. The different models are described in Section~\ref{sec:forecasting}. The evaluation metrics RMSE, MAE and MAPE@150 are defined in Section~\ref{subsec:evaluationmetrics}. \end{tablenotes} \end{threeparttable} \end{table} Figure~\ref{fig:mapeatv_eventtestperiod} depicts the MAPE@v error according to the threshold value $v$ during the event test period of the RF models. As shown, the best performance is not obtained by the same models for thresholds below and above 120. This can be explained by the fact that the MAPE@v error is strongly influenced by the magnitude of the observed passenger numbers: it penalises a high forecast for a small observation more heavily than a low forecast for a large observation. To improve transport supply and ticket availability in cases of high demand, comparing models with a threshold above a certain value is more relevant than using a lower threshold. For this data set, the threshold of 150 seems to be a good compromise for the evaluation of the models.
\begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{pictures/mapeatv_event_test_period.png} \caption{MAPE of random forest models with input data sets (D1, D2, D3, D4) in terms of threshold value of MAPE during the event test period.} \label{fig:mapeatv_eventtestperiod} \end{figure} Taking the presence of events into account may be essential for forecasting the number of passengers with precision. As shown in Figure~\ref{fig:obsvspred_allpass_lucienlallier}, the random forest with input data set D1 or D2 (detailed information about the day) is not able to predict the large increase in passenger demand due to the end of a hockey game at Lucien-L'Allier station. However, with the help of the event and event category information (input data set D4), the random forest model accurately forecasts the passenger demand peak. \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{pictures/obsvspred_allpass_2017_01_18_Lucien-LAllier-factor.png} \caption{Observation and forecasting of the passenger demand at Lucien L'Allier station, Wednesday, January 18, 2017. The information about the event is the following: start time, 19:30; end time, 21:30; station, Lucien-L'Allier; and category, hockey.} \label{fig:obsvspred_allpass_lucienlallier} \end{figure} Figure~\ref{fig:obsvspred_allpass_placedesarts} shows the passenger demand observation during the event named ``Nuit Blanche'', which induces a very specific pattern due to the numerous events occurring during the night in the area of Place-des-Arts station and the metro remaining open all night. We can observe a large increase in the number of passengers that has been successfully forecasted by the random forest model with input data set D4. \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{pictures/obsvspred_allpass_2017_03_05_Place-des-Arts-factor.png} \caption{Observation and forecasting of the passenger demand at Place-des-Arts station, Sunday, March 5, 2017.
The information about the event is as follows: start date and time, 2017-03-04 18:00:00; end date and time, 2017-03-05 05:00:00; station, Place-des-Arts; and category, other.} \label{fig:obsvspred_allpass_placedesarts} \end{figure} \subsubsection{Feature Importance on Specific Stations} Models such as random forest allow quantification of the importance of the input data features. Because one model has been trained per station, we are able to investigate the feature importance per station with precision, which is useful for understanding how the models work. Figure~\ref{fig:feature_importance} shows the feature importance of the random forest model with input data set D4 for 3 stations with particular locations (the stations are depicted in Figure~\ref{fig:mape_rf4_test_period}). The feature ranking, denoted as $f$, is computed with the ``mean decrease impurity'' used for regression trees introduced by~\cite{breiman2017}. The importance of feature $i$, denoted as $f_i$, is given by: \begin{ceqn} \begin{equation}{\label{equation:gini}} f_i = \frac{\sum_{j : \text{node j splits on feature i}} n_j}{\sum_{j \in \text{all nodes}} n_j} \end{equation} \end{ceqn} where $n_j$ is the importance of node $j$, \begin{ceqn} \begin{equation} n_j = w_j C_j - w_{left(j)}C_{left(j)}- w_{right(j)}C_{right(j)} \end{equation} \end{ceqn} where $w_j$ is the weighted number of samples in node $j$, $C_j$ is the impurity of this node, which corresponds to the within-node variance of the output value, and $left(j)$ and $right(j)$ are its respective child nodes. The feature importance is given as a percentage and has been aggregated into the following categories: the information about the date detailed in Section~\ref{subsec:dayinformation}; events (the sum of the feature importances of all the start, end and period event features of all the stations with events); and category (all the information about the event category available in all the stations with events).
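The mean-decrease-impurity importances defined above are exposed directly by scikit-learn's random forest as `feature_importances_`; a minimal sketch, in which the features, index ranges and category grouping are hypothetical, not the study's actual feature layout:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 6))
# Feature 0 drives the target, so it should dominate the importances.
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 300)

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
imp = rf.feature_importances_   # mean decrease in impurity, normalised to 1

# Aggregate per-feature importances into categories, e.g. calendar vs.
# event vs. event-category features (these index ranges are hypothetical).
calendar, event, category = imp[:3].sum(), imp[3:5].sum(), imp[5:].sum()
```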
The most important feature is the name of the day of the week, with 60.31\%, 83.78\% and 53.49\% feature importance for the Place-des-Arts, Square-Victoria and Guy-Concordia stations, respectively. For these three stations, we can see that the importance of the features December 24 and 31 is less than 1\%. This is explained by the small number of days with these features in the training database (4 days). Nevertheless, these features remain important for predicting those special days, which cannot be captured by the other features. We can see that Place-des-Arts is a station largely impacted by the event and event category features (approximately 8\% and 11\% of feature importance), which is explained by the presence of many events located near this station. Square-Victoria-OACI is a station located in a business area; in contrast to the Place-des-Arts station, we find that the most important features are holiday and Christmas school holidays. Finally, the Guy-Concordia station serves Concordia University, which explains the importance of the features Christmas school holidays and school holidays parts 1 and 2. \begin{figure}[!htb] \centering \includegraphics[width=0.70\textwidth]{pictures/feature_importance-v3.png} \caption{Aggregated feature importance of the random forest model with input data set D4 at the Place-des-Arts, Square-Victoria-OACI and Guy-Concordia stations.} \label{fig:feature_importance} \end{figure} \subsubsection{Forecast Analysis by Station} In addition to the global analysis detailed in Section~\ref{sec:globalanalysis}, it is also important to analyse the results per station because each station has its own activity pattern. Figure~\ref{fig:mape_rf4_test_period} shows the MAPE@150 error of the best forecasting model, which is the random forest with the full set of features as input data (D4 corresponds to the information about the day, the event and the category of the event).
As shown, the model obtains an error greater than or equal to 17\% as MAPE@150 at some particular stations. The Université de Montréal and Edouard-Montpetit stations are located on the University of Montreal campus, which implies a passenger demand impacted by the university calendar (MAPE@150 equal to 17\% and 20\%). The Lucien-L'Allier station is difficult to predict (MAPE@150 equal to 20\%) because this station is the one that hosts most of the events in the city. Finally, the several events that took place at the Jean-Drapeau station, located on an island without habitation, make this station the hardest to predict (MAPE@150 of 50\%). \begin{figure}[!htb] \centering \includegraphics[width=0.70\textwidth]{pictures/mape-rf4-journal.png} \caption{MAPE@150 error per station during the global test period (2017) of the random forest model with input D4. The metro lines of Montreal are depicted by the blue, orange, green and yellow lines. The heatmap (blue zone to red zone) depicts the event activity during 2017. The green background represents parks, and the pink background indicates schools or universities.} \label{fig:mape_rf4_test_period} \end{figure} \subsection{Forecasting Results per Type of Ticket or Pass} \label{subsec:forecast_results_perpass} One of the goals of transport operators is to accurately estimate the demand for certain types of ticket or pass used to travel in order to adapt ticket availability to passenger demand. In this context, we compare the forecasting results of the random forest model with a focus on the forecasting of subsets of data corresponding to the number of passengers by type of ticket or pass used to travel. The types of ticket or pass are aggregated into the following categories: STM monthly pass (SMP), regional monthly pass (RMP), book tickets (BT) and occasional pass (OP).
\subsubsection{Forecast Analysis per Type of Ticket or Pass During the Global Period} According to the results shown in Table \ref{tab:error_perpass_globalperiod} and as expected, the occasional transport demand (pass OP) is the most difficult to forecast. The MAPE@150 is 27.93\% for this type of pass, which represents 15.7\% of the total passenger demand, against a MAPE@150 of 12.23\% for the SMP pass, which represents 51\% of the total passenger demand. The use of input data set D4, which adds event information to the information about the day, is necessary to obtain the best results for the forecasting of occasional passenger demand (BT and OP passes). This is due to the particularity of book tickets and the occasional pass, which are mainly used during events. \begin{table}[!htb] \centering \caption{Errors of the random forest model on the training period from 2015-2016 and the test set period, 2017, per type of ticket or pass used to travel. }\label{tab:error_perpass_globalperiod} \begin{threeparttable} \begin{tabularx}{0.98\textwidth}{llXXXXXX} \hline & &\multicolumn{3}{c}{Train set (2015 and 2016)} & \multicolumn{3}{c}{Test set (2017)}\\ \hline { Pass} & { Data} & { RMSE} &{ MAE} &{ MAPE }&{ RMSE} &{ MAE} &{ MAPE} \\ \hline \multirow{2}{*}{SMP}&\textbf{D2} & 16.37 & 8.65 & 9.67 & \textbf{20.04} & 10.76 & 12.23 \\ & D4 & 14.10 & 7.71 & 8.51 & 20.08 & 10.72 & 12.24\\ \hline \multirow{2}{*} {RMP}& \textbf{D2} &8.30 & 3.08 & 9.28 & \textbf{10.17} & 3.76 & 11.97 \\ & D4 & 7.46 & 2.84 & 8.28 & 10.28 & 3.75 & 12.12\\ \hline \multirow{2}{*} {BT}& D2& 5.58 & 3.06 & 16.23 & 6.73 & 3.63 & 20.48 \\ & \textbf{D4} & 5.06 & 2.89 & 13.17 & \textbf{6.57} & 3.58 & 19.19\\ \hline \multirow{2}{*} {OP}& D2 & 19.49 & 4.88 & 28.28 & 21.41 & 5.66 & 30.42 \\ & \textbf{D4} & 11.22 & 4.07 & 18.93 & \textbf{17.86} & 5.40 & 27.93\\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The data are represented by different sets of features (D2 and D4) described in
Section~\ref{subsec:configuration}. The evaluation metrics RMSE, MAE and MAPE@150 are defined in Section~\ref{subsec:evaluationmetrics}. The aggregation of the types of passes is as follows: STM monthly pass (SMP), regional monthly pass (RMP), book tickets (BT) and occasional pass (OP). \end{tablenotes} \end{threeparttable} \end{table} \subsubsection{Forecast Analysis per Type of Ticket or Pass Used to Travel During the Event Period} The STM monthly pass and regional monthly pass are only slightly impacted by events. As shown in Table~\ref{tab:error_perpass_eventperiod}, over the 17 stations with events, the RMSE increased from 24.99 to 28.01 during the event period for the STM monthly pass and from 12.48 to 13.27 during the event period for the regional monthly pass. Meanwhile, we can observe that book tickets and occasional passes are highly impacted by the presence of events. The random forest model obtains the best scores for these two types of passes with the input data set D4 in both periods, with and without events. \begin{table}[!htb] \centering \caption{Errors of the random forest model on the test event period and test set period without events, 2017, per type of ticket or pass over the 17 stations with events during the year 2017.
}\label{tab:error_perpass_eventperiod} \begin{threeparttable} \begin{tabularx}{0.98\textwidth}{llXXXXXX} \hline & &\multicolumn{3}{c}{Test period without event} & \multicolumn{3}{c}{Test set period with event}\\ \hline {\scriptsize Pass} & {\scriptsize Data} & {\scriptsize RMSE} &{\scriptsize MAE} &{ \scriptsize MAPE }&{\scriptsize RMSE} &{\scriptsize MAE} &{\scriptsize MAPE} \\ \hline \multirow{2}{*}{SMP}&D2 & \textbf{24.99} & 13.12 & 11.88 & 30.48 & 13.68 & 18.14 \\ & D4 & 25.21 & 13.07 & 11.99 & \textbf{28.01} & 13.27 & 16.49\\ \hline \multirow{2}{*} {RMP}& D2 & \textbf{12.48} & 5.27 & 11.20 & 13.47 & 5.67 & 11.18 \\ & D4 & 12.67 & 5.28 & 11.41 & \textbf{13.27} & 5.57 & 11.23\\ \hline \multirow{2}{*} {BT}& D2& 8.95 & 4.74 & 17.92 & 16.16 & 6.54 & 51.70 \\ & \textbf{D4} & \textbf{8.80} & 4.65 & 17.44 & \textbf{14.63} & 6.36 & 41.03\\ \hline \multirow{2}{*} {OP}& D2 & 21.85 & 8.57 & 30.06 & 114.60 & 23.72 & 55.70\\ & \textbf{D4} & \textbf{19.17} & 7.79 & 29.33 &\textbf{ 91.03} & 21.95 & 43.98\\ \hline \end{tabularx} \begin{tablenotes} \scriptsize \item The data are represented by different sets of features (D2 and D4) described in Section~\ref{subsec:configuration}. The evaluation metrics RMSE, MAE and MAPE@150 are defined in Section~\ref{subsec:evaluationmetrics}. The aggregation of the types of passes is as follows: STM monthly pass (SMP), regional monthly pass (RMP), book tickets (BT) and occasional pass (OP). \end{tablenotes} \end{threeparttable} \end{table} We can observe the impact of a hockey game on the passenger demand for each type of ticket or pass in Figure~\ref{fig:obsvspred_perpass_lucienlallier}. This event is described as beginning at 07:30 p.m.; however, the ending time is not defined. We can see that every random forest model with input data set D4 (day information, event and category information) is able to forecast with good accuracy the increase in the number of passengers between 10:00 p.m. and 11:00 p.m.
The type of ticket or pass used to travel that is the most impacted by the event is the occasional pass, with an increase of 1000 passengers during the passenger demand peak at 10:15 p.m. \begin{figure}[!htb] \centering \includegraphics[width=0.9\textwidth]{pictures/obsvspred_perpass_2017_11_29_Lucien-LAllier-factor.png} \caption{Observation and forecasting of the passenger demand per type of ticket or pass at Lucien-L'Allier station, on Wednesday, November 29, 2017. The information about the event is the following: start date and time, 2017-11-29 19:30:00; end date and time, undefined; station, Lucien-L'Allier; and category, hockey.} \label{fig:obsvspred_perpass_lucienlallier} \end{figure} \section{Conclusion} \label{sec:conclusion} This paper has investigated the use of smart card, calendar and event data to forecast metro passenger demand per station with a long-term forecasting time horizon (up to one year ahead) and a fine-grained temporal resolution (15-minute aggregation). We performed the forecasting task on real data (the Montréal subway, Canada) by taking into account events in the city, such as concerts, hockey games, festivals, and so forth. The operational objectives were twofold: long-term forecasting can help transport operators adapt the transport supply and adjust ticket availability to passenger demand. In this context, we have investigated the forecasting of the number of passengers per type of ticket or pass in addition to the forecasting of the global passenger demand. We have proposed a generic data shaping, allowing the use of contextual data (smart card, calendar and event data) as input for well-known regression models: basic, statistical and machine learning models. The global forecast analysis has proven that it is possible to obtain good long-term forecasting accuracy with fine-grained resolution, even in the presence of events.
The random forest model achieved the best forecasting results with the calendar information and event data as input. The forecasting results highlighted the importance of taking event data into account when forecasting passenger demand, particularly during an event period. This study has also illustrated the value for transport operators of using one regression model per station to understand which features most impact the passenger demand at each station. We have studied transport-related results to better understand which stations are difficult to predict. Along the same lines, we have shown that, as expected, passenger demand for certain types of tickets or passes (book tickets and non-rechargeable smart cards) is more strongly impacted by events and requires event data to be forecast accurately. We have also proposed a basic method to reproduce the impact of the global year-to-year trend on the forecasting results. These results have demonstrated the effectiveness of the trend method in addition to the data shaping and machine learning method for such forecasting tasks. Nevertheless, further work is required to investigate in detail the trend problem in the long-term prediction task. Future work could investigate a medium-term forecast that could be placed between a long-term forecast, which requires only the use of data available well in advance, and a short-term forecast, which requires recent observations of passenger numbers (collection and analysis of near real-time data). For this purpose, the medium-term forecast model could take as inputs, in addition to long-term data (calendar and event information), medium-term features such as the trend of the number of passengers observed recently (e.g., in previous days, weeks or months).
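As a concrete illustration of the generic data shaping described above, the sketch below turns one 15-minute time slot into D4-style calendar and event features (day, event and category information). The feature names, and the six-hour window assumed for an event whose end time is undefined, are illustrative assumptions, not the paper's exact encoding.

```python
from datetime import datetime, timedelta

def shape_features(slot_start, events):
    """Shape one 15-minute slot into calendar + event features.
    An event with an undefined end time is assumed active for 6 hours."""
    active = [e for e in events
              if e['start'] <= slot_start < e['start'] + timedelta(hours=6)]
    return {
        'weekday': slot_start.weekday(),               # 0 = Monday
        'slot': slot_start.hour * 4 + slot_start.minute // 15,
        'event': int(bool(active)),                    # an event is ongoing
        'category': active[0]['category'] if active else 'none',
    }

# A hockey game starting 2017-11-29 at 19:30, with no defined end time:
events = [{'start': datetime(2017, 11, 29, 19, 30), 'category': 'hockey'}]
row = shape_features(datetime(2017, 11, 29, 22, 15), events)
```

Rows shaped this way can then be fed to any of the regression models considered, such as the random forest.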
If we take the case of a transport network with a constrained spatial grid defined by stations (e.g., metro, bus, train), the same forecasting method and the same data formatting method can be applied, provided that the same types of data (calendar and event data) are available. On the other hand, in the case of an unmeshed network as in~\cite{Zhang2017} (e.g., road traffic, free-floating bicycles, free-floating scooters), it will be necessary to spatially mesh the network in order to group the events into a fixed number of points of interest, and likewise to group the transport flow counts into a fixed number of points. Because it is designed to be applicable to other public transport systems around the world, the forecasting methodology presented in this study could help to create high-added-value mobility services for citizens. \section*{Acknowledgement} This research was partially supported by Thales and the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors wish to thank the transport organisation authority of Montreal (Société de transport de Montréal) for providing the smart card and event data. They also acknowledge the computer infrastructure support from IVADO and the Quebec Research Funds (FRQNT, FRQSC).
\section{Local Abstractions} \comment{\outline{ -- why needed? Dining phils -- collapse to generic process -- generalization to parametric case -- }} The symmetry reductions described in the previous section are applicable to several networks which have only a small amount of global symmetry. Still, protocols have other local symmetries which cannot be captured by this definition. For instance, consider the Dining Philosophers' protocol on an arbitrary network. Every node operates in a roughly similar fashion, attempting to own all of its forks before entering the eating state. However, nodes with differing numbers of adjacent edges cannot be locally symmetric -- there can be no isomorphism between their neighborhoods. In fact, a network may be so irregular as to have only the trivial symmetry groupoid. In order to be able to represent these other symmetries, we must abstract away from the structural differences between nodes. It suffices to define an abstraction function over the local state of a node. As explained in more detail in~\cite{namjoshi-trefler-vmcai-2013}, a \emph{local abstraction} is formally specified by defining for each node $m$ an abstract domain, $D_m$, and a total abstraction function, $\alpha_m$, which maps local states of $P_m$ to elements of $D_m$. This induces a Galois connection on subsets, which we also refer to as $(\alpha_m,\gamma_m)$: $\alpha_m(X) = \{\alpha_m(x) \;|\; x \in X\}$, and $\gamma_m(A) = \{x \;|\; \alpha_m(x) \in A\}$. We must adjust the fixpoint computation to operate at the abstract state level. The abstract set of initial states, $\abs{I}_m$ is given by $\alpha_m(I_m)$. The abstract step transition, $\abs{T}_m$, is obtained by standard existential abstraction: there is a transition from (abstract) state $a$ to (abstract) state $b$ if there exist local states $x, y$ such that $\alpha_m(x)=a$, $\alpha_m(y)=b$, and $T_m(x,y)$ holds. 
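The existential abstraction just described can be sketched in a few lines for an explicit-state representation; the function and the toy domain below are illustrative assumptions, not taken from the paper.

```python
def abstract_transitions(states, T, alpha):
    """Existential abstraction: an abstract step (a, b) exists iff there are
    concrete states x, y with alpha(x) == a, alpha(y) == b, and T(x, y)."""
    return {(alpha(x), alpha(y)) for x in states for y in states if T(x, y)}

# Toy local state space: a counter 0..5 that steps by increment.
states = range(6)
T = lambda x, y: y == x + 1      # concrete transition relation
alpha = lambda x: x % 2          # abstraction: the counter's parity

abs_T = abstract_transitions(states, T, alpha)
# Incrementing always flips parity, so only the two cross-parity
# abstract transitions remain.
```

The abstract initial states are obtained in the same way, as the image $\alpha_m(I_m)$.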
An abstract transition $(a,b)$ for node $m$ is the result of interference by a transition of node $k$ from $\theta_k$ if the following holds. \begin{equation} \label{eq:absintf} (\exists s,t: \alpha_m(s[m])=a \;\wedge\; \alpha_m(t[m])=b \;\wedge\; T_k(s,t) \;\wedge\; \alpha_k(s[k]) \in \theta_k) \end{equation} For the Dining Philosophers' protocol, such a function can be defined through a predicate, $\mathsf{A}$, which is true at a local state if, and only if, the node owns all forks in that state. The abstract state of a node is now a pair $(l,a)$ where $l$ is its internal state (one of $T,H,E,R$) and $a$ is the value of the predicate $\mathsf{A}$. With this definition, the abstract fixpoint calculation produces the local invariant shown as a transition graph in Figure \ref{fig:abs-dp}. This graph shows that the abstract invariant for node $m$ implies that $(E_m \;\Rightarrow\; \mathsf{A}_m)$. Concretizing this term, one obtains that the concrete invariant implies that if $E_m$ is true, then node $m$ owns all of its forks. This, in turn, implies the exclusion property, as adjacent nodes $m$ and $n$ cannot both own the common fork $f_{mn}$ in the same global state. \begin{figure} \begin{center} \scalebox{0.50}{\includegraphics{abs-dp}} \caption{(From~\cite{namjoshi-trefler-vmcai-2013}) Abstract State Transitions (a) for non-isolated nodes and (b) for an isolated node. The notation ``$-\mathsf{A}$'' indicates the negation of $\mathsf{A}$. Green/dark states are initial. } \label{fig:abs-dp} \end{center} \end{figure} There are two features to note of this transition graph. First, all interference transitions are self-loops -- i.e., the actions of neighboring processes do not change the abstract state of a process. This is due to the protocol: the action of a neighbor cannot cause a process to own a fork, or to give up one that it owns. Second, all nodes in any network fall into one of the two classes which are shown, in terms of their abstract compositional invariant. 
It follows that the concretized compositional invariant holds in a parametric sense: i.e., over \emph{all} nodes of \emph{all} networks. This connection between compositional reasoning and parametric proofs is not entirely unexpected. Parametric invariants for protocols often have the universally quantified form ``for every node $n$ of an instance, $\theta(n)$ holds''. If the property $\theta(n)$ is restricted to the neighborhood of $n$ and holds compositionally, which is often the case, then the property is a split invariant for every fixed-size instance. The application of abstraction and symmetry serves to turn this connection around: computing a compositional invariant on an (abstract) instance induces a parametric invariant. The following theorem shows that this is a complete method -- but it is not automatic, as it requires the choice of a proper abstraction. The abstraction in the theorem can be chosen so that every pair of nodes is locally symmetric in terms of its abstract state space, and the cross-node interference is benign, as in the illustration above. \theorem{(From~\cite{namjoshi-trefler-vmcai-2013}) \label{thm:param-completeness} For a parameterized family of process networks, any compositional invariant of the form $(\forall i: \theta_i)$, where each $\theta_i$ is local to process $P_i$, can be established by compositional reasoning over a small abstract network.} \section{Conclusions} \comment{\outline{-- nice theory -- open questions: what programs are most amenable to such reasoning? -- connections with knowledge -- strategies for synthesis? }} In this article, I have attempted to show that compositional reasoning is a topic with a rich theory and practically relevant application. There are pleasing new connections to the new concept (in verification) of local symmetry, and to long-established ones such as abstraction and parametric reasoning. 
In this article I have chosen to focus on the simplest form of compositional reasoning, that used to construct inductive invariants, but the methods extend to general (i.e., possibly non-inductive) invariance, as well as to proofs of temporal properties under fairness assumptions~\cite{cohen-namjoshi08,cohen-namjoshi-saar10,cohen-namjoshi-saar10b}. The simultaneous fixpoint calculation lends itself to parallelization, as the individual components can be computed asynchronously so long as the computation schedule is fair~\cite{cohen-et-al-hvc2010}. It is worth noting that the theory applies to arbitrary state spaces under appropriate abstractions, as shown by the work in~\cite{gupta-popeea-rybalchenko-2011}, which applies compositional reasoning to C programs. There are many open questions. Among the major ones are the following: Why are certain protocols more amenable to compositional methods than others? (``Loose coupling'' is sometimes offered as an answer, but that term does not have a precise definition.) Can one create better methods which compute only as much auxiliary state as is necessary for a proof? What sorts of abstractions are useful for parametric proofs? \comment{Finally, it seems intuitively likely (cf.~\cite{namjoshi-trefler-2013}) that compositional methods work well for protocols that must operate under adversarial conditions, such as those in ad-hoc and dynamic network models. This is because any correct protocol must work relative to minimal assumptions about its neighbors, since the set of neighbors and their connectivity can change at any moment.} \paragraph{Closing.} I would like to thank the referees for helpful comments on the initial draft of this paper. The work described here would not have been possible without the varied and immensely enjoyable collaborations with my co-authors: Ariel Cohen, Yaniv Sa'ar, Lenore Zuck, and Richard Trefler.
My co-authors on a survey of compositional verification, Corina P\u{a}s\u{a}reanu and Dimitra Giannakopoulou, contributed many insights into the methods. I am very glad to be able to offer this as my contribution to David Schmidt's Festschrift. It is a small return for the respect I have for his research work, for the help and advice I received from him when co-organizing VMCAI in 2006, and for many enjoyable conversations! My initial work on compositional reasoning was supported in part by the NSF, under award CCR 0341658. The writing of this paper was done while I was supported in part by DARPA, under agreement number FA8750-12-C-0166. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. \section{Introduction} \comment{ \outline{ -- Concurrency, which was once limited to operating systems and networking, is now getting into the mainstream, due to multicores -- Verification of conc. programs is hard, as a proof has to simultaneously track the behavior of multiple executing threads -- Theoretically, it is PSPACE hard (give proof) -- Strategy is divide and conquer -- pros and cons -- sketch of paper. }} Concurrency was once limited to the internals of operating systems and networking. It is now in the mainstream of programming, largely due to the availability of cheap hardware and the ubiquitous presence of multi-core processors. Designing a correct concurrent program or protocol, however, is a hard problem. Intuitively, this is because the designer must coordinate the behavior of multiple, simultaneously active threads of execution. 
Verification of an existing program is an even harder problem, as the analysis process must reconstruct the invariants which guided the original design of the program. These informal statements can be made precise through complexity theory: verification of an $N$-process concurrent program is PSPACE-hard in $N$, even if the state space of each component is fixed to a small constant. In practice, the difficulty manifests itself as model checking tools run into \emph{state explosion}: the exponential growth of a program state space with the number of its concurrent components. A common strategy when faced with a large problem is to break it into smaller and simpler sub-problems. In program verification, this ``divide and conquer'' strategy is known as \emph{compositional} or \emph{modular} verification. The essential idea is to verify a program ``in bits and pieces'', analyzing only a single component at a time, along with an abstraction of the environment of the component (i.e., the rest of the program). The foundations of compositional methods were created by Owicki and Gries \cite{owicki-gries76} and Lamport \cite{lamport77}. These methods are based on a simple proof rule; however, formulating the right combination of assertions for the proof rule can be a difficult and frustrating task. Also, it is not clear that doing so reduces the manual proof effort in any appreciable way, as Lamport points out in~\cite{lamport-1997}. On the other hand, for \emph{fully automated} proof methods such as model checking or static analysis, there is indeed much to be gained by a divide-and-conquer strategy, as the state space of a single component is much smaller than that of the full program. In this article, we focus on the simplest, but fundamental verification task: constructing an inductive program invariant. We show how the construction of a \emph{compositional} inductive invariant may be formulated as a simultaneous least fixpoint calculation.
That paves the way for a variety of simplifications and generalizations. We show how the fixpoint can be computed in parallel, how the fixpoint computation may be simplified drastically by analyzing the \emph{local symmetries} of a process network, and how it may be generalized through the use of \emph{local abstraction}. As much of this exposition is based on published work, in this article we keep a light touch on the theory, emphasizing instead the intuition which lies behind the theoretical ideas. To illustrate the many aspects of the theory, we use a running example of a Dining Philosophers' protocol. This is chosen for two reasons: it is particularly amenable to compositional reasoning, and the protocol is flexible enough to operate on arbitrary networks, making it easy to illustrate the effects of localized symmetry and local abstraction, and their influence on parametric reasoning. \section{Related Work} The book~\cite{deRoever01} has an excellent description of the Owicki-Gries method and other compositional methods. The fixpoint formulation for computing the strongest split invariant is implicit in the deduction system of~\cite{flanagan-qadeer-03} and is explicitly formulated in~\cite{namjoshi-07a}. Other closely related work on compositional verification has been referenced in the previous sections. Local symmetry and balance were originally defined in~\cite{golubitsky-stewart-2006} in a slightly different form. That paper analyzes the role of local symmetry to prove \emph{existential} path properties of \emph{continuous} systems; it is remarkable that those definitions also serve to analyze universal properties of discrete systems. Parametric verification is undecidable in general, even if each process has a small, fixed number of states~\cite{apt-kozen86}.
A common thread running through the various approaches to parametric verification is the intuition that for a correct protocol, behaviors of very large instances are already present in some form in smaller instances. Decidability results~\cite{german-sistla92,emerson-namjoshi95,emerson-kahlon00} are based on ``cutoff'' theorems which establish that it suffices to check all instances up to the cutoff size, or on well-quasi-ordering of the transition structure~\cite{abdulla-cerans-et-al-1996}. The method of invisible invariants~\cite{pnueli-ruah-zuck01} generalizes an invariant computed automatically for a small instance to an inductive invariant for all instances. In~\cite{namjoshi-07a}, it was shown that the success of generalization is closely related to the invariant being a split invariant; the parametric analysis based on abstractions and local symmetry that is carried out in this paper is a further extension of those results. The ``environmental abstraction'' procedure~\cite{talupur06} analyzes a single process in the context of an approximation of the rest of the system. Although the approximation is developed starting with the full state space, there is a close similarity between the final method and compositional reasoning. Related procedures include~\cite{mcmillan-zuck-2011} and~\cite{sanchez-2012}. \section{Split Invariance} \comment{\outline{ -- quantified invariants for parametric systems -- turn into split invariants -- general form -- simplification to fixpoint -- connections to O-G (give proofs) -- connections to R-G (give proofs) -- sequential complexity -- parallel complexity -- illustration with mutex protocol -- pairwise and neighborly invariants }} Methods for program verification are based on two fundamental concepts: that of \emph{inductive invariance} and \emph{ranking}. An \emph{inductively invariant} set is closed under program transitions and includes all reachable program states. 
A \emph{ranking} function which decreases for every non-goal state shows that the program always progresses towards a goal. The strongest (smallest) inductive invariant set is the set of reachable states. The standard model checking strategy -- without abstraction -- is to compute the set of reachable states in order to show that a property is invariant (i.e., it includes all reachable states). The reachability calculation can be prohibitively expensive due to state explosion: for instance, the model-checker SPIN~\cite{spin} runs out of space checking the exclusion property for approximately 10 Dining Philosophers on a ring. The divide-and-conquer approach to invariance, which we discuss in this paper, is to calculate an inductive invariant which is made up of a number of local invariant pieces, one per process. A rather straightforward implementation of this calculation verifies the exclusion property for 3000 philosophers in about 1 second. In this section, we develop the basic theory behind the compositional reasoning approach. Subsequent sections explore connections to symmetry, abstraction, and parametric verification, as well as some of the limitations of compositional reasoning. \paragraph{A Note on Notation.} We use the notation developed by Dijkstra and Scholten in~\cite{dijkstra-scholten90}. Validity of a formula $\phi$ is denoted $[\phi]$. (We usually omit the variables on which $\phi$ depends when that can be determined from context.) Existential quantification of a set of variables $V$ is denoted $(\exists V: \phi)$. Thus, if $f$ and $g$ are formulas representing sets, $[f \;\Rightarrow\; g]$ denotes the property that the set $f$ is a subset of the set $g$. The advantage of this notation is in its succinctness and clarity, as will be seen in the rest of the paper. 
\subsection{Basics} A \emph{program} is defined symbolically as a tuple $(V,I,T)$, where $V$ is a non-empty set of typed \emph{variables}, $I$ is a Boolean-valued \emph{initial} assertion defined over $V$, and $T(V,V')$ is a Boolean-valued \emph{transition} relation, defined over $V$ and an isomorphic copy, $V'$. For each variable $x$ in $V$, its copy, $x'$, denotes the value of $x$ in a successor state. A program defines a \emph{transition system}, represented by the tuple $(S,S^0,R)$, as follows. The set of \emph{states}, $S$, is the set of all (type-consistent) valuations to the variables $V$; the subset of \emph{initial states}, $S^0$, consists of those states which satisfy the initial condition $I$; and a pair of states, $(s,t)$, is in the \emph{transition relation} $R$ if $T(s,t)$ holds. To make the role of locality clear, we work with programs that are structured as a \emph{process network}. The network is a graph structure where nodes are labeled with programs (also called processes) and edges are labeled with shared state. Formally, the graph underlying the network is a tuple $(N,E,C)$, where $N$ is a set of \emph{nodes}, $E$ is a set of \emph{edges}, and $C$ is a \emph{connectivity} relation, a subset of $(N\times E) \cup (E \times N)$. The structures of a ring network and of a star network are shown in Figure \ref{fig:networks}. \begin{figure} \begin{center} \includegraphics[scale=0.75]{network.pdf} \end{center} \caption{Network Structure: circles represent nodes and processes, rectangles represent edges and shared state. In these networks, connectivity is bidirectional.} \label{fig:networks} \end{figure} The \emph{neighborhood} of a node is the set of edges that are connected to it. I.e., for a node $n$, the set of edges $\{e \;|\; (e,n) \in C \;\vee\; (n,e) \in C\}$ forms its neighborhood. Two nodes are \emph{adjacent} if their neighborhoods have an edge in common.
A node $m$ \emph{points-to} a node $n$ if there is an edge $e$ in the neighborhood of $n$ such that $(m,e)$ is in $C$. An \emph{assignment} is a mapping of programs to the nodes of the network and of state variables to the edges. This must be done so that the program which is mapped to node $n$ accesses, apart from its internal variables, only those external variables which are mapped to edges in the neighborhood of $n$. We also require that the process only reads those variables on edges $e$ such that $(e,n)$ is a connection, and only writes those variables on edges $e$ such that $(n,e)$ is a connection. The semantics of the process network is formally defined as a program $P=(V,I,T)$, where \begin{itemize} \item $V$ is the union of all program variables $V=(\cup i: V_i)$. The variables in $V_i$ which are not mapped to network edges are the \emph{internal variables} of process $P_i$, and are denoted by $L_i$. \item $I$ is any initial condition over the program state, whose projection on $V_i$ is $I_i$. Notationally, $[(\exists V\backslash V_i: I) \;\equiv\; I_i]$. \item $T$ is the transition condition which enforces asynchronous interleaving. Formally, $[T \;\equiv\; (\;\vee\; i: T_i \;\wedge\; \unch(V\backslash V_i))]$. I.e., $T$ is the disjunction of the individual process transitions, under the constraint that the transition of process $P_i$ leaves all variables other than those of $V_i$ unchanged. For a simpler notation, we adopt the convention that $T_i$ is defined so that it leaves other variables unchanged. Then $T$ can be written as $(\;\vee\; i: T_i)$. \end{itemize} The set of \emph{reachable states} of a program $P=(V,I,T)$ is denoted by $\mathit{Reach}(P)$, and is defined as the least fixpoint expression $(\mu Z: I \;\vee\; \SP(T,Z))$. The fixpoint expression denotes the least (smallest, strongest) set $Z$ which satisfies the fixpoint constraint $[Z \;\equiv\; I \;\vee\; \SP(T,Z)]$.
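For an explicit-state representation, the least fixpoint $(\mu Z: I \;\vee\; \SP(T,Z))$ amounts to iterating to stability; the following sketch (the names and the toy transition relation are illustrative, not from the paper) makes this concrete.

```python
def reach(init, post):
    """Least fixpoint of Z = I ∪ SP(T, Z), iterated until stable."""
    Z = set(init)
    while True:
        new = Z | {t for s in Z for t in post(s)}
        if new == Z:
            return Z
        Z = new

# Toy program: a mod-4 counter starting at 0 that can increment or reset to 0.
post = lambda s: {(s + 1) % 4, 0}
R = reach({0}, post)   # every counter value turns out to be reachable
```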
The \emph{strongest post-condition} operator is denoted $\SP$; for a transition relation $T$ and a set of states $Z$, the expression $\SP(T,Z)$ denotes the immediate successors of states in $Z$ due to transitions in $T$. Formally, $\SP(T,Z) = \{t \;|\; (\exists s: Z(s) \;\wedge\; T(s,t))\}$. A set of states (or assertion) $\varphi$ is an \emph{invariant} for the program if it is true for all reachable states, i.e., $[\mathit{Reach}(P) \;\Rightarrow\; \varphi]$. An assertion $\varphi$ is \emph{inductively invariant} if it is invariant, and also closed under program transitions. These conditions can be succinctly expressed by (1) (initiality) $[I \;\Rightarrow\; \varphi]$, and (2) (step) $[\SP(T,\varphi) \;\Rightarrow\; \varphi]$. Inductive invariance forms the basis of proof rules for program correctness. \subsection{Generalized Dining Philosophers' Protocol} We will use a Dining Philosophers' protocol as a running example. The protocol consists of a number of similar processes operating on an arbitrary network. Every edge on the network models a shared ``fork''. The edge between nodes $i$ and $j$ is called $f_{ij}$. Its value can be one of $\{i,j,\bot\}$. Node $i$ is said to \emph{own} the fork $f_{ij}$ if $f_{ij}=i$; node $j$ owns this fork if $f_{ij}=j$; and the fork is available if $f_{ij}=\bot$. The process at node $i$ goes through the following internal states: $T$ (thinking); $H$ (hungry); $E$ (eating); and $R$ (release), which are the values of its internal variable, $L$. The local state of a node also includes the state of each of its adjacent edges (i.e., forks). Let $nbr(i,j)$ be a predicate true for nodes $i,j$ if they share an edge. The transitions for a process are defined in guarded command notation as follows. \begin{itemize} \item A transition from $T$ to $H$ is always enabled.
I.e., $(L=T) \;\longrightarrow\; L := H$ \item In state $H$, the process acquires forks, but may also choose to release them: \begin{itemize} \item (acquire fork) $(L=H) \;\wedge\; nbr(i,j) \;\wedge\; f_{ij} = \bot \;\longrightarrow\; f_{ij} \;:=\; i$, \item (release fork) $(L=H) \;\wedge\; nbr(i,j) \;\wedge\; f_{ij} = i \;\longrightarrow\; f_{ij} \;:=\; \bot$, and \item (to-eat) $(L=H) \;\wedge\; (\forall j: nbr(i,j): f_{ij}=i) \;\longrightarrow\; L \;:=\; E$. \end{itemize} \item A transition from $E$ to $R$ is always enabled. I.e., $(L=E) \;\longrightarrow\; L := R$. \item In state $R$, the process releases its owned forks. \begin{itemize} \item (release fork) $(L=R) \;\wedge\; nbr(i,j) \;\wedge\; f_{ij} = i \;\longrightarrow\; f_{ij} \;:=\; \bot$ \item (to-think) $(L=R) \;\wedge\; (\forall j: nbr(i,j): f_{ij} \neq i) \;\longrightarrow\; L := T$ \end{itemize} \end{itemize} The initial state of the system is one where all processes are in internal state $T$ and all forks are available (i.e., have value $\bot$). The desired safety property is that there is no reachable global state where two neighboring processes are in the eating state $E$. \subsection{Split Invariance} An inductive invariant, in general, depends on all program variables; i.e., it can express arbitrary constraints among the program variables. The divide and conquer principle suggests that one should break up an invariance assertion into a number of assertions which are limited in scope, each of which depends only on the variables of a single process. Hence, we define a \emph{split assertion} $\theta$ to be a conjunction, written $(\;\wedge\; i: \theta_i)$, of a number of local assertions $\{\theta_i\}$. The $i$'th assertion, $\theta_i(V_i)$, is a function only of the variables of process $P_i$; i.e., its internal variables, and those assigned to the neighborhood of node $i$.
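To make the locality of a split assertion concrete, the sketch below evaluates, on a three-node Dining Philosophers' ring, local assertions $\theta_i$ stating that eating implies owning every adjacent fork; the state encoding and all names are illustrative assumptions, not the paper's.

```python
def theta_i(i, state, nbrs):
    """Local assertion for node i, over V_i only: eating implies owning
    every fork in the node's neighborhood."""
    l, forks = state['internal'][i], state['forks']
    return l != 'E' or all(forks[frozenset((i, j))] == i for j in nbrs[i])

# A 3-node ring; node 0 eats while holding both of its forks.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
state = {'internal': {0: 'E', 1: 'T', 2: 'H'},
         'forks': {frozenset((0, 1)): 0, frozenset((0, 2)): 0,
                   frozenset((1, 2)): None}}
split_holds = all(theta_i(i, state, nbrs) for i in nbrs)   # the conjunction

# If node 1 were also eating without its forks, its local assertion fails.
bad = dict(state, internal={0: 'E', 1: 'E', 2: 'T'})
split_fails = all(theta_i(i, bad, nbrs) for i in nbrs)
```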
Figure \ref{fig:splitinv} gives a pictorial view of a split invariant for the ring and star networks, showing the scope of each of its terms. The terms for adjacent nodes have the shared variables in common; this sets up (weak) constraints between the invariant states of those processes. \begin{figure} \begin{center} \includegraphics[scale=0.5]{splitinv.pdf} \end{center} \caption{Split Invariance. A dotted ellipse shows the scope of a term of the split invariant. } \label{fig:splitinv} \end{figure} We now consider the conditions for a split assertion to be a global inductive invariant. Examining the initiality and step conditions, one notices that, as the split assertion is conjunctive and $\SP$ distributes over the disjunction of transition relations, those conditions simplify to the equivalent set of constraints given below. \begin{align} & [I \;\Rightarrow\; \theta_i] \label{eq:split-init} \\ & [\SP(T_i, \theta) \;\Rightarrow\; \theta_i] \label{eq:split-step} \\ & [\SP(T_j, \theta) \;\Rightarrow\; \theta_i], \mbox{ for all } j \mbox{ which point to } i \label{eq:split-intf} \end{align} In the last constraint, nodes $j$ which do not point to $i$ are not considered, as any action in $T_j$ must leave the state of $V_i$ unchanged since there are no variables in common. \comment{ As $\theta_i$ is defined only in terms of $V_i$, one can quantify out all other variables on the left-hand side, to obtain the equivalent set of equations: \begin{align} & [(\exists V\backslash V_i: I) \;\Rightarrow\; \theta_i] \label{eq:split-init-2} \\ & [(\exists V\backslash V_i:\SP(T_i, \theta)) \;\Rightarrow\; \theta_i] \label{eq:split-step-2} \\ & [(\exists V\backslash V_i:\SP(T_j, \theta)) \;\Rightarrow\; \theta_i], \mbox{ for } j \mbox{ points-to } i \label{eq:split-intf-2} \end{align} } The form of these constraints is remarkably similar to the ``assume-guarantee'' or Owicki-Gries rules for compositional reasoning, which can be stated as follows. 
\begin{align} & [I \;\Rightarrow\; \theta_i] \label{eq:ag-init} \\ & [\SP(T_i, \theta_i) \;\Rightarrow\; \theta_i] \label{eq:ag-step} \\ & [\SP(T_j, \theta_i \;\wedge\; \theta_j) \;\Rightarrow\; \theta_i], \mbox{ for } j \mbox{ points-to } i \label{eq:ag-intf} \end{align} The first two constraints show that $\theta_i$ is an invariant of process $P_i$ by itself. The third constraint is Owicki and Gries' \emph{non-interference} condition: a transition by any other process from a state satisfying both processes' invariants preserves $\theta_i$. The closeness of the connection between the two formulations is shown by the following theorem. \theorem{\label{thm:ag-split} Every solution to the assume-guarantee constraints is a split inductive invariant. Moreover, in a network where all processes refer to common shared state (such as the star network in Figure \ref{fig:networks}), the strongest solutions of the two sets of constraints are identical.} For computational purposes, we are interested in the strongest solutions, as they correspond to least fixpoints. We will use the assume-guarantee form from now on, as it is simpler to manipulate.
As $\theta_i$ is defined only in terms of $V_i$, projecting the left-hand sides of the implications \mref{ag-init}-\mref{ag-intf} on $V_i$ gives an equivalent set of constraints: \begin{align} & [(\exists V\backslash V_i: I) \;\Rightarrow\; \theta_i] \label{eq:ag-init-2} \\ & [(\exists V\backslash V_i: \SP(T_i, \theta_i)) \;\Rightarrow\; \theta_i] \label{eq:ag-step-2} \\ & [(\exists V\backslash V_i: \SP(T_j, \theta_i \;\wedge\; \theta_j)) \;\Rightarrow\; \theta_i], \mbox{ for } j \mbox{ points-to } i \label{eq:ag-intf-2} \end{align} These constraints can be reworked into the simultaneous pre-fixpoint form (cf.~\cite{flanagan-qadeer-03,namjoshi-07a}) \begin{equation} [F_i(\theta) \;\Rightarrow\; \theta_i] \label{eq:prefix} \end{equation} where $F_i$ is the disjunction of the left-hand-sides of equations \mref{ag-init-2}-\mref{ag-intf-2}. By the monotonicity of $\SP$, the function $F_i$ is monotonic in $\theta$, considered now as a vector of local assertions, $(\theta_1,\ldots,\theta_N)$, ordered by point-wise implication. By the Knaster-Tarski theorem, there is a least fixpoint, which defines the strongest compositional invariant. It can be computed by the standard iteration shown in Figure \ref{fig:computation}. \begin{figure} \begin{center} \begin{verbatim}
var theta, new_theta: predicate array
// initialize
for i := 1 to N do new_theta[i] := emptyset done;
// compute until fixpoint
repeat
  theta := new_theta;
  for i := 1 to N do new_theta[i] := F(i,theta) done;
until (theta = new_theta)
\end{verbatim} \end{center} \caption{Computing the Strongest Split Invariant.} \label{fig:computation} \end{figure} The computation takes time polynomial in $N$, the number of processes in the network -- a rough bound is $O(N^2 * L^3 * D)$, where $L$ is the size of the local state space of a process, and $D$ is the maximum degree of the network. This computation produces the split invariant for the Dining Philosophers on a 3000 node ring in about 1 second.
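The iteration of Figure \ref{fig:computation} can be sketched on a toy two-process token-passing protocol; the protocol and all names below are illustrative assumptions, not from the paper. The computed local invariants show that each process is critical only while holding the shared token, which yields mutual exclusion compositionally.

```python
# Two processes share a token t; process i may be Critical only when t == i.
INIT = {('I', 0)}                     # each local view at the start: Idle, t = 0

def step(i, l, t):
    """Transitions of process i on its local view (l, t); yields successors."""
    if l == 'I' and t == i:
        yield ('C', t)                # acquire: enter critical with the token
    if l == 'C' and t == i:
        yield ('I', 1 - i)            # release: go idle and pass the token

def split_invariant():
    theta = [set(INIT), set(INIT)]
    while True:
        new = []
        for i in (0, 1):
            j = 1 - i
            s = set(theta[i])
            for (l, t) in theta[i]:               # own steps
                s |= set(step(i, l, t))
            for (l, t) in theta[i]:               # interference by j, from a
                for (lj, tj) in theta[j]:         # j-state agreeing on the token
                    if tj == t:
                        for (_, t2) in step(j, lj, t):
                            s.add((l, t2))
            new.append(s)
        if new == theta:
            return theta
        theta = new

theta = split_invariant()
# theta[i] contains ('C', t) only with t == i: Critical implies holding
# the token, so the two processes are never critical together.
```

Here the interference case applies a neighbour's step only from joint states that agree on the shared variable, mirroring the interference constraint.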
A number of experimental results can be found in~\cite{cohen-namjoshi07} and in~\cite{cohen-namjoshi-saar10b}. \subsection{Completeness} \comment{\outline{ -- auxiliary variables -- locks, single and nested ''last'' variable (Vineet work) -- how to infer auxiliary variables -- connections with other strategies, e.g., learning. -- liveness and fairness properties }} The split invariance formulation is, in general, incomplete. That is, it is not always possible to prove a program invariant by exhibiting a stronger split invariant. In part, this is indicated by the complexity bounds: the split invariance calculation is polynomial in $N$, while the invariance checking problem is PSPACE-complete in $N$; this reasoning is, however, conditional on the assumption that P $\neq$ PSPACE. A direct, unconditional proof is given by the simple, shared-memory mutual exclusion program below. Every process of the program goes through states T (thinking), H (hungry), and E (eating). The desired invariance property is that no two processes are in state E together. \begin{verbatim}
var x: boolean; initially true

process P(i):
  var l: {T,H,E}; initially T
  while (true) {
    T: skip;
    H: <x -> x := false>   // atomic test-and-set
    E: x := true
  }
\end{verbatim} The fixpoint calculation produces the split invariant $\theta$ where $\theta_i=\mathit{true}$, for all $i$. This invariant clearly does not suffice to show mutual exclusion. As recognized by Owicki-Gries and Lamport, one can strengthen a split invariant by introducing auxiliary global variables which record part of the history of the computation. Intuitively, the auxiliary state helps to tighten the constraints between the $\theta$ components, as a pair of local invariant states must agree on the shared auxiliary state. For this example, it suffices to introduce an auxiliary global variable, \verb|last|, which records the last process to enter its $E$ state.
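Both the degenerate fixpoint for this program and the effect of the auxiliary variable \verb|last| just introduced can be checked mechanically. The following sketch is ours (a direct finite-state encoding of the two-process instance; all identifiers are illustrative, and processes are 0-based with $last = N$ playing the role of the paper's initial value): it computes the strongest split invariant with and without \verb|last|, and tests whether $\theta_1 \wedge \theta_2$ excludes a state with both processes in $E$.

```python
from itertools import product

N = 2  # two processes; a global state is (locals, x, last); V_i = (l_i, x, last)

def step(i, s, use_last):
    """Successors of global state s under one move of process i."""
    ls, x, last = list(s[0]), s[1], s[2]
    if ls[i] == 'T':                       # T: skip, then move to H
        ls[i] = 'H'; return [(tuple(ls), x, last)]
    if ls[i] == 'H' and x:                 # H: <x -> x := false [; last := i]>
        ls[i] = 'E'; return [(tuple(ls), False, i if use_last else last)]
    if ls[i] == 'E':                       # E: x := true
        ls[i] = 'T'; return [(tuple(ls), True, last)]
    return []                              # H with x false: blocked

def proj(i, s):                            # projection of a global state on V_i
    return (s[0][i], s[1], s[2])

def split_invariant(use_last):
    init = (('T',) * N, True, N)           # last = N means "nobody yet"
    universe = [(ls, x, last) for ls in product('THE', repeat=N)
                for x in (True, False) for last in range(N + 1)]
    theta = [set() for _ in range(N)]
    new = [{proj(i, init)} for i in range(N)]          # init constraint
    while theta != new:
        theta = [set(t) for t in new]
        for i in range(N):
            for j in range(N):             # j == i: own step; else: interference
                for s in universe:
                    if proj(i, s) in theta[i] and (j == i or proj(j, s) in theta[j]):
                        for t in step(j, s, use_last):
                            new[i].add(proj(i, t))
    return theta

def mutual_exclusion_provable(theta):
    """Does theta_0 /\ theta_1 exclude every state with both processes in E?"""
    bad = [(('E',) * N, x, last) for x in (True, False) for last in range(N + 1)]
    return not any(all(proj(i, s) in theta[i] for i in range(N)) for s in bad)
```

Without \verb|last| the computed components constrain nothing beyond the (unchanging) auxiliary value, so a double-E state is admitted; with \verb|last|, mutual exclusion follows, matching the discussion in the text.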
\begin{verbatim}
var x: boolean; initially true
var last: 0..N; initially 0

process P(i):
  var l: {T,H,E}; initially T
  while (true) {
    T: skip;
    H: <x -> x := false; last := i>   // atomic test-and-set
    E: x := true
  }
\end{verbatim} In the fixpoint, the $i$'th component $\theta_i$ is given by $(E_i \;\equiv\; (\neg x \;\wedge\; last=i))$. This suffices for mutual exclusion -- if distinct processes $P_m$ and $P_n$ are both in state $E$, then $last$ must simultaneously be equal to $m$ and to $n$, which is impossible. An important question in automated compositional model checking is, therefore, the development of heuristics for discovering appropriate auxiliary variables. For split invariance, one such heuristic is developed in~\cite{cohen-namjoshi07}. It is based on a method which analyzes counter-examples to expose aspects of the internal state of a process as an auxiliary global predicate. The method is complete for finite-state processes: in the worst case, all of the internal process state is exposed as shared state, which implies that the fixpoint computation turns into reachability on the global state space. The intuition is that for many protocols, it is unnecessary to go to this extreme in order to obtain a strong enough invariant. This intuition can be corroborated by experiments such as those in~\cite{cohen-namjoshi07}. In the automaton-learning approach to compositional verification~\cite{cobleigh-et-al-2003}, the auxiliary state is represented by the states of the learned automata. \comment{ \noindent\textbf{Proof:}\xspace of Theorem \ref{thm:ag-split}: Consider any solution of the assume-guarantee constraints. By the monotonicity of $\SP$, this solution meets the split invariance conditions. Let $\theta^1$ be the strongest solution of the split invariance constraints, and $\theta^2$ the strongest solution of the assume-guarantee constraints. The previous argument implies that $[\theta^1 \;\Rightarrow\; \theta^2]$. The converse needs a more delicate proof.
We work with a logically equivalent form of the condition \mref{ag-intf}: \begin{equation*} [(\exists V\backslash V_i: \SP((\exists V'\backslash V'_i, V\backslash V_i: T_j \;\wedge\; \theta_j), \theta_i)) \;\Rightarrow\; \theta_i] \end{equation*} We require the following lemma: for every $i$, $[\theta^1_i \;\Rightarrow\; (\exists V\backslash V_i: \theta^1)]$. Informally, this means that any state satisfying $\theta^1_i$ can be extended to a state which satisfies $\theta^1$. Assuming this property, we show that $\theta^1$ meets the second set of constraints. The constraint \mref{ag-init} is immediate. First, for any transition relation $\alpha$ defined on $V_i$, it is the case that $[(\exists V\backslash V_i: \SP(\alpha, (\exists V\backslash V_i: \beta))) \;\equiv\; (\exists V\backslash V_i:\SP(\alpha,\beta))]$. The direction from right to left is trivial. We show the other direction. Consider any state $t$ which satisfies the left hand side. There is $v$ such that $v \sim_i t$ and $(u,v)\in \alpha$, for some $u$ such that there is $u'$ for which $u \sim_i u'$ and $u' \in \beta$. Since $\alpha$ is defined on $V_i$ and $u \sim_i u'$, there is a transition $(u',v') \in \alpha$ for which $v' \sim_i v$. This shows that $t$ satisfies the right hand side. By the lemma, we have that $(\exists V\backslash V_i: \SP(T_i, \theta^1_i))$ is equivalent to $(\exists V\backslash V_i: \SP(T_i, (\exists V\backslash V_i: \theta^1)))$, which is equivalent by the identity above to $(\exists V\backslash V_i: \SP(T_i, \theta^1))$, which implies $\theta^1_i$ by equation \mref{split-step-2}.
Similarly, by the lemma, we have that $(\exists V\backslash V_i:\SP((\exists V'\backslash V'_i, V\backslash V_i: T_j \;\wedge\; \theta^1_j), \theta^1_i))$ is equivalent to $(\exists V\backslash V_i:\SP((\exists V'\backslash V'_i, V\backslash V_i: T_j \;\wedge\; (\exists V\backslash V_j: \theta^1)), (\exists V\backslash V_i: \theta^1)))$, which is equivalent by the identity to $(\exists V\backslash V_i:\SP((\exists V'\backslash V'_i, V\backslash V_i: T_j \;\wedge\; (\exists V\backslash V_j: \theta^1)), \theta^1))$. Expanding the formula, there is a transition $(u,v) \in T_i$ to a state $v$ such that $v \sim_i t$. As $u \in \theta^1_i$, from the lemma, there is a state $u'$ such that $u \sim_i u'$ and $u' \in \theta^1$. Hence, there is a state $v'$ such that $(u',v') \in T_i$ and $v \sim_i v'$. By the inductiveness of $\theta^1$, $v' \in \theta^1$. As $t \sim_i v'$, it follows that $t \in \theta^1_i$. Now we turn to showing the lemma. The lemma has the following informal statement: any neighborhood state of node $i$ satisfying $\theta^1[K]_i$ can be extended to a state of the entire network that satisfies $\theta^1[K]$. This statement is true for $K=0$ when both predicates are $\mathit{false}$. Assume it holds at stage $K$. Consider $\theta^1[K+1]_i$. === TO=BE=CONTINUED==== \xspace\textbf{EndProof.} } \section{Local Symmetries} \comment{\outline{ -- note symmetries in mutex. -- what are the minimum symmetries needed? -- local symmetries, groupoids, rings and torus -- collapse -- reductions in mutex and other protocol complexities. }} Many concurrent programs have inherent symmetries. For instance, the mutual exclusion protocol of the previous section has a fully symmetric state space, while the Dining Philosophers protocol, when run on a ring network, has a state space that is invariant under ring rotations.
Symmetries can be used to reduce the state space that must be explored for model checking, as shown in the pioneering work in~\cite{clarke-filkorn-jha93,emerson-sistla93,ip-dill96}. Global symmetry reduction, however, works well only for fully symmetric state spaces, where it can result in an exponential reduction. For a number of other regular networks, such as the ring, torus, hypercube, and mesh networks, there is not enough global symmetry: the state space reductions are usually at most linear. This earlier work on symmetry is connected to model checking on the full state space. What is the corresponding notion for compositional methods? It is the nature of compositional reasoning that the invariant of a process depends only on that of its neighbors. Thus, intuition suggests that it should suffice for the network to have enough \emph{local} symmetry. For example, any two nodes on a ring network are locally symmetric: each has a single left neighbor and a single right neighbor. Torus, mesh and hypercube networks also have similar local symmetries. Technically, the notion of local symmetry is best described by a \emph{groupoid}~\cite{weinstein-1996}. A groupoid is a weaker object than a group (which is used to describe global symmetries), but has many similar properties. We use a specific groupoid, developed in~\cite{golubitsky-stewart-2006}, which defines the local symmetries of a network. The elements of the network groupoid are triples of the form $(m,\beta,n)$, where $m$ and $n$ are nodes of the network, and $\beta$ is an isomorphism on their neighborhoods which preserves the direction of connectivity. (I.e., $(m,e)$ is a connection if, and only if $(n,\beta(e))$ is a connection and, similarly, $(e,m)$ is a connection if, and only if, $(\beta(e),n)$ is a connection.) We call such a triple a \emph{local symmetry}. 
Local symmetries have group-like properties: \begin{itemize} \item The composition of local symmetries is a symmetry: if $(m,\beta,n)$ and $(n,\delta,k)$ are symmetries, so is $(m,\delta\beta,k)$ \item The symmetry $(m,id,m)$ is the identity of the composition \item If $(m,\beta,n)$ is a symmetry, the symmetry $(n,\beta^{-1},m)$ is its inverse. \end{itemize} The set of all local symmetries forms the \emph{network groupoid}. One may reasonably conjecture that nodes which are locally symmetric have isomorphic compositional invariants. I.e., if $(m,\beta,n)$ is a symmetry, then $\theta_m$ and $\theta_n$ are isomorphic up to $\beta$. This is, however, not true in general. The reason is that the compositional invariant computed at nodes $m$ and $n$ depends on the invariants computed at adjacent nodes, and those must be symmetric as well. Thus, one is led to a notion of recursive similarity, called \emph{balance}~\cite{golubitsky-stewart-2006}. This has a co-inductive form like that of bisimulation. A balance relation $B$ is a sub-groupoid of the network groupoid, with the following property: if $(m,\beta,n)$ is in $B$, and node $k$ points to $m$, there is a node $l$ which points to $n$ and an isomorphism $\delta$, such that $(k,\delta,l)$ is in $B$. Moreover, $\beta$ and $\delta$ must agree on the mapping of edges which are common to the neighborhoods of $m$ and $k$. The utility of the balance relation is given by the following theorems. \theorem{\label{thm:balance} (From~\cite{namjoshi-trefler-vmcai-2012}) \begin{enumerate} \item If $G$ is a group of automorphisms for the network, then the set $\{(m,\beta,n) \;|\; \beta \in G \;\wedge\; \beta(m)=n \}$ is a balance relation. \item Let $\theta$ be the strongest compositional invariant for a network. If $(m,\beta,n)$ is in a balance relation, then $[\theta_n \;\equiv\; \dia{\beta}\theta_m]$. 
\end{enumerate}} Informally, the first result shows that the global symmetry group induces balanced local symmetries; this is a quick way of determining a balance relation for a network. The second shows that the local invariants for a pair of balanced nodes are isomorphic. Here, $\dia{\beta}$ is a pre-image operator that maps states over $V_m$ to states over $V_n$ using $\beta$ to relate the values of corresponding edges. \paragraph{Local Symmetry Reduction.} This theorem points the way to symmetry reduction for compositional methods. The idea is to compute fixpoint components only for representatives of local symmetry classes. The group-like properties ensure that for any groupoid, its \emph{orbit relation}, defined as $m \sim n$ if there is $\beta$ such that $(m,\beta,n)$ is in the groupoid, is an equivalence. For a ring network, it suffices to compute a single component, rather than all $N$ components! The calculation is thus independent of the size of the network. This has interesting consequences for parametric proofs, as explained in the next section.
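As a concrete illustration of these notions (our own encoding, not from the paper), the following sketch checks, for a small unidirectional ring, that the rotations induce local symmetries between every pair of nodes, that the group-like axioms listed above hold, and that the orbit relation has a single class.

```python
N = 6  # ring size; illustrative

def nbhd(m):
    """Neighborhood of node m on a unidirectional ring: the node that
    points to m and the node that m points to."""
    return {'pred': (m - 1) % N, 'succ': (m + 1) % N}

def rotate(k):
    """Rotation by k steps, acting on nodes (and hence on neighborhood edges)."""
    return lambda v: (v + k) % N

def is_local_symmetry(m, k, n):
    """(m, beta, n) with beta = rotate(k): beta must map m to n and map m's
    neighborhood onto n's, preserving the direction of connectivity."""
    b = rotate(k)
    return (b(m) == n and
            b(nbhd(m)['pred']) == nbhd(n)['pred'] and
            b(nbhd(m)['succ']) == nbhd(n)['succ'])

# The network groupoid, with each isomorphism recorded by its rotation offset.
groupoid = {(m, (n - m) % N, n)
            for m in range(N) for n in range(N)
            if is_local_symmetry(m, (n - m) % N, n)}

def groupoid_axioms_hold(g):
    """Identity, inverse and composition, as listed in the text."""
    return (all((m, 0, m) in g and (n, (-k) % N, m) in g for (m, k, n) in g)
            and all((m, (k1 + k2) % N, p) in g
                    for (m, k1, n) in g for (n2, k2, p) in g if n2 == n))

# Orbit relation: m ~ n iff some (m, beta, n) is in the groupoid.
orbits = {frozenset(n for (m2, _, n) in groupoid if m2 == m) for m in range(N)}
```

Every pair of nodes turns out to be locally symmetric and there is a single orbit, which is exactly why one fixpoint component suffices for the ring.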
\section{Introduction} In this paper we consider minimizers of functionals of the form \begin{equation}\label{Functional} \int_{B_1} F(D{\bf u})\,dx \end{equation} where ${\bf u} \in H^1(B_1)$ is a map from $\mathbb{R}^n$ to $\mathbb{R}^m$ and $F$ is a smooth, uniformly convex function on $M^{m \times n}$ with bounded second derivatives. By a minimizer we understand a map $\bf u$ for which the integral above does not decrease after we perform any smooth deformation of $\bf u$ with compact support in $B_1$. If $F$ satisfies these conditions then minimizers are unique subject to their own boundary condition. Moreover $\bf u$ is a minimizer if and only if it solves the Euler-Lagrange system \begin{equation}\label{MinimizerEquation} \text{div}(\nabla F(D{\bf u})) = 0, \end{equation} in the sense of distributions. The regularity of minimizers of \eqref{Functional} is a well-studied problem. Morrey \cite{Mo} showed that in dimension $n = 2$ all minimizers are smooth. This is also true in the scalar case $m = 1$ by the classical results of De Giorgi and Nash \cite{DG1},\cite{Na}. In the scalar case, the regularity is obtained by differentiating equation \eqref{MinimizerEquation} and treating the problem as a linear equation with bounded measurable coefficients. An example of De Giorgi \cite{DG2} shows that these techniques cannot be extended to the case $m \geq 2$. Another example due to Giusti and Miranda \cite{GM2} shows that elliptic systems do not have regularity even when the coefficients depend only on ${\bf u}$. On the other hand it is known that minimizers of \eqref{Functional} are smooth away from a closed singular set of $(n-p)$-dimensional Hausdorff measure zero for some $p > 2$, see \cite{GM1}, \cite{GG}. (In fact, if $F$ is uniformly quasi-convex then minimizers are smooth away from a closed set of Lebesgue measure zero, see Evans \cite{E2}). However, the singular set may be non-empty. We will discuss some interesting examples below.
The main result of this paper is a counterexample to the regularity of minimizers of \eqref{Functional} when $n = 3$ and $m = 2$, which are the optimal dimensions in light of the previous results. The existence of such minimizing maps from $\mathbb{R}^3$ to $\mathbb{R}^3$ or from $\mathbb{R}^3$ to $\mathbb{R}^2$ is stated as an open problem in the book of Giaquinta (see \cite{Gi}, p. 61). The first example of a singular minimizer of \eqref{Functional} is due to Ne\v{c}as \cite{Ne}. He considered the homogeneous degree one map $${\bf u}(x)= \frac{x \otimes x}{|x|}$$ from $\mathbb{R}^n$ to $\mathbb{R}^{n^2}$ for $n$ large, and constructed explicitly a smooth uniformly convex $F$ on $M^{n^2 \times n}$ for which ${\bf u}$ minimizes \eqref{Functional}. Later Hao, Leonardi and Ne\v{c}as \cite{HLN} improved the dimension to $n = 5$ using \begin{equation}\label{SYExample} {\bf u}(x) = \frac{x \otimes x}{|x|} - \frac{|x|}{n}I. \end{equation} The values of \eqref{SYExample} are symmetric and traceless, and thus lie in an $(n(n+1)/2 - 1)$-dimensional subspace of $M^{n \times n}$. \v{S}ver\'{a}k and Yan \cite{SY1} showed that the map \eqref{SYExample} is a counterexample for $n=3,\,m=5$. Their approach is to construct a quadratic null Lagrangian $L$ which respects the symmetries of ${\bf u}$, such that $\nabla L = \nabla F$ on $D{\bf u}(B_1)$ for some smooth, uniformly convex $F$ on $M^{5 \times 3}$. The Euler-Lagrange system $\text{div}(\nabla F(D{\bf u})) = \text{div}(\nabla L(D{\bf u})) = 0$ then holds automatically. In \cite{SY2} they use the same technique to construct a non-Lipschitz minimizer with $n = 4,\,m=3$ coming from the Hopf fibration. To our knowledge, these are the lowest-dimensional examples to date. Our strategy is different: it is based on constructing a homogeneous degree one minimizer in the scalar case for an integrand which is convex but has ``flat pieces''.
An interesting problem about the regularity of minimizers occurs in the scalar case when considering in \eqref{Functional} convex integrands $F:\mathbb{R}^n \to \mathbb{R}$ for which the uniform convexity of $F$ fails on some compact set $\mathcal S$. Assume for simplicity that $F$ is smooth outside the degeneracy set $\mathcal S$, and also that $F$ satisfies the usual quadratic growth at infinity. One key question is whether or not the gradient $\nabla u$ localizes as we focus closer and closer to a point $x_0 \in B_1$. In \cite{DS} it was proved that, in dimension $n=2$, the sets $\nabla u(B_\varepsilon(x_0))$ decrease uniformly as $\varepsilon \to 0$ either to a point outside $\mathcal S$, or to a connected subset of $\mathcal S$. In Theorem \ref{ScalarExample} below we show that this ``continuity property" of $\nabla u$ does not hold in dimension $n=3$ when the set $\mathcal S$ is the union of two disconnected convex sets. We remark that, as in the $p$-Laplace equation, it is relatively standard (see \cite{E1,CF}) to obtain the continuity of $\nabla u$ outside the convex hull $\mathcal S^c$ of $\mathcal S$. 
Let $w$ be the homogeneous degree one function $$w(x_1,x_2) = \frac{x_2^2 - x_1^2}{\sqrt{2(x_1^2 + x_2^2)}}=\frac{-1}{\sqrt 2} \, \, r \, \cos 2 \theta,$$ and let $u_0$ be the function on $\mathbb{R}^3$ obtained by revolving $w$ around the $x_1$ axis, $$u_0(x_1,x_2,x_3) = w\left(x_1,\sqrt{x_2^2 + x_3^2}\right).$$ We show that $u_0$ solves a degenerate elliptic equation that is uniformly elliptic away from the cone $$K_0 = \{x_1^2 > x_2^2+x_3^2\}.$$ \begin{thm}\label{ScalarExample} For any $\delta > 0$ there exists a convex function $G_0 \in C^{1,1-\delta} (\mathbb{R}^3)$ which is linear on two bounded convex sets containing $\nabla u_0(K_0)$, uniformly convex and smooth away from these two convex sets, such that $u_0$ is a minimizer of the functional $$\int_{B_1} G_0(\nabla u_0)\,dx.$$ \end{thm} We use $u_0$ and $G_0$ to construct a singular minimizing map from $\mathbb{R}^3$ to $\mathbb{R}^2$. Rescaling $u_0$ we obtain a function $u^1$ that solves an equation that is uniformly elliptic away from a thin cone around the $x_1$ axis, and switching the $x_1$ and $x_3$ axes we get an analogous function $u^2$. Then ${\bf u} = (u^1,u^2)$ is a minimizing map for $$F_0(p^1,p^2) := G_1(p^1) + G_2(p^2),$$ which is a convex function defined on $\mathbb{R}^6 \cong M^{2 \times 3}$. Notice that the Euler-Lagrange system $\text{div}(\nabla F_0(D{\bf u}))=0$ is de-coupled, and $F_0$ fails to be uniformly convex or smooth in certain regions. However, a key observation is that $F_0$ separates quadratically from its tangent planes when restricted to the image of $D {\bf u}$. We obtain our example by making a small perturbation of $F_0$. More specifically, let $$u^1(x_1,x_2,x_3) = u_0(x_1/2,x_2,x_3), \quad u^2(x_1,x_2,x_3) = u^1(x_3,x_2,x_1)$$ and let \begin{equation}\label{MainExample} {\bf u} = (u^1,u^2). 
\end{equation} Our main theorem is: \begin{thm}\label{main} The map \eqref{MainExample} is a minimizer of $$\int_{B_1} F(D{\bf u})\,dx$$ for some smooth, uniformly convex $F : M^{2 \times 3} \rightarrow \mathbb{R}$. \end{thm} The paper is organized as follows. In Section \ref{MainProof} we state a convex extension lemma and the key proposition, which asserts the existence of a suitable smooth small perturbation of $G_0$. We then use them to prove Theorem \ref{main}. In Section \ref{Constructions} we prove the key proposition. This section contains most of the technical details. In Section \ref{Appendix} we prove the extension lemma and some technical inequalities needed for the key proposition. Finally, at the end of Section \ref{Appendix} we outline how to prove Theorem \ref{ScalarExample}. \section{Key Proposition and Proof of Theorem \ref{main}}\label{MainProof} In this section we state the extension lemma and the key proposition. We then use them to prove Theorem \ref{main}. The function $F_0$ defined in the Introduction is not uniformly convex in $M^{2 \times 3}$, but it separates quadratically from its tangent planes on the image of $D \bf u$ which, by the one-homogeneity of $\bf u$, is the two dimensional surface $D{\bf u}(S^2)$. The quadratic separation holds on this surface since $G_1$ is uniformly convex in the region where $G_2$ is flat and vice versa. We would like to find a uniformly convex extension of $F_0$ with the same tangent planes on $D{\bf u}( \partial B_1)$. \subsection{Extension Lemma} The extension lemma gives a simple criterion for deciding when the tangent planes on a smooth surface can be extended to a global smooth, uniformly convex function. Let $\Sigma$ be a smooth compact, embedded surface in $\mathbb{R}^n$ of any dimension. 
\begin{lem}\label{ExtensionLemma} Let $G$ be a smooth function and ${\bf v}$ a smooth vector field on $\Sigma$ such that \begin{equation}\label{QuadSepCondition} G(y) - G(x) - {\bf v}(x) \cdot (y-x) \ge \gamma |y-x|^2, \end{equation} for any $x, \,y \in \Sigma$ and some $\gamma > 0$. Then there exists a global smooth function $F$ such that $F = G$ and $\nabla F = {\bf v}$ on $\Sigma$, and $D^2F \ge \gamma I$. \end{lem} The idea of the proof is to first make a local extension by adding a large multiple of the square of distance from $\Sigma$. We then make an extension to all of $\mathbb{R}^n$ by taking the supremum of tangent paraboloids to the local extension. Finally we mollify and glue the local and global extensions. We postpone the proof to the appendix, Section \ref{Appendix}. We also record an obvious corollary. \begin{defn}\label{SeparationDefinition} Let $G$ be a smooth function on an open subset $O$ of $\mathbb{R}^n$. We define the separation function $S_G$ on $O \times O$ by $$S_{G}(x,y) = G(y) - G(x) - \nabla G(x) \cdot (y-x).$$ \end{defn} \begin{cor}\label{ExtensionCorollary} Assume that $G$ is a smooth function in a neighborhood of $\Sigma$ such that $S_G(x,y) \ge \gamma |y-x|^2$ for any $x, \,y \in \Sigma$ and some $\gamma > 0$. Then there exists a global smooth, uniformly convex function $F$ such that $F = G$ and $\nabla F = \nabla G$ on $\Sigma$. \end{cor} \subsection{Key Proposition} In this section we state the key proposition. We first give the setup for the statement. Recall that $w = (x_2^2-x_1^2)/\sqrt{2(x_1^2+x_2^2)}$. Let $$\Gamma = \nabla w(B_1 - \{0\}) = \nabla w(S^1).$$ We describe $\Gamma$ as a collection of four congruent curves. The part of $\Gamma$ in the region $\{p_2 \geq |p_1|\}$ can be written as a graph $$\Gamma_1 = \{(p_1, \varphi(p_1))\}$$ for $p_1 \in [-1,1]$, where $\varphi$ is even, uniformly convex, tangent to $p_1^2=p_2^2$ at $\pm 1$, and separates from these lines like $(\text{dist})^{3/2}$. 
We will give a more precise description of $\varphi$ in Section \ref{Constructions}. The other pieces of $\Gamma$ can be written $$\Gamma_2 = \{(-\varphi(p_2),p_2)\}, \quad \Gamma_3 = \{(p_1, -\varphi(p_1))\}, \quad \Gamma_4 = \{(\varphi(p_2),p_2)\}$$ for $p_i \in [-1,1]$, representing the left, bottom and right pieces of $\Gamma$ (see figure \ref{Gammapic}). \begin{figure} \centering \includegraphics[scale=0.35]{Gamma.pdf} \caption{$\Gamma$ consists of four identical curves separating from the lines $p_1^2=p_2^2$ like $\text{dist}^{3/2}.$} \label{Gammapic} \end{figure} Recall that $u_0 = w\left(x_1,\sqrt{x_2^2+x_3^2}\right)$. Then $$\Omega = \nabla u_0(S^2)$$ is the surface obtained by revolving $\Gamma$ around the $p_1$ axis. Let $\Omega_R \subset \Omega$ be the surface obtained by revolving $\Gamma_1$ around the $p_1$ axis. In the statement below, $\delta$ and $\gamma$ are small positive constants depending on $\varphi$. \begin{prop}\label{KeyProposition} For any $\epsilon > 0$ there exists a smooth function $G$ defined in a neighborhood of $\Omega$ such that $$\text{div}(\nabla G(\nabla u_0)) = 0 \quad \quad \mbox{in} \quad B_1 \setminus\{0\},$$ and \begin{enumerate} \item If $p \in \Omega_R \cap \{-1 + \delta \leq p_1 \leq 1-\delta\}$ then $S_{G}(p,q) \ge \gamma|p-q|^2$ for all $q \in \Omega$, \item $S_{G}(p,q) \geq -\epsilon |p-q|^2$ otherwise for $p,\,q \in \Omega$. \end{enumerate} \end{prop} We delay the proof of this proposition to Section \ref{Constructions}, and use it now to prove Theorem \ref{main}. \subsection{Proof of Theorem \ref{main}} Recall that $$u^1(x_1,x_2,x_3) = u_0(x_1/2,x_2,x_3), \quad u^2(x_1,x_2,x_3) = u^1(x_3,x_2,x_1),$$ and let $$G_1(p_1,p_2,p_3) = G(2p_1,p_2,p_3), \quad G_2(p_1,p_2,p_3) = G_1(p_3,p_2,p_1).$$ Then by Proposition \ref{KeyProposition} we have $\text{div}(\nabla G_i(\nabla u^i)) = 0$.
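As a side sanity check of the definitions (a sketch using sympy; the variable names are ours), one can verify symbolically the polar form of $w$ from the Introduction and the one-homogeneity of $u_0$ (hence of $u^1$ and $u^2$), which is used repeatedly in this proof.

```python
import sympy as sp

x1, x2, x3, r, th, lam = sp.symbols('x1 x2 x3 r theta lam', positive=True)

# w(x1, x2) = (x2^2 - x1^2) / sqrt(2 (x1^2 + x2^2))
w = (x2**2 - x1**2) / sp.sqrt(2 * (x1**2 + x2**2))

# Polar form: w = -(1/sqrt(2)) r cos(2 theta)
w_polar = w.subs({x1: r * sp.cos(th), x2: r * sp.sin(th)}, simultaneous=True)
polar_ok = w_polar.equals(-r * sp.cos(2 * th) / sp.sqrt(2))

# u0 = w(x1, sqrt(x2^2 + x3^2)) is homogeneous of degree one
u0 = w.subs(x2, sp.sqrt(x2**2 + x3**2))
u0_scaled = u0.subs({x1: lam * x1, x2: lam * x2, x3: lam * x3},
                    simultaneous=True)
hom_ok = u0_scaled.equals(lam * u0)
```

Both checks reduce to elementary trigonometric identities and the positive homogeneity of the square root.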
Let $$\Sigma = D{\bf u}(B_1).$$ Since $D^2u^1$ has rank $2$ away from the cone $$K_1 = \left\{x_1^2 \geq 4(x_2^2+x_3^2)\right\}$$ and similarly $D^2u^2$ has rank $2$ away from $$K_2 = \left\{x_3^2 \geq 4(x_1^2 + x_2^2)\right\},$$ it is easy to see that $\Sigma$ is a smooth embedded surface in $\mathbb{R}^6$. Let $$\Omega_i = \nabla u^i(B_1 - K_i).$$ Note that $\Omega_1$ is just $\Omega_R$ squeezed by a factor of $1/2$ in the $p_1$ direction. Let $\nu_i$ be the outer normals to $\Omega_i$. Since $u^i$ are homogeneous degree one we have $\nu_i(\nabla u^i(x)) = x$ on $(B_1 - K_i) \cap S^2$. Furthermore, the preimage $x \in S^2$ of any point in $\Sigma$ satisfies either $|x_1| \leq |x_3|$ or vice versa. It follows from these observations that if $(p^1, p^2) \in \Sigma$ then either $$p^1 \in \Omega_1 \cap \{-\beta/2 \leq p^1_1 \leq \beta/2\} \text{ or } p^2 \in \Omega_2 \cap \{-\beta/2 \leq p^2_3 \leq \beta/2\}$$ with $\beta$ such that $\varphi'(\beta)=1/2$, $\beta< 1-\delta$ (see figure \ref{GradMap}). Assume $p^1$ belongs to the set above. \begin{figure} \centering \includegraphics[scale=0.45]{GradMap.pdf} \caption{$\nabla u^1$ maps the cone $K_1$ to a region where $G_1$ is slightly non-convex, but $\nabla u^2$ maps it well inside $\Omega_2$ where $G_2$ is uniformly convex.} \label{GradMap} \end{figure} Finally, let $$F_0(p^1,p^2) = G_1(p^1) + G_2(p^2).$$ By rescaling Proposition \ref{KeyProposition} we have for $(p^1,p^2),\, (q^1,q^2) \in \Sigma$ that \begin{align*} S_{F_0}((p^1,p^2),(q^1,q^2)) &= S_{G_1}(p^1,q^1) + S_{G_2}(p^2,q^2) \\ & \geq \gamma|p^1-q^1|^2 - \epsilon |p^2-q^2|^2. \end{align*} Let $\omega_0 \in S^2$ be a preimage of $p^1$ under $\nabla u^1$. Then $|\nabla u^1(\omega)-\nabla u^1(\omega_0)| > c|\omega-\omega_0|$ and $|\nabla u^i(\omega)-\nabla u^i(\omega_0)| < C|\omega - \omega_0|$ for $i=1,2$ and any $\omega \in S^2$, so $$|p^2-q^2| \leq C|p^1-q^1|,$$ giving quadratic separation.
By Corollary \ref{ExtensionCorollary} there is a smooth uniformly convex function $F$ on $\mathbb{R}^6$ so that $F = F_0$ and $ \nabla F = \nabla F_0$ on $\Sigma$, hence $\bf u$ satisfies the Euler-Lagrange system $\text{div}(\nabla F(D {\bf u})) = 0$ in $B_1 \setminus \{0\}$. Now it is straightforward to check that $\bf u$ is a weak solution of the system in the whole $B_1$. Indeed $$\int_{B_1}\nabla F(D {\bf u})\cdot D \psi=0, \quad \quad \forall \psi \in C_0^\infty(B_1),$$ follows by integrating first by parts in $B_1 \setminus B_\epsilon$ and then letting $\epsilon \to 0$. \section{Constructions}\label{Constructions} In this section we prove the key step, Proposition \ref{KeyProposition}. Since $\Omega = \nabla u_0(B_1)$ is the surface obtained by revolving $\Gamma$ around the $p_1$ axis, we can reduce to a one-dimensional problem on $\Gamma$ and then revolve the resulting picture around the $p_1$ axis. Since all of our constructions will be on $\mathbb{R}^2$ in this section we use coordinates $(x,y)$ rather than $(p_1,p_2)$. \subsection{Setup} Define $H$ to be an even function in $x$ and $y$ which has the form \begin{equation}\label{Hdef} H(x,y) = f(x) + h(x)(|y|-\varphi(x)), \end{equation} and is defined in a neighborhood of every point on $\Gamma_1 \cup \Gamma_3$, for some smooth functions $f$ and $h$ on $[-1,1]$. In our construction $h$ will be identically zero and $f$ linear near $x = \pm 1$, so $H$ is linear in a neighborhood of the cusps of $\Gamma$. Notice that we can extend $H$ to be a linear function (depending only on $x$) in a whole neighborhood of $\Gamma_2$ and similarly on $\Gamma_4$. Then $H$ is defined and smooth in a neighborhood of $\Gamma$. \subsection{Inequalities for $\varphi$} We now record some useful properties of $\Gamma$. For proofs see Section \ref{Appendix}. The first estimate gives an expansion for $\varphi$ near $x = -1$. 
\begin{prop}\label{PhiComputations} The function $\varphi$ is even, uniformly convex, and tangent to $y=|x|$ at $x = \pm 1$. Furthermore, $\varphi''$ is decreasing near $x = -1$ and we have the expansion \begin{equation}\label{phiexpansion} \varphi''(-1 + \epsilon) = \sqrt{\frac{2}{3}} \epsilon^{-1/2} + O(1). \end{equation} \end{prop} The second estimate says that the vertical reflection of $\varphi$ over its tangent $y = -x$ lies above and separates from $\Gamma_2$ (see figure \ref{SideSepPic}). It follows easily from the uniform convexity of $\varphi$. \begin{prop}\label{LeftSideSeparation} The function $a(x) = -2x-\varphi$ is uniformly concave, tangent to $\Gamma_2$ at $x = -1$, and lies strictly above $\Gamma_2$ for $x > -1$. \end{prop} \begin{figure} \centering \includegraphics[scale=0.30]{SideSep.pdf} \caption{The graph $a = -2x - \varphi$ lies strictly above $\Gamma_2$.} \label{SideSepPic} \end{figure} \subsection{Euler-Lagrange Equation} Let $$G(p_1,p_2,p_3) = H\left(p_1,\sqrt{p_2^2+p_3^2}\right).$$ The condition that $u_0$ solves the Euler-Lagrange equation $\text{div}(\nabla G(\nabla u_0)) = 0$ is equivalent to \begin{equation}\label{EulerLagrange} h(x) = \frac{f''(x)}{2\varphi''(x)}. \end{equation} Indeed, since $G$ is linear near the surfaces obtained by revolving $\Gamma_2$ and $\Gamma_4$, we just need to verify the Euler-Lagrange equation where $\nabla u_0$ is on the surface $\Omega_R$ obtained by revolving $\Gamma_1$. Passing a derivative, the Euler-Lagrange equation $\text{div}(\nabla G(\nabla u_0)) = 0$ is equivalent to $$\text{tr}\left(D^2G(\nabla u_0) \cdot D^2u_0\right) = 0.$$ Let $\Omega_R$ have outer normal $\nu$ and second fundamental form $II$. Since $u_0$ is homogeneous degree one we have $\nu(\nabla u_0(x)) = x$ on $S^2$.
Let $T$ be a frame tangent to $S^2$ at $x$, and differentiate to obtain $D_T^2u_0(x) = II^{-1}(\nabla u_0(x)).$ In coordinates tangent to $\Omega_R$ at $p = (p_1,\varphi(p_1),0)$ one computes $$II = \frac{1}{\sqrt{1 + \varphi'^2}} \left( \begin{array}{cc} \frac{\varphi''}{1 + \varphi'^2} & 0 \\ 0 & -\frac{1}{\varphi} \end{array} \right), \quad D^2G = \left( \begin{array}{cc} \frac{f'' - h\varphi''}{1 + \varphi'^2} & 0 \\ 0 & \frac{h}{\varphi} \end{array} \right)$$ and the Euler-Lagrange formula follows. \begin{rem}\label{ELComputation} For a fast way to compute $D^2G$ in tangential coordinates, differentiate the equation $G(p_1,\varphi(p_1),0) = f(p_1)$: $$\nabla G \cdot (1,\varphi') = f', \quad (1,\varphi')^T \cdot D^2G \cdot (1,\varphi') + h \varphi'' = f''.$$ The other eigenvalue comes from the rotational symmetry of $G$ around the $p_1$ axis. \end{rem} \begin{rem}\label{HigherDimEL} If we do the computation in $\mathbb{R}^n$ we have $n-1$ rotational principal curvatures and derivatives, giving the Euler-Lagrange equation $h = \frac{f''}{(n-1)\varphi''}$. \end{rem} \subsection{Convexity Conditions} Since most of our analysis is near a cusp, it is convenient to shift the picture by the vector $(1,-1)$ so that $\varphi, \,f$ are defined on $[0,2]$ and $\varphi$ is tangent to $y = -x$ at zero. We assume this for the remainder of the section. We examine convexity conditions between two points on $\Gamma_1$. Let $p = (x_0,\varphi(x_0))$ and $q = (x,\varphi(x))$. We first write the equation for the tangent plane $L_p$ to $H$ at $p = (x_0,\varphi(x_0))$: $$L_p(x,y) = f(x_0) + f'(x_0)(x-x_0) + h(x_0) \left[y- (\varphi(x_0) + \varphi'(x_0)(x-x_0)) \right].$$ Applying the Euler-Lagrange equation \eqref{EulerLagrange} we obtain \begin{equation}\label{TangentPlane} L_p = f(x_0) + f'(x_0)(x-x_0) - \frac{f''(x_0)}{2\varphi''(x_0)} \left[y- (\varphi(x_0) + \varphi'(x_0)(x-x_0)) \right]. 
\end{equation} By definition, $$S_{H}(p,q) = f(x) - L_p(x,\varphi(x)).$$ Using equation \eqref{TangentPlane} we obtain \begin{equation}\label{TopSeparation} S_H(p,q) = \int_{x_0}^x f''(t)(x-t)\,dt - \frac{f''(x_0)}{2\varphi''(x_0)} \int_{x_0}^x \varphi''(t)(x-t)\,dt. \end{equation} \begin{defn}\label{Sep} For a nonnegative function $g : \mathbb{R} \rightarrow \mathbb{R}$ define the weighted average $$s_g(x_0,x) = \frac{\int_{x_0}^x g(t)(x-t)\,dt}{g(x_0)(x-x_0)^2}.$$ \end{defn} With this definition we have \begin{equation}\label{TS1} S_H(p,q) = f''(x_0) \left( s_{f''}(x_0,x) - \frac{1}{2}s_{\varphi''}(x_0,x) \right) (x-x_0)^2, \end{equation} thus, the first qualitative convexity condition is \begin{equation}\label{FirstConvexityCondition} s_{f''}(x_0,x) \geq \frac{1}{2}s_{\varphi''}(x_0,x). \end{equation} \begin{rem}\label{LocalComputationFirstCondition} Notice that $$\lim_{x \to x_0} s_g(x_0,x)=\frac 12.$$ It is easy to check that if $g$ is increasing (decreasing) then $s_g(x_0,x)$ is increasing (decreasing) with $x$. With this observation one verifies that condition \eqref{FirstConvexityCondition} holds for $x_0, \,x$ near $0$ if $f''(x) = Cx^{1-\alpha}$ for any $\alpha \in (0,1)$. Indeed, since $f''$ is increasing and $\varphi''$ is decreasing one only needs to check the condition at $x = 0$, where one computes $s_{f''}(x_0,0) = \frac{1}{3-\alpha}$ and $\frac{1}{2}s_{\varphi''}(x_0,0) = \frac{1}{3} + O(\sqrt{x_0})$ which follows by Proposition \ref{PhiComputations}. \end{rem} We now examine convexity conditions between $p \in \Gamma_1$ and $q \in \Gamma_2$. Let $p = (x_0, \varphi(x_0))$. In our construction we will have $h \geq 0$, and since $H$ is linear near $\Gamma_2$, we see that $S_H(p,q) \geq 0$ if the intersection line of tangent planes to $H$ at $p$ and at $0$ lies above the line $y = -x$ on $[0,2]$. 
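The closed-form values quoted in Remark \ref{LocalComputationFirstCondition} are easy to confirm numerically. The following sketch (our own aside, standard library only; the midpoint quadrature and the sample exponent $\alpha = 1/2$ are illustrative choices, not from the text) checks that $s_g(x_0,x) \to 1/2$ as $x \to x_0$, that $s_{f''}(x_0,0) = 1/(3-\alpha)$ when $f''(t) = C t^{1-\alpha}$, and that $s_g$ is increasing in $x$ when $g$ is increasing.

```python
import math

def s(g, x0, x, n=20000):
    # s_g(x0, x) = \int_{x0}^{x} g(t)(x - t) dt / (g(x0)(x - x0)^2),
    # computed with a midpoint rule; the signed step h also handles x < x0.
    h = (x - x0) / n
    total = 0.0
    for i in range(n):
        t = x0 + (i + 0.5) * h
        total += g(t) * (x - t)
    return total * h / (g(x0) * (x - x0) ** 2)

alpha = 0.5                       # sample exponent in (0, 1)
f2 = lambda t: t ** (1 - alpha)   # model f''(t) = C t^{1-alpha} with C = 1

assert abs(s(f2, 0.3, 0.300001) - 0.5) < 1e-3           # limit 1/2 as x -> x0
assert abs(s(f2, 0.3, 1e-12) - 1 / (3 - alpha)) < 1e-3  # s_{f''}(x0, 0) = 1/(3 - alpha)
vals = [s(f2, 0.1, x) for x in (0.2, 0.4, 0.6, 0.8)]
assert all(a < b for a, b in zip(vals, vals[1:]))       # increasing g => s_g increasing in x
```

The constant $C$ cancels in $s_g$, which is why it can be dropped in the model.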
Using equation \eqref{TangentPlane} we compute the formula for the intersection line: \begin{equation}\label{IntersectionLine} \begin{split} y = \varphi(x_0) - \frac{2\varphi''(x_0)}{f''(x_0)}\int_{0}^{x_0} f''(t)(x_0-t)\,dt \\ + \left(\varphi'(x_0) - \frac{2\varphi''(x_0)}{f''(x_0)}\int_{0}^{x_0}f''(t)\,dt\right)\cdot (x-x_0). \end{split} \end{equation} If condition \eqref{FirstConvexityCondition} holds at $x = 0$, it means that the origin lies below the intersection line, thus $S_H(p,q) \geq 0$ for all $q \in \Gamma_2$ provided that the slope of the intersection line above is larger than $-1$: $$ \varphi'(x_0) - \frac{2\varphi''(x_0)}{f''(x_0)}\int_{0}^{x_0}f''(t)\,dt \ge -1=\varphi'(0).$$ \begin{defn}\label{DerivDiff} For a nonnegative function $g: \mathbb{R} \rightarrow \mathbb{R}$ define $$d_g(x) = \frac{\int_0^{x} g(t)\,dt}{xg(x)}.$$ \end{defn} With this definition the slope condition above can be written as \begin{equation}\label{SecondConvexityCondition} d_{f''}(x) \leq \frac{1}{2} d_{\varphi''}(x). \end{equation} \begin{rem}\label{LocalComputationSecondCondition} Near $x = 0$ one computes $\frac{1}{2}d_{\varphi''}(x) = 1 + O(\sqrt{x})$. Thus, if $f''(x) = Cx^{1-\alpha}$ near $x = 0$ then \eqref{SecondConvexityCondition} holds. However, away from a small neighborhood of $0$, condition \eqref{SecondConvexityCondition} will not hold in our construction. We will use formula \eqref{TangentPlane} more carefully, combined with Proposition \ref{LeftSideSeparation}, to deal with these cases. \end{rem} \begin{rem}\label{LinPartIndependence} Conditions \eqref{FirstConvexityCondition} and \eqref{SecondConvexityCondition} are independent of the linear part of $f$. Thus, when checking convexity conditions we only need to use the properties of $f''$. \end{rem} \subsection{Preliminary Construction} As a stepping stone to proving Proposition \ref{KeyProposition} we construct first a $C^{1,\alpha}$ function $H_0$ near $\Gamma$, that is globally convex. 
We will use this construction to prove Theorem \ref{ScalarExample} in Section \ref{Appendix}. The function $H\in C^\infty$ is obtained by perturbing $H_0$. Below we define $$G_0(p_1,p_2,p_3) = H_0\left(p_1,\sqrt{p_2^2 + p_3^2}\right).$$ Recall in the constructions below that we have shifted the picture by $(1,-1)$. \begin{prop}\label{H0Construction} For any $\alpha \in (0,1)$ there exists a function $H_0$ near $\Gamma$ such that \begin{enumerate} \item $H_0$ is a linear function depending only on $x$ on $\Gamma_2$, and similarly on $\Gamma_4$. \item $H_0$ is pointwise $C^{1,1-\alpha}$ on the cusps of $\Gamma$ and smooth otherwise, \item $\text{div}(\nabla G_0(\nabla u_0)) = 0$ away from the cone $\{x_1^2 = x_2^2+x_3^2\},$ \item $S_{H_0}(p,q) \geq 0$ for all $p,\,q \in \Gamma$, \item If $p = (x,\varphi(x))$ then $S_{H_0}(p,q) \ge \eta(x)|p-q|^2$ for all $q \in \Gamma$, where $\eta$ is some continuous function on $[0,2]$ with $\eta > 0$ on $(0,2)$ and $\eta(0)=\eta(2)=0$. \end{enumerate} \end{prop} We will define $f_0$ by $f_0(0) = f_0'(0) = 0$ and prescribe $f_0''$, and then let $H_0$ be the function determined by $f_0$ through the Euler-Lagrange relation \eqref{EulerLagrange}. It is easy to check that condition \eqref{FirstConvexityCondition} holds if we take $f_0'' = \varphi''$. However, we want $h_0 = f_0''/(2\varphi'')$ to go to zero at the endpoints so that $H_0$ is linear on $\Gamma_2$ and $\Gamma_4$. Motivated by the above and Remarks \ref{LocalComputationFirstCondition} and \ref{LocalComputationSecondCondition}, define $$f_0''(x) = \begin{cases} \delta^{\alpha - 1}\varphi''(\delta)x^{1-\alpha}, \quad 0 \leq x \leq \delta \\ \varphi''(x), \quad \delta \leq x \leq 1 \\ f_0''(2-x), \quad 1 \leq x \leq 2 \end{cases}$$ (See figure \ref{f0pic}). Assume $\delta$ is tiny so that $\varphi''$ is well approximated by its expansion \eqref{phiexpansion}.
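As a numerical sanity check on this choice of $f_0''$, one can verify condition \eqref{FirstConvexityCondition} on $[0,\delta]$, replacing $\varphi''$ by the leading term $\sqrt{2/3}\,t^{-1/2}$ of the (shifted) expansion \eqref{phiexpansion}. The sketch below is our own (standard library only); the grid, $\delta = 0.01$, and $\alpha = 1/2$ are illustrative choices, not taken from the text, and the multiplicative constants cancel in the weighted averages.

```python
import math

def s(g, x0, x, n=4000):
    # weighted average s_g(x0, x) from Definition "Sep", midpoint rule
    h = (x - x0) / n
    total = 0.0
    for i in range(n):
        t = x0 + (i + 0.5) * h
        total += g(t) * (x - t)
    return total * h / (g(x0) * (x - x0) ** 2)

alpha, delta = 0.5, 0.01
phi2 = lambda t: math.sqrt(2.0 / 3.0) / math.sqrt(t)  # leading term of phi'' near the cusp
f2 = lambda t: t ** (1 - alpha)                       # f0'' on [0, delta], constant dropped

pts = [delta * k / 10 for k in range(1, 11)] + [1e-9]
for x0 in pts[:-1]:
    for x in pts:
        if abs(x - x0) > 1e-12:
            # first convexity condition: s_{f''} >= (1/2) s_{phi''}
            assert s(f2, x0, x) >= 0.5 * s(phi2, x0, x) - 1e-3
```

At the endpoint $x \approx 0$ the two sides approach $1/(3-\alpha) = 0.4$ and $1/3$ respectively, matching the values in Remark \ref{LocalComputationFirstCondition}.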
Let $H_0$ be the function as in \eqref{Hdef} determined by $f_0$ through the Euler-Lagrange relation \eqref{EulerLagrange}. \begin{figure} \centering \includegraphics[scale=0.35]{f0Construction.pdf} \caption{$f_0''$ agrees with $\varphi''$ on $[\delta,2-\delta]$, behaves like $x^{1-\alpha}$ near zero, and is symmetric around $x = 1$.} \label{f0pic} \end{figure} \begin{proof}[{\bf Proof of proposition \ref{H0Construction}}] The first three items are clear by construction so we check the convexity conditions. By symmetry we only need to consider $p \in \Gamma_1 \cup \Gamma_2$. If $p \in \Gamma_2$ the positive separation is a consequence of $H_0 \geq 0$. This follows from the definition of $H_0$ on $\Gamma_1 \cup \Gamma_3$. Also, by symmetry, the linear function on $\Gamma_4$ intersects the linear function on $\Gamma_2$ on the vertical line $\{x=1 \}$, and since $\Gamma_4 \subset \{x>1\}$ we obtain $H_0 \ge 0$ on $\Gamma_4$ as well. We now consider the situation when $p\in \Gamma_1$ and distinguish two cases depending whether $q \in \Gamma_1 \cup \Gamma_3$ or $q \in \Gamma_2 \cup \Gamma_4$. Let $p = (x_0,\varphi(x_0))$. {\bf First Case:} Assume first that $q = (x,\varphi(x)) \in \Gamma_1$. By symmetry of $f_0''$ around $x = 1$ we may assume $x < x_0$. 
If $x_0 \in [0,\delta]$ then by formula \eqref{TS1} and Remark \ref{LocalComputationFirstCondition} we have $$S_{H_0}(p,q) \geq c(\alpha)f''(x_0)(x-x_0)^2.$$ If $x_0 \in [\delta, 2-\delta]$ we have $f_0'' = \varphi''$, so one computes $$S_{H_0}(p,q) = \int_{x}^{x_0} (f''(t) - \frac 12 \varphi''(t))(t-x)\,dt.$$ If $x \geq \delta$ then this is clearly controlled below by $\frac{1}{4}\min(\varphi'')(x-x_0)^2$, and if $x < \delta$ then we have $$S_{H_0}(p,q) = \int_{x}^{\delta} (f''(t) - \varphi''(t)/2)(t-x)\,dt + \frac{1}{2}\int_{\delta}^{x_0} \varphi''(t)(t-x)\,dt,$$ which is controlled below by $$\varphi''(\delta)(s_{f''}(\delta,x) - s_{\varphi''}(\delta,x)/2)(\delta - x)^2 + \frac{1}{4}\min(\varphi'')(x_0-\delta)^2 \geq c(\alpha)(x-x_0)^2.$$ Finally, if $x_0 \geq 2-\delta$ then since $f_0''/\varphi''$ is decreasing on $[\delta,2]$, we compute for $x \geq \delta$ that $$S_{H_0}(p,q) \geq \frac{1}{2} \int_{x_0}^x f''(t)(x-t)\,dt \geq \frac{1}{4}\min\{f_0''(x_0),\min(\varphi'')\}(x-x_0)^2.$$ If $x < \delta$ then, since $f_0'' \leq \varphi''$ and they agree on $[\delta,2-\delta]$, we have using expansion \eqref{phiexpansion} that $$S_{H_0}(p,q) \geq \frac{1}{2}\int_{\delta}^{2-\delta} \varphi''(t)(t-x)\,dt - C\sqrt{\delta} \geq c(x-x_0)^2.$$ If $q \in \Gamma_3$ then quadratic separation holds as well since $$\partial_yH_0(x_0,\varphi(x_0)) = \frac{f_0''(x_0)}{2\varphi''(x_0)} > 0.$$ {\bf Second Case:} By symmetry we may assume $q \in \Gamma_2$. If $x_0 \leq \delta$ we compute $$d_{f_0''}(x_0) = \frac{1}{2-\alpha} < 1.$$ By Remark $\ref{LocalComputationSecondCondition}$, inequality \eqref{SecondConvexityCondition} holds strictly. Now assume $x_0 \in [\delta,1]$. Define $$g(x) = \varphi(x) - 2f_0(x).$$ Using the tangent plane formula \eqref{TangentPlane} we compute $$L_p(x,g(x)) = -\int_{x_0}^x f''(t)(x-t)\,dt + \frac{1}{2} \int_{x_0}^x \varphi''(t)(x-t)\,dt = -S_{H_0}(p, (x,\varphi(x))) \leq 0$$ by the computations in the first case. 
Furthermore, since $f_0'' \leq \varphi''$, the graph of $g$ lies above the function $$a(x) = -2x-\varphi(x)$$ defined in Proposition \ref{LeftSideSeparation} (see figure \ref{SideSepKeyPic}). Since $a(x)$ lies strictly above $\Gamma_2$ for $x > 0$ and $\partial_yH_0(x_0,\varphi(x_0)) = 1/2$, we have strictly positive separation on $\Gamma_2$. \begin{figure} \centering \includegraphics[scale=0.35]{SideSepKey.pdf} \caption{The tangent plane at $(x,\varphi(x))$ is negative on the curve $y = g(x)$, hence on $\Gamma_2$, for $x \in [\delta,2-\delta]$.} \label{SideSepKeyPic} \end{figure} Finally, for $x_0 \in [1, 2]$, the intersection of the tangent planes at $p$ and at $\tilde{p} = (2-x_0,\varphi(2-x_0))$ is the line $x = 1$ since $f_0''$ is symmetric around $x = 1$. By the previous computations, the tangent plane at $\tilde{p}$ is negative on $\Gamma_2$. Thus, the tangent plane at $p$ is negative on $\Gamma_2$, completing the proof. \end{proof} \subsection{Proof of Key Proposition} We can slightly modify the construction of $H_0$ from the previous section to make it smooth, at the expense of giving up a little convexity near the cusps of $\Gamma$. Below $\delta, \, \gamma > 0$ are small constants depending only on $\varphi$. Let $G(p_1,p_2,p_3) = H\left(p_1,\sqrt{p_2^2+p_3^2}\right)$. \begin{prop}\label{HConstruction} For any $\epsilon > 0$ there exists a smooth function $H$ defined on a neighborhood of $\Gamma$ such that \begin{enumerate} \item $H$ is linear (depending only on $x$) in a neighborhood of $\Gamma_2$, respectively $\Gamma_4$, \item $\text{div}(\nabla G(\nabla u_0)) = 0$, \item $H_y(x,\varphi(x)) \geq \frac 1 2$ for $x \in [\delta, 2-\delta]$, and $H_y \geq 0$ on $\Gamma_1$, \item If $p = (x,\varphi(x))$ with $x \in [\delta,2-\delta]$ then $S_{H}(p,q) \geq \gamma |p-q|^2$ for all $q \in \Gamma$, \item $S_{H}(p,q) \geq - \epsilon |p-q|^2$ otherwise for $p,\,q \in \Gamma$. 
\end{enumerate} \end{prop} Note that the key Proposition \ref{KeyProposition} follows easily from Proposition \ref{HConstruction} by defining $G$ as above. Let $\alpha = \frac{1}{2}$ in the construction of $f_0''$ from the previous section and let $\epsilon \ll \delta$. Let $f''$ be a smoothing of $f_0''$ defined by cutting it off smoothly to zero between $\epsilon$ and $2\epsilon$, gluing it smoothly to itself between $\delta$ and $\delta + \epsilon$, and making it symmetric over $x = 1$ (see figure \ref{fpic}). Let $H$ be the function in \eqref{Hdef} determined by $f$ through the Euler-Lagrange relation \eqref{EulerLagrange}. \begin{figure} \centering \includegraphics[scale=0.35]{fConstruction.pdf} \caption{$f''$ is a small perturbation of $f_0''$ that connects smoothly to $\varphi''$ near $x = \delta$ and goes quickly to zero near $x = 0$.} \label{fpic} \end{figure} \begin{proof}[{\bf Proof of proposition \ref{HConstruction}}] The first three conclusions are clear by construction so we just need to check the convexity conclusions. Most of them will follow by continuity. If $p \in \Gamma_2$ we have positive separation since $H \geq 0$, so assume $p = (x_0,\varphi(x_0))$. If $x_0 \in [\delta,2-\delta]$ then the conclusion holds by continuity from the arguments in the proof of Proposition \ref{H0Construction} after taking $\epsilon$ small. Next we may assume by symmetry that $x_0 \in [0, \delta]$. {\bf Case 1:} Assume that $x_0 \geq 10\epsilon$. If $q = (x,\varphi(x))$ with $x > x_0$ then the positive separation follows again by continuity. If $x < x_0$ one computes $$2s_{f''}(x_0,x) \geq 2s_{f''}(x_0,0) \geq \frac{4}{5}(1-(1/5)^{5/2}) > s_{\varphi''}(x_0,0) \geq s_{\varphi''}(x_0,x)$$ so condition \eqref{FirstConvexityCondition} holds and we have positive separation on $\Gamma_1$. Since the cutoff is between $\epsilon$ and $2\epsilon$ and $f''$ is increasing for $x < \delta$ we compute \begin{equation}\label{LeftSepKey} d_{f''}(x_0) < \frac{2}{3}. 
\end{equation} and by Remark \ref{LocalComputationSecondCondition} the condition \eqref{SecondConvexityCondition} holds for $x_0 < \delta$. We thus have positive separation on $\Gamma_2$ and $\Gamma_3$. Finally, for $q \in \Gamma_4$ positive separation follows again by continuity. This establishes positive separation everywhere for $x_0 \in [10\epsilon, 2-10\epsilon]$. {\bf Case 2:} Assume $x_0 \leq 10\epsilon$. The tangent plane at $p$ is of order $\epsilon$ on $\Gamma$, so we have positive separation when $q \in \Gamma_4$. Using that $f''$ is increasing and $\varphi''$ decreasing near $0$, we obtain positive separation if $q = (x,\varphi(x))$ with $x \in [x_0, \delta]$. The same holds for $x> \delta$ by continuity. If $q = (x,\varphi(x))$ for $x < x_0$ we compute $$S_{H}(p,q) \geq - f''(x_0)s_{\varphi''}(x_0,x)(x-x_0)^2 \geq - C\sqrt{\epsilon} \, \,|p-q|^2,$$ since $s_{\varphi''}(x_0,x) \le s_{\varphi''}(x_0,0) \le 1$. This gives the desired estimate on $\Gamma_1$. Next we bound $S_H(p,q)$ with $q \in \Gamma_2$. For this we estimate the location of the intersection line $l_p$ of the tangent plane at $p$ with $0$. By \eqref{IntersectionLine}, $l_p$ passes through $$\left(x_0, \varphi(x_0) - \frac{2\varphi''(x_0)}{f''(x_0)}\int_{0}^{x_0} (x_0-t)f''(t)\,dt\right).$$ We first claim that this point lies above the line $y = -x$. Indeed, since $f''$ is increasing in $[0,x_0]$, the second component is larger than $\varphi(x_0) - \varphi''(x_0)x_0^2$, and using the expansion \eqref{phiexpansion} we see that $$\varphi(x_0) + x_0 \geq \left(\frac{4}{3}\varphi''(x_0) + O(1)\right)x_0^2 > \varphi''(x_0)x_0^2.$$ By \eqref{LeftSepKey} the slope of $l_p$ is between $-1$ and $0$, so we have positive separation for $q \in \Gamma_3$ and $q \in \Gamma_2 \cap \{y < -x_0\}$. Finally, from \eqref{IntersectionLine} we see that the slope of $l_p$ is less than $\varphi'(x_0)$. 
Thus, for $x < x_0$, $l_p$ lies above the line $$y=l(x) = -x_0 + \varphi'(x_0)(x-x_0).$$ A short computation using the expansion \eqref{phiexpansion} shows that $l(x)$ crosses $a(x)$, hence $\Gamma_2$, at some $x < \xi x_0$ where $$\xi + \frac{2}{3}\xi^{3/2} = 1 + O(\sqrt{\epsilon}).$$ In particular, $\xi < 1-c$. This gives that the separation is positive on $\Gamma_2 \cap \{x > \xi x_0\}$, and otherwise the separation is at worst $-C\sqrt{\epsilon} x_0^2 \geq -C\sqrt{\epsilon}|p-q|^2$ (see figure \ref{DistSepPic}). \end{proof} \begin{figure} \centering \includegraphics[scale=0.35]{DistSep.pdf} \caption{The separation is positive if $q$ is below the line $l(x)$, and if the separation is negative then $|p-q|$ is of order $x_0$.} \label{DistSepPic} \end{figure} \begin{rem}\label{ConvexityLoss} The proof shows in fact that $S_{H}(p,q)$ is only negative for $p,\,q$ very close to the same cusp. \end{rem} \section{Appendix}\label{Appendix} \subsection{Convex Extension Lemma} \begin{proof}[{\bf Proof of lemma \ref{ExtensionLemma}}] Let ${\bf v}_{T}$ be the tangential component and ${\bf v}_{\perp}$ be the normal component, and let $\nabla^{\Sigma}G$ be the gradient of $G$ on $\Sigma$. Note that condition \ref{QuadSepCondition} implies ${\bf v}_T = \nabla^{\Sigma}G$. For $x \in \Sigma$ let $T_x, \, N_x$ be the tangent and normal subspaces to $\Sigma$ at $x$. Let $d_{\Sigma}(y)$ be the distance from $y$ to $\Sigma$ and let $$\Sigma^r = \{y: d_{\Sigma}(y) < r\}.$$ Finally, for $x \in \mathbb{R}^n$ let $y(x)$ be the closest point in $\Sigma$ to $x$. It is well-known that $y$ and $d_{\Sigma}^2$ are well-defined and smooth in a neighborhood of $\Sigma$, and for $x \in \Sigma$, $D_xy(x)$ is the projection to $T_x$ and $D^2(d_{\Sigma}^2/2)(x)$ is the projection to $N_x$. (For proofs, see for example \cite{AS}). 
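These two classical facts, that $D_xy(x)$ is the projection onto $T_x$ and $D^2(d_{\Sigma}^2/2)(x)$ the projection onto $N_x$ at points of $\Sigma$, can be checked numerically in the simplest example, $\Sigma$ the unit circle in $\mathbb{R}^2$. The sketch below (our own toy verification by finite differences, standard library only) is not part of the proof.

```python
import math

def d2half(p):                       # d_Sigma(p)^2 / 2 for Sigma = unit circle
    return 0.5 * (math.hypot(*p) - 1.0) ** 2

def closest(p):                      # y(p): closest point of Sigma to p
    r = math.hypot(*p)
    return (p[0] / r, p[1] / r)

x = (math.cos(0.7), math.sin(0.7))   # base point on Sigma
nu, tau = x, (-x[1], x[0])           # outer normal and tangent at x
h = 1e-5

def second_diff(f, p, u, v):
    # mixed second difference approximating u^T D^2 f(p) v
    pp = lambda a, b: (p[0] + a * u[0] + b * v[0], p[1] + a * u[1] + b * v[1])
    return (f(pp(h, h)) - f(pp(h, 0)) - f(pp(0, h)) + f(pp(0, 0))) / h ** 2

# Hessian of d^2/2 at x: 1 in the normal direction, 0 tangentially
assert abs(second_diff(d2half, x, nu, nu) - 1.0) < 1e-3
assert abs(second_diff(d2half, x, tau, tau)) < 1e-3

# y is constant along the normal and moves with unit speed tangentially
yn = closest((x[0] + h * nu[0], x[1] + h * nu[1]))
assert math.hypot(yn[0] - x[0], yn[1] - x[1]) < 1e-12
yt = closest((x[0] + h * tau[0], x[1] + h * tau[1]))
assert math.hypot((yt[0] - x[0]) / h - tau[0], (yt[1] - x[1]) / h - tau[1]) < 1e-3
```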
{\bf Step 1:} We claim that the function $$F(x) = G(y(x)) + {\bf v}(y(x)) \cdot (x-y(x)) + \frac{A}{2}d_{\Sigma}^2(x)$$ with $A$ large lifts quadratically from its tangent planes in $\Sigma^{\sigma}$ for $\sigma$ sufficiently small. We first compute for $x \in \Sigma$ that \begin{align*} F(x + \epsilon z) &= G(x) + \epsilon\left(\nabla^{\Sigma}G(x) \cdot z_{T} + {\bf v}(x) \cdot z_{\perp}\right) + O(\epsilon^2) \\ &= G(x) + \epsilon {\bf v}(x) \cdot z + O(\epsilon^2) \end{align*} giving that $F = G$ on $\Sigma$ and $\nabla F = {\bf v}$ on $\Sigma$. Now, for $\epsilon$ small and $\nu \in N_x$ we have $y(x + \epsilon \nu) = x$ and $d_{\Sigma}(x+\epsilon \nu) = \epsilon$, so $F_{\nu\nu}(x) = A.$ In addition, if $x \in \Sigma$ and $x + \epsilon z \in \Sigma$ for some unit vector $z$ then by hypothesis we have \begin{align*} F(x + \epsilon z) &= F(x) + \epsilon \nabla F(x) \cdot z + \frac{\epsilon^2}{2}z^T \cdot D^2F(x) \cdot z + O(\epsilon^3) \\ &\geq F(x) + \epsilon \nabla F(x) \cdot z + \gamma \epsilon^2. \end{align*} Taking $\epsilon$ to zero we see that $F_{\tau\tau}(x) > 2\gamma$ for any tangential unit vector $\tau$. Take any unit vector $e$ and write $e = \alpha \tau + \sqrt{1-\alpha^2} \nu$ for some unit $\tau \in T_x\Sigma$ and $\nu \in N_x\Sigma$. Since $D^2(d_{\Sigma}^2/2)$ is the projection matrix onto $N_x$ at $x \in \Sigma$, we have \begin{align*} F_{ee}(x) &= \alpha^2F_{\tau\tau} + (1-\alpha^2)F_{\nu\nu} + 2\alpha\sqrt{1-\alpha^2}(F - Ad_{\Sigma}^2/2)_{\tau\nu} \\ &\geq 2\alpha^2\gamma + (1-\alpha^2)A - C\alpha\sqrt{1-\alpha^2} \end{align*} for some $C$ independent of $A$. We conclude that $D^2F > \frac{3}{2}\gamma I$ on $\Sigma$ for $A$ sufficiently large, and in particular, $D^2F > \frac{3}{2}\gamma I$ on a neighborhood $\Sigma^{2\rho}$ of $\Sigma$. Finally, we show that the tangent planes to $F$ in $\Sigma^{\sigma}$ separate quadratically for $\sigma$ small. Let $x, \,z \in \Sigma^{\sigma}$. We divide into two cases. 
If $|z-x| < \rho$ then $x$ and $z$ can be connected by a line segment contained in $\Sigma^{2\rho}$, so it is clear that $$F(z) > F(x) + \nabla F(x) \cdot (z-x) + \frac{3}{4}\gamma|z-x|^2.$$ If on the other hand $|z-x| > \rho$, we use that $$F(y(z)) > F(y(x)) + \nabla F(y(x)) \cdot (y(z)-y(x)) + \gamma |y(z)-y(x)|^2.$$ Replacing $y(z)$ by $z$ and $y(x)$ by $x$ changes these quantities by at most $C\sigma$, and since $|z-x| > \rho$ we have that $$F(z) > F(x) + \nabla F(x) \cdot (z-x) + \frac{3}{4}\gamma |z-x|^2$$ for all $x,\,z \in \Sigma^{\sigma}$ for $\sigma$ small. {\bf Step 2:} From now on denote the open set $\Sigma^{\sigma}$ by $N$. Let $N_{\epsilon}$ denote $\{x \in N: B_{\epsilon}(x) \subset N\}$. Finally, let $\rho_{\epsilon}$ denote the standard mollifier $\epsilon^{-n}\rho(x/\epsilon)$ where $\rho$ is supported in $B_1$, nonnegative, smooth and has unit mass. We define a global uniformly convex function that agrees with $F$ on $N$. Let $$H_0(y) = \sup_{x \in N}\left\{F(x) + \nabla F(x) \cdot (y-x) + \frac{3}{4}\gamma |y-x|^2\right\}.$$ Then $H_0$ is a uniformly convex function on $\mathbb{R}^n$ with $D^2H_0 \geq \frac{3}{2}\gamma I$ and furthermore by construction we have that $H_0 = F$ on $N$. To finish we glue $H_0$ to a mollification. Fix $\delta$ so that $\Sigma \subset N_{2\delta}$. Let $$H_{\epsilon} = \rho_{\epsilon} \ast H_0$$ for some $\epsilon$ small. In $N_{\delta}$ we have $$|H_{\epsilon} - H_0|, \, |\nabla H_{\epsilon}-\nabla H_0| < C\epsilon.$$ Finally, since $D^2H_0 \geq \frac{3}{2}\gamma I$ we have $D^2H_{\epsilon} > \frac{3}{2}\gamma I.$ Let $\eta$ be a smooth cutoff function which is $1$ on $N_{2\delta}$ and $0$ outside of $N_{\delta}$.
Then let $$H= \eta H_0 + (1-\eta) H_{\epsilon}.$$ We compute $$D^2H = \eta D^2H_0 + (1-\eta) D^2H_{\epsilon} + 2\nabla \eta \otimes \nabla(H_0-H_{\epsilon}) + D^2\eta(H_0-H_{\epsilon}).$$ Then $H$ is smooth, $H = F$ on $N_{2\delta}$ and taking $\epsilon$ small we have $D^2H > \gamma I$, completing the construction. \end{proof} \subsection{Expansion of $\varphi$} \begin{proof}[{\bf Proof of proposition \ref{PhiComputations}}] The symmetries of $\varphi$ follow from the symmetries of $w$. The curve $\Gamma_1$ is parametrized by $\nabla w(\theta)$ for $\theta \in [\pi/4, \,3\pi/4]$. Let $\nu$ be the upward normal to $\Gamma_1$. Since $w$ is homogeneous degree one we have $\nu(\nabla w(\theta)) = \theta$. Differentiating we get the curvature $\kappa = \frac{1}{g'' + g}$ where $g(\theta) = \frac{-1}{\sqrt{2}}\cos 2\theta$ are the values of $w$ on $S^1$. Thus, $\varphi$ is uniformly convex and its second derivatives blow up near $x = \pm 1$. To quantify this we compute \begin{align*} \nabla w(\theta) &= g(\theta)(\cos \theta,\, \sin \theta) + g'(\theta)(-\sin\theta,\, \cos \theta) \\ &= \frac{1}{\sqrt{2}}(-\cos \theta(1+2\sin^2\theta),\, \sin \theta(1+2\cos^2\theta)). \end{align*} Expanding around $\theta = \frac{\pi}{4}$ (which gets mapped to the left cusp on $\Gamma_1$) we get \begin{equation}\label{GradientExpansion} \varphi\left(-1 + \frac{3}{2}\theta^2 + \theta^3 + O(\theta^4)\right) = 1-\frac{3}{2}\theta^2 + \theta^3 + O(\theta^4). \end{equation} Differentiating implicitly one computes $$\varphi''(-1 + \epsilon) = \sqrt{\frac{2}{3}}\epsilon^{-1/2} + O(1)$$ and that $\varphi''$ is decreasing near $-1$. \end{proof} \subsection{Theorem \ref{ScalarExample}} In \cite{DS} the authors show that if $u$ is a scalar minimizer to a convex functional $\int_{B_1} F(\nabla u)\,dx$ on $\mathbb{R}^2$ and $F$ is uniformly convex in a neighborhood of $\nabla u(B_1) \cap \{|p_1| < 1\}$ then $\nabla u$ cannot jump arbitrarily fast across the strip.
In particular, $\nabla u(B_{\gamma})$ localizes to $\{p_1 < 1\}$ or $\{p_1 > -1\}$ for some $\gamma$ small. In this final section we use the preliminary construction $H_0$ from section \ref{Constructions} to indicate why this result is not true in three or higher dimensions. Make a global extension of $H_0$ by taking $$\bar H_0(x) = \sup_{p \in \Gamma_1 \cup \Gamma_3}\{H_0(p) + \nabla H_0(p) \cdot (x-p) + \eta(p_1)|x-p|^2\}.$$ The resulting extension is smooth near any non-cusp point of $\Gamma$. It is uniformly convex near each point on $(\Gamma_1 \cup \Gamma_3) \cap \{|p_1| < 1\}$ with the modulus of convexity decaying towards the cusps. Furthermore, $\bar H_0$ is flat in a neighborhood of every point on $(\Gamma_2 \cup \Gamma_4) \cap \{|p_2| < 1\}$. Finally, if $p$ is a cusp of $\Gamma$ then it is straightforward to check that $\bar H_0$ is pointwise $C^{1,1-\alpha}$ at $p$, i.e. $S_{\bar H_0}(p, x) < C|x-p|^{2-\alpha}$ for all $x$ near $p$. By iterating a mollification and gluing procedure similar to those used in the proof of lemma \ref{ExtensionLemma} near the cusps we can get a global convex extension $\bar H$ that is smooth away from the cusps, uniformly convex on $\Gamma_1 \cup \Gamma_3$ away from the cusps, flat on convex sets containing $\Gamma_2$ and $\Gamma_4$, and $C^{1,1-\alpha}$ at the cusps. \begin{rem}\label{DimensionReg} In dimension $n$ the Euler-Lagrange equation allows us to take $f_0''(x) = x^{n-2-\alpha}$ near the cusp, which gives $\bar H$ an extra derivative for each dimension. \end{rem} \begin{figure} \centering \includegraphics[scale=0.40]{G0Pics.pdf} \caption{$G_0$ is linear on two bounded convex sets containing $\nabla u_0(\{|x_1| > r\})$.} \label{G0Pics} \end{figure} Let $G_0$ be the function on $\mathbb{R}^3$ obtained by revolving $\bar H$ around the $p_1$ axis (see figure \ref{G0Pics}). 
By construction $u_0$ solves the Euler-Lagrange equation $\text{div}(\nabla G_0(\nabla u_0)) = 0$ away from the cone $C_0 = \{|x_1| = r\}$ where $r = \sqrt{x_2^2 + x_3^2}$. Thus, it is not immediate that $u_0$ minimizes $\int_{B_1} G_0(\nabla u_0)\,dx$. However, we claim $u_0$ is a minimizer. To show this we must establish $$\int_{B_1} \nabla G_0(\nabla u_0) \cdot \nabla \psi \,dx = 0$$ for any $\psi \in C^{\infty}_0(B_1)$. The contribution from integrating in $B_{\epsilon}$ and a thin cone $\{(1-\epsilon)r < |x_1| < (1+\epsilon)r\}$ is small. Integrating by parts in the remaining region with boundary $S$, we get a boundary term of the form $\int_S \psi \nabla G_0(\nabla u_0) \cdot \nu \,ds$ where $\nu$ is the outer normal. The cones $\{|x_1| = (1\pm \epsilon)r\}$ are $\epsilon$ close, and the outward normals on these cones are $\epsilon$ close to flipping direction, so by the continuity of $\nabla G_0$ the contribution from this term is also small. Taking $\epsilon$ to zero we get the desired result. \section*{Acknowledgement} C. Mooney was supported by NSF fellowship DGE 1144155. O. Savin was supported by NSF grant DMS-1200701. \frenchspacing \bibliographystyle{plain}
\section{Introduction} The two dimensional Helmholtz equation appears in a wide range of physical and engineering problems across diverse fields, such as the study of vibration, acoustic and electromagnetic (EM) wave propagation, and quantum mechanics. In a large class of these problems one is required to determine the eigenspectrum of the Helmholtz operator for various boundary conditions and geometries. Canonical examples of the Dirichlet boundary condition (DBC) are the vibration of membranes, the propagation of the TM modes of EM waves within a waveguide, and a quantum particle confined in an infinitely deep potential well. Perhaps the most prominent example of the Neumann boundary condition (NBC) is the transmission of the TE modes of EM waves in a waveguide. Analytic closed-form solutions to the boundary value problems \cite{b.1a,b.1b,b.1c} can, however, be obtained only for a restricted class of boundaries. The problems for rectangular, circular, elliptical and triangular boundaries are classical ones addressed by Poisson, Clebsch, Mathieu and Lam{\'e} respectively \cite{b.1a}. Invoking the geometry of the problem by a suitable choice of co-ordinates often aids in finding the solutions (e.g. an elliptic boundary, where separation of variables leads to a solution in the form of Mathieu functions). However, one quickly exhausts the list of such problems where simplification by virtue of using a specific co-ordinate system is possible. In most physical problems, one encounters boundaries which are far removed from such idealization, as in the case of the quantum dot. The dots are believed to be circular but in practice that can hardly be guaranteed \cite{b.2,b.2a,b.2b,b.2c}. In such a scenario it is natural to consider the confining region to be a supercircle \cite{b.3}. Another important deviation from such idealization is the design of waveguides with a shape other than rectangular or circular, which can be handy for purging the losses due to corners \cite{b.w1}.
Furthermore, the problem becomes analytically intractable for arbitrary boundaries. The propagation of electromagnetic waves in open dielectric systems of arbitrary cross-section has also been studied recently \cite{b.d1}. The problem of solving the Helmholtz equation for an arbitrary boundary has mostly been tackled using numerical methods \cite{b.4a,b.4b,b.4c,b.4d,b.4e,b.4f,b.4g,b.4h,b.4i,b.4j,b.4k,b.4l,b.4m,b.4n}. The analytic approach has mainly revolved around various approximation methods. Of these, the perturbative techniques stand out as the most widely used \cite{b.1a,b.1b,b.1c,b.4l,b.5a,b.5b,b.5c,b.5d,b.5e,b.5f}, where the starting point is Rayleigh's theorem: the gravest tone of a membrane whose boundary departs slightly from a circle is nearly the same as that of a mechanically similar membrane in the form of a circle of the same mean radius or area. For a slight departure from a circular boundary, one expands the wavefunction in terms of a complete set of eigenfunctions (viz. Bessel functions) of the unperturbed case (i.e. a circular boundary). The wavefunction is then made to satisfy the given condition on the arbitrary boundary, which in turn determines the expansion coefficients of the required wavefunction. The works \cite{b.1c,b.5e,b.6a,b.6b,b.6c} study arbitrary domains in a general formalism using a Fourier representation of the boundary asymmetry, treated as a perturbation around an equivalent circle. In contrast, ref. [24] studies analytic methods where the arbitrary simply connected domain is mapped to a square by a conformal mapping and the eigenvalues are then approximated order by order in the basis of a square boundary. In ref. [24] the zeros of Bessel functions (i.e. the energy eigenvalues associated with a circle of unit radius) are approximated from a square box. In our case we have approximated the eigenvalues associated with a square box from an equivalent circle. The ref.
[24] carries out the perturbation up to third order, whereas we work up to first order, except for the $l = 0$ states, where the second order corrections are also included. It can easily be seen from the respective tables (Table III in ref. [24] and Table I in our paper) that at first order both methods have comparable efficiency. Ref. [24] focuses mainly on the eigenvalues for the Dirichlet boundary condition, whereas our formulation handles the Neumann condition as well in a single stroke. Moreover, our paper gives the correction to the wavefunction exactly at each order of perturbation. We have also written down the exact expression for the $n^{\rm{th}}$ order correction to the eigenvalue in an abstract sense. We have explored an alternative approach towards solving the eigenvalue problem for the two dimensional Helmholtz operator in the interior of a region bounded by an arbitrary closed curve. The general problem is mapped into an equivalent problem in which the boundary is a regular closed curve (for which the Helmholtz equation is exactly solvable), while the equation itself gets modified owing to the deformation of the metric in the interior. The modified equation, we see, can be written as the original Helmholtz equation with additional terms arising from the transformation of the metric. These extra pieces can then be treated as a perturbation to the original Helmholtz operator. The equation is thereby solved using the Schr\"{o}dinger perturbation technique \cite{b.7}. The corrections to the eigenfunctions are expressed in closed form at each order of perturbation, irrespective of the boundary condition. The eigenvalue corrections are then obtained by imposing the appropriate boundary condition. This is in contrast to the earlier methods, where the formulations are generally boundary condition dependent.
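As a concrete illustration of the circle-square comparison invoked above (our own numerical aside, not a computation from the paper): for the Dirichlet problem, the fundamental eigenvalue of the unit-area disk is $\pi j_{0,1}^2 \approx 18.17$, with $j_{0,1}$ the first zero of $J_0$, against $2\pi^2 \approx 19.74$ for the unit square. The sketch below, using only the standard library, computes $j_{0,1}$ from the integral representation of $J_0$.

```python
import math

def J0(x, n=2000):
    # Bessel J0 from its integral representation
    # J0(x) = (1/pi) * int_0^pi cos(x sin t) dt, midpoint rule
    return sum(math.cos(x * math.sin((k + 0.5) * math.pi / n)) for k in range(n)) / n

# first zero j_{0,1} of J0 by bisection on [2, 3]
a, b = 2.0, 3.0
for _ in range(60):
    m = 0.5 * (a + b)
    if J0(a) * J0(m) <= 0.0:
        b = m
    else:
        a = m
j01 = 0.5 * (a + b)

E_disk = math.pi * j01 ** 2      # unit-area disk: R = 1/sqrt(pi), E = (j01 / R)^2
E_square = 2.0 * math.pi ** 2    # unit square: E = pi^2 (1 + 1)

assert abs(j01 - 2.404826) < 1e-4
assert E_disk < E_square         # the disk has the lower fundamental Dirichlet eigenvalue
```

The ordering $E_{\rm disk} < E_{\rm square}$ is consistent with the disk minimizing the fundamental tone among domains of equal area.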
In this approach towards solving the equation, the boundary conditions are specified on some known regular curve and maintain the same simple form at each order of perturbation. This bypasses the issue of imposing constraints on a boundary having a complicated geometry \cite{b.6a,b.6b,b.6c}. We expect this perturbative scheme to effectively solve the eigenvalue problem for boundaries which reflect a slight departure from known regular curves. We have verified our method against the numerically obtained solutions for a supercircle and an ellipse. In this analysis we use a Fourier representation of the boundary. This allows us to apply the method to a general class of continuous or discontinuous asymmetries. Section 2 describes the general formalism in an abstract sense. In section 3, we deal with the non-degenerate case ($l = 0$) and find solutions up to second order. Section 4 tackles the degenerate case ($l \neq 0$) and obtains the first order corrections to the energy and eigenfunction. In section 5, we apply the method to supercircular and elliptical boundaries and compare our analytic perturbative results with the numerically obtained ones. Finally, we summarise our results, noting the advantages of our method over the other existing ones, and conclude with a few comments. \section{Formulation} The homogeneous Helmholtz equation on a two dimensional flat simply connected surface $\cal S$ reads, \begin{equation} \left(g_{ij} \nabla^{i} \nabla^{j} + k^{2}\right)\psi \equiv \left(\nabla^{2} + E\right) \psi=0, \label{eq:1} \end{equation} where $g_{ij}$ is the flat metric and $\nabla$ represents a covariant derivative. We look for solutions in the interior of the bounded region with the Dirichlet condition $\psi=0$ or the Neumann condition $\frac{\partial \psi}{\partial n} = 0 $ on $\partial \cal S$, where $\frac{\partial \psi}{\partial n}$ denotes the derivative along the normal direction to $\partial \cal S$.
The parameter $k^{2}$ may be identified with $E$, the energy of a quantum particle confined in the region having a boundary $\partial \cal S$. It is convenient to work in the polar coordinate system ($r, \theta$), where any closed curve satisfies the periodicity condition $r(\theta)=r(\theta + 2\pi)$. We consider an arbitrary boundary of the form $r = r(\theta)$. In this analysis we assume that the arbitrary boundary can be expressed as a perturbation around an effective circle (the analysis can in principle work for a deformation around any simple curve for which the Helmholtz equation is exactly solvable). We introduce new coordinates $(R, \alpha)$ with the transformation $(r, \theta) \rightarrow (R, \alpha)$ given by \begin{equation} \label{eq:3} \begin{split} r =~ & R + \epsilon f(R,\alpha) ~;\\ \theta =~ & \alpha ~, \end{split} \end{equation} where $\epsilon$ is a deformation parameter. This defines a diffeomorphism for the entire class of well behaved functions $f(R,\alpha)$. A suitable choice of $f(R,\alpha)$ transforms our arbitrary boundary into a circle of average radius, say $R_{0} \, (=\frac{1}{2 \pi}\int \limits^{2 \pi}_{0} r(\theta) \,\, \text{d}\theta)$, in the $R-\alpha$ plane. The deformation of the arbitrary boundary to a circle changes the components of the underlying metric $g_{ij}(r, \theta)$ in the interior ($g_{ij}(r, \theta) \rightarrow \tilde{g}_{ij}(R, \alpha)$). Henceforth, we use the notation $$ \phi^{(i,j)} \equiv \frac{\partial^{i+j}\phi}{\partial R^{i} \partial \alpha^{j}}~,$$ where $\phi$ is a function of $R$ and $\alpha$. The dependence on the arguments $(R, \alpha)$ is not shown explicitly for brevity. The flat background metric in the ($r, \theta$) system is given by $g_{ij}= {\rm diag} (1,r^{2})$.
Under the coordinate transformation ({\ref{eq:3}}) this takes the form \begin{equation} \tilde{g}_{ij} = \left[\begin{array}{cc} \left(1 + \epsilon f^{(1,0)} \right)^2 & \epsilon f^{(0,1)} \left(1 + \epsilon f^{(1,0)}\right) \\ \epsilon f^{(0,1)} \left(1 + \epsilon f^{(1,0)}\right) & (R +\epsilon f)^{2} + \epsilon^2 f^{{(0,1)}^{2}} \end{array} \right].\nonumber \end{equation} We note that except for $\Gamma^{\alpha}_{{\phantom \alpha} R R }$ all the components of the connection $\Gamma$ are non-vanishing. The diffeomorphism ({\ref{eq:3}}) does not induce any spurious curvature in the manifold (i.e. the Riemann tensor vanishes, $R^{i}_{{\phantom i} jkl} = 0~~\forall~ i, j, k, l$). Eq. ({\ref{eq:1}}), where $\psi = \psi(r, \theta)$, transforms under the map $(r, \theta) \rightarrow (R, \alpha)$ to \begin{flalign} \label{eq:6} &E ~\psi + \frac{\psi ^{(0,2)}}{(R+\epsilon f)^2}+\frac{\left[(R+\epsilon f)^2 - \epsilon f^{(0,2)} \left(R + \epsilon f\right) + 2 \epsilon ^2 f^{{(0,1)}^2} \right] \psi ^{(1,0)}}{(R+\epsilon f)^3 \left(\epsilon f^{(1,0)}+1\right)} \nonumber\\ &+\frac{2 \epsilon^2 f^{(0,1)} \left[(R+\epsilon f) f^{(1,1)}-f^{(0,1)} \left(\epsilon f^{(1,0)}+1\right)\right]\psi ^{(1,0)} }{(R+\epsilon f)^3 \left(\epsilon f^{(1,0)}+1\right)^2}-\frac{2 \epsilon f^{(0,1)} \psi ^{(1,1)} }{(R+\epsilon f)^2 \left(\epsilon f^{(1,0)}+1\right)} \nonumber\\ &-\frac{\epsilon f^{(2,0)} \left[(R + \epsilon f)^{2} + \epsilon^2 f^{{(0,1)}^2}\right] \psi ^{(1,0)} }{(R+\epsilon f)^2 \left(\epsilon f^{(1,0)}+1\right)^3}+\frac{\left[(R + \epsilon f)^{2} + \epsilon^2 f^{{(0,1)}^2}\right] \psi ^{(2,0)}}{(R+\epsilon f)^2 \left(\epsilon f^{(1,0)}+1\right)^2} = 0\,. \end{flalign} The analysis can proceed from here for a specific form of the function $f(R,\alpha)$. We choose $f(R,\alpha)= R g(\alpha)$, where $g(\alpha)$ can be expanded without loss of generality in a Fourier series.
We further impose $g(\alpha) = g(-\alpha)$ for simplicity whereby only the cosine terms are retained \begin{equation} g(\alpha) = \sum_{n=1}^{\infty} C_{n} \cos n\alpha \label{eq:19}. \end{equation} The constant part $C_{0}$ can always be absorbed in $R$ defined in ({\ref{eq:3}}). With this choice of $f(R,\alpha)$, Eq. ({\ref{eq:6}}) simplifies to \begin{equation} \sum_{n = 0}^{\infty} \epsilon^{n} {\cal L}_{n}\psi + E \psi = 0, \label{eq:7} \end{equation} where the operator ${\cal L}_{n}$ is given by \begin{flalign} &{\cal L}_{n}\psi = (-1)^{n}\frac{(n+1)}{6 R^{2}} g^{n-2}\left[ 3 n R g \left\{ g^{(0,2)} \psi^{(1,0)} + 2 g^{(0,1)} \psi^{(1,1)}\right\} \right. \\& \left.\quad+ n (n -1) R \left(g^{(0,1)}\right)^2 \left\{ 2 \psi^{(1,0)} + R \psi^{(2,0)} \right\} +6 g^{2} \left\{\psi^{(0,2)} + R \psi^{(1,0)} + R^{2} \psi^{(2,0)} \right\} \right].\nonumber \end{flalign} We shall adopt the method of stationary perturbation theory \cite{b.7} to solve for $\psi$ and $E$. Thereby, treating $\epsilon$ as a perturbation parameter we expand the eigenfunction $\psi$ corresponding to the eigenvalue $E$ as \begin{subequations} \begin{align} \psi &= \psi^{(0)} + \epsilon \psi^{(1)} + \epsilon^{2} \psi^{(2)} +\cdots ; \\ E &= E^{(0)} + \epsilon E^{(1)} + \epsilon^{2} E^{(2)} +\cdots, \end{align}\label{eq:12} \end{subequations} with superscripts denoting the order of perturbation. We assume that the perturbative scheme converges and $(\psi, E)$ can be calculated order by order up to any arbitrarily desired precision. We note that the parameter $\epsilon$ is arbitrarily invoked to track different orders and could be absorbed in the Fourier coefficients $C_{n}$. 
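It is instructive to write out the first two operators explicitly. Setting $n = 0$ and $n = 1$ in the expression for ${\cal L}_{n}$ gives \begin{align*} {\cal L}_{0}\psi &= \psi^{(2,0)} + \frac{1}{R}\, \psi^{(1,0)} + \frac{1}{R^{2}}\, \psi^{(0,2)}\,, \\ {\cal L}_{1}\psi &= -2\, g\, {\cal L}_{0}\psi - \frac{1}{R}\left(g^{(0,2)}\, \psi^{(1,0)} + 2\, g^{(0,1)}\, \psi^{(1,1)}\right), \end{align*} so that the zeroth order operator is, as expected, just the Laplacian in polar coordinates, while the leading perturbation is built from ${\cal L}_{0}$ itself and the angular derivatives of $g$.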
Plugging ({\ref{eq:12}}) into (\ref{eq:7}) and collecting the coefficients of different powers of $\epsilon$ yields \begin{subequations} \begin{align} {\mathcal{O}}(\epsilon^{0})&: &&({\cal L}_{0} + E^{(0)}) \psi^{(0)} = 0~,\label{eq:13} \\ {\mathcal{O}}(\epsilon^{1})&: &&({\cal L}_{0}+ E^{(0)}) \psi^{(1)} + ({\cal L}_{1} + E^{(1)}) \psi^{(0)} = 0~, \label{eq:14}\\ {\mathcal {O}}(\epsilon^{2})&: &&({\cal L}_{0} +E^{(0)}) \psi^{(2)} + ({\cal L}_{1}+ E^{(1)}) \psi^{(1)}+({\cal L}_{2} + E^{(2)}) \psi^{(0)} = 0~, \label{eq:15} \\ \vdots \nonumber\\ {\mathcal{O}}(\epsilon^{m})&: &&\sum_{n = 0}^{m}\left({\cal L}_{n} + E^{(n)}\right) \psi^{(m-n)} = 0~.\label{eq:15a} \end{align} \label{eq:15tot} \end{subequations} The change in the metric components induced by the smooth deformation ({\ref{eq:3}}) amounts to a gauge transformation and generates source terms to the unperturbed homogeneous Helmholtz equation at each order in $\epsilon$. At the $i^{\text{th}}$ order these are the terms ${\cal L}_{n} \psi^{(i-n)}$, $1 \leq n \leq i$, which have no physical origin and are merely artifacts of the chosen gauge. Maintaining the simplicity of the boundary conditions is hence achieved at the cost of new terms appearing in the original equation. The unperturbed energy $E^{(0)}$ and the corrections $E^{(m)}$ are given by \begin{subequations} \begin{align} E^{(0)} = & -\langle \psi^{(0)}|{\cal L}_{0}|\psi^{(0)} \rangle ; \label{eq:16a} \\ E^{(1)} = & -\langle \psi^{(0)}|{\cal L}_{1}|\psi^{(0)}\rangle; \label{eq:16b} \\ E^{(2)} = & -\langle \psi^{(0)}|{\cal L}_{1} + E^{(1)} |\psi^{(1)} \rangle - \langle \psi^{(0)}|{\cal L}_{2}|\psi^{(0)} \rangle. \label{eq:16} \\ \vdots \nonumber\\ E^{(m)} = & -\sum_{n=1}^{m-1}\Big{\langle} \psi^{(0)}\Bigl\lvert {\cal L}_{n} + E^{(n)} \Bigr\rvert \psi^{(m-n)} \Big{\rangle} - \Big{\langle} \psi^{(0)} \Bigl\lvert {\cal L}_{m} \Bigr\rvert \psi^{(0)} \Big{\rangle} .
\label{eq:16c} \end{align} \label{eq:16tot} \end{subequations} A unique feature of our method is that both the boundary conditions maintain their simple forms separately at every order of perturbation. Thus, we have for the $i^{\text{th}}$ order wavefunction the DBC and the NBC respectively, \begin{subequations} \begin{align} &~\psi^{(i)}(R_{0},\alpha) = 0\,, \qquad\qquad\qquad\qquad\quad\qquad ~\mbox{(DBC)}\,;\label{eq:dbc}\\ &\left(\frac{\partial\psi^{(i)} }{\partial R} -\frac{g^{(0,1)}}{R}\frac{\partial\psi^{(i-1)} }{\partial \alpha}\right)\Bigg\rvert_{(R_{0},\alpha)} = 0\,, \quad\quad~\mbox{(NBC)}\,,\label{eq:nbc} \end{align}\label{eq:bctot} \end{subequations} where $i \in \mathbb{N}$. The general solution of Eq. ({\ref{eq:13}}) is \begin{align} \psi_{l,j}^{(0)} & = N_{0,j} J_{0}(\rho)\,, \qquad\qquad \qquad \qquad ~(l = 0)\,; \nonumber \\ & = N_{l,j} J_{l}(\rho)\left\{\begin{array}{c} \cos (l\alpha) \\ \sin (l\alpha) \end{array} \right\}, \qquad \quad (l \neq 0)\,, \label{eq:21} \end{align} where $J_{l}$ is the $l^{\text{th}}$ order Bessel function with argument $\rho = \sqrt{E_{l,j}^{(0)}} R$; here $E_{l,j}^{(0)}$ are the energies of the unperturbed Helmholtz equation. $N_{l,j}$ is a suitable normalisation constant with $l \in \mathbb{N}$, $j \in \mathbb{N}_{>0}$. It is to be noted that the normalisation constant will be different for the different boundary conditions. Henceforth, we discuss both cases, viz. the Dirichlet and the Neumann boundary conditions, in parallel. The energy $E_{l,j}^{(0)}$ is dictated by the $ j^{\text{th}}$ zero of $J_{l}$, denoted by $\rho_{_{l,j}}$, and the $ j^{\text{th}}$ zero of $J^{\prime}_{l}$, denoted by $\rho^{\prime}_{_{l,j}}$, for DBC and NBC respectively. Using Eq.
({\ref{eq:dbc}}) and ({\ref{eq:nbc}}) for $i = 0$, we have \begin{align} E_{l,j}^{(0)} =~&\rho^{2}_{_{l,j}}/R_{0}^{2} \,,~\qquad \qquad \mathrm{(DBC)} ~;\label{eq:endbc}\\ =~&\rho^{\prime^{2}}_{_{l,j}}/R_{0}^{2}\,,\qquad \qquad ~\mathrm{(NBC)}~,\label{eq:ennbc} \end{align} where all the levels with non-zero $l$ are doubly degenerate. In this formulation the energy corrections can be obtained in two ways. Firstly, $E^{(i)}$ can be estimated from the knowledge of $\psi^{(m)}$ $(\forall~ m < i)$ using Eqs. (\ref{eq:16tot}). Alternatively, it can be extracted by imposing the boundary condition on $\psi^{(i)}$ given by Eqs. (\ref{eq:bctot}), which in addition yields the coefficients of the Bessel functions (in $\psi^{(i)}$). The method can in principle be used to calculate corrections at all orders of perturbation. We next calculate the energy corrections for both the boundary conditions for the following two cases. \section{Case I: Non-degenerate states ($l=0$)} The first order correction to the eigenfunction is obtained by solving Eq. ({\ref{eq:14}}). Thus, we have \begin{align} \psi_{0,j}^{(1)} =&\, a_{0} J_{0}(\rho) -\frac{\rho E_{0,j}^{(1)}}{2 E_{0,j}^{(0)}}N_{0,j}J_{1}(\rho) + \sum_{p=1}^{\infty}{\Big\{} a_{p} J_{p}(\rho)-\rho N_{0,j} C_{p} J_{1}(\rho){\Big\}} \cos (p\alpha), \label{eq:22} \end{align} where $E_{0,j}^{(1)}$, $a_{0}$ and $a_{p}$ are constants to be fixed by the boundary conditions. The terms containing $N_{0,j}J_{1}(\rho)$ constitute the particular integral of Eq. ({\ref{eq:14}}). The first order energy corrections are obtained by imposing the respective boundary condition, given by Eq. ({\ref{eq:dbc}}), or by substituting $\psi_{0,j}^{(1)}$ and $\psi_{0,j}^{(0)}$ into Eq. ({\ref{eq:nbc}}) (for $i = 1$).
This yields \begin{align*} &E_{0,j}^{(1)} = 0 ~~~\mbox{(for both the cases)}; \\ &a_{p} = \rho_{_{0,j}} N_{0,j} C_{p}J_{1}(\rho_{_{0,j}})/J_{p}(\rho_{_{0,j}})\,, ~ (p \neq 0)~\mbox{(DBC)};\\ &a_{p} = \rho^{\prime}_{_{0,j}} N_{0,j} C_{p} J_{0}(\rho^{\prime}_{_{0,j}})/J^{\prime}_{p}(\rho^{\prime}_{_{0,j}})\,, ~ (p \neq 0)~\mbox{(NBC)}. \end{align*} The vanishing of the first order correction, $E_{0,j}^{(1)}$, is verified using Eq. ({\ref{eq:16b}}). The remaining constant $a_{0}$ of Eq. ({\ref{eq:22}}) is zero for both the boundary conditions by virtue of the orthogonality of $\psi_{0,j}^{(0)}$ and $\psi_{0,j}^{(1)}$. These results are consistent with the results obtained in \cite{b.6a,b.6b,b.6c} by other methods. The first non-vanishing energy correction occurs at the second order. The correction $E_{0,j}^{(2)}$ is obtained by substituting $\psi_{0,j}^{(0)}$ and $\psi_{0,j}^{(1)}$ in Eq. ({\ref{eq:16}}), giving \begin{align} E_{0,j}^{(2)} = E_{0,j}^{(0)}\sum_{n=1}^{\infty} \xi_{n,j} C^{2}_{n} \,; ~~ \xi_{n,j} = \frac{1}{2} + \frac{\rho_{_{0,j}} J^{\prime}_{n}(\rho_{_{0,j}})}{J_{n}(\rho_{_{0,j}})},\label{eq:27} \end{align} and \begin{align} E_{0,j}^{(2)} = -\,E_{0,j}^{(0)} \sum_{n=1}^{\infty} \lambda_{n,j} C^{2}_{n} \,; ~ \lambda_{n,j} = \frac{1}{2} + \frac{\rho^{\prime}_{_{0,j}} J_{n}(\rho^{\prime}_{_{0,j}})}{J^{\prime}_{n}(\rho^{\prime}_{_{0,j}})},\label{eq:27nb} \end{align} for the DBC and the NBC respectively.
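The formulas above are straightforward to evaluate numerically. The following sketch is our illustration (not part of the original formulation): it computes $\rho_{_{0,1}}$, the unperturbed DBC energy for $R_{0} = 1$, and the second order shift of Eq. (\ref{eq:27}) for hypothetical sample coefficients $C_{n}$. It also confirms that $\xi_{1,j} = -1/2$ exactly, which follows from $J^{\prime}_{1}(\rho) = -J_{1}(\rho)/\rho$ at any zero $\rho$ of $J_{0}$.

```python
import math

def bessel_j(l, x, terms=60):
    """J_l(x) from its power series; adequate for the moderate arguments here."""
    return sum((-1) ** k * (x / 2.0) ** (2 * k + l)
               / (math.factorial(k) * math.factorial(k + l))
               for k in range(terms))

def bessel_jp(l, x):
    """J_l'(x) via the recurrence J_l' = J_{l-1} - (l/x) J_l (and J_0' = -J_1)."""
    return -bessel_j(1, x) if l == 0 else bessel_j(l - 1, x) - (l / x) * bessel_j(l, x)

def first_zero(f, lo=0.5, step=0.01, tol=1e-12):
    """First positive zero of f: scan for a sign change, then bisect."""
    a = lo
    while f(a) * f(a + step) > 0:
        a += step
    b = a + step
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
    return 0.5 * (a + b)

R0 = 1.0
rho = first_zero(lambda x: bessel_j(0, x))   # rho_{0,1} ~ 2.40483
E0 = (rho / R0) ** 2                         # unperturbed DBC energy

def xi(n):
    """xi_{n,1} of Eq. (27), evaluated at rho_{0,1}."""
    return 0.5 + rho * bessel_jp(n, rho) / bessel_j(n, rho)

C = {1: 0.10, 2: 0.05}                       # hypothetical boundary coefficients
E2 = E0 * sum(xi(n) * c ** 2 for n, c in C.items())
```

For these made-up coefficients the resulting shift is a small fraction of $E^{(0)}$, consistent with the perturbative picture.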
We may as well solve ({\ref{eq:15}}) to obtain \begin{alignat}{2} &\psi_{0,j}^{(2)} = b_{0} J_{0}(\rho) - \frac{\rho E_{0,j}^{(2)}}{2 E_{0,j}^{(0)}}N_{0,j}J_{1}(\rho)+ \sum_{n=1}^{\infty}C_{n}{\cal J}_{n,j}(\rho)\nonumber\\ &+\sum_{p=1}^{\infty}{\Big\{} b_{p} J_{p}(\rho)-\rho a_{0} C_{p} J_{1}(\rho) +\sum_{n=1}^{\infty}\left(C_{n+p} + C_{|n-p|} \right){\cal J}_{n,j}(\rho) {\Big\}} \cos (p\alpha)\,, \label{eq:24} \end{alignat} where \begin{equation} {\cal J}_{n,j}(\rho)=\frac{\rho}{2}{\Big\{} a_{n} J^{\prime}_{n}(\rho)-\frac{\rho}{2} N_{0,j} C_{n} J^{\prime}_{1}(\rho){\Big\}}.\nonumber \end{equation} Boundary conditions, Eqs. (\ref{eq:bctot}) (for $i = 2$), extract $E_{0,j}^{(2)}$ as given in Eq. (\ref{eq:27}) or Eq. (\ref{eq:27nb}) and in addition the coefficients $b_{p}$ are respectively given by \begin{align*} b_{p} = & ~\frac{\rho_{_{0,j}} J_{1}(\rho_{_{0,j}})}{J_{p}(\rho_{_{0,j}})}\left\{a_{0}C_{p}-\frac{N_{0,j}}{2}\sum_{n=1}^{\infty}C_{n}(C_{n+p}+ C_{|n-p|})\xi_{n,j}\right\},~~~\mbox{(DBC)};~\\ = & ~\frac{\rho^{\prime}_{_{0,j}} J_{0}(\rho^{\prime}_{_{0,j}})}{J^{\prime}_{p}(\rho^{\prime}_{_{0,j}})}\left\{a_{0}C_{p}+\frac{N_{0,j}}{2\rho^{\prime}_{_{0,j}}}\sum_{n=1}^{\infty} np\,C_{n}(C_{n+p} - C_{|n-p|}) \right. \\ &\left. \qquad \qquad \qquad \qquad +\frac{N_{0,j}}{2}\sum_{n=1}^{\infty}C_{n}(C_{n+p} +C_{|n-p|})\lambda_{n,j}\right\},~~~~ \mbox{(NBC)}. \end{align*} The remaining constant $b_{0}$ can be fixed by normalising the corrected wavefunction. 
\section{Case II: Degenerate states ($l \neq 0$)} In the $l \neq 0$ case, the first order wavefunction correction is given by \begin{align} &\psi_{l,j}^{(1)} =\, a_{0} J_{0}(\rho) + \frac{\rho N_{l,j} C_{l}}{2}J^{\prime}_{l}(\rho)+\left\{a_{l}J_{l}(\rho) + \frac{\rho N_{l,j} J^{\prime}_{l}(\rho)}{2}\left(C_{2l} + \frac{E_{l,j}^{(1)}}{E_{l,j}^{(0)}}\right)\right\} \cos (l\alpha) \nonumber\\ &\qquad\qquad+\sum_{\substack{p=1\\ p\neq l}}^{\infty} \left\{a_{p}J_{p}(\rho) + \frac{\rho}{2} N_{l,j} J^{\prime}_{l}(\rho)\left(C_{l+p} + C_{|l-p|} \right)\right\} \cos (p\alpha). \label{eq:23} \end{align} Here we have considered only the `cosine' form of $\psi_{l,j}^{(0)}$ (see Eq. (\ref{eq:21})) for the $l \neq 0$ case. The other solution with the $\sin (l\alpha)$ term can be treated similarly. We have estimated the first order energy corrections by imposing the respective boundary conditions for $i =1$ given in Eqs. (\ref{eq:bctot}). $E_{l,j}^{(1)}$ is also verified by substituting $\psi_{l,j}^{(0)}$ in Eq. ({\ref{eq:16b}}). We have \begin{align} E_{l,j}^{(1)} = & -E_{l,j}^{(0)}C_{2l} ~, ~\qquad \qquad\qquad \qquad \mbox{(DBC)}~;\nonumber \\ E_{l,j}^{(1)} = & -E_{l,j}^{(0)}C_{2l}\left(\frac{\rho^{\prime^{2}}_{_{l,j}} + l^{2}}{\rho^{\prime^{2}}_{_{l,j}} - l^{2}}\right) ~,~~~\qquad\mbox{(NBC)}~,\nonumber \end{align} where the corresponding $E_{l,j}^{(0)}$ are given by Eq. ({\ref{eq:endbc}}) and Eq. ({\ref{eq:ennbc}}) respectively. This is generally non-vanishing unlike the earlier case. The second order energy correction becomes crucial when $C_{2l} = 0$. The choice of the `sine' solution for $\psi_{l,j}^{(0)}$ (Eq. 
{\ref{eq:21}}) gives \begin{align} E_{l,j}^{(1)} = &~ E_{l,j}^{(0)}C_{2l} ~, \qquad \qquad\qquad \quad \mbox{(DBC)}~; \nonumber \\ E_{l,j}^{(1)} = &~ E_{l,j}^{(0)}C_{2l}\left(\frac{\rho^{\prime^{2}}_{_{l,j}} + l^{2}}{\rho^{\prime^{2}}_{_{l,j}} - l^{2}}\right) ~,~~\quad\mbox{(NBC)}~.\nonumber \end{align} Further, the coefficients $a_{0}$ and $a_{p}$ (for $p \neq 0,~l$) are obtained as a byproduct, giving \[ \left.\hspace{-2.6cm} \begin{array}{cl} a_{0}= & -\frac{N_{l,j} \rho_{_{l,j}} C_{l}}{2}\frac{J^{\prime}_{l}(\rho_{_{l,j}})}{J_{0}(\rho_{_{l,j}})} \\ a_{p}= & -\frac{N_{l,j} \rho_{_{l,j}}J^{\prime}_{l}(\rho_{_{l,j}})}{2J_{p}(\rho_{_{l,j}})}\left[C_{p+l} + C_{|p-l|} \right] \\ a_{l}= & 0 \end{array} \hspace{-2.6cm}\right\}\mathrm{(DBC)}\] \[ \left. \begin{array}{cl} a_{0}= & -\frac{N_{l,j} \rho^{\prime}_{l,j} C_{l}}{2}\frac{J_{l}(\rho^{\prime}_{l,j})}{J_{1}(\rho^{\prime}_{l,j})}\\ a_{p}= & \frac{N_{l,j}J_{l}(\rho^{\prime}_{l,j})}{2\rho^{\prime}_{l,j}J_{p}^{\prime}(\rho^{\prime}_{l,j})} \left[(\rho^{\prime^{2}}_{l,j}+pl)C_{p+l} +(\rho^{\prime^{2}}_{l,j}-pl) C_{|p-l|} \right]\\ a_{l}=& \frac{l^4}{(\rho^{\prime^{2}}_{l,j}-l^2)^2}C_{2l} \end{array} \right\}\mathrm{(NBC)}\] The coefficient $a_{l}$ is calculated from the normalisation of the corrected wavefunction up to first order. These results are consistent with the ones obtained in earlier investigations \cite{b.6a,b.6b,b.6c}. The contours and the nodal lines of a wavefunction corrected up to first order for different boundary geometries are shown in Fig. \ref{fig:1}. The small change in the nodal lines is visible for the case of the supercircle deformation, whereas for the other cases, viz. square, rectangle and ellipse, the changes are violent. At first glance it seems that one gets a wrong eigenmode for the square in the upper left corner of Fig. \ref{fig:1}.
A closer look shows that it is indeed an eigenfunction of a square membrane, where two degenerate modes, viz. (1,4) and (4,1), are mixed in equal proportion with a relative negative sign. Under deformation, among the nodal lines only the line of symmetry is preserved, as can be seen from the examples of Fig. \ref{fig:1}. Moreover, the number of crossings of the nodal lines is also not conserved for such violent perturbations. What seems to be preserved between the equivalent domains is the number of humps and valleys. \section{Results and Discussions} We next apply the analytical formalism developed in the earlier sections to a few specific boundary geometries. We have compared our perturbative results against the numerical solutions obtained by using the Partial Differential Equation Toolbox${\rm ^{TM}}$ of MATLAB$\textsuperscript {\textregistered} $. To ensure the convergence of the eigenvalues in the numerical method, we have restricted ourselves to convex domains. We have considered the case of a supercircle and an ellipse. The polar forms of these two families of curves are respectively given by \begin{align} r(\theta)&=\frac{a}{(|\cos\theta|^{t}+|\sin\theta|^{t})^{1/t}}~,\label{eq:sc}\\ r(\theta)&=\frac{a \sqrt{1- \epsilon^2}}{\sqrt{1- \epsilon^2 \cos^2\theta}}~.\label{eq:elp} \end{align} The parameters defining the boundaries are $(a>0,t\geq1)$ and $(a>0,\epsilon>0)$ respectively. Eq. (\ref{eq:sc}) defines a diamond ($45^{\circ}$ rotated square) for $t=1$, a circle for $t = 2$, a supercircle for $t>1$ and $t \neq 2$, and a square as $t\rightarrow \infty$. The specific form of $r(\theta)$ for these closed curves is used to calculate the metric deformation and thereby estimate the energy corrections. These curves are chosen because they have a reflection symmetry about the $y$-axis and hence can be represented by the Fourier series given in Eq. \eqref{eq:19} with only cosine terms.
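As an aside (our illustration, not part of the paper's pipeline), the Fourier data entering the perturbation can be generated from Eq. (\ref{eq:sc}) by direct quadrature. For the $t = 3$ supercircle the boundary has period $\pi/2$ in $\theta$, so only harmonics with $n$ a multiple of $4$ survive:

```python
import math

def supercircle_r(theta, a=1.0, t=3.0):
    """Supercircle boundary in polar form, Eq. (sc)."""
    return a / (abs(math.cos(theta)) ** t + abs(math.sin(theta)) ** t) ** (1.0 / t)

def boundary_fourier(r, n_max=8, samples=4096):
    """Average radius R0 and cosine coefficients C_n of g = r/R0 - 1,
    computed by trapezoidal quadrature over one full period."""
    ts = [2.0 * math.pi * k / samples for k in range(samples)]
    vals = [r(t) for t in ts]
    R0 = sum(vals) / samples            # (1/2pi) * integral of r dtheta
    C = [2.0 / samples * sum((v / R0 - 1.0) * math.cos(n * tt)
                             for v, tt in zip(vals, ts))
         for n in range(1, n_max + 1)]
    return R0, C

R0, C = boundary_fourier(supercircle_r)   # C[n-1] holds C_n; C_4 dominates
```

The dominant coefficient is $C_{4}$, with smaller contributions at $n = 8, 12, \dots$, reflecting the four-fold symmetry of the boundary.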
The perturbative prescription is seen to converge, with the dominant non-zero corrections coming from the first few orders. It can be seen that the first and second order corrections in energy are linear and bi-linear in $C_{n}$ respectively. Further, it is clear that $E^{(m)}$ will be $m$-linear in $C_{n}$; hence convergence of the Fourier coefficients, $C_{n}$, will ensure the convergence of the series. In this analytic formalism the approximation appears only through truncation of the series given by Eqs. ({\ref{eq:12}}). We have estimated energy corrections up to the second order in perturbation for the $l = 0$ states and only first order corrections for the $l \neq 0$ states. The two-fold degeneracy of the original $l \neq 0$ states splits at the first order for $C_{2l} \neq 0$. In Figs. \ref{fig:2} and \ref{fig:3} we compare the analytical values calculated by our perturbation scheme with the corresponding numerical ones for the supercircular boundary, in the range of $t$ from $1$ to $3$, for the first few energy levels. Our results are in good agreement with the numerical ones. The discrepancy is $ \sim 1\%$ for the supercircle within the range $1.5\leq t \leq 3$ and is relatively larger ($\sim 5\%$) as it tends toward the diamond shape, i.e., $t=1$. This larger discrepancy is anticipated, because a square is a violent departure from a circle and such a large deformation is at odds with the basic assumption of the perturbative method. Furthermore, energy levels corresponding to some $l$ values exhibit the level-crossing phenomenon, as reported earlier \cite{b.6b,b.6c}. It is clear from Figs. \ref{fig:2} and \ref{fig:3} that the overall matching for the DBC is outstanding, except on a few occasions in the case of the square where only the first order correction (for the $l \neq 0$ case) is included. However, inclusion of higher order corrections will improve the accuracy of our method.
In contrast, for the case of NBC most of the low-lying states with non-zero $l$ values do not even have a first order correction. Hence, in such cases the error is pronounced for the square. The method is expected to yield better results for smooth boundaries without vertices. The results for the supercircle even at first order show better accuracy than their second order counterparts for the square. Similarly, the comparison for elliptical boundaries is shown in Figs. \ref{fig:4} and \ref{fig:5}. Here also, the agreement is highly satisfactory for a wide range of $\epsilon$, and the discrepancy is $\sim 2\%$. A typical comparison between the results obtained by the numerical method ($Ns$) or the exact solution ($Es$) and by our perturbative scheme ($Ps$) is shown in Table \ref{tab:1}. In conclusion, we note that the Fourier decomposition of the boundary asymmetry makes the method completely general: it holds for a wide variety of boundaries and for general boundary conditions. The main advantage of this method over the others is that it is boundary condition free and yields general closed form solutions at every order of perturbation. Next, it maintains the same simple form of the boundary condition at every order of perturbation, making its application easier. In principle, the higher order corrections could also be calculated exactly, but they are algebraically more complicated and tedious to evaluate. Since our solutions for the wavefunction are general (i.e. independent of the boundary condition), other mixed boundary conditions, such as Cauchy \cite{b.8} or Robin \cite{b.9}, could also be applied to obtain the corresponding spectrum easily. \section{Acknowledgements} SP would like to acknowledge the Council of Scientific and Industrial Research (CSIR), India for providing the financial support. The authors would like to thank S. Bharadwaj, S. Kar, A. Dasgupta, S. Das and Ganesh T. for useful discussions and help.
The authors would like to thank the referee for critical comments and suggestions for improving the text.
\section{Introduction} Nash Equilibrium problems arise in a broad variety of applications, such as wireless networks \cite{Charilas2010}, construction engineering and management \cite{Kapliski2010} and Smart Grids \cite{Saad2012}. In the latter, for optimal energy provision, Smart Grids are often represented as hierarchical models, where on a higher level an energy management problem needs to be solved between microgrids, while on a lower level, i.e. inside the microgrids, the distribution of energy or power needs to be optimally planned. The energy management problem is often cast as a non-cooperative game, where the microgrids compete against each other regarding the energy price of a central power plant \cite{Atzeni13}, \cite{Kasbekar2012}. In contrast to that, lower level problems, such as the economic dispatch problem \cite{Tatarenko2019distr}, \cite{Zimmermann2020}, are usually formulated as social welfare optimization problems, which depend on the result of the game on the higher level. Casting this problem as a multi-cluster game enables the simultaneous solution of the non-cooperative game between microgrids and the cooperative distributed optimization inside the microgrids. Such an approach is likely to be more efficient than a separate solution on each level, as the result is optimal regarding both problems.\\ For the distributed solution of multi-cluster games, most algorithms have been designed for continuous time. The works of \cite{Ye2018} and \cite{Ye2017} aim to solve an unconstrained multi-cluster game by using gradient-based algorithms. Inspired by these results, the authors of \cite{Ye2020} propose a gradient-free algorithm for a similar setup, where cost functions are unknown to the agents and therefore a payoff-based approach is used. None of these three publications defines the inter-cluster communication, and no explicit hierarchy between the agents inside a cluster is mentioned.
In contrast to that, the following publications define the inter-cluster communication by undirected graphs and introduce a leader-follower hierarchy, in which only the cluster leader communicates with leaders from other clusters. In \cite{Yue2018} such leaders and followers exchange pseudo-gradients to achieve the Nash Equilibrium of the considered constrained problem. The authors of \cite{Zeng2019} and \cite{Zou2019} both employ gradient-free algorithms to achieve a generalized Nash Equilibrium. \\ Less research has been dedicated to discrete-time setups. In the work of \cite{Meng2020} a leader-follower based algorithm for discrete-time settings is proposed, which can solve unconstrained multi-cluster games. To minimize the cost functions, a gradient-tracking approach is chosen.\\ All of the previously mentioned works deal with undirected communication architectures. \\ In this paper, we provide an algorithm that is based on the gradient-tracking algorithm of \cite{Pu2020} and solves the multi-cluster game. Each agent maintains two variables, one for the decision estimation of all agents and one for the gradient-tracking inside the cluster to which the agent belongs. The step-size of each gradient step is considered to be constant, which is an advantage of gradient-tracking updates over other methods such as the one in \cite{Nedic2015}. In contrast to most of the mentioned publications, we consider a discrete-time setting for our problem. Moreover, we define inter-cluster communication, as it is done for example in \cite{Meng2020}. Furthermore, we go beyond the leader-follower architecture of \cite{Meng2020}: in our approach the inter-cluster communication graph can be defined more generically, as more than one agent is allowed to communicate with agents outside its own cluster. However, if the graph is defined such that only one agent sends information to and receives information from other clusters, we arrive at the leader-follower architecture of \cite{Meng2020}.
Therefore, the hierarchy setup among agents of \cite{Meng2020} can be regarded as a special case of our approach. Furthermore, opposed to all mentioned literature concerning multi-cluster games, we consider directed communication, which generalizes undirected architectures. Our contribution can therefore be summarized as follows: 1) We provide a discrete-time algorithm that runs on directed, leader-free communication graphs and solves the distributed multi-cluster game. 2) We show convergence by approximation of the update equations of the algorithm with a linear, time-invariant state-space system, as it is done in \cite{Meng2020} and \cite{Pu2020}. 3) Finally, we verify our theoretical results with a simulation of an extended Cournot game. \section{Notation and Graphs} All time indices $k$ belong to the set of non-negative integers $\mathbb{Z}^+$. Scalars are denoted by $x$, while we use boldface for vectors $\bm x$ and matrices $\bm A$. The expression $(x_i)_{i=1}^{n}$ vectorizes all $x_i$, i.e. $[x_1, ..., x_n]^T$. In the same way, $(\bm x_i)_{i=1}^{n}$ stacks vectors into matrices. We denote vector norms by $|| \cdot ||$ and matrix norms by $||| \cdot |||$. Agent $i$ is part of the considered agent system, consisting of $n$ agents. All agents are grouped into $h = 1, ..., H$ clusters, where a cluster $h$ encompasses $n_h$ agents. The operator $\text{diag}(\cdot)$ expands the vector $\bm x$ into a diagonal matrix with the entries of $\bm x$ on its main diagonal, and $\text{diag}\lbrace \bm A_1, ..., \bm A_n \rbrace$ expands a series of matrices $\bm A_i \in \mathbb{R}^{n_i \times q_i}$ into a block matrix with the matrices $\bm A_1, ... ,\bm A_n$ on its diagonal as blocks and zero entries otherwise, such that the resulting matrix is of dimension $\sum_{i} n_i\times \sum_i q_i$.
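As a small illustration of the block operator (our sketch on plain nested lists, independent of any particular matrix library):

```python
def block_diag(blocks):
    """diag{A_1, ..., A_n}: place the matrices A_i (lists of lists) as diagonal
    blocks; the result has sum_i n_i rows and sum_i q_i columns."""
    rows = sum(len(A) for A in blocks)
    cols = sum(len(A[0]) for A in blocks)
    out = [[0.0] * cols for _ in range(rows)]
    r0 = c0 = 0
    for A in blocks:
        for i, row in enumerate(A):
            for j, v in enumerate(row):
                out[r0 + i][c0 + j] = v
        r0 += len(A)
        c0 += len(A[0])
    return out

# A1 is 1x2 and A2 is 2x1, so diag{A1, A2} is (1+2) x (2+1) = 3 x 3.
M = block_diag([[[1, 2]], [[3], [4]]])
```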
For brevity, we use the following notation for gradients: $\nabla_{\bm x} f(\bm x,\bm y) \big|_{\bm x = \bm x_r} \triangleq \nabla_{\bm x_r} f(\bm x_r,\bm y)$.\\ A directed graph $\mathcal{G} = \lbrace \mathcal{V}, \mathcal{E} \rbrace$ contains a set of vertices $ \mathcal{V}$ and a set of edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. Each vertex $v_i \in \mathcal{V}$ of a graph $\mathcal{G}$ is represented by an agent $i$ of the agent system and each edge $(j,i) \in \mathcal{E}$ is a directed communication channel from agent $j$ to agent $i$. \section{Preliminaries} \subsection{Communication} In this section, we present the assumptions that we make on the communication architecture of the agent system, which is described by graph theory. There are two separate communication layers: layer one, which connects the agents inside the respective cluster but allows no communication to other clusters, and layer two, which connects agents regardless of their cluster membership. For these, we make the following assumptions. \begin{assum} \label{as:graphC} The directed graph $\mathcal{G}^h$, which connects the agents inside cluster $h$, can be described by the weighted adjacency matrix $\bm C^h \in \mathbb{R}^{n_h \times n_h}$. It holds that \begin{itemize} \item all graphs $\mathcal{G}^h$, $h = 1, ..., H,$ are strongly connected, and \item the weights of each $\bm C^h$ are chosen such that the matrix is non-negative, column-stochastic, i.e. $\bm 1^T \bm C^h = \bm 1^T$, and has positive diagonal entries $\bm C^h_{ii}$. \end{itemize} \end{assum} \begin{assum} \label{as:graphR} The directed graph $\mathcal{G}$, which models connections both inside the clusters and in between them, can be described by the adjacency matrix $\bm R \in \mathbb{R}^{n \times n}$. It holds that \begin{itemize} \item the graph is strongly connected and \item the weights of $\bm R$ are chosen such that the matrix is non-negative, row-stochastic, i.e.
$\bm R \bm 1 = \bm 1$, and has positive diagonal entries $\bm R_{ii}$. \end{itemize} \end{assum} \begin{remark} For column-stochastic weighting see for example Remark 2 of \cite{Zimmermann2020}. For row-stochastic weighting use $\bm R_{ij} = 1/ \delta_j^+$, $i = 1, ..., n$, where $\delta_j^+$ is the in-degree of node $j$. The in-degree can be determined by simple message forwarding. \end{remark} Some properties of the eigenvectors of the communication matrices can be summarized in the following Lemma from \cite{Pu2020}: \begin{lemma} \label{lemma:RCeigen} Let matrices $\bm C^h$ and $\bm R$ be defined as in Assumptions \ref{as:graphC} and \ref{as:graphR}, respectively. Let all statements of these assumptions hold. Then, \begin{itemize} \item the matrices $\bm C^h$, $h = 1,..., H$, each have a unique, positive right eigenvector $\bm v^h$ with regard to eigenvalue $1$, normed such that $\bm 1^T \bm v^h = 1$, i.e., it holds that $ \bm C^h \bm v^h = \bm v^h$, \item the matrix $\bm R$ has a unique, positive left eigenvector $\bm u$, normed such that $\bm u^T \bm 1 = 1$, i.e., it holds that $\bm u^T \bm R = \bm u^T$. \end{itemize} \end{lemma} \subsection{Matrix norms} \label{subsec:matrixnorms} In the theoretical part of this work, we will need definitions of weighted spectral matrix norms, for which we follow \cite{Xin2019}, and of weighted Frobenius matrix norms, for which our results are loosely based on \cite{Meng2020}.
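The eigenvectors of Lemma \ref{lemma:RCeigen} are easy to obtain numerically. The sketch below is our illustrative example: it builds the row-stochastic $\bm R$ of a directed $3$-node ring with self-loops (weights $1/\delta^+$) and recovers $\bm u$ by power iteration with $\bm R^T$; this particular $\bm R$ happens to be doubly stochastic, so $\bm u$ is uniform.

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Directed 3-ring with self-loops: each agent averages over its two
# in-neighbours, which makes R row-stochastic with positive diagonal.
R = [[0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5]]

# Power iteration with R^T converges to the left eigenvector u (u^T R = u^T),
# since the graph is strongly connected and aperiodic (positive diagonal).
u = [1.0, 0.0, 0.0]
for _ in range(200):
    u = matvec(transpose(R), u)
    s = sum(u)
    u = [ui / s for ui in u]
```

The same power iteration applied directly to a column-stochastic $\bm C^h$ would yield the right eigenvector $\bm v^h$.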
\\ We define the weighted spectral matrix norms for arbitrary square matrices $\bm X \in \mathbb{R}^{r \times r}$ as follows: \begin{align*} ||| \bm X |||_2^{\bm u} &\triangleq ||| \text{diag}(\sqrt{\bm u}) \bm X \text{diag}(\sqrt{\bm u})^{-1}|||_2, \\ ||| \bm X |||_2^{\bm v^h} &\triangleq ||| \text{diag}(\sqrt{\bm v^h})^{-1} \bm X \text{diag}(\sqrt{\bm v^h})|||_2, \end{align*} using the left eigenvector $\bm u$ of matrix $\bm R$, defined in Assumption \ref{as:graphR}, and the right eigenvector $\bm v^h$ of matrix $\bm C^h$, defined in Assumption \ref{as:graphC}. Note that these definitions correspond to the norms in equations (4) and (5) of \cite{Xin2019}. \\ The Frobenius inner product for real, rectangular matrices $\bm A, \bm B \in \mathbb{R}^{r \times s}$ is defined as $\langle \bm A, \bm B \rangle_{F} = \text{tr}(\bm B^T \bm A) $, see \cite{Horn}. The weighted Frobenius inner products $\langle \bm A, \bm B \rangle_{F}^{\bm u} = \text{tr}(\bm B^T \text{diag}(\bm u) \bm A), \langle \bm A, \bm B \rangle_{F}^{\bm v^h} = \text{tr}(\bm B^T \text{diag}(\bm v^h)^{-1} \bm A)$ induce the weighted Frobenius matrix norms \begin{align} ||| \bm A |||_F^{\bm u} & \triangleq ||| \text{diag}(\sqrt{\bm u})\bm A|||_F, \\ ||| \bm A |||_F^{\bm v^h} & \triangleq ||| \text{diag}(\sqrt{\bm v^h})^{-1}\bm A|||_F. \end{align} Based on the equivalence of norms, it can be established that there exist constants $\delta_{u,F}, \delta_{v, F}, \delta_{F, v}, \delta_{F, u} > 0$ such that \begin{align*} &||| \cdot |||^{\bm u}_F \leq \delta_{u,F} ||| \cdot |||_F, & ||| \cdot |||^{\bm v^h}_F \leq \delta_{v,F} ||| \cdot |||_F, \\ &||| \cdot |||_{F} \leq \delta_{F,u} ||| \cdot |||_F^{\bm u}, & ||| \cdot |||_{F} \leq \delta_{F,v} ||| \cdot |||_F^{\bm v^h}.
\end{align*} Establishing such upper bounds is standard in the relevant literature \cite{Meng2020}, \cite{Pu2020}, \cite{Xin2019}.\\ Concerning the standard Frobenius norm of a matrix product, the following upper bound can be provided using the spectral matrix norm: \begin{lemma}\label{lemma:submult} For arbitrary matrices $\bm A \in \mathbb{R}^{n \times n}$, $\bm B \in \mathbb{R}^{n \times q}$, it holds that $\left| \left| \left| \bm A \bm B \right| \right|\right|_F \leq \left| \left| \left| \bm A \right| \right|\right|_2 \left| \left| \left| \bm B \right| \right|\right|_F$. \end{lemma} This lemma can be proved using the submultiplicativity of the Frobenius norm with respect to the spectral norm; we skip the mathematical details for brevity. \\ The matrix norms defined above can now be used in the following lemma: \begin{lemma}\label{lemma:sigma} Let Assumptions \ref{as:graphC} and \ref{as:graphR} hold and let matrices $\bm R$ and $\bm C^h$ be defined as therein. The vectors $\bm u$ and $\bm v^h$ are the respective eigenvectors. Then, for arbitrary $\bm x \in \mathbb{R}^{n \times q}$ and $\bm y \in \mathbb{R}^{n_h \times q_h}$, there exist positive constants $\sigma_R, \sigma_C < 1$ such that \begin{align} \left| \left| \left| \bm R \bm x - \bm 1 \bm u^T \bm x\right| \right|\right|_F^{\bm u} &\leq \sigma_R \left| \left| \left| \bm x - \bm 1 \bm u^T \bm x\right| \right|\right|_F^{\bm u}, \label{eq:lemmasigmaR} \\ \left| \left| \left| \bm C^h \bm y - \bm v^h \bm 1^T \bm y\right| \right|\right|_F^{\bm v^h} &\leq \sigma_C \left| \left| \left| \bm y - \bm v^h \bm 1^T \bm y\right| \right|\right|_F^{\bm v^h}. \label{eq:lemmasigmaC} \end{align} \end{lemma} This lemma is an adjusted version of Lemma 4 of \cite{Pu2020}; again, the details of the proof are skipped for brevity. \section{Multi-cluster games and gradient-tracking} \subsection{Problem formulation} Assume an agent system consisting of $n$ agents, in which each agent has communication and computation abilities.
The storage capacity of these agents is limited. The agents are grouped into $H$ clusters and $n_h$ agents belong to cluster $h$. The set $\mathcal{A}_h$ contains the agents of cluster $h$. These sets are pairwise disjoint, i.e. $\mathcal{A}_i \cap \mathcal{A}_j = \emptyset$ for $i \neq j$. The agents are connected by two different communication graphs. The graph $\mathcal{G}^h$ connects only the agents inside cluster $h$, while the graph $\mathcal{G}$ connects agents independently of their cluster membership. Therefore, $\mathcal{G}^h$ restricts communication to intra-cluster exchange of information, while $\mathcal{G}$ enables global messaging.\\ It is assumed that each cluster forms a coalition, which means that all agents inside a cluster collaborate to achieve a common goal. In contrast, the clusters compete against each other with respect to some coupled cost function. This inter-cluster competition can be modeled as a non-cooperative game, where the clusters are regarded as virtual players, while the actual decisions and actions are determined by the agents inside the clusters. Mathematically, we model this setup as follows.\\ Agent $i$ of cluster $h$ has exclusive access to its personal cost function $f_i^h(\bm x)$, which is assumed to be convex. This cost function is known only to agent $i$ and unknown to every other agent, independently of cluster membership. The vector $\bm x \in \mathbb{R}^q$ is the shared decision vector, which can be separated into the decisions of cluster $h$, i.e. $\bm x^h \in \mathbb{R}^{q_h}$, and the decisions of all other clusters, denoted by $\bm x^{-h} \in \mathbb{R}^{q - q_h}$. The cluster cost function $F^h:\mathbb{R}^q \rightarrow \mathbb{R}$ of cluster $h$ is defined as \begin{equation}\label{eq:gameformulation} F^h(\bm x^h, \bm x^{-h}) = \sum_{i = 1}^{n_h} f_i^h(\bm x^h, \bm x^{-h}).
\end{equation} Each cluster aims to minimize this function $F^h$, which depends not only on the actions $\bm x^h$ of cluster $h$ but also on the actions of all other clusters. However, in the optimization process, agents can only adjust the decisions of their own cluster while observing $\bm x^{-h}$.\\ With all considerations from above, we can express the multi-cluster game $\Gamma(H, \mathbb{R}^q, \lbrace F^h\rbrace)$, with the clusters as virtual players, as the following optimization problem \footnote{Note that if the number of clusters is reduced to $H = 1$, we arrive at the standard definition of an unconstrained, distributed optimization problem as described in \cite{Nedic2015} or \cite{Pu2020}.} : \begin{equation}\label{eq:gameproblem} \min_{\bm x^h \in \mathbb{R}^{q_h}} F^h(\bm x^h, \bm x^{-h}) = \min_{\bm x^h \in \mathbb{R}^{q_h}} \sum_{i=1}^{n_h} f_i^h(\bm x^h, \bm x^{-h}), \end{equation} $\forall h = 1, ..., H.$ In order to evaluate the gradient of its local cost function, each agent needs an estimate of the other clusters' decisions. Therefore, every agent $i$ maintains a vector $\bm x_i$ that estimates the decisions of all clusters in the network. In order for a solution $\bm{x}^*$ to be an optimum of the defined game, the following conditions need to be fulfilled: \begin{itemize} \item Consensus among agents concerning the local state estimations: \vspace{-0.3cm} \begin{equation} \bm x_i = \bm x_j = \bm{x}^*, \ i,j = 1, ..., n, \label{eq:consensuscondition} \end{equation} \item Social welfare minimum inside all clusters $h= 1, ..., H$ for the sum of convex functions: \begin{equation} \sum_{i=1}^{n_h} \nabla_{\bm x^h} f_i^h((\bm x^h)^*, (\bm x^{-h})^* ) = 0 \label{eq:socialwelfarecondition} \end{equation} \item Nash Equilibrium for game $\Gamma(H, \mathbb{R}^q, \lbrace F^h\rbrace)$ between clusters: \begin{equation} F^h\left((\bm x^h)^*, (\bm x^{-h})^*\right) \leq F^h\left(\bm x^h, (\bm x^{-h})^*\right), \forall h \label{eq:nasheqcondition}.
\end{equation} \end{itemize} The consensus condition is necessary because the final decision vectors need to be the same at every agent. Every estimate should converge to the optimal decision $\bm{x}^*$ that satisfies the social welfare and Nash Equilibrium conditions.\\ Before we describe our algorithm, which solves the formulated problem, we first make assumptions regarding the local cost functions and their gradients to further specify the class of problems that we consider. \begin{assum}\label{as:lipschitz} All local cost functions $f_i^h(\bm x^h, \bm x^{-h})$, for all $i = 1, ..., n_h$ and $h = 1, ..., H$, are convex and continuously differentiable, and the gradient $\nabla_{\bm x^h} f_i^h(\bm x^h, \bm x^{-h})$ is Lipschitz continuous on $\mathbb{R}^{q_h}$, i.e. there exist constants $L_i^h > 0$ such that \begin{align*} ||\nabla_{\bm x^h} f_i^h(\bm x^h, \bm x^{-h}) - \nabla_{\tilde{\bm x}^h} f_i^h(\tilde{\bm x}^h, \tilde{\bm x}^{-h}) ||_2& \\ \leq L_i^h || \bm x^h - \tilde{\bm x}^h ||_2 \leq L_i^h || \bm x - \tilde{\bm x} ||_2.& \end{align*} Furthermore, it can be assumed that there exists a constant $L > 0$ such that $L \geq L_i^h, \ \forall i, h$. \end{assum} This assumption is standard in gradient-based distributed optimization \cite{Nedic2015}, \cite{Pu2020}. Next, we define $\bm g^h(\bm x^h, \bm x^{-h}) \triangleq \sum_{i = 1}^{n_h} \nabla_{\bm x^h}f_i^h(\bm x^h, \bm x^{-h}) \in \mathbb{R}^{q_h}$ and the game mapping $ \bm M: \mathbb{R}^q \rightarrow \mathbb{R}^q$, \begin{equation} \label{eq:gamemapping} \bm M(\bm x) = [\bm g^1(\bm x)^T, ..., \bm g^H(\bm x)^T]^T. \end{equation} We make the following assumption. \begin{assum}\label{as:strongmonotonegame} The game mapping $\bm M(\bm x)$ is strongly monotone on $\mathbb{R}^q$ with constant $\mu > 0$. \end{assum} \begin{remark} This assumption guarantees uniqueness of the Nash Equilibrium of game $\Gamma(H, \mathbb{R}^q, \lbrace F^h\rbrace)$.
Note that with this assumption it holds $\forall \bm x, \bm y \in \mathbb{R}^q$ that \begin{align*} \sum_{h = 1}^H \left[\left( \sum_{i = 1}^{n_h} \left(\nabla_{\bm x^h} f_i^h(\bm x) - \nabla_{\bm y^h}f_i^h(\bm y)\right)\right)^T(\bm x^h - \bm y^h) \right]\\ \geq \mu \sum_{h = 1}^H ||\bm x^h - \bm y^h||_2^2 = \mu ||\bm x - \bm y||_2^2. \end{align*} \end{remark} \subsection{Algorithm} Let $k = 0, 1, 2, ...$ be the time index. At each instant $k$, vector $\bm x_i(k)$ contains agent $i$'s estimates of the decisions of all clusters and therefore takes the form \begin{equation*} \bm x_i(k) = \left[ (\bm x_i^1(k))^T, ..., (\bm x_i^h(k))^T, ..., (\bm x_i^H(k))^T \right]^T \in \mathbb{R}^q, \end{equation*} where $\bm x_i^h(k) \in \mathbb{R}^{q_h}$ is the estimate made by agent $i$ of cluster $h$'s decisions at time $k$. The estimation vectors of all $n$ agents in the system can then be stacked to obtain a matrix of all estimates at time $k$, \begin{equation*} \bm x(k) = [\bm x_1(k), ..., \bm x_n(k)]^T \in \mathbb{R}^{n \times q}. \end{equation*} Without loss of generality, we assume that the row sequence of $\bm x(k)$ is ordered according to the numbering of clusters $h= 1, ..., H$. This means that the first $n_1$ rows of $\bm x(k)$ are estimates of agents that belong to cluster $1$, the next $n_2$ rows are estimates of agents belonging to cluster $2$, and so on.\\ Furthermore, we introduce the variable $\bm y_i^h(k) \in \mathbb{R}^{q_h}$, which is agent $i$'s local tracking variable of the gradient in cluster $h$ at time $k$. Assuming that $i \in \mathcal{A}_h$, we define the vector \begin{equation*} \hat{\bm y}_i^h(k) = [\bm 0_{1\times q_{<h}}, (\bm y_i^h(k))^T, \bm 0_{1\times q_{>h}}]^T \in \mathbb{R}^q, \end{equation*} with $q_{<h} \triangleq \sum_{l = 1}^{h-1} q_l$ and $q_{>h} \triangleq \sum_{l = h+1}^{H} q_l$.
We stack all local tracking variables of cluster $h$, \begin{equation*} \bm y^h(k) = [\bm y^h_1(k), ..., \bm y^h_{n_h}(k)]^T \in \mathbb{R}^{n_h \times q_h}, \end{equation*} and then collect the tracking variables of the separate clusters in the block matrix \begin{equation*} \bm Y(k) = \text{diag}\lbrace\bm y^1(k), ..., \bm y^H(k)\rbrace \in \mathbb{R}^{n \times q}. \end{equation*} It is important to initialize all local tracking variables $\bm y_i^h(0)$ with the local gradient at the starting estimate $\bm x_i(0)$, i.e. \begin{equation} \label{eq:yhinit} \bm y_i^h(0) = \nabla_{\bm x_i^h} f_i^h(\bm x_i(0)). \end{equation} Finally, we define the matrix $\bm G^h(k) \in \mathbb{R}^{n_h \times q_h}$, which contains the gradients of cluster $h$ at time $k$: \begin{equation*} \bm G^h(k) = [\nabla_{\bm x_1^h} f_1^h(\bm x_1(k)), ..., \nabla_{\bm x_{n_h}^h} f_{n_h}^h(\bm x_{n_h}(k))]^T. \end{equation*} With all above definitions, we are able to formalize our algorithm with agent-wise update equations as follows: \newpage \begin{subequations}\label{eq:algorithmagent} \begin{align} \bm x_i(k+1) &= \sum_{j=1}^n \bm R_{ij}(\bm x_j(k) - \alpha \hat{\bm y}_j^{h_j}(k)) , \\ \bm y_i^h(k+1) &= \sum_{j=1}^{n_h} \bm C^h_{ij} \bm y_j^h(k) \\ &+ \nabla_{\bm x_i^h} f_i^h(\bm x_i^h(k+1), \bm x_i^{-h}(k+1) ) \nonumber \\ &-\nabla_{\bm x_i^h} f_i^h(\bm x_i^h(k), \bm x_i^{-h}(k)) \nonumber, \end{align} \end{subequations} where $\alpha$ is a positive, constant step-size and $h_j$ denotes the cluster that agent $j$ belongs to. \\ Using the stacked vectors and matrices from above, we can write a matrix-update representation of the algorithm: \begin{subequations}\label{eq:algorithmvector} \begin{align} \bm x(k+1) &= \bm R \left(\bm x(k) - \alpha \bm Y(k) \right) \label{alg:vector_x}, \\ \bm y^h(k+1) &= \bm C^h \bm y^h(k) + \bm G^h(k+1) - \bm G^h(k) \label{alg:vector_y}. \end{align} \end{subequations} This algorithm is based on the push-pull algorithm, discussed for example in \cite{Pu2020} or \cite{Xin2019}, which we adapted to the cluster game scenario.
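The matrix updates \eqref{eq:algorithmvector} can be sketched on a small, hypothetical quadratic game with two clusters of two agents and scalar cluster decisions; the cost functions, communication weights and step-size below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, H = 4, 2
clusters = [[0, 1], [2, 3]]            # agents 0,1 form cluster 1; agents 2,3 form cluster 2
t = np.array([1.0, 3.0, -1.0, 2.0])    # hypothetical target parameters

# Hypothetical local costs f_i^h(x) = 0.5*(x^h - t_i)^2 + 0.1*x^h*x^{-h};
# the gradient w.r.t. the own cluster decision is x^h - t_i + 0.1*x^{-h}.
def grad_own(i, h, xi):
    return xi[h] - t[i] + 0.1 * xi[1 - h]

# R: row-stochastic directed ring with self-loops; C^h: column-stochastic
# averaging inside each cluster (both satisfy the graph assumptions).
R = np.zeros((n, n))
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5
Ch = np.full((2, 2), 0.5)

alpha = 0.05
x = rng.normal(size=(n, H))            # row i: agent i's estimate of both cluster decisions
y = np.zeros((n, H))                   # padded gradient trackers \hat y_i^h
for h, members in enumerate(clusters):
    for i in members:
        y[i, h] = grad_own(i, h, x[i])   # initialize trackers with local gradients

for _ in range(5000):
    x_new = R @ (x - alpha * y)          # estimate update
    for h, members in enumerate(clusters):
        g_old = np.array([grad_own(i, h, x[i]) for i in members])
        g_new = np.array([grad_own(i, h, x_new[i]) for i in members])
        y[np.ix_(members, [h])] = Ch @ y[np.ix_(members, [h])] + (g_new - g_old)[:, None]
    x = x_new

# For this quadratic game the Nash Equilibrium solves a linear system.
x_star = np.linalg.solve(np.array([[2.0, 0.2], [0.2, 2.0]]),
                         np.array([t[0] + t[1], t[2] + t[3]]))
```

In this toy game the estimates of all four agents approach the unique Nash Equilibrium, matching the consensus and optimality conditions \eqref{eq:consensuscondition}--\eqref{eq:nasheqcondition}.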
However, in contrast to the algorithms in these publications, in the gradient tracking step of our algorithm, all agents exchange information solely with other agents of the same cluster $h$. This is ensured by choosing the graphs $\mathcal{G}^h$, and therewith the matrices $\bm C^h$, appropriately. Furthermore, in the update step of the estimates, only those estimated decisions that belong to the cluster $h$ that agent $i$ is part of are updated with gradient information; the estimates $\bm x_i^{-h}$ of agent $i$ are updated without using gradient information. By this structure, we ensure that an agent's own decisions are pushed towards the local social welfare optimum of the respective cluster, while estimates of the decisions of other clusters are pushed towards a consensus.\\ In contrast to the algorithm presented in \cite{Meng2020}, our algorithm relies on directed communication, which extends the range of applications. Furthermore, in \cite{Meng2020} only leaders can exchange information between clusters. By allowing more inter-cluster communication channels in our approach, the amount of exchanged information can be increased such that an inter-cluster consensus is likely to be achieved faster. For a preliminary convergence comparison, see the end of the Simulation section. \subsection{Convergence} Let the vectors $\bar{\bm x}(k) \in \mathbb{R}^{1 \times q}$ and $\bar{\bm y}^h(k) \in \mathbb{R}^{1 \times q_h}$ be defined as \begin{align*} \bar{\bm x}(k) = \bm u^T \bm x(k), \qquad \bar{\bm y}^h(k) = \frac{1}{n_h} \bm 1^T \bm y^h(k), \end{align*} with $\bm u$ being the left eigenvector of matrix $\bm R$, see Lemma \ref{lemma:RCeigen}. Due to the initialization of $\bm y^h$ in Equation \eqref{eq:yhinit}, it can be shown by induction that \begin{equation}\label{eq:yinduction} \bar{\bm y}^h(k) = \frac{1}{n_h} \sum_{i = 1}^{n_h} \nabla_{\bm x_i^h} f_i^h(\bm x_i^h(k), \bm x_i^{-h}(k)).
\end{equation} Using the eigenvectors $\bm v^h$ of matrices $\bm C^h$, we define the block matrices \begin{align} \bm \Lambda_{\bm y} &= \text{diag}\lbrace\bm v^1\bar{\bm y}^1(k), ..., \bm v^H\bar{\bm y}^H(k)\rbrace \in \mathbb{R}^{n \times q}, \\ \bm \Lambda_{\bar{\bm g}} &= \text{diag}\lbrace\bm v^1\bar{\bm g}^1(k), ..., \bm v^H\bar{\bm g}^H(k)\rbrace \in \mathbb{R}^{n \times q}, \end{align} using the vectors \begin{equation*} \bar{\bm g}^h (k) = \frac{1}{n_h} \sum_{i = 1}^{n_h} (\nabla_{\bar{\bm x}^h}f_i^h(\bar{\bm x}^h(k), \bar{\bm x}^{-h}(k)))^T \in \mathbb{R}^{1 \times q_h}. \end{equation*} The general structure of our convergence proof is a known procedure (see \cite{Meng2020}, \cite{Pu2020}), which we extend to our problem formulation. The main idea is to show that the norms $\left| \left| \left| \bm 1 \bar{\bm x}(k)- \bm 1 \bm{x}^* \right| \right|\right|_F, \left| \left| \left| \bm x(k) - \bm 1 \bar{\bm x}(k) \right| \right|\right|_F^{\ub}$ and $\sum_{h=1}^H\left| \left| \left| \bm y^h(k) - \bm v^h \bar{\bm y}^h(k) \right| \right|\right|_F^{\vb^h}$ converge to zero as time goes to infinity when using the update equations \eqref{eq:algorithmagent} or \eqref{eq:algorithmvector}. This means that all estimates $\bm x_i$ and all tracking variables $\bm y_i^h$ converge to a stable state. This can be achieved by upper bounding the update steps by a linear, time-invariant matrix $\bm A$ and showing that the spectral radius of this matrix is strictly smaller than $1$. Therefore, we show the following. \begin{prop}\label{prop:matrixA} Let Assumptions \ref{as:graphC}, \ref{as:graphR} and \ref{as:lipschitz} hold. The vector $\bm{x}^*$ fulfills the optimality condition \eqref{eq:socialwelfarecondition}.
Using the update equations of the algorithm in \eqref{eq:algorithmagent} or \eqref{eq:algorithmvector}, the following linear inequality system can be established: \begin{align} \begin{bmatrix} \left| \left| \left| \bm 1 \bar{\bm x}(k+1)- \bm 1 \bm{x}^* \right| \right|\right|_F \\ \left| \left| \left| \bm x(k+1) - \bm 1 \bar{\bm x}(k+1) \right| \right|\right|_F^{\ub} \\ \sum_{h=1}^H\left| \left| \left| \bm y^h(k+1) - \bm v^h \bar{\bm y}^h(k+1) \right| \right|\right|_F^{\vb^h} \end{bmatrix} \\ \leq \bm A \begin{bmatrix} \left| \left| \left| \bm 1 \bar{\bm x}(k)- \bm 1 \bm{x}^* \right| \right|\right|_F \\ \left| \left| \left| \bm x(k) - \bm 1 \bar{\bm x}(k) \right| \right|\right|_F^{\ub} \\ \sum_{h=1}^H\left| \left| \left| \bm y^h(k) - \bm v^h \bar{\bm y}^h(k) \right| \right|\right|_F^{\vb^h} \end{bmatrix}, \end{align} with matrix \begin{align*} \bm A = \begin{bmatrix} \phi(\alpha) & \alpha a_{12} & \alpha a_{13} \\ \alpha a_{21}&\sigma_R + \alpha a_{22} & \alpha a_{23} \\ \alpha a_{31 }& a_{32} + \alpha a'_{32} & \sigma_C + \alpha a_{33} \end{bmatrix} \end{align*} and scalar function \begin{equation*} \phi(\alpha) = \sqrt{1 - 2\alpha \underline{\eta} \mu + \alpha^2 L_v^2 \left| \left| \left| \bm 1 \bm u^T \right| \right|\right|_2^2}, \end{equation*} where $L_v = L \max_{i,h} \lbrace v_i^h\rbrace$ and $0 < \underline{\eta} \leq \min_h \lbrace \eta^h = \frac{(\bm v^h)^T \bm u^h}{n_h} \rbrace $.\\ All factors $a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32},a'_{32}, a_{33}$ are positive. \end{prop} The proof of this proposition relies on the argumentation in \cite{Meng2020}, which we adjusted to our setting, i.e. the incorporation of row- and column-stochastic matrices $\bm R$ and $\bm C^h$ that allow for more general communication than undirected leader-follower architectures. For the mathematical details and the factors $a_{ij}$, see Appendix \ref{ap:proposition}.
\newline Now that we have the upper bound for every iteration, we need to show that the system matrix $\bm A$ is stable. Therefore, we demonstrate the following. \begin{prop}\label{prop:Astable} There exists a step-size $\alpha > 0$ such that the spectral radius of $\bm A(\alpha)$ is strictly smaller than $1$, i.e. \begin{equation} \rho(\bm A) < 1. \end{equation} \end{prop} \begin{proof} Again, we follow \cite{Meng2020}. For $\alpha = 0$, matrix $\bm A$ contains the entries \begin{equation} \bm A(\alpha = 0) = \begin{bmatrix} 1 & 0 & 0 \\ 0&\sigma_R & 0\\ 0& a_{32} & \sigma_C \end{bmatrix}. \end{equation} Because of $0 < \sigma_R< 1$ and $ 0< \sigma_C < 1$, see Lemma \ref{lemma:sigma}, it holds that $\rho(\bm A(0)) = 1$ and $\bm A(0)$ has the right eigenvector $\bm e_1 = [1 \ 0 \ 0]^T$ corresponding to eigenvalue $\lambda_1 = 1$. Now we need to investigate how this eigenvalue $\lambda_1$ changes when the value of $\alpha$ increases from $0$. For this, the eigenvalue problem provides us with $ \frac{d \lambda_1(\alpha)}{d \alpha}\big|_{\alpha = 0} \bm e_1 =\frac{d \bm A(\alpha)} {d \alpha}\big|_{\alpha = 0} \bm e_1,$ from which \begin{align*} \frac{d \lambda_1(\alpha)}{d \alpha}&\Bigg|_{\alpha = 0} = \frac{d \phi(\alpha)}{d \alpha } \Bigg|_{\alpha = 0} \\ & = \frac{- 2 \underline{\eta} \mu + 2\alpha L_v^2 \left| \left| \left| \bm 1 \bm u^T \right| \right|\right|_2^2}{2 \sqrt{1 - 2\alpha \underline{\eta} \mu + \alpha^2 L_v^2 \left| \left| \left| \bm 1 \bm u^T \right| \right|\right|_2^2}}\Bigg|_{\alpha = 0} = - \underline{\eta} \mu \end{align*} follows, where $0 < \underline{\eta} \leq \eta^h = \frac{(\bm v^h)^T \bm u^h}{n_h} \ \forall h$, because all $\bm v^h, \ h = 1, ..., H$, and $\bm u$ are positive, see Lemma \ref{lemma:RCeigen}. Therefore, $- \underline{\eta} \mu < 0$. Because of this, the value of the spectral radius $\rho(\bm A(\alpha))$ decreases as $\alpha$ increases from zero.
Following from this fact, together with the continuity of the eigenvalues in $\alpha$, there must exist an $\alpha > 0$ for which $\rho(\bm A(\alpha))< 1$. \end{proof} With this result, we now know that the variables of the algorithm \eqref{eq:algorithmagent} or \eqref{eq:algorithmvector} converge to stable states. In the following theorem, we combine all of the results above and show that the multi-cluster game problem can be solved with our algorithm: \begin{theorem} Let Assumptions \ref{as:graphC}, \ref{as:graphR}, \ref{as:lipschitz} and \ref{as:strongmonotonegame} hold. Then, there exists a unique Nash Equilibrium for the game $\Gamma(H, \mathbb{R}^q, \lbrace F^h\rbrace)$ defined in \eqref{eq:gameproblem}. Furthermore, using the update equations in \eqref{eq:algorithmagent} or \eqref{eq:algorithmvector}, the estimates of all agents reach a consensus, \begin{equation}\label{eq:toinfinityandbeyond} \lim_{k \rightarrow \infty}\bm x_j(k) = \lim_{k \rightarrow \infty}\bm x_i(k) = \bm{x}^* , \forall i,j = 1, ..., n, \end{equation} where the stable state $\bm{x}^*$ fulfills the optimality conditions \eqref{eq:socialwelfarecondition} and \eqref{eq:nasheqcondition}. With this, optimality condition \eqref{eq:consensuscondition} is fulfilled as well. Therefore, the multi-cluster game is solved and the convergence rate to the optimum is linear. \end{theorem} \begin{proof} Given strong monotonicity of the mapping $\bm M(\bm x)$ as claimed in Assumption \ref{as:strongmonotonegame}, there exists a unique Nash Equilibrium and the vector $\bm{x}^*$ is this unique stable point if it satisfies $\bm M(\bm{x}^*) = \bm 0$ \cite{Tatarenko2019}. From Propositions \ref{prop:matrixA} and \ref{prop:Astable}, we know that all estimates converge linearly to a consensus and this consensus is $\bm{x}^*$, which satisfies optimality condition \eqref{eq:socialwelfarecondition}.
Therefore, the expression in Equation \eqref{eq:toinfinityandbeyond} is true and the convergence rate is linear.\\ According to the definition of the game mapping in Equation \eqref{eq:gamemapping}, $\bm M(\bm{x}^*) = \bm 0$ holds if $\bm g^h(\bm{x}^*) \triangleq \sum_{i = 1}^{n_h} \nabla_{\bm x^h}f_i^h(\bm{x}^*) = \bm 0$ for all $h = 1, ..., H$. This in turn is the condition for the local social welfare optimum in Equation \eqref{eq:socialwelfarecondition}. This means that if the estimates of all agents are in consensus with each other, i.e. condition \eqref{eq:consensuscondition} holds, and this consensus vector $\bm{x}^*$ fulfills condition \eqref{eq:socialwelfarecondition} for all $h = 1, ..., H$, then the Nash Equilibrium condition \eqref{eq:nasheqcondition} is fulfilled as well. We showed convergence towards $\bm x^*$, which fulfills condition \eqref{eq:socialwelfarecondition}, in Propositions \ref{prop:matrixA} and \ref{prop:Astable}. Therefore, the proof is concluded. \end{proof} \section{Simulation} For the verification of our algorithm by simulation, we chose a variant of the well-known Cournot game, as is done in \cite{Meng2020}. In this scenario, there are $n$ factories that produce the same product. We describe the amount of product units produced by factory $i$ with the scalar $x_i$. Each factory has an individual cost $C_i(x_i)$ for producing $x_i$ units of the product. For our simulation, we chose $ C_i(x_i) = a_i x_i^2 + b_i x_i + c_i$. By selling $x_i$ units for a price $P(\bm x)$, which depends on the vectorized output $\bm x = [x_1, ..., x_n]^T$ of all factories, each factory generates revenue. We chose the price function $P(\bm x) = P_c - \sum_{j=1}^n x_j$, where $P_c$ is some positive constant.
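The cost and price model just introduced can be sketched as follows; the coefficients $a_i, b_i, c_i$ and the constant $P_c$ below are hypothetical choices for illustration, and the local objective $f_i(\bm x) = C_i(x_i) - x_i P(\bm x)$ and its gradient follow directly from these definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
a = rng.uniform(0.5, 1.5, n)        # hypothetical quadratic cost coefficients
b = rng.uniform(0.0, 1.0, n)
c = rng.uniform(0.0, 1.0, n)
P_c = 50.0                          # hypothetical price constant

def production_cost(i, x):          # C_i(x_i) = a_i x_i^2 + b_i x_i + c_i
    return a[i] * x[i] ** 2 + b[i] * x[i] + c[i]

def price(x):                       # P(x) = P_c - sum_j x_j
    return P_c - x.sum()

def local_objective(i, x):          # f_i(x) = C_i(x_i) - x_i * P(x)
    return production_cost(i, x) - x[i] * price(x)

def local_gradient(i, x):           # d f_i / d x_i, by the product rule
    return 2 * a[i] * x[i] + b[i] - price(x) + x[i]
```

The extra $+\,x_i$ term in the gradient arises because $x_i$ enters both the revenue factor and the price itself; a finite-difference check confirms the analytic expression.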
With this, factory $i$ has the objective function $f_i(\bm x) = C_i(x_i) - x_iP(\bm x).$ Each of the factories belongs to a company $h$, and this company aims to minimize the objective functions of all factories that belong to it, \begin{equation}\label{eq:costcournot} \min_{\bm x^h} F^h = \min_{\bm x^h} \sum_{i = 1}^{n_h} C_i(x_i) - x_iP(\bm x), \end{equation} by adjusting the output $\bm x^h = [(x_i)_{i=1}^{n_h}]$, while competing against the other companies. Thereby, we have arrived at the formulation of a non-cooperative, multi-cluster game, where the factories correspond to agents and the companies are the agent-containing clusters.\\ It can readily be confirmed that the cost functions in Equation \eqref{eq:costcournot} fulfill Assumptions \ref{as:lipschitz} and \ref{as:strongmonotonegame}. We simulated our algorithm with $H = 3 $ companies. Company $h=1$ owns four factories, while companies $h=2,3$ each own three. The factories are connected by intra-cluster graphs $\mathcal{G}^h$ and a global graph $\mathcal{G}$, which were chosen such that they satisfy Assumptions \ref{as:graphC} and \ref{as:graphR}, respectively. The starting estimates of each agent were chosen randomly and the tracking variables were set according to Equation \eqref{eq:yhinit}. The step-size was set to $\alpha = 0.1$. The results of the simulation are shown in Figure \ref{fig:cournot_states}. \begin{figure}[h] \centering \scalebox{0.9}{ \input{cournot_states_single} } \caption{Convergence of the estimates $\bm x_i$ to the optimum $\bm{x}^*$ for the multi-cluster Cournot game with 10 factories. Each color represents a different dimension of the decision vector $\bm x$ and each line is the estimate of one dimension made by one agent.} \label{fig:cournot_states} \end{figure} It can be seen that a consensus is reached in every dimension of the final decision vector.
In fact, it can be confirmed that after 300 iterations the sum of absolute differences between the estimate of agent $1$, chosen as a representative, and those of all other agents is less than $1.90\times 10^{-3}$ for all dimensions of $\bm x$. Furthermore, the normed difference of the estimate $\bm x_1$ of agent $1$ to the Nash Equilibrium state $\bm{x}^*$ evaluates to $ \epsilon = ||\bm x_1 - \bm{x}^*||_2 = 0.005$. When restricting the setup to a leader-follower hierarchy, such as in \cite{Meng2020}, where only the cluster leader communicates with other clusters, we need about 500 iterations to reach a comparable accuracy with the same parameterization. \section{Conclusion} In this work, we presented a distributed algorithm for the solution of a class of multi-cluster games and proved linear convergence to an optimal decision vector that fulfills the optimality conditions of the Nash Equilibrium between the clusters and of the social welfare optimum inside each cluster. As fewer restrictions are imposed on the communication architecture, the algorithm is applicable to a wider range of problems than the one in \cite{Meng2020}. However, in order to handle more complicated scenarios such as the multi-level energy provision problem in Smart Grids, mentioned in the introduction, the optimization procedure needs to be able to respect constraints of agents or clusters. How to include such constraints in our algorithm is a promising direction for future research.
\section{Introduction} \label{sec:intro} Distributed optimization~\cite{bertsekas1989parallel} aims to optimize a global objective formed by a sum of functions: \begin{align} \label{eq:goal} f(x) = \tfrac{1}{m}\sum_{i=1}^{m} f_i(x) \qquad f,f_i : \mathbb{R}^d \rightarrow \mathbb{R}, \end{align} where $m$ is the number of computational nodes and each $f_i$ is called a local loss function. Each $f_i$ may possibly be derived from the $i^\text{th}$ batch of data, and hence may differ for each node $i$. Optimization of~(\ref{eq:goal}) usually consists of two phases: local optimization, such as gradient descent, and inter-node communication, such as model averaging. Conventional distributed optimization algorithms, such as the ``Parameter-Server'' model~\cite{li2014scaling,ho2013more,li2014communication}, usually require immediate inter-node communication after performing local gradient descent to simulate centralized learning and guarantee convergence for the overall model. In practice, the rapidly expanding size of machine learning models, some with tens of millions of trainable parameters, often renders the communication step an increasingly significant bottleneck. This issue is further compounded by concerns about privacy and security, bandwidth requirements, power consumption and information delay~\cite{duchi2012dual,agarwal2011distributed}, where it is favorable to reduce the communication cost and only exchange information when necessary. Among the possible solutions to alleviate the communication overhead, one practical method is using compressed models or gradients in communication. For example, studies on quantized gradients~\cite{seide20141,alistarh2017qsgd,wangni2018gradient} and sparsified models~\cite{aji2017sparse,lin2017deep,dryden2016communication,li2018optimal} allow each node to pass low-bit gradients or models to the server instead of the raw data.
However, extra noise is introduced by quantization or sparsification, and communication is still required at every iteration. Another possible approach~\cite{smith2018cocoa,ma2017distributed} to reducing the communication cost is to update the local models by performing $T_i > 1$ iterations of local GD before sending the model to the server. Two fundamental questions in this scenario are: \begin{center} \begin{itemize} \item \textbf{Question 1}: \emph{Does the algorithm still converge with an arbitrary choice of $T_i$?} \item \textbf{Question 2}: \emph{Does more local updating $T_i$ definitely lead to less communication?} \end{itemize} \end{center} From a general distributed optimization perspective, the answers are negative. Conventional studies \cite{zhang2012communication,zhang2016parallel,zhou2017convergence,stich2018local,yu2018parallel} show that although it may not be necessary to communicate after each local GD step, frequent communication after $T_i$ steps is still needed. Moreover, to ensure convergence, a decaying learning rate is required or, alternatively, the loss bound has a finite term that may grow with increasing $T_i$~\cite{stich2018local,yu2018parallel}. More importantly, these prior results also indicate that in general, a bigger $T_i$ leads to poorer optimization performance and may not necessarily reduce the communication cost needed to obtain the same overall precision. However, in this paper, we draw completely different conclusions from the previous results and show that the answers can be positive in certain scenarios. We provide a series of theoretical analyses for convex cases and show that convergence holds for an arbitrary choice of $T_i$, and even for $T_i = \infty$, where each node updates its local model to optimality. Moreover, our results also answer the second question by showing that more local updating can reduce the overall need for communication.
Beyond convex cases, we show that similar conclusions may still hold by providing theoretical analysis of simple non-convex settings whose optimal sets are affine subspaces, and we also provide a series of experimental evidence for deep learning. These different conclusions rest upon the following intersection assumption we make throughout the paper: \begin{Assumption} \label{Ass:intersection} Denoting $S_i := \{ x \in \mathbb{R}^d : f_i(x) \leq f_i(y) \ \text{holds for}\ \forall y\in\mathbb{R}^d \}$ as the optimal set of $f_i$, the set $S := \cap_{i=1}^m S_i$ is non-empty. \end{Assumption} This assumption is inspired both by a recent phenomenon in modern machine learning named ``over-parameterization''~\cite{zhang2016understanding,arora2018optimization} and by a classical mathematical problem named the ``convex feasibility problem''~\cite{von1949rings}. Modern machine learning models, especially deep learning models, often consist of huge numbers of parameters that far exceed the number of instances~\cite{ma2017power,bassily2018exponential,oymak2018overparameterized,allen2018convergence}. This over-parameterization phenomenon poses significant communication \emph{challenges} to distributed optimization as it requires enormous bandwidth to transport local models. But in this paper, we show that this phenomenon also brings new \emph{hope} that allows each node to \emph{reduce the communication frequency} arbitrarily by updating its local model to (sub)optimality before sending information to the server. The underlying reason is that the training loss of over-parameterized models can often easily approach 0 due to the degeneracy of the over-parameterized functions~\cite{zhang2016understanding,du2018gradient,allen2018convergence}, indicating that there exists a common $x^*$ such that all local losses $f_i(x^*)$ are 0 when the data are distributed to multiple nodes, and the above intersection assumption holds naturally.
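A minimal numerical illustration of Assumption~\ref{Ass:intersection} in an over-parameterized linear regression setting (all sizes and data below are hypothetical): each node's optimal set is a 7-dimensional affine subspace, yet a common interpolating point exists by construction.

```python
import numpy as np

rng = np.random.default_rng(42)
d, k = 10, 3                          # 10 parameters, 3 samples per node: over-parameterized
x_true = rng.normal(size=d)           # a model interpolating both local datasets
A1, A2 = rng.normal(size=(k, d)), rng.normal(size=(k, d))
b1, b2 = A1 @ x_true, A2 @ x_true

f1 = lambda x: 0.5 * np.sum((A1 @ x - b1) ** 2)
f2 = lambda x: 0.5 * np.sum((A2 @ x - b2) ** 2)

# x_true lies in both S_1 and S_2, so S = S_1 ∩ S_2 is non-empty ...
assert f1(x_true) < 1e-16 and f2(x_true) < 1e-16

# ... although a particular minimizer of f_1 need not minimize f_2:
x1 = np.linalg.lstsq(A1, b1, rcond=None)[0]   # min-norm minimizer of f_1
assert f1(x1) < 1e-16 and f2(x1) > 1e-6
```

Here $S_i = x_{\text{true}} + \text{null}(A_i)$, so each optimal set is a $(d-k)$-dimensional affine subspace and the intersection contains the common interpolating point.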
Our work also connects over-parameterized machine learning models with the classical convex feasibility problem~\cite{von1949rings} in mathematics, which assumes the intersection of $m$ closed convex subsets $S_i$ of a Hilbert space is non-empty and uses sequential projections~\cite{bauschke1996projection,aharoni1989block,censor2016implicit} to find a feasible point $x^* \in \cap_{i=1}^m S_i$. This non-empty intersection assumption resembles Assumption~\ref{Ass:intersection}, but the classical approach of direct projections in the convex feasibility problem is often impractical for machine learning tasks, since the local feasible sets cannot be easily characterized. In this paper, we show that the projection step can be replaced by repeated local GD steps and convergence can still be obtained. \textit{Notation}: \ \ Throughout this paper, we denote by $\|\cdot\|$ the Euclidean norm. If the argument is a matrix, it is the induced $2$-norm. For a closed convex set $S$, we denote by $P_S(x)$ the projection of $x$ onto $S$. The shortest distance between a point $x$ and a set $S$ is denoted by $\dS{x}$. \section{Convex Cases} \label{sec:convex} To start, we first focus on convex scenarios. Each function $f_i$ in~(\ref{eq:goal}) is assumed to be convex and $L_i$-smooth (i.e. $\nabla f_i$ is Lipschitz with constant $L_i$), and therefore the overall loss function $f$ is also convex and $L$-smooth, with $L=\tfrac{1}{m}\sum_{i=1}^{m} L_i$. We consider the classical algorithm shown in Alg~\ref{alg}, where each node $i$ is allowed to perform $T_i$ local GD steps with step size $\eta_i$ before interacting with the server. Denoting the point after the $n$-th communication by $x_n$, the following lemma establishes a useful bound on the evolution of the distance between $x_n$ and the common optimal set $S$. \begin{restatable}{Lemma}{lemmaone} \label{lm:mother_equation} Consider the algorithm in Alg~\ref{alg} and assume each $f_i$ is convex and $L_i$-smooth.
Then, \begin{equation} \label{eq:mother_equation} \dS{x_{n+1}}^2 \leq \dS{x_{n}}^2 - \tfrac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2, \end{equation} where $\alpha_i = \eta_i (\tfrac{2}{L_i} - \eta_i)$. \end{restatable} \begin{Proof} For any point $x^* \in S$, by convexity of $\|\cdot\|^2$, \begin{align} \label{eq:xn+1} \begin{split} \| x_{n+1} - x^* \|^2 = \| \tfrac{1}{m} \sum_{i=1}^{m} x_n^{i,T_i} - x^* \|^2 &\leq \tfrac{1}{m} \sum_{i=1}^{m} \| x_n^{i,T_i} - x^* \|^2. \end{split} \end{align} For each node, we have \begin{align*} & \| x_n^{i,t+1} - x^* \|^2 \\ =& \| x_n^{i,t} - x^* - \eta_i \nabla f_i (x_n^{i,t}) \|^2 \\ =& \| x_n^{i,t} - x^* \|^2 - 2 \eta_i \langle x_n^{i,t} - x^*, \nabla f_i (x_n^{i,t}) \rangle +(\eta_i)^2 \| \nabla f_i (x_n^{i,t}) \|^2 \\ \leq& \| x_n^{i,t} - x^* \|^2 - \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2, \end{align*} where in the last step we used the co-coercivity of convex and $L_i$-smooth functions, together with the fact that $x^* \in S_i$ (by Assumption~\ref{Ass:intersection}) and hence $\nabla f_i(x^*) = 0$. Summing the above from $t=0$ to $t=T_i-1$ and noticing $x_n^{i,0} = x_n$, we have \begin{align*} \| x_n^{i,T_i} - x^* \|^2 \leq \| x_n - x^* \|^2 - \alpha_i \sum_{t=0}^{T_i - 1} \| \nabla f_i (x_n^{i,t}) \|^2. \end{align*} Combining this with Eq.~\eqref{eq:xn+1}, we obtain \begin{equation*} \| x_{n+1} - x^* \|^2 \leq \| x_{n} - x^* \|^2 - \tfrac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2. \end{equation*} Setting $x^* = P_S(x_n)$ and noticing that $\dS{x_{n+1}} \leq d(x_{n+1},x^*)$, we conclude the proof.
\end{Proof} \begin{algorithm}[!t] \caption{Model Averaging for Distributed Optimization} \label{alg} \begin{algorithmic}[1] \item[\textbf{\underline{Worker $i=1,\cdots,m$:}}] \STATE{pull $x_n$ from server and initialize $x_n^{i,0} = x_n$} \FOR{$t=0,\cdots,T_i - 1$} \STATE{update $x_n^{i,t+1} = x_n^{i,t} - \eta_i \nabla f_i(x_n^{i,t}) $} \ENDFOR \STATE{push $x_n^{i,T_i}$ to server} \end{algorithmic} \begin{algorithmic}[1] \item[\textbf{\underline{Server:}}] \STATE{average model: $x_{n+1} = \frac{1}{m}\sum_{i=1}^m x_n^{i,T_i}$} \end{algorithmic} \end{algorithm} \paragraph{Remark:} For general convex cases, Lemma~\ref{lm:mother_equation} already provides answers to the two questions raised in Sec~\ref{sec:intro}. (1) To see that the gradient norm converges for arbitrary $T_i$, we first notice that $\{\dS{x_n}^2\}$ is a non-negative, non-increasing sequence when $\alpha_i>0$. Therefore, for any $\delta > 0$, there exists a sufficiently large $n$ such that \begin{equation*} \frac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i \Vert \nabla f_i (x_n^{i,t}) \Vert^2 \leq \dS{x_n}^2 - \dS{x_{n+1}}^2 \leq \delta. \end{equation*} The above inequality implies there exists a constant $C$ (e.g. $C = m/\min_i \alpha_i$) such that $\Vert \nabla f_i (x_n^{i,t}) \Vert^2 \leq C\delta$ holds for all $i \in \{1,\cdots,m\}$ and $t\in \{0,\cdots,T_i-1\}$. Selecting $t=0$ and noticing $x_n^{i,0} = x_n$, we obtain \begin{equation*} \Vert \nabla f(x_n) \Vert^2 = \Vert\frac{1}{m}\sum_{i=1}^{m} \nabla f_i(x_n)\Vert^2 \leq \frac{1}{m}\sum_{i=1}^{m} \Vert\nabla f_i(x_n) \Vert^2 \leq C\delta. \end{equation*} Since $C\delta$ can be made arbitrarily small, we conclude that the gradient residuals $\Vert \nabla f(x_n) \Vert^2$ vanish regardless of the choice of $T_i$. (2) To answer Question 2, we notice that a larger $T_i$ decreases $\dS{x_n}$ more aggressively in Lemma~\ref{lm:mother_equation}. Namely, more local updates $T_i$ lead to fewer communication rounds $n$ needed to reach the same distance.
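Points (1) and (2) can also be checked numerically. The following is a minimal NumPy sketch of Alg~\ref{alg} on a synthetic consistent least-squares problem (our own illustrative setup, with assumed step sizes $\eta_i = 1/L_i$), counting communication rounds until $\|\nabla f(x_n)\|^2$ falls below a tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_i = 50, 4, 5                      # 50 parameters, 4 nodes, 5 samples each: over-parameterized
x_true = rng.normal(size=d)               # a shared zero-loss solution exists by construction
A = [rng.normal(size=(n_i, d)) for _ in range(m)]
b = [A_i @ x_true for A_i in A]

def local_gd(x, A_i, b_i, T):
    # T steps of GD on f_i(x) = 0.5 * ||A_i x - b_i||^2 with constant step eta_i = 1/L_i
    L_i = np.linalg.norm(A_i, 2) ** 2     # smoothness constant: squared spectral norm
    for _ in range(T):
        x = x - (1.0 / L_i) * A_i.T @ (A_i @ x - b_i)
    return x

def rounds_to_tolerance(T, tol=1e-6, max_rounds=10000):
    x = np.zeros(d)
    for n in range(max_rounds):
        grad = np.mean([A_i.T @ (A_i @ x - b_i) for A_i, b_i in zip(A, b)], axis=0)
        if np.linalg.norm(grad) ** 2 <= tol:
            return n                      # communication rounds used
        x = np.mean([local_gd(x, A_i, b_i, T) for A_i, b_i in zip(A, b)], axis=0)
    return max_rounds

r1, r10 = rounds_to_tolerance(T=1), rounds_to_tolerance(T=10)
print("rounds:", r1, r10)
```

On such consistent (intersecting) data, a constant step size suffices and increasing $T$ typically reduces the number of communication rounds, in line with the remark.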
(3) The above analysis does not rely on a specific choice of learning rate $\eta_i$. In particular, a constant learning rate $\eta_i$ can be used in local GD, in contrast to studies like \cite{zhou2017convergence,stich2018local,yu2018parallel}, where the algorithm relies on a diminishing stepsize to obtain convergence. With a constant stepsize, the local learning on each node $i$ can potentially be faster, especially in the restricted strongly convex cases considered in Sec~\ref{sec:linear_conv_sc}. The above analysis provides qualitative answers to the questions in Sec~\ref{sec:intro}; we now restate these results as more concrete theorems. \subsection{Sublinear Convergence Rate for General Convex Case} \begin{restatable}{Theorem}{theoremtwo} \label{th:convergeceGuarantee} Suppose each $f_i$ is convex and $L_i$-smooth, $\alpha_i > 0$, and $1\leq T_i \leq \infty$ for all $i=1,\dots,m$. Then, \begin{enumerate} \item[(i)] $\liminf_{n\rightarrow\infty} n \| \nabla f (x_n) \|^2 = 0$, \item[(ii)] $\liminf_{n\rightarrow\infty} n^{\tfrac{1}{2}} [ f (x_n) - \min_{y\in\mathbb{R}^d} f(y) ] = 0$. \end{enumerate} \end{restatable} \begin{Proof} Eq.~\eqref{eq:mother_equation} implies the sequence $\{ \dS{x_n}^2 : n\geq0 \}$ is non-increasing, so its limit exists. Define $z_n := \tfrac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2 \geq 0$. Summing Eq.~\eqref{eq:mother_equation} and taking the limit, we get \begin{align*} \lim_{n\rightarrow\infty} \sum_{k=0}^{n-1} z_k \leq \dS{x_0}^2 - \lim_{n\rightarrow\infty} \dS{x_n}^2 < \infty, \end{align*} which then implies $\liminf_{n\rightarrow\infty} n z_n = 0$. Since $T_i\geq 1$, we have $$z_n \geq \tfrac{\min_i \alpha_i}{m}\sum_{i=1}^{m} \| \nabla f_i(x^{i,0}_n) \|^2 \geq \min_i \alpha_i\| \nabla f(x_n) \|^2.$$ Hence $\liminf_{n\rightarrow\infty} n \| \nabla f (x_n) \|^2 = 0.$ This proves $(i)$.
To show $(ii)$, observe that for any $x^* \in S$, by convexity we have \begin{align*} f(x^*) \geq f(x_n) + \langle \nabla f(x_n), x^* - x_n \rangle, \end{align*} and so, choosing $x^* = P_S(x_n)$ and using the monotonicity $\dS{x_n} \leq \dS{x_0}$, we get $ f(x_n) - f(x^*) \leq \| \nabla f(x_n) \| \dS{x_0}$; combining this with $(i)$, $(ii)$ follows. \end{Proof} \paragraph{Remark:} (1) Theorem~\ref{th:convergeceGuarantee} shows that if we are working with convex functions with Lipschitz gradients, then the intersection assumption~\ref{Ass:intersection} is enough to guarantee the convergence of Alg~\ref{alg}, in the sense that the overall gradient norms $\| \nabla f (x_n) \|^2$ vanish, and moreover a subsequence does so at a rate of $\mathcal{O}(1/n)$; if, furthermore, the gradient norms form a monotone sequence, then this rate holds for the whole sequence. (2) Theorem~\ref{th:convergeceGuarantee} indicates that the algorithm converges for arbitrary choices of the number of local updates, including $T_i = \infty$, which represents the idealized situation where the local gradient descent problem is solved to completion before each communication step. This is attractive in practice, as it allows each node to perform computation independently from the other nodes for a long time before having to exchange information. \vspace{0.1in} \subsection{Linear Convergence Rate for Restricted Strongly Convex Case} \label{sec:linear_conv_sc} The convergence rate bound in Theorem~\ref{th:convergeceGuarantee} is far from fast. In the following, we show that if some additional assumptions are adopted, then convergence can be shown to be linear in the number of communication steps. \begin{Assumption} \label{Ass:restricted_sc} Each $f_i$ satisfies restricted strong convexity, i.e.
there exist constants $\mu_i > 0$ such that $$ \| \nabla f_i (x) \| \geq \mu_i \dSi{x} \quad \text{for all } x \in \mathbb{R}^d.$$ \end{Assumption} \begin{Assumption} \label{Ass:separation} The optimal sets $\{ S_i \}$ satisfy a separation property: there exists a constant $c>1$ such that $$\dS{x}^2 \leq c \cdot \tfrac{1}{m} \sum_{i=1}^{m} \dSi{x}^2.$$ \end{Assumption} Assumption~\ref{Ass:restricted_sc} is a relaxed version of strong convexity, and coincides with it if $S_i$ is a singleton. Basically, it says that outside of the minimum set, the functions $f_i$ behave like strongly convex functions lower-bounded by some quadratic function. Assumption~\ref{Ass:separation} is an interesting geometric property, which roughly translates to the statement that the ``angle of separation'' between the optimal sets is bounded from below. The fact that convergence properties depend on geometric properties of the minimum sets also highlights the difference from classical distributed optimization. \begin{restatable}{Theorem}{theoremthree} \label{th:restricted_sc_linear_conv} Let assumptions~\ref{Ass:restricted_sc} and~\ref{Ass:separation} be satisfied. Then, for any $1\leq T_i \leq \infty$ we have \begin{equation*} \dS{x_n} \leq \rho^n \dS{x_0}, \end{equation*} where $\rho = \sqrt{1 - c^{-1} \min_i\{ \alpha_i\mu_i^2 \} }$; note that $\alpha_i \mu_i^{2} \leq 1$ always holds. \end{restatable} \begin{Proof} From Eq.~\eqref{eq:mother_equation} (keeping only the $t=0$ terms) and assumptions~\ref{Ass:restricted_sc} and~\ref{Ass:separation}, we have for any $T_i\geq 1$ \begin{align*} \dS{x_{n+1}}^2 \leq& \dS{x_n}^2 - \tfrac{1}{m}\sum_{i=1}^{m} \alpha_i \| \nabla f_i (x_n) \|^2 \\ \leq& \dS{x_n}^2 - \tfrac{1}{m}\sum_{i=1}^{m} \alpha_i \mu_i^2 \dSi{x_n}^2 \\ \leq& \dS{x_n}^2 - \min_i\{ \alpha_i\mu_i^2 \} \tfrac{1}{m} \sum_{i=1}^{m} \dSi{x_n}^2 \\ \leq& [1 - c^{-1} \min_i\{ \alpha_i\mu_i^2 \} ] \dS{x_n}^2, \end{align*} where $c>1$. Now, observe that $\alpha_i \leq 1/L_i^2$ and $\mu_i \leq L_i$, and so $\kappa^{-1}:=\min_i(\alpha_i\mu_i^2) \in (0,1]$.
In fact, $\kappa$ can be understood as an effective condition number for $f$. Thus $\rho = \sqrt{1-(c\kappa)^{-1}} \in (0,1)$ and the claim follows. \end{Proof} Theorem~\ref{th:restricted_sc_linear_conv} shows that if each local function $f_i$ satisfies the restricted strong convexity assumption and the geometric assumption also holds, then a linear convergence rate is obtained. As in Theorem~\ref{th:convergeceGuarantee}, any $T_i$, including infinity, guarantees convergence, which does not hold in scenarios without the intersection assumption. \subsection{Convex Experiments} \label{sec:convex_exp} \subsubsection{General Convex Case} We first validate the general convex case, where Theorem~\ref{th:convergeceGuarantee} implies that the gradient residuals $\| \nabla f (x_n) \|^2$ vanish at a rate of approximately $\mathcal{O}(1/n)$. We consider a synthetic example from~\cite{beck2003convergence}, in which $x_n$ can approach the optimal point $x^*$ arbitrarily slowly. Specifically, the loss functions on the two nodes are defined as $f_1(x,y) = \max^2 \left( \sqrt{x^2 + (y-1)^2}-1,0 \right)$ and $f_2(x,y) = \max^2 \left( y, 0 \right)$, so that the feasible set $S_1$ intersects $S_2$ only at the point $(0,0)$. Heuristically, consider a point $x$ near the boundary of $S_1$, so that $d(x,S_1) \approx 0$. In this case, with $\theta$ the separation angle shown in Fig~\ref{fig:syn}, \begin{align*} \frac{d(x,S)}{\frac{1}{m} \sum_{i=1}^{m} d(x,S_i)} &\approx \frac{d(x,S)}{\frac{1}{2} d(x,S_2)} = \frac{2}{\sin\theta}. \end{align*} As $\theta \rightarrow 0$, the left-hand side goes to infinity, and therefore the separation condition~\ref{Ass:separation} does not hold in this case. \begin{figure}[!h] \centering \includegraphics[width=0.35\linewidth]{Crop2.jpg} \caption{Synthetic experiment. The separation condition is not satisfied.} \label{fig:syn} \end{figure} A starting point $x_0$ is randomly selected, and each node performs $T_i=10$ gradient descent steps independently before the parameters are combined.
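For reference, this synthetic setup can be reproduced with a short script (our own re-implementation with an assumed constant step size $\eta_i = 1/4$; both $f_i$ are $2$-smooth, so $\alpha_i > 0$), tracking the gradient residual over communication rounds.

```python
import numpy as np

def grad_f1(p):                      # f1 = max(sqrt(x^2 + (y-1)^2) - 1, 0)^2
    r = np.hypot(p[0], p[1] - 1.0)
    if r <= 1.0:
        return np.zeros(2)           # inside the disk S1: zero loss, zero gradient
    return 2.0 * (r - 1.0) * np.array([p[0], p[1] - 1.0]) / r

def grad_f2(p):                      # f2 = max(y, 0)^2
    return np.array([0.0, 2.0 * max(p[1], 0.0)])

eta, T, rounds = 0.25, 10, 5000      # both f_i are 2-smooth, so eta < 2/L_i = 1 gives alpha_i > 0
p, res = np.array([2.0, 2.0]), []
for n in range(rounds):
    g = 0.5 * (grad_f1(p) + grad_f2(p))
    res.append(float(np.dot(g, g)))  # gradient residual ||grad f(x_n)||^2
    locals_out = []
    for grad in (grad_f1, grad_f2):
        q = p.copy()
        for _ in range(T):           # T local GD steps on each node
            q = q - eta * grad(q)
        locals_out.append(q)
    p = 0.5 * (locals_out[0] + locals_out[1])
print(res[0], res[-1])
```

The residual decays only at a slow, roughly power-law rate, consistent with the lack of the separation property for this example.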
Fig~\ref{ConvexA} reports how the gradient residuals $\| \nabla f (x_n) \|^2$ and the function values $f(x_n)$ vanish after each combination. To aid visualization, we also plot an auxiliary function $\hat{f}$ with a specified gradient vanishing speed $\Vert \nabla \hat{f} \Vert^2 =C/n$ as a reference (black line). The gradient residuals $\| \nabla f (x_n) \|^2$ follow a trend similar to this reference line, validating the conclusion of Theorem~\ref{th:convergeceGuarantee} that the gradient residuals vanish at a rate of approximately $\mathcal{O}(1/n)$. \begin{figure*}[!t] \centering \hspace*{\fill} \subfigure[Experimental results on the synthetic dataset with $T_i = 10$. The black line is a reference function with $\Vert \nabla \hat{f} \Vert^2 =C/n$. The gradient residual $\| \nabla f (x_n) \|^2$ can be observed to vanish at a speed similar to $1/n$.] {\label{ConvexA} \includegraphics[width=.44\linewidth]{1_T_Convergence_Rate_Loss_GD_MSE} } \hfill \subfigure[Mean-square regression on the Cancer dataset with various $T_i$. The threshold $\Vert \nabla f_i \Vert^2 \leq 10^{-8}$ is set to simulate $T_i = \infty$. Linear convergence rates can be observed for all $T_i$. ] {\label{ConvexB} \includegraphics[width=.44\linewidth]{log_T_1000_MSE_Convex_Cancer_GD.pdf} } \hspace*{\fill} \caption{Convex experiments. The x-axis on the left denotes $\log(n)$ and the x-axis on the right denotes $n$; the y-axis represents the gradient residuals $\| \nabla f (x_n) \|^2$.} \label{fig:convex} \end{figure*} \subsubsection{Linear Convergence Case} To validate the linear convergence rate in Theorem~\ref{th:restricted_sc_linear_conv}, we perform mean-square regression on the colon-cancer dataset~\cite{alon1999broad} from the LIBSVM data repository\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}}, which consists of 62 tumor and normal colon tissue samples with 2000 features.
Data are evenly distributed to $m=2$ nodes, and the model on each node is over-parameterized, since the number of features far exceeds the number of instances. As in the previous experiment, each node performs $T_i$ rounds of GD before communication. The infinite case $T_i = \infty$ is simulated by running GD until the local gradient norm is sufficiently small ($\Vert \nabla f_i \Vert^2 \leq 10^{-8}$). The restricted strong convexity and separation assumptions are both satisfied in this case, and the experimental results in Fig~\ref{ConvexB} are consistent with the conclusions of Theorem~\ref{th:restricted_sc_linear_conv}: (1) every choice of $T_i$, including the infinite case, leads to a linear convergence rate; (2) more local updates $T_i$ decrease the overall gradient residuals $\| \nabla f (x_n) \|^2$ faster, indicating that sufficient local updating can reduce the overall communication cost in over-parameterized cases. \vspace{0.2in} \section{Beyond Convexity} \label{sec:nonconvex} Having established some basic convergence properties in the convex case, a natural question is whether these results hold beyond the convexity assumption. Instead of treating the fully general case, we first consider the following simple extension outside the convex realm, under which some convergence results can be obtained. \subsection{Quasi-Convex Case} Among the non-convex scenarios, we first focus on the analysis of quasi-convex cases under the following assumptions: \begin{Assumption} \label{Ass:quasi_convex} Each $f_i$ is differentiable and quasi-convex, i.e. has convex sub-level sets. Equivalently, for any $x,y\in\mathbb{R}^d$, $f_i(\lambda x + (1-\lambda) y ) \leq \max\{ f_i(x), f_i(y) \}$ for $\lambda \in [0,1]$. \end{Assumption} \begin{Assumption} \label{Ass:affine_opt_set} Each $S_i$ is an affine subspace, i.e.
there exist $x^*_i \in \mathbb{R}^d$ and subspaces $U_i \subseteq \mathbb{R}^d$ such that \begin{align*} S_i = \{x^*_i\} + U_i \equiv \{ x^*_i + u: u\in U_i \}. \end{align*} \end{Assumption} Assumption~\ref{Ass:quasi_convex} is a relaxation of convexity; every convex function is quasi-convex, but the converse does not hold. For example, the sigmoid and tanh functions are quasi-convex, but not convex. Assumption~\ref{Ass:affine_opt_set} says that the optimal sets of the local functions are affine subspaces. This is quite a strong assumption, but it greatly simplifies the analysis. Although it is unlikely to hold in general situations, it does have some heuristic connections to neural networks, which we discuss subsequently. \begin{restatable}{Lemma}{lemmafour} \label{lm:affine} Let $f_i:\mathbb{R}^d\rightarrow \mathbb{R}$ satisfy assumptions~\ref{Ass:quasi_convex} and~\ref{Ass:affine_opt_set} above. Then, for every $x\in\mathbb{R}^d$ we have \begin{align*} f_i(x + u) = f_i(x) \end{align*} for all $u\in U_i$. \end{restatable} \begin{Proof} Without loss of generality, we may assume $\min_y f_i(y) = 0$. Let $x\in \mathbb{R}^d$. If $f_i(x) = 0$ then $x\in S_i$, hence $x + u \in S_i$ and we are done. Suppose instead that $f_i(x) = c > 0$. We first show that $f_i(x+u) \leq c$ for all $u \in U_i$. Suppose for the sake of contradiction that there exists $u\in U_i$ with $f_i(x+u) = c + \delta$, $\delta>0$. Then, by continuity (the intermediate value theorem along the segment from $x_i^*$ to $x+u$), there exists a $\lambda \in (0,1)$ such that $f_i(\lambda(x+u) + (1-\lambda) x_i^*) = c + \delta/2$. Set $y:=\lambda(x+u) + (1-\lambda) x_i^*$ and $z := (y - \lambda x) / (1-\lambda)$, so that $y = \lambda x + (1-\lambda) z$; then by quasi-convexity we have \begin{align*} f_i(y) \leq \max \{ f_i(x), f_i(z) \}. \end{align*} But, by construction, $z = x^*_i + \tfrac{\lambda}{1-\lambda} u \in S_i$, and thus $f_i(z) = 0$, and so the above gives \begin{align*} c < c + \delta / 2 = f_i(y) \leq \max \{ c, 0 \} = c, \end{align*} which is a contradiction.
Hence, we must have $f_i(x+u) \leq f_i(x)$ for all $u\in U_i$; replacing $x$ with $x+u$ and $u$ with $-u$ gives the reverse inequality, and so we must have $f_i(x+u) = f_i(x)$ for all $u\in U_i$. \end{Proof} \begin{restatable}{Lemma}{lemmafive} \label{corr:gd_orthogonality} For every $x\in \mathbb{R}^d$, we have $\langle u, \nabla f_i(x) \rangle = 0$ for all $u\in U_i$. In particular, if $\{ x^{i,t} : t\geq 0, x^{i,0} = x\}$ is a sequence of convergent gradient descent iterates under loss $f_i$, then for each $t\in [0,\infty]$, $x - x^{i,t} \perp U_i$ and $x^{i,\infty} = P_{S_i}(x)$. \end{restatable} \begin{Proof} Since $f_i$ is differentiable, Lemma~\ref{lm:affine} gives $\langle u, \nabla f_i(x)\rangle = \lim_{\epsilon\rightarrow 0} \tfrac{1}{\epsilon} [f_i(x+\epsilon u) - f_i(x)] = 0$. Now, take any $u\in U_i$. We have $\langle u, x^{i,t+1} - x^{i,t} \rangle = - \eta_i \langle u, \nabla f_i(x^{i,t}) \rangle = 0$. Summing over $t$, we have $\langle u, x^{i,t} - x \rangle = 0$. If gradient descent converges, we can take the limit $t\rightarrow\infty$ to obtain $\langle u, x^{i,\infty} - x \rangle = 0$. But $x^{i,\infty} \in S_i$, which is closed since $f_i$ is continuous, and so by the characterization of the orthogonal projection onto an affine subspace, $x^{i,\infty} = P_{S_i}(x)$. \end{Proof} \vspace{0.2in} Recall from Sec~\ref{sec:linear_conv_sc} that to ensure linear convergence, a geometric assumption on the optimal sets was required. We show in the following result that in the current setting, where the optimal sets are affine subspaces, this separation condition is automatically satisfied. \begin{restatable}{Lemma}{lemmasix} \label{lm:affine_separation} Let $S_i$, $i=1,\dots,m$ be a collection of affine subspaces with non-empty intersection $S = \cap_{i=1}^{m} S_i$. Then, \begin{align*} \tfrac{1}{m} \sum_{i=1}^{m} \dSi{x} \leq \dS{x} \leq \tfrac{c}{m} \sum_{i=1}^{m} \dSi{x}, \quad x\in \mathbb{R}^d, \end{align*} for some constant $c \geq 1$, with equality (i.e. $c=1$) if and only if $S_i = S$ for all $i$.
\end{restatable} \begin{Proof} The lower bound is immediate, since $S\subseteq S_i$ implies $\dSi{x} \leq \dS{x}$. We now prove the upper bound. By a translation, we can assume without loss of generality that $S_1,\dots,S_m$ and $S$ are subspaces. Thus, for each $i$ there exists a matrix $A_i$ whose rows are an orthonormal basis for $S_i^\perp$ and $\ker(A_i) = S_i$. The projection operator onto $S_i$ is then $P_i = I - A_i^\dag A_i$ (where $\dag$ denotes the Moore-Penrose pseudo-inverse) and the projection onto $S_i^\perp$ is $P_i^\perp = A_i^\dag A_i$. In particular, for each $x\in \mathbb{R}^d$, we have $\dSi{x} = \| P_i^\perp x \| = \| A_i^\dag A_i x \|$. Now, we show that the projection $P$ onto $S$ is $P = I - Q^\dag Q$, where $Q = I - \tfrac{1}{m} \sum_{i=1}^{m} P_i = \tfrac{1}{m} \sum_{i=1}^{m} A_i^\dag A_i$, and consequently $P^\perp = Q^\dag Q$. To show this, it is enough to show that $S = \ker(\tfrac{1}{m}\sum_{i=1}^m A_i^\dag A_i)$. The forward inclusion is trivial, and the reverse inclusion follows from the fact that for any $x\in \ker(\tfrac{1}{m}\sum_{i=1}^m A_i^\dag A_i)$, we have \begin{align*} \begin{split} \| x \| = \| \tfrac{1}{m} \sum_{i=1}^{m} P_i x \| \leq \tfrac{1}{m} \sum_{i=1}^{m} \| P_i x \| \leq \tfrac{1}{m} \sum_{i=1}^{m} \| x \| = \| x \|, \end{split} \end{align*} with equality if and only if $P_i x = x$ for all $i$, i.e. $x\in S$. Now, we have \begin{align*} \begin{split} \dS{x} =& \| P^\perp x \| = \| Q^\dag Q x \| \\ \leq& \| Q^\dag \| \| Q x \| \\ \leq& {\sigma_{\text{min}}(Q)}^{-1} \| \tfrac{1}{m} \sum_{i=1}^{m} A_i^\dag A_i x \| \\ \leq& {\sigma_{\text{min}}(Q)}^{-1} \tfrac{1}{m} \sum_{i=1}^{m} \| A_i^\dag A_i x \| \\ =& {\sigma_{\text{min}}(Q)}^{-1} \tfrac{1}{m} \sum_{i=1}^{m} \dSi{x}, \end{split} \end{align*} where $\sigma_{\text{min}}(Q)$ denotes the smallest non-zero singular value of $Q$, so that $\|Q^\dag\| = {\sigma_{\text{min}}(Q)}^{-1}$.
Note that since the rows of $A_i$ are orthonormal, $\sigma_{\text{min}}(Q) \leq \tfrac{1}{m}\sum_{i=1}^{m} \| A_i^\dag A_i \| = 1$, with equality if and only if all the $A_i^\dag A_i$ are multiples of each other (hence identical, by the orthonormality constraint), which implies $S_i=S$ for all $i$. Moreover, if $Q$ has no non-zero singular value, then $Q=0$, and since each $A_i^\dag A_i$ is positive semi-definite, we get $A_i = 0$ and $S_i = \mathbb{R}^d$ for all $i$, in which case any $c>0$ suffices. Thus, we can assume $c = {\sigma_{\text{min}}(Q)}^{-1} < \infty$. \end{Proof} \vspace{0.2in} With the separation condition, we can now establish the convergence rate in the current setting. \begin{restatable}{Theorem}{theoremseven} \label{th:linear_conv_affine} Let Assumptions~\ref{Ass:quasi_convex} and \ref{Ass:affine_opt_set} hold. Take $T_i=\infty$ for all $i$ and suppose that each local gradient descent converges. Then, \begin{align*} \dS{x_n} \leq {(1 - c^{-2})}^{\frac{n}{2}} \dS{x_0}. \end{align*}\end{restatable} \begin{Proof}{ Applying Lemma~\ref{lm:affine_separation} together with the Cauchy-Schwarz inequality, we have \begin{align*} \dS{x_n}^2 \leq& \tfrac{c^2}{m} \sum_{i=1}^{m} \dSi{x_n}^2 = \tfrac{c^2}{m} \sum_{i=1}^{m} \| x_n - P_{S_i}(x_n) \|^2. \end{align*} By Lemma~\ref{corr:gd_orthogonality}, $x^{i,T_i}_n = x^{i,\infty}_n = P_{S_i}(x_n)$; since $P_{S}(x_n) \in S_i$, the Pythagorean identity for the projection onto $S_i$ gives \begin{align*} \| x_n - x^{i,T_i}_n \|^2 \leq& \| x_n - P_{S}(x_n) \|^2 - \| x^{i,T_i}_n - P_{S}(x_n) \|^2, \end{align*} and so, \begin{align*} \dS{x_n}^2 \leq& c^2 \dS{x_n}^2 - \tfrac{c^2}{m} \sum_{i=1}^{m} \| P_{S_i}(x_n) - P_{S}(x_n) \|^2 \\ \leq& c^2 \dS{x_n}^2 - c^2 \| \tfrac{1}{m} \sum_{i=1}^{m} P_{S_i}(x_n) - P_{S}(x_n) \|^2 \\ \leq& c^2 \dS{x_n}^2 - c^2 \dS{x_{n+1}}^2, \end{align*} where the last step uses $x_{n+1} = \tfrac{1}{m}\sum_{i=1}^{m} P_{S_i}(x_n)$. Rearranging, we have \begin{align*} \dS{x_{n+1}} \leq \sqrt{1 - c^{-2}} \dS{x_{n}}. \end{align*}} \end{Proof} \paragraph{Remark:} Gradient descent on quasi-convex functions can be arbitrarily slow, and therefore selecting a small $T_i$ may also lead to an arbitrarily slow convergence rate.
In the above theorem, we set $T_i= \infty$ to make sure the local models receive sufficient updates, so that the overall convergence is guaranteed. \clearpage \subsection{Deep Learning} As alluded to earlier, although Assumption~\ref{Ass:affine_opt_set} is quite restrictive, it represents an interesting class of problems where gradient descent and projections are intimately connected. Moreover, deep learning models typically consist of nested affine transformations and nonlinear activations. In the over-parameterized setting, where the numbers of hidden nodes are large, the affine transformations create degeneracies exactly in the form of affine subspaces. Of course, in general the optimal sets of deep learning loss functions may be unions of affine subspaces, and furthermore the loss functions need not be quasi-convex, so the result above does not directly apply to deep neural networks. But we still obtain the \textbf{motivation} that in deep learning scenarios, the answers to the two questions in Sec~\ref{sec:intro} may be closer to the over-parameterized convex cases than to the conventional distributed studies~\cite{li2014communication,stich2018local,yu2018parallel}, which require $T_i$ to be sufficiently small to guarantee convergence. On the contrary, updating local models more precisely (with a large or even infinite $T_i$) can indeed reduce the overall communication cost, as the optimal sets of these local models are likely to intersect. \begin{figure*}[!t] \centering \subfigure[Gradient residual $\Vert \nabla f(x_n) \Vert^2$ for the Non-Intersected case.] {\label{nonInterA} \includegraphics[width=.42\linewidth]{NonOverlappingNetworkGradientComparison.pdf} } \subfigure[Loss for the Non-Intersected case.] {\label{nonInterB} \includegraphics[width=.42\linewidth]{NonOverlappingNetworkLossComparison.pdf} } \subfigure[Gradient residual $\Vert \nabla f(x_n) \Vert^2$ for the Intersected case.] {\label{InterA} \includegraphics[width=.42\linewidth]{OverlappingNetworkGradientComparison.pdf} } \subfigure[Loss for the Intersected case.] {\label{InterB} \includegraphics[width=.42\linewidth]{OverlappingNetworkLossComparison.pdf} } \caption{1-layer neural network on the MNIST dataset. $T_i =100$ for all cases.} \label{fig:1LayerNN} \end{figure*} \subsubsection{Necessity of the Intersection Assumption} Before further numerical validation, we first highlight the necessity of the intersection assumption~\ref{Ass:intersection}, which distinguishes our work from previous studies. We select the first 500 training samples from the MNIST dataset~\cite{lecun1998gradient} and construct two 1-layer neural networks: (1) the first directly transforms the $28 \times 28$ image into 10 categories by an affine transformation followed by a softmax cross-entropy loss, which we call the ``Intersected case'' since the number of parameters exceeds the number of instances; (2) the second applies max-pooling twice with a $(2,2)$ window before the final prediction, which we call the ``Non-Intersected case'' since the total number of parameters is 490 and the intersection assumption is not satisfied. Figure~\ref{fig:1LayerNN} shows the results of centralized training only on the server~(denoted as ``1 Node'') and of distributed training on 10 nodes. When the intersection condition is not satisfied, the gradient residuals $\Vert \nabla f(x_n) \Vert^2$ may not even vanish (``Non-Intersected'' case, Fig~\ref{nonInterA}), and the distributed training loss $f(x_n)$ can also differ from that of centralized learning (Fig~\ref{nonInterB}). In contrast, both the gradient residuals and the loss on 10 learning nodes behave similarly to centralized learning in the ``Intersected'' case (Figs~\ref{InterA} and~\ref{InterB}). These contrasting results validate the importance of the intersection assumption made earlier.
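The same failure mode can be isolated in a two-node scalar toy problem of our own construction (a hedged sketch, unrelated to the MNIST networks above): when the local minimizers do not intersect, model averaging with exact local minimization (the $T_i = \infty$ limit) stalls at a point with a non-vanishing gradient, whereas a shared minimizer restores convergence.

```python
import numpy as np

def average_of_local_minima(h, a, rounds=50):
    """Alg-1-style averaging where each node minimizes f_i(x) = 0.5 * h_i * (x - a_i)^2
    exactly (the T_i = infinity limit), starting from x = 0."""
    x = 0.0
    for _ in range(rounds):
        x = float(np.mean(a))        # each node's exact local minimum is a_i, regardless of x
    # gradient of the global loss f(x) = (1/m) * sum_i f_i(x) at the fixed point
    grad = float(np.mean([hi * (x - ai) for hi, ai in zip(h, a)]))
    return x, grad

# Non-intersected: S_1 = {0}, S_2 = {1}. Averaging is stuck at x = 0.5,
# while the true minimizer of f is the curvature-weighted mean 3/4.
x_bad, g_bad = average_of_local_minima(h=[1.0, 3.0], a=[0.0, 1.0])

# Intersected: S_1 = S_2 = {1}. The same scheme reaches the true optimum.
x_good, g_good = average_of_local_minima(h=[1.0, 3.0], a=[1.0, 1.0])
print(g_bad, g_good)
```

The non-intersected run is a fixed point of the averaging scheme with a strictly non-zero gradient, mirroring the non-vanishing residuals of the ``Non-Intersected'' case above.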
\subsubsection{LeNet and ResNet} In practice, most deep learning models are highly over-parameterized, and the intersection assumption is likely to hold. In these scenarios, we numerically test the performance of Alg~\ref{alg} on non-convex objectives and explore whether a large or even ``infinite'' $T_i$ leads to a lower communication requirement. To do so, we select two classical benchmarks for deep learning: LeNet~\cite{lecun1998gradient} on the MNIST dataset and ResNet-18~\cite{he2016deep} on the CIFAR-10 dataset~\cite{krizhevsky2009learning}. To accelerate the experiments, we only select the first 1000 training samples from these two datasets (experiments on the complete datasets are provided in the appendix), and evenly distribute these instances to multiple learning nodes. As in our previous convex experiments, each node performs $T_i$ iterations of GD before sending its model to the server, and $T_i = \infty$ is simulated by running gradient descent until the local gradient residual $\Vert \nabla f_i \Vert^2$ is sufficiently small. Figure~\ref{fig:dl} shows the experimental results on these benchmarks. The results are consistent with our previous convex experiments: the choice of $T_i$ is no longer limited as in conventional studies, and a larger $T_i$ decreases the total loss more aggressively. In other words, updating the local models more precisely can reduce the communication cost for these two deep learning models. Note that for ResNet-18, we intentionally set the local gradient norm threshold to a relatively small number, $10^{-2}$; hence the ``Threshold'' method requires thousands of local epochs to reach this borderline at the beginning but only needs a few epochs after 40 communication rounds, which explains why it first outperforms $T_i=100$ but is later inferior to it. \begin{figure*}[!t] \centering \hspace*{\fill} \subfigure[LeNet for MNIST dataset. The threshold is set as $\Vert \nabla f_i \Vert^2 \leq 10^{-4}$.]
{\label{LeNet} \includegraphics[width=.44\linewidth]{10Nodes_Loss.pdf} } \hfill \subfigure[ResNet for CIFAR-10 dataset. The threshold is set as $\Vert \nabla f_i \Vert^2 \leq 10^{-2}$.] {\label{ResNet} \includegraphics[width=.44\linewidth]{10Nodes_Loss.pdf} } \hspace*{\fill} \caption{Deep learning experiments. The x-axis denotes the communication round $n$ and the y-axis denotes $\log(f)$. } \label{fig:dl} \end{figure*} \section{A Quantitative Analysis of the Trade-off between Communication and Optimization} \label{sec:tradeoff} In the previous sections we focused on convergence properties. Recall that, as we proved in the convex case, essentially any frequency of local updates is sufficient for convergence. This effect brings into relevance the following important practical question: given a degenerate distributed optimization problem, can we decide how many steps $T_i$ to take locally before a combination, in order to optimize performance or minimize some notion of computational cost? Note that this question is not well-posed unless convergence is guaranteed for any (or at least a large range of) $T_i$, as we established in Sec~\ref{sec:convex} by relying on the degeneracy assumption. Building on the earlier results, we now show that in our setting this question can be answered quantitatively, which gives guidance for designing efficient distributed algorithms. From Lemma~\ref{lm:mother_equation}, it is clear that the decrement of $\dS{x_{n}}$ comes from two sources: the frequency of applying the outer iteration in $n$ (communication), and the size of $\sum_{t=0}^{T_i - 1} \alpha_i \| \nabla f_i (x_n^{i,t}) \|^2$, which depends on $T_i$ and on the rate of convergence of the local gradient descent steps (optimization). It is well known that for general smooth convex functions, the upper bound on the decay of gradient norms is $\mathcal{O}(t^{-1})$.
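These two regimes can be illustrated in one dimension with toy objectives of our own choosing: for the strongly convex $f(x)=x^2$ the gradient norm decays geometrically, while for the non-strongly convex $f(x)=x^4$ it decays only at a power-law rate.

```python
def gd_grad_norm_history(grad, x0, eta, steps):
    # Run plain gradient descent and record the squared gradient norm at each step.
    x, history = x0, []
    for _ in range(steps):
        g = grad(x)
        history.append(g * g)
        x = x - eta * g
    return history

# Strongly convex f(x) = x^2: geometric (linear) decay of the gradient norm.
lin = gd_grad_norm_history(lambda x: 2.0 * x, x0=1.0, eta=0.1, steps=200)
# Non-strongly convex f(x) = x^4: only power-law (sub-linear) decay.
sub = gd_grad_norm_history(lambda x: 4.0 * x ** 3, x0=1.0, eta=0.1, steps=200)
print(lin[-1], sub[-1])
```

After 200 steps the quadratic's gradient is smaller by many orders of magnitude, which is exactly the distinction the rate function $h(t)$ below is meant to capture.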
However, depending on the loss function at hand, different convergence rates can occur, ranging from linear convergence to sub-linear convergence in the form of power-laws. Therefore, to make headway one has to assume some decay rate of local gradient descent. To this end, let us assume that the local gradient descent decreases the gradient norm according to \begin{align} \label{eq:h_def} \| \nabla f_i(x_n^{i,t}) \|^2 \geq h_i(t) \| \nabla f_i(x_n^{i,0}) \|^2 \end{align} where $h_i(t)$ is a positive, monotone decreasing function with $h_i(0)=1$. Let $\epsilon>0$ be fixed and define \begin{align} \label{eq:n_star_def} n^* = \inf \{ k\geq 0, \|\nabla f(x_k)\|^2 \leq \epsilon \}. \end{align} From Lemma~\ref{lm:mother_equation} and Eq.~\eqref{eq:h_def} we have \begin{align*} \dS{x_{n+1}}^2 \leq \dS{x_{n}}^2 - \tfrac{1}{m}\sum_{i=1}^{m} \sum_{t=0}^{T_i - 1} \alpha_i h_i(t) \| \nabla f_i( x_n) \|^2. \end{align*} Assume for simplicity that $T_i=T$ for all $i$. Since $\tfrac{1}{m}\sum_{i=1}^{m} \| \nabla f_i(x_n) \|^2 \geq \| \nabla f(x_n) \|^2 > \epsilon$ for each $n \leq n^* - 1$, we have \begin{align*} \dS{x_{n+1}}^2 \leq \dS{x_{n}}^2 - \sum_{t=0}^{T - 1} \alpha h(t) \epsilon, \end{align*} where $\alpha := \min_i \alpha_i$ and $h(t) := \min_i h_i(t)$. Hence, \begin{align} \label{eq:n_star_expr} n^* \leq \tfrac{\dS{x_0}^2}{ \alpha \epsilon \sum_{t=0}^{T-1} h(t)}. \end{align} This expression concretely links the number of outer iterations required to reach an error tolerance to the local optimization steps. Now, we need to define some notion of cost in order to analyze how to pick $T$. In arbitrary units, suppose each communication step has associated cost $C_c$ per node and each local gradient descent step has cost $C_g$. Then, the total cost for first achieving $\|\nabla f(x_n) \|^2 \leq \epsilon $ is \begin{align*} \begin{split} C_{total} &= (C_c m + C_g m T) n^* \\ &= C_c m ( 1 + r T ) n^* \\ &\leq C_c m \dS{x_{0}}^2 {(\alpha\epsilon)}^{-1} \tfrac{( 1 + r T )}{ \sum_{t=0}^{T-1} h(t)}, \end{split} \end{align*} where we have defined $r := C_g / C_c$.
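As a concrete illustration (our own sketch, not part of the analysis above; all parameter values are illustrative), the bound on $n^*$ in Eq.~\eqref{eq:n_star_expr} and the total-cost bound can be evaluated numerically for a given decay profile $h$:

```python
# Sketch: evaluate the upper bounds on n* and C_total for a given
# local-decay profile h(t).  All numeric values below are illustrative
# choices, not values from the paper.

def n_star_bound(D0_sq, alpha, eps, h, T):
    """Upper bound on outer iterations: D0^2 / (alpha * eps * sum_{t<T} h(t))."""
    S = sum(h(t) for t in range(T))
    return D0_sq / (alpha * eps * S)

def total_cost_bound(D0_sq, alpha, eps, h, T, m, C_c, r):
    """Upper bound C_c * m * D0^2 / (alpha*eps) * (1 + r*T) / sum_{t<T} h(t)."""
    S = sum(h(t) for t in range(T))
    return C_c * m * D0_sq / (alpha * eps) * (1 + r * T) / S

# Linearly convergent local GD: h(t) = beta**t has a closed-form partial sum.
beta = 0.5
h_lin = lambda t: beta ** t
T = 10
S_numeric = sum(h_lin(t) for t in range(T))
S_closed = (1 - beta ** T) / (1 - beta)   # geometric series
```

When communication dominates ($r \to 0$), the cost bound is proportional to $1/\sum_{t<T} h(t)$ and therefore decreases as $T$ grows, matching the qualitative message of the experiments.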
We are mostly interested in the regime where $r$ is small, i.e. communication cost dominates gradient descent cost. The key question we would like to answer is: for a fixed and small cost ratio $r$, how many gradient descent steps should we take for every communication and combination step in order to minimize the total cost? It is clear that the answer to this question depends on the behavior of the sum $\sum_{t=0}^{T-1} h(t)$ as $T$ varies. Below, let us consider two representative forms of $h(t)$, which give very different optimal solutions. \paragraph{Linearly Convergent Case.} We first consider the linearly convergent case where $h(t) = \beta^t$ and $\beta \in (0,1)$. This is the situation if, for example, each $f_i$ is strongly convex (in the restricted sense, see Assumption~\ref{Ass:restricted_sc}). Then, we have \begin{align*} \sum_{t=0}^{T-1} h(t) = \tfrac{1 - \beta^T}{1 - \beta}, \end{align*} and so $ C_{total} \leq C_c m \dS{x_0}^2 (1 - \beta) {(\alpha\epsilon)}^{-1} \tfrac{1 + r T}{1 - \beta^T}. $ The upper bound is minimized at $T=T^*$, with \begin{align*} T^* = \tfrac{1}{\log \beta} \left[ 1 + W^-(-e^{-1} \beta^{\tfrac{1}{r}}) \right] - \tfrac{1}{r}, \end{align*} where $W^-$ is the negative real branch of the Lambert $W$ function, i.e. $W^{-}(x e^x) = x$ for $x \in [-1/e, 0)$. For small $x$, $W^-$ has the asymptotic form $W^- = \log (-x) + \log (- \log (-x) ) + o(1)$. Hence, for $r\ll 1$ we have \begin{align*} T^* = \log \left( 1 + \tfrac{\log (\beta^{-1})}{ r } \right) + o(1). \end{align*} \paragraph{Sub-linearly Convergent Case.} Let us suppose instead that $h(t) = 1 / (1+ a t)^\beta$ for some $a>0,\beta >1$. This is a case with a sub-linear (power-law) convergence rate, and is often seen when the local objectives are not strongly convex, e.g. $x^{2l}$ where $l$ is a positive integer greater than 1.
For instance, in this case one can show that for small learning rates, $h(t)$ is approximately of this form with $a=2l - 2$ and $\beta = (2l - 1) / (2l-2)$. By integral comparison estimates we have \begin{align*} \int_{0}^{T} h(s) ds \leq \sum_{t=0}^{T-1} h(t) \leq 1 + \int_{0}^{T-1} h(s) ds. \end{align*} Therefore, \begin{align*} \tfrac{1 - {(1+a T)}^{1-\beta}}{a(\beta-1)} \leq \sum_{t=0}^{T-1} h(t) \leq 1 + \tfrac{1 - {(1+a (T-1))}^{1-\beta}}{a(\beta-1)}. \end{align*} Thus, we have \begin{align*} C_{total} \leq C_c m \dS{x_0}^2 a (\beta - 1) {(\alpha\epsilon)}^{-1} \tfrac{1 + r T}{1 - {(1 + a T)}^{1-\beta}}. \end{align*} The minimizing $T^*$ is the unique positive solution of the algebraic equation \begin{align} r \left((1 + a T^*)^{\beta }-1\right)-a (\beta +\beta r T^*-1) = 0, \end{align} whose asymptotic form for $r\ll 1$ is \begin{align*} T^* = \tfrac{1}{a} \left( \left[ \tfrac{a(\beta-1)}{r} \right]^{\tfrac{1}{\beta}} - 1 \right) + o(r^{-\tfrac{1}{\beta}}). \end{align*} From the explicit calculations above, we can see that the number of local gradient descent steps that minimizes the upper-bound total cost depends in a non-trivial manner on the speed of local gradient descent. If the latter is fast (linear convergence), then the number of local steps to take is small, $T^*\sim \log(1/r)$, whereas if local gradient descent is slow (sub-linear convergence), one should take more local steps, $T^*\sim 1/r^{1/\beta}$. Besides theoretical interest, these estimates can be used in practice to tune the number of local descent steps: one may detect the order of local convergence on the fly, then use these estimates as a guideline to adjust $T$. This gives a principled way to balance optimization and communication, and is potentially useful for solving practical large-scale problems.
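The qualitative gap between the two regimes can be checked directly by brute-forcing the cost-minimizing $T$ (a sketch of ours with illustrative parameter values, not from the paper): with linear local convergence the optimal $T^*$ stays small, while with power-law local convergence it is orders of magnitude larger.

```python
# Sketch: brute-force the T minimizing (1 + r*T) / sum_{t<T} h(t),
# the T-dependent factor of the total-cost upper bound.
# Parameter values (r, beta, a) are illustrative, not from the paper.

def best_T(h, r, T_max):
    """Return the integer T in [1, T_max] minimizing (1 + r*T) / sum_{t<T} h(t)."""
    best, best_val, S = None, float("inf"), 0.0
    for T in range(1, T_max + 1):
        S += h(T - 1)                     # running sum of h(0), ..., h(T-1)
        val = (1 + r * T) / S
        if val < best_val:
            best, best_val = T, val
    return best

r = 1e-4
# Linear local convergence, h(t) = beta**t: small optimal T.
T_lin = best_T(lambda t: 0.5 ** t, r, 200)
# Sub-linear local convergence, h(t) = (1 + a*t)**(-beta): much larger T*.
a, beta = 2.0, 1.5                        # corresponds to l = 2, i.e. x^4 loss
T_sub = best_T(lambda t: (1 + a * t) ** (-beta), r, 2000)
```

Note that the brute-force optimum uses the discrete sum $\sum_{t<T} h(t)$, while the closed-form $T^*$ above comes from the integral bounds, so the two agree only up to the approximation error of the comparison estimates.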
\paragraph{Experiment.} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{log_T_1000_Quartic_Convex_Cancer_GD.pdf} \caption{Cancer dataset with quartic loss function. The threshold is set as $\Vert \nabla f_i \Vert_2^2 \leq 10^{-8}$.} \label{fig:quartic} \end{figure} For comparison, we replace the quadratic loss in Fig \ref{ConvexB} with a quartic loss and obtain Fig \ref{fig:quartic}. The gradient for the quadratic loss starts with a large initial value but vanishes exponentially, so $T_i=100$ already coincides with the threshold case in Fig \ref{ConvexB}. Equivalently, each node only needs to maintain a relatively small $T_i$ and then replenish the gradient through combination. In contrast, the gradient for the quartic loss begins with a smaller value and decreases sub-linearly. Therefore, a sufficiently large $T_i$ is required to reduce the total number of communication rounds. \section{Conclusion} \label{sec:conclusion} In this paper, we analyzed the dynamics of distributed gradient descent on degenerate loss functions where the optimal sets of the local functions intersect. The motivation for this assumption comes from over-parameterized learning, which is becoming increasingly relevant in modern machine learning. We showed that under convexity and Lipschitz assumptions, distributed gradient descent converges for an arbitrary number of local updates before combination. Moreover, we showed that the convergence rate can be linear under the restricted convexity assumption, and that the convexity conditions can be relaxed if the optimal sets are affine subspaces -- an assumption connected in spirit to the degeneracies that arise in deep learning. Lastly, we analyzed quantitatively the trade-off between optimization and communication, and obtained practical guidelines for balancing the two in a principled manner. \bibliographystyle{unsrt}
\section{Introduction} \nt A connected graph $G = (V, E)$ is said to be {\it local antimagic} if it admits a {\it local antimagic edge labeling}, i.e., a bijection $f : E \to \{1,\dots ,|E|\}$ such that the induced vertex labeling $f^+ : V \to \Z$ given by $f^+(u) = \sum f(e)$ (with $e$ ranging over all the edges incident to $u$) has the property that any two adjacent vertices have distinct induced vertex labels~\cite{Arumugam}. Thus, $f^+$ is a coloring of $G$. Clearly, the order of $G$ must be at least 3. The vertex label $f^+(u)$ is called the {\it induced color} of $u$ under $f$ (the {\it color} of $u$, for short, if no ambiguity occurs). The number of distinct induced colors under $f$ is denoted by $c(f)$, and is called the {\it color number} of $f$. The {\it local antimagic chromatic number} of $G$, denoted by $\chi_{la}(G)$, is $\min\{c(f) : f\mbox{ is a local antimagic labeling of } G\}$. Clearly, $2\le \chi_{la}(G)\le |V(G)|$. Haslegrave~\cite{Haslegrave} proved that every graph is local antimagic. This paper relates the number of pendant vertices of $G$ to $\chi_{la}(G)$. Sharp upper and lower bounds, and sufficient conditions for the bounds to be attained, are obtained. Consequently, there exist infinitely many graphs with $k\ge \chi(G)-1 \ge 1$ pendant vertices and $\chi_{la}(G)=k+1$. We conjecture that every tree $T_k$, other than certain caterpillars, with $k\ge 1$ pendant vertices has $\chi_{la}(T_k) = k+1$. The following two results in~\cite{LSN2} are needed. \begin{lemma}\label{lem-pendant} Let $G$ be a graph of size $q$ containing $k$ pendant vertices. Let $f$ be a local antimagic labeling of $G$ such that $f(e)=q$. If $e$ is not a pendant edge, then $c(f)\ge k+2$.\end{lemma} \begin{theorem}\label{thm-pendant} Let $G$ be a graph having $k$ pendant vertices. If $G$ is not $K_2$, then $\chi_{la}(G)\ge k+1$ and the bound is sharp.\end{theorem} \nt The sharp bound for $k\ge 2$ is given by the star $S_k$ with maximum degree $k$.
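The star bound can be verified directly: labeling the $k$ edges of $K_{1,k}$ with $1,\dots,k$ induces the $k$ distinct leaf colors $1,\dots,k$ together with the center color $k(k+1)/2$. A small computational sketch of this check (ours, for illustration only):

```python
# Sketch: verify that labeling the k edges of the star K_{1,k} with 1..k
# is a local antimagic labeling using exactly k+1 induced colors.

def star_colors(k):
    """Induced colors on K_{1,k} when edge e_i carries label i (k >= 2)."""
    assert k >= 2
    leaf_colors = list(range(1, k + 1))   # f^+(leaf_i) = f(e_i) = i
    center = k * (k + 1) // 2             # f^+(center) = 1 + 2 + ... + k
    # Every edge joins the center to a leaf, so the labeling is local
    # antimagic iff the center color differs from every leaf color.
    assert all(center != c for c in leaf_colors)
    return set(leaf_colors) | {center}
```

Since $k(k+1)/2 > k$ for $k \ge 2$, the center color never collides with a leaf color, so $c(f) = k+1$, matching the lower bound of the theorem.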
The left labeling below is another example for $k=2$. The right labeling shows that the lower bound is sharp for $k=1$.\\\\ \centerline{\epsfig{file=C3-2.eps, width=2.5cm}\qquad \epsfig{file=theta6-1.eps, width=3.3cm}}\\ \nt For $ 1\le i\le t, a_i, n_i\ge 1$ and $d=\sum^t_{i=1} n_i\ge 3$, a spider with $d$ legs, denoted $Sp(a_1^{[n_1]}, a_2^{[n_2]}, \ldots, a_t^{[n_t]})$, is a tree formed by identifying one end-vertex of each of the $n_i$ path(s) of length $a_i$. The vertex $u$ of degree $d$ is the core of the spider. Note that $Sp(1^{[n]})$ is the star graph with $n$ pendant vertices and $\chi_{la}(Sp(1^{[n]})) = n+1$. We first give a family of spiders with $k$ pendant vertices such that $\chi_{la} > k+1$. \begin{theorem} For $n\ge 3$, $$\chi_{la}(Sp(2^{[n]}))=\begin{cases}n+2 &\mbox{ if } n \ge 4 \\ n+1 &\mbox{ otherwise.} \end{cases}$$ \end{theorem} \begin{proof} Let the neighbors of $u$ be $v_1, \dots, v_n$. Let $w_i$ be the pendant vertex adjacent to $v_i$, $1\le i\le n$. By Theorem~\ref{thm-pendant}, it is easy to conclude that $\chi_{la}(Sp(2^{[3]})) = 4$. Consider $n\ge 4$. Let $f$ be a local antimagic labeling of $Sp(2^{[n]})$ with minimum $c(f)$. By Lemma~\ref{lem-pendant} and symmetry, it suffices to consider $f(v_1w_1)=2n = f^+(w_1)$. By definition, $f^+(u)\ne f^+(v_1)$. Now, $f^+(u) \ge n(n+1)/2 > 2n \ge f^+(w_i)$ and $f^+(v_1) = f(uv_1) + 2n > 2n \ge f^+(w_i)$ for $1\le i\le n$. Thus, $c(f)\ge n+2$ and $\chi_{la}(Sp(2^{[n]}))\ge n+2$. \\ \nt Define a bijection $f : E(Sp(2^{[n]}))\to [1,2n]$ such that $f(uv_i) = i$ and $f(v_iw_i) = 2n + 1 - i$. We have $f^+(u) = n(n+1)/2 > f^+(v_i) = 2n+1 > f^+(w_i) = 2n + 1 - i$, $1\le i\le n$. Consequently, $\chi_{la}(Sp(2^{[n]}))= n+2$ for $n\ge 4$. The theorem holds. \end{proof} \section{Adding pendant edges} Suppose $G$ has $e\ge 2$ edges with $\chi_{la}(G) = t\ge 2$ such that the corresponding local antimagic labeling $f$ induces a $t$-independent set $\{V_1, V_2, \ldots, V_t\}$ with $|V_i|=n_i\ge 1$.
Moreover, each non-pendant vertex must be in one of the $V_i$ for $1\le i\le r\le t$, and $V_i$ is a singleton consisting of a pendant vertex for $r+1\le i\le t$ if $r < t$. Let $f^+(v) = c_i$ for each vertex $v\in V_i$. Without loss of generality, we assume that $c_1 < c_2 <\cdots < c_r$ and that $c_{r+1} < c_{r+2} < \cdots < c_t$. Note that $c_{r+1}$ to $c_t$ do not exist for $r=t$. By Theorem~\ref{thm-pendant}, $G$ contains at most $t-1$ pendant vertices. \\ Let $b\ge 0$ be the number of pendant vertices in $\cup^{r}_{i=1} V_i$ so that $G$ has exactly $t-r+b$ pendant vertices. Let $f(u'v')=e \le f^+(u') < f^+(v')$ such that $v'\in V_i, 1\le i\le r$ and that $u'\in V_j$. If $e= f^+(u')$, then $u'$ is a pendant vertex. We now have either (a) $j < i \le r$ so that $1\le b\le j$, or (b) $i \le r < j$ so that $0\le b \le i-1$. Otherwise, if $e < f^+(u')$, then $u'$ is not a pendant vertex, so that $j < i \le r $ and $0\le b\le j-1$. \\ Let $G(V_i,s)$ be the graph obtained from $G$ by adding $s$ pendant edges $v_aw_{a,k}$ $(1\le k\le s)$ to each vertex in $V_i=\{v_a\,|\,1\le a\le n_i\}$.\\ \begin{theorem}\label{thm-addpendant} Suppose $G$ and $G(V_i,s)$ are as defined above. Let $r\ge 2$ and $$e+sn_i\ge \begin{cases}c_r &\mbox{ for } i<r\\ c_{r-1} &\mbox{ for } i = r \end{cases}$$ such that $s\ge 1$ if $n_i=1$, and $s\ge 2$ is even if $n_i\ge 2$. \begin{enumerate}[(1)] \item Suppose $e < c_1$. For $1\le i\le r$, $\chi_{la}(G(V_i,s)) = sn_i + t-r + 1$. \item Suppose $c_1 \le e < c_2$, then $\chi_{la}(G(V_1,s)) = sn_1 + t-r + 1$. For $2\le i \le r$, $sn_i + t-r + b + 1\le \chi_{la}(G(V_i,s)) \le sn_i + t-r + 2$. Moreover, if $c_1=e$ and $b=1$, then $\chi_{la}(G(V_i,s)) = sn_i + t-r + 2$.
\item Suppose $c_{j-1} \le e < c_{j}$ for $3\le j\le r$, then \begin{enumerate}[(a)] \item $sn_i + t-r+b \le \chi_{la}(G(V_i,s)) \le sn_i + t-r+j-1$ for $1\le i\le j-1$, if $V_i$ has a pendant vertex; \item $sn_i + t-r+b+1 \le \chi_{la}(G(V_i,s)) \le sn_i + t-r+j-1$ for $1\le i\le j-1$, if $V_i$ has no pendant vertex; \item $sn_i + t-r+b+1\le \chi_{la}(G(V_i,s)) \le sn_i + t-r+j$ for $j\le i\le r$. \end{enumerate} Moreover, when $c_{j-1} = e$ and $b=j-1$, $\chi_{la}(G(V_i,s)) = sn_i + t-r+j-1$ for $1\le i\le j-1$, and $\chi_{la}(G(V_i,s)) = sn_i + t-r+j$ for $j\le i\le r$. \end{enumerate} In particular, if $c_{r-1}\le e < c_r$, then in $G(V_r,s)$, the condition on $e+sn_i$ is simplified to $s\ge 1$ for $n_r=1$, and $s\ge 2$ is even for $n_r\ge 2$. \end{theorem} \begin{proof} Note that $G$ must contain a non-pendant vertex $v$ such that $f^+(v)>e$. So $c_r > e\ge c_t$. Moreover, $uv\not\in E(G)$ implies that $uv\not\in E(G(V_i,s))$. For $1\le i\le r$, define a bijection $g_i: E(G(V_i,s))\to [1,e+sn_i]$ such that $g_i(e) = f(e)$ if $e\in E(G)$ and that $$g_i(v_aw_{a,k})= \textstyle e+[k+\frac{(-1)^k}{2}-\frac{1}{2}]n_i -(-1)^k(a-\frac{1}{2})+\frac{1}{2}=\begin{cases}e+(k-1)n_i+a &\mbox{ if } k \mbox{ is odd,}\\ e+kn_i +1-a & \mbox{ if } k \mbox{ is even.}\end{cases}$$ Therefore $g_i^+(v_a) = c_i + es + \frac{s}{2}(sn_i+1)> e+sn_i$ when $s$ is even; $g_i^+(v_a) = c_i + es + \frac{s}{2}(s+1)+a-1> e+sn_i$ when $s$ is odd (in this case $n_i=1$ and $a=1$). \ms\nt Observe that $g_i^+(v) = f^+(v)$ for each $v\not\in V_i$ so that $c_1, \ldots, c_{i-1}, c_{i+1}, \ldots, c_t$ are vertex colors under $g_i$. Moreover, for $v_a\in V_i$, $f^+(v_a)$ is replaced by $g_i^+(v_a)>c_t$ for $1\le a\le n_i$ and\break $\{g_i^+(w_{a,k}) = g_i(v_aw_{a,k})\;|\;1\le a\le n_i, 1\le k\le s\} = [e+1, e+sn_i]$. 
\begin{enumerate}[(1)] \item Since $e<c_1$, in $G(V_i,s)$, we have $c_{r+1} < \cdots <c_t < e+1 \le c_1 < \cdots < c_r \le e+sn_i$ when $1\le i<r$, and $c_{r+1} < \cdots <c_t < e+1 \le c_1 < \cdots < c_{r-1} \le e+sn_i$ when $i=r$. So $(\{c_j\;|\; 1\le j\le r\}\setminus\{c_i\}) \subset [e+1, e+sn_i]$ but all the $c_{r+1}, \ldots, c_t$ are not in $[e+1, e+sn_i]$. Since $c_i + es + \frac{s}{2}(n_is+1)> e+sn_i$, $g_i$ is a local antimagic labeling with induced vertex color set $\{c_{r+1}, c_{r+2}, \ldots, c_t\}\cup [e+1, e+sn_i] \cup \{c_i + es + \frac{s}{2}(n_is+1)\}$. Therefore, $\chi_{la}(G(V_i,s)) \le sn_i + t-r + 1$. Since $G(V_i,s)$ contains at least $sn_i+t-r$ pendant vertices, by Theorem~\ref{thm-pendant}, $\chi_{la}(G(V_i,s)) \ge sn_i + t-r + 1$. Thus, $\chi_{la}(G(V_i,s)) = sn_i + t-r + 1$. \item For $c_1\le e < c_2$, in $G(V_i,s)$, similar to the above case, $(\{c_j\;|\; 1\le j\le r\}\setminus\{c_1,c_i\}) \subset [e+1, e+sn_i]$ but all the $c_1, c_{r+1}, \ldots, c_t$ are not in $[e+1, e+sn_i]$. If $i=1$, then $g_1$ is a local antimagic labeling with induced vertex color set $\{c_{r+1}, c_{r+2}, \ldots, c_t\}\cup [e+1, e+sn_1] \cup \{c_1 + es + \frac{s}{2}(n_1s+1)\}$. Thus, $\chi_{la}(G(V_1,s)) \le sn_1 + t-r + 1$. By the same argument as in (1), we get $\chi_{la}(G(V_1,s))= sn_1 + t-r + 1$. Suppose $2\le i\le r$. In this case, $0\le b\le 1$. Similar to the above case, $g_i$ is a local antimagic labeling with induced vertex color set $\{c_{r+1}, c_{r+2}, \ldots, c_t, c_1\}\cup [e+1, e+sn_i] \cup \{c_i + es + \frac{s}{2}(sn_i+1)\}$. Thus, $\chi_{la}(G(V_i,s)) \le sn_i + t-r + 2$. Combining with Theorem~\ref{thm-pendant}, we have $sn_i + t-r +b+ 1\le \chi_{la}(G(V_i,s)) \le sn_i + t-r + 2$. Moreover, $e=c_1$ implies that $b=1$, so $\chi_{la}(G(V_i,s)) = sn_i + t-r + 2$. \item For $c_{j-1} \le e < c_j$, $3\le j\le r$, we have $0\le b\le j-1$.
In $G(V_i,s)$, similar to the above case, $\{c_j, \ldots, c_r\} \subset [e+1, e+sn_i]$ but all the $c_1, \ldots, c_{j-1}$ are not in $[e+1, e+sn_i]$. If $1\le i\le j-1$, then $g_i$ is a local antimagic labeling with induced vertex color set $\{c_k\;|\; r+1\le k\le t\}\cup(\{c_k\;|\; 1\le k\le j-1\}\setminus\{c_i\})\cup [e+1, e+sn_i] \cup \{c_i + es + \frac{s}{2}(n_is+1)\}$ so that $\chi_{la}(G(V_i,s)) \le sn_i + t-r + j-1$. If $j\le i\le r$, then the induced vertex color set is $\{c_k\;|\; r+1\le k\le t\}\cup\{c_k\;|\; 1\le k\le j-1\}\cup [e+1, e+sn_i] \cup \{c_i + es + \frac{s}{2}(n_is+1)\}$ so that $\chi_{la}(G(V_i,s)) \le sn_i + t-r + j$. Note that for $1\le i\le j-1$, if $V_i$ has a pendant vertex, then $G(V_i,s)$ has $sn_i+t-r+b-1$ pendant vertices, otherwise $G(V_i,s)$ has $sn_i+t-r+b$ pendant vertices. For $j\le i\le r$, $G(V_i,s)$ also has $sn_i+t-r+b$ pendant vertices. Combining with Theorem~\ref{thm-pendant}, we have the conclusion. Moreover, if $c_{j-1}=e$ and $b=j-1$, only case (a) exists for $1\le i\le j-1$ so that $\chi_{la}(G(V_i,s))=sn_i+t-r+j-1$, and $\chi_{la}(G(V_i,s))=sn_i+t-r+j$ for $j\le i\le r$. \end{enumerate} In particular, if $c_{r-1}\le e < c_r$, we know the condition $e+sn_r \ge c_{r-1}$ always holds and can be omitted. \end{proof} \begin{example}\label{eg-wheel}\hspace*{\fill}{} \begin{enumerate}[(1)] \item In~\cite[Theorem 5]{LNS}, the authors proved that every $G=W_{4k}$, $k\ge 1$, with $e=8k$ edges admits a local antimagic labeling with $\chi_{la}(W_{4k})=3$ such that for $k\ge 2$, $e < c_1=9k+2 < c_2 = 11k+1 < c_3 = 2k(12k+1)$, while $W_4$ has $c_1=11,c_2=15,c_3=20$. Moreover, $n_1=n_2=2k, n_3=1$. Thus, $r=t=3$. For $k\ge 2$, we can add $s \ge 12k-3$ ($s$ even) pendant edges to each vertex in $V_i, i=1$ or $2$, and label them accordingly. We can also add $s\ge 3k+1$ pendant edges to the vertex in $V_3$ and label them accordingly.
By Theorem~\ref{thm-addpendant}, we get $W_{4k}(V_i,s)$, which has $\chi_{la} = sn_i + 1$ for the respective $s$ and $n_i$. We can also add edges to $W_4$ similarly. \item The right graph under Theorem~\ref{thm-pendant}, say $G$, has $e=7$ with $t=r=2$; $c_1=7, c_2=14$ and $n_1=4, n_2=2$. Thus, $\chi_{la}(G(V_1,s)) = 4s+1$. In particular, since $c_1=e$ and $b=1$, we can also get $\chi_{la}(G(V_2,s)) = 2s + 2$. \item The left graph under Theorem~\ref{thm-pendant}, say $H$, has $e=5$ with $t=r=3$; $c_1=4 < c_2 = e < c_3$ and $b=2$. Thus, $\chi_{la}(H(V_3,s)) = s + 3$ for $s\ge 1$. \end{enumerate} \end{example} \nt By a similar argument as for Theorem~\ref{thm-addpendant}, we can get the following theorem. \begin{theorem} \label{thm-addpendant2} Suppose $G\not\cong K_{1,e}$ and $G(V_i,s)$ are as defined above with $r < t$ and $e + s \ge c_r$. Let $r+1\le i\le t$. \begin{enumerate}[(1)] \item If $e < c_1$, then $\chi_{la}(G(V_i,s)) = s + t-r$. \item If $c_{j-1} \le e < c_j$ for $2\le j\le r$, then $s+t-r+b\le \chi_{la}(G(V_i,s))\le s + t-r + j-1$. Moreover, if $b=j-1$, then $\chi_{la}(G(V_i,s)) = s + t-r + j-1$. \end{enumerate} \end{theorem} \begin{example}\hspace*{\fill}{} \begin{enumerate}[(1)] \item It is easy to verify that the graph $G$ below has $\chi_{la}(G) = 9$ with $e=9 < c_1=10, c_2=20, c_3=25$ and $r=3 < t=9$. We may apply Theorem~\ref{thm-addpendant2} (1) accordingly. \vskip3mm \centerline{\epsfig{file=k3o2.EPS, width=3cm}} \item We can add $k\ge 1$ pendant edges to the degree 4 vertex of the left graph under Theorem~\ref{thm-pendant} and label the edges by $6$ to $5+k$ bijectively to get a graph, say $G$, with $k+2$ pendant vertices. Clearly, $r=3 < t = k+3$ with $c_1=4, c_2=5 < e= k+5 < c_3 =(k+3)(k+8)/2$, $c_i = i+2$ for $i=4, \ldots, k+2$, $j=3$ and $b=j-1$. By Theorem~\ref{thm-addpendant2} (2), we know $\chi_{la}(G(V_i,s)) = s+k+2$ for $s\ge (k+2)(k+7)/2$.
\end{enumerate} \end{example} \nt Suppose $G$ is a graph containing $k\ge 1$ pendant vertices with $\chi_{la}(G)=k+1$. Keeping the notation defined before Theorem~\ref{thm-addpendant}, we have $t=k+1$. Then there is only one independent set, which is $V_r$, containing no pendant vertex. Clearly, $r\ge 1$. So we have $c_1<\cdots <c_{r-1}\le e<c_r$ and $c_{r+1}<\cdots <c_{k+1} \le e$ for $r\ge 2$, whereas for $r=1$, $G\cong K_{1,k}, k\ge 2$ with $c_1 = k(k+1)/2$ and $c_i = i-1$ for $2 \le i\le k+1$. \begin{corollary}\label{cor-addpendant3} Keep all notation defined before Theorem~\ref{thm-addpendant}. Suppose $G$ has $k$ pendant vertices and $\chi_{la}(G)=k+1$. Let $$e+sn_i\ge \begin{cases}c_r &\mbox{ for } i\in[1, k+1]\setminus \{r\}\\ c_{r-1} &\mbox{ for } i = r \end{cases}$$ such that $s\ge 1$ if $n_i=1$, and $s\ge 2$ is even if $n_i\ge 2$, then $G(V_r,s)$ has $sn_r+k$ pendant vertices with $\chi_{la}(G(V_r,s)) = sn_r+k+1$, whereas $G(V_i,s)$ has $sn_i+k-1$ pendant vertices with $\chi_{la}(G(V_i,s))=sn_i+k$ when $i\in[1, k+1]\setminus \{r\}$. \end{corollary} \begin{proof} Suppose $r=1$. Recall that $e=k$ and $c_1=k(k+1)/2$. Since $G(V_1,s)\cong K_{1,k+s}$, we only consider $G(V_i,s), 2\le i\le k+1$. By labeling the $s$ added edges of $G(V_i,s)$ by $k+1$ to $k+s$ bijectively, $c_i$ is now replaced by $i-1+ks + s(s+1)/2 > k+s \ge c_1 > c_{k+1}>\cdots > c_2$. Thus, $G(V_i,s)$ now admits a local antimagic labeling with vertex color set $([1,k+s]\setminus \{i-1\}) \cup \{i-1+ks + s(s+1)/2\}$. Therefore, $\chi_{la}(G(V_i,s)) \le s+k$. Since $G(V_i,s)$ has $s+k-1$ pendant vertices, by Theorem~\ref{thm-pendant}, the equality holds. \ms\nt Consider $r\ge 2$. Suppose $i=r$, then $G(V_r,s)$ has $sn_r + k$ pendant vertices so that $\chi_{la}(G(V_r,s))\ge sn_r+k+1$.
By a labeling $g_r$ as in the proof of Theorem~\ref{thm-addpendant}, we know $g_r$ is a local antimagic labeling with induced vertex color set $\{c_1,c_2,\ldots, c_{r-1}, c_{r+1}, \ldots, c_{k+1}\}\cup [e+1,e+sn_r]\cup\{g_r^+(v) \,|\, v \in V_r\}$ of size $sn_r+k+1$. By Theorem~\ref{thm-pendant}, we have $\chi_{la}(G(V_r,s))= sn_r+k+1$. \ms\nt Suppose $i\in[1,k+1]\setminus\{r\}$, then $G(V_i,s)$ has $sn_i+k-1$ pendant vertices so that $\chi_{la}(G(V_i,s))\ge sn_i+k$. By a labeling $g_i$ as in the proof of Theorem~\ref{thm-addpendant}, we know $g_i$ is a local antimagic labeling with induced vertex color set $(\{c_1,c_2, \ldots, c_{k+1}\}\setminus\{c_i,c_r\})\cup [e+1,e+sn_i]\cup\{g_i^+(v) \,|\, v \in V_i\}$ of size $sn_i+k$. By Theorem~\ref{thm-pendant}, we have $\chi_{la}(G(V_i,s))= sn_i+k$. \end{proof} \begin{example} Consider $W_4$ under Example~\ref{eg-wheel}. We have $W_4(V_3,12)$ with $k=12$, $\chi_{la}(W_4(V_3,12)) = 13$ and $c_1=11, c_2=15 < e=20 < c_3=194$. Moreover, $P_n$ ($n\ge 3$) and $K_{1,k}$ are trees with $k\ge 2$ pendant vertices and $\chi_{la} = k+1$. We may apply Corollary~\ref{cor-addpendant3} accordingly. \end{example} \nt Thus, we get the following. \begin{theorem} For $k\ge 1$, there exist infinitely many graphs $G$ with $k \ge \chi(G)-1 \ge 1$ pendant vertices and $\chi_{la}(G) = k+1$. \end{theorem} \nt Let $G-e$ be the graph $G$ with an edge $e$ deleted. In~\cite[Lemmas 2.2-2.4]{LSN}, the authors obtained sufficient conditions for $\chi_{la}(G) = \chi_{la}(G-e)$. We note that these lemmas may be applied to $G(V_i,s)$ if all vertices in each $V_j, 1\le j\le t$, are of the same degree, as in $W_{4k}(V_i,s)$ in Example~\ref{eg-wheel}. \\ \section{Existing results and open problems} \nt In~\cite{Arumugam+L+P+W}, \cite[Theorem 7 and Theorem 9]{LNS} and~\cite[Theorems 2.4-2.6, 2.9, 3.1, Lemma 2.10 and Section 3]{LSN2}, the authors also determined the exact value of $\chi_{la}(G)$ for many families of $G$ with pendant vertices.
Particularly, they showed that there are infinitely many caterpillars with $k$ pendant vertices and $\chi_{la} = k+1$ or $\chi_{la} \ge k+2$. Note that~\cite[Theorem~2.6]{LSN2} corrected Theorem~2.2 in~\cite{Nuris+S+D}. A lobster is a tree such that the removal of its pendant vertices results in a caterpillar. Note that the graph $Sp(2^{[n]})$ is also a lobster. We end this paper with the following. \begin{conjecture} Every tree $T_k$, other than certain caterpillars, spiders and lobsters, with $k\ge 2$ pendant vertices has $\chi_{la}(T_k) = k+1$. \end{conjecture} \begin{problem} Characterize all graphs $G$ with $k\ge \chi(G) - 1 \ge 1$ pendant vertices and $\chi_{la}(G) = k+1$. \end{problem}
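For the smallest trees, the conjectured value $\chi_{la}(T_k)=k+1$ can be confirmed by exhaustive search over all edge labelings; the brute-force sketch below is ours (for illustration, feasible only for very small graphs) and reproduces the known value $\chi_{la}(P_n)=3$ for short paths, which have $k=2$ pendant vertices.

```python
# Sketch: compute chi_la of a small graph by exhaustive search over all
# bijections f : E -> {1, ..., |E|}.  Only practical for tiny graphs.
from itertools import permutations

def chi_la_bruteforce(edges, n_vertices):
    """Local antimagic chromatic number via exhaustive search."""
    q = len(edges)
    best = None
    for perm in permutations(range(1, q + 1)):
        colors = [0] * n_vertices
        for (u, v), lab in zip(edges, perm):   # induced vertex colors f^+
            colors[u] += lab
            colors[v] += lab
        # local antimagic: adjacent vertices must receive distinct colors
        if all(colors[u] != colors[v] for u, v in edges):
            c = len(set(colors))
            if best is None or c < best:
                best = c
    return best

# Paths P_4 and P_5 each have k = 2 pendant vertices.
p4 = chi_la_bruteforce([(0, 1), (1, 2), (2, 3)], 4)
p5 = chi_la_bruteforce([(0, 1), (1, 2), (2, 3), (3, 4)], 5)
```

For $P_4$ the labeling $(1,2,3)$ along the path yields colors $1,3,5,3$, i.e. three colors, matching the lower bound $k+1=3$.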
\section{Introduction} Topological properties of fermion band theory have dominated research in condensed matter physics and other areas over the past decade or so \cite{has1, has2, cas,wang1, aab1, Xu}. Recently, it has been shown that the magnon bulk bands of the Heisenberg (anti)ferromagnet on the honeycomb lattice exhibit Dirac points at the corners of the Brillouin zone (BZ) \cite{jf}. The low-energy Hamiltonian near these points realizes a massless 2D Dirac-like Hamiltonian with Dirac nodes at nonzero energy. This system preserves pseudo spin time-reversal ($\mathcal{T}$) symmetry. It was also shown that the Dirac points are robust against magnon-magnon interactions and any perturbation that preserves the pseudo spin $\mathcal{T}$-symmetry of the Bogoliubov Hamiltonian. In this paper, we provide evidence of non-trivial topology (magnon edge states) in the magnon bulk bands of the Heisenberg (anti)ferromagnet and the XY model on the honeycomb lattice when a gap opens at the Dirac points. We show that the simplest practical way to open a gap at the Dirac points is to break the inversion symmetry of the lattice, which subsequently breaks the pseudo spin $\mathcal{T}$-symmetry of the Bogoliubov Hamiltonian. We show that this can be achieved by introducing a next-nearest neighbour Dzyaloshinskii-Moriya (DM) interaction. The opening of a gap at the Dirac points leads to magnon edge states reminiscent of the Haldane model in electronic systems \cite{adm}. In the case of the XY model, we observe the same topological effects, with magnon edge states propagating in the vicinity of the magnon bulk gap. Remarkably, the resulting Hamiltonian for the XY model maps to interacting hardcore bosons. Therefore, these magnon edge states can be simulated numerically. As magnons are uncharged particles, noninteracting topological magnons can propagate for a long time without dissipation; thus they are considered a good candidate for magnon spintronics \cite{kru,kru1,kru2,shin,shin1}.
The topological properties of these Dirac magnons are not just analogues of fermion band theory. They are called topological magnon insulators \cite{zhh, zhh1} and have recently been observed in the kagome magnet Cu(1-3, bdc)~\cite{alex6a}. For the honeycomb lattice, they can actually be searched for in many accessible quantum magnets such as Na$_3$Cu$_2$SbO$_6$ \cite{aat1} and $\beta$-Cu$_2$V$_2$O$_7$ \cite{aat}, which are spin-$1/2$ Heisenberg antiferromagnetic materials with a honeycomb structure. They can also be investigated in ultra-cold atoms trapped in honeycomb optical lattices, as the bosonic tight-binding model is analogous to that of the Haldane model, which has been realized experimentally in a fermionic optical lattice \cite{jott}. In 3D quantum magnets, Dirac and Weyl points are possible in the magnon excitations. Recently, Weyl points have been investigated in quantum magnets using a 3D Kitaev fermionic model \cite{aab0}. Weyl points were recently shown to occur in the magnon excitations of the breathing pyrochlore lattice antiferromagnet \cite{fei, up}. In this case, unlike in fermionic systems, the criteria that give rise to them seem to be unknown. Here, we show that the resulting Bogoliubov Hamiltonian has the form of the Weyl Hamiltonian in electronic systems and that the Weyl points break the pseudo spin $\mathcal{T}$-symmetry, as can be seen by expanding the Bogoliubov Hamiltonian near the non-degenerate dispersive band-touching points and projecting onto the bands. Hence, the criterion for Weyl nodes to exist in electronic systems also applies to magnons. \section{Honeycomb Dirac Magnon} In 2D quantum spin magnetic materials, non-degenerate band-touching points or Dirac points require at least two energy branches of the magnon excitations. Therefore, ordered quantum magnets that can be treated with one sublattice are devoid of Dirac points.
The simplest two-band model that exhibits Dirac nodes is the Heisenberg ferromagnet or antiferromagnet on the honeycomb lattice \cite{jf}. The Hamiltonian is governed by \begin{eqnarray} &H=-\sum_{\langle lm\rangle}J_{lm}{\bf S}_{l}\cdot{\bf S}_{m}, \label{hhh1} \end{eqnarray} where $J_{lm}$ depends on the bonds along the nearest neighbours. As mentioned above, Eq.~\ref{hhh1} describes several realistic compounds \cite{aat1, aat}. For simplicity we take $J_{lm}=J>0$. The ground state of Eq.~\ref{hhh1} is a ferromagnet with a two-sublattice structure on the honeycomb lattice; see Fig.~\ref{honey_comb}. This is equivalent to the Heisenberg antiferromagnet upon flipping the spins on one sublattice. In many cases of physical interest, magnon excitations are studied by linear spin wave theory via the standard linearized Holstein-Primakoff (HP) transformation. This is an approximation valid at low temperature, when few magnons are excited. \begin{figure}[ht] \centering \includegraphics[width=4in]{honey_comb} \caption{Color online. The honeycomb lattice (left) and the Brillouin zone (right). The reciprocal lattice vectors are ${\bf b}_1= 2\pi(1,\sqrt{3})/3a$ and ${\bf b}_2= 2\pi(1,-\sqrt{3})/3a$.} \label{honey_comb} \end{figure} \begin{figure} \centering \subfigure[\label{HCA}]{\includegraphics[width=.45\linewidth]{HCA}} \quad \subfigure[\label{DOS}]{\includegraphics[width=.45\linewidth]{DOS}} \caption{Color online. $(a)$ The energy bands of the Heisenberg honeycomb ferromagnet. $(b)$ Density of states per unit cell of the Heisenberg ferromagnet on the honeycomb lattice. The blue and red lines denote the two bands and the green circle is the point of degeneracy with $E=3v_s$.} \end{figure} The momentum space Hamiltonian is given by $H=\sum_{\bold{k}}\Psi^\dagger_\bold{k}\cdot \mathcal{H}_B(\bold{k})\cdot\Psi_\bold{k} $, where $\Psi^\dagger_\bold{k}= (a_{\bold{k}}^{\dagger},\thinspace b_{\bold{k}}^{\dagger})$ and the mean field energy has been dropped.
In this model, the Bogoliubov quasiparticle operators are the same as the bosonic operators. The Bogoliubov Hamiltonian is given by \begin{eqnarray} \mathcal{H}_B(\bold{k})&=zv_s\sigma_0- zv_s(\sigma_+\gamma_\bold{k} +h.c.), \label{honn} \end{eqnarray} where $\sigma_0$ is the $2\times 2$ identity matrix, $\sigma_\pm=(\sigma_x\pm i\sigma_y)/2$, and the Pauli matrices $\sigma_{x,y}$ act on the sublattices; $z=3$ is the coordination number of the lattice and $v_s=JS$. The structure factor $ \gamma_\bold{k}$ is complex and given by \begin{eqnarray}\gamma_\bold{k}=\frac{1}{z}\sum_{\mu} e^{i\bold{k}\cdot \boldsymbol{\delta}_\mu},\end{eqnarray} where $ \boldsymbol{\delta}_\mu$ are the three nearest neighbour vectors on the honeycomb lattice, $ \boldsymbol{\delta}_1=(\hat x,\sqrt{3}\hat y)/2$, $ \boldsymbol{\delta}_2=(\hat x,-\sqrt{3}\hat y)/2$ and $ \boldsymbol{\delta}_3=(-\hat x, 0)$. The eigenvalues of Eq.~\ref{honn} are given by \begin{eqnarray} \epsilon_\pm({\bold{k}})= 3{v}_s\lb1 \pm |\gamma_\bold{k}|\right). \label{sfk} \end{eqnarray} The energy bands have Dirac nodes at the corners of the BZ, reminiscent of the graphene model. In contrast to graphene, the Dirac nodes occur at a nonzero energy $3{v}_s$, as shown in Fig.~\ref{HCA}. In addition to the Dirac nodes, there is a zero energy mode in the lower band, which corresponds to a Goldstone mode due to the spontaneous breaking of the SU(2) rotational symmetry of the quantum spin Hamiltonian. As many physical systems are anisotropic, it is important to note that with spatial anisotropy $J_{lm}\neq J$, several Dirac points can be obtained by tuning the anisotropy of each bond. The density of states per unit cell as a function of energy is shown in Fig. \ref{DOS} for the two energy bands in Eq.~\ref{sfk}. The interesting properties of this system are manifested near the Dirac points. There are only two inequivalent Dirac points, located at $\bold{K}_\pm=\left({2\pi}/{3}, \pm{2\pi}/{3\sqrt{3}}\right)$, as shown in Fig.~\ref{honey_comb}.
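These features can be checked numerically from the structure factor and dispersion above; the short sketch below is ours (with illustrative units $J=S=a=1$, so $v_s=1$) and confirms both the band touching $\epsilon_\pm(\bold{K}_\pm)=3v_s$ (where $|\gamma_{\bold{K}_\pm}|=0$) and the Goldstone mode $\epsilon_-(0)=0$.

```python
# Sketch: numerical check of the honeycomb magnon dispersion
#   eps_pm(k) = 3*v_s*(1 +/- |gamma_k|)
# with J = S = 1 (so v_s = 1) and lattice constant a = 1.
import cmath
import math

# Nearest-neighbour vectors delta_mu of the honeycomb lattice.
d1 = (0.5, math.sqrt(3) / 2)
d2 = (0.5, -math.sqrt(3) / 2)
d3 = (-1.0, 0.0)

def gamma(kx, ky):
    """Structure factor gamma_k = (1/3) * sum_mu exp(i k . delta_mu)."""
    return sum(cmath.exp(1j * (kx * dx + ky * dy))
               for dx, dy in (d1, d2, d3)) / 3

def eps(kx, ky):
    """Lower and upper magnon bands eps_-(k), eps_+(k)."""
    g = abs(gamma(kx, ky))
    return 3 * (1 - g), 3 * (1 + g)

# Dirac point K_+ = (2*pi/3, 2*pi/(3*sqrt(3))): both bands meet at 3*v_s.
Kx, Ky = 2 * math.pi / 3, 2 * math.pi / (3 * math.sqrt(3))
```

At the zone centre $\gamma_0=1$, so $\epsilon_-(0)=0$ and $\epsilon_+(0)=6v_s$, while at $\bold{K}_\pm$ the structure factor vanishes and the bands touch.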
In the case of the Heisenberg antiferromagnet, only a single Dirac point occurs at $\bold{k}=0$ \cite{jf}. Expanding Eq.~\ref{honn} near $\bold{K}_\pm$ we obtain the linearized model \begin{eqnarray} \mathcal{H}_B(\bold q)&=3{v}_s\sigma_0+ \tilde{v}_s(\sigma_x q_x- \tau\sigma_y q_y), \label{me1} \end{eqnarray} where $\bold{q}=\bold{k}-\bold{K}_\pm$, $\tilde{v}_s={v}_s/2$, and $\tau=\pm$ labels the states at $\bold{K}_\pm$. Thus, the low-energy excitation spectrum near the Dirac points is similar to the Bloch Hamiltonian of the graphene model. Let us now compute the specific heat at constant volume, \begin{eqnarray} c_v=\left(\frac{\partial u}{\partial T}\right)_V, \end{eqnarray} where $u$ is the internal energy density of the system given by \begin{eqnarray} u=\sum_{\lambda=\pm}\int \frac{d^2q}{(2\pi)^2} \epsilon_\lambda(\bold q)n_B[\xi_\lambda(\bold q)], \end{eqnarray} $\xi_\lambda(\bold q)=\epsilon_\lambda(\bold q)-\mu$, $n_B[\xi_\lambda(\bold q)]=[e^{\beta \xi_\lambda(\bold q)}-1]^{-1}$ is the Bose function, $\beta=1/T$ is the inverse temperature, and $\mu$ is the chemical potential. The specific heat can be evaluated in closed form by tuning the chemical potential to the Dirac nodes, $\mu=3v_s$. In this case we find \begin{eqnarray} c_v= \frac{v_s^2}{2T^2}\int \frac{d^2q}{(2\pi)^2}\frac{|\gamma(\bold q)|^2}{\sinh^2\left(\frac{v_s|\gamma(\bold q)|}{2T}\right)}=\frac{12\zeta(3)}{\pi v_s^2}T^2, \end{eqnarray} where $\zeta(3)\approx 1.20206$. This is similar to the $T^2$-law found in graphene. A very crucial point is the role of time-reversal symmetry. Since the excitations of quantum magnets are usually described in terms of the HP bosons, an ordered state must be assumed. Hence, the system must contain an even number of half-integral spins with $\mathcal T^2=(-1)^N$, where $N$ is even. Thus, magnons behave like bosons.
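The $T^2$ law can also be exhibited numerically from the Brillouin-zone integral above. The sketch below is illustrative only (midpoint integration over the zone spanned by ${\bf b}_1,{\bf b}_2$, with $a=v_s=1$ and a grid size of our choosing); it checks that $c_v/T^2$ approaches a constant at low temperature, while the overall coefficient depends on the conventions adopted for $\gamma_\bold{q}$ and the valley sum:

```python
import numpy as np

deltas = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])
b1 = 2*np.pi*np.array([1.0,  np.sqrt(3)])/3   # reciprocal vectors of Fig. 1 (a = 1)
b2 = 2*np.pi*np.array([1.0, -np.sqrt(3)])/3

def cv(T, vs=1.0, N=600):
    """Midpoint-rule estimate of the specific-heat integral over the full BZ."""
    u = (np.arange(N) + 0.5) / N
    U, V = np.meshgrid(u, u, indexing="ij")
    k = U[..., None]*b1 + V[..., None]*b2            # shape (N, N, 2)
    g = np.abs(np.exp(1j*(k @ deltas.T)).sum(axis=-1)) / 3.0
    x = vs*g/(2.0*T)
    # integrand |gamma|^2 / sinh^2(x); its x -> 0 limit is (2T/vs)^2
    f = np.where(x > 1e-8, (g/np.sinh(np.maximum(x, 1e-8)))**2, (2.0*T/vs)**2)
    bz_area = abs(b1[0]*b2[1] - b1[1]*b2[0])
    return vs**2/(2.0*T**2) * bz_area/(2.0*np.pi)**2 * f.mean()

# c_v/T^2 tends to a constant at low temperature (the T^2 law).
print(f"c_v/T^2 at T=0.02: {cv(0.02)/0.02**2:.3f}")
print(f"c_v/T^2 at T=0.04: {cv(0.04)/0.04**2:.3f}")
```

At low $T$ only the neighbourhoods of $\bold{K}_\pm$ contribute, which is precisely the linearization used in the closed-form result.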
However, in the pseudo spin space a ${\mathcal T}$-operator can be defined for the Bogoliubov Hamiltonian, ${\mathcal T}=i\sigma_y\mathcal K$, where $\mathcal K$ denotes complex conjugation and ${\mathcal T}^2=-1$. This pseudo spin symmetry is preserved provided Dirac points exist in the BZ. \section{Honeycomb Topological Magnon Insulator} \subsection{Heisenberg ferromagnetic insulator} Topological magnon insulators are the analogues of topological insulators in electronic systems. They are characterized by the existence of edge state modes when a gap opens at the Dirac points. For honeycomb ferromagnets, a next-nearest neighbour interaction of the form $H=-J^\prime\sum_{\langle \langle lm\rangle\ra}{\bf S}_{l}\cdot{\bf S}_{m}$ ($J^\prime>0$) only shifts the positions of the Dirac points, as it contributes a term of the form $v_s^\prime(6-g_\bold{k})\sigma_0$, where $v_s^\prime=J^\prime S$, and $g_\bold{k}=2\sum_\mu\cos \bold{k}\cdot\boldsymbol{\rho}_\mu$. The next-nearest neighbour vectors are $ \boldsymbol{\rho}_1=-(3\hat x,\sqrt{3}\hat y)/2$, $ \boldsymbol{\rho}_2=(3\hat x,-\sqrt{3}\hat y)/2$, $ \boldsymbol{\rho}_3=(0,\sqrt{3}\hat y)$. The simplest realistic way to open a gap at the Dirac points is by breaking the inversion symmetry of the lattice, which in turn breaks the ${\mathcal T}$-symmetry of the Bogoliubov Hamiltonian. This can be achieved by introducing a next-nearest neighbour DM interaction \begin{eqnarray} H_{DM}= \sum_{\langle \langle lm\rangle\ra}{\bf D}_{lm}\cdot{\bf S}_{l}\times{\bf S}_{m}, \label{h1} \end{eqnarray} where ${\bf D}_{lm}$ is the DM vector between sites $l$ and $m$. The total Hamiltonian of a honeycomb ferromagnetic insulator can then be written as \begin{eqnarray} H= -J\sum_{\langle lm\rangle}{\bf S}_{l}\cdot{\bf S}_{m}-J^\prime\sum_{\langle \langle lm\rangle\ra}{\bf S}_{l}\cdot{\bf S}_{m}+\sum_{\langle \langle lm\rangle\ra}{\bf D}_{lm}\cdot{\bf S}_{l}\times{\bf S}_{m}.
\end{eqnarray} In the HP bosonic mapping, we obtain \begin{eqnarray} H&=-v_s\sum_{\langle lm\rangle}( b_l^\dagger b_m+ h.c.) - v_t\sum_{\langle \langle lm\rangle\ra}(e^{i\phi_{lm}} b^\dagger_l b_{m}+h.c.) +v_0\sum_l b_l^\dagger b_l, \label{hp3} \end{eqnarray} where $v_0=zv_s+z^\prime v_s^\prime$, $v_D=DS$, $v_t=\sqrt{v_s^{\prime 2} +v_D^2}$, and $z^\prime=6$ is the coordination number of the NNN sites. We have assumed a DM interaction along the $z$-axis. The phase factor is $\phi_{lm}=\nu_{lm}\phi$, where $\phi=\arctan(D/J^\prime)$ is the magnetic flux generated by the DM interaction on the NNN bonds, with $\nu_{lm}=\pm 1$ as in the Haldane model for electronic systems. The total flux enclosed in a unit cell vanishes, as depicted in Fig.~\ref{unit_cell_1}. In contrast to electronic systems, the phase factor $\phi$ depends on the parameters of the Hamiltonian. \begin{figure}[ht] \centering \includegraphics[width=2in]{unit_cell_1} \caption{Color online. The magnetic fluxes threading the honeycomb lattice, generated by a NNN DM interaction; $i,~j,~k$ label sites on the triangular plaquettes and give rise to a nonzero spin chirality $\chi={\bf S}_i\cdot {\bf S}_j\times {\bf S}_k$.} \label{unit_cell_1} \end{figure} The Bogoliubov Hamiltonian is given by \begin{eqnarray} \mathcal{H}_B(\bold k)&=h_0\sigma_0 +h_x\sigma_x + h_y\sigma_y + h_z\sigma_z, \end{eqnarray} where $h_0= 3v_s-2v_t\cos\phi g_\bold{k}$, $h_x=-v_s\sum_\mu\cos \bold{k}\cdot\boldsymbol{\delta}_\mu$, $h_y=-v_s\sum_\mu\sin \bold{k}\cdot\boldsymbol{\delta}_\mu$, and $h_z=-2v_t\sin\phi \rho_\bold{k}$, with $\rho_\bold{k}=\sum_\mu\sin \bold{k}\cdot\boldsymbol{\rho}_\mu$. Expanding near the Dirac points, a gap is generated at ${\bf K}_\pm$, and the full Hamiltonian for $J^\prime =0$ ($\phi=\pi/2$ in this case) becomes \begin{eqnarray} \mathcal{H}_B(\bold q)&=3{v}_s\sigma_0+ \tilde{v}_s(\sigma_x q_x- \tau\sigma_y q_y) +m\tau\sigma_z, \label{mme1} \end{eqnarray} where $m=3\sqrt{3}v_D$.
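As an illustrative check of the gap $|\Delta|=2|m|=6\sqrt{3}v_D$ implied by Eq.~\ref{mme1}, one can diagonalize the $2\times2$ bulk Hamiltonian directly at the zone corner (Python/numpy sketch with the arbitrary choices $v_s=1$, $v_D=0.1$):

```python
import numpy as np

# Bulk Bogoliubov Hamiltonian h0*s0 + hx*sx + hy*sy + hz*sz for J' = 0 (phi = pi/2),
# with the delta and rho vectors quoted in the text (lattice constant a = 1).
deltas = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])
rhos   = np.array([[-1.5, -np.sqrt(3)/2], [1.5, -np.sqrt(3)/2], [0.0, np.sqrt(3)]])
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_B(k, vs=1.0, vD=0.1):
    h0 = 3*vs
    hx = -vs*np.cos(deltas @ k).sum()
    hy = -vs*np.sin(deltas @ k).sum()
    hz = -2*vD*np.sin(rhos @ k).sum()      # DM term; vanishes for vD = 0
    return h0*s0 + hx*sx + hy*sy + hz*sz

K = np.array([2*np.pi/3, 2*np.pi/(3*np.sqrt(3))])
E = np.linalg.eigvalsh(H_B(K))             # ascending eigenvalues
print(f"gap at K: {E[1] - E[0]:.6f}")      # 1.039230 = 6*sqrt(3)*0.1
```

Setting $v_D=0$ in the same routine closes the gap, recovering the Dirac node.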
\begin{figure} \centering \subfigure[\label{Edge}]{\includegraphics[width=.45\linewidth]{Edge}} \quad \subfigure[\label{Edge1}]{\includegraphics[width=.45\linewidth]{Edge1}} \caption{Color online. The energy bands for a one-dimensional strip of the honeycomb lattice for spin-$1/2$ in units of $J=1$. $(a)$ $J^\prime=0, D=0.5J,~\phi=\pi/2$. $(b)$ $J^\prime=D=0.5J,~\phi=\pi/4$. } \end{figure} \begin{figure}[ht] \centering \includegraphics[width=3in]{Edge_states_1} \caption{Color online. Schematics of magnon edge states in a topological magnon insulator material.} \label{Edge2} \end{figure} This model can be regarded as the bosonic analogue of the Haldane model in electronic systems \cite{adm}. In fermionic systems, there is a topological invariant which is quantized when the Fermi energy lies within the gap, such that the lower band is occupied. In the bosonic model, there is no Fermi energy and not all states are occupied; the bosons can condense at the Goldstone mode in the lower band. However, the topological invariant is, in principle, independent of the statistics of the particles. It merely predicts edge states in the vicinity of the bulk gap. For the magnon excitations, the Chern number $n_H= \rm{sign}(m)$ simply predicts a pair of counter-propagating magnon edge states in the vicinity of the bulk gap, as shown in Figs.~\ref{Edge} and \ref{Edge1} for $\phi=\pi/2$ and $\phi=\pi/4$ respectively. Hence, the Heisenberg (anti)ferromagnet on the honeycomb lattice realizes a topological magnon insulator with magnon edge states propagating at the edge of the sample, as depicted in Fig.~\ref{Edge2}. As mentioned above, the propagation of magnon edge states differs from that in electronic systems; therefore, they are potentially useful for technological devices and magnon spintronics. Besides, they should be accessible in many real quantum magnetic systems.
\subsection{XY ferromagnetic insulator: hardcore bosons} Dirac points occur in a variety of quantum honeycomb ferromagnetic insulators. Let us consider the XY model on the honeycomb lattice \begin{eqnarray} H=-2J\sum_{\langle lm\rangle}(S_l^xS_m^x+S_l^yS_m^y). \label{ham0} \end{eqnarray} The ground state of this model is an ordered ferromagnet or N\'eel state in the $xy$ plane. Choosing the $S_x$ quantization axis, the momentum space Hamiltonian in linear spin wave theory is generally written as \begin{eqnarray} H=&\mathcal{E}_c+ \sum_{\bold{k},\mu,\nu}\mathcal{A}_{\mu\nu}(\bold{k}) b_{\bold{k} \mu}^\dagger b_{\bold{k} \nu}\label{main} +\mathcal{B}_{\mu\nu}(\bold{k}) b_{\bold{k} \mu}^\dagger b_{-\bold{k} \nu}^\dagger+ \mathcal{B}_{\mu\nu}^*(\bold{k})b_{-\bold{k} \mu} b_{\bold{k} \nu}, \end{eqnarray} where $\mu,\nu$ label the sublattices. Equation~\ref{main} can be written as \begin{eqnarray} H=\mathcal{E}_0+ \sum_{\bold{k}}\Psi^\dagger_\bold{k} \cdot \mathcal{H}(\bold{k})\cdot\Psi_\bold{k} +\rm{const.}, \label{hp}\end{eqnarray} where $\Psi^\dagger_\bold{k}= (\psi^\dagger_\bold{k}, \thinspace \psi_{-\bold{k}} )$, $\psi^\dagger_\bold{k}=(b_{\bold{k} 1}^{\dagger}\thinspace b_{\bold{k} 2}^{\dagger}\cdots \thinspace b_{\bold{k} N}^{\dagger})$, $N$ is the number of sublattices, $\mathcal{E}_0={\mathcal{E}_c}-S\sum_{\bold{k}\mu}\mathcal{A}_{\mu\mu}(\bold{k})$, and \begin{eqnarray} \mathcal{H}(\bold{k}) = \sigma_0 \otimes \boldsymbol{\mathcal{A}(\bold{k})} + \sigma_+\otimes\boldsymbol{\mathcal{B}(\bold{k})}+ \sigma_- \otimes\boldsymbol{\mathcal{B^*}(\bold{k})},\end{eqnarray} where $\boldsymbol{\mathcal{A}}(\bold{k})$ and $\boldsymbol{\mathcal{B}}(\bold{k})$ are $N\times N$ matrices.
This Hamiltonian is diagonalized by a matrix $\mathcal{U}(\bold{k})$ via the transformation $\Psi^\dagger_\bold{k}\to\mathcal{U}(\bold{k})\mathcal{P}(\bold{k})$, which satisfies the relations \begin{eqnarray}\mathcal{U}^\dagger \mathcal{H}(\bold{k}) \mathcal{U}= \epsilon(\bold{k}); \quad \mathcal{U}^\dagger \eta \mathcal{U}= \eta,\end{eqnarray} with $\eta=\rm{diag}(\mathbf{I}_{N\times N}, -\mathbf{I}_{N\times N} )$. Here, $\mathcal P(\bold{k})$ contains the Bogoliubov operators $\alpha_{\bold{k}}^{\dagger}$ and $ \beta_{\bold{k}}^{\dagger}$, and $\epsilon(\bold{k})$ is the diagonal matrix of eigenvalues. This is equivalent to saying that we need to diagonalize the non-Hermitian Bogoliubov Hamiltonian $\mathcal{H}_B(\bold{k})=\eta\mathcal{H}(\bold{k})$, where \begin{eqnarray} \mathcal{H}_B(\bold{k})&= \sigma_x\otimes\boldsymbol{\mathcal{B}}_-(\bold{k}) +i \sigma_y\otimes\boldsymbol{\mathcal{B}}_+(\bold{k})+\sigma_z\otimes\boldsymbol{\mathcal{A}(\bold{k})}, \label{Bogoliubovb} \end{eqnarray} and $\boldsymbol{\mathcal{B}}_\pm(\bold{k})=[ \boldsymbol{\mathcal{B}(\bold{k})}\pm\boldsymbol{\mathcal{B^*}(\bold{k})}]/2$. The eigenvalues of $\mathcal{H}_B(\bold{k})$ are given by $\eta\epsilon(\bold{k})=[\epsilon_\mu(\bold{k}), -\epsilon_\mu(\bold{k})]$, where \begin{eqnarray}\epsilon_\mu(\bold{k})=\sqrt{A_\mu^2(\bold{k})-|B_\mu(\bold{k})|^2}\label{posi},\end{eqnarray} and $A_\mu$ and $B_\mu$ are the eigenvalues of $\boldsymbol{\mathcal{A}}(\bold{k})$ and $\boldsymbol{\mathcal{B}}(\bold{k})$ respectively. For the XY model, $\mathcal{H}_B(\bold{k})$ is given by \begin{figure} \centering \subfigure[\label{XYband}]{\includegraphics[width=.45\linewidth]{XY_band}} \quad \subfigure[\label{HCB}]{\includegraphics[width=.45\linewidth]{HCB_edge}} \caption{Color online. $(a)$ The band structure of the XY honeycomb ferromagnet. $(b)$ The energy bands for a one-dimensional strip of the honeycomb lattice. The parameters are $J=0.5$,~$D=0.25J$, $S=1/2$.
} \end{figure} \begin{eqnarray} \mathcal{H}_B(\bold{k})&= 3v_s[\sigma_z \boldsymbol{\mathcal{A}}(\bold{k})+i\sigma_y \boldsymbol{\mathcal{B}}(\bold{k})], \label{ham3} \end{eqnarray} with $\boldsymbol{\mathcal{A}}(\bold{k})=\tau_0-\boldsymbol{\mathcal{B}}(\bold{k})$ and $\boldsymbol{\mathcal{B}}(\bold{k})=(\tau_+\gamma_\bold{k} +h.c.)/2$. The positive eigenvalues [Eq.~\ref{posi}] are given by \begin{eqnarray} \epsilon_\pm({\bold{k}})= 3{v}_s\sqrt{ 1 \pm |\gamma_\bold{k}|}. \label{hg} \end{eqnarray} The magnon excitations exhibit Dirac nodes at $\bold K_\pm$ with an energy of $3v_s$. To generate a gap, we follow the same approach as above. For simplicity, we ignore an external magnetic field and the NNN isotropic interaction $J^\prime$. Hence, the DM interaction that opens a gap has to be parallel to the $x$-quantization axis, \begin{eqnarray} H_{DM}=D\sum_{\langle\langle lm\rangle\rangle}\nu_{lm}(S_l^yS_m^z -S_l^zS_m^y). \label{ham4} \end{eqnarray} In the HP bosonic mapping, this corresponds to a magnetic flux of $\phi=\pi/2$. In the $S_x$ quantization axis, $S_y$ and $S_z$ are off-diagonal. The corresponding momentum space Hamiltonian is \begin{eqnarray} \mathcal{H}^{DM}_B(\bold{k})= -v_D\sigma_0\otimes\tau_z\rho_\bold{k}. \label{ham5} \end{eqnarray} The positive eigenvalues of the full Hamiltonian are given by \begin{eqnarray} &\epsilon_\pm({\bold{k}})= \Bigg[ \left( 3{v}_s \pm\sqrt{(v_D\rho_\bold{k})^2 +\left(\frac{3v_s|\gamma_\bold{k}|}{2}\right)^2}\right)^2-\left(\frac{3v_s|\gamma_\bold{k}|}{2}\right)^2\Bigg]^{1/2}. \label{ham6b} \end{eqnarray} At $\bold K_\pm$, a gap of magnitude $|\Delta|= 2|m|$ is generated, as shown in Fig.~\ref{XYband}. Similar to the Heisenberg model, there exist magnon edge states in the vicinity of the bulk gap, as depicted in Fig.~\ref{HCB}.
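Equation~\ref{hg} can be verified by brute force: diagonalizing the non-Hermitian matrix of Eq.~\ref{ham3} at a generic momentum reproduces the positive Bogoliubov branches. An illustrative numpy sketch (the momentum and $v_s=1$ are arbitrary choices; the $\tau$ blocks are written out as $2\times2$ matrices):

```python
import numpy as np

deltas = np.array([[0.5, np.sqrt(3)/2], [0.5, -np.sqrt(3)/2], [-1.0, 0.0]])

def bogoliubov_spectrum(k, vs=1.0):
    """Positive eigenvalues of 3 vs [sigma_z x A(k) + i sigma_y x B(k)]."""
    g = np.exp(1j*(deltas @ k)).sum() / 3.0
    B = 0.5*np.array([[0, g], [np.conj(g), 0]])      # (tau_+ gamma_k + h.c.)/2
    A = np.eye(2) - B                                 # tau_0 - B(k)
    H = 3*vs*np.block([[A, B], [-B, -A]])             # eta * H(k), non-Hermitian
    ev = np.linalg.eigvals(H).real
    return np.sort(ev[ev > 0])

k = np.array([0.7, 0.3])
g = abs(np.exp(1j*(deltas @ k)).sum() / 3.0)
print(bogoliubov_spectrum(k))                         # numerical spectrum
print(3*np.sqrt(1 - g), 3*np.sqrt(1 + g))             # closed form, Eq. (hg)
```

Since $\boldsymbol{\mathcal{A}}=\tau_0-\boldsymbol{\mathcal{B}}$ commutes with $\boldsymbol{\mathcal{B}}$, the square of this matrix is diagonalizable analytically, which is how Eq.~\ref{hg} arises.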
Surprisingly, Eqs.~\ref{ham0} and \ref{ham4} map exactly to interacting hardcore bosons on the honeycomb lattice via the Matsubara-Matsuda transformation \cite{mats} $S_l^x\to(b_l^\dagger+b_l)/2;~S_l^y\to(b_l^\dagger-b_l)/2i;~S_l^z=n_l-1/2$, where $n_l=b_l^\dagger b_l$. The hardcore boson Hamiltonian is given by \begin{eqnarray} &H= -t\sum_{\langle lm\rangle}\left( b^\dagger_l b_m + h.c.\right) - t^\prime e^{-i\phi}\sum_{\langle\langle lm\rangle\rangle}\nu_{lm}\bigg[ (b_l^\dagger-b_l) f_m- (b_m^\dagger-b_m) f_l\bigg], \label{ham6} \end{eqnarray} where $t\to J,~t^\prime\to D$, $f_l=n_l-1/2$, and $\phi=\pi/2$. This model [Eq.~\ref{ham6}] offers a physical realization of these magnon edge states using ultracold atoms trapped in honeycomb optical lattices. Unfortunately, this model cannot be simulated by quantum Monte Carlo (QMC) methods due to a sign problem. However, it is amenable to other numerical methods such as exact diagonalization. An external magnetic field gives rise to additional phases in the system. For instance, a magnetic field along the $z$-axis introduces superfluid and Mott insulating phases, whereas a staggered magnetic field introduces a charge-density-wave insulator. In this case, the corresponding hardcore boson model can be written as \begin{eqnarray} H&= -t\sum_{\langle lm\rangle}( b^\dagger_l b_m + h.c.) - t^\prime \sum_{\langle\langle lm\rangle\rangle}(e^{i\nu_{lm}\phi}b_l^\dagger b_m +h.c.) -\sum_l (\mu +U_l) n_l, \label{hamm6} \end{eqnarray} where $\mu$ is the chemical potential and $U_l$ is a staggered onsite potential on sublattices $A$ and $B$; they correspond to a uniform and a staggered magnetic field along the $z$-axis in the spin language. For $t^\prime=0$, Eq.~\ref{hamm6} is amenable to QMC simulation, as recently shown \cite{alex9}, and we have complemented the QMC results using the method presented in this paper \cite{sol}.
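A one-site sanity check of the Matsubara-Matsuda map is instructive: on the two-dimensional hardcore-boson Hilbert space, the mapped operators satisfy the spin-$1/2$ algebra. A minimal illustrative sketch:

```python
import numpy as np

# Single-site hardcore boson in the basis {|0>, |1>}: b is a 2x2 matrix.
b = np.array([[0.0, 1.0], [0.0, 0.0]])     # annihilation operator
bd = b.conj().T

# Matsubara-Matsuda map: S^x = (b^+ + b)/2, S^y = (b^+ - b)/(2i), S^z = n - 1/2.
Sx = (bd + b) / 2
Sy = (bd - b) / (2j)
Sz = bd @ b - np.eye(2) / 2

print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))                    # True: [Sx, Sy] = i Sz
print(np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 0.75 * np.eye(2))) # True: S(S+1) = 3/4
```

The hardcore constraint is what truncates the bosonic Hilbert space to two states per site, so the map is exact rather than a large-$S$ expansion.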
Hence, the HP spin wave method offers a simple approach to capture the topological properties of bosonic models that cannot be simulated by QMC. \subsection{Weyl magnon} Finally, we address Weyl magnons in 3D lattices. Recently, Weyl points were found on the breathing pyrochlore lattice governed by \cite{fei, up} \begin{eqnarray} H &=& J \sum_{\langle{ij}\rangle \in \rm{u}} {\bf S}_i \cdot {\bf S}_j + J' \sum_{\langle{ij}\rangle \in \rm{d}} {\bf S}_i \cdot {\bf S}_j + K \sum_{i} \left( {\bf S}_i \cdot \hat{z}_i \right)^2, \label{eq1} \end{eqnarray} where $J>0$ and $J'>0$ are the exchange couplings between the nearest-neighbour spins on the up-pointing and down-pointing tetrahedra respectively (see Ref. \cite{fei}), and $K$ is a single-ion anisotropy. It can be easy-axis ($K<0$) or easy-plane ($K>0$); in the former case the spins prefer the $z$-axis, whereas in the latter case the $xy$ plane is favoured. Although a comprehensive analysis of this model has been carried out in Ref. \cite{fei}, the criteria for the existence of Weyl magnons were not properly identified. Here, we argue that the breaking of the pseudo spin $\mathcal{T}$-symmetry is a condition for Weyl points to exist in quantum magnetic systems. The linear spin wave theory Hamiltonian derived in Ref. \cite{fei} has the general form given in Eq.~\ref{main}. Now, the Bogoliubov Hamiltonian can be cast into the form of Eq.~\ref{Bogoliubovb}. From this equation, we see that the momentum space Hamiltonian resembles that of electronic systems. In addition to the Weyl nodes obtained along the BZ paths for $K>0$ and $J\neq J'$ \cite{fei}, there are additional non-degenerate band-touching points at the corners of the BZ. The system should therefore realize a Dirac Hamiltonian at the corners of the BZ, as shown above, and a Weyl Hamiltonian near the Weyl points.
To check whether the pseudo spin ${\mathcal T}$-symmetry is preserved or broken at the Dirac or Weyl points, respectively, one should follow the approach outlined above. Basically, one has to expand $\boldsymbol{\mathcal{A}}(\bold{k})$ and $\boldsymbol{\mathcal{B}}_\pm(\bold{k})$ near the band-touching points and project the resulting Hamiltonian onto the bands. In principle, the Bogoliubov Hamiltonian [Eq.~\ref{Bogoliubovb}] near the Weyl points should break ${\mathcal T}$-symmetry. Therefore, one recovers the usual criteria for Weyl semimetals \cite{aab1}. In contrast to 2D systems, a gap is not needed to observe edge states in 3D Weyl magnons. Edge states exist in 3D Weyl magnons provided the momentum lies between the Weyl nodes \cite{fei}. \section{Conclusion} In summary, we have shown that physically realistic models of honeycomb quantum spin magnets exhibit nontrivial topology in the magnon excitations. In 2D ordered honeycomb quantum magnets, we showed that the non-degenerate band-touching points (at the corners of the Brillouin zone) in the magnon excitation spectrum realize a massless Dirac Hamiltonian. Opening a gap at the Dirac points requires the breaking of the inversion symmetry of the lattice. This leads to a nontrivial topological magnon insulator with magnon edge states propagating on the edges of the material, similar to topological insulators in electronic systems. These magnon edge states also manifest in hardcore bosons on the honeycomb lattice. The hardcore boson models proposed in Eqs.~\ref{ham6} and \ref{hamm6} should be studied by numerical approaches to further substantiate the existence of magnon edge modes in this system. Since there are many physical 2D honeycomb quantum magnetic materials in nature, these results suggest new experiments in ordered quantum magnets and ultracold atoms in honeycomb optical lattices, to search for magnon Dirac materials and topological magnon insulators on the honeycomb lattice.
For 3D ordered quantum magnets, Weyl points are possible in the magnon excitations \cite{fei}. We argued that the Bogoliubov Hamiltonian near the Weyl points should yield a low-energy Hamiltonian that breaks the time-reversal symmetry of the pseudo spins. At nonzero temperature and external magnetic field, there is a possibility of a topological magnon Hall effect \cite{hka, xc} and a spin Nernst effect \cite{hka1}, similar to the kagome, Lieb, and pyrochlore lattices \cite{hka, xc, hka1}. The analysis of the magnon Hall effect for the honeycomb lattice will be reported elsewhere. \section{Acknowledgments} The author would like to thank the African Institute for Mathematical Sciences. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. \section*{References}
\section{Introduction}\label{sec-Intro} Associated with any Lie algebra $\gg$, there is a canonical extension \begin{eqnarray}\label{adExt} \xymatrix{ 0 \ar[r] & \mathfrak{z}(\gg) \ar@{^{(}->}[r] & \gg \ar[r]^{\ad\quad} & \ad(\gg) \ar[r] & 0 } \end{eqnarray} of the linear Lie subalgebra $\ad(\gg)\leq\ggl(\gg)$ by the center $\mathfrak{z}(\gg)$. Such extensions are classified by $H_{CE}^2(\ad(\gg),\mathfrak{z}(\gg))$, the Chevalley-Eilenberg cohomology of $\ad(\gg)$ with values in $\mathfrak{z}(\gg)$. Recall the following theorem due to van Est: \begin{theorem}\cite{vanEst:1953}\label{vanEst} Let $G$ be a Lie group with Lie algebra $\gg$ and a representation on $V$. If $G$ is $k$-connected, then the map that differentiates group cochains into Chevalley-Eilenberg cochains induces isomorphisms \begin{eqnarray}\label{vanEstIso} \xymatrix{ \Phi :H_{Gp}^n(G,V) \ar[r] & H_{CE}^n(\gg,V) } \end{eqnarray} for all $n\leq k$ and is injective when $n=k+1$. \end{theorem} Observe that, being linear, $\ad(\gg)$ is integrable, and recall that one can always pick a $2$-connected integration, say $G$. Thus, Theorem~\ref{vanEst} implies there exists a unique cohomology class $[\int\omega_\gg]\in H_{Gp}^2(G,\mathfrak{z}(\gg))$ whose image under (\ref{vanEstIso}) is $[\omega_\gg]\in H_{CE}^2(\ad(\gg),\mathfrak{z}(\gg))$, the cohomology class that corresponds to the extension (\ref{adExt}). Since $H_{Gp}^2(G,V)$ classifies extensions of $G$ by $V$, there is a unique extension \begin{eqnarray}\label{intExt} \xymatrix{ 1 \ar[r] & \mathfrak{z}(\gg) \ar[r] & \G \ar[r] & G \ar[r] & 1 } \end{eqnarray} corresponding to $[\int\omega_\gg]$, and $\G$ is a Lie group integrating $\gg$. The purpose of this article is to adapt this strategy (which we henceforth refer to as the \textit{van Est strategy} \cite{vanEst:1955,Crainic:2003}) to prove the integrability of strict Lie $2$-algebras.
A strict Lie $2$-group is a groupoid \begin{eqnarray}\label{ALie2Gp} \xymatrix{ \G\times_H\G \ar[r]^{\quad m} & \G \ar@<0.5ex>[r]^{s} \ar@<-0.5ex>[r]_{t} \ar@(l,d)[]_{\iota} & H \ar[r]^{u} & \G , } \end{eqnarray} in which the spaces of objects, arrows and composable arrows are Lie groups, and all of whose structural morphisms are Lie group homomorphisms. Differentiating the whole structure, one gets a (strict) Lie $2$-algebra \begin{eqnarray}\label{ALie2Alg} \xymatrix{ \gg_1\times_\hh \gg_1 \ar[r]^{\quad \hat{m}} & \gg_1 \ar@<0.5ex>[r]^{\hat{s}} \ar@<-0.5ex>[r]_{\hat{t}} \ar@(l,d)[]_{\hat{\iota}} & \hh \ar[r]^{\hat{u}} & \gg_1 . } \end{eqnarray} In \cite{Sheng_Zhu1:2012}, it is proven using the path method that all finite-dimensional Lie $2$-algebras are the infinitesimal counterparts of Lie $2$-groups. In the sequel, we present a cohomological proof of this fact. Such an approach still works in infinite dimensions and was historically used to construct the first example of a non-integrable Lie algebra \cite{vanEst_Korthagen:1964}; thus, it bears the potential to improve our current understanding of the Lie theory of other categorified objects (see, \textit{e.g.}, \cite{Bursztyn_Cabrera_delHoyo:2016,Stefanini:2008,Neeb:2002}). Let us list the necessary ingredients for the van Est strategy to run: \begin{itemize} \item[1)] The canonically associated adjoint extension (\ref{adExt}). \item[2)] Global and infinitesimal cohomology theories that classify extensions. \item[3)] A van Est map and theorem. \item[4)] That linear Lie algebras be integrable to $2$-connected Lie groups. \end{itemize} Lie $2$-algebras have a canonically associated adjoint representation (see Example~\ref{2ad} below). In \cite{Angulo1:2020,Angulo2:2020}, complexes whose second cohomology classify respectively extensions of Lie $2$-algebras and extensions of Lie $2$-groups are introduced. Each of these is the total complex of a triple complex of sorts. In order to describe them, let us fix notation.
First, recall that the categories of Lie $2$-algebras and Lie $2$-groups are respectively equivalent to the categories of crossed modules of Lie algebras and of Lie groups \cite{Baez_Crans:2004,Baez_Lauda:2004,Loday:1982}. \begin{definition}\label{AlgCrossMod} A \textit{crossed module of Lie algebras} is a Lie algebra morphism $\xymatrix{\gg \ar[r]^\mu & \hh}$ together with a Lie algebra action by derivations $\xymatrix{\Lie:\hh \ar[r] & \ggl(\gg)}$ satisfying \begin{align*} \mu(\Lie_y x) & =[y,\mu(x)], & \Lie_{\mu(x_0)}x_1 & =[x_0,x_1] \end{align*} for all $y\in\hh$ and $x,x_0,x_1\in\gg$. Following the convention in the literature, we refer to these equations respectively as equivariance and infinitesimal Peiffer. \end{definition} \begin{definition}\label{GpCrossMod} A \textit{crossed module of Lie groups} is a Lie group homomorphism $\xymatrix{G \ar[r]^i & H}$ together with a right action of $H$ on $G$ by Lie group automorphisms satisfying \begin{align*} i(g^h) & =h^{-1}i(g)h, & g_1^{i(g_2)} & =g_2^{-1}g_1g_2, \end{align*} for all $h\in H$ and $g,g_1,g_2\in G$, where we write $g^h$ for $h$ acting on $g$. Following the convention in the literature, we refer to these equations respectively as equivariance and Peiffer. \end{definition} Representations of both Lie $2$-algebras and Lie $2$-groups take values in so-called $2$-vector spaces. These are (flat) abelian objects in either category, which, in crossed module presentation, correspond simply to $2$-term complexes of vector spaces $\xymatrix{W \ar[r]^\phi & V}$. The category $GL(\phi)$ of linear invertible self-functors of a $2$-vector space and homomorphic natural transformations has the structure of a Lie $2$-group, whose Lie $2$-algebra $\ggl(\phi)$ is the category of linear functors and linear natural transformations (see Subsection~\ref{sss-LinAndRep} for details). Respectively, representations are by definition maps of Lie $2$-groups to $GL(\phi)$ and maps of Lie $2$-algebras to $\ggl(\phi)$.
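As a toy illustration of Definition~\ref{AlgCrossMod}, the equivariance and infinitesimal Peiffer identities can be verified numerically for the simplest example $\gg=\hh=\mathfrak{so}(3)$ with $\mu=\mathrm{id}$ and $\Lie_yx=[y,x]$ (an example of our choosing, not taken from the references, where both identities hold trivially):

```python
import numpy as np

rng = np.random.default_rng(0)

def so3():
    """A random element of so(3): a 3x3 antisymmetric matrix."""
    a = rng.standard_normal((3, 3))
    return a - a.T

def bracket(a, b):
    """Matrix commutator, the Lie bracket of so(3)."""
    return a @ b - b @ a

# Crossed module g -> h with g = h = so(3), mu = id, and L_y x = [y, x].
mu = lambda x: x
L = bracket

y, x, x0, x1 = so3(), so3(), so3(), so3()
print(np.allclose(mu(L(y, x)), bracket(y, mu(x))))   # True: equivariance
print(np.allclose(L(mu(x0), x1), bracket(x0, x1)))   # True: infinitesimal Peiffer
```

Any Lie algebra with $\mu=\mathrm{id}$ and the adjoint action works the same way; nontrivial crossed modules arise when $\mu$ has kernel or cokernel.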
Lastly, we assume the following unconventional notation for the spaces of $p$-composable arrows: \begin{align}\label{p-comp} \G_p & :=\lbrace(\gamma_1,...,\gamma_p)\in\G^p:s(\gamma_k)=t(\gamma_{k+1}),\quad\forall k\rbrace , & \textnormal{ and }\quad \gg_p & :=\lbrace(\xi_1,...,\xi_p)\in\gg_1^p:\hat{s}(\xi_k)=\hat{t}(\xi_{k+1}),\quad\forall k\rbrace . \end{align} We are ready to define the three-dimensional lattices of vector spaces underlying the definition of the complexes of Lie $2$-algebra and Lie $2$-group cochains taking values in the $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$. For a Lie $2$-algebra (\ref{ALie2Alg}) with associated crossed module $\xymatrix{\gg \ar[r] & \hh}$, set \begin{eqnarray}\label{Alg3dimLat} C^{p,q}_r(\gg_1,\phi):=\bigwedge^q\gg_p^*\otimes\bigwedge^r\gg^*\otimes W & \textnormal{ for } r>0, \textnormal{ and } & C^{p,q}_0(\gg_1,\phi):=\bigwedge^q\gg_p^*\otimes V, \end{eqnarray} where $\gg_0=\hh$. For a Lie $2$-group (\ref{ALie2Gp}) with associated crossed module $\xymatrix{G \ar[r] & H}$, set \begin{eqnarray}\label{Gp3dimLat} C^{p,q}_r(\G,\phi):=C(\G^q_p\times G^r,W) & \textnormal{ for } r>0, \textnormal{ and } & C^{p,q}_0(\G,\phi):=C(\G^q_p,V), \end{eqnarray} where $\G_0 =H$, $\G_1=\G$ and $C(X,A)$ is the vector space of $A$-valued smooth functions. These lattices come together with a three-dimensional \textit{grid} of maps that is a complex in each direction; in the Lie $2$-algebra case, the grid is built out of Chevalley-Eilenberg complexes, while in the Lie $2$-group case, the grid is built out of groupoid cochain complexes (see Subsection~\ref{sss-Cxs} for details). We refrain from calling either grid a triple complex because not all differentials commute with one another. In each case, two of the building differentials commute only up to homotopy (or up to isomorphism when $r=0$).
In \cite{Angulo1:2020,Angulo2:2020}, it is explained how adding the homotopies to the total differential $\nabla$ makes up for this defect, ultimately turning \begin{eqnarray}\label{Cxs} C^{n}_\nabla(\gg_1,\phi)=\bigoplus_{p+q+r=n}C^{p,q}_r(\gg_1,\phi) & \textnormal{ and } & C^{n}_\nabla(\G,\phi)=\bigoplus_{p+q+r=n}C^{p,q}_r(\G,\phi) \end{eqnarray} into actual complexes. The fundamental property of the complexes (\ref{Cxs}), as mentioned, is that their second cohomology classifies extensions. In fact, the equivalence happens at the level of cocycles, \textit{i.e.}, a $2$-cocycle in either $C^{2}_{\nabla}(\gg_1,\phi)$ or $C^{2}_{\nabla}(\G,\phi)$ uniquely defines an extension by the $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$, and two such extensions are isomorphic if and only if the cocycles are cohomologous. Thus, the map that \textit{linearizes} a Lie $2$-group extension induces a map \begin{eqnarray}\label{2vanEst2} \xymatrix{ \Phi:C_\nabla^2(\G,\phi) \ar[r] & C_\nabla^2(\gg_1,\phi) } \end{eqnarray} whenever $\gg_1$ is the Lie $2$-algebra of $\G$. Due to its nature, we are bound to call $\Phi$ the van Est map. This map can be proved to be assembled from groupoid van Est maps (see Section~\ref{sec-Theos}). Recall that the van Est theorem (Theorem~\ref{vanEst}) admits an extension to Lie groupoids. \begin{theorem}\cite{Crainic:2003}\label{Crainic-vanEst} Let $\xymatrix{\G \ar@<0.5ex>[r] \ar@<-0.5ex>[r] & M}$ be a Lie groupoid with Lie algebroid $A$ and a representation on the vector bundle $E$. If the source fibres of $\G$ are $k$-connected, the map that differentiates groupoid cochains into algebroid cochains induces isomorphisms \begin{eqnarray}\label{Crainic-vanEstIso} \xymatrix{ \Phi:H_{Gpd}^n(\G,E) \ar[r] & H_{CE}^n(A,E) } \end{eqnarray} for all $n\leq k$ and is injective for $n=k+1$. \end{theorem} We refer to Theorem~\ref{Crainic-vanEst} as the Crainic-van Est theorem in the sequel.
The Crainic-van Est theorem can be rephrased as a vanishing result for the cohomology of the mapping cone of $\Phi$ (see Proposition~\ref{ConeCoh}). \begin{theorem}\label{Crainic-vanEstRephrased} If $\G$ is source $k$-connected, then \begin{eqnarray*} H^n(\Phi)=(0),\quad\textnormal{for all } n\leq k, \end{eqnarray*} where $H^\bullet(\Phi)$ is the cohomology of the mapping cone of $\Phi$. \end{theorem} One can combine the fact that $\Phi$ is assembled from groupoid van Est maps and Theorem~\ref{Crainic-vanEstRephrased} to prove a van Est type theorem. To see how, let us momentarily take $W=(0)$. In this case, the three-dimensional grids (\ref{Alg3dimLat}) and (\ref{Gp3dimLat}) collapse to honest double complexes \begin{eqnarray*} \xymatrix{ \vdots & \vdots & \vdots & \\ C(H^2,V) \ar[r]^{\partial}\ar[u] & C(\G^2,V) \ar[r]^{\partial}\ar[u] & C(\G_2^2,V)\ar[r]\ar[u] & \dots \\ C(H,V) \ar[r]^{\partial}\ar[u]^\delta & C(\G,V) \ar[r]^{\partial}\ar[u]^\delta & C(\G_2,V) \ar[r]\ar[u]^\delta & \dots \\ V \ar[r]^{\partial}\ar[u]^\delta & V \ar[r]^{\partial}\ar[u]^\delta & V \ar[r]\ar[u]^\delta & \dots } & \xymatrix{ & \\ & \\ & \\ \ar[r]^\Phi & } & \xymatrix{ \vdots & \vdots & \vdots & \\ \bigwedge^2\hh^*\otimes V \ar[r]^{\partial}\ar[u] & \bigwedge^2\gg_1^*\otimes V \ar[r]^{\partial}\ar[u] & \bigwedge^2\gg_2^*\otimes V\ar[r]\ar[u] & \dots \\ \hh^*\otimes V \ar[r]^{\partial}\ar[u]^\delta & \gg_1^*\otimes V \ar[r]^{\partial}\ar[u]^\delta & \gg_2^*\otimes V \ar[r]\ar[u]^\delta & \dots \\ V \ar[r]^{\partial}\ar[u]^\delta & V \ar[r]^{\partial}\ar[u]^\delta & V \ar[r]\ar[u]^\delta & \dots } \end{eqnarray*} where $\Phi$ is defined columnwise by $\xymatrix{\Phi_p:C_{Gp}^\bullet(\G_p,V) \ar[r] & C_{CE}^\bullet(\gg_p,V)}$, the usual van Est map for $\G_p$.
Now, $\Phi$ is a map of double complexes if and only if \begin{align*} \xymatrix{ C(\Phi_0) \ar[r] & C(\Phi_1) \ar[r] & C(\Phi_2) \ar[r] & \dots } \end{align*} is a double complex, and the first page of the spectral sequence of its filtration by columns is \begin{align}\label{1stPage} \xymatrix{ & \vdots & \vdots & \vdots & \\ & H^2(\Phi_0) \ar[r] & H^2(\Phi_1) \ar[r] & H^2(\Phi_2) \ar[r] & \dots \\ E^{p,q}_1:& H^1(\Phi_0) \ar[r] & H^1(\Phi_1) \ar[r] & H^1(\Phi_2) \ar[r] & \dots \\ & H^0(\Phi_0) \ar[r] & H^0(\Phi_1) \ar[r] & H^0(\Phi_2) \ar[r] & \dots \\ } \end{align} Therefore, if we suppose that $H$ is $2$-connected and $\G$ is $1$-connected, $\Phi$ induces isomorphisms between the total cohomologies in all degrees less than or equal to $2$. The general case is not much more complicated. Indeed, when $W\neq(0)$, the restriction \begin{eqnarray}\label{Phi_p} \xymatrix{ \Phi_p:C^{p,\bullet}_\bullet(\G,\phi) \ar[r] & C^{p,\bullet}_\bullet(\gg_1,\phi) } \end{eqnarray} is a map of honest double complexes to which one can apply the above reasoning. The cohomology of $\Phi$ coincides with the total cohomology of a double complex of sorts, each of whose columns is the total complex of $\Phi_p$ (see~(\ref{Cllpsed}) below). In spite of not being an actual double complex, this object can be filtered by columns, and the first page of its spectral sequence essentially coincides with (\ref{1stPage}), ultimately allowing us to prove: \begin{theorem}\label{2vanEst} Let $\G$ be a Lie $2$-group with associated crossed module $\xymatrix{G \ar[r] & H}$, Lie $2$-algebra $\gg_1$ and a representation on the $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$. If $H$ and $G$ are both $2$-connected, the map that extends the linearization of extensions induces isomorphisms \begin{eqnarray}\label{2vanEstIso} \xymatrix{ \Phi:H_{\nabla}^n(\G,\phi) \ar[r] & H_{\nabla}^n(\gg_1,\phi) } \end{eqnarray} for all $n\leq 2$ and is injective for $n=3$.
\end{theorem} We can now run the van Est strategy. As for the final ingredient, the integrability of linear Lie $2$-algebras is proved in \cite{Sheng_Zhu2:2012}, and one can choose such an integration $\xymatrix{G \ar[r] & H}$ with both $H$ and $G$ $1$-connected. Since $1$-connected Lie groups are automatically $2$-connected, we conclude that there is a unique Lie $2$-group extension integrating the canonical adjoint extension of a Lie $2$-algebra, thus implying its integrability. The body of the article is dedicated to formalizing how the van Est strategy applies to the case of strict Lie $2$-algebras. In Section~\ref{sec-Pre}, we fix notation and state some basic facts. In Section~\ref{sec-TheMap}, we define the van Est map and prove it is a map of complexes. In Section~\ref{sec-Theos}, we prove a van Est type theorem realizing the van Est map as a composition of groupoid van Est maps. \section{Preliminaries}\label{sec-Pre} In this section, we settle notation by reviewing the notions necessary to formally define the complexes of Lie $2$-algebra and Lie $2$-group cochains with $2$-vector space coefficients, and recall the elements of homological algebra of which we make use below. \subsection{Lie 2-algebras, Lie 2-groups and their cohomology}\label{subsec-2Coh} \subsubsection{The equivalence with the categories of crossed modules}\label{sss-Equiv} In the sequel, we make no distinction between a Lie $2$-group or a Lie $2$-algebra and their corresponding crossed modules. For future reference, we make explicit the equivalence at the level of objects. Let $\G$ be a Lie $2$-group (\ref{ALie2Gp}). In order to make clear the difference between the group operation and the groupoid operation in $\G$, we adopt the following convention: \begin{eqnarray*} \gamma_1\vJoin\gamma_2, & \gamma_3\Join\gamma_4 \end{eqnarray*} stand respectively for the group multiplication and the groupoid multiplication whenever $(\gamma_1,\gamma_2)\in\G^2$ and $(\gamma_3,\gamma_4)\in\G\times_H\G$.
This notation is intended to reflect that we think of the group multiplication as being ``vertical'', and of the groupoid multiplication as being ``horizontal''. Since the source map is a surjective submersion and the unit map is a canonical splitting thereof, letting $G$ be the Lie subgroup $\ker s\leq\G$, we get $\G\cong G\times H$. The associated crossed module map is given by $\xymatrix{G \ar[r]^{t\vert_{\ker s}} & H}$. Thinking of the underlying $2$-vector space of a Lie $2$-algebra (\ref{ALie2Alg}) as an abelian Lie $2$-group, this construction yields a canonical isomorphism $\gg_1\cong\gg\oplus\hh$, where $\gg:=\ker\hat{s}$ and the crossed module map is $\xymatrix{\gg \ar[r]^{\hat{t}\vert_{\gg}} & \hh}$. In the Lie $2$-group case, the right action is given by conjugation by units in the group $\G$ \begin{align}\label{GpAction} g^h :=u(h)^{-1}\vJoin g\vJoin u(h), \end{align} for $h\in H$ and $g\in G$. We stress that the $-1$ power stands for the inverse with respect to the group multiplication $\vJoin$. In the Lie $2$-algebra case, the action by derivations is given by \begin{align}\label{AlgAction} \Lie_yx :=[u(y),x]_1, \end{align} for $y\in\hh$ and $x\in\gg$. Here, $[\cdot,\cdot]_1$ stands for the Lie bracket of $\gg_1$. Conversely, given a crossed module $\xymatrix{G \ar[r]^i & H}$ as in Definition~\ref{GpCrossMod}, the space of arrows of its associated Lie $2$-group $\G$ is defined to be the product $G\times H$ together with the structural maps \begin{align*} s(g,h)=h, & \qquad t(g,h)=hi(g), & \iota(g,h)=(g^{-1},hi(g)), & \qquad u(h)=(1,h), \end{align*} \begin{align}\label{GpStrMaps} (g',hi(g))\Join(g,h):=(gg',h) \end{align} for $h\in H$ and $g,g'\in G$.
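As a quick consistency check (a routine verification, included here for convenience), the formulae (\ref{GpStrMaps}) confirm that $\iota$ produces groupoid inverses: \begin{align*} \iota(g,h)\Join(g,h) & =(g^{-1},hi(g))\Join(g,h)=(gg^{-1},h)=u(h)=u(s(g,h)), \\ (g,h)\Join\iota(g,h) & =(g,h)\Join(g^{-1},hi(g))=(g^{-1}g,hi(g))=u(hi(g))=u(t(g,h)). \end{align*}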
Thinking of the underlying $2$-term complex of vector spaces of a crossed module $\xymatrix{\gg \ar[r]^\mu & \hh}$ as in Definition~\ref{AlgCrossMod} as an abelian crossed module of Lie groups, this construction yields a $2$-vector space $\gg_1:=\gg\oplus\hh$ with structural maps given by the same formulae, which we transcribe in additive notation \begin{align*} \hat{s}(x,y)=y, & \qquad \hat{t}(x,y)=y+\mu(x), & \hat{\iota}(x,y)=(-x,y+\mu(x)), & \qquad \hat{u}(y)=(0,y), \end{align*} \begin{equation}\label{AlgStrMaps} (x',y+\mu(x))\Join(x,y)=\hat{m}(x',y+\mu(x);x,y):=(x+x',y) \end{equation} for $y\in\hh$ and $x,x'\in\gg$. In the Lie group case, $\G$ is endowed with the group structure of the semi-direct product $G\rtimes H$ with respect to the $H$-action. Explicitly, the product is given by \begin{align}\label{ArrowsProduct} (g_1,h_1)\vJoin(g_2,h_2)=(g_1^{h_2}g_2,h_1h_2), \end{align} for $(g_1,h_1),(g_2,h_2)\in G\rtimes H$. In the Lie algebra case, $\gg_1$ is endowed with the bracket of the semi-direct sum $\gg\oplus_\Lie\hh$, explicitly given by \begin{align}\label{ArrowsBracket} [(x_0,y_0),(x_1,y_1)]_\Lie:=([x_0,x_1]+\Lie_{y_0}x_1-\Lie_{y_1}x_0,[y_0,y_1]), \end{align} for $(x_0,y_0),(x_1,y_1)\in\gg\oplus\hh$. \subsubsection{The general linear Lie 2-group and the linear Lie 2-algebra}\label{sss-LinAndRep} The General Linear Lie $2$-group $GL(\phi)$ \cite{Norrie:1987,Sheng_Zhu2:2012} is the Lie $2$-group which plays the r\^ole of the space of automorphisms of the $2$-vector space $\xymatrix{W\ar[r]^\phi & V}$.
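For instance (a routine verification, spelled out here for convenience), the equivariance axiom $\mu(\Lie_yx)=[y,\mu(x)]$ together with the Peiffer identity $\Lie_{\mu(x)}x'=[x,x']$ implies that $\hat{t}$ is a Lie algebra homomorphism with respect to the bracket (\ref{ArrowsBracket}): \begin{align*} \hat{t}([(x_0,y_0),(x_1,y_1)]_\Lie) & =[y_0,y_1]+\mu([x_0,x_1])+\mu(\Lie_{y_0}x_1)-\mu(\Lie_{y_1}x_0) \\ & =[y_0,y_1]+[\mu(x_0),\mu(x_1)]+[y_0,\mu(x_1)]+[\mu(x_0),y_1] \\ & =[y_0+\mu(x_0),y_1+\mu(x_1)]=[\hat{t}(x_0,y_0),\hat{t}(x_1,y_1)]. \end{align*}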
$GL(\phi)$ is by definition the crossed module \begin{align}\label{GLinCrossMod} \xymatrix{ \Delta:GL(\phi)_1 \ar[r] & GL(\phi)_0:A \ar@{|->}[r] & (I+A\phi ,I+\phi A), } \end{align} where \begin{align*} GL(\phi)_1 & =\lbrace A\in Hom(V,W):(I+A\phi ,I+\phi A)\in GL(W)\times GL(V)\rbrace, \\ GL(\phi)_0 & =\lbrace(F,f)\in GL(W)\times GL(V):\phi\circ F=f\circ\phi\rbrace, \end{align*} and the right action is given by \begin{align}\label{act} A^{(F,f)} & :=F^{-1}Af, & \textnormal{for }(F,f)\in GL(\phi)_0,\quad A\in GL(\phi)_1. \end{align} The group structure on the Whitehead group $GL(\phi)_1$ is given by \begin{align}\label{gpStr} A_1\odot A_2 & := A_1+A_2+A_1\phi A_2, & A_1,A_2\in GL(\phi)_1, \end{align} whose identity element is the $0$ map, and where the inverse of an element $A$ is given by $-A(I+\phi A)^{-1}=-(I+A\phi )^{-1}A$. The Lie $2$-algebra of $GL(\phi)$ is $\ggl(\phi)$, explicitly given by \cite{Sheng_Zhu2:2012} \begin{align}\label{LinCrossMod} \xymatrix{ \Delta' :\ggl(\phi)_1 \ar[r] & \ggl(\phi)_0:A \ar@{|->}[r] & (A\phi,\phi A), } \end{align} where \begin{align*} \ggl(\phi)_1 & =Hom(V,W), & \ggl(\phi)_0 & =\lbrace (F,f)\in\ggl(W)\oplus\ggl(V):\phi\circ F=f\circ\phi\rbrace, \end{align*} and the action is given by \begin{align}\label{linAct} \Lie^{\phi}_{(F,f)}A & :=FA-Af, & \textnormal{for }(F,f)\in\ggl(\phi)_0,\quad A\in\ggl(\phi)_1. \end{align} The Lie bracket on $\ggl(\phi)_1$ is given by \begin{align}\label{linBrack} [A_1,A_2]_\phi & := A_1\phi A_2-A_2\phi A_1, & A_1,A_2\in\ggl(\phi)_1. \end{align} Regarding the maps (\ref{GLinCrossMod}) and (\ref{LinCrossMod}) as diagonal homomorphisms to $GL(W\oplus V)$ and $\ggl(W\oplus V)$, one can deduce the formula for the exponential $\xymatrix{\exp_{GL(\phi)_1} :\ggl(\phi)_1 \ar[r] & GL(\phi)_1}$, \begin{align}\label{TheExpOfGL(phi)} \exp_{GL(\phi)_1}(A)=A\sum_{n=0}^\infty\frac{(\phi A)^n}{(n+1)!}=\sum_{n=0}^\infty\frac{(A\phi)^n}{(n+1)!}A.
\end{align} A representation of a Lie $2$-group $\G$ with crossed module $\xymatrix{G \ar[r]^i & H}$ on $\xymatrix{W \ar[r]^\phi & V}$ is a morphism of Lie $2$-groups \begin{align}\label{GpRep} \xymatrix{ \rho:\G \ar[r] & GL(\phi). } \end{align} Explicitly, $\rho$ consists of Lie group representations of $H$ on $W$ and on $V$, \begin{align}\label{GpRho0} \xymatrix{ \rho_0:H \ar[r] & GL(\phi)_0\leq GL(W)\times GL(V):h \ar@{|->}[r] & (\rho_0^1(h),\rho_0^0(h)), } \end{align} intertwining $\phi$, \textit{i.e.}, such that $\phi\circ\rho_0^1(h)=\rho_0^0(h)\circ\phi$ for all $h\in H$; and a Lie group homomorphism \begin{align}\label{GpRho1} \xymatrix{ \rho_1:G \ar[r] & GL(\phi)_1, } \end{align} so that \begin{align}\label{GpRho1Homo} \rho_1(g_0g_1) & =\rho_1(g_0)+\rho_1(g_1)+\rho_1(g_0)\circ\phi\circ\rho_1(g_1), & \textnormal{for all }g_0,g_1\in G. \end{align} Due to the compatibility with the crossed module structure, the following relations hold for all $g\in G$ and $h\in H$: \begin{align}\label{GpRepEqns} \rho^0_0(i(g)) & =I+\phi\circ\rho_1(g), & \rho^1_0(i(g)) & =I+\rho_1(g)\circ\phi, & \rho_1(g^h) & =\rho_0^1(h)^{-1}\rho_1(g)\rho_0^0(h). \end{align} A representation of a Lie $2$-algebra $\gg_1$ with crossed module $\xymatrix{\gg \ar[r]^\mu & \hh}$ on $\xymatrix{W \ar[r]^\phi & V}$ is a morphism of Lie $2$-algebras \begin{align}\label{AlgRep} \xymatrix{ \rho:\gg_1 \ar[r] & \ggl(\phi). } \end{align} Explicitly, $\rho$ consists of Lie algebra representations of $\hh$ on $W$ and on $V$, \begin{align}\label{AlgRho0} \xymatrix{ \rho_0:\hh \ar[r] & \ggl(\phi)_0\leq\ggl(W)\oplus\ggl(V):y \ar@{|->}[r] & (\rho_0^1(y),\rho_0^0(y)), } \end{align} intertwining $\phi$, \textit{i.e.}, such that $\phi\circ\rho_0^1(y)=\rho_0^0(y)\circ\phi$ for all $y\in\hh$; and a Lie algebra homomorphism \begin{align}\label{AlgRho1} \xymatrix{ \rho_1:\gg \ar[r] & \ggl(\phi)_1.
} \end{align} Due to the compatibility with the crossed module structure, the following relations hold for all $x\in\gg$ and $y\in\hh$: \begin{align}\label{AlgRepEqns} \rho^0_0(\mu(x)) & =\phi\circ\rho_1(x), & \rho^1_0(\mu(x)) & =\rho_1(x)\circ\phi, & \rho_1(\Lie_yx) & =\rho_0^1(y)\rho_1(x)-\rho_1(x)\rho_0^0(y), \end{align} where $\Lie$ is the action of $\hh$ on $\gg$. Clearly, the differential of a Lie $2$-group representation $\rho$ yields a representation $\dot{\rho}$ of its Lie $2$-algebra. \begin{example}[The adjoint representation]\label{2ad} Define the adjoint representation of a Lie $2$-algebra on itself by: \begin{align*} \ad_1: & \xymatrix{\gg \ar[r] & \ggl(\mu)_1} & \ad_1(x)(u) & :=-\Lie_u x ,\quad x\in\gg, u\in\hh \\ \ad_0^1: & \xymatrix{\hh \ar[r] & \ggl(\gg)} & \ad_0^1(y)(v) & :=\Lie_y v ,\quad y\in\hh, v\in\gg \\ \ad_0^0: & \xymatrix{\hh \ar[r] & \ggl(\hh)} & \ad_0^0(y)(u) & :=[y,u], \quad y,u\in\hh. \end{align*} \end{example} \subsubsection{The cochain complexes with coefficients}\label{sss-Cxs} We define the differentials $\nabla$ for the complexes in (\ref{Cxs}). As briefly stated, both differentials are defined as a graded sum of the differentials in a three-dimensional grid whose vertices are the spaces in (\ref{Alg3dimLat}) and (\ref{Gp3dimLat}) together with \textit{difference maps} that account for the fact that not all these commute. Let $\G$ be a Lie $2$-group (\ref{ALie2Gp}) with associated crossed module $\xymatrix{G \ar[r]^i & H}$. The face maps of the simplicial structure on the nerve of $\G$ are given by \begin{align}\label{faceMaps} \partial _k (\gamma_0,...,\gamma_p)=\begin{cases} (\gamma_1,...,\gamma_p)& \textnormal{if }k=0 \\ (\gamma_0,...,\gamma_{k-2},\gamma_{k-1}\Join\gamma_k,\gamma_{k+1},...,\gamma_p)& \textnormal{if }0<k\leq p \\ (\gamma_0,...,\gamma_{p-1})& \textnormal{if }k=p+1,\end{cases} \end{align} for a given element $(\gamma_0,...,\gamma_p)\in\G_{p+1}$.
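In low degrees (a standard instance, included as a sanity check), the maps (\ref{faceMaps}) satisfy the simplicial identities $\partial_j\circ\partial_k=\partial_{k-1}\circ\partial_j$ for $j<k$; \textit{e.g.}, on $(\gamma_0,\gamma_1,\gamma_2)\in\G_3$, \begin{align*} \partial_0\partial_1(\gamma_0,\gamma_1,\gamma_2)=\partial_0(\gamma_0\Join\gamma_1,\gamma_2)=(\gamma_2)=\partial_0(\gamma_1,\gamma_2)=\partial_0\partial_0(\gamma_0,\gamma_1,\gamma_2). \end{align*}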
Under the canonical isomorphism $\G\cong G\times H$ (see Subsection~\ref{sss-Equiv}), the space of $p$-composable arrows $\G_p$ corresponds to $G^p\times H$; hereafter, we consider this isomorphism to be fixed and treat it as an equality when necessary. For each coordinate $\gamma_j$ of $\gamma=(\gamma_1,...,\gamma_p)\in\G_p\leq\G^p$, there is a corresponding $(g_j,h_j)\in G\times H$. According to Eq. (\ref{GpStrMaps}), the defining relation for $\G_p$ then reads $h_j=h_{j+1}i(g_{j+1})$, thus making the map $\xymatrix{\gamma \ar@{|->}[r] & (g_1,...,g_p;h_p)}$ an isomorphism with inverse $\xymatrix{(g_1,...,g_p;h) \ar@{|->}[r] & (g_1,hi(g_p...g_2);...;g_{p-1},hi(g_p);g_p,h)}$. Under this isomorphism, the face maps become \begin{align}\label{2GpFaceMaps} \partial_k(g_0,...,g_p;h)=\begin{cases} (g_1,...,g_p;h) & \textnormal{if }k=0 \\ (g_0,...,g_{k-2},g_kg_{k-1},g_{k+1},...,g_p;h) & \textnormal{if }0<k\leq p \\ (g_0,...,g_{p-1};hi(g_p))& \textnormal{if }k=p+1.\end{cases} \end{align} Let $\xymatrix{t_p:\G_p \ar[r] & H:(\gamma_1,...\gamma_p) \ar@{|->}[r] & t(\gamma_1)}$ be the map that returns the final target of a $p$-tuple of composable arrows. We remark that $t_p$ is a composition of face maps and hence a group homomorphism, and rewrite it as \begin{align} \xymatrix{ t_p:G^p\times H \ar[r] & H:(g_1,...,g_p;h) \ar@{|->}[r] & hi\Big{(}\prod_{j=0}^{p-1}g_{p-j}\Big{)}=hi(g_p...g_1). 
} \end{align} If $E$ is a vector bundle over $H$ and $\xymatrix{\G{}_s\times_H E \ar[r] & E:(\gamma,e) \ar@{|->}[r] & \Delta_\gamma e}$ is a left action along the projection, the differential of the complex of Lie groupoid cochains with values on $E$, $\xymatrix{\partial:C^\bullet(\G;E)\ar[r] & C^{\bullet+1}(\G;E)}$, where $C^p(\G;E):=\Gamma(t_p^*E)$, is defined by \begin{align}\label{LGpdDifferential} (\partial\varphi)(\gamma_0,...,\gamma_p) & =\Delta_{\gamma_0}\partial_0^*\varphi(\gamma_0,...,\gamma_p)+\sum_{k=1}^{p+1}(-1)^k\partial_k^*\varphi(\gamma_0,...,\gamma_p) \end{align} for $\varphi\in C^p(\G;E)$ and $(\gamma_0,...,\gamma_p)\in\G_{p+1}$. Analogously, when $E$ is a right representation, one uses the initial source map to define $C^p(\G;E):=\Gamma(s_p^*E)$, and, keeping the notation for the action, the differential for $\varphi\in C^p(\G;E)$ and $(\gamma_0,...,\gamma_p)\in\G_{p+1}$ is defined to be \begin{align}\label{RGpdDifferential} (\partial\varphi)(\gamma_0,...,\gamma_p) & =\sum_{k=0}^{p}(-1)^k\partial_k^*\varphi(\gamma_0,...,\gamma_p)+(-1)^{p+1}\Delta_{\gamma_p}\partial_{p+1}^*\varphi(\gamma_0,...,\gamma_p). \end{align} The differential of the Chevalley-Eilenberg complex of a Lie algebra $\gg$ with values in a representation $\rho$ on the vector space $V$, \begin{align*} \xymatrix{ \delta_{CE}:\bigwedge^\bullet\gg^*\otimes V \ar[r] & \bigwedge^{\bullet +1}\gg^*\otimes V, } \end{align*} is defined by \begin{align}\label{CEDiff} (\delta_{CE}\omega)(X) & =\sum_{j=0}^q(-1)^j\rho(x_j)\omega(X(j))+\sum_{m<n}(-1)^{m+n}\omega([x_m,x_n],X(m,n)) \end{align} for $\omega\in\bigwedge^q\gg^*\otimes V$ and $X=(x_0,...,x_q)\in\gg^{q+1}$. We adopt the convention that, for $0\leq a_1<...<a_k\leq q$, \begin{align}\label{conv} X(a_1,...,a_k) & :=(x_0,...,x_{a_1-1},x_{a_1+1},...,x_{a_k-1},x_{a_k+1},...,x_{q}), \end{align} as opposed to the usual $\hat{\cdot}$ notation. 
To define the complex of Lie $2$-algebra cochains with values on $\xymatrix{W \ar[r]^\phi & V}$, fix a Lie $2$-algebra (\ref{ALie2Alg}) with associated crossed module $\xymatrix{\gg \ar[r]^\mu & \hh}$, whose action we write $\Lie$, and fix a representation $\rho$ (\ref{AlgRep}). We think of $\rho$ as a triple $(\rho_0^1,\rho_0^0;\rho_1)$, where $\rho_0=(\rho_0^1,\rho_0^0)$ is the map (\ref{AlgRho0}) and $\rho_1$ is the map (\ref{AlgRho1}). Let $C^{p,q}_r(\gg_1,\phi)$ be given by (\ref{Alg3dimLat}). {\bf The p-direction} - For constant $r$, when $q=0$, one has the trivial complexes \begin{align*} \xymatrix{ C^{0,0}_r(\gg_1,\phi) \ar[r]^{\partial=0} & C^{1,0}_r(\gg_1,\phi) \ar[r]^{\partial=Id} & C^{2,0}_r(\gg_1,\phi) \ar[r]^{\partial=0} & C^{3,0}_r(\gg_1,\phi) \ar[r] & \cdots ; } \end{align*} for $q>0$, we set the complexes \begin{eqnarray*} \xymatrix{ \partial:\bigwedge^q\gg_\bullet^*\otimes V \ar[r] & \bigwedge^q\gg_{\bullet+1}^*\otimes V } & \textnormal{and} & \xymatrix{ \partial:\bigwedge^q\gg_\bullet^*\otimes\bigwedge^r\gg^*\otimes W \ar[r] & \bigwedge^q\gg_{\bullet+1}^*\otimes\bigwedge^r\gg^*\otimes W } \end{eqnarray*} to be the subcomplexes of multilinear alternating groupoid cochains of the complex of $\xymatrix{\gg _1^q \ar@<0.5ex>[r] \ar@<-0.5ex>[r] & \hh^q}$ with values in the trivial representation on $\xymatrix{\hh^q\times V \ar[r] & \hh^q}$ and $\xymatrix{\hh^q\times(\bigwedge^r\gg^*\otimes W) \ar[r] & \hh^q}$ respectively. {\bf The q-direction} - For constant $p$ and $r=0$, we set the complex \begin{align*} \xymatrix{ \delta:\bigwedge^\bullet\gg_p^*\otimes V \ar[r] & \bigwedge^{\bullet+1}\gg_p^*\otimes V } \end{align*} to be the Chevalley-Eilenberg complex of $\gg_p$ with values in the pull-back representation $\rho_p:=\hat{t}_p^*\rho_0^0$ on $V$.
For constant $r>0$, we set the complex \begin{align*} \xymatrix{ \delta:\bigwedge^\bullet\gg_p^*\otimes\bigwedge^r\gg^*\otimes W \ar[r] & \bigwedge^{\bullet+1}\gg_p^*\otimes\bigwedge^r\gg^*\otimes W } \end{align*} to be the Chevalley-Eilenberg complex of $\gg_p$ with values in the pull-back representation $\rho_p^{(r)}:=\hat{t}_p^*\rho^{(r)}$ on $\bigwedge^r\gg^*\otimes W$, where $\xymatrix{\rho^{(r)}:\hh \ar[r] & \ggl(\bigwedge ^r\gg ^*\otimes W)}$ is the dual representation given for $\omega\in\bigwedge^r\gg^*\otimes W$, $y\in\hh$ and $z_1,...,z_r\in\gg$ by \begin{align}\label{q-rep} \rho ^{(r)}(y)\omega(z_1,...,z_r) & :=\rho_0^1(y)\omega(z_1,...,z_r)-\sum_{k=1}^r\omega(z_1,...,\Lie _y z_k,...,z_r). \end{align} {\bf The r-direction} - For constant $p$ and $q$, we set the complex \begin{align*} \xymatrix{ \delta_{(1)}:\bigwedge^q\gg_p^*\otimes\bigwedge^{\bullet}\gg ^*\otimes W \ar[r] & \bigwedge^q\gg_p^*\otimes\bigwedge ^{\bullet+1}\gg^*\otimes W } \end{align*} to be the Chevalley-Eilenberg complex of $\gg$ with values in $\xymatrix{\rho_{(1)}:\gg \ar[r] & \ggl(\bigwedge^q\gg _p^*\otimes W)}$ given for $\omega\in\bigwedge^q\gg _p^*\otimes W$, $\Xi\in\gg_p^q$ and $z\in\gg$ by \begin{eqnarray}\label{rRep} \rho_{(1)}(z)\omega(\Xi):=\rho_0^1(\mu(z))\omega(\Xi). \end{eqnarray} Since the $0$th degree is $\bigwedge^q\gg_p^*\otimes V$ instead of $\bigwedge^q\gg_p^*\otimes W$, we specify the first differential to be \begin{align}\label{Alg1st r} \xymatrix{ \delta':\bigwedge^q\gg_p^*\otimes V \ar[r] & \bigwedge^q\gg_p^*\otimes\gg^*\otimes W } & :\delta'\omega(\Xi;z)=\rho_1(z)\omega(\Xi), \end{align} where $\Xi\in\gg_p^q$ and $z\in\gg$. 
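A short computation, included here as a consistency check, shows that $\delta_{(1)}\circ\delta'=0$, so that (\ref{Alg1st r}) does begin a complex in the $r$-direction: for $\omega\in\bigwedge^q\gg_p^*\otimes V$, $\Xi\in\gg_p^q$ and $z_1,z_2\in\gg$, \begin{align*} \delta_{(1)}(\delta'\omega)(\Xi;z_1,z_2) & =\rho_0^1(\mu(z_1))\rho_1(z_2)\omega(\Xi)-\rho_0^1(\mu(z_2))\rho_1(z_1)\omega(\Xi)-\rho_1([z_1,z_2])\omega(\Xi) \\ & =\big{(}\rho_1(z_1)\phi\rho_1(z_2)-\rho_1(z_2)\phi\rho_1(z_1)-\rho_1([z_1,z_2])\big{)}\omega(\Xi)=0, \end{align*} where we used (\ref{AlgRepEqns}) and the fact that $\rho_1$ is a Lie algebra homomorphism into $(\ggl(\phi)_1,[\cdot,\cdot]_\phi)$.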
{\bf Difference maps} - The $k$th difference map \begin{align*} \xymatrix{ \Delta_{k}:C^{p,q}_r(\gg_1,\phi) \ar[r] & C^{p+1,q+k}_{r-k}(\gg_1,\phi) } \end{align*} is defined for $\omega\in C^{p,q}_r(\gg_1,\phi)$, $Z\in\gg^{r-k}$ and $\Xi=(\xi_0,...,\xi_{q+k-1})\in\gg_{p+1}^{q+k}$ by \begin{align}\label{AlgHigherDiffMaps} (\Delta_k\omega)(\Xi;Z) & :=\sum_{a_1<...<a_k}(-1)^{a_1+...+a_k}\omega(\hat{\partial}_0\Xi(a_1,...,a_k);x_{a_1}^0,...,x_{a_k}^0,Z). \end{align} Here, we used the notation convention (\ref{conv}) and identified each $\xi_j$, $j\in\lbrace0,...,q+k-1\rbrace$, with $(x_j^0,...,x_j^p;y_j)$. In the special case $k=r$, the map $\Delta_r$ is essentially defined by Eq.~(\ref{AlgHigherDiffMaps}), but composed with $\phi$, so that it takes values in the right vector space. We sometimes drop the subindex of the first difference map and write $\Delta$ instead of $\Delta_1$. The three-dimensional grid knitted by $\partial$, $\delta$ and $\delta_{(1)}$ falls short of defining a triple complex because $\partial$ and $\delta$ do not commute. Since all difference maps are homogeneous of degree $+1$ with respect to the diagonal grading, it makes sense to use them as differentials: \begin{align}\label{AlgNabla} \nabla & :=\partial+(-1)^{p+q}(\delta+\delta_{(1)})+\sum_{k=1}^r\Delta_k . \end{align} The higher difference maps in Eq.~(\ref{AlgNabla}) make up for the non-commuting differentials, so that $\nabla$ squares to zero \cite{Angulo1:2020}. Thus defined, the complex $(C_\nabla(\gg_1,\phi),\nabla)$ verifies the following property.
\begin{theorem}\cite{Angulo1:2020}\label{H2Alg} $H^2_\nabla(\gg_1,\phi)$ is in one-to-one correspondence with isomorphism classes of extensions of the Lie $2$-algebra $\gg_1$ by the $2$-vector space $\xymatrix{W\ar[r]^\phi & V.}$ \end{theorem} To define the complex of Lie $2$-group cochains with values on $\xymatrix{W \ar[r]^\phi & V}$, fix a Lie $2$-group (\ref{ALie2Gp}) with associated crossed module $\xymatrix{G \ar[r]^i & H}$, for whose action we use exponential notation, and fix a representation $\rho$ (\ref{GpRep}). We think of $\rho$ as a triple $(\rho_0^1,\rho_0^0;\rho_1)$, where $\rho_0=(\rho_0^1,\rho_0^0)$ is the map (\ref{GpRho0}) and $\rho_1$ is the map (\ref{GpRho1}). Let $C^{p,q}_r(\G,\phi)$ be given by (\ref{Gp3dimLat}). {\bf The p-direction} - For constant $r$, when $q=0$, one has the trivial complexes \begin{align*} \xymatrix{ C^{0,0}_r(\G,\phi) \ar[r]^{\partial=0} & C^{1,0}_r(\G,\phi) \ar[r]^{\partial=Id} & C^{2,0}_r(\G,\phi) \ar[r]^{\partial=0} & C^{3,0}_r(\G,\phi) \ar[r] & \cdots } \end{align*} When $q>0$ and $r=0$, we set the complex \begin{align*} \xymatrix{ \partial:C(\G_{\bullet}^q,V) \ar[r] & C(\G_{\bullet+1}^q,V) } \end{align*} to be the cochain complex of the product groupoid $\xymatrix{\G^q \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & H^q}$ with respect to the trivial representation on the vector bundle $\xymatrix{H^q\times V \ar[r] & H^q}$.
For any other value of $r$, we set the complex \begin{align*} \xymatrix{ \partial:C(\G_{\bullet}^q\times G^r,W) \ar[r] & C(\G_{\bullet+1}^q\times G^r,W) } \end{align*} to be the cochain complex of the product groupoid $\xymatrix{\G^q\times G^r \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & H^q\times G^r}$ with respect to the left representation on the trivial bundle $\xymatrix{H^q\times G^r\times W \ar[r] & H^q\times G^r}$ given for $\gamma_k=(g_k,h_k)\in\G$ and $\vec{f}\in G^r$ by \begin{align}\label{p-rep} (\gamma_1,...,\gamma_q;\vec{f})\cdot(h_1,...,h_q;\vec{f},w) & :=(h_1i(g_1),...,h_qi(g_q);\vec{f},\rho_0^1(i(pr_G(\gamma_1\vJoin\cdots\vJoin\gamma_q)))^{-1}w). \end{align} {\bf The q-direction} - When $r=0$, we set the complex \begin{align*} \xymatrix{ \delta:C(\G_p^{\bullet},V) \ar[r] & C(\G_p^{\bullet+1},V) } \end{align*} to be the group complex of $\G_p$ with values in the pull-back of the representation $\rho_0^0$ along the final target map $t_p$; when $r\neq 0$, we set the complex \begin{align*} \xymatrix{ \delta:C(\G_p^{\bullet}\times G^r,W) \ar[r] & C(\G_p^{\bullet+1}\times G^r,W) } \end{align*} to be the cochain complex of the (right!) transformation groupoid $\xymatrix{\G_p\ltimes G^r \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & G^r}$ with respect to the right representation \begin{align}\label{q-Rep} (g_1,...,g_r;w)\cdot(\gamma;g_1,...,g_r) & :=(g_1^{t_p(\gamma)},...,g_r^{t_p(\gamma)};\rho_0^1(t_p(\gamma))^{-1}w) \end{align} on the trivial vector bundle $\xymatrix{G^r\times W \ar[r] & G^r}$, where $g_1,...,g_r\in G$, $\gamma\in\G_p$ and $w\in W$. When writing the groupoid differential, we use the shorthand $\rho^r_{\G_p}(\gamma;g)w$ instead of the lengthier Eq. (\ref{q-Rep}). 
{\bf The r-direction} - When $q=0$, we set the complex \begin{align*} \xymatrix{ \delta_{(1)}:C(G^{\bullet},W) \ar[r] & C(G^{\bullet+1},W) } \end{align*} to be the group complex of $G$ with values in the pull-back of the representation $\rho_0^1$ along the crossed module homomorphism $i$, except for the $0$th degree; when $q\neq 0$, we set the complex \begin{align}\label{rDiff} \xymatrix{ \delta_{(1)}:C(\G_p^q\times G^{\bullet},W) \ar[r] & C(\G_p^q\times G^{\bullet+1},W), } \end{align} again except for the $0$th degree, to be the cochain complex of the Lie group bundle $\xymatrix{\G_p^q\times G \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & \G_p^q}$ with respect to the left representation \begin{align}\label{r-Rep} (\gamma_1,...,\gamma_q;g)\cdot(\gamma_1,...,\gamma_q;w) & :=(\gamma_1,...,\gamma_q;\rho_0^1(i(g^{t_p(\gamma_1)...t_p(\gamma_q)}))w) \end{align} on the trivial vector bundle $\xymatrix{\G_p^q\times W \ar[r] & \G_p^q}$, where $\gamma_1,...,\gamma_q\in\G_p$, $g\in G$ and $w\in W$. Though right and left representations of a Lie group bundle coincide, we emphasize that Eq. (\ref{r-Rep}) is to be taken as a left representation. The missing maps $\xymatrix{\delta':V \ar[r] & C(G,W)}$ and $\xymatrix{\delta':C(\G_p^q,V) \ar[r] & C(\G_p^q\times G,W)}$ are defined respectively for $v\in V$, $g\in G$, $\omega\in C(\G_p^q,V)$ and $\gamma_1,...,\gamma_q\in\G_p$ by \begin{eqnarray}\label{Gp1st r} (\delta'v)(g):=\rho_1(g)v & \textnormal{ and } & (\delta'\omega)(\gamma_1,...,\gamma_q;g)=\rho_0^1(t_p(\gamma_1)...t_p(\gamma_q))^{-1}\rho_1(g)\omega(\gamma_1,...,\gamma_q). \end{eqnarray} {\bf Difference maps} - In order to define the first difference maps, we introduce the following notation. We think of an element $\vec{\gamma}\in\G_p^q$ as having components \begin{equation}\label{gammaMatrix} \vec{\gamma}=\begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_q \end{pmatrix}=\begin{pmatrix} \gamma_{11} & \gamma_{12} & ... & \gamma_{1p} \\ \gamma_{21} & \gamma_{22} & ...
& \gamma_{2p} \\ \vdots & \vdots & & \vdots \\ \gamma_{q1} & \gamma_{q2} & ... & \gamma_{qp} \end{pmatrix}=\begin{pmatrix} g_{11} & g_{12} & ... & g_{1p} & h_1 \\ g_{21} & g_{22} & ... & g_{2p} & h_2 \\ \vdots & \vdots & & \vdots & \vdots \\ g_{q1} & g_{q2} & ... & g_{qp} & h_q \end{pmatrix}, \end{equation} where the last equality is a notation abuse corresponding to the row-wise isomorphism $\G_{p}\cong G^{p}\times H$. Here, for each value of $a$ and $b$, $(g_{ab},h_{ab})$ is the image of $\gamma_{ab}$ under the canonical isomorphism $\G\cong G\rtimes H$ (see Subsection~\ref{sss-Equiv}). We proceed to define the difference maps \begin{align*} \xymatrix{ \Delta_{a,b}:C^{p,q}_r(\G,\phi) \ar[r] & C^{p+a,q+b}_{r+1-(a+b)}(\G,\phi) } \end{align*} for $(a,b)\in\lbrace(1,1),(r,1),(1,r)\rbrace$. If $a+b=r+1$, $\omega\in C(\G_p^q\times G^{a+b-1},W)$ and $\vec{\gamma}\in\G_{p+a}^{q+b}$, set \begin{align}\label{Delta-ab} (\Delta_{a,b}\omega)(\vec{\gamma}) & :=\rho_0^0(t_p(\partial_0^a\gamma_1)...t_p(\partial_0^a\gamma_{q+b}))\circ\phi\big{(}\omega(\partial_0^a\delta_0^b\vec{\gamma};c_{a,b}(\vec{\gamma}))\big{)}, \end{align} where $\xymatrix{c_{a,b}:\G_{p+a}^{q+b} \ar[r] & G^r}$ are respectively \begin{eqnarray}\label{most} c_{1,1}(\vec{\gamma}):=g_{11}, & c_{r,1}(\vec{\gamma}):=(g_{1r},g_{1r-1},...,g_{12},g_{11}), & \textnormal{and }c_{1,r}(\vec{\gamma}):=(g_{11}^{h_{21}...h_{r1}},g_{21}^{h_{31}...h_{r1}},...,g_{(r-1)1}^{h_{r1}},g_{r1}). \end{eqnarray} When $r>1$, \begin{align*} \xymatrix{ \Delta_{1,1}:C(\G_p^q\times G^{r},W) \ar[r] & C(\G_{p+1}^{q+1}\times G^{r-1},W) } \end{align*} is defined for $\omega\in C(\G_p^q\times G^r,W)$, $\vec{f}=(f_1,...,f_{r-1})\in G^{r-1}$ and $\vec{\gamma}\in\G_{p+1}^{q+1}$ as in Eq. 
(\ref{gammaMatrix}) by \begin{align*} (\Delta_{1,1}\omega)(\vec{\gamma};\vec{f}) =\rho_0^1(i(pr_G(\gamma_{21}\vJoin\cdots\vJoin & \gamma_{(q+1)1})))^{-1}\Big{[}\rho_0^1(i(g_{11}^{h_{21}...h_{(q+1)1}}))^{-1}\omega(\partial_0\delta_0\vec{\gamma};(\vec{f})^{h_{11}},g_{11})+ \\ & +\sum_{n=1}^{r-1}(-1)^{n+1}\big{(}\omega(\partial_0\delta_0\vec{\gamma};c_{2n-1}(\vec{f};\gamma_{11}))-\omega(\partial_0\delta_0\vec{\gamma};c_{2n}(\vec{f};\gamma_{11}))\big{)}\Big{]}, \end{align*} where $(\vec{f})^{h_{11}}:=(f_1^{h_{11}},...,f_{r-1}^{h_{11}})$ and $\xymatrix{c_{2n-1},c_{2n}:G^{r-1}\times \G \ar[r] & G^r}$ are respectively given by \begin{align}\label{c(11)} c_{2n-1}(\vec{f};\gamma_{11}) & :=\Big{(}f_1^{h_{11}i(g_{11})},...,f_{r-n-1}^{h_{11}i(g_{11})},g_{11}^{-1},f_{r-n}^{h_{11}},...,f_{r-2}^{h_{11}},f_{r-1}^{h_{11}}g_{11}\Big{)}, \\ c_{2n}(\vec{f};\gamma_{11}) & :=\Big{(}f_1^{h_{11}i(g_{11})},...,f_{r-n-1}^{h_{11}i(g_{11})},g_{11}^{-1},f_{r-n}^{h_{11}},...,f_{r-2}^{h_{11}},g_{11}\Big{)}, \end{align} for $0<n<r$. We often drop the subindex of the difference map $\Delta_{1,1}$ and write $\Delta$. In the Lie $2$-group case, $\partial$ and $\delta$ do not commute either and stand in the way of yielding a triple complex. Nonetheless, the difference maps are homogeneous of degree $+1$ with respect to the diagonal grading, and make up for the non-commuting differentials \cite{Angulo2:2020} by setting: \begin{align}\label{GpNabla} \nabla & :=(-1)^p\Big{(}\delta_{(1)}+\sum_{a+b>0}(-1)^{(a+1)(r+b+1)}\Delta_{a,b}\Big{)}, \end{align} where $\Delta_{1,0}:=\partial$, $\Delta_{0,1}:=\delta$, and $\Delta_{a,0}=\Delta_{0,b}=0$ whenever $a,b>1$. Thus defined, the complex $(C_\nabla(\G,\phi),\nabla)$ verifies the following property. \begin{theorem}\cite{Angulo2:2020}\label{H2Gp} $H^2_\nabla(\G,\phi)$ is in one-to-one correspondence with isomorphism classes of split extensions of the Lie $2$-group $\G$ by the $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$. 
\end{theorem} \begin{remark}\label{Incomplete} As of the writing of this paper, a general formula for $\Delta_{a,b}$ is still unavailable. In \cite{Angulo2:2020}, there are formulas for several families of difference maps and, in particular, the complex of Lie $2$-group cochains with values on a $2$-vector space is defined up to degree $5$. Due to the scope of our application, we include only the maps necessary to define the complex up to degree $2$. \end{remark} \subsection{A brief excursus on homological algebra}\label{subsec-HomAlg} As stated in the Introduction, we make use of the van Est theorem written in terms of the mapping cone. Though we could not find Proposition~\ref{ConeCoh} as stated in the literature, it follows from standard techniques that can be found in \cite{Weibel:1994}. Given a map $\xymatrix{\Phi:(A^\bullet,d_A) \ar[r] & (B^\bullet,d_B)}$, the mapping cone of $\Phi$ is defined to be \begin{align*} C(\Phi) & :=(A[1]\oplus B,d_\Phi ), \end{align*} where $d_\Phi=\begin{pmatrix} -d_A & 0 \\ \Phi & d_B \end{pmatrix}$. $C(\Phi)$ is a complex if and only if $\Phi$ is a map of complexes. We write $H(\Phi)$ for the cohomology of the mapping cone of $\Phi$. \begin{proposition}\label{ConeCoh} Let $\xymatrix{\Phi:A^\bullet \ar[r] & B^\bullet}$ be a map of complexes. The following are equivalent: \begin{itemize} \item[i)] $H^n(\Phi)=(0)$ for $n\leq k$. \item[ii)] The induced map in cohomology \begin{eqnarray*} \xymatrix{ \Phi^n :H^n(A) \ar[r] & H^n(B), } \end{eqnarray*} is an isomorphism for $n\leq k$, and it is injective for $n=k+1$.
\end{itemize} \end{proposition} \begin{proof} Clearly, $C(\Phi)$ fits in an exact sequence \begin{align}\label{ExactConeSeq} \xymatrix{0 \ar[r] & B \ar[r]^j & C(\Phi) \ar[r]^\pi & A[1] \ar[r] & 0, } \end{align} whose associated long exact sequence in cohomology is \begin{eqnarray*} \xymatrix{ 0 \ar[r] & \cancelto{0}{H^{-1}(B)} \ar[r]^{j^{-1}} & H^{-1}(\Phi) \ar[r]^{\pi^{-1}} & H^{-1}(A[1]) \ar[r] & H^0(B) \ar[r]^{j^0} & H^0(\Phi) \ar[r]^{\pi^0} & ... }\\ \qquad\qquad\xymatrix{ ... \ar[r]^{\pi^0\quad} & H^0(A[1]) \ar[r] & H^1(B) \ar[r]^{j^1} & H^1(\Phi) \ar[r]^{\pi^1} & H^1(A[1]) \ar[r] & ... } \end{eqnarray*} After observing that $H^n(A[1])=H^{n+1}(A)$, and that the connecting homomorphism is $\Phi^*$, the proof is straightforward. \end{proof} Proposition~\ref{ConeCoh} holds for the complexes defined by Eqs.~(\ref{AlgNabla}) and (\ref{GpNabla}); however, it will be relevant to us that the cone of $\Phi$ can be endowed with a triple grading inherited from (\ref{Cxs}). First, suppose that $\xymatrix{\Phi:A^{\bullet,\bullet} \ar[r] & B^{\bullet,\bullet}}$ is a map between double complexes; we define the \textit{mapping cone double} $C^{p,q}(\Phi)=A^{p,q+1}\oplus B^{p,q}$ by setting the $p$th column to be the mapping cone of $\Phi\vert_{A^{p,\bullet}}$. The product of the horizontal differentials defines a map of complexes between columns, and thus a double complex, if and only if $\Phi$ is a map of double complexes. $C^{\bullet,\bullet}(\Phi)$ fits in an exact sequence of double complexes analogous to (\ref{ExactConeSeq}); therefore, the conclusion of Proposition~\ref{ConeCoh} holds for its total cohomology $H_{tot}(\Phi)$ and the total cohomologies of $A$ and $B$. Next, consider a map $\xymatrix{\Phi:C_\nabla(\G,\phi) \ar[r] & C_\nabla(\gg_1,\phi)}$ that respects the triple grading of the complexes (\ref{Cxs}).
We define the \textit{mapping cone triple} $C^{p,q}_r(\Phi)=C^{p,q+1}_r(\G,\phi)\oplus C^{p,q}_r(\gg_1,\phi)$ by setting, for constant $p$, the $p$-page to be the mapping cone double of the map $\Phi_p$ (\ref{Phi_p}) and formally summing the remaining differentials and difference maps. The total differential $\nabla_\Phi$ squares to zero if and only if $\Phi$ is a map of complexes. $C^{\bullet,\bullet}_{\bullet}(\Phi)$ fits in an exact sequence analogous to (\ref{ExactConeSeq}); therefore, the conclusion of Proposition~\ref{ConeCoh} holds for its total cohomology $H_{\nabla}(\Phi)$ and the cohomologies of $\G$ and $\gg_1$. We point out that the cohomology of the mapping cone triple of $\Phi$ tautologically coincides with the total cohomology of \begin{align}\label{Cllpsed} \xymatrix{ \vdots & \vdots & \vdots & \vdots \\ C_{tot}^3(\Phi_0) \ar[r]\ar[u]\ar@{-->}[rrd]\ar@{.>}[rrrdd] & C_{tot}^3(\Phi_1) \ar[r]\ar[u]\ar@{-->}[rrd]\ar@{.>}[rrrdd] & C_{tot}^3(\Phi_2) \ar[r]\ar[u]\ar@{-->}[rrd] & C_{tot}^3(\Phi_3) \ar[r]\ar[u] & \dots \\ C_{tot}^2(\Phi_0) \ar[r]\ar[u]\ar@{-->}[rrd] & C_{tot}^2(\Phi_1) \ar[r]\ar[u]\ar@{-->}[rrd] & C_{tot}^2(\Phi_2) \ar[r]\ar[u]\ar@{-->}[rrd] & C_{tot}^2(\Phi_3) \ar[r]\ar[u] & \dots \\ C_{tot}^1(\Phi_0) \ar[r]\ar[u] & C_{tot}^1(\Phi_1) \ar[r]\ar[u] & C_{tot}^1(\Phi_2) \ar[r]\ar[u] & C_{tot}^1(\Phi_3) \ar[r]\ar[u] & \dots \\ C_{tot}^0(\Phi_0) \ar[r]\ar[u] & C_{tot}^0(\Phi_1) \ar[r]\ar[u] & C_{tot}^0(\Phi_2) \ar[r]\ar[u] & C_{tot}^0(\Phi_3) \ar[r]\ar[u] & \dots , } \end{align} each of whose columns is the total complex of the mapping cone double of $\Phi_p$ (\ref{Phi_p}), and where the maps $\xymatrix{C_{tot}^q(\Phi_p) \ar[r] & C_{tot}^{q+1-a}(\Phi_{p+a})}$ (exemplified respectively for $a\in\lbrace 1,2,3\rbrace$ by the horizontal, dashed and dotted arrows in (\ref{Cllpsed})) correspond to the sum $\sum_{b=0}^k\Delta_{a,b}$.
One may filter (\ref{Cllpsed}) by columns giving rise to a spectral sequence whose first page is $E^{p,q}_1=H_{tot}^q(\Phi_p)$. We close this section with the following lemma, which will be used repeatedly below. It follows from a simple application of spectral sequences (see, \textit{e.g.}, \cite{Weibel:1994}). \begin{lemma}\label{BelowDiag} Let $(E^{p,q}_r,d_r)$ be the spectral sequence of a doubly graded object $C^{\bullet,\bullet}$ that can be filtered by columns. If there is a page for which $E^{p,q}_r$ is zero for all $(p,q)$ satisfying $p+q\leq k$, then \begin{eqnarray*} H_{tot}^n(C)=(0)\quad\textnormal{for }n\leq k. \end{eqnarray*} \end{lemma} \section{The 2-van Est Map}\label{sec-TheMap} In this section, we define the van Est map and prove that it defines a map of complexes. Throughout, let $\G$ be a Lie $2$-group with Lie $2$-algebra $\gg_1$ and let $\rho$ be a representation on $\xymatrix{W \ar[r]^\phi & V}$. Define the van Est map \begin{align}\label{vanEstMap} \Phi & :\xymatrix{ C^{p,q}_r(\G,\phi) \ar[r] & C^{p,q}_r(\gg_1,\phi), } & (\Phi\omega)(\xi_1,...,\xi_q ;z_1,...,z_r) & :=\sum_{\sigma\in S_q}\sum_{\varrho\in S_r}\abs{\sigma}\abs{\varrho}\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}\omega, \end{align} where $\Xi=(\xi_1,...,\xi_q)\in\gg_p^q$, $Z=(z_1,...,z_r)\in\gg^r$, $\abs{\cdot}$ stands for the sign of the permutation, and \begin{align*} (\overrightarrow{R}_{\varrho(Z)}\omega)(\vec{\gamma}) & :=\frac{d}{d\tau_r}\rest{\tau_r=0}\cdots\frac{d}{d\tau_1}\rest{\tau_1=0}\omega(\vec{\gamma};\exp_G(\tau_1 z_{\varrho(1)}),...,\exp_G(\tau_r z_{\varrho(r)})), & \textnormal{for } & \vec{\gamma}\in\G_p^q; \\ \overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}\omega & =\frac{d}{d\lambda_q}\rest{\lambda_q=0}\cdots\frac{d}{d\lambda_1}\rest{\lambda_1=0}(\overrightarrow{R}_{\varrho(Z)}\omega)(\exp_{\G_p}(\lambda_1\xi_{\sigma(1)}),...,\exp_{\G_p}(\lambda_q\xi_{\sigma(q)})).
\end{align*} Regarding each of these $\overrightarrow{R}$ operators as compositions of independent $R_\bullet$ operators that lower degree by differentiating a single entry in the direction of the right-invariant vector field of $\bullet$ and evaluating at the identity, it is clear that $\Phi\omega\in C^{p,q}_r(\gg_1,\phi)$ and $\Phi$ is thus well-defined. Observe that, in the page $r=0$, the formula (\ref{vanEstMap}) coincides with the classic van Est map sending $\G_p$ cochains to $\gg_p$ cochains; analogously, in the page $q=0$, (\ref{vanEstMap}) coincides with the classic van Est map for the Lie group $G$. We show that, under the correspondences of Theorems~\ref{H2Alg} and \ref{H2Gp}, $\Phi$ is the map that takes in a Lie $2$-group extension (or an isomorphism class thereof) and returns its Lie $2$-algebra. Let $\vec{\omega}=(\varphi,\omega_0,\alpha,\omega_1)\in C^{1,1}_0(\G,\phi)\oplus C^{0,2}_0(\G,\phi)\oplus C^{0,1}_1(\G,\phi)\oplus C^{0,0}_2(\G,\phi)$. In the course of proving Theorem~\ref{H2Gp}, one learns that $\vec{\omega}$ is a $2$-cocycle if and only if \begin{align}\label{CocGpExt} \xymatrix{G{}_{\rho^1_0\circ i}\ltimes^{\omega_1}W \ar[r] & H{}_{\rho^0_0}\ltimes^{\omega_0}V:(g,w) \ar@{|->}[r] & (i(g),\phi(w)+\varphi{\begin{pmatrix} g \\ 1 \end{pmatrix}})} \end{align} with action \begin{align}\label{AlphaAction} (g,w)^{(h,v)} & =(g^h,\rho_0^1(h)^{-1}(w+\rho_1(g)v)+\alpha(h;g)) & \textnormal{for }(g,w)\in G\times W, (h,v)\in H\times V, \end{align} is a crossed module. Here, we use the isomorphism of Subsection~\ref{sss-Equiv} to cast $\varphi$ as a function of $G\rtimes H$. Also, $X{}_{\rho}\ltimes^{\omega}A$ stands for the twisted semi-direct product with respect to the representation $\rho$ of $X$ on $A$ defined by $\omega\in C_{Gp}^2(X,A)$, which is a Lie group if and only if $\omega$ is a Lie group $2$-cocycle. If $\vec{\omega}$ is a $2$-cocycle, in particular, so are $\omega_0$ and $\omega_1$. 
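For the reader's convenience, we recall the multiplication of the twisted semi-direct product explicitly; the precise placement of $\rho$ and $\omega$ varies in the literature, and with one common convention one has, on the underlying manifold $X\times A$,
\begin{align*}
(x_1,a_1)(x_2,a_2) & =(x_1x_2,\rho(x_2)^{-1}a_1+a_2+\omega(x_1,x_2)), & \textnormal{for } & x_1,x_2\in X,\ a_1,a_2\in A,
\end{align*}
associativity of which amounts to the identity $\rho(x_3)^{-1}\omega(x_1,x_2)+\omega(x_1x_2,x_3)=\omega(x_2,x_3)+\omega(x_1,x_2x_3)$, i.e.\ to the Lie group $2$-cocycle condition on $\omega$.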
Hence, $\Phi\omega_0$ and $\Phi\omega_1$ are Lie algebra $2$-cocycles respectively defining the Lie algebras of $H{}_{\rho^0_0}\ltimes^{\omega_0}V$ and $G{}_{\rho^1_0\circ i}\ltimes^{\omega_1}W$ as the twisted semi-direct sums $\hh_{\dot{\rho}^0_0}\oplus^{\Phi\omega_0}V$ and $\gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W$. Differentiating (\ref{CocGpExt}) at the identity yields \begin{align}\label{CocAlgExt} \xymatrix{\gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W \ar[r] & \hh{}_{\dot{\rho}^0_0}\oplus^{\Phi\omega_0}V:(x,w) \ar@{|->}[r] & (\mu(x),\phi(w)+d_{(1,1)}\varphi{\begin{pmatrix} x \\ 0 \end{pmatrix}})}, \end{align} and, clearly, $d_{(1,1)}\varphi=\Phi\varphi$. As for the action, each $(h,v)\in H\times V$ defines a Lie group automorphism \begin{align}\label{(h,v)-act} \xymatrix{(-)^{(h,v)}:G{}_{\rho^1_0\circ i}\ltimes^{\omega_1}W \ar[r] & G{}_{\rho^1_0\circ i}\ltimes^{\omega_1}W}. \end{align} Differentiating (\ref{(h,v)-act}) at the identity yields a Lie algebra automorphism, for which we use the same notation \begin{align*} \xymatrix{(-)^{(h,v)}:\gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W \ar[r] & \gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W}. \end{align*} Explicitly, for $(x,w)\in\gg\oplus W$, \begin{align*} (x,w)^{(h,v)}& =(x^h,\rho_0^1(h)^{-1}(w+\dot{\rho}_1(x)v)+\frac{d}{d\tau}\rest{\tau=0}\alpha(h;\exp_G(\tau x))), \end{align*} where we also abuse notation and write $x^h$ for the induced action of $H$ on $\gg$. By definition, the structural action of the Lie $2$-algebra of (\ref{CocGpExt}) is the differential at the identity of the group homomorphism \begin{align*} \xymatrix{H{}_{\rho^0_0}\ltimes^{\omega_0}V \ar[r] & Aut(\gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W):(h,v) \ar@{|->}[r] & (-)^{(h,v)^{-1}}}. 
\end{align*} Thus, the action by derivations of $\hh_{\dot{\rho}^0_0}\oplus^{\Phi\omega_0}V$ on $\gg{}_{\dot{\rho}^1_0\circ\mu}\oplus^{\Phi\omega_1}W$ is given by \begin{align*} \Lie_{(y,v)}(x,w)=(\Lie_y x,\dot{\rho}_0^1(y)w-\dot{\rho}_1(x)v+\frac{d}{d\lambda}\rest{\lambda=0}\frac{d}{d\tau}\rest{\tau=0}\alpha(\exp_H(\lambda y)^{-1};\exp_G(\tau x))) \end{align*} for $(y,v)\in\hh\oplus V$ and $(x,w)\in\gg\oplus W$. Together with this action, (\ref{CocAlgExt}) is naturally seen as an extension of $\gg_1$ by $\xymatrix{W \ar[r]^\phi & V}$ whose classifying $2$-cocycle under Theorem~\ref{H2Alg} is $(\Phi\varphi,\Phi\omega_0,\Phi\alpha,\Phi\omega_1)\in C_\nabla^2(\gg_1,\phi)$. In the remainder of this section, we prove that $\Phi$ commutes with all differentials and difference maps defining $\nabla$, thus defining a map of complexes. To illustrate the van Est strategy, we consider two cases separately: first, we deal with the case where the representation takes values in an honest vector space; then, we move on to the general case. \subsection{Vector space coefficients} One way of regarding the category of vector spaces as a subcategory of $2$-vector spaces is realizing a vector space $V$ as the groupoid that has $V$ as its space of objects and a single arrow attached to each object, the $2$-term complex of which is $\xymatrix{(0) \ar[r] & V}$. Having $W=(0)$ has the effect of collapsing (\ref{Alg3dimLat}) and (\ref{Gp3dimLat}) to the page $r=0$. Though in general $\partial$ and $\delta$ do not commute, they do so up to isomorphism in the $2$-vector space when $r=0$. Since two elements in a vector space are isomorphic if and only if they are the same, $\partial$ and $\delta$ commute and form double complexes. We prove that, in this case, $\Phi$ yields a map of double complexes.
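Concretely, with $W=(0)$ the only nontrivial page is $r=0$, where the formula (\ref{vanEstMap}) reduces to the familiar antisymmetrization
\begin{align*}
(\Phi\omega)(\xi_1,...,\xi_q) & =\sum_{\sigma\in S_q}\abs{\sigma}\overrightarrow{R}_{\sigma(\Xi)}\omega, & \textnormal{for } & \omega\in C(\G_p^q,V),\ \Xi=(\xi_1,...,\xi_q)\in\gg_p^q,
\end{align*}
applied simultaneously in each simplicial degree $p$.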
In the generic cube, \begin{align}\label{r=0Cube} \xymatrix@!0{ & & C(\G_{p}^{q+1},V) \ar[rrrrr]^\Phi\ar[ddll]_\partial & & & & & \bigwedge^{q+1}\gg_{p}^*\otimes V \ar[ddll]_\partial \\ \\ C(\G_{p+1}^{q+1},V) \ar[rrrrr]^\Phi & & & & & \bigwedge^{q+1}\gg_{p+1}^*\otimes V \\ & & C(\G_{p}^{q},V) \ar'[rrr]^\Phi[rrrrr]\ar'[u][uuu]^\delta\ar[ddll]_\partial & & & & & \bigwedge^{q}\gg_{p}^*\otimes V \ar[uuu]_\delta\ar[ddll]^\partial \\ \\ C(\G_{p+1}^{q},V) \ar[rrrrr]^\Phi \ar[uuu]^\delta & & & & & \bigwedge^{q}\gg_{p+1}^*\otimes V \ar[uuu]^\delta } \end{align} the left and right squares commute because they lie in their corresponding double complexes. Due to the observation that (\ref{vanEstMap}) coincides with the classic van Est map, the front and back squares commute as well. It remains to prove the following lemma. \begin{lemma} In (\ref{r=0Cube}), $\Phi\circ\partial=\partial\circ\Phi$. \end{lemma} \begin{proof} Each face map $\partial_k$ in a Lie $2$-group is a homomorphism whose derivative is the face map $\hat{\partial}_k$ in its Lie $2$-algebra; therefore, if $\xi\in\gg_{p+1}$, \begin{align}\label{expFaces} \partial_k(\exp_{\G_{p+1}}(\xi))=\exp_{\G_{p}}(\hat{\partial}_k\xi). \end{align} Let $\omega\in C(\G_p^q,V)$ and $\Xi=(\xi_1,...,\xi_q)\in\gg_{p+1}^q$.
Then \begin{align*} \overrightarrow{R}_\Xi(\partial\omega) & =\frac{d}{d\lambda_q}\rest{\lambda_q=0}...\frac{d}{d\lambda_1}\rest{\lambda_1=0}\sum_{k=0}^{p+1}(-1)^k\omega(\partial_k\exp_{\G_{p+1}}(\lambda_1\xi_{1}),...,\partial_k\exp_{\G_{p+1}}(\lambda_q\xi_{q})) \\ & =\sum_{k=0}^{p+1}(-1)^k\frac{d}{d\lambda_q}\rest{\lambda_q=0}...\frac{d}{d\lambda_1}\rest{\lambda_1=0}\omega(\exp_{\G_{p}}(\lambda_1\hat{\partial}_k\xi_1),...,\exp_{\G_{p}}(\lambda_q\hat{\partial}_k\xi_q)) =\sum_{k=0}^{p+1}(-1)^k\overrightarrow{R}_{\hat{\partial}_k\Xi}\omega, \end{align*} and \begin{align*} \Phi(\partial\omega)(\Xi) & =\sum_{\sigma\in S_q}\abs{\sigma}\overrightarrow{R}_{\sigma(\Xi)}(\partial\omega) =\sum_{\sigma\in S_q}\abs{\sigma}\sum_{k=0}^{p+1}(-1)^k\overrightarrow{R}_{\hat{\partial}_k\sigma(\Xi)}\omega =\sum_{k=0}^{p+1}(-1)^k(\Phi\omega)(\hat{\partial}_k\Xi)=\partial(\Phi\omega)(\Xi). \end{align*} \end{proof} In this case, we have the following van Est-type theorem. \begin{theorem}\label{2vE-vs} Let $\G$ be a Lie $2$-group with associated crossed module $\xymatrix{G \ar[r] & H}$, Lie $2$-algebra $\gg_1$ and a representation on the $2$-vector space $\xymatrix{(0) \ar[r] & V}$. If $H$ is $k$-connected, $G$ is $(k-1)$-connected and $\Phi$ is the van Est map (\ref{vanEstMap}), then the total cohomology of its mapping cone double vanishes: \begin{eqnarray*} H^n_{tot}(\Phi)=(0),\quad\textnormal{for all degrees } n\leq k. \end{eqnarray*} \end{theorem} \begin{proof} The first page of the spectral sequence of the filtration by columns of $C^{\bullet,\bullet}(\Phi)$ is (\ref{1stPage}). Recall that $\G_p\cong G^p\times H$; hence, using the K\"unneth formula and the connectedness hypotheses for $H$ and $G$, it follows that $\G_p$ is $(k-1)$-connected. Theorem~\ref{vanEst} implies that all columns in $E^{p,q}_1$ vanish below $k-1$, and so the result follows from Lemma~\ref{BelowDiag}.
\end{proof} Rephrasing with Proposition~\ref{ConeCoh}: \begin{corollary}\label{vanEstVs} Under the hypotheses of Theorem~\ref{2vE-vs}, the van Est map (\ref{vanEstMap}) induces isomorphisms \begin{eqnarray*} \xymatrix{ \Phi :H_\nabla^n(\G ,V) \ar[r] & H_\nabla^n(\gg_1 ,V), } \end{eqnarray*} for $n\leq k$, and it is injective for $n=k+1$. \end{corollary} As an application, we prove the following partial integrability result. \begin{theorem} Let $\gg_1$ be a Lie $2$-algebra with associated crossed module $\xymatrix{\gg \ar[r]^\mu & \hh}$. If \begin{align}\label{hyp} \gg\cap\mathfrak{c}(\hat{u}(\hh)) & =(0), \end{align} where $\mathfrak{c}(\hat{u}(\hh))$ is the centralizer of $\hat{u}(\hh)$ in $\gg_1$, then $\gg_1$ is integrable. \end{theorem} \begin{proof} We use the van Est strategy. Consider the exact sequence \begin{eqnarray}\label{vE2-Ext} \xymatrix{ 0 \ar[r] & \ker(\ad_1) \ar[d]\ar@{^{(}->}[r] & \gg \ar[d]_\mu\ar[r]^{\ad_1\quad} & \ad_1(\gg) \ar[d]\ar[r] & 0 \\ 0 \ar[r] & \ker(\ad_0) \ar@{^{(}->}[r] & \hh \ar[r]_{\ad_0\quad} & \ad_0(\hh) \ar[r] & 0, } \end{eqnarray} associated to the adjoint representation of Example~\ref{2ad}. If $0\neq x\in\gg$, then by Eq.~(\ref{hyp}) there exists a $y_x\in\hh$ such that $[x,\hat{u}(y_x)]_1=\ad_1(x)(y_x)\neq 0$. Consequently, $\ad_1(x)\neq 0$ in $Hom(\hh,\gg)$ for every $0\neq x\in\gg$, and $\ker(\ad_1)=(0)$. Let $[\omega]\in H_{\nabla}^2(\gg_1,\ker(\ad_0))$ be the class corresponding to (\ref{vE2-Ext}) under Theorem~\ref{H2Alg}. The image of any linear functor between $2$-vector spaces yields a Lie subgroupoid, and Lie $2$-subalgebras of $\ggl(\phi)$ can be integrated using exponentials \cite{Sheng_Zhu2:2012}; hence, $\xymatrix{\ad_1(\gg) \ar[r] & \ad_0(\hh)}$ is integrable to a Lie $2$-group $\G$ with associated crossed module $\xymatrix{G \ar[r] & H}$.
Choosing $G$ and $H$ to be $1$-connected, we may use Corollary~\ref{vanEstVs} to conclude \begin{align*} [\omega] & =\Phi[\smallint\omega], & \textnormal{for a unique } & [\smallint\omega]\in H_{\nabla}^2(\G ,\ker(\ad_0)). \end{align*} The extension of $\G$ by $\ker(\ad_0)$ that corresponds to $[\smallint\omega]$ under Theorem~\ref{H2Gp} integrates $\gg_1$. \end{proof} \begin{remark} One could cast Eq.~(\ref{hyp}) in terms of the isotropy Lie algebras of the action of $\hh$ on $\gg$ by asking equivalently that $\dim\lbrace y\in\hh :\Lie_y x=0\rbrace>0$ for all $x\in\gg$. \end{remark} \subsection{The general case} We devote the remainder of this section to proving that, when values are taken in a $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$ with $W\neq (0)$, the van Est map $\Phi$ (\ref{vanEstMap}) still defines a map of complexes. In particular, we show that $\Phi$ commutes with all differentials in Subsection~\ref{sss-Cxs} and all difference maps necessary to define the complexes (\ref{Cxs}) up to degree $2$. The proofs we present boil down to long and unfortunately unenlightening computations. For notational convenience, we adopt the following shorthands. For an index set $I=\lbrace 1,...,n\rbrace$, \begin{align*} \frac{d^I}{d\tau_I}\rest{\tau_I=0}:=\frac{d}{d\tau_n}\rest{\tau_n=0}\cdots\frac{d}{d\tau_1}\rest{\tau_1=0}. \end{align*} On the other hand, for any Lie algebra $\gg$, if $X=(x_1,...,x_n)\in\gg^n$, we define \begin{align*} \exp(\tau_I\cdot X):=(\exp_G(\tau_1x_1),...,\exp_G(\tau_nx_n))\in G^n, \end{align*} where $G$ is a Lie group integrating $\gg$. We also use the following partitions of the symmetric group $S_n$. First, \begin{align*} S_n=\bigcup_{a=1}^nS_{n-1}(m\vert a), \end{align*} where $S_{n-1}(m\vert a)$ is the set of permutations that map the $m$th element to $a$; in symbols, $S_{n-1}(m\vert a):=\lbrace\sigma\in S_n:\sigma(m)=a\rbrace$.
Each element $\sigma\in S_{n-1}(m\vert a)$ can be factored as $\sigma=\sigma'\circ\sigma^m_a$ where, \begin{align*} \sigma^m_a(j):=\begin{cases} a & \textnormal{if }j=m \\ j-1 & \textnormal{if }m<j\leq a \\ j+1 & \textnormal{if }a\leq j<m \\ j & \textnormal{otherwise.}\end{cases} \end{align*} The residual permutation $\sigma'$ leaves the $m$th element alone and shifts the remaining $n-1$ elements; thus, one can regard $\sigma'$ as belonging to the permutation group $S_{n-1}$, incidentally justifying the notation. Since $\sigma^m_a$ is a composition of $\abs{m-a}$ transpositions, $\abs{\sigma}=\abs{\sigma'}\abs{\sigma^m_a}=(-1)^{m-a}\abs{\sigma'}$. Iterating this process, for any $k<n$ and $a_0<...<a_{k-1}$, one can partition the symmetric group as \begin{align*} S_n=\bigcup_{a_0<...<a_{k-1}}\bigcup_{\varrho\in S_k}S_{n-k}(m\vert a_{\varrho(0)}...a_{\varrho(k-1)}), \end{align*} where $S_{n-k}(m\vert a_0...a_{k-1}):=\lbrace\sigma\in S_n:\sigma(m+j)=a_j,\forall 0\leq j<k\rbrace$. Each element $\sigma\in S_{n-k}(m\vert a_{\varrho(0)}...a_{\varrho(k-1)})$ can be factored as $\sigma=\sigma^{(r)}\circ\sigma^m_{a_0...a_{k-1}}\circ\varrho$, where $\varrho$ is interpreted to act only on $\lbrace a_0,...,a_{k-1}\rbrace$ and $\sigma^m_{a_0...a_{k-1}}=\sigma^{m+k-1}_{a_{k-1}}\circ\cdots\circ\sigma^{m+1}_{a_1}\circ\sigma^m_{a_0}$. Again, the residual permutation $\sigma^{(r)}$ leaves fixed the $k$ elements following $m$ and shifts the remaining $n-k$ elements, thus allowing it to be regarded as belonging to $S_{n-k}$. The sign is computed to be \begin{align*} \abs{\sigma}=\abs{\sigma^{(r)}}\abs{\sigma^m_{a_0...a_{k-1}}}\abs{\varrho}=(-1)^{km+\frac{(k-1)k}{2}-(a_0+...+a_{k-1})}\abs{\sigma^{(r)}}\abs{\varrho}. \end{align*} {\bf The r-direction} - The following results prove that the van Est map $\Phi$ (\ref{vanEstMap}) commutes with the differentials in the $r$-direction. 
\begin{lemma}\label{delta'} Let $\omega\in C^{p,q}_0(\G,\phi)$, then \begin{align*} \Phi(\delta'\omega)=\delta'(\Phi\omega)\in C^{p,q}_1(\gg_1,\phi). \end{align*} \end{lemma} \begin{proof} For $q=0$, $\omega=v\in V$ and $z\in\gg$, \begin{align*} \Phi(\delta'v)(z) & =\frac{d}{d\tau}\rest{\tau=0}(\delta'v)(\exp(\tau z))=\frac{d}{d\tau}\rest{\tau=0}\rho_1(\exp(\tau z))v=\dot{\rho}_1(z)v=(\delta'v)(z), \end{align*} and $\Phi$ is defined to be the identity when $(q,r)=(0,0)$. Since $t_p$ is a composition of face maps, it is a group homomorphism whose derivative is $\hat{t}_p$; therefore, if $\xi\in\gg_p$, \begin{align}\label{expFinTarget} t_p(\exp_{\G_p}(\xi))=\exp_{H}(\hat{t}_p\xi). \end{align} For $q>0$, let $\Xi=(\xi_1,...,\xi_q)\in\gg_p^q$, then setting $h_j:=\prod_{k=j}^q\exp(\lambda_k\hat{t}_p(\xi_k))=\exp(\lambda_{j}\hat{t}_p(\xi_{j}))...\exp(\lambda_q\hat{t}_p(\xi_q))\in H$, we compute \begin{align*} \overrightarrow{R}_{\Xi}R_z(\delta'\omega) & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d}{d\tau}\rest{\tau=0}(\delta'\omega)(\exp(\lambda_I\cdot\Xi);\exp(\tau z))=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^1(h_1)^{-1}\dot{\rho}_1(z)\omega(\exp(\lambda_I\cdot\Xi)) \\ & =\rho_0^1(h_2)^{-1}\dot{\rho}_1(z)R_{\xi_1}\omega(\exp(\lambda_{2}\xi_{2}),...,\exp(\lambda_q\xi_q)); \end{align*} inductively implying $\overrightarrow{R}_{\Xi}R_z(\delta'\omega)=\dot{\rho}_1(z)(\overrightarrow{R}_{\Xi}\omega)$. Thus, \begin{align*} \Phi(\delta'\omega)(\Xi;z) & =\sum_{\sigma\in S_q}\abs{\sigma}\overrightarrow{R}_{\sigma(\Xi)}R_z(\delta'\omega)=\sum_{\sigma\in S_q}\abs{\sigma}\dot{\rho}_1(z)(\overrightarrow{R}_{\sigma(\Xi)}\omega)=\dot{\rho}_1(z)\big{(}(\Phi\omega)(\Xi)\big{)}=\delta'(\Phi\omega)(\Xi;z). \end{align*} \end{proof} \begin{lemma}\label{delk} Let $G$ be a Lie group with Lie algebra $\gg$ and let $V$ be a vector space.
If $\omega\in C(G^p,V)$, $X=(x_0,...,x_p)\in\gg^{p+1}$, and $\partial_k$ is the $k$th face map (\ref{faceMaps}) with $0<k<p+1$, then for all $m<n$, \begin{align*} \sum_{\sigma\in S_{p-1}(k-1\vert mn)\cup S_{p-1}(k-1\vert nm)}\abs{\sigma}\overrightarrow{R}_{\sigma(X)}\partial_k^*\omega & =(-1)^{m+n}\sum_{\varrho\in S_{p-1}}\abs{\varrho}\overrightarrow{R}_{\varrho(X^{(k)}_{mn})}\omega, \end{align*} where we interpret $S_{p-1}$ as the space of bijections between the set of coordinates of $X(k-1,k)$ and the set of coordinates of $X(m,n)$, and $\varrho(X^{(k)}_{mn}):=(\varrho(x_0),...,\varrho(x_{k-2}),[x_m,x_n],\varrho(x_{k+1}),...,\varrho(x_p))$. \end{lemma} \begin{proof} Let $\sigma\in S_{p-1}(k-1\vert mn)$, then $\sigma=\sigma''\circ\sigma^{k-1}_{mn}$ and there exists a unique $\bar{\sigma}\in S_{p-1}(k-1\vert nm)$ given by $\bar{\sigma}=\sigma''\circ\sigma^{k-1}_{nm}$. By definition, if $\varphi\in C(G,V)$, \begin{align*} R_{[x_m,x_n]}\varphi & =\frac{d}{d\tau_2}\rest{\tau_2=0}\frac{d}{d\tau_1}\rest{\tau_1=0}\big{(}\varphi(\exp(\tau_1x_n)\exp(\tau_2x_m))-\varphi(\exp(\tau_1x_m)\exp(\tau_2x_n))\big{)}; \end{align*} hence, regarding $\sigma''$ as belonging to $S_{p-1}$, \begin{align*} \abs{\sigma}\overrightarrow{R}_{\sigma(X)}\partial_k^*\omega +\abs{\bar{\sigma}}\overrightarrow{R}_{\bar{\sigma}(X)}\partial_k^*\omega & =(-1)^{m+n}\abs{\sigma''}\overrightarrow{R}_{\sigma''(X^{(k)}_{mn})}\omega. \end{align*} \end{proof} \begin{proposition}\label{delta1} Let $r>0$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\delta_{(1)}\omega)=\delta_{(1)}(\Phi\omega)\in C^{p,q}_{r+1}(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} Let $\Xi=(\xi_1,...,\xi_q)\in\gg_p^q$ and $Z=(z_0,...,z_r)\in\gg^{r+1}$.
$\overrightarrow{R}_\Xi\overrightarrow{R}_Z(\delta_{(1)}\omega)$ has three types of terms corresponding respectively to the dual face maps $\delta_0^*$, $\delta_k^*$ for $1\leq k\leq r$ and $\delta_{r+1}^*$ in $\delta_{(1)}\omega$: \begin{align*} I & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\rho_0^1(i(\exp(\tau_0z_0)^{\prod_{j=1}^q t_p(\exp(\lambda_j\xi_j))}))\omega(\exp(\lambda_I\cdot\Xi);\exp(\tau_1z_1),...,\exp(\tau_rz_r)) \\ II & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\omega(\exp(\lambda_I\cdot\Xi);\exp(\tau_0z_0),...,\exp(\tau_kz_k)\exp(\tau_{k+1}z_{k+1}),...,\exp(\tau_rz_r)) \\ III & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\omega(\exp(\lambda_I\cdot\Xi);\exp(\tau_0z_0),...,\exp(\tau_{r-1}z_{r-1})). \end{align*} Type $III$ terms are constant with respect to $\tau_r$ and thus vanish. For any fixed $0\leq k\leq r$, partition $S_{r+1}=\bigcup_{m<n}(S_{r-1}(k-1\vert mn)\cup S_{r-1}(k-1\vert nm))$ and use Lemma~\ref{delk} to conclude \begin{align*} \sum_{\varrho\in S_{r+1}}\abs{\varrho}\overrightarrow{R}_{\varrho(Z)}\delta_k^*\omega & =\sum_{m<n}(-1)^{m+n}\sum_{\varrho'\in S_{r-1}}\abs{\varrho'}\overrightarrow{R}_{\varrho'(Z^{(k)}_{mn})}\omega. \end{align*} For each pair $m<n$, using $S_r=\bigcup_{k=1}^rS_{r-1}(0\vert k)$, \begin{align*} \sum_{\varrho\in S_r}\abs{\varrho}\overrightarrow{R}_{\varrho([z_m,z_n],Z(m,n))}\omega=\sum_{k=1}^r\sum_{\varrho'\in S_{r-1}}(-1)^k\abs{\varrho'}\overrightarrow{R}_{\varrho'(Z^{(k)}_{mn})}\omega; \end{align*} thus, summing all type $II$ terms yields \begin{align*} \sum_{\sigma\in S_q}\sum_{\varrho\in S_{r+1}}\sum_{k=1}^r(-1)^k\abs{\sigma}\abs{\varrho}\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}(\delta_k^*\omega)=\sum_{m<n}(-1)^{m+n}(\Phi\omega)(\Xi;[z_m,z_n],Z(m,n)).
\end{align*} Finally, recall that for $y\in\hh$ and $x\in\gg$, \begin{align*} \frac{d}{d\lambda}\rest{\lambda=0}\frac{d}{d\tau}\rest{\tau=0}\rho_0^1(i(\exp(\tau x)^{\exp(\lambda y)})) & =\frac{d}{d\lambda}\rest{\lambda=0}\dot{\rho}_0^1(\mu(x^{\exp(\lambda y)}))=-\dot{\rho}_0^1(\mu(\Lie_yx)); \end{align*} hence, setting $h_j:=\prod_{k=j}^q\exp(\lambda_{k}\hat{t}_p(\xi_{k}))=\exp(\lambda_{j}\hat{t}_p(\xi_{j}))...\exp(\lambda_q\hat{t}_p(\xi_q))\in H$, we compute \begin{align*} \frac{d}{d\lambda_1}\rest{\lambda_1=0}\dot{\rho}_0^1(\mu(z_0^{h_1}))(\overrightarrow{R}_{Z(0)}\omega)(\exp(\lambda_I\cdot\Xi)) & =\dot{\rho}_0^1(\mu(z_0^{\exp(0)h_2}))R_{\xi_1}(\overrightarrow{R}_{Z(0)}\omega)(\exp(\lambda_{2}\xi_{2}),...,\exp(\lambda_q\xi_q))+ \\ & \qquad -\dot{\rho}_0^1(\mu((\Lie_{\hat{t}_p(\xi_1)}z_0)^{h_2}))(\overrightarrow{R}_{Z(0)}\omega)(\exp(0),\exp(\lambda_{2}\xi_{2}),...,\exp(\lambda_q\xi_q)) \\ & =\dot{\rho}_0^1(\mu(z_0^{h_2}))R_{\xi_1}(\overrightarrow{R}_{Z(0)}\omega)(\exp(\lambda_{2}\xi_{2}),...,\exp(\lambda_q\xi_q)) \end{align*} and inductively, $I=\dot{\rho}_0^1(\mu(z_0))\overrightarrow{R}_{\Xi}\overrightarrow{R}_{Z(0)}\omega$. Using the partition of $S_{r+1}$ by $S_r(0\vert k)$'s and summing type $I$ terms yields \begin{align*} \sum_{\sigma\in S_{q}}\sum_{\varrho\in S_{r+1}}\abs{\sigma}\abs{\varrho}\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}(\delta_0^*\omega) & =\sum_{\sigma\in S_{q}}\sum_{k=0}^r\sum_{\varrho'\in S_r(0\vert k)}(-1)^{k}\abs{\sigma}\abs{\varrho'}\dot{\rho}_0^1(\mu(z_k))\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho'(Z(k))}\omega \\ & =\sum_{k=0}^r(-1)^k\dot{\rho}_0^1(\mu(z_{k}))(\Phi\omega)(\Xi;Z(k)), \end{align*} and the result follows. \end{proof} {\bf The q-direction} - The following results prove that the van Est map $\Phi$ (\ref{vanEstMap}) commutes with the differentials in the $q$-direction.
Since the differentials in the $r$-direction do commute with differentials in the $q$-direction, we prove that, for constant $p$, $\Phi$ yields a map between the double complexes that we refer to as $p$-pages. \begin{lemma}\label{Multilin} Let $\xymatrix{T:V\times ...\times V \ar[r] & W}$ be an $r$-multilinear map. If $H_\lambda$ is a differentiable path of automorphisms of $V$ with $H_0=Id_V$, then \begin{align*} \frac{d}{d\lambda}\rest{\lambda=0}T(H_\lambda(v_1),...,H_\lambda(v_r)) & =\sum_{k=1}^rT(v_1,...,v_{k-1},\frac{d}{d\lambda}\rest{\lambda=0}H_{\lambda}(v_k),v_{k+1},...,v_r) \end{align*} \end{lemma} \begin{proof} Let $\lbrace e_i\rbrace_{i=1}^n$ be a basis for $V$. In these coordinates, $T_{a_1...a_r}:=T(e_{a_1},...,e_{a_r})$ and $H_\lambda(e_a)=H_a^b(\lambda)e_b$. Since $H_0=Id_V$, $H_a^b(0)=\delta_a^b$; hence, on basis elements, \begin{align*} \frac{d}{d\lambda}\rest{\lambda=0}T(H_\lambda(e_{a_1}),...,H_\lambda(e_{a_r})) & =\frac{d}{d\lambda}\rest{\lambda=0}T(H_{a_1}^{b_1}(\lambda)e_{b_1},...,H_{a_r}^{b_r}(\lambda)e_{b_r})=\frac{d}{d\lambda}\rest{\lambda=0}H_{a_1}^{b_1}(\lambda)...H_{a_r}^{b_r}(\lambda)T_{b_1...b_r} \\ & =T_{b_1...b_r}\sum_{k=1}^rH_{a_1}^{b_1}(0)...H_{a_{k-1}}^{b_{k-1}}(0)\dot{H}_{a_k}^{b_k}(0)H_{a_{k+1}}^{b_{k+1}}(0)...H_{a_r}^{b_r}(0) \\ & =\sum_{k=1}^rT_{a_1...a_{k-1}b_ka_{k+1}...a_r}\dot{H}_{a_k}^{b_k}(0) \\ & =\sum_{k=1}^rT(e_{a_1},...,e_{a_{k-1}},\frac{d}{d\lambda}\rest{\lambda=0}H_{a_k}^{b_k}(\lambda)e_{b_k},e_{a_{k+1}},...,e_{a_r}), \end{align*} as desired. \end{proof} \begin{proposition}\label{delta} Let $r>0$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\delta\omega)=\delta(\Phi\omega)\in C^{p,q+1}_r(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} Let $\Xi=(\xi_0,...,\xi_q)\in\gg_p^{q+1}$ and $Z=(z_1,...,z_r)\in\gg^r$.
$\overrightarrow{R}_\Xi\overrightarrow{R}_Z(\delta\omega)$ has three types of terms corresponding respectively to the dual face maps $\delta_0^*$, $\delta_j^*$ for $1\leq j\leq q$ and $\delta_{q+1}^*$ in $\delta\omega$: \begin{align*} I & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\omega(\exp(\lambda_1\xi_1),...,\exp(\lambda_{q}\xi_{q});\exp(\tau_J\cdot Z)^{t_p(\exp(\lambda_0\xi_0))}) \\ II & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\omega(\exp(\lambda_0\xi_0),...,\exp(\lambda_{j-1}\xi_{j-1})\exp(\lambda_j\xi_j),...,\exp(\lambda_q\xi_q);\exp(\tau_J\cdot Z)) \\ III & :=\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}\rho_0^1(t_p(\exp(\lambda_q\xi_q)))^{-1}\omega(\exp(\lambda_0\xi_0),...,\exp(\lambda_{q-1}\xi_{q-1});\exp(\tau_J\cdot Z)). \end{align*} Here, we used the notation $(g_1,...,g_r)^h:=(g_1^h,...,g_r^h)$ for $g_1,...,g_r\in G$ and $h\in H$. Using Eq.~(\ref{expFinTarget}), $III=-\dot{\rho}_0^1(\hat{t}_p(\xi_q))\overrightarrow{R}_{\Xi(q)}\overrightarrow{R}_Z\omega$; hence, using $S_{q+1}=\bigcup_{j=0}^qS_q(q\vert j)$ and summing type $III$ terms yields \begin{align}\label{III} \sum_{\sigma\in S_{q+1}} & \sum_{\varrho\in S_r}\abs{\sigma}\abs{\varrho}\frac{d}{d\lambda_{q+1}}\rest{\lambda_{q+1}=0}\rho_0^1(t_p(\exp(\lambda_q\xi_{\sigma(q)})))^{-1}\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}(\delta_{q+1}^*\omega) \\ & =\sum_{j=0}^q\sum_{\sigma'\in S_{q}(q\vert j)}\sum_{\varrho\in S_r}(-1)^{q-j+1}\abs{\sigma'}\abs{\varrho}\dot{\rho}_0^1(\hat{t}_p(\xi_{j}))\overrightarrow{R}_{\sigma'(\Xi(j))}\overrightarrow{R}_{\varrho(Z)}\omega =\sum_{j=0}^q(-1)^{q-j+1}\dot{\rho}_0^1(\hat{t}_p(\xi_{j}))(\Phi\omega)(\Xi(j);Z).\nonumber \end{align} Let $\vec{\gamma}\in\G_p^q$, then $(\overrightarrow{R}_\bullet\omega)(\vec{\gamma})\in\bigwedge^r\gg^*\otimes W$; in particular, it is $r$-multilinear.
Due to Eq.~(\ref{expFinTarget}), consider the differentiable path $(-)^{\exp(\lambda_0\hat{t}_p(\xi_0))}$ of automorphisms through the identity and invoke Lemma~\ref{Multilin} to conclude \begin{align*} \frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\overrightarrow{R}_{Z^{\exp(\lambda_0\hat{t}_p(\xi_0))}}(\delta_0^*\omega)(\exp(\lambda_I\cdot\Xi)) & =-\sum_{k=1}^r\overrightarrow{R}_{\Xi(0)}R_{z_r}...R_{z_{k+1}}R_{\Lie_{\hat{t}_p(\xi_0)}z_k}R_{z_{k-1}}...R_{z_1}\omega. \end{align*} Using $S_{q+1}=\bigcup_{j=0}^qS_q(0\vert j)$, the sum of type $I$ terms yields \begin{align}\label{I} \sum_{\sigma\in S_{q+1}} & \sum_{\varrho\in S_r}\abs{\sigma}\abs{\varrho}\frac{d}{d\lambda_0}\rest{\lambda_0=0} \overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)^{\exp(\lambda_0\hat{t}_p(\xi_{\sigma(0)}))}}(\delta_0^*\omega) \\ & =\sum_{j=0}^q\sum_{\sigma'\in S_{q}(0\vert j)}\sum_{\varrho\in S_r}\sum_{k=1}^r(-1)^{j+1}\abs{\sigma'}\abs{\varrho}\overrightarrow{R}_{\sigma'(\Xi(0))}R_{z_{\varrho(r)}}...R_{z_{\varrho(k+1)}}R_{\Lie_{\hat{t}_p(\xi_j)}z_{\varrho(k)}}R_{z_{\varrho(k-1)}}...R_{z_{\varrho(1)}}\omega\nonumber \\ & =\sum_{j=0}^q\sum_{k=1}^r(-1)^{j+1}(\Phi\omega)(\Xi(j);z_1,...,\Lie_{\hat{t}_p(\xi_j)}z_k,...,z_r).\nonumber \end{align} Adding together (\ref{I}) and $(-1)^{q+1}$ times (\ref{III}), we get $\sum_{j=0}^q(-1)^{j+1}\rho^{(r)}(\xi_j)(\Phi\omega)(\Xi(j);Z)$ (cf. Eq.~(\ref{q-rep})). Using reasoning parallel to that in the proof of Proposition~\ref{delta1} for type $II$ terms, one concludes \begin{align*} \sum_{\sigma\in S_{q+1}}\sum_{\varrho\in S_r}\sum_{j=1}^q(-1)^j\abs{\sigma}\abs{\varrho}\overrightarrow{R}_{\sigma(\Xi)}\overrightarrow{R}_{\varrho(Z)}(\delta_j^*\omega)=\sum_{m<n}(-1)^{m+n}(\Phi\omega)([\xi_m,\xi_n],\Xi(m,n);Z), \end{align*} and the result follows. \end{proof} \begin{theorem}\label{p-pag} For constant $p$, $\Phi$ restricts to a map $\Phi_p$ (\ref{Phi_p}) of double complexes.
\end{theorem} {\bf The p-direction} - The following results prove that the van Est map $\Phi$ (\ref{vanEstMap}) commutes with the differentials in the $p$-direction. Since the differentials in the $r$-direction do commute with differentials in the $p$-direction, we prove that, for constant $q$, $\Phi$ yields a map between the double complexes that we refer to as $q$-pages. \begin{proposition}\label{partial} Let $r>0$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\partial\omega)=\partial(\Phi\omega)\in C^{p+1,q}_r(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} Let $\Xi=(\xi_1,...,\xi_q)\in\gg_{p+1}^q$ and $Z\in\gg^r$. Writing $\exp_{\G_{p+1}}(\xi)=(e^\xi_1,...,e^\xi_{p+1})\in\G_{p+1}\leq\G^{p+1}$, for $\xi\in\gg_{p+1}$, define \begin{align*} g_j & :=pr_G\Big{(}e^{\lambda_j\xi_j}_1\vJoin e^{\lambda_{j+1}\xi_{j+1}}_1\vJoin\cdots\vJoin e^{\lambda_{q-1}\xi_{q-1}}_1\vJoin e^{\lambda_q\xi_q}_1\Big{)}\in G, & \textnormal{for } & 1\leq j\leq q. \end{align*} Using Eq.~(\ref{expFaces}), \begin{align*} \overrightarrow{R}_{\Xi}\overrightarrow{R}_Z(\partial\omega) & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\Big{[}\rho_0^1(i(g_1))^{-1}(\overrightarrow{R}_Z\omega)(\partial_0\exp(\lambda_I\cdot\Xi))+\sum_{j=1}^{p+1}(-1)^j(\overrightarrow{R}_Z\omega)(\partial_j\exp(\lambda_I\cdot\Xi))\Big{]} \\ & =\sum_{j=1}^{p+1}(-1)^j\overrightarrow{R}_{\hat{\partial}_j\Xi}\overrightarrow{R}_Z\omega+\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^1(i(g_1))^{-1}(\overrightarrow{R}_Z\omega)(\exp(\lambda_I\cdot\hat{\partial}_0\Xi)).
\end{align*} Now, \begin{align*} \frac{d}{d\lambda_1}\rest{\lambda_1=0}\rho_0^1(i(g_1))^{-1}(\overrightarrow{R}_Z\omega)(\exp(\lambda_I\cdot & \hat{\partial}_0\Xi))=\rho_0^1(i(g_2))^{-1}R_{\hat{\partial}_0\xi_1}(\overrightarrow{R}_Z\omega)\big{(}\exp(\lambda_2\hat{\partial}_0\xi_2),...,\exp(\lambda_q\hat{\partial}_0\xi_q)\big{)}+ \\ & +\Big{(}\frac{d}{d\lambda_1}\rest{\lambda_1=0}\rho_0^1(i(g_1))^{-1}\Big{)}(\overrightarrow{R}_Z\omega)\big{(}\exp(0),\exp(\lambda_2\hat{\partial}_0\xi_2),...,\exp(\lambda_q\hat{\partial}_0\xi_q)\big{)}; \end{align*} thus, inductively, $\overrightarrow{R}_\Xi\overrightarrow{R}_Z(\partial\omega)=\sum_{j=0}^{p+1}(-1)^j\overrightarrow{R}_{\hat{\partial}_j\Xi}\overrightarrow{R}_Z\omega$, and as a consequence, \begin{align*} \Phi(\partial\omega)(\Xi;Z) & =\sum_{\sigma\in S_q}\sum_{\varrho\in S_r}\sum_{j=0}^{p+1}(-1)^j\abs{\sigma}\abs{\varrho}\overrightarrow{R}_{\sigma(\hat{\partial}_j\Xi)}\overrightarrow{R}_{\varrho(Z)}\omega=\sum_{j=0}^{p+1}(-1)^j\Phi\omega(\hat{\partial}_j\Xi;Z)=\partial(\Phi\omega)(\Xi;Z). \end{align*} \end{proof} \begin{theorem}\label{q-pag} For constant $q$, $\Phi$ restricts to a map of double complexes. \end{theorem} {\bf Difference maps} - We conclude this section by proving that the van Est map $\Phi$ (\ref{vanEstMap}) commutes with the difference maps. We restrict our attention to the difference maps necessary to prove that $\Phi$ defines a map of complexes up to degree $2$. Under the isomorphisms of Subsection~\ref{sss-Equiv}, $\xi=(\xi_1,...,\xi_p)\in\gg_p$ corresponds to a unique element $(x_1,...,x_p;y)\in\gg^p\oplus\hh$, where each individual $\xi_m\in\gg_1$ corresponds to $(x_m,y_m)=(x_m,y+\mu(x_{m+1}+...+x_p))\in\gg\oplus\hh$. In the upcoming computations, we often need to consider $\xi$ as the sum \begin{align}\label{BreakXi} \xi & =(X_1^r,0)+(0,\hat{\partial}_0^r\xi)=(x_1,...,x_r,0,...,0;0)+(0,...,0,x_{r+1},...,x_p;y), \end{align} where $X_1^r:=(x_1,...,x_r)\in\gg^r$.
Here we abuse notation and write an $=$ sign to mean the image under the isomorphism of Subsection \ref{sss-Equiv}. Since the inclusion of $G$ in $\G$ is a group homomorphism, and the full nerve of the Lie $2$-group lies in the category of Lie groups, \begin{align}\label{degMaps} \exp_{\G_p}(0,\hat{\partial}_0^r\xi)& =(1,\exp_{\G_{p-r}}(\hat{\partial}_0^r\xi))=(1,\partial_0^r\exp_{\G_p}(\xi)) \\ \exp_{\G_p}(0,...,0,x,0,...,0;0) & =(1,...,1,\exp_G(x),1,...,1;1)\in G^p\times H\cong\G_p. \nonumber \end{align} In the latter equation, if $x\in\gg$ is in the $k$th position, so is $\exp_G(x)\in G$. Note that, for $1<r\leq p$, none of the inclusions of $G^r$ in $\G_p$ is a Lie group homomorphism; hence, there is no relation analogous to (\ref{degMaps}) when there is more than one non-zero entry. \begin{proposition}\label{Delta} Let $r>0$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\Delta\omega)=\Delta(\Phi\omega)\in C^{p+1,q+1}_{r-1}(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} If $r=1$, let $\Xi=(\xi_0,...,\xi_q)^T\in\gg_{p+1}^{q+1}$. If $\xi_j\in\gg_{p+1}$ corresponds to $(x_j^0,...,x_j^p;y_j)\in\gg^{p+1}\oplus\hh$, using the convention of Eq.~(\ref{BreakXi}), write $\Xi=\Xi_1+\Xi_2$, where \begin{align}\label{brkAgain} \Xi_1 & =\begin{pmatrix} x_0^0\quad 0 \\ \Xi(0) \end{pmatrix} & \Xi_2 & =\begin{pmatrix} 0\quad\hat{\partial}_0\xi_0 \\ \Xi(0) \end{pmatrix}. \end{align} Since $\overrightarrow{R}_\bullet(\Delta\omega)\in\bigotimes^{q+1}\gg_{p+1}^*\otimes V$, $\overrightarrow{R}_\Xi(\Delta\omega)=\overrightarrow{R}_{\Xi_1}(\Delta\omega)+\overrightarrow{R}_{\Xi_2}(\Delta\omega)$. Now, from Eq.~(\ref{degMaps}), $c_{11}(\Xi_2)=1$; hence, $\overrightarrow{R}_{\Xi_2}(\Delta\omega)=0$.
On the other hand, letting $I'=\lbrace 1,...,q\rbrace$, \begin{align*} \overrightarrow{R}_{\Xi_1}(\Delta\omega) & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^0(t_p(\partial_0\exp(\lambda_1\xi_1))...t_p(\partial_0\exp(\lambda_q\xi_q)))\circ\phi\big{(}\omega(\partial_0\exp(\lambda_{I'}\cdot\Xi(0));\exp_G(\lambda_0x_0^0))\big{)} \\ & =\frac{d^{I'}}{d\lambda_{I'}}\rest{\lambda_{I'}=0}\rho_0^0(t_p(\partial_0\exp(\lambda_1\xi_1))...t_p(\partial_0\exp(\lambda_q\xi_q)))\circ\phi\big{(}(R_{x_0^0}\omega)(\partial_0\exp(\lambda_{I'}\cdot\Xi(0)))\big{)}. \end{align*} Computing, for any $h\in H$ and letting $I''=\lbrace 2,...,q\rbrace$, \begin{align*} \frac{d}{d\lambda_1}\rest{\lambda_1=0}\rho_0^0(t_p(\partial_0\exp(\lambda_1\xi_1)) & h)\circ\phi\big{(}(R_{x_0^0}\omega)(\partial_0\exp(\lambda_{I'}\cdot\Xi(0)))\big{)}=\rho_0^0(h)\circ\phi\big{(}(R_{\hat{\partial}_0\xi_1}R_{x_0^0}\omega)(\partial_0\exp(\lambda_{I''}\cdot\Xi(0,1)))\big{)}+ \\ & +\Big{(}\frac{d}{d\lambda_1}\rest{\lambda_1=0}\rho_0^0(t_p(\partial_0\exp(\lambda_1\xi_1))h)\Big{)}\circ\phi\big{(}(R_{x_0^0}\omega)(\partial_0\exp(0),\partial_0\exp(\lambda_{I''}\cdot\Xi(0,1)))\big{)}; \end{align*} inductively implying $\overrightarrow{R}_{\Xi_1}(\Delta\omega)=\phi\big{(}\overrightarrow{R}_{\hat{\partial}_0\Xi(0)}R_{x_0^0}\omega\big{)}$. Using the partition $S_{q+1}=\bigcup_{j=0}^qS_q(0\vert j)$, one concludes \begin{align*} \Phi(\Delta\omega)(\Xi) & =\sum_{j=0}^q\sum_{\sigma'\in S_q(0\vert j)}(-1)^j\abs{\sigma'}\phi\big{(}\overrightarrow{R}_{\sigma'(\hat{\partial}_0\Xi(j))}R_{x_j^0}\omega\big{)}=\sum_{j=0}^q(-1)^j(\Phi\omega)(\hat{\partial}_0\Xi(j);x_j^0)=\Delta(\Phi\omega)(\Xi). \end{align*} If $r>1$, let $Z=(z_1,...,z_{r-1})\in\gg^{r-1}$ and $\Xi$ as before. 
This time, $\overrightarrow{R}_\bullet\overrightarrow{R}_Z(\Delta\omega)\in\bigotimes^{q+1}\gg_{p+1}^*\otimes W$; thus, using Eq.~(\ref{brkAgain}), $\overrightarrow{R}_\Xi\overrightarrow{R}_Z(\Delta\omega)=\overrightarrow{R}_{\Xi_1}\overrightarrow{R}_Z(\Delta\omega)+\overrightarrow{R}_{\Xi_2}\overrightarrow{R}_Z(\Delta\omega)$. As before, $c_{11}(\Xi_2)=1$; hence, $\overrightarrow{R}_{\Xi_2}\overrightarrow{R}_Z(\Delta\omega)=0$. Now, the terms of type $c_{2n}$ are constant with respect to $\tau_{r-1}$ and thus vanish. Writing $\gamma(\lambda_0)$ for $(\exp_G(\lambda_0x_0^0)\quad 1)$, we compute \begin{align*} \frac{d}{d\lambda_0} & \rest{\lambda_0=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}c_{2n-1}(\exp(\tau_J\cdot Z);\gamma(\lambda_0)) \\ & =\frac{d}{d\lambda_0}\rest{\lambda_0=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}(\exp(\tau_J\cdot Z)_{[1,r-n)}^{i(\exp(\lambda_0x_0^0))},\exp(\lambda_0x_0^0)^{-1},\exp(\tau_J\cdot Z)_{[r-n,r-2]},\exp(\tau_{r-1}z_{r-1})\exp(\lambda_0x_0^0)) \\ & =-\frac{d}{d\lambda_0}\rest{\lambda_0=0}\frac{d^J}{d\tau_J}\rest{\tau_J=0}(\exp(\tau_J\cdot Z)_{[1,r-n)}^{i(\exp(0))},\exp(\lambda_0x_0^0),\exp(\tau_J\cdot Z)_{[r-n,r-2]},\exp(\tau_{r-1}z_{r-1})\exp(0)).
\end{align*} Ultimately, putting $\varrho'(Z)^{(j)}_{x_j^0}:=(z_{\varrho'(1)},...,z_{\varrho'(j-1)},x_j^0,z_{\varrho'(j)},...,z_{\varrho'(r-1)})$, and using successively the partitions $S_{q+1}=\bigcup_{j=0}^qS_q(0\vert j)$ and $S_{r}=\bigcup_{n=0}^{r-1}S_{r-1}(j\vert n)$, we compute \begin{align*} \Phi(\Delta\omega)(\Xi;Z) & =\sum_{j=0}^q\sum_{\sigma'\in S_q(0\vert j)}\sum_{\varrho'\in S_{r-1}}(-1)^j\abs{\sigma'}\abs{\varrho'}\overrightarrow{R}_{\sigma'(\Xi(j))}R_{\xi_j}\overrightarrow{R}_{\varrho'(Z)}(\Delta\omega) \\ & =\sum_{j=0}^q\sum_{\sigma'\in S_q(0\vert j)}\sum_{\varrho'\in S_{r-1}}\sum_{n=0}^{r-1}(-1)^{j+n}\abs{\sigma'}\abs{\varrho'}\overrightarrow{R}_{\sigma'(\partial_0\Xi(j))}\overrightarrow{R}_{\varrho'(Z)^{(j)}_{x_j^0}}\omega \\ & =\sum_{j=0}^q\sum_{\sigma'\in S_q(0\vert j)}\sum_{\varrho\in S_r}(-1)^j\abs{\sigma'}\abs{\varrho}\overrightarrow{R}_{\sigma'(\partial_0\Xi(j))}\overrightarrow{R}_{\varrho(x_j^0,Z)}\omega =\sum_{j=0}^q(-1)^j(\Phi\omega)(\partial_0\Xi(j);x_j^0,Z), \end{align*} which yields $\Delta(\Phi\omega)(\Xi;Z)$ as desired. \end{proof} \begin{proposition}\label{Deltar1} Let $r>1$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\Delta_{r,1}\omega)\equiv0\in C^{p+r,q+1}_0(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} Let $\Xi=(\xi_0,...,\xi_q)^T\in\gg_{p+r}^{q+1}$ and $(x_0^1,...,x_0^{p+r};y_0)\in\gg^{p+r}\oplus\hh$ be the image of $\xi_0\in\gg_{p+r}$ under the isomorphism of Subsection~\ref{sss-Equiv}. Using the convention of Eq.~(\ref{BreakXi}), write $\Xi=\Xi_0+...+\Xi_r$, where \begin{align*} \Xi_0 & =\begin{pmatrix} 0\quad\hat{\partial}_0^r\xi_0 \\ \Xi(0) \end{pmatrix} & \Xi_k & =\begin{pmatrix} 0\cdots0\quad x_0^k\quad 0\cdots0 \\ \Xi(0) \end{pmatrix},\quad\text{for }1\leq k\leq r. 
\end{align*} Since $\overrightarrow{R}_\bullet(\Delta_{r,1}\omega)\in\bigotimes^{q+1}\gg_{p+r}^*\otimes V$, $\overrightarrow{R}_\Xi(\Delta_{r,1}\omega)=\overrightarrow{R}_{\Xi_0}(\Delta_{r,1}\omega)+...+\overrightarrow{R}_{\Xi_r}(\Delta_{r,1}\omega)$. It then follows from Eq.~(\ref{degMaps}) that \begin{align*} \overrightarrow{R}_{\Xi_0}(\Delta_{r,1}\omega) & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^0(t_p(\partial_0^r\exp(\lambda_0\xi_0))...t_p(\partial_0^r\exp(\lambda_q\xi_q)))\circ\phi\big{(}\omega(\partial_0^r\delta_0\exp(\lambda_I\cdot\Xi_0);1,...,1)\big{)}=0, \end{align*} and, for $1\leq k\leq r$, \begin{align*} \overrightarrow{R}_{\Xi_k} & (\Delta_{r,1}\omega) \\ & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^0(t_p(\partial_0^r\exp(\lambda_1\xi_1))...t_p(\partial_0^r\exp(\lambda_q\xi_q)))\circ\phi\big{(}\omega(\partial_0^r\delta_0\exp(\lambda_I\cdot\Xi_k);1,...,\exp_G(\lambda_0x_0^k),...,1)\big{)}=0. \end{align*} \end{proof} \begin{proposition}\label{Delta1r} Let $r>1$ and $\omega\in C^{p,q}_r(\G,\phi)$, then \begin{align*} \Phi(\Delta_{1,r}\omega)=(-1)^{\frac{r(r+1)}{2}}\Delta_r(\Phi\omega)\in C^{p+1,q+r}_0(\gg_1,\phi). \end{align*} \end{proposition} \begin{proof} Let $\Xi=(\xi_1,...,\xi_{q+r})^T\in\gg_{p+1}^{q+r}$ and $(x_j^0,...,x_j^p;y_j)\in\gg^{p+1}\oplus\hh$ be the image of $\xi_j\in\gg_{p+1}$ under the isomorphism of Subsection~\ref{sss-Equiv}. For $\xi\in\gg_{p+1}$, write $(g_\xi^0,...,g_\xi^p;h_\xi)\in G^{p+1}\times H$ for the image of $\exp_{\G_{p+1}}(\xi)\in\G_{p+1}$ under the isomorphism of Subsection \ref{sss-Equiv}. Making $\Xi=\Xi_1+\Xi_2$ in the manner of Eq.~(\ref{brkAgain}), $\overrightarrow{R}_\Xi(\Delta_{1,r}\omega)=\overrightarrow{R}_{\Xi_1}(\Delta_{1,r}\omega)+\overrightarrow{R}_{\Xi_2}(\Delta_{1,r}\omega)$ as $\overrightarrow{R}_\bullet(\Delta_{1,r}\omega)\in\bigotimes^{q+r}\gg_{p+1}^*\otimes V$.
Given that \begin{align*} \omega(\partial_0\delta_0^r\exp(\lambda_I\cdot\Xi_2);1^{h_{\lambda_2\xi_2}...h_{\lambda_{q+r}\xi_{q+r}}},(g_{\lambda_2\xi_2}^0)^{h_{\lambda_2\xi_2}...h_{\lambda_{q+r}\xi_{q+r}}},...,(g_{\lambda_{q+r-1}\xi_{q+r-1}}^0)^{h_{\lambda_{q+r}\xi_{q+r}}},g_{\lambda_{q+r}\xi_{q+r}}^0) & =0, \end{align*} $\overrightarrow{R}_{\Xi_2}(\Delta_{1,r}\omega)=0$; therefore, inductively using Eq.~(\ref{brkAgain}) on $\Xi(1,...,k)$, one ultimately concludes $\overrightarrow{R}_{\Xi}(\Delta_{1,r}\omega)=\overrightarrow{R}_{\Xi_{(r)}}(\Delta_{1,r}\omega)$, where \begin{align*} \Xi_{(r)}:=\begin{pmatrix} x_1^0\quad 0 \\ \vdots \\ x_r^0\quad 0 \\ \Xi(1,...,r) \end{pmatrix} , \end{align*} and, as in the proof of Proposition~\ref{Delta}, \begin{align*} \overrightarrow{R}_{\Xi_{(r)}}(\Delta_{1,r}\omega) & =\phi\big{(}\overrightarrow{R}_{\hat{\partial}_0\Xi(1,...,r)}\overrightarrow{R}_{X_{1,...,r}^0}\omega\big{)}, & \text{where } & X_{1,...,r}^0=(x_1^0,...,x_r^0)\in\gg^r. \end{align*} Partitioning $S_{q+r}=\bigcup_{a_1<...<a_r}\bigcup_{\varrho\in S_r}S_q(1\vert a_{\varrho(1)}...a_{\varrho(r)})$, \begin{align*} \Phi(\Delta_{1,r}\omega)(\Xi) & =\sum_{a_1<...<a_r}\sum_{\varrho\in S_r}\sum_{\sigma^{(r)}\in S_q(1\vert a_{\varrho(1)}...a_{\varrho(r)})}(-1)^{a_1+...+a_r+\frac{r(r+1)}{2}}\abs{\sigma^{(r)}}\abs{\varrho}\phi\big{(}\overrightarrow{R}_{\sigma^{(r)}(\hat{\partial}_0\Xi(1,...,r))}\overrightarrow{R}_{X_{a_{\varrho(1)},...,a_{\varrho(r)}}^0}\omega\big{)} \\ & =\sum_{a_1<...<a_r}(-1)^{a_1+...+a_r+\frac{r(r+1)}{2}}\phi\big{(}(\Phi\omega)(\partial_0(\Xi(a_1,...,a_r));X_{a_1,...,a_r}^0)\big{)}=(-1)^{\frac{r(r+1)}{2}}\Delta_r(\Phi\omega)(\Xi). \end{align*} \end{proof} All the previous results add up to the following theorem. \begin{theorem}\label{itIs} The van Est map $\Phi$ (\ref{vanEstMap}) induces a map of complexes \begin{align*} \Phi & :\xymatrix{C_\nabla^n(\G,\phi) \ar[r] & C_\nabla^n(\gg_1,\phi)} \end{align*} between the complexes (\ref{Cxs}) for $n\leq 2$.
\end{theorem} \begin{remark} Continuing Remark~\ref{Incomplete}, Theorem~\ref{itIs} extends to $n\leq 5$ for the difference maps made explicit in \cite{Angulo2:2020}. We refrain from presenting a proof, as it lies outside the scope of our application. \end{remark} \section{A Collection of van Est Type Theorems}\label{sec-Theos} In this section, we prove a van Est type theorem relating the cohomology of Lie $2$-groups and Lie $2$-algebras: \begin{theorem}\label{2-vanEstTheo} Let $\G$ be a Lie $2$-group with associated crossed module $\xymatrix{G \ar[r] & H}$, Lie $2$-algebra $\gg_1$ and a representation on the $2$-vector space $\xymatrix{W \ar[r]^\phi & V}$. If $H$ and $G$ are both $k$-connected and the van Est map $\Phi$ (\ref{vanEstMap}) induces a map of complexes between the complexes (\ref{Cxs}) for $n\leq k+1$, then \begin{align*} \xymatrix{ \Phi:H_\nabla^n(\G ,\phi) \ar[r] & H_\nabla^n(\gg_1 ,\phi), } \end{align*} is an isomorphism for $n\leq k$ and injective for $n=k+1$. \end{theorem} Theorem~\ref{2-vanEstTheo} follows from the vanishing of the cohomology of the mapping cone of $\Phi$, which in turn, using the spectral sequence of the filtration of (\ref{Cllpsed}) by columns, follows from van Est type theorems that ensure the vanishing of its columns below the diagonal. Throughout, fix a Lie $2$-group $\G$ with Lie $2$-algebra $\gg_1$ and a representation $\rho$ on $\xymatrix{W \ar[r]^\phi & V}$. \subsection{First approximation}\label{subsec-1stAprox} Momentarily disregarding the full crossed module structure, consider only the Lie group $H$ acting to the right by Lie group automorphisms of $G$. In \cite{Angulo2:2020}, it is explained that $H\ltimes G$ has the structure of a double Lie groupoid in the sense of Ehresmann \cite{Ehresmann:1963}, where the top groupoid is a Lie group bundle over $H$ and the left groupoid is the right action groupoid of $H$ over $G$.
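Before differentiating this action, it may help to see the basic computation in coordinates. The snippet below is a purely illustrative numerical sanity check; it assumes, only for the sake of example, that $H$ and $G$ are matrix groups with $H$ acting on $G$ on the right by conjugation, $g^h=h^{-1}gh$, with all representations taken to be tautological. Differentiating the action along a one-parameter subgroup then recovers the commutator, $\frac{d}{d\lambda}\rest{\lambda=0}g^{\exp(\lambda y)}=gy-yg$, which is the shape of the identity $\frac{d}{d\lambda}\rest{\lambda=0}\rho_G(g^{\exp(\lambda y)})=\rho_G(g)\rho_\hh(y)-\rho_\hh(y)\rho_G(g)$ appearing in the proof of Proposition~\ref{Ver p-page} below.

```python
import numpy as np

def expm(a, terms=30):
    """Matrix exponential via truncated power series (adequate for small norms)."""
    n = a.shape[0]
    result, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ a / k
        result = result + term
    return result

def conj(g, h):
    """Right action by conjugation: g^h = h^{-1} g h."""
    return np.linalg.inv(h) @ g @ h

rng = np.random.default_rng(0)
g = expm(0.1 * rng.standard_normal((3, 3)))  # a group element near the identity
y = 0.1 * rng.standard_normal((3, 3))        # a Lie algebra element

# Central finite difference of lambda -> g^{exp(lambda y)} at lambda = 0 ...
eps = 1e-6
deriv = (conj(g, expm(eps * y)) - conj(g, expm(-eps * y))) / (2 * eps)

# ... agrees with the commutator g y - y g.
commutator = g @ y - y @ g
assert np.allclose(deriv, commutator, atol=1e-6)
```

This checks only the linear-algebra identity in a hypothetical matrix model; none of the structure maps of the crossed module are being represented.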
Furthermore, for $r>0$, the $p$-pages of $C^{p,q}_r(\G,\phi)$ can be thought of as double complexes naturally induced by a map of double Lie groupoids \begin{align}\label{DblRepn} \xymatrix{ & H\ltimes G \ar@<0.5ex>[dl]\ar@<-0.5ex>[dl]\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd]\ar[r] & GL(W)\ltimes GL(W) \ar@<0.5ex>[dr]\ar@<-0.5ex>[dr]\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd] & \\ H \ar[rrr]^{\rho_H\qquad\quad}\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd] & & & GL(W) \ar@<0.5ex>[dd]\ar@<-0.5ex>[dd] \\ & G \ar@<0.5ex>[dl]\ar@<-0.5ex>[dl]\ar[r]^{\rho_G\quad} & GL(W) \ar@<0.5ex>[dr]\ar@<-0.5ex>[dr] & \\ \ast \ar[rrr] & & & \ast , } \end{align} where $W$ is a vector space and the double Lie groupoid to the right is the one given by the right action by conjugation of $GL(W)$ on itself (cf. (\ref{p-pagRepn})). Associated to a double Lie groupoid, there are two LA-groupoids \cite{Mackenzie:1992}, which are roughly given by applying the Lie functor in the vertical and the horizontal directions. In the case of $H\ltimes G$, these are respectively given by \begin{align}\label{LAGpds} & \xymatrix{ \hh\ltimes G \ar@<0.5ex>[r]\ar@<-0.5ex>[r]\ar[d] & \hh \ar[d] \\ G \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & \ast } & & \xymatrix{ H\ltimes \gg \ar@<0.5ex>[r]\ar@<-0.5ex>[r]\ar[d] & \gg \ar[d] \\ H \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & \ast . } \end{align} Bear in mind that the notation $\ltimes$ stands alternatively for the transformation groupoid associated to an action of a Lie group and the action Lie algebroid associated to the action of a Lie algebra. Correspondingly, there are two morphisms of LA-groupoids which correspond to the differentiation of the map (\ref{DblRepn}) in each direction. The subsequent results show that there are double complexes naturally associated to each of these derivatives. \begin{proposition}\label{Ver p-page} Let $\xymatrix{\hh\ltimes G \ar[r] & \ggl(W)\ltimes GL(W):(y;g) \ar@{|->}[r] & (\rho_\hh(y),\rho_G(g))}$ be the differentiation of the map (\ref{DblRepn}) in the vertical direction.
Then, for every $q\geq 0$, there is a representation $\rho_G^q$ of the Lie group bundle \begin{align}\label{verGpBdlRep} & \xymatrix{\hh^q\times G \ar@<0.5ex>[r] \ar@<-0.5ex>[r] & \hh^q} & \text{on } & \xymatrix{\hh^q\times W \ar[r] & \hh^q}, \end{align} and for every $r\geq 0$, there is a representation $\rho^r_\hh$ of the action Lie algebroid \begin{align}\label{verActAlgbdRep} & \xymatrix{\hh\ltimes G^r \ar[r] & G^r} & \text{on } & \xymatrix{G^r\times W \ar[r] & G^r} \end{align} such that the grid \begin{align}\label{protoVerDbl} \xymatrix{ \vdots & \vdots & \vdots & \\ \bigwedge^3\hh^*\otimes W \ar[r]^{\partial\quad}\ar[u] & C(G,\bigwedge^3\hh^*\otimes W) \ar[r]^{\partial}\ar[u] & C(G^2,\bigwedge^3\hh^*\otimes W) \ar[r]\ar[u] & \dots \\ \bigwedge^2\hh^*\otimes W \ar[r]^{\partial\quad}\ar[u]^{\delta} & C(G,\bigwedge^2\hh^*\otimes W) \ar[r]^{\partial}\ar[u]^{\delta} & C(G^2,\bigwedge^2\hh^*\otimes W) \ar[r]\ar[u]^{\delta} & \dots \\ \hh^*\otimes W \ar[r]^{\partial\quad}\ar[u]^{\delta} & C(G,\hh^*\otimes W) \ar[r]^{\partial}\ar[u]^{\delta} & C(G^2,\hh^*\otimes W) \ar[r]\ar[u]^{\delta} & \dots \\ W \ar[r]^{\partial\quad}\ar[u]^{\delta} & C(G,W) \ar[r]^{\partial}\ar[u]^{\delta} & C(G^2,W) \ar[r]\ar[u]^{\delta} & \dots } \end{align} whose rows are the subcomplexes of alternating $q$-multilinear Lie groupoid cochains with values in $\rho_G^q$ and whose columns are Lie algebroid complexes with values on $\rho_\hh^r$ is a double complex. \end{proposition} \begin{proof} Since (\ref{DblRepn}) is a map of double groupoids, $\rho_\hh\times\rho_G$ is a map of LA-groupoids and its restrictions to the base Lie group and to the side Lie algebra give respectively representations $\rho_G$ of $G$ and $\rho_\hh$ of $\hh$, both on $W$. These are the representations for the group bundle over a point $\xymatrix{G \ar@<0.5ex>[r] \ar@<-0.5ex>[r] & \ast}$ and for the trivial action Lie algebroid $\xymatrix{\hh\ltimes\ast \ar[r] & \ast}$. 
For each $q>0$, define $\rho_G^q :=pr_G^*\rho_G$, which is a representation of the group bundle (\ref{verGpBdlRep}) as the projection onto $G$ is a Lie groupoid homomorphism. Analogously, for each $r>0$, define $\rho_\hh^r:=\hat{t}_r^*\rho_\hh$, where $\hat{t}_r:(\hh\times G)_r\cong\xymatrix{\hh\times G^r \ar[r] & \hh}$ is the final target map of the LA-groupoid and hence a Lie algebroid map. Since, for any group bundle, $(\hh^q\times G)_r\cong\hh^q\times G^r$ and hence $C^r(\hh^q\times G;\hh^q\times W)\cong C(\hh^q\times G^r,W)$, one can make sense of the subspace of $r$-cochains of (\ref{verGpBdlRep}) with values on $\rho_G^q$ that are alternating and $q$-multilinear in the $\hh$-coordinates, \begin{align*} C^r_{lin}(\hh^q\times G,W) & :=\lbrace\omega\in C^r(\hh^q\times G;\hh^q\times W):\omega(-;\vec{g})\in\bigwedge^q\hh^*\otimes W,\forall\vec{g}\in G^r\rbrace . \end{align*} $C^\bullet_{lin}(\hh^q\times G,W)$ is a subcomplex of $C^\bullet(\hh^q\times G;\hh^q\times W)$. Indeed, for $Y\in\hh^q$ and $\vec{g}=(g_0,...,g_r)\in G^{r+1}$, $\partial_k(Y;\vec{g})=(Y;\delta_k\vec{g})$, where $\delta_k$ is the $k$th face map in the nerve of the Lie group $G$. Thus, for $\omega\in C^r_{lin}(\hh^q\times G,W)$, \begin{align*} \partial\omega(Y;\vec{g}) & =\rho_G^q(Y;g_0)\omega(Y;\delta_0\vec{g})+\sum_{k=1}^{r+1}(-1)^k\omega(Y;\delta_k\vec{g}), \end{align*} but $\rho_G^q(Y;g_0)=\rho_G(g_0)$; hence, $\partial\omega(-;\vec{g})$ is a linear combination of alternating $q$-multilinear maps. Since by definition \begin{align*} C^r_{lin}(\hh^q\times G,W)=C(G^r,\bigwedge^q\hh^*\otimes W)=\Gamma\Big{(}\bigwedge^q(\hh\ltimes G^r)^*\otimes(G^r\times W)\Big{)}, \end{align*} for fixed $(q,r)$, the spaces of $q$-multilinear $r$-cochains of the Lie groupoid (\ref{verGpBdlRep}) with values on $\rho_G^q$ and of $q$-cochains of the Lie algebroid (\ref{verActAlgbdRep}) with values on $\rho_\hh^r$ coincide.
We are left to prove that the generic square \begin{align}\label{generic Ver p-square} \xymatrix{ C(G^r,\bigwedge^{q+1}\hh^*\otimes W) \ar[r]^{\partial} & C(G^{r+1},\bigwedge^{q+1}\hh^*\otimes W) \\ C(G^r,\bigwedge^q\hh^*\otimes W) \ar[r]_{\partial}\ar[u]^{\delta} & C(G^{r+1},\bigwedge^q\hh^*\otimes W) \ar[u]_{\delta} } \end{align} commutes. Given that the brackets of all the Lie algebroids involved are completely determined by the bracket of $\hh$, we restrict to proving the commutativity of (\ref{generic Ver p-square}) for constant sections. Let $\omega\in C(G^r,\bigwedge^q\hh^*\otimes W)$, $Y=(y_0,...,y_q)\in\hh^{q+1}$ and $\vec{g}$ as above. Then, \begin{align*} \partial(\delta\omega)(Y;\vec{g}) & =\rho_G^{q+1}(Y;g_0)\Big{[}\sum_{j=0}^{q}(-1)^{j+1}\rho_\hh^{r}(y_j)\omega(Y(j);\delta_0\vec{g})+\sum_{m<n}(-1)^{m+n}\omega([y_m,y_n],Y(m,n);\delta_0\vec{g})\Big{]}+ \\ & \qquad +\sum_{k=1}^{r+1}(-1)^{k}\Big{[}\sum_{j=0}^{q}(-1)^{j+1}\rho_\hh^{r}(y_j)\omega(Y(j);\delta_k\vec{g})+\sum_{m<n}(-1)^{m+n}\omega([y_m,y_n],Y(m,n);\delta_k\vec{g})\Big{]}, \end{align*} and \begin{align*} \delta(\partial & \omega)(Y;\vec{g})=\sum_{j=0}^{q}(-1)^{j+1}\rho_\hh^{r+1}(y_j)\Big{[}\rho_G^{q}(Y(j);g_0)\omega(Y(j);\delta_0\vec{g})+ \sum_{k=1}^{r+1}(-1)^{k}\omega(Y(j);\delta_k\vec{g})\Big{]}+ \\ & +\sum_{m<n}(-1)^{m+n}\Big{[}\rho_G^{q}([y_m,y_n],Y(m,n);g_0)\omega([y_m,y_n],Y(m,n);\delta_0\vec{g})+\sum_{k=1}^{r+1}(-1)^{k}\omega([y_m,y_n],Y(m,n);\delta_k\vec{g})\Big{]}. \end{align*} The equality follows from the following identities. First, $\rho_G^{q+1}(Y;g_0)=\rho_G^q([y_m,y_n],Y(m,n);g_0)=\rho_G(g_0)$. Next, for $(y;\vec{g})\in\hh\ltimes G^r$, $\hat{t}_r(y;\vec{g})=y$ and its anchor is by definition $\frac{d}{d\lambda}\rest{\lambda=0}(\vec{g})^{\exp(\lambda y)}\in T_{\vec{g}}G^r$.
Letting $\lbrace e_a\rbrace$ be a basis for $W$ and \begin{align*} \omega(Y(j);\delta_k\vec{g})=\omega^a(Y(j);\delta_k\vec{g})e_a , \end{align*} using the fact that for all $h\in H$ and every value of $k$, $(\delta_k\vec{g})^h=\delta_k(\vec{g})^h$, we conclude \begin{align*} \rho_\hh^r(y_j)\omega(Y(j);\delta_k\vec{g})=\omega^a(Y(j);\delta_k\vec{g})\rho_\hh(y_j)e_a+\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a\big{(}Y(j);\delta_k(\vec{g})^{\exp(\lambda y_j)}\big{)}\Big{)}e_a=\rho_\hh^{r+1}(y_j)\omega(Y(j),\delta_k\vec{g}). \end{align*} Finally, as $\rho_\hh\times\rho_G$ is an LA-groupoid map, the following diagram whose vertical maps are the anchors commutes, \begin{align*} \xymatrix{ TG \ar[rr]^{d\rho_G\qquad} & & TGL(W) \\ \hh\ltimes G \ar[rr]_{\rho_\hh\times\rho_G\qquad}\ar[u] & & \ggl(W)\ltimes GL(W); \ar[u] } \end{align*} therefore, $\frac{d}{d\lambda}\rest{\lambda=0}\rho_G(g^{\exp(\lambda y)})=\rho_G(g)\rho_\hh(y)-\rho_\hh(y)\rho_G(g)$. Writing $\omega^a=\omega^a(Y(j);\delta_0\vec{g})$, we compute \begin{align*} \rho_G^{q+1}(Y;g_0)\rho_\hh^r(y_j)\omega(Y(j);\delta_0\vec{g}) & =\rho_G(g_0)\Bigg{[}\omega^a\rho_\hh(y_j)e_a+\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a\big{(}Y(j);(\delta_0\vec{g})^{\exp(\lambda y_j)}\big{)}\Big{)}e_a\Bigg{]}; \end{align*} while on the other hand, $\rho_G^{q}(Y;g_0)\omega(Y(j);\delta_0\vec{g})=\omega^a\rho_G(g_0)e_a$ and \begin{align*} \rho & _\hh^{r+1}(y_j)\rho_G^{q}(Y;g_0)\omega(Y(j);\delta_0\vec{g})=\omega^a\rho_\hh(y_j)\rho_G(g_0)e_a+\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a\big{(}Y(j);(\delta_0\vec{g})^{\exp(\lambda y_j)}\big{)}\rho_G\big{(}g_0^{\exp(\lambda y_j)}\big{)}\Big{)}e_a \\ & =\omega^a\rho_\hh(y_j)\rho_G(g_0)e_a+\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a\big{(}Y(j);(\delta_0\vec{g})^{\exp(\lambda y_j)}\big{)}\Big{)}\rho_G(g_0)e_a+\omega^a(Y(j);\delta_0\vec{g})\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\rho_G\big{(}g_0^{\exp(\lambda y_j)}\big{)}\Big{)}e_a \\ &
=\omega^a\rho_\hh(y_j)\rho_G(g_0)e_a+\rho_G(g_0)\Big{[}\Big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a\big{(}Y(j);(\delta_0\vec{g})^{\exp(\lambda y_j)}\big{)}\Big{)}e_a+\omega^a\rho_\hh(y_j)e_a\Big{]}-\omega^a\rho_\hh(y_j)\rho_G(g_0)e_a, \end{align*} ultimately implying the commutativity of the square (\ref{generic Ver p-square}). \end{proof} We aim to approximate the cohomology of the map $\Phi_p$ of Theorem \ref{p-pag} by assembling van Est maps from $C^{p,\bullet}_\bullet(\G,\phi)$ to the complexes of Proposition~\ref{Ver p-page}. Since each $p$-page is induced by the double Lie groupoid map \begin{align}\label{p-pagRepn} \xymatrix{ \G_p\ltimes G \ar[r] & GL(W)\ltimes GL(W):(\gamma ;g) \ar@{|->}[r] & (\rho_0^1(t_p(\gamma))^{-1},\rho_0^1(i(g))) , } \end{align} replacing the first column of maps by (\ref{Gp1st r}), we introduce a first column replacement for the associated double complex of Proposition~\ref{Ver p-page}. \begin{lemma}\label{VerLA r-cx} Let $\omega\in\bigwedge^q\gg_p^*\otimes V$, $\Xi\in\gg_p^q$ and $g\in G$. If $\partial'\omega(\Xi;g):=\rho_1(g)\omega(\Xi)$, then, using the notation of Proposition~\ref{Ver p-page}, \begin{align*} \xymatrix{\bigwedge^q\gg_p^*\otimes V \ar[r]^{\partial'\quad} & C_{lin}^1(\gg_p^q\times G,W) \ar[r]^{\partial} & C_{lin}^2(\gg_p^q\times G,W)} \end{align*} is a complex. \end{lemma} \begin{proof} First, notice that $\partial'$ is well-defined. Then, letting $g_0,g_1\in G$, it follows from Eqs.~(\ref{GpRho1Homo}) and (\ref{GpRepEqns}) that \begin{align*} \partial(\partial'\omega)(\Xi;g_0,g_1) & =\rho_G^{q}(\Xi;g_0)\rho_1(g_1)\omega(\Xi)-\rho_1(g_0g_1)\omega(\Xi)+\rho_1(g_0)\omega(\Xi) \\ & =\rho_0^1(i(g_0))\rho_1(g_1)\omega(\Xi)-(I+\rho_1(g_0)\circ\phi)\rho_1(g_1)\omega(\Xi)=0.
\end{align*} \end{proof} \begin{proposition}\label{(q,r)-VerLAdoubleCx} For each $p\geq 0$, define $C_{LA}(\gg_p\ltimes G,\phi)$ to be \begin{align}\label{VerLADbl} \xymatrix{ \vdots & \vdots & \vdots & \\ \bigwedge^3\gg_p^*\otimes V \ar[r]^{\partial'\quad} \ar[u] & C(G,\bigwedge^3\gg_p^*\otimes W) \ar[r]^{\partial}\ar[u] & C(G^2,\bigwedge^3\gg_p^*\otimes W) \ar[r]\ar[u] & \dots \\ \bigwedge^2\gg_p^*\otimes V \ar[r]^{\partial'\quad}\ar[u]^{\delta} & C(G,\bigwedge^2\gg_p^*\otimes W) \ar[r]^{\partial}\ar[u]^{\delta} & C(G^2,\bigwedge^2\gg_p^*\otimes W) \ar[r]\ar[u]^{\delta} & \dots \\ \gg_p^*\otimes V \ar[r]^{\partial'\quad}\ar[u]^{\delta} & C(G,\gg_p^*\otimes W) \ar[r]^{\partial}\ar[u]^{\delta} & C(G^2,\gg_p^*\otimes W) \ar[r]\ar[u]^{\delta} & \dots \\ V \ar[r]^{\delta'}\ar[u]^{\delta} & C(G,W) \ar[r]^{\delta_{(1)}}\ar[u]^{\delta} & C(G^2,W) \ar[r]\ar[u]^{\delta} & \dots , } \end{align} where the maps in the first column are given by Lemma~\ref{VerLA r-cx} and the rest by Proposition~\ref{Ver p-page}. Then $C_{LA}(\gg_p\ltimes G,\phi)$ is a double complex, the vertical LA-double complex of $C^{p,\bullet}_\bullet(\G,\phi)$. \end{proposition} \begin{proof} The proof reduces to showing that all the squares in the first column commute. For a constant section $\xi\in\Gamma(\gg_p\ltimes G)$ and $g\in G$, one has \begin{align*} \rho_{\gg_p}^1(\xi)\rho_1(g) & =\rho_{\gg_p}(\xi)\rho_1(g)+\frac{d}{d\lambda}\rest{\lambda=0}\rho_1(g^{t_p(\exp(\lambda\xi))})\\ & =\dot{\rho}_0^1(\hat{t}_p(\xi))\rho_1(g)+\frac{d}{d\lambda}\rest{\lambda=0}\rho_0^1(t_p(\exp(\lambda\xi)))^{-1}\rho_1(g)\rho_0^0(t_p(\exp(\lambda\xi)) \\ & =\dot{\rho}_0^1(\hat{t}_p(\xi))\rho_1(g)-\dot{\rho}_0^1(\hat{t}_p(\xi))\rho_1(g)+\rho_1(g)\dot{\rho}_0^0(\hat{t}_p(\xi))=\rho_1(g)\dot{\rho}_0^0(\hat{t}_p(\xi)). \end{align*} If $q=0$, let $v\in V$, then $\delta(\delta'v)(\xi;g)=\rho_{\gg_p}^1(\xi)\rho_1(g)v=\rho_1(g)\dot{\rho}_0^0(\hat{t}_p(\xi))v=\partial'(\delta v)(\xi;g)$.
If $q>0$, let $\omega\in\bigwedge^q\gg_p^*\otimes V$ and $\Xi=(\xi_0,...,\xi_q)\in\gg_p^{q+1}$, then \begin{align*} \delta(\delta'\omega)(\Xi;g) & =\sum_{j=0}^{q}(-1)^j\rho_{\gg_p}^{1}(\xi_j)\rho_1(g)\omega(\Xi(j))+\sum_{m<n}(-1)^{m+n}\rho_1(g)\omega([\xi_m,\xi_n],\Xi(m,n)) \\ & =\rho_1(g)\Big{(}\sum_{j=0}^{q}(-1)^{j+1}\dot{\rho}_0^0(\hat{t}_p(\xi_j))\omega(\Xi(j))+\sum_{m<n}(-1)^{m+n}\omega([\xi_m,\xi_n],\Xi(m,n))\Big{)}=\partial'(\delta\omega)(\Xi;g). \end{align*} \end{proof} Let \begin{align}\label{VerVanEstMap} \xymatrix{\Phi_V:C^{p,\bullet}_\bullet(\G,\phi) \ar[r] & C_{LA}(\gg_p\ltimes G,\phi)} \end{align} be defined by assembling column-wise van Est maps \begin{align*} & \xymatrix{\Phi_V^0:C(\G_p^q,V) \ar[r] & \bigwedge^q\gg_p^*\otimes V} & \text{and }\qquad & \xymatrix{\Phi_V^r:C(\G_p^\bullet\times G^r,W) \ar[r] & C(G^r,\bigwedge^\bullet\gg_p^*\otimes W).} \end{align*} To describe $\Phi_V^r$ explicitly consider a $q$-cochain $\omega\in C(\G_p^q\times G^r,W)$, $q$ sections $\xi_1,...,\xi_q\in\Gamma(\gg_p\ltimes G^r)$ and let $\lbrace u_a\rbrace$ be a basis for $\gg_p$. Then, writing $(\xi_j)_{\vec{g}}=\xi_j^a(\vec{g})u_a$ for $\vec{g}\in G^r$, the right-invariant vector field $\vec{\xi}_j\in\mathfrak{X}(\G_p\ltimes G^r)$ associated to $\xi_j$ is given by \begin{align*} (\vec{\xi}_j)_{(\gamma;\vec{g})}: & =\frac{d}{d\lambda_a}\rest{\lambda_a=0}\xi_j^a((\vec{g})^{t_p(\gamma)})(\exp_{\G_p}(\lambda_au_a);(\vec{g})^{t_p(\gamma)})\Join(\gamma;\vec{g}) \\ & =\xi_j^a((\vec{g})^{t_p(\gamma)})\frac{d}{d\lambda_a}\rest{\lambda_a=0}(\gamma\vJoin\exp_{\G_p}(\lambda_au_a);\vec{g})=\xi_j^a((\vec{g})^{t_p(\gamma)})dL_\gamma(u_a)\in T_\gamma\G_p\leq T_\gamma\G_p\oplus T_{\vec{g}}G^r. 
\end{align*} Consequently, if $\gamma_2,...,\gamma_q\in\G_p$, $(R_{\xi_q}\omega)(\gamma_2,...,\gamma_q;\vec{g})=\xi_q^a((\vec{g})^{t_p(\gamma_q)...t_p(\gamma_2)})\frac{d}{d\lambda_a}\rest{\lambda_a=0}\omega(\exp_{\G_p}(\lambda_au_a),\gamma_2,...,\gamma_q;\vec{g})$, and \begin{align*} R & _{\xi_{q-1}}(R_{\xi_q}\omega)(\gamma_3,...,\gamma_q;\vec{g})=\xi_{q-1}^b((\vec{g})^{t_p(\gamma_q)...t_p(\gamma_3)})\frac{d}{d\lambda_{b}}\rest{\lambda_{b}=0}(R_{\xi_q}\omega)(\exp_{\G_p}(\lambda_{b}u_b),\gamma_3,...,\gamma_q;\vec{g}) \\ & =\xi_{q-1}^b((\vec{g})^{t_p(\gamma_q)...t_p(\gamma_3)})\frac{d}{d\lambda_{b}}\rest{\lambda_{b}=0}\Big{(}\xi_q^a((\vec{g})^{t_p(\gamma_q)...t_p(\gamma_3)\exp(\lambda_{b}u_b)})\frac{d}{d\lambda_a}\rest{\lambda_a=0}\omega(\exp(\lambda_au_a),\exp(\lambda_bu_b),\gamma_3,...,\gamma_q;\vec{g})\Big{)} \\ & =(\xi_{q-1}^b\xi_q^a)((\vec{g})^{t_p(\gamma_q)...t_p(\gamma_3)})\frac{d}{d\lambda_{b}}\rest{\lambda_{b}=0}\frac{d}{d\lambda_a}\rest{\lambda_a=0}\omega(\exp_{\G_p}(\lambda_au_a),\exp_{\G_p}(\lambda_bu_b),\gamma_3,...,\gamma_q;\vec{g}). \end{align*} Inductively, for $\Xi=(\xi_1,...,\xi_q)$, $\overrightarrow{R}_\Xi\omega=(\xi_1^{a_1}...\xi_q^{a_q})(\vec{g})\frac{d^I}{d\lambda_{I}}\rest{\lambda_I=0}\omega(\exp_{\G_p}(\lambda_1u_{a_1}),...,\exp_{\G_p}(\lambda_qu_{a_q});\vec{g})$ and \begin{align}\label{PhiVr} (\Phi_V^r\omega)(\Xi;\vec{g}) & =\sum_{\sigma\in S_q}\abs{\sigma}\frac{d^I}{d\lambda_{I}}\rest{\lambda_I=0}\omega\big{(}\exp(\lambda_I\cdot\sigma(\Xi_{\vec{g}}));\vec{g}\big{)}. \end{align} \begin{proposition} The map $\Phi_V$ (\ref{VerVanEstMap}) is a map of double complexes. \end{proposition} \begin{proof} Since by definition $\Phi_V$ defines a map of complexes when restricted to columns, we prove that it is compatible with the horizontal differentials in (\ref{VerLADbl}).
For $r=0$, let $\omega\in C(\G_p^q,V)$, $\Xi=(\xi_1,...,\xi_q)\in\gg_p^q$ and $g\in G$, then \begin{align*} \overrightarrow{R}_{\Xi}\delta'\omega(g) & =\frac{d^I}{d\lambda_I}\rest{\lambda_I=0}\rho_0^1(t_p(\exp_{\G_p}(\lambda_1\xi_1)...\exp_{\G_p}(\lambda_q\xi_q)))^{-1}\rho_1(g)\omega(\exp(\lambda_I\cdot\Xi)); \end{align*} inductively yielding $\overrightarrow{R}_{\Xi}\delta'\omega(g)=\rho_1(g)\overrightarrow{R}_{\Xi}\omega$. Taking the alternating sum over $S_q$, $\Phi_V^1\delta'\omega =\partial'\Phi_V^0\omega$. For $r>0$, let $\omega\in C(\G_p^q\times G^r,W)$, $\vec{g}=(g_0,...,g_r)\in G^{r+1}$ and $\Xi$ as above, then \begin{align*} \overrightarrow{R}_\Xi(\delta_{(1)}\omega)(\vec{g}) & =\frac{d^I}{d\lambda_{I}}\rest{\lambda_I=0}\rho_0^1\big{(}g_0^{t_p(\exp(\lambda_1\xi_1))...t_p(\exp(\lambda_q\xi_q))}\big{)}\omega(\exp(\lambda_I\cdot\Xi);\delta_0\vec{g})+\sum_{k=1}^{r+1}(-1)^{k}\omega(\exp(\lambda_I\cdot\Xi);\delta_k\vec{g}). \end{align*} Inductively, $\frac{d^I}{d\lambda_{I}}\rest{\lambda_I=0}\rho_0^1\big{(}g_0^{t_p(\exp(\lambda_1\xi_1))...t_p(\exp(\lambda_q\xi_q))}\big{)}\omega(\exp(\lambda_I\cdot\Xi);\delta_0\vec{g})=\rho_0^1(i(g_0))\overrightarrow{R}_\Xi\omega(\delta_0\vec{g})$; hence, taking the alternating sum over $S_q$ and recalling $\rho_G^q(\Xi;g_0)=\rho_0^1(i(g_0))$, \begin{align*} \Phi_V^{r+1}(\delta_{(1)}\omega)(\Xi;\vec{g}) & =\rho_G^q(\Xi;g_0)\Phi_V^r\omega(\Xi;\delta_0\vec{g})+\sum_{k=1}^{r+1}(-1)^{k}\Phi_V^r\omega(\Xi;\delta_k\vec{g})=\partial(\Phi_V^r\omega)(\Xi;\vec{g}). \end{align*} \end{proof} \begin{theorem}\label{Ver p-vanEst} If $\G_p$ is $k$-connected, $H^n_{tot}(\Phi_V)=(0)$ for all degrees $n\leq k$.
\end{theorem} \begin{proof} We compute the cohomology of the mapping cone double of $\Phi_V$ using the spectral sequence of its filtration by columns, whose first page is \begin{align*} \xymatrix{ & \vdots & \vdots & \vdots & \\ & H^2(\Phi_V^0) \ar[r] & H^2(\Phi_V^1) \ar[r] & H^2(\Phi_V^2) \ar[r] & \dots \\ E^{p,q}_1: & H^1(\Phi_V^0) \ar[r] & H^1(\Phi_V^1) \ar[r] & H^1(\Phi_V^2) \ar[r] & \dots \\ & H^0(\Phi_V^0) \ar[r] & H^0(\Phi_V^1) \ar[r] & H^0(\Phi_V^2) \ar[r] & \dots } \end{align*} Since $\Phi_V$ is defined column-wise by the van Est maps $\Phi_V^r$, the $r$th column of the mapping cone double coincides with the mapping cone of $\Phi_V^r$. Invoking Theorem~\ref{Crainic-vanEstRephrased}, the $r$th column of $E^{p,q}_1$ vanishes below $k$; indeed, $\G_p$ is the $s$-fibre of $\xymatrix{\G_p\ltimes G^r \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & G^r}$ and is $k$-connected by hypothesis. Given that $E^{p,q}_1\Rightarrow H^{p+q}_{tot}(\Phi_V)$, the result follows from Lemma \ref{BelowDiag}. \end{proof} \subsection{Second approximation and the main theorem}\label{subsec-2ndAprox} Observe that the $q$th row of $C_{LA}(\gg_p\ltimes G,\phi)$ coincides with the cochain complex of the Lie group $G$ with values in the representation \begin{align*} & \xymatrix{\rho_{(q)}:G \ar[r] & GL(\bigwedge^q\gg_p^*\otimes W),} & \rho_{(q)}(g)\omega &=\rho_0^1(i(g))\omega\quad\text{for }\omega\in\bigwedge^q\gg_p^*\otimes W \end{align*} except in degree $0$. Note that assembling row-wise van Est maps, extended by the identity in degree $0$, defines a map \begin{align}\label{RowVanEstMap} \xymatrix{\Phi_{row}:C_{LA}(\gg_p\ltimes G,\phi) \ar[r] & C^{p,\bullet}_\bullet(\gg_1 ,\phi)} \end{align} which lands in the $p$-page of the grid of the Lie $2$-algebra. Let $\xymatrix{\Phi_{row}^q:C(G^r,\bigwedge^q\gg_p^*\otimes W) \ar[r] & \bigwedge^q\gg_p^*\otimes\bigwedge^r\gg^*\otimes W}$ be defined as the van Est map for $r>0$ and as the identity of $\bigwedge^q\gg_p^*\otimes V$ for $r=0$.
$\Phi_{row}^q$ is explicitly given by \begin{align}\label{PhiRow} (\Phi_{row}^q\omega)(\Xi;Z) & =\sum_{\varrho\in S_r}\abs{\varrho}\frac{d^J}{d\tau_{J}}\rest{\tau_J=0}\omega\big{(}\Xi;\exp(\tau_J\cdot\varrho(Z_{\vec{\gamma}}))\big{)} \end{align} for $\omega\in C(G^r,\bigwedge^q\gg_p^*\otimes W)$, $\Xi\in\gg_p^q$ and $Z=(z_1,...,z_r)\in\gg^r$. \begin{proposition} $\Phi_{row}$ is a map of double complexes. \end{proposition} \begin{proof} Since by definition $\Phi_{row}$ defines a map of complexes when restricted to rows and to the first column, we are left to prove that it is compatible with the vertical differentials in (\ref{VerLADbl}). Let $\omega\in C(G^r,\bigwedge^q\gg_p^*\otimes W)$, $\Xi=(\xi_0,...,\xi_q)\in\gg_p^{q+1}$ and $Z=(z_1,...,z_r)\in\gg^r$; then \begin{align*} \overrightarrow{R}_Z\delta\omega(\Xi) & =\frac{d^J}{d\tau_J}\rest{\tau_J=0}\sum_{j=0}^{q}(-1)^j\rho_{\gg_p}^r(\xi_j)\omega(\Xi(j);\exp(\tau_J\cdot Z))+\sum_{m<n}(-1)^{m+n}\omega([\xi_m,\xi_n],\Xi(m,n);\exp(\tau_J\cdot Z)) . \end{align*} Let $\lbrace e_a\rbrace$ be a basis for $W$ and write $\omega(\Xi(j);\exp(\tau_J\cdot Z))=\omega^a(\Xi(j);\exp(\tau_J\cdot Z))e_a$.
By definition, \begin{align*} \rho_{\gg_p}^r(\xi_j)\omega(\Xi(j);\exp(\tau_J\cdot Z))=\omega^a(\Xi(j);\exp(\tau_J\cdot Z))\dot{\rho}_0^1(\hat{t}_p(\xi_j))e_a+\big{(}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a(\Xi(j);\exp(\tau_J\cdot Z)^{t_p(\exp(\lambda\xi_j))})\big{)}e_a ; \end{align*} hence, we compute \begin{align*} \frac{d}{d\tau_1}\rest{\tau_1=0}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a( & \Xi(j);\exp(\tau_J\cdot Z)^{t_p(\exp(\lambda\xi_j))}) =\frac{d}{d\lambda}\rest{\lambda=0}R_{x_1^{t_p(\exp(\lambda\xi_j))}}\omega^a(\Xi(j);\exp(\tau_J\cdot Z)(1)^{t_p(\exp(\lambda\xi_j))}) \\ & =-R_{\Lie_{\hat{t}_p(\xi_j)}x_1}\omega^a(\Xi(j);\exp(\tau_J\cdot Z)(1))+\frac{d}{d\lambda}\rest{\lambda=0}R_{x_1}\omega^a(\Xi(j);\exp(\tau_J\cdot Z)(1)^{t_p(\exp(\lambda\xi_j))}), \end{align*} which inductively yields \begin{align*} \frac{d^J}{d\tau_J}\rest{\tau_J=0}\frac{d}{d\lambda}\rest{\lambda=0}\omega^a(\Xi(j);\exp(\tau_J\cdot Z)^{t_p(\exp(\lambda\xi_j))}) & =-\sum_{k=1}^rR_{x_r}...R_{x_{k+1}}R_{\Lie_{\hat{t}_p(\xi_j)}x_k}R_{x_{k-1}}...R_{x_1}\omega(\Xi(j)). \end{align*} The result now follows from taking the alternating sum over $S_r$. \end{proof} That the cohomology of $\Phi_{row}$ vanishes follows from a strategy analogous to that of Theorem~\ref{Ver p-vanEst}; however, we cannot use the Crainic--van Est theorem directly, as we replaced the space of $0$-cochains. In the sequel, we prove a van Est type theorem for $\Phi_{row}^q$ that takes these replacements into account. \begin{lemma}\label{tweak0} Let $\partial'$ be the map of Lemma~\ref{VerLA r-cx} and $\delta'$ the map of Eq.~(\ref{Alg1st r}). For constant $p,q\geq 0$, if $G$ is connected, $\ker\partial'=\ker\delta'$. \end{lemma} \begin{proof} ($\subseteq$) If $\omega\in\ker\partial'$, $(\partial'\omega)(\Xi;g)=0$ for all $(\Xi;g)\in\gg_p^q\times G$. Then, for $z\in\gg$, \begin{align*} (\delta'\omega)(\Xi;z)=\Phi_{row}^q(\partial'\omega)(\Xi;z)=\frac{d}{d\tau}\rest{\tau =0}(\partial'\omega)(\Xi;\exp_G(\tau z))=0.
\end{align*} ($\supseteq$) Conversely, if $\omega\in\ker\delta'$, $(\delta'\omega)(\Xi;z)=\dot{\rho}_1(z)\omega(\Xi)=0$ for all $(\Xi;z)\in\gg_p^q\times\gg$. Being connected, $G$ is generated by $\exp_G(U)\subset G$ for some neighborhood $U$ of the identity. Therefore, for all $g\in G$, there exist $z_1,...,z_n\in\gg$ such that $g=\exp_G(z_1)...\exp_G(z_n)$. Since $\rho_1$ is a Lie group homomorphism, it follows from Eq.~(\ref{TheExpOfGL(phi)}) that \begin{align*} \rho_1(\exp_G(z))\omega(\Xi)=\exp_{GL(\phi)_1}(\dot{\rho}_1(z))\omega(\Xi)=\sum_{n=0}^\infty\frac{(\dot{\rho}_1(z)\phi)^n}{(n+1)!}\dot{\rho}_1(z)\omega(\Xi)=0 \end{align*} for all $z\in\gg$. Now, $\partial'\omega(\Xi;g)=\rho_1(g)\omega(\Xi)=\rho_1(\exp_G(z_1)...\exp_G(z_n))\omega(\Xi)$ and it follows from Eq.~(\ref{GpRho1Homo}) that \begin{align*} \rho_1(\exp_G(z_1)...\exp_G(z_n)) & =\rho_1(\exp(z_1)...\exp(z_{n-1}))+\rho_1(\exp_G(z_n))+\rho_1(\exp(z_1)...\exp(z_{n-1}))\phi\rho_1(\exp_G(z_n)); \end{align*} hence, the result follows from a simple induction. \end{proof} As $\Phi_{row}$ restricted to the first column of (\ref{VerLADbl}) is the identity and there are no cochains of negative degree, Lemma~\ref{tweak0} is a van Est type theorem in degree $0$, obtained as a consequence of several pieces of Lie theory. In contrast, the following lemma is stated as a general result of homological algebra and naturally implies that, if $G$ is $1$-connected, $\Phi_{row}$ induces an isomorphism in degree $1$. \begin{definition}\label{phiRel} If $(C_1^\bullet,d_{C_1})$ and $(C_2^\bullet,d_{C_2})$ are equal complexes except in degree zero, and there is a map $\xymatrix{\phi:C_1^0 \ar[r] & C_2^0}$ such that $d_{C_1}=d_{C_2}\circ\phi$, then they are called $\phi$-related. \end{definition} \begin{lemma}\label{tweak1} Let $(C_1^\bullet,d_{C_1})$, $(C_2^\bullet,d_{C_2})$ be $\phi_C$-related complexes, and let $(D_1^\bullet,d_{D_1})$, $(D_2^\bullet,d_{D_2})$ be $\phi_D$-related complexes.
Let $\xymatrix{\Phi_1:C_1^\bullet \ar[r] & D_1^\bullet}$ and $\xymatrix{\Phi_2:C_2^\bullet \ar[r] & D_2^\bullet}$ be maps of complexes that coincide except in degree zero, where \begin{align*} \xymatrix{C_1^0 \ar[rd]^{d_{C_1}}\ar[dd]_{\phi_C}\ar[rr]^{\Phi_1} & & D_1^0 \ar[rd]^{d_{D_1}}\ar'[d]_{\phi_D}[dd] & \\ & C_1^1=C_2^1 \ar[rr]^{\quad\qquad\Phi_1=\Phi_2} & & D_1^1=D_2^1. \\ C_2^0 \ar[ur]_{d_{C_2}}\ar[rr]_{\Phi_2} & & D_2^0 \ar[ur]_{d_{D_2}} & } \end{align*} If $\Phi_1$ induces an isomorphism $H^1(C_1)\cong H^1(D_1)$, then $\Phi_2$ induces an isomorphism $H^1(C_2)\cong H^1(D_2)$. \end{lemma} \begin{proof} Let $Z^1_{X_k}:=\xymatrix{\ker(d_{X_k}:X_k^1 \ar[r] & X_k^2)}$ and $B^1_{X_k}:=d_{X_k}(X_k^0)$, for $X\in\lbrace C,D\rbrace$ and $k\in\lbrace 1,2\rbrace$, and consider the maps of exact sequences \begin{align*} \xymatrix{ 0 \ar[r] & B^1_{C_k} \ar[d]_{\Phi_k\rest{B^1_{C_k}}}\ar[r] & Z^1_{C_k} \ar[d]_{\Phi_k}\ar[r] & H^1(C_k) \ar[d]_{[\Phi_k]}\ar[r] & 0 \\ 0 \ar[r] & B^1_{D_k} \ar[r] & Z^1_{D_k} \ar[r] & H^1(D_k) \ar[r] & 0 } \end{align*} whose associated long exact sequence is \begin{align*} \xymatrix{ 0 \ar[r] & \ker(\Phi_k\rest{B^1_{C_k}}) \ar[r] & \ker\Phi_k \ar[r] & \ker[\Phi_k] \ar[r] & \coker(\Phi_k\rest{B^1_{C_k}}) \ar[r] & \coker\Phi_k \ar[r] & \coker[\Phi_k] \ar[r] & 0. } \end{align*} By hypothesis, $\ker[\Phi_1]$ and $\coker[\Phi_1]$ are trivial, thus implying $\coker(\Phi_1\rest{B^1_{C_1}})\cong\coker\Phi_1$ and $\ker(\Phi_1\rest{B^1_{C_1}})=\ker\Phi_1$, which we interpret as $\ker\Phi_1\subset B^1_{C_1}$. Since for every element $x\in C_1^0$, $d_{C_1}(x)=d_{C_2}(\phi_C(x))$, we have $B^1_{C_1}\subseteq B^1_{C_2}$; therefore, $\ker(\Phi_2\rest{B^1_{C_2}})=\ker\Phi_1\cap B^1_{C_2}=\ker\Phi_1$.
As a consequence, $\ker[\Phi_2]$ vanishes, so the induced map in cohomology is injective and we are left with the short exact sequence \begin{align*} \xymatrix{ 0 \ar[r] & \coker(\Phi_2\rest{B^1_{C_2}}) \ar[r] & \coker\Phi_2 \ar[r] & \coker[\Phi_2] \ar[r] & 0.} \end{align*} Now, $d_{D_1}=d_{D_2}\circ\phi_D$ implies $B^1_{D_1}\subseteq B^1_{D_2}$, so there is a map of exact sequences \begin{align}\label{finSeq} \xymatrix{ 0 \ar[r] & \coker(\Phi_1\rest{B^1_{C_1}}) \ar[d]_{\alpha}\ar[r] & \coker\Phi_1 \ar[d]_{Id}\ar[r] & 0 \ar[d]\ar[r] & 0 \\ 0 \ar[r] & \coker(\Phi_2\rest{B^1_{C_2}}) \ar[r] & \coker\Phi_2 \ar[r] & \coker[\Phi_2] \ar[r] & 0, } \end{align} where, for $y\in D^0_1$, $\alpha(d_{D_1}(y)+\Phi_1(B^1_{C_1})):=d_{D_2}(\phi_D(y))+\Phi_2(B^1_{C_2})$. The long exact sequence of (\ref{finSeq}) tells us that $\alpha$ is an isomorphism and $\coker[\Phi_2]$ is trivial, so the induced map in cohomology is surjective. \end{proof} \begin{remark} In the proof of Lemma~\ref{tweak1}, the inclusions $B^1_{X_1}\subseteq B^1_{X_2}$ also give rise to exact sequences \begin{align*} \xymatrix{ 0 \ar[r] & B^1_{X_1} \ar[d]\ar[r] & Z^1_{X_1} \ar[d]_{Id}\ar[r] & H^1(X_1) \ar[d]\ar[r] & 0 \\ 0 \ar[r] & B^1_{X_2} \ar[r] & Z^1_{X_2} \ar[r] & H^1(X_2) \ar[r] & 0 } \end{align*} out of whose long exact sequences one reads \begin{align}\label{dirSum} H^1(X_1)\cong H^1(X_2)\oplus \frac{B^1_{X_2}}{B^1_{X_1}}. \end{align} What the proof of Lemma~\ref{tweak1} ultimately says is that the isomorphism $H^1(C)\cong H^1(D)$ is diagonal with respect to the direct sum decompositions of Eq.~(\ref{dirSum}). \end{remark} \begin{proposition}\label{TweakedVanEst} For constant $q\geq 0$, if $G$ is $k$-connected, $H^n(\Phi_{row}^q)=(0)$ for all degrees $n\leq k$.
\end{proposition} \begin{proof} Since $G$ is the $s$-fibre of the group bundle $\xymatrix{\gg_p^q\times G \ar@<0.5ex>[r]\ar@<-0.5ex>[r] & \gg_p^q}$ and is $k$-connected by hypothesis, Theorem~\ref{Crainic-vanEstRephrased} implies the result for $1<n\leq k$. That there is an isomorphism in degree $0$ follows from Lemma~\ref{tweak0}. As for degree $1$, the result follows from Lemma~\ref{tweak1} after noticing that letting \begin{align*} \phi_p^q & \xymatrix{:\bigwedge^q\gg_p^*\otimes W \ar[r] & \bigwedge^q\gg_p^*\otimes V} & (\phi_p^q\omega)(\Xi) & :=\phi(\omega(\Xi))\quad\text{for }\Xi\in\gg_p^q, \end{align*} the $q$th rows of (\ref{protoVerDbl}) and (\ref{VerLADbl}) are $\phi_p^q$-related and the same holds for the Chevalley-Eilenberg complex of $\gg$ with values in (\ref{rRep}) and the $q$th row of the $p$-page $C^{p,\bullet}_\bullet(\gg_1,\phi)$. \end{proof} \begin{theorem}\label{res-vanEst} If $G$ is $k$-connected, $H^n_{tot}(\Phi_{row})=(0)$ for all degrees $n\leq k$. \end{theorem} \begin{proof} We compute the cohomology of the (transposed) mapping cone double of $\Phi_{row}$ using the spectral sequence of its filtration by rows, whose first page is \begin{align*} \xymatrix{ & \vdots & \vdots & \vdots & \\ & H^0(\Phi_{row}^2) \ar[u] & H^1(\Phi_{row}^2) \ar[u] & H^2(\Phi_{row}^2) \ar[u] & \dots \\ E^{p,q}_1: & H^0(\Phi_{row}^1) \ar[u] & H^1(\Phi_{row}^1) \ar[u] & H^2(\Phi_{row}^1) \ar[u] & \dots \\ & H^0(\Phi_{row}^0) \ar[u] & H^1(\Phi_{row}^0) \ar[u] & H^2(\Phi_{row}^0) \ar[u] & \dots } \end{align*} Since $\Phi_{row}$ is defined row-wise by van Est maps, the $q$th row of the transposed mapping cone double coincides with the mapping cone of $\Phi_{row}^q$. Invoking Proposition~\ref{TweakedVanEst}, the $q$th row of $E^{p,q}_1$ vanishes below $k$; indeed, $G$ is $k$-connected by hypothesis. Given that $E^{p,q}_1\Rightarrow H^{p+q}_{tot}(\Phi_{row})$, the result follows from Lemma \ref{BelowDiag}. 
\end{proof} \begin{remark} One can prove results analogous to those in Subsection~\ref{subsec-1stAprox} for the horizontal differential of (\ref{DblRepn}), yielding as a first approximation a map to the double complex associated to the other LA-groupoid in (\ref{LAGpds}). In that case, the ideas needed to prove that its cohomology vanishes parallel those of Lemmas~\ref{tweak0} and \ref{tweak1}. We opted for the present approach because, in the second approximation, one would need a van Est theory adapted to the subcomplex of multilinear cochains. \end{remark} We are ready to prove that the cohomology of the restriction $\Phi_p$ (\ref{Phi_p}) of the van Est map $\Phi$ (\ref{vanEstMap}) to the $p$-pages vanishes. \begin{theorem}\label{vanEst p-pages} If $H$ and $G$ are both $k$-connected, $H^n_{tot}(\Phi_p)=(0)$ for all degrees $n\leq k$. \end{theorem} \begin{proof} As in the proof of Theorem~\ref{2vE-vs}, it follows from the K\"unneth formula and the $k$-connectedness of $H$ and $G$ that $\G_p$ is $k$-connected as well. Theorem~\ref{Ver p-vanEst} and Proposition \ref{ConeCoh} imply that $\Phi_V$ induces isomorphisms in cohomology for $n\leq k$ and is injective for $n=k+1$. Analogously, as $G$ is $k$-connected, Theorem~\ref{res-vanEst} and Proposition~\ref{ConeCoh} imply that the same holds for $\Phi_{row}$. The result follows from Proposition~\ref{ConeCoh} after noticing that (cf. Eqs.~(\ref{PhiVr}) and (\ref{PhiRow})), for constant $p$, \begin{align*} \xymatrix{ C(\G_p^q\times G^r,W) \ar[rr]^{\Phi_p}\ar[dr]_{\Phi_V^r} & & \bigwedge^q\gg_p^*\otimes\bigwedge^r\gg^*\otimes W . \\ & C(G^r,\bigwedge^q\gg_p^*\otimes W) \ar[ur]_{\Phi_{row}^q} & } \end{align*} \end{proof} As announced in the introduction, we can now prove the main theorem.
\begin{proof}(\textit{of Theorem}~\ref{2-vanEstTheo}) We compute the cohomology of the mapping cone triple of $\Phi$ using the spectral sequence of the filtration by columns of (\ref{Cllpsed}), whose first page is (schematically) \begin{align}\label{CllpsedCoh} \xymatrix{ & \vdots & \vdots & \vdots & \vdots \\ & H_{tot}^3(\Phi_0) \ar[r]\ar@{-->}[rrd]\ar@{.>}[rrrdd] & H_{tot}^3(\Phi_1) \ar[r]\ar@{-->}[rrd]\ar@{.>}[rrrdd] & H_{tot}^3(\Phi_2) \ar[r]\ar@{-->}[rrd] & H_{tot}^3(\Phi_3) \ar[r] & \dots \\ E^{p,q}_1: & H_{tot}^2(\Phi_0) \ar[r]\ar@{-->}[rrd] & H_{tot}^2(\Phi_1) \ar[r]\ar@{-->}[rrd] & H_{tot}^2(\Phi_2) \ar[r]\ar@{-->}[rrd] & H_{tot}^2(\Phi_3) \ar[r] & \dots \\ & H_{tot}^1(\Phi_0) \ar[r] & H_{tot}^1(\Phi_1) \ar[r] & H_{tot}^1(\Phi_2) \ar[r] & H_{tot}^1(\Phi_3) \ar[r] & \dots\\ & H_{tot}^0(\Phi_0) \ar[r] & H_{tot}^0(\Phi_1) \ar[r] & H_{tot}^0(\Phi_2) \ar[r] & H_{tot}^0(\Phi_3) \ar[r] & \dots } \end{align} By definition, the $p$th column in (\ref{CllpsedCoh}) is given by the cohomology of the mapping cone double of $\Phi_p$. Invoking Theorem~\ref{vanEst p-pages}, the $p$th column of $E^{p,q}_1$ vanishes below $k$; indeed, both $H$ and $G$ are $k$-connected by hypothesis. Given that $E^{p,q}_1\Rightarrow H^{p+q}_{tot}(\Phi)$, the result follows from Lemma \ref{BelowDiag} and Proposition~\ref{ConeCoh}. \end{proof} Having all the ingredients to run van Est's strategy, we conclude: \begin{corollary} Every finite-dimensional Lie $2$-algebra is integrable. \end{corollary} \acks This work was supported by CAPES; the \emph{National Council for Scientific and Technological Development} (CNPq); and FAPERJ [grant number E-26/202.439/2019].
\section{Introduction}\label{S:intro} Systems of interacting particles arise in a myriad of applications ranging from opinion dynamics~\cite{hegselmann2002opinion}, granular materials~\cite{BCCP98,carrillo2003kinetic,BGG2013} and mathematical biology~\cite{keller1971model,BCM} to statistical mechanics~\cite{martzel2001mean}, galactic dynamics~\cite{binney2008}, droplet growth~\cite{conlon2017}, plasma physics~\cite{bittencourt1986fund}, and synchronisation~\cite{kuramoto1981rhythms}. Apart from being of independent interest, these systems find applications in a diverse range of fields such as: particle methods in numerical analysis~\cite{delmoral2010intro}, consensus-based methods for global optimisation~\cite{carrillo2018analytical}, and nonlinear filtering~\cite{crisan1997}. They have also been studied in the context of multiscale analysis~\cite{gomes2018mean}, in the presence of memory-like effects and in a non-Markovian setting~\cite{duong2018mean}, and in the discrete setting of graphs~\cite{EFLS16}. In this paper, we analyse the partial differential equation (PDE) associated to the system of interacting stochastic differential equations (SDEs) on ${\mathbb T}^d$, the torus of side length $L>0$, of the following form \begin{align} dX^i_t = -\frac{\kappa}{ N} \sum\limits_{j \neq i}^N\nabla W(X^i_t -X^j_t) \dx{t} + \sqrt{2 \beta^{-1}} dB^i_t \, , \label{eq:lang} \end{align} where the $X^i_t \in {\mathbb T}^d$, $i=1,\dots,N$, represent the positions of the $N$ ``particles'', $W$ is a periodic interaction potential, and the $B^i_t$, $i=1,\dots,N$, represent $N$ independent ${\mathbb T}^d$-valued Brownian motions. The constants $\kappa,\beta>0$ represent the strength of interaction and the inverse temperature, respectively. Since one of the two parameters is redundant, we keep $\beta$ fixed for the rest of the paper. It is clear that what we have described is a set of interacting overdamped Langevin equations.
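For intuition, the system \eqref{eq:lang} is straightforward to simulate in $d=1$. The sketch below (ours, not the paper's code) implements a plain Euler--Maruyama discretisation with the Kuramoto-type potential $W(x)=-(2/L)^{1/2}\cos(2\pi x/L)$ that reappears in the applications; all function names and parameter values are illustrative choices.

```python
import numpy as np

def simulate(N=50, L=2*np.pi, kappa=1.0, beta=5.0, dt=1e-2, steps=200, seed=0):
    """Euler--Maruyama discretisation of the interacting SDE system on the
    torus, in d = 1, for W(x) = -sqrt(2/L) cos(2*pi*x/L)."""
    rng = np.random.default_rng(seed)
    # gradient of the chosen interaction potential
    gradW = lambda x: np.sqrt(2/L) * (2*np.pi/L) * np.sin(2*np.pi*x/L)
    X = rng.uniform(-L/2, L/2, size=N)          # initial positions
    for _ in range(steps):
        diff = X[:, None] - X[None, :]          # pairwise differences X^i - X^j
        # gradW(0) = 0, so the diagonal term enforces j != i automatically
        drift = -(kappa/N) * gradW(diff).sum(axis=1)
        X = X + drift*dt + np.sqrt(2*dt/beta) * rng.standard_normal(N)
        X = (X + L/2) % L - L/2                 # wrap back onto [-L/2, L/2)
    return X
```

For large $\kappa$ the attractive drift clusters the particles, previewing the phase transition discussed below.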
Based on the choice of $W(x)$, one can then obtain models for numerous phenomena from the physical, biological, and social sciences. We refer to~\cite{review,pareschi2013interacting,muntean2014collective,MT2014} and the references therein for a comprehensive list of such models. Systems of interacting diffusions have been studied extensively. They were first analysed by McKean (cf.~\cite{mckean1966class,mckean1967propagation}), who noticed an interesting relation between them and a class of nonlinear parabolic partial differential equations. In particular, it is well known (cf.~\cite{oelschlager1984martingale,sznitman1991topics}) that for this class of SDEs one can pass to the so-called mean field limit: if we consider the empirical measure defined as follows \begin{align} \varrho^{(N)} := \frac{1}{N} \sum\limits_{i=1}^N \delta_{X^i_t}, \quad\text{ with }\quad \operatorname{Law}(X_0:=(X^1_0, \dots, X^N_0 )) =\prod\limits_{i=1}^N \varrho_0(x_i) \, , \end{align} then, provided that $W$ is smooth, as $N \to \infty$, $\mathbb{E}\bra{\varrho^{(N)}}$ converges in the sense of weak convergence of probability measures to some measure $\varrho$ satisfying the following nonlocal parabolic PDE \begin{align} \label{eq:mv1} \begin{aligned} \partial_t \varrho & = \beta^{-1} \Delta \varrho + \kappa \nabla \cdot (\varrho \nabla W \star \varrho )\,, \\ \varrho(x,0) &=\varrho_0(x)\, . \end{aligned} \end{align} The above equation is commonly referred to as the McKean--Vlasov equation, the latter name stemming from the fact that it also arises as the overdamped limit of the Vlasov--Fokker--Planck equation. Equation~\eqref{eq:mv1} can also be thought of as a nonlinear Fokker--Planck equation for the following nonlinear SDE, commonly referred to as the McKean SDE, \begin{align} dX_t = -\kappa(\nabla W \star \varrho)(X_t,t) \dx{t} + \sqrt{2 \beta^{-1}} dB_t \, , \end{align} where $\varrho= \textrm{Law}(X_t)$.
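A pseudo-spectral discretisation of \eqref{eq:mv1} in one dimension makes for a useful sanity check. The sketch below (ours; the parameter values are illustrative) advances the density by one explicit Euler step; because the right-hand side is in divergence form, the zeroth Fourier mode of the update vanishes and mass is conserved up to roundoff.

```python
import numpy as np

def mckean_vlasov_step(rho, What, kappa=1.0, beta=5.0, L=2*np.pi, dt=1e-4):
    """One explicit Euler step of the 1D McKean--Vlasov PDE, computed
    pseudo-spectrally.  `What` holds the FFT of W sampled on the same uniform
    grid as `rho`; names and parameter values are illustrative choices."""
    n = rho.size
    k = 2j*np.pi*np.fft.fftfreq(n, d=L/n)                 # spectral symbol of d/dx
    rho_hat = np.fft.fft(rho)
    conv = np.real(np.fft.ifft(rho_hat*What))*(L/n)       # W * rho (rectangle rule)
    flux = rho*np.real(np.fft.ifft(k*np.fft.fft(conv)))   # rho * (W*rho)_x
    rhs = (np.real(np.fft.ifft(k**2*rho_hat))/beta        # diffusion beta^{-1} rho_xx
           + kappa*np.real(np.fft.ifft(k*np.fft.fft(flux))))  # interaction term
    return rho + dt*rhs
```

Iterating this map from a perturbation of the uniform state lets one observe numerically the decay or growth of Fourier modes discussed in the results below.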
The PDE~\eqref{eq:mv1} itself has a very rich structure associated to it: we have the following free energy functional \begin{align}\label{eq:freeenergyintro} \ensuremath{\mathscr F}_{\kappa}(\varrho) & = \beta ^{-1}\int\limits_{{\mathbb T}^d} \varrho \log \varrho \dx{x} + \frac{\kappa}{2} \iint_{{\mathbb T}^d \times {\mathbb T}^d} W(x-y) \varrho(y) \varrho(x) \dx{y} \dx{x} = \beta^{-1}S(\varrho) + \frac{\kappa}{2} \mathcal{E}(\varrho,\varrho) \ , \end{align} where $S(\varrho)$ and $\ensuremath{\mathcal E}(\varrho,\varrho)$ represent the entropy and interaction energy associated with $\varrho$, respectively. It is well known, starting from the seminal work in~\cite{jordan1998variational,otto2001geometry}, that this equation belongs to a larger class of dissipative PDEs, including the heat equation, the porous medium equation, and the aggregation equation, which can be written in the form \begin{align} \partial_t \varrho = \nabla \cdot \bra*{\varrho \nabla \frac{\delta \ensuremath{\mathscr F}}{\delta \varrho}} \, , \end{align} for some free energy $\ensuremath{\mathscr F}$, and which are gradient flows for the associated free energy functional with respect to the $d_2$ transportation distance defined on probability measures having finite second moment; see~\cite{carrillo2003kinetic,villani2003topics}. We refer the reader to~\cite{ambrosio2008gradient,santam} for more information on the abstract theory of gradient flows in the space of probability measures. Our goals are to study some aspects of the asymptotic behaviour and the stationary states of the McKean--Vlasov equation for a wide class of interaction potentials. In terms of the asymptotic behaviour, we analyse the stability conditions for the homogeneous steady state $1/L^d$ and the rate of convergence to equilibrium.
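To make the functional \eqref{eq:freeenergyintro} concrete, the following sketch (ours; the grid size, $\kappa$, $\beta$ and the test potential are illustrative choices) evaluates $\ensuremath{\mathscr F}_\kappa$ by the rectangle rule in $d=1$. For an attractive potential and $\kappa$ large enough, a clustered perturbation already has lower free energy than the uniform state, which is the mechanism behind the phase transitions studied below.

```python
import numpy as np

def free_energy(rho, W, L=2*np.pi, beta=5.0, kappa=10.0):
    """Rectangle-rule evaluation of F_kappa(rho) in d = 1:
    beta^{-1} int rho log rho + (kappa/2) iint W(x-y) rho(x) rho(y)."""
    n = rho.size
    h = L/n
    x = -L/2 + h*np.arange(n)
    entropy = h*np.sum(rho*np.log(rho))
    diff = x[:, None] - x[None, :]              # all pairwise x - y on the grid
    interaction = h*h*np.sum(W(diff)*np.outer(rho, rho))
    return entropy/beta + 0.5*kappa*interaction
```

For the uniform state the interaction term vanishes when $W$ has zero mean, so $\ensuremath{\mathscr F}_\kappa(1/L)=-\beta^{-1}\log L$, which the quadrature reproduces.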
We extend the $\ensuremath{{L}}^2$-decay results of~\cite{chazelle2017well} to arbitrary dimensions and arbitrary sufficiently nice interactions, and also provide sufficient conditions for convergence to equilibrium in relative entropy. The rest of the paper is devoted to the analysis of the properties of non-trivial stationary states of the McKean--Vlasov system, i.e., nontrivial solutions of \begin{align} \beta^{-1} \Delta \varrho + \kappa \nabla \cdot (\varrho \nabla W \star \varrho )=0 \, . \label{eq:smv1} \end{align} Previous results in this direction include those by Tamura~\cite{tamura1984asymptotic}, who provided some criteria for the existence of local bifurcations on the whole space by using tools from nonlinear functional analysis, in particular, the Crandall--Rabinowitz theorem. Unfortunately, his analysis depends crucially on the unphysical assumption that the interaction potential is an odd function. One of the main results of the present work is a complete, quantitative, local bifurcation analysis under physically realistic assumptions. Dawson~\cite{dawson1983critical} studied for the first time the existence of nontrivial stationary states for a particular double-well confinement and Curie--Weiss interaction on the line. The existence of nontrivial stationary states, or the bifurcation of nontrivial solutions from the homogeneous steady state, is usually referred to as a phase transition in the literature. We also mention that, more recently, several authors~\cite{tugaut2014phase,DFL,BCCD} looked at the existence of phase transitions in the whole space with different confinements and interactions. The work most closely related to ours in the literature is due to Chayes and Panferov~\cite{chayes2010mckean}, who studied the problem on the torus and provided some criteria for the existence of continuous and discontinuous phase transitions.
In addition to presenting an existence and uniqueness theory for the evolution problem, we extend considerably the results of both~\cite{tamura1984asymptotic} and~\cite{chayes2010mckean}. We provide explicit criteria based on the Fourier coefficients of the interaction potential $W$ for the existence of local bifurcations by studying the implicit symmetry in the problem. In fact, we show that for carefully chosen potentials it is possible to have infinitely many bifurcation points. Additionally, we extend the results of~\cite{chayes2010mckean} and provide additional criteria for the existence of continuous and discontinuous phase transitions. \subsection{Statement of main results} We only state simplified versions of our results in one dimension, so as to avoid the use of notation that will be introduced later. We only need to define the cosine transform, $\tilde{W}(k):= (2/L)^{1/2}\intom{W(x) \cos\bra*{\frac{2 \pi k}{L} x}}$ for $k \in {\mathbb Z}, k>0$. We work with classical solutions of~\eqref{eq:mv1}, which are constructed in~\cref{thm:wellp}. \begin{thm}(Convergence to equilibrium) \label{thm:m1} Let $\varrho$ be a classical solution of the McKean--Vlasov equation~\eqref{eq:mv1} with smooth initial data and smooth, even, interaction potential $W$. Then we have: \begin{tenumerate} \item If $0 < \kappa< \frac{2 \pi}{3 \beta L \norm{\nabla W}_\infty}$, then $\norm*{\varrho(\cdot ,t)- \frac{1}{L}}_2 \to 0$, exponentially, as $t \to \infty$\label{thm:m1a}, \item If $\tilde{W}(k) \geq 0$ for all $k \in {\mathbb Z}, k>0$, or $0 < \kappa< \frac{2 \pi^2}{ \beta L^2 \norm{\Delta W}_\infty}$, then $ \ensuremath{\mathcal H}\bra*{\varrho(\cdot ,t)|\frac{1}{L}} \to 0$, exponentially, as $t \to \infty$ \label{thm:m1b}, \end{tenumerate} where $\ensuremath{\mathcal H}\bra*{\varrho(\cdot ,t)|\frac{1}{L}}:= \intom{\varrho(\cdot ,t) \log \bra*{\frac{\varrho(\cdot ,t)}{\varrho_\infty}}}$ denotes the relative entropy.
\end{thm} The previous theorem implies that the uniform state can fail to be the unique stationary solution only if the interaction potential has a negative Fourier mode, i.e., the interaction potential is not $H$-stable. Thus, the concept of $H$-stability introduced by Ruelle~\cite{ruelle1999statistical} is relevant for the study of the stationary McKean--Vlasov equation, as noticed in~\cite{chayes2010mckean}. We have the following conditions for the existence of bifurcating branches of steady states. \begin{thm}(Local bifurcations)\label{thm:m2} Let $W$ be smooth and even and let $(1/L,\kappa)$ represent the trivial branch of solutions. Then every $k^* \in {\mathbb Z}, k^*>0$ such that \begin{enumerate} \item $\operatorname{card}\set*{k \in {\mathbb Z}, k>0 : \tilde{W}(k)=\tilde{W}(k^*)}=1$ , \item $\tilde{W}(k^*) <0$, \end{enumerate} leads to a bifurcation point $(1/L,\kappa_*)$ of the stationary McKean--Vlasov equation through the formula \begin{align} \kappa_*=-\frac{(2L)^{1/2}}{\beta \tilde{W}(k^*)} \, . \end{align} \end{thm} We are also able to sharpen sufficient conditions for the existence of continuous or discontinuous bifurcating branches. The following theorem is a simplified version of the exact statements that are presented in~\cref{thm:dctp} and~\cref{thm:spgap}. \begin{thm}(Discontinuous and continuous phase transitions)\label{thm:m3} Let $W$ be smooth and even and assume the free energy $\ensuremath{\mathscr F}_{\kappa,\beta}$ defined in~\eqref{eq:freeenergyintro} exhibits a transition point, $\kappa_c<\infty$, in the sense of~\cref{defn:tp}.
Then we have the following two scenarios: \begin{tenumerate} \item If there exist strictly positive $k^a,k^b,k^c \in {\mathbb Z}$ with $\tilde{W}(k^a)\approx \tilde{W}(k^b)\approx \tilde{W}(k^c)\approx\min_k\tilde{W}(k)<0$ such that $k^a=k^b +k^c$, then $\kappa_c$ is a discontinuous transition point.\label{thm:m3a} \item Let $k^\sharp = \argmin_k \tilde{W}(k)$ be uniquely defined with $\tilde{W}(k^\sharp)<0$ and $\kappa_\sharp=-\sqrt{2L}/(\beta \tilde{W}(k^\sharp))$. Let $W_\alpha$ denote the potential obtained by multiplying all the negative Fourier modes $\tilde{W}(k)$ except $\tilde{W}(k^\sharp)$ by some $\alpha \in(0,1]$. Then, if $\alpha$ is made small enough, the transition point $\kappa_c$ is continuous and $\kappa_c=\kappa_\sharp$.\label{thm:m3b} \end{tenumerate} \end{thm} The proof of the above theorem relies mainly on~\cref{prop:CharactTP}, which states that if $\varrho_\infty$ is the unique minimiser of the free energy $\ensuremath{\mathscr F}_\kappa$ at $\kappa=\kappa_\sharp$ then $\kappa_c=\kappa_\sharp$ is a continuous transition point; on the other hand, if $\varrho_\infty$ is not the global minimiser of $\ensuremath{\mathscr F}_\kappa$ at $\kappa=\kappa_\sharp$, then $\kappa_c<\kappa_\sharp$ and $\kappa_c$ is a discontinuous transition point. We conclude the introduction with a figure to provide the reader with some more intuition about the spectral signature of continuous and discontinuous phase transitions. As can be seen in Figure~\ref{fig:dcctp}, the results of \cref{thm:m3} essentially apply to two perturbative regimes. Figure~\ref{fig:dcctp}(a) shows the scenario for the existence of a discontinuous transition point, in which there are multiple resonating/near-resonating dominant modes $k^a,k^b,k^c$ which satisfy the algebraic condition $k^a=k^b+k^c$ from~\cref{thm:m3}\ref{thm:m3a}.
This condition allows us to construct a competitor state at $\kappa=\kappa_\sharp$ which has a lower value of $\ensuremath{\mathscr F}_\kappa$ than $\varrho_\infty$ by controlling the sign of the higher order terms in the Taylor expansion of the free energy. The statement of~\cref{thm:m3}\ref{thm:m3a} is then a direct consequence of~\cref{prop:CharactTP}. Figure~\ref{fig:dcctp}(b) shows the scenario in which there is one dominant negative mode and all other negative modes are restricted to a small neighbourhood of $0$. In this case, there exists a continuous transition point. The proof follows by showing that $\varrho_\infty$ is the unique minimiser of $\ensuremath{\mathscr F}_\kappa$ at $\kappa=\kappa_\sharp$. To control the error terms involved, the neighbourhood needs to be made small, which is equivalent to making $\alpha$ small in the statement of~\cref{thm:m3}\ref{thm:m3b}. As will become clear in~\autoref{S:thermodynamic}, the condition in~\cref{thm:m3}\ref{thm:m3b} is essentially an assumption on the size of the spectral gap of the linearised McKean--Vlasov operator. Again, applying~\cref{prop:CharactTP}, the result follows. \begin{figure}[ht] \centering \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{resonating1.eps} \caption*{(a)} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{dominant.eps} \caption*{(b)} \end{minipage} \caption{(a) The near-resonating modes scenario, in which the modes $k^a=7$, $k^b=5$, $k^c=2$ satisfy the algebraic condition $k^a=k^b+k^c$; (b) the dominant mode scenario.} \label{fig:dcctp} \end{figure} This work provides a complete local and global bifurcation analysis for the McKean--Vlasov equation on the torus. This enables us to study phase transitions for several important models that have been introduced in the literature. This is done in~\autoref{S:app}.
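The algebraic resonance condition of~\cref{thm:m3}\ref{thm:m3a} is purely combinatorial once the Fourier coefficients of $W$ are known, and is easy to test. A sketch (ours; the $10\%$ near-resonance tolerance is an illustrative choice, not part of the theorem):

```python
def has_resonant_triple(modes, tol=0.1):
    """Test the resonance condition of the discontinuous-transition criterion:
    are there negative modes k_a, k_b, k_c, all within a relative tolerance of
    the minimal Fourier coefficient, with k_a = k_b + k_c?
    `modes` maps k (a positive integer) to the coefficient W~(k)."""
    wmin = min(modes.values())
    if wmin >= 0:
        return False                 # no negative mode: W is H-stable
    near = [k for k, w in modes.items()
            if w < 0 and abs(w - wmin) <= tol*abs(wmin)]
    return any(ka == kb + kc for ka in near for kb in near for kc in near)
```

For the configuration of Figure~\ref{fig:dcctp}(a), with nearly equal minima at $k=2,5,7$, the check succeeds since $7=5+2$; a single dominant negative mode, as in panel (b), fails it.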
In particular, we apply our results to the following examples: the noisy Kuramoto model for synchronisation, the Hegselmann--Krause model for opinion dynamics, the Keller--Segel model for bacterial chemotaxis, the Onsager model for liquid crystal alignment, and the Barr\'e--Degond--Zatorska model for interacting dynamical networks. As an example of the typical bifurcation diagram expected for this kind of system, we discuss the noisy Kuramoto model, which has the interaction potential $W(x)=-(2/L)^{1/2}\cos(2 \pi x/L)$. For $\kappa$ sufficiently small, the uniform state is the unique stationary solution. At some critical $\kappa=\kappa_c$ a clustered solution branches out from the uniform state, and for all $\kappa>\kappa_c$ this clustered state is the preferred solution, i.e., it is the global minimiser of the free energy, $\ensuremath{\mathscr F}_{\kappa}$. The bifurcation diagram and a plot of the clustered solution can be seen in Figure~\ref{fig:kurbif}. The model is discussed in more detail in~\autoref{ss:kura}. \begin{figure}[ht] \centering \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{kurbif.eps} \caption*{(a)} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{1clust.eps} \caption*{(b)} \end{minipage} \caption{(a) The bifurcation diagram for the noisy Kuramoto system: the solid blue line denotes the stable branch of solutions, while the dotted red line denotes the unstable branch of solutions; (b) an example of a clustered solution representing phase synchronisation of the oscillators.} \label{fig:kurbif} \end{figure} \subsection{Organisation of the paper} The paper is organised in the following manner: In Section~\ref{S:not} we introduce the main notation and assumptions on the interaction potential $W$, state a basic existence and uniqueness theorem for classical solutions of the evolutionary problem, and present a series of results about the stationary problem and the associated free energy that we use for our later analysis. In Section~\ref{S:gas} we present the proof of~\cref{thm:m1}\ref{thm:m1b}, whereas the proof of~\cref{thm:m1}\ref{thm:m1a} is similar to the argument in~\cite{chazelle2017well} and can be found in Version 1 of the arXiv manuscript. Additionally, we perform a linear stability analysis of the McKean--Vlasov PDE about $1/L^d$. Section~\ref{S:lbt} is dedicated mainly to the proof of~\cref{thm:m2}, including further details about the structure of the bifurcating branches and the structure of the global bifurcation diagram. In Section~\ref{S:thermodynamic} we give sufficient conditions for the existence of continuous and discontinuous phase transitions and we present the proofs of~\cref{thm:m3}\ref{thm:m3a} and~\cref{thm:m3}\ref{thm:m3b}, along with some supplementary results. In Section~\ref{S:app}, we apply our results to various models from the biological, physical, and social sciences. \section{Preliminaries}\label{S:not} \subsection{Set up and notation} Let $U={\mathbb R}^{d}/ L {\mathbb Z}^d \hat{=} \left(-\frac{L}{2},\frac{L}{2}\right)^d \subset {\mathbb R}^d$ be the torus of size~$L>0$. We denote by ${\mathbb N} = \set{0,1,\dots}$ the nonnegative integers.
Furthermore, we will denote by $\mathcal{P}(U)$ the space of Borel probability measures on $U$, by $\mathcal{P}_{\mathup{ac}}(U)$ the subset of $\mathcal{P}(U)$ absolutely continuous w.r.t.\ the Lebesgue measure, and by $\mathcal{P}_{\mathup{ac}}^+(U)$ the subset of $\mathcal{P}_{\mathup{ac}}(U)$ having strictly positive densities a.e. Additionally, $C^k(U)$ will denote the restriction to $U$ of all $L$-periodic and $k$-times continuously differentiable functions, $\ensuremath{\mathcal D}(U)$ the space of test functions, and $\skp*{f,g}_\mu$ the $\ensuremath{{L}}^2(U,\mu)$ inner product. \subsection{Assumptions on \texorpdfstring{$W$}{W}}Throughout the subsequent discussion we will assume that $W(x)$ is at least integrable and coordinate-wise even, that is \begin{equation} \forall x\in {\mathbb R}^d \ \forall i \in \set{1,\dots, d} : \qquad W(x_1, \dots ,x_i, \dots, x_d)=W(x_1, \dots ,-x_i, \dots, x_d) \, . \end{equation} For the evolutionary problem we will assume \begin{equation}\label{ass:A} W \in \ensuremath{{\cW}}^{2,\infty}(U) \, ,\tag{\textbf{A1}} \end{equation} while for the stationary problem we will assume \begin{equation}\label{ass:B} W \in \ensuremath{{H}}^1(U) \implies W \in \ensuremath{{L}}^1(U) \qquad\text{and}\qquad W_- \in \ensuremath{{L}}^\infty(U) \quad\text{with}\quad W_-(x)=\min \{0,W(x)\} \tag{\textbf{A2}} \, , \end{equation} where the $\ensuremath{{L}}^p(U)$ with $1 \leq p \leq \infty$ represent the Lebesgue spaces and the $\ensuremath{{\cW}}^{k,p}(U)$ represent the periodic Sobolev spaces, with $\ensuremath{{H}}^k(U) = \ensuremath{{\cW}}^{k,2}(U)$. Wherever required, weaker or stronger assumptions will be indicated in the text. While one may expect the assumptions on $W(x)$ for the evolutionary and stationary problems to be the same, it is important to mention that these assumptions are in no way sharp; the aim of this paper is not to study low-regularity theory for this class of PDEs.
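Before setting up the general Fourier conventions, it may help to see the one-dimensional cosine transform from the statement of the main results evaluated concretely. The sketch below (ours; the grid size is an illustrative choice) approximates $\tilde W(k)$ by the rectangle rule, which is spectrally accurate for smooth periodic $W$, and evaluates the bifurcation point formula of~\cref{thm:m2} for the noisy Kuramoto potential: since $W=-w_1$, we get $\tilde W(1)=-1$ and $\kappa_*=\sqrt{2L}/\beta$.

```python
import numpy as np

def cosine_transform(W, k, L=2*np.pi, n=4096):
    """Rectangle-rule approximation of
    W~(k) = (2/L)^{1/2} * int_{-L/2}^{L/2} W(x) cos(2 pi k x / L) dx."""
    h = L/n
    x = -L/2 + h*np.arange(n)
    return np.sqrt(2/L)*h*np.sum(W(x)*np.cos(2*np.pi*k*x/L))

# Noisy Kuramoto potential W(x) = -sqrt(2/L) cos(2 pi x / L): only the k = 1
# mode is nonzero, and it equals -1 in this normalisation.
L, beta = 2*np.pi, 1.0
W = lambda x: -np.sqrt(2/L)*np.cos(2*np.pi*x/L)
Wk1 = cosine_transform(W, 1, L)
kappa_star = -np.sqrt(2*L)/(beta*Wk1)   # bifurcation point formula of thm:m2
```

With $L=2\pi$ and $\beta=1$ this gives $\kappa_*=\sqrt{4\pi}\approx 3.545$.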
For the space $\ensuremath{{L}}^2(U)$ we define the orthonormal basis $\{w_k\}_{k \in {\mathbb Z}^d}$, with $k=(k_1 , k_2 , \ldots , k_d)$, as follows: \begin{align}\label{e:def:wk} w_k(x)= N_k\prod\limits_{i=1}^d w_{k_i}(x_i), \qquad\text{ where } \qquad w_{k_i}(x_i)= \begin{cases} \cos\left(\frac{2 \pi k_i}{L} x_i\right) & k_i>0, \\ 1 & k_i=0, \\ \sin\left(\frac{2 \pi k_i}{L} x_i\right) & k_i<0, \\ \end{cases} \end{align} and $N_k$ is defined as \begin{align} N_k:=\frac{1}{L^{d/2}}\prod\limits_{i=1}^d \left(2-\delta_{k_i , 0} \right)^{\frac{1}{2}}=: \frac{\Theta(k)}{L^{d/2}} \, ,\label{e:def:thetak} \end{align} where $\delta_{i,j}$ denotes the Kronecker delta. We then have the following form for the discrete Fourier transform of any $f \in \ensuremath{{L}}^2(U)$: \begin{align} \tilde{f}(k)&= \skp{f,w_k }, \qquad k \in {\mathbb Z}^d\, . \end{align} We denote by ``$\star$'' the convolution of two functions $f, g \in \ensuremath{{L}}^2(U)$, and for $f=W$ we have the following representation in Fourier space: \begin{align} (W \star g)(y)= \sum\limits_{k \in {\mathbb N}^d}\tilde{W}(k) \frac{1}{N_k} \sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}\tilde{g}(\sigma(k))w_{\sigma(k)}(y) \, . \end{align} Here, we have used the fact that $W(x)$ is coordinate-wise even. $\mathrm{Sym}(\Lambda)$ represents the symmetry group of the product of two-point spaces $\Lambda=\{1,-1\}^d$, which acts on ${\mathbb Z}^d$ by pointwise multiplication, i.e., $(\sigma(k))_i=\sigma_i k_i, k \in {\mathbb Z}^d, \sigma \in \mathrm{Sym}(\Lambda)$. Another expression that we will use extensively in the sequel is the Fourier expansion of the following bilinear form \begin{align}\label{Fourier:Interaction} \iint\limits_{U \times U} \! W(x-y)g(x) g(y) \dx{x} \dx{y} = \sum\limits_{k \in {\mathbb N}^d} \tilde{W}(k)\frac{1}{N_k}\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{g}(\sigma(k))|^2 \,
\end{align} It will be useful to note that for any function $g(x)$ and $k \in {\mathbb Z}^d$ the sum $\sum_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{g}(\sigma(k))|^2$ is translation invariant, i.e., the value of the sum is the same for $g$ and $g_\tau(x) =g(x+\tau)$ for $\tau\in U$. In later sections we will also use the space $\ensuremath{{L}}^2_s(U) \subset \ensuremath{{L}}^2(U)$, which we define as the space of coordinate-wise even functions in $\ensuremath{{L}}^2(U)$ given by \begin{equation}\label{eq:def:L2s} \ensuremath{{L}}^2_s(U) = \set*{ f\in \ensuremath{{L}}^2(U) : f(x_1,\dots,x_i,\dots,x_d)= f(x_1,\dots,-x_i,\dots, x_d) , i\in \set{1,\dots,d} , x\in U } \, . \end{equation} It should be noted that any pointwise properties (like being coordinate-wise even) should be understood in a pointwise a.e. sense. The space $\ensuremath{{L}}^2_s(U)$ is a closed subspace of $\ensuremath{{L}}^2(U)$ and thus is a Hilbert space in its own right. It is also easy to check that $\{w_k\}_{k \in {\mathbb N}^d} \subset \{w_k\}_{k \in {\mathbb Z}^d}$ forms an orthonormal basis for $\ensuremath{{L}}^2_s(U)$. If $g$ is assumed to be in $\ensuremath{{L}}^2_s(U)$, then the above expressions reduce to \begin{align} (W \star g)(y)&= \sum\limits_{k \in {\mathbb N}^d,k_i>0} \tilde{W}(k) \frac{1}{N_k} \tilde{g}(k)w_k(y) \, ,\\ \iint\limits_{U \times U} \! W(x-y)g(x) g(y) \dx{x} \dx{y} &= \sum\limits_{k \in {\mathbb N}^d,k_i>0} \tilde{W}(k) \frac{1}{N_k} |\tilde{g}(k)|^2 \, . \end{align} In addition, the sign of the individual Fourier modes of $W$ is quite important in the subsequent analysis and we introduce the following definition. \begin{defn}[$H$-stability]\label{def:Hstable} A function $W \in \ensuremath{{L}}^2(U)$ is said to be $H$-stable, denoted by $W \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}$, if it has non-negative Fourier coefficients, i.e., \begin{align} \tilde{W}(k) \geq 0, \quad \forall k \in \mathbb{Z}^d \ , \end{align} where $\tilde{W}(k)=\skp{W,w_k}$.
By~\eqref{Fourier:Interaction}, this is equivalent to the condition that \begin{align} \iint\limits_{U \times U} W (x-y) \eta(x) \eta(y) \dx{x} \dx{y} \geq 0, \qquad \forall \eta \in \ensuremath{{L}}^2(U). \end{align} Thus every potential can be decomposed into two parts $W(x)=W_\ensuremath{\mathup{s}}(x) + W_\ensuremath{\mathup{u}}(x)$, where \begin{align} W_\ensuremath{\mathup{s}}(x)&= \sum\limits_{k \in {\mathbb Z}^d} \bra*{\skp{W,w_k}}_+ w_k(x)\qquad\text{and}\qquad W_\ensuremath{\mathup{u}}(x)=W(x)-W_\ensuremath{\mathup{s}}(x) \, . \end{align} Hereby, $(a)_+ = \max\set{0, a}$ (resp. $(a)_-= \min\set{0, a}$) denotes the positive (resp. negative) part of a real number $a\in {\mathbb R}$. We will denote a potential $W \in \ensuremath{{L}}^2(U)$ which is not $H$-stable by $W \in \ensuremath{{\mathbb H}}_s^c$. \end{defn} An immediate consequence of the identity~\eqref{Fourier:Interaction} is that $H$-stable potentials have nonnegative interaction energy. The above definition can be thought of as a continuous analogue of the notion of $H$-stability encountered in the study of discrete systems (cf. \cite{ruelle1999statistical}). We refer to~\cite{canizo2015existence} for an example of the notion of $H$-stability applied to continuous systems. For the rest of the paper we will drop the subscript~$U$ under the integral sign and all integrals in space will be taken over $U$ unless specified otherwise. \subsection{Existence and uniqueness for the dynamics} We present an existence and uniqueness result for the McKean--Vlasov equation and comment on the nontrivial parts of the proof, which is otherwise quite standard. Our result is an extension of~\cite[Theorem 4.5]{chazelle2017well} since we consider all potentials $W$ satisfying Assumption~\eqref{ass:A} in any dimension $d$, as opposed to~\cite[Theorem 4.5]{chazelle2017well} which deals with the Hegselmann--Krause potential in one dimension.
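Before turning to the dynamics, a small numerical aside (our own illustration, not part of the paper): the sign condition of Definition~\ref{def:Hstable} can be tested on a one-dimensional grid by computing the coefficients $\tilde{W}(k)=\skp{W,w_k}$ in the cosine/sine basis \eqref{e:def:wk}. The sample potentials below are illustrative choices; $\cos(2\pi x/L)$ is $H$-stable, while its attractive counterpart $-\cos(2\pi x/L)$ is not.

```python
import numpy as np

def fourier_coeff(W_vals, x, L, k):
    """Coefficient <W, w_k> on the one-dimensional torus, with w_k the
    cosine (k > 0), constant (k = 0) or sine (k < 0) basis function."""
    if k > 0:
        w = np.sqrt(2.0 / L) * np.cos(2 * np.pi * k * x / L)
    elif k == 0:
        w = np.full_like(x, 1.0 / np.sqrt(L))
    else:
        w = np.sqrt(2.0 / L) * np.sin(2 * np.pi * (-k) * x / L)
    return np.mean(W_vals * w) * L  # uniform-grid quadrature on the torus

def is_H_stable(W, L=2 * np.pi, n=512, kmax=16, tol=1e-10):
    """Check the sign condition of the H-stability definition up to mode kmax."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    coeffs = [fourier_coeff(W(x), x, L, k) for k in range(-kmax, kmax + 1)]
    return all(c >= -tol for c in coeffs)

L = 2 * np.pi
print(is_H_stable(lambda x: np.cos(2 * np.pi * x / L)))   # True: H-stable
print(is_H_stable(lambda x: -np.cos(2 * np.pi * x / L)))  # False: not H-stable
```

The decomposition $W = W_\ensuremath{\mathup{s}} + W_\ensuremath{\mathup{u}}$ above amounts to splitting these grid coefficients by sign.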
Additionally, we prove \emph{strict} positivity of solutions as opposed to the nonnegativity proved in~\cite{chazelle2017well}. We prove below the existence of classical solutions $\varrho(\cdot,t) \in C^2 (U)$ to the system \begin{align} \label{eq:PDE} \begin{alignedat}{3} \frac{\partial \varrho}{\partial t}& = \beta^{-1}\Delta \varrho + \kappa \dive (\varrho\nabla W \star \varrho), \qquad &&(x,t) \in U \times (0,T] \ ; \\ \varrho(x,0) &=\varrho_0(x), \qquad && x \in U \, . \end{alignedat} \end{align} \begin{thm} Assume that Assumption~\eqref{ass:A} holds. Then, for $\varrho_0 \in \ensuremath{{H}}^{3+d}(U)\cap\mathcal{P}_{\mathup{ac}}(U)$, there exists a unique classical solution $\varrho$ of \eqref{eq:PDE} such that $\varrho(\cdot,t) \in \mathcal{P}_{\mathup{ac}}(U)\cap C^2(U)$ for all $t>0$. Additionally, $\varrho(\cdot,t)$ is strictly positive and has finite entropy, i.e., $\varrho(\cdot ,t)>0$ and $S(\varrho(\cdot,t))< \infty$, for all $t>0$. \label{thm:wellp} \end{thm} The strategy of the proof is identical to that of~\cite[Theorem 4.5]{chazelle2017well}. We construct a sequence of linear problems that approximate the McKean--Vlasov equation \begin{align} \frac{\partial \varrho_n}{\partial t} = \beta^{-1}\Delta \varrho_n + \kappa \dive (\varrho_n\nabla W \star \varrho_{n-1}) \quad &\textrm{ in } U \times (0,T] \ , \nonumber \\ \forall i \in \set{1,\dots, d} : \qquad \varrho_n(\cdot + L \mathbf{e}_i)=\varrho_n(\cdot) \quad &\textrm{ on } \partial U \times [0,T]\ , \label{eq:nPDE} \\\nonumber \varrho=\varrho' \quad &\textrm{ in } U \times \{0\} \, , \end{align} which, for smooth initial data $\varrho'\in \mathcal{P}_{\mathup{ac}}(U) \cap C^\infty(U)$, have unique smooth solutions. A priori estimates similar to those in~\cite{chazelle2017well}, obtained using the $\ensuremath{{\cW}}^{2,\infty}(U)$-regularity of $W$, allow us to pass to the limit as $n \to \infty$ and recover weak solutions of the McKean--Vlasov equation, which are then shown to be unique.
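For intuition about \cref{thm:wellp}, the following sketch (our own illustration, not the approximating scheme \eqref{eq:nPDE} used in the proof) evolves \eqref{eq:PDE} in $d=1$ for the sample potential $W(x)=-\cos(2\pi x/L)$ with an explicit flux-form discretisation; all numerical parameters are ad hoc, with $\beta$ chosen in the diffusion-dominated regime so that explicit stepping is stable. Writing the update in terms of interface fluxes makes the discrete total mass telescope exactly, mirroring the fact that $\varrho(\cdot,t)$ remains a strictly positive probability density.

```python
import numpy as np

# 1-d torus U = (-L/2, L/2); attractive sample potential W(x) = -cos(2*pi*x/L).
# All parameters are illustrative; beta is small enough that the uniform state
# is linearly stable and the explicit scheme satisfies the diffusive CFL bound.
L, n, beta, kappa = 2 * np.pi, 128, 1.0, 1.0
h = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
Wmat = -np.cos(2 * np.pi * (x[:, None] - x[None, :]) / L)   # W(x_i - x_j)

def step(rho, dt):
    """One explicit step of rho_t = -(F)_x with interface flux
    F = -beta^{-1} rho_x - kappa * rho * (W * rho)_x; summing the
    flux differences telescopes, so total mass is conserved exactly."""
    V = (Wmat @ rho) * h                     # (W * rho)(x_i)
    rho_r = np.roll(rho, -1)                 # rho_{i+1}
    F = -(rho_r - rho) / (beta * h) \
        - kappa * 0.5 * (rho + rho_r) * (np.roll(V, -1) - V) / h
    return rho - dt * (F - np.roll(F, 1)) / h

rho = (1.0 + 0.2 * np.cos(2 * np.pi * x / L)) / L   # perturbed uniform state
for _ in range(4000):
    rho = step(rho, dt=5e-4)

print(abs(np.sum(rho) * h - 1.0) < 1e-10)   # mass conserved up to round-off
print(rho.min() > 0)                        # density stays strictly positive
```

With these parameters the perturbation simply decays towards the uniform state $1/L$; larger $\beta$ moves the dynamics into the clustering regime, at the price of a stricter time-step restriction.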
Their regularity follows from bootstrapping, using the regularity of $W$ and of the initial data. We now comment on the proof of strict positivity for classical solutions $\varrho(x,t)$ of~\eqref{eq:PDE}. The nonnegativity of the solutions follows from an argument similar to~\cite[Corollary 2.2]{chazelle2017well}. Consider now the ``frozen'' linearised version of the McKean--Vlasov equation, i.e., \begin{align} \frac{\partial \vartheta}{\partial t}= \dive \left(\beta^{-1}\nabla \vartheta + \kappa \vartheta (\nabla W \star \varrho(x,t))\right) \, . \end{align} This is a linear parabolic PDE with uniformly bounded and continuous coefficients. Additionally, $\varrho(x,t)$ is a classical solution to this PDE. Thus we have a Harnack inequality of the following form (cf.~\cite[Theorems 8.1.1-8.1.3]{bogachev2015fokker} for sharp versions of this result) \begin{align} \sup_U \varrho(x,t_1) < C \inf_U \varrho(x,t_2) \ , \end{align} for any $0<t_1<t_2< \infty$ and some positive constant $C$. Since $\varrho(x,t)$ is nonnegative and $\norm{\varrho(x,t)}_{1}=1$ for all $0\leq t < \infty$, this implies that $\inf_U\varrho(x,t)$ is positive for any positive time. The fact that the entropy is finite follows from the fact that $\varrho(x,t)$ is positive and bounded above. \begin{comment} \begin{proof}[Sketch of proof] \emph{Existence.} We start by arguing that for smooth initial data, $\varrho'\in \mathcal{P}_{\mathup{ac}}(U) \cap C^\infty(U)$, the following sequence of linear parabolic PDE have unique smooth solutions, \begin{align} \frac{\partial \varrho_n}{\partial t} = \beta^{-1}\Delta \varrho_n + \kappa \dive (\varrho_n\nabla W \star \varrho_{n-1}) \quad &\textrm{ in } U \times (0,T] \ , \nonumber \\ \forall i \in \set{1,\dots, d} : \qquad \varrho_n(\cdot + L \mathbf{e}_i)=\varrho_n(\cdot) \quad &\textrm{ on } \partial U \times [0,T]\ , \label{eq:nPDE} \\\nonumber \varrho=\varrho' \quad &\textrm{ in } U \times \{0\} \, .
\end{align} The proof of this follows from classical results in the theory of linear parabolic equations (cf. ~\cite[Chapter 7]{evans2010partial}). Thus, we have a sequence $\{\varrho_n\}_{n \in {\mathbb N}} \in C^\infty(\bar{U} \times [0,T]) \cap \mathcal{P}_{\mathup{ac}}(U)$. We define a weak solution of our nonlocal McKean--Vlasov PDE as any function $\varrho \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) \cap \ensuremath{{L}}^\infty(0,T;\ensuremath{{L}}^2(U)) $ and $\partial_t \varrho \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^{-1}(U))$ such that \begin{align} \int_0^T \skp*{\eta, \partial_t \varrho}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1} \nabla \varrho \cdot \nabla \eta + \kappa\varrho \nabla (W \star \varrho) \cdot \nabla \eta} \dx{t} =0, \quad \forall \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) \end{align} with $\varrho(x,0)=\varrho'$. The idea then is to show that the sequence $\{\varrho_n\}_{n \in {\mathbb N}}$ has some compactness properties so that up to a subsequence we obtain limits which we would then expect to satisfy our definition of a weak solution. Using essentially the same ideas as in ~\cite{chazelle2017well} and the given regularity of $W$ we obtain the following results \begin{subequations} \begin{align} \varrho_{n} \to \varrho &\qquad\textrm{ in } \ensuremath{{L}}^1(0,T;\ensuremath{{L}}^1(U)) \, , \label{eq:sl1l1}\\ \varrho_{n_k} \rightharpoonup \varrho &\qquad\textrm{ in } \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) \, , \label{eq:wl2h1} \\ \partial_t \varrho_{n_k} \rightharpoonup \partial_t \varrho &\qquad\textrm{ in } \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^{-1}(U)) \label{eq:wl2hm1} \, , \end{align} \end{subequations} where $\varrho_{n_k}$ is a subsequence and $\varrho$ is the limiting object which we expect to be a weak solution. 
Additionally, both the limit and the sequence satisfy the following estimate: \begin{align} \norm*{\varrho}_{\ensuremath{{L}}^{\infty}(0,T;\ensuremath{{L}}^2(U))} + \norm*{\varrho}_{\ensuremath{{L}}^{2}(0,T;\ensuremath{{H}}^1(U))} + \norm*{\partial_t \varrho}_{\ensuremath{{L}}^{2}(0,T;\ensuremath{{H}}^{-1}(U)) } \leq C(T)\norm*{\varrho'}_2 \, . \label{eq:estel} \end{align} Now we multiply~\eqref{eq:nPDE} by some $\eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) $ for some $n=n_k$ and integrate by parts to obtain \begin{align} \int_0^T \skp*{\eta, \partial_t \varrho_{n_k}}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1}\nabla \varrho_{n_k} \cdot \nabla \eta + \kappa\varrho_{n_k} \nabla (W \star \varrho_{n_k-1}) \cdot \nabla \eta} \dx{t} =0 \, \label{eq:wcPDE}. \end{align} Consider the following term, \begin{align} \int_0^T \intom{ \kappa\varrho_{n_k} \nabla (W \star \varrho_{n_k-1}) \cdot \nabla \eta} \dx{t} &= \int_0^T \intom{ \kappa(\varrho_{n_k} -\varrho)\nabla (W \star \varrho_{n_k-1}) \cdot \nabla \eta} \dx{t} \\ &\quad +\int_0^T \intom{ \kappa\varrho \nabla (W \star (\varrho_{n_k-1}-\varrho)) \cdot \nabla \eta} \dx{t} \\ &\quad +\int_0^T \intom{ \kappa\varrho \nabla (W \star \varrho) \cdot \nabla \eta} \dx{t} \, . \end{align} For the first term on the right hand side notice that \[ \nabla (W \star \varrho_{n_k-1}) \cdot \nabla \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U)) \subset \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^{-1}(U)) \, . \] Indeed, we have \begin{align} \int_0^T \norm*{\nabla (W \star \varrho_{n_k-1}) \cdot \nabla \eta }_2^2 \dx{t}\leq L^{2d} \norm*{\nabla W}_\infty^2 \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 \, . 
\label{eq:est11} \end{align} For the second term we have \begin{align} \int_0^T \intom{ \varrho \nabla (W \star (\varrho_{n_k-1}-\varrho)) \cdot \nabla \eta} \dx{t} &\leq \norm*{\varrho}_{\ensuremath{{L}}^{\infty}(0,T;\ensuremath{{L}}^2(U))} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(\ensuremath{{L}}^2)} \; \norm*{\nabla (W \star (\varrho_{n_k-1}-\varrho))}_{\ensuremath{{L}}^2(\ensuremath{{L}}^2)} \\ &\leq C(T) \norm*{\varrho'}_2 \norm*{\nabla \eta}_{\ensuremath{{L}}^2(\ensuremath{{L}}^2)}\norm*{\nabla W}_\infty \; \bra*{\int_0^T \norm*{\varrho_{n_k-1}-\varrho}_1^2\dx{t}}^{\frac{1}{2}} \,. \end{align} We know that $\norm*{\varrho_{n_k-1}-\varrho}_1 \leq \norm*{\varrho_{n_k-1}}_1+\norm*{\varrho}_1=2$ therefore we obtain \begin{align} \bra*{\int_0^T \norm*{\varrho_{n_k-1}-\varrho}_1^2\dx{t}}^{\frac{1}{2}} \leq \sqrt{2} \bra*{\norm*{\varrho_{n_k-1}-\varrho}_{\ensuremath{{L}}^1(0,T;\ensuremath{{L}}^1(U))}}^{\frac{1}{2}} \, . \label{eq:est2} \end{align} Using~\eqref{eq:sl1l1} and~\eqref{eq:wl2h1} along with the estimates ~\eqref{eq:est11} and~\eqref{eq:est2} we obtain \begin{align} \int_0^T \intom{ \kappa\varrho_{n_k} \nabla (W \star \varrho_{n_k-1} )\cdot \nabla \eta }\stackrel{k \to \infty }{\longrightarrow} \int_0^T \intom{ \kappa\varrho\nabla (W \star \varrho)\cdot \nabla \eta}, \qquad \forall \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) \, . \end{align} Additionally, we can use~\eqref{eq:wl2hm1} to argue that \begin{align} \int_0^T \skp*{\eta, \partial_t \varrho_{n_k}}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} \stackrel{k \to \infty }{\longrightarrow} \int_0^T \skp*{\eta, \partial_t \varrho}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} , \qquad \forall \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U))\, . 
\end{align} Finally using~\eqref{eq:wl2h1} we obtain \begin{align} \int_0^T \intom{\nabla \varrho_{n_k}\cdot \nabla \eta} \dx{t} \stackrel{k \to \infty }{\longrightarrow} \int_0^T \intom{\nabla \varrho\cdot \nabla \eta} \dx{t} , \qquad \forall \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U))\, . \end{align} Putting it all together we obtain \begin{align} \int_0^T \skp*{\eta , \partial_t \varrho}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1}\nabla \varrho \cdot \nabla \eta + \kappa\varrho \nabla (W \star \varrho) \cdot \nabla \eta} \dx{t} =0, \qquad \forall \eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U)) \, . \end{align} What remains is to check that $\varrho$ satisfies the initial condition, i.e., $\varrho(\cdot,0)=\varrho'$. This condition makes sense as we know from~\cite[§5.9.2, Theorem 3]{evans2010partial} that $\varrho \in C(0,T;\ensuremath{{L}}^2(U))$. Picking $\eta \in C^1(0,T; \ensuremath{{H}}^1(U))$ such that $\eta(x,T)=0$ the weak form of the PDE reduces to: \begin{align} -\int_0^T \skp*{\varrho, \partial_t \eta}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1}\nabla \varrho \cdot \nabla \eta + \kappa\varrho \nabla (W \star \varrho) \cdot \nabla \eta} \dx{t} = \intom{\varrho(x,0)\eta(x,0)} \, . \end{align} Similarly, choosing the same $\eta$ for~\eqref{eq:wcPDE}, we obtain \begin{align} -\int_0^T \skp*{\varrho_{n_k}, \partial_t \eta}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1}\nabla \varrho_{n_k} \cdot \nabla \eta + \kappa\varrho_{n_k} \nabla (W \star \varrho_{n_k}) \cdot \nabla \eta} \dx{t} = \intom{\varrho' \eta(x,0)} \, . \end{align} As $k \to \infty$, we obtain the following relationship: \begin{align} \intom{\varrho(x ,0)\eta(x,0)}=\intom{\varrho'\eta(x,0)}\, . \end{align} Since $\eta(x,0)$ can be any arbitrary $\ensuremath{{H}}^1(U)$ function it follows that $\varrho(x,0)=\varrho'$. 
It should also be noted that~\eqref{eq:sl1l1} implies that $\varrho(\cdot, t) \in \mathcal{P}_{\mathup{ac}}(U)$, $t$ a.e. This completes the proof of existence of a weak solution for smooth initial data. We can now relax the regularity assumption on the initial condition to $\varrho \in \ensuremath{{L}}^2(U)\cap \mathcal{P}_{\mathup{ac}}(U)$ to obtain existence of a weak solution. We do this by mollifying the initial data $\varrho'_\epsilon= \varphi_\epsilon \star \varrho_0$(here $\varphi_\epsilon$ is the standard Friedrichs mollifier), applying the above arguments again, and then passing to the limit as $\epsilon \to 0$. \noindent\emph{Uniqueness.} Assume $\varrho_1$ and $\varrho_2$ are two weak solutions for some initial data $\varrho_0 \in \ensuremath{{L}}^2(U) \cap \mathcal{P}_{\mathup{ac}}(U)$ and set $\xi=\varrho_1 -\varrho_2$. Then we have for all $\eta \in \ensuremath{{L}}^2(0,T;\ensuremath{{H}}^1(U))$ \begin{align} \int_0^T \skp*{\eta , \partial_t \xi}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} + \int_0^T \intom{\beta^{-1}\nabla \xi \cdot \nabla \eta} \dx{t} = &-\int_0^T \intom{ \kappa\xi \nabla (W \star \varrho_1) \cdot \nabla \eta} \dx{t} \\ &-\int_0^T \intom{ \kappa\varrho_2 \nabla (W \star \xi) \cdot \nabla \eta} \dx{t} \, . 
\label{eq:unwpde} \end{align} We can estimate the first terms on the right hand side of the above expression as follows \begin{align} \kappa \abs*{\int_0^T \intom{\xi \nabla (W \star \varrho_1) \cdot \nabla \eta} \dx{t}} & \leq \kappa\norm*{\nabla (W \star \varrho_1)}_{\ensuremath{{L}}^\infty(0,T;\ensuremath{{L}}^\infty(U))} \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \\ & \leq \kappa\norm*{\nabla W }_\infty \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \\ &\leq \frac{\beta^{-1}}{2} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 + \beta \kappa^2 \norm*{\nabla W}_\infty^2 \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 \, . \end{align} The second term follows in a very similar manner \begin{align} \kappa \abs*{\int_0^T \intom{\varrho_2 \nabla (W \star \xi) \cdot \nabla \eta} \dx{t}} &\leq \kappa \norm*{\varrho}_{\ensuremath{{L}}^\infty(0,T; \ensuremath{{L}}^2(U))} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \norm*{\nabla W \star \xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \\ &\leq \kappa C(T)\norm*{\varrho_0}_2\norm*{\nabla W}_\infty \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}\norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))} \\ &\leq \frac{\beta^{-1}}{2} \norm*{\nabla \eta}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 + \beta C(T)^2\norm*{\varrho_0}_2^2 \kappa^2 \norm*{\nabla W}_\infty^2 \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 \, . 
\end{align} Using the above estimates and setting $\eta =\xi$ in~\eqref{eq:unwpde}, we obtain \begin{align} \int_0^T \skp*{\partial_t \xi, \xi}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}} \dx{t} &\leq \bra*{\beta \kappa^2 \norm*{\nabla W}_\infty^2 + \beta C(T)^2\norm*{\varrho_0}_2^2 \kappa^2 \norm*{\nabla W}_\infty^2 } \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 \, . \end{align} Applying~\cite[§ 5.9.2, Theorem 3]{evans2010partial}, we know that $\skp*{\partial_t \xi, \xi}_{\ensuremath{{H}}^1,\ensuremath{{H}}^{-1}}=\frac{1}{2} \frac{\dx}{\dx{t}}\norm*{\xi(t)}_2^2$. Thus we have \begin{align} \frac{1}{2}\int_0^T \frac{\dx}{\dx{t}}\norm*{\xi(t)}_2^2 \dx{t} &\leq \bra*{C_1 + C_2(T) \norm*{\varrho_0}_2^2 } \norm*{\xi}_{\ensuremath{{L}}^2(0,T;\ensuremath{{L}}^2(U))}^2 \,. \end{align} Thus for almost every $t \in (0,T]$, it must hold that \begin{align} \frac{1}{2} \frac{\dx}{\dx{t}}\norm*{\xi(t)}_2^2 &\leq \bra*{C_1 + C_2(T) \norm*{\varrho_0}_2^2 } \norm*{\xi(t)}_2^2 \,. \end{align} Applying Gr\"onwall's inequality and noticing from~\cite[§ 5.9.2, Theorem 3]{evans2010partial} that $\varrho_1,\varrho_2 \in C(0,T;\ensuremath{{L}}^2(U))$, we have the desired uniqueness, i.e., $\norm*{\xi(t)}_2=0$ for all $t \in(0,T]$. \noindent\emph{Regularity.} We return to the sequence of linear PDEs presented in~\eqref{eq:nPDE}. If we mollify the initial data, $\varrho'^\epsilon= \varphi_\epsilon \star \varrho_0$, we obtain a sequence of linear parabolic PDEs with smooth and bounded coefficients which thus have smooth solutions, $\{\varrho_n^\epsilon\}_{n=0}^{\infty}$. This leaves us free to take derivatives to any order. After establishing the required regularity estimates on this mollified sequence, we can pass to the limit as $n \to \infty$. We will again omit the standard arguments for passing to the limit as $\epsilon \to 0$ and will suppress the dependence on $\epsilon$ in our calculations.
For regularity in space, we need to derive the following estimate: \begin{align} \norm*{\varrho_n}_{\ensuremath{{L}}^2(0,T; \ensuremath{{H}}^{k+2}(U))} + \norm*{\varrho_n}_{\ensuremath{{L}}^\infty(0,T; \ensuremath{{H}}^{k+1}(U))} \leq C_k(T,\varrho') , \textrm{ for } \varrho_0 \in \ensuremath{{H}}^k(U) \cap \mathcal{P}_{\mathup{ac}}(U) \, , \label{eq:estspace1} \end{align} where $k \geq 0$ and $C_k$ is a constant that depends on the $\ensuremath{{H}}^\ell(U)$ norms of the initial data for $\ell \leq k$. We prove this by induction noting that the $k=0$ case follows from estimate~\eqref{eq:estel}. We assume the estimate is true for $k-1 \geq 0$ and try to prove it for $k$. Differentiating~\eqref{eq:nPDE} by $\partial_\alpha $ for some multiindex $\alpha$ with $\abs{\alpha} = k$, multiplying it by $\Delta \partial_\alpha \varrho_n$ and integrating by parts, we obtain \begin{align} \frac{1}{2}\frac{\dx{}}{\dx{t}}\norm*{\nabla\partial_\alpha \varrho_n}_2^2 + \beta^{-1}\norm*{D^2 \partial_\alpha \varrho_n}_2^2 \leq \kappa\intom{\abs*{\Delta \partial_\alpha \varrho_n \partial_\alpha \nabla \cdot (\varrho_n \nabla W \star \varrho_{n-1})}} \,. \end{align} Applying Young's inequality, we deduce \begin{align} \frac{1}{2}\frac{\dx{}}{\dx{t}}\norm*{\nabla\partial_\alpha \varrho_n}_2^2 + \beta^{-1}\norm*{D^2 \partial_\alpha \varrho_n}_2^2 \leq \frac{\beta^{-1}}{2} \norm*{D^2 \partial_\alpha \varrho_n}_2^2 + C \sum_{i=1}^d \norm*{\varrho_n \bra*{\partial_{x_i} W \star \varrho_{n-1}}}_{\ensuremath{{H}}^{k+1}}^2 \, .\label{eq:estspace2} \end{align} All that remains now is to obtain a useful estimate on the second term on the right hand side given the regularity on the interaction potential that we have, i.e., $W \in \ensuremath{{\cW}}^{2,\infty}(U)$. 
We have by Poincar\'e's inequality \begin{align} \norm*{\varrho_n \bra*{\partial_{x_i}W \star \varrho_{n-1}}}_{\ensuremath{{H}}^{k+1}}^2 \leq C \bra*{\norm*{\varrho_n \bra*{\partial_{x_i} W \star \varrho_{n-1}}}_2^2+ \sum\limits_{|\gamma|=k+1}\norm*{\partial_\gamma\bra*{ \varrho_n \partial_{x_i} \bra*{W \star \varrho_{n-1}}}}_2^2} \, . \end{align} For the second term(for some multiindex $\gamma$ with $\abs{\gamma}=k+1$) by the Leibniz rule we obtain \begin{align} \norm*{\partial_\gamma\bra*{ \varrho_n \partial_{x_i} \bra*{W \star \varrho_{n-1}}}}_2^2&= \norm*{\sum\limits_{\ell \leq \gamma} C(\ell,\gamma)\bra{\partial_{\gamma -\ell} \varrho_n} \bra{\partial_{\ell} \partial_{x_i} W \star \varrho_{n-1}} }_2^2 \\ & \leq \sum\limits_{\ell \leq \gamma, |\ell|\leq \min (d,k+1)} C(\ell,\gamma) \norm{\partial_{\gamma -\ell} \varrho_n}_2^2 \norm{\partial_\ell\partial_{x_i} W \star \varrho_{n-1}}_\infty^2 \\ &+ \sum\limits_{\ell \leq \gamma, |\ell|>\min (d,k+1)} C(\ell,\gamma) \norm{\partial_{\gamma -\ell} \varrho_n}_\infty^2 \norm{\partial_\ell\partial_{x_i} W \star \varrho_{n-1}}_2^2 \\ &\leq \sum\limits_{\ell \leq \gamma, |\ell|\leq \min (d,k+1)} C(\ell,\gamma) \norm{ \varrho_n}_{\ensuremath{{H}}^{k+1}}^2 \norm{W}_{\ensuremath{{\cW}}^{2,\infty}}^2\norm{ \varrho_{n-1}}_{\ensuremath{{H}}^{\min (d,k+1)-1}}^2 \\ &+ \sum\limits_{\ell \leq \gamma, |\ell|>\min (d,k+1)} C(\ell,\gamma) \norm{\partial_{\gamma -\ell} \varrho_n}_\infty^2 \norm{W}_{\ensuremath{{\cW}}^{2,\infty}}^2\norm{ \varrho_{n-1}}_{\ensuremath{{H}}^{k+1-\min (d,k+1)}}^2 \,. \end{align} If $d\geq k+1$, then it is clear that the second sum vanishes and we have \begin{align} \norm*{\partial_\gamma\bra*{ \varrho_n \partial_{x_i} \bra*{W \star \varrho_{n-1}}}}_2^2& \leq C \norm{\varrho_n}_{\ensuremath{{H}}^{k+1}}^2 \norm{\varrho_{n-1}}_{\ensuremath{{H}}^k}^2 \, . \end{align} Let us assume that this is not the case, i.e, $\min(d,k+1)=d$. We now note that for $|\ell|>d$ we have $|\gamma -\ell|=|\gamma| -|\ell|< k+1 -d$. 
Thus for $|\ell|>d$, $\partial_{\gamma-\ell}\varrho_n \in \ensuremath{{H}}^d(U)$ and by the Sobolev embedding theorem, $\norm{\partial_{\gamma -\ell} \varrho_n}_\infty \leq \norm{\partial_{\gamma -\ell} \varrho_n}_{\ensuremath{{H}}^d}$. Using this and retaining only the highest order Sobolev norms we can obtain the following estimate \begin{align} \norm*{\partial_\gamma\bra*{ \varrho_n \partial_{x_i} \bra*{W \star \varrho_{n-1}}}}_2^2& \leq C \norm{\varrho_n}_{\ensuremath{{H}}^{k+1}}^2 \norm{\varrho_{n-1}}_{\ensuremath{{H}}^k}^2 \, . \end{align} Thus finally we have the following bound \begin{align} \norm*{\varrho_n \partial_{x_i}\bra*{W \star \varrho_{n-1}}}_{\ensuremath{{H}}^{k+1}}^2 \leq C \norm{\varrho_n}_{\ensuremath{{H}}^{k+1}}^2 \norm{\varrho_{n-1}}_{\ensuremath{{H}}^k}^2 \, . \end{align} Substituting this into~\eqref{eq:estspace2}, summing over all $|\alpha|=k$, and integrating with respect to time we derive \begin{align} \sup\limits_{0\leq t \leq T}\norm*{\varrho_n}_{H^{k+1}}^2 &+ \frac{\beta^{-1}}{2}\norm*{\varrho_n}_{\ensuremath{{L}}^2(0,T;\ensuremath{{H}}^{k+2}(U))}^2 \\& \leq \norm*{ \varrho'}_{H^{k+1}}^2 + C \norm{\varrho_n}_{\ensuremath{{L}}^2(0,T;\ensuremath{{H}}^{k+1}(U))}^2 \norm{\varrho_{n-1}}_{\ensuremath{{L}}^\infty(0,T;\ensuremath{{H}}^k(U))}^2 \, \\ &:= C_{k+1} (\varrho',T) \, , \end{align} where in the last step we have used the induction hypothesis for $\varrho_n$ and $\varrho_{n-1}$. We are now free to pass to the $n \to \infty$ limit and obtain \begin{align} \norm*{\varrho}_{\ensuremath{{L}}^2(0,T; \ensuremath{{H}}^{k+2}(U))} + \norm*{\varrho}_{\ensuremath{{L}}^\infty(0,T; \ensuremath{{H}}^{k+1}(U))} \leq C_{k+1}(T,\varrho')\, . 
\label{eq:estspacem} \end{align} The method for obtaining regularity in time is identical to~\cite[Theorem 4.3]{chazelle2017well} and we have the following estimates for $\varrho_0 \in \ensuremath{{H}}^{2j}(U) \cap \mathcal{P}_{\mathup{ac}}(U)$ : \begin{align} \frac{\partial^m \varrho}{\partial t^m} \in \ensuremath{{L}}^2(0,T; \ensuremath{{H}}^{2j -2m +1}(U)) \cap \ensuremath{{L}}^\infty(0,T; \ensuremath{{H}}^{2j -2m }(U)), \textrm{ for } 0 \leq m \leq j \, . \label{eq:esttimem} \end{align} Finally, we are in a position to apply our assumptions on the initial data. Setting $k+1=3+d$ in~\eqref{eq:estspacem}, we have that $\varrho \in \ensuremath{{L}}^\infty(0,T; \ensuremath{{H}}^{3+d}(U))$. It follows by the Sobolev embedding theorem that $\varrho(t) \in C^2(\bar{U})$. Similarly, setting $2j=3+d$ it follows that, $\partial_t \varrho \in \ensuremath{{L}}^2(0,T; \ensuremath{{H}}^{2+d }(U)) $ and $\partial^2_{tt} \varrho \in \ensuremath{{L}}^2(0,T; \ensuremath{{H}}^{d}(U))$(since $2 \leq (3+d)/2$, for any $d \geq 1$). Since the embedding $\ensuremath{{H}}^{2+d} \hookrightarrow \ensuremath{{H}}^{1+d}$ is compact, by the Aubin--Lions lemma it follows that $\partial_t \varrho \in C(0,T; \ensuremath{{H}}^{1+d}(U))$. Again by the Sobolev embedding theorem we have $\partial_t \varrho \in C(0,T; C(\bar{U}))$, which gives us the required pointwise differentiability in time. \noindent\emph{Positivity.} Consider the ``frozen'' linearised version of the McKean--Vlasov equation, i.e., \begin{align} \frac{\partial \vartheta}{\partial t}= \dive \left(\beta^{-1}\nabla \vartheta + \kappa \vartheta (\nabla W \star \varrho(x,t))\right) \end{align} This is a linear parabolic PDE with uniformly bounded and continuous coefficients. Additionally, $\varrho(x,t)$ is a classical solution to this PDE. Thus we have a Harnack's inequality of the following form (cf. 
~\cite[Theorems 8.1.1-8.1.3]{bogachev2015fokker} for sharp versions of this result) \begin{align} \sup_U \varrho(x,t_1) < C \inf_U \varrho(x,t_2) \ , \end{align} for $0<t_1<t_2< \infty$ for some positive constant $C$. Since we know that $\varrho(x,t)$ is nonnegative this implies that $\inf\limits_U\varrho(x,t)$ is positive for any positive time. The fact that the entropy is finite follows from the fact that $\varrho(x,t)$ is positive and bounded above. \end{proof} \end{comment} \subsection{Characterisation of the stationary solutions}\label{S:stationary} In subsequent sections we will study the stationary solutions of the McKean--Vlasov equation \eqref{eq:PDE}, i.e., classical solutions $\varrho \in C^2 (U)$ of \begin{align} \dive \left(\beta^{-1}\nabla\varrho + \kappa \varrho \nabla W \star \varrho \right)=0, \quad & x \in U \label{eq:sPDE} \, . \end{align} In this subsection we present standard results about the stationary McKean--Vlasov equation that will be useful for our later analysis. The main results in this section are~\cref{thm:wellpsPDE}, which discusses the existence of solutions and their regularity,~\cref{prop:tfae}, which connects stationary solutions to minimisers of the free energy, and~\cref{thm:dirmet}, which discusses the existence of minimisers of the free energy. We start by discussing the existence and regularity question for the stationary problem. The proof relies on the link between the stationary PDE and the fixed points of a nonlinear map, as discussed in~\cite{tamura1984asymptotic} and~\cite{dressler1987stationary}. \begin{thm}[Existence, regularity, and strict positivity of solutions for the stationary problem] \label{thm:wellpsPDE} $ $\\ Consider the stationary McKean--Vlasov PDE \eqref{eq:sPDE} and assume that Assumption~\eqref{ass:B} holds.
Then we have \begin{tenumerate} \item There exists a weak solution $\varrho \in \ensuremath{{H}}^1(U) \cap \mathcal{P}_{\mathup{ac}}(U)$ of \eqref{eq:sPDE}, and any weak solution is a fixed point of the nonlinear map $\ensuremath{\mathcal T}: \mathcal{P}_{\mathup{ac}}(U) \to \mathcal{P}_{\mathup{ac}}(U)$ \begin{align}\label{def:T} \ensuremath{\mathcal T}\!\varrho=\frac{1}{Z(\varrho,\kappa,\beta)}e^{-\beta \kappa W \star \varrho}, \quad\text{where}\quad Z(\varrho,\kappa,\beta)= \intom{e^{- \beta \kappa W \star \varrho}}\,. \end{align} \label{thm:ex} \item Any weak solution $\varrho \in \ensuremath{{H}}^1(U) \cap \mathcal{P}_{\mathup{ac}}(U)$ is smooth and strictly positive, i.e., $\varrho\in C^\infty(\bar{U})\cap \mathcal{P}_{\mathup{ac}}^+(U)$\,. \label{thm:re} \end{tenumerate} \end{thm} \begin{proof} The weak formulation of \eqref{eq:sPDE} is \begin{align} -\beta^{-1}\intom{\nabla \varphi \cdot \nabla \varrho } -\kappa \intom{\varrho \nabla \varphi \cdot \nabla W \star \varrho}=0, \quad \forall \varphi \in \ensuremath{{H}}^1(U) \, , \label{eq:wsPDE} \end{align} where we look for solutions $\varrho \in \ensuremath{{H}}^1(U) \cap \mathcal{P}_{\mathup{ac}}(U)$. We have the following estimate on the map $\ensuremath{\mathcal T}$ from~\eqref{def:T}: \begin{align} \norm*{\ensuremath{\mathcal T}\!\varrho}_2^2 \leq \norm*{\ensuremath{\mathcal T}\!\varrho}_\infty \leq e^{\beta \kappa (\norm*{W_-}_\infty + \norm*{W}_1)} \, . \label{eq:linfT} \end{align} Thus it makes sense to search for fixed points of this map in the set $E:=\{\varrho \in \ensuremath{{L}}^2(U) \cap \mathcal{P}_{\mathup{ac}}(U): \norm*{\varrho}_2^2 \leq e^{\beta \kappa (\norm*{W_-}_\infty + \norm*{W}_1)}\}$, as all fixed points must lie in this set. It is easy to check that $E$ is a closed, convex subset of $\ensuremath{{L}}^2(U)$. We can now redefine $\ensuremath{\mathcal T}$ to act on $E$.
Additionally, for any $\varrho \in E$, we have that \begin{align} \norm*{\ensuremath{\mathcal T}\!\varrho}_{\ensuremath{{H}}^1}^2 &= \norm*{\ensuremath{\mathcal T}\!\varrho}_2^2 + \norm*{\nabla \ensuremath{\mathcal T}\!\varrho}_2^2 \leq \norm*{\ensuremath{\mathcal T}\!\varrho}_\infty\left(1 + L^d\beta^2 \kappa^2 \norm*{\ensuremath{\mathcal T}\!\varrho}_\infty^2\norm*{\nabla W}_2^2\right) \ ,\label{eq:H1T} \end{align} where we have used the fact that $W \in \ensuremath{{H}}^1(U)$. Thus, using~\eqref{eq:linfT}, we have that $\ensuremath{\mathcal T}(E) \subset E$ is uniformly bounded in $\ensuremath{{H}}^1(U)$. By Rellich's compactness theorem, this implies that $\ensuremath{\mathcal T}(E)$ is relatively compact in $\ensuremath{{L}}^2(U)$, and therefore in $E$, since $E$ is closed. Furthermore, $\ensuremath{\mathcal T}$ is Lipschitz continuous, i.e., we have for $\varrho_1,\varrho_2 \in E$: $\norm*{\ensuremath{\mathcal T}\!\varrho_1 -\ensuremath{\mathcal T}\!\varrho_2}_2 \leq C\norm*{\varrho_1-\varrho_2}_2$, for some positive constant $C$. By the Schauder fixed point theorem, there exists a fixed point $\varrho \in E$ of $\ensuremath{\mathcal T}$, which by \eqref{eq:H1T} is in $\ensuremath{{H}}^1(U)$. Plugging this expression into the weak form of the PDE~\eqref{eq:wsPDE} we obtain~\ref{thm:ex}. Also note that fixed points of $\ensuremath{\mathcal T}$ are bounded from below by $e^{-\beta \kappa (\norm*{W_-}_\infty + \norm*{W}_1 \norm*{\ensuremath{\mathcal T}\!\varrho}_\infty )}$, proving their strict positivity. Before proceeding to the proof of~\ref{thm:re}, we argue that every weak solution in $\ensuremath{{H}}^1(U)\cap \mathcal{P}_{\mathup{ac}}(U)$ is a fixed point of the nonlinear map, $\ensuremath{\mathcal T}$.
Consider the ``frozen'' version of the weak form in \eqref{eq:wsPDE}, \begin{align} -\beta^{-1}\intom{\nabla \varphi \cdot \nabla \vartheta } -\kappa \intom{\vartheta \nabla \varphi \cdot \nabla W \star \varrho}=0 \ , \quad \forall \varphi \in \ensuremath{{H}}^1(U) \, , \label{eq:fwf} \end{align} where $\varrho \in \ensuremath{{H}}^1(U)\cap\mathcal{P}_{\mathup{ac}}(U)$ is a weak solution of \eqref{eq:sPDE} and $\vartheta$ is the unknown function. The above equation is the weak form of a uniformly elliptic PDE whose associated bilinear form is coercive in the weighted space $\ensuremath{{H}}^1_0(U, \ensuremath{\mathcal T}\!\varrho)$, where $\ensuremath{{H}}^1_0(U)=\ensuremath{{H}}^1(U) / {\mathbb R}$. To see this, set $\vartheta(x)=h(x) \ensuremath{\mathcal T}\!\varrho$. We then obtain the following integral formulation of the transformed PDE (the drift term is absorbed into the weight $\ensuremath{\mathcal T}\!\varrho$, since $\beta^{-1}\nabla \ensuremath{\mathcal T}\!\varrho = -\kappa \, \ensuremath{\mathcal T}\!\varrho \, \nabla W \star \varrho$), \begin{align} -\beta^{-1}\intom{\nabla \varphi \cdot \nabla h \; \ensuremath{\mathcal T}\!\varrho }=0 \ , \qquad \forall \varphi \in \ensuremath{{H}}^1(U) \, . \end{align} Let $h_1$ and $h_2$ be two weak solutions of the above equation. Choosing $\varphi = h_1 - h_2$, we obtain $\intom{\abs*{\nabla (h_1 - h_2)}^2 \, \ensuremath{\mathcal T}\!\varrho}=0$, so that $h_1 - h_2$ is constant and the weak solution of~\eqref{eq:fwf} is unique up to normalisation. Here, we also used that $\ensuremath{\mathcal T}\!\varrho$ has full support, since it is bounded from below. Hence, if the solution is chosen to be a probability measure, it is unique. We observe that $\vartheta=\ensuremath{\mathcal T}\!\varrho$ is such a weak solution, as is $\varrho$. This implies that any weak solution must satisfy $\varrho=\ensuremath{\mathcal T}\!\varrho$. We obtain regularity of solutions by observing that if $f \in \ensuremath{{H}}^m(U)$ and $g \in \ensuremath{{H}}^n(U)$, then $f \star g \in \ensuremath{{\cW}}^{m+n,\infty}(U)$. Then we use a bootstrap argument, i.e., $W \in \ensuremath{{H}}^1(U), \varrho \in \ensuremath{{H}}^1(U)$ implies that $\varrho =\ensuremath{\mathcal T}\!\varrho \in \ensuremath{{\cW}}^{2,\infty}(U)$.
This implies that $W \star \varrho \in \ensuremath{{\cW}}^{3,\infty}(U)$, and so forth. Thus we have that $\varrho \in \ensuremath{{H}}^m(U) \cap \ensuremath{{\cW}}^{m,\infty}(U)$ for any $m \in {\mathbb N}$. The strict positivity follows from the lower bound on $\ensuremath{\mathcal T}\!\varrho$. \end{proof} We already know that associated with this PDE we have a free energy functional $\ensuremath{\mathscr F}_{\kappa}: \mathcal{P}_{\mathup{ac}}^+(U) \to {\mathbb R}$ defined on the space $\mathcal{P}_{\mathup{ac}}^+(U)$ of strictly positive absolutely continuous probability measures on $U$ by \begin{align}\label{eq:FreeEnergy} \ensuremath{\mathscr F}_{\kappa}(\varrho) & = \beta ^{-1}\int \varrho \log \varrho \dx{x} + \frac{\kappa}{2} \iint W(x-y) \varrho(y) \varrho(x) \dx{y} \dx{x} \\ & = S_{\beta}(\varrho) + \frac{\kappa}{2} \mathcal{E}(\varrho,\varrho) \ . \nonumber \end{align} Since we regard $\beta$ as a fixed parameter, we omit it in the subscript on $\ensuremath{\mathscr F}_\kappa$. The free energy $\ensuremath{\mathscr F}_{\kappa}$ is a Lyapunov function for the evolution, and its derivative along the flow is given, up to a positive constant, by minus the entropy dissipation functional $\mathcal{J}_\kappa: \mathcal{P}_{\mathup{ac}}^+(U) \to {\mathbb R}^+ \cup \set{+\infty}$ with \begin{align}\label{e:def:dissipation} \mathcal{J}_{\kappa}(\varrho)= \begin{cases}\intT{\abs*{\nabla \log \frac{\varrho}{\exp\bra{- \beta \kappa W \star \varrho}}}^2 \varrho} \ , & \varrho \in \mathcal{P}_{\mathup{ac}}^+(U) \cap \ensuremath{{H}}^1(U) \\ + \infty \ , & \textrm{otherwise} \, .
\end{cases} \end{align} This follows from rewriting~\eqref{eq:PDE} as $\partial_t \varrho = \dive \bra*{ \varrho \bra*{\beta^{-1} \nabla \log \varrho + \kappa \nabla W \star \varrho}}$ and differentiating the free energy functional along the flow \begin{equation} \frac{\dx{}}{\dx{t}} \ensuremath{\mathscr F}_{\kappa}(\varrho) = \int \bra*{ \beta^{-1} \log \varrho + \kappa W \star \varrho} \partial_t \varrho \dx{x} = - \int \abs*{ \beta^{-1} \nabla \log \varrho + \kappa \nabla W \star \varrho}^2 \varrho \dx{x} = -\beta^{-2}\ensuremath{\mathcal J}_{\kappa}(\varrho(t)) \leq 0. \end{equation} Finally, we have the Gibbs state map $F_{\kappa}:\mathcal{P}_{\mathup{ac}}(U) \to \mathcal{P}_{\mathup{ac}}(U)$, whose zeros encode the stationary states as fixed points of the nonlinear mapping $\ensuremath{\mathcal T}$ from~\eqref{def:T}: \begin{align}\label{eq:fixedPoint} F_{\kappa}(\varrho)=\varrho-\ensuremath{\mathcal T}\!\varrho= \varrho -\frac{1}{Z(\varrho,\kappa,\beta)}e^{-\beta \kappa W \star \varrho} \, , \qquad\text{where}\qquad Z(\varrho,\kappa,\beta)= \intT{e^{- \beta \kappa W \star \varrho}} \ . \end{align} The identification of stationary states of \eqref{eq:sPDE}, critical points of $\ensuremath{\mathscr F}_{\kappa}$, global minimisers of $\mathcal{J}_\kappa$, and zeros of $F_{\kappa}$ is given by the following proposition. \begin{prop} \label{prop:tfae} Assume $W(x)$ satisfies Assumption~\eqref{ass:B} and fix $\kappa >0$. Let $\varrho \in \mathcal{P}_{\mathup{ac}}^+(U)$. Then the following statements are equivalent: \begin{enumerate} \item $\varrho$ is a classical solution of the stationary McKean--Vlasov equation \eqref{eq:sPDE}. \item $\varrho$ is a zero of the map $F_\kappa(\varrho)$. \item $\varrho$ is a critical point of the free energy $\ensuremath{\mathscr F}_\kappa(\varrho)$. \item $\varrho$ is a global minimiser of the entropy dissipation functional $\ensuremath{\mathcal J}_\kappa (\varrho)$.
\end{enumerate} \end{prop} \begin{proof}\begin{itemize}[leftmargin=4.5em] \item[(1)$\Leftrightarrow$(2):] Observe that $\varrho$ is a zero of $F_\kappa(\varrho)$ if and only if it is a fixed point of $\ensuremath{\mathcal T}$. Thus by~\cref{thm:wellpsPDE}\ref{thm:ex} we have the desired equivalence. \item[(2)$\Rightarrow$(3):] The main observation for this is that zeros of $F_\kappa$ represent solutions of the Euler--Lagrange equations for $\ensuremath{\mathscr F}_\kappa$. Let $\varrho, \varrho_1 \in \mathcal{P}_{\mathup{ac}}^+(U)$ with $\ensuremath{\mathscr F}_\kappa(\varrho),\ensuremath{\mathscr F}_\kappa(\varrho_1)< \infty$, and define the standard convex interpolant $\varrho_s=(1-s)\varrho+s \varrho_1$ for $s \in (0,1)$. Then we have the following form of the Euler--Lagrange equations (which are well-defined for $\varrho,\varrho_1 \in \mathcal{P}_{\mathup{ac}}^+(U)$), \begin{align} \left.\frac{\dx{}}{\dx{s}}\ensuremath{\mathscr F}_\kappa(\varrho_s)\right\rvert_{s=0}= \intT{\left(\beta^{-1}\log \varrho + \kappa W \star \varrho\right) \eta} =0 \ , \label{eq:eullag} \end{align} where $\eta = \varrho_1-\varrho$. Now if $\varrho$ is a zero of $F_\kappa$ it is easy to check that the above expression is zero for any $\varrho_1 \in \mathcal{P}_{\mathup{ac}}^+(U)$. \item[(3)$\Rightarrow$(2):] On the other hand, assume that $\varrho$ is a critical point. If the integrand $\beta^{-1}\log \varrho + \kappa W \star \varrho$ in \eqref{eq:eullag} is not constant a.e., then the set \begin{align} A := \left\{x \in U : \left(\beta^{-1}\log \varrho + \kappa W \star \varrho\right) > \int \bra*{\beta^{-1}\log \varrho + \kappa W \star \varrho} \varrho \dx{y} \right\} \end{align} has nonzero Lebesgue measure; indeed, the integrand minus its $\varrho$-average integrates to zero against $\varrho$, so if it is not constant a.e.\ it must be strictly positive on a set of positive measure. We are now free to choose $\varrho_1 \in \mathcal{P}_{\mathup{ac}}^+(U)$ to be \begin{align} \varrho_1 = \frac{1}{L^d}\bra[\big]{(1- \varepsilon) \chi_A(x) + \varepsilon \chi_{A^c}(x)} \ , \end{align} for some $\varepsilon>0$.
For this choice of $\varrho_1$, we have \begin{align} \left.\frac{\dx{}}{\dx{s}}\ensuremath{\mathscr F}_\kappa(\varrho_s)\right\rvert_{s=0}&=(1-\varepsilon) a + \varepsilon b \, , \\ \text{where } \qquad a &= \frac{1}{L^d} \int_{A}\bra*{ \left(\beta^{-1}\log \varrho + \kappa W \star \varrho\right) - \int \bra*{\beta^{-1}\log \varrho + \kappa W \star \varrho} \varrho \dx{y} }\dx{x} \, ,\\ \text{and }\qquad b &=\frac{1}{L^d} \int_{A^c}\bra*{ \left(\beta^{-1}\log \varrho + \kappa W \star \varrho\right) - \int \bra*{\beta^{-1}\log \varrho + \kappa W \star \varrho} \varrho \dx{y} }\dx{x} \, . \end{align} From our choice of the set $A$, it is clear that $a >0$ and $b \leq 0$. Since $\varepsilon$ can be made arbitrarily small, $(1-\varepsilon)a + \varepsilon b$ can be made positive. This contradicts the fact that $\varrho$ is a critical point of $\ensuremath{\mathscr F}_\kappa$, which must satisfy the Euler--Lagrange equations in \eqref{eq:eullag}. Thus the integrand must be constant a.e., from which we obtain (3)$\Rightarrow$(2). \item[(2)$\Rightarrow$(4):] Clearly, $\ensuremath{\mathcal J}_\kappa$ is nonnegative. Thus if $\ensuremath{\mathcal J}_\kappa(\varrho)=0$ for some $\varrho \in \mathcal{P}_{\mathup{ac}}^+(U)$ then it is necessarily a global minimiser. Plugging in a zero $\varrho$ of $F_\kappa$ finishes this implication. \item[(4)$\Rightarrow$(2):] Now, any global minimiser $\varrho$ of $\ensuremath{\mathcal J}_\kappa(\varrho)$ must satisfy $\ensuremath{\mathcal J}_\kappa(\varrho)=0$ since $\ensuremath{\mathcal J}_\kappa(\varrho_\infty)=0$.
From the expression for $\ensuremath{\mathcal J}_\kappa(\varrho)$ and the fact that $\varrho$ has full support this is possible only if \begin{align} \nabla \log \frac{\varrho}{e^{-\beta \kappa W \star \varrho}} =0, \quad \mathrm{a.e.} \end{align} Thus, we have that $\varrho - C e^{-\beta \kappa W \star \varrho}=0$ a.e.\ for some constant $C>0$, which is given precisely by $Z(\varrho,\kappa,\beta)^{-1}$ since $\varrho \in \mathcal{P}_{\mathup{ac}}(U)$. Thus $\varrho$ is a zero of $F_\kappa(\varrho)$, which gives the reverse implication, (4)$\Rightarrow$(2). \qedhere \end{itemize} \end{proof} The following lemma is taken from~\cite{chayes2010mckean}, in which it is shown that for any unbounded $\varrho \in \mathcal{P}_{\mathup{ac}}(U)$ there exists a bounded $\varrho^\dagger \in\mathcal{P}_{\mathup{ac}}(U)$ having a lower value of the free energy. \begin{lem}[\cite{chayes2010mckean}] Assume that $W$ satisfies Assumption~\eqref{ass:B} and fix $\kappa \in (0, \infty)$. Then there exists a positive constant $B_0<\infty$ such that for all $\varrho \in \mathcal{P}_{\mathup{ac}}(U)$ with $\norm*{\varrho}_{\infty} > B_0$ there exists some $ \varrho^{\dagger} \in \mathcal{P}_{\mathup{ac}}(U)$ with $\norm*{\varrho^{\dagger}}_{\infty}\leq B_0 $ satisfying \begin{align} \ensuremath{\mathscr F}_{\kappa}(\varrho^{\dagger}) < \ensuremath{\mathscr F}_{\kappa}(\varrho) \, . \end{align} \label{lem:1} \end{lem} \begin{comment} \begin{proof} We start by noticing that we can control the entropy and interaction energy from below: \begin{align} S(\varrho) &\geq \log \varrho_\infty \, , \label{eq:elb}\\ \mathcal{E}(\varrho,\varrho) &\geq -\norm*{W_-}_\infty \, \label{eq:ielb}, \end{align} where the bound on the entropy follows from Jensen's inequality and the bound on the interaction energy follows from Assumption~\eqref{ass:B}. Thus, the free energy $\ensuremath{\mathscr F}_{\kappa}$ is bounded below. Now fix some $B>0$ and $\varrho \in \mathcal{P}_{\mathup{ac}}$.
Let $\varepsilon_B$ be the $\varrho$-measure of the Borel set, $\mathbb{B}_B=\{x \in U : \varrho(x)\geq B \}$, i.e, \begin{align} \varepsilon_B= \int_{\mathbb{B}_B} \! \varrho \dx{x} \ . \end{align} Following Chayes and Panferov ~\cite{chayes2010mckean}, we consider two different cases of $(\varrho, B)$, namely, $\varepsilon_B \geq \frac{1}{2}$ and $\varepsilon_B<\frac{1}{2}$. For the case, $\varepsilon_B \geq \frac{1}{2}$, we have by an application of Jensen's inequality \begin{align} \beta^{-1}\intT{\varrho \log \varrho} &\geq \frac{1}{2\beta} \log B + \beta^{-1}\int_{\mathbb{B}_B^c} \! \varrho \log \varrho \dx{x} \\ & \geq \beta^{-1}\left(\frac{1}{2} \log B + (1- \varepsilon_B) \log(1- \varepsilon_B)\right) \\ & \geq \beta^{-1}\left(\frac{1}{2\beta}\log B + e^{-1}\right) \ . \end{align} Since from~\eqref{eq:ielb} the interaction energy $\mathcal{E}(\varrho,\varrho)$ is bounded below, we choose $B$ large enough such that there exists some positive constant $ B_1 <B$ such that for all $ \varrho \in \mathcal{P}_{\mathup{ac}}$ with $\norm*{\varrho}_{\infty}>B_1, \varepsilon_{B_1} \geq \frac{1}{2}$ we have $\varrho_{\infty}< B_1$ and \begin{align} \ensuremath{\mathscr F}_{\kappa}(\varrho_\infty) < \ensuremath{\mathscr F}_{\kappa}(\varrho) \, . \end{align} Now consider the case for $(\varrho, B)$ with $\varepsilon_B < \frac{1}{2}$. We write $\varrho =\varrho_B + \varrho_r$, where, $\varrho_B= \varrho \chi_B$ and $\varrho_r= \varrho-\varrho_B$. It should be noted that $\varrho_r$ is not a probability measure as it is not normalised. For $B>1, \varepsilon_B >0$ we have \begin{align} S(\varrho) \geq \beta^{-1}\varepsilon_B \log B + S(\varrho_r) > S(\varrho_r) \, . \end{align} Consider the case $\ensuremath{\mathscr F}_{\kappa}(\varrho) \leq \ensuremath{\mathscr F}_{\kappa}(\varrho_\infty)$(otherwise the proof is complete). 
The interaction energy $\mathcal{E}(\varrho, \varrho)$ is bounded below and we have the following estimate \begin{align} S(\varrho) & \leq -\frac{\kappa}{2}\mathcal{E}(\varrho,\varrho) + \log \varrho_\infty+ \frac{\kappa}{2}\varrho_\infty\norm*{W}_1 \leq \frac{\kappa }{2}\norm*{W_-}_\infty +\log \varrho_\infty + \frac{\kappa}{2} \varrho_\infty\norm*{W}_1 :=s ^{*} \, . \end{align} We also write \begin{align} \mathcal{E}(\varrho, \varrho) & \leq \frac{2}{\kappa}\left( S(\varrho_\infty) -S(\varrho) \right) + \mathcal{E}(\varrho_\infty,\varrho_\infty) < \mathcal{E}(\varrho_\infty,\varrho_\infty) \, . \end{align} Using this and the fact that $\varepsilon_B < \frac{1}{2},$ we obtain \begin{align} \mathcal{E}(\varrho_r, \varrho_r ) &< \mathcal{E}(\varrho_\infty,\varrho_\infty) +(2 \varepsilon_B - \varepsilon_B^2 )\norm*{W_-}_\infty <\mathcal{E}(\varrho_\infty, \varrho_\infty) + \norm*{W_-}_\infty :=e^* \, . \end{align} Set the $\overline{\varrho_r}= (1- \varepsilon_B)^{-1}\varrho_r $ which is a probability measure. We are now in a position to compute the difference in free energy, $\ensuremath{\mathscr F}_{\kappa}(\varrho)-\ensuremath{\mathscr F}_{\kappa}(\overline{\varrho_r})$: \begin{align} S(\varrho) - S(\overline{\varrho_r}) &\geq \beta^{-1} \varepsilon_B \log B -\frac{\varepsilon_B}{1- \varepsilon_B} S(\varrho_r) + \beta^{-1}\log(1-\varepsilon_B) \geq \beta^{-1}\varepsilon_B \left( \log B - \frac{1 + \beta s^{*}}{1- \varepsilon_B} \right) \, . \end{align} For the interaction energy we know that \begin{align} \mathcal{E}(\varrho , \varrho) \geq \mathcal{E}(\varrho_r,\varrho_r) +(- 2 \varepsilon_B + \varepsilon_B^2) \norm*{W_-}_\infty \, . 
\end{align} The above estimate together with $\varepsilon_B<\frac{1}{2}$ gives \begin{align} \mathcal{E}(\varrho,\varrho) - \mathcal{E}(\overline{\varrho_r},\overline{\varrho_r}) \geq \left( \frac{-2 \varepsilon_B +\varepsilon_B^2}{(1-\varepsilon_B)^2} \right) \left( \norm*{W_-}_\infty + \mathcal{E}(\varrho,\varrho) \right) \geq -8 \varepsilon_B e^* \,. \end{align} Thus if we pick $B$ to be larger than some $B_2$, we have that $\ensuremath{\mathscr F}_{\kappa}(\overline{\varrho_r}) < \ensuremath{\mathscr F}_{\kappa}(\varrho)$. However, $\overline{\varrho_r}$ can take values as large as $2B_2$ a.e. due to the normalisation. Thus we set, $B_0= \max \{B_1,1,2B_2\}$ and pick for $\varrho^{\dagger}$, depending on $\varepsilon_B$ either $\varrho_\infty$ or $\overline{\varrho_r}$. \end{proof} \end{comment} The next lemma shows that minimisers of $\ensuremath{\mathscr F}_\kappa(\varrho)$ over $\mathcal{P}_{\mathup{ac}}(U)$ are attained in $\mathcal{P}_{\mathup{ac}}^+(U)$. \begin{lem}\label{lem:pos} Assume $W(x)$ satisfies Assumption~\eqref{ass:B} and let $\varrho \in \mathcal{P}_{\mathup{ac}}(U) \setminus \mathcal{P}_{\mathup{ac}}^+(U)$. Then, there exists $\varrho^+ \in \mathcal{P}_{\mathup{ac}}^+(U)$ such that \begin{align} \ensuremath{\mathscr F}_\kappa(\varrho^+) < \ensuremath{\mathscr F}_\kappa(\varrho) \, . \end{align} \end{lem} \begin{proof} Let ${\mathbb B}_0:=\{ x \in U:\varrho(x)=0\}$; since by assumption $\varrho \notin \mathcal{P}_{\mathup{ac}}^+(U)$, it follows that $|{\mathbb B}_0|>0$. We define the competitor state \begin{align} \varrho_\epsilon(x)= \frac{1}{1 + \epsilon |{\mathbb B}_0|}\bra*{\varrho(x) + \epsilon \chi_{{\mathbb B}_0}(x)} \in \mathcal{P}_{\mathup{ac}}^+(U) \end{align} and show that for $\epsilon>0$ sufficiently small $\varrho_\epsilon$ has smaller free energy.
We first compute its entropy \begin{align} S(\varrho_\epsilon) &= \frac{1}{1 + \epsilon |{\mathbb B}_0|} \intom{\bra{\varrho + \epsilon \chi_{{\mathbb B}_0}} \log \bra{\varrho + \epsilon \chi_{{\mathbb B}_0}}} - \log(1 +\epsilon |{\mathbb B}_0|) \\ &< S(\varrho) -\frac{\epsilon |{\mathbb B}_0|}{1 +\epsilon |{\mathbb B}_0|} S(\varrho) + \frac{\epsilon |{\mathbb B}_0| \log \epsilon }{1 +\epsilon |{\mathbb B}_0|} < S(\varrho) -\frac{\epsilon |{\mathbb B}_0|}{1 +\epsilon |{\mathbb B}_0|}\bra*{ S(\varrho_\infty) - \log \epsilon } \, , \end{align} where we have used the fact that $S(\varrho)> S(\varrho_\infty), \forall \varrho \in \mathcal{P}_{\mathup{ac}}(U), \varrho \neq \varrho_\infty$. For computing the interaction term, we use the fact that $\ensuremath{\mathcal E}(\varrho,\varrho)> - \norm*{W_-}_\infty$ to estimate \begin{align} \frac{\kappa}{2} \ensuremath{\mathcal E}(\varrho_\epsilon,\varrho_\epsilon) &=\frac{\kappa}{2}\iintom{W(x-y) \varrho_\epsilon(x)\varrho_\epsilon(y) } \\ & < \frac{\kappa}{2}\ensuremath{\mathcal E}(\varrho,\varrho) + \frac{\kappa}{2}\bra*{\frac{1}{(1 + \epsilon |{\mathbb B}_0|)^2}-1} \ensuremath{\mathcal E}(\varrho,\varrho) + \kappa \norm*{W}_1 \frac{\epsilon}{(1 + \epsilon |{\mathbb B}_0|)^2} + \frac{\kappa}{2}\norm*{W}_1|{\mathbb B}_0| \frac{\epsilon^2}{(1 + \epsilon |{\mathbb B}_0|)^2} \\ & \leq \frac{\kappa}{2}\ensuremath{\mathcal E}(\varrho,\varrho) +\frac{\kappa}{2}\bra*{\frac{\epsilon |{\mathbb B}_0|(2 + \epsilon |{\mathbb B}_0|)}{(1 + \epsilon |{\mathbb B}_0|)^2}} \norm*{W_-}_\infty + \frac{\epsilon |{\mathbb B}_0|}{1 +\epsilon |{\mathbb B}_0|} C_1\\ &< \frac{\kappa}{2}\ensuremath{\mathcal E}(\varrho,\varrho) + \frac{\epsilon |{\mathbb B}_0|}{1 +\epsilon |{\mathbb B}_0|} (C_1 + C_2) \, , \end{align} where $C_1,C_2<\infty$ depend on $W$ and ${\mathbb B}_0$ and we have chosen $\epsilon$ sufficiently small. 
Combining the two estimates, we obtain \begin{align} \ensuremath{\mathscr F}_\kappa(\varrho_\epsilon) < \ensuremath{\mathscr F}_\kappa(\varrho) +\frac{\epsilon |{\mathbb B}_0| }{1 + \epsilon |{\mathbb B}_0|}\bra*{\beta^{-1} \log \epsilon -\beta^{-1} S(\varrho_\infty) + (C_1+C_2) \epsilon} \, . \end{align} Thus, for $\epsilon$ sufficiently small but positive, the logarithmic term dominates and gives the required result. \end{proof} \begin{thm}[Existence of a minimiser~\cite{chayes2010mckean}]\label{thm:dirmet} Assume $W(x)$ satisfies Assumption~\eqref{ass:B}. For $\kappa \in (0,\infty)$ the free energy $\ensuremath{\mathscr F}_{\kappa}(\varrho)$ has a smooth minimiser $\varrho_{\kappa} \in C^\infty(U) \cap \mathcal{P}_{\mathup{ac}}^+(U)$. \label{thm:1} \end{thm} \begin{proof} We start by noticing that we can control the entropy and interaction energy from below: \begin{equation} S(\varrho) \geq \log \varrho_\infty \qquad\text{and}\qquad \mathcal{E}(\varrho,\varrho) \geq -\norm*{W_-}_\infty \,, \label{eq:elb} \end{equation} where the bound on the entropy follows from Jensen's inequality and the bound on the interaction energy follows from Assumption~\eqref{ass:B}. Since by~\eqref{eq:elb} $\ensuremath{\mathscr F}_{\kappa}(\varrho)$ is bounded from below over $\mathcal{P}_{\mathup{ac}}(U)$, there exists a minimising sequence $\{\varrho_j\}_{j=1}^\infty \subset \mathcal{P}_{\mathup{ac}}(U)$. Furthermore, by~\cref{lem:1}, the minimising sequence can be chosen such that $\{\varrho_j\}_{j=1}^\infty \subset \ensuremath{{L}}^2(U)$ with $\norm*{\varrho_j}_2 \leq B_0^{\frac{1}{2}}$, where $B_0$ is the constant from~\cref{lem:1}. Thus, there exists a subsequence, which we continue to denote by $\{\varrho_j\}_{j=1}^\infty$, such that $\varrho_j \rightharpoonup \varrho_{\kappa}$ weakly in $\ensuremath{{L}}^2(U)$. Clearly, we have that $\intom{\varrho_\kappa}=1$. It is also easy to see that $\varrho_\kappa \geq 0$ a.e.
Thus $\varrho_\kappa \in \ensuremath{{L}}^2(U) \cap \mathcal{P}_{\mathup{ac}}(U)$. The lower semicontinuity of $S(\varrho)$ follows from standard results (cf.~\cite[Lemma 4.3.1]{jost1998calculus}). \begin{comment}For $k, l \in {\mathbb N}^d$, we write $k<l$ if $k_i < l_i$ for $i=\set{1,\dots, d}$. Taking the Fourier transform, we obtain for any $k_0 \in {\mathbb N}^d$ \begin{align} |\mathcal{E}(\varrho_\kappa,\varrho_\kappa)-\mathcal{E}(\varrho_j,\varrho_j)| \leq \sum_{k \leq k_0} \frac{\tilde{W}(k)}{N_k}\sum_{\sigma \in \mathrm{Sym}(\Lambda)}\bra*{ |\tilde{\varrho_\kappa}(\sigma(k))|^2 - |\tilde{\varrho_j}(\sigma(k))|^2 } + C \min_{i\in\set{1,\dots,d}}\max_{k: k_i>k_0} \tilde{W}(k) \, , \end{align} where the constant $C$ depends only on $L$ and $k_0$. The second term can be made arbitrarily small by choosing $k_0$ sufficiently large by the Riemann--Lebesgue lemma. The first term can be made arbitrarily small for any $k_0$ due to the weak convergence result by choosing $j$ sufficiently large. \end{comment} Consider now the interaction energy term. For $W \in \ensuremath{{L}}^1(U)$, the interaction energy is weakly continuous in $\ensuremath{{L}}^2(U)$~\cite[Theorem 2.2, Equation (9)]{chayes2010mckean}. This implies that the free energy $\ensuremath{\mathscr F}_\kappa(\varrho)$ has a minimiser $\varrho_\kappa$ over $\mathcal{P}_{\mathup{ac}}(U)$. A direct consequence of this and~\cref{lem:pos} is that the minimisation problem is well-posed in $\mathcal{P}_{\mathup{ac}}^+(U)$, since the minimiser $\varrho_\kappa$ must be attained in $\mathcal{P}_{\mathup{ac}}^+(U)$. We can then use~\cref{thm:wellpsPDE} together with~\cref{prop:tfae} to argue that any such minimiser must be smooth. \end{proof} \begin{prop} \label{prop:con} Assume $W$ satisfies Assumption~\eqref{ass:B} such that $W_\ensuremath{\mathup{u}}$ is bounded from below, where $W_\ensuremath{\mathup{u}}$ is the unstable part defined in~\cref{def:Hstable}.
Then, for $\kappa \in \bra*{ 0,\kappa_{\mathrm{con}}}$, where $\kappa_{\mathrm{con}}:=\beta^{-1} \norm*{W_{\ensuremath{\mathup{u}}-}}_\infty^{-1}$, the functional $\ensuremath{\mathscr F}_\kappa(\varrho)$ is strictly convex on $\mathcal{P}_{\mathup{ac}}(U)$, that is, for all $s\in(0,1)$ it holds that \begin{align} \ensuremath{\mathscr F}_{\kappa}\bra[\big]{(1-s) \varrho_1 + s \varrho_2} < (1-s) \ensuremath{\mathscr F}_{\kappa}(\varrho_1) +s \ensuremath{\mathscr F}_{\kappa}(\varrho_2) \qquad \forall \varrho_1,\varrho_2 \in \mathcal{P}_{\mathup{ac}}(U) \ . \end{align} \end{prop} \begin{proof} For $\varrho_1,\varrho_2 \in \mathcal{P}_{\mathup{ac}}^+(U)$, let $\varrho_s=(1-s) \varrho_1 + s \varrho_2$, $s \in (0,1)$, and $\eta=\varrho_2-\varrho_1$. Then we have \begin{align} \frac{\dx{}^2 }{\dx{s^2}} \ensuremath{\mathscr F}_\kappa(\varrho_s)&=\beta^{-1}\intom{\frac{\eta^2}{\varrho_s}} + \kappa \intom{(W \star \eta) \eta} =\beta^{-1}\intom{\frac{\eta^2}{\varrho_s^2}\varrho_s} + \kappa \intom{((W_\ensuremath{\mathup{s}} +W_\ensuremath{\mathup{u}})\star \eta) \eta}\,. \end{align} Now, we apply Jensen's inequality and use the fact that $W_\ensuremath{\mathup{s}}\in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}$, which gives us \begin{align} \frac{\dx{}^2 }{\dx{s^2}} \ensuremath{\mathscr F}_\kappa(\varrho_s)& \geq \beta^{-1}\left(\intom{|\eta|}\right)^2 + \kappa \intom{(W_\ensuremath{\mathup{u}} \star \eta) \eta}\,. \end{align} Finally, we bound $W_\ensuremath{\mathup{u}}(x)$ from below to obtain \begin{align} \frac{\dx{}^2 }{\dx{s^2}} \ensuremath{\mathscr F}_\kappa(\varrho_s)& \geq\left(\beta^{-1}- \kappa \norm*{W_{\ensuremath{\mathup{u}}-}}_\infty\right) \left(\intom{|\eta|}\right)^2 \, , \end{align} showing the desired statement.
\end{proof} \begin{rem} It follows from the above result that if $W_\ensuremath{\mathup{u}}\equiv 0$, i.e., $W\in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}$, then $\ensuremath{\mathscr F}_\kappa(\varrho)$ is strictly convex for all $\kappa \in (0,\infty)$. \end{rem} \section{Global asymptotic stability}\label{S:gas} \begin{comment} \subsection{The free energy as Lyapunov function} In this section, we will use the free energy as defined in~\eqref{eq:FreeEnergy} to study the global asymptotic stability of the uniform state for the system~\eqref{eq:PDE}. The free energy $\ensuremath{\mathscr F}_\kappa$ is a Lyapunov function for the evolution. This follows from rewriting~\eqref{eq:PDE} as $\partial_t \varrho = \dive \bra*{ \varrho \bra*{\beta^{-1} \nabla \log \varrho + \nabla W \star \varrho}}$ and differentiating the free energy functional along the flow as follows \begin{equation} \frac{\dx{}}{\dx{t}} \ensuremath{\mathscr F}_\kappa(\varrho) = \int \bra*{ \beta^{-1} \log \varrho + \kappa W \star \varrho} \partial_t \varrho \dx{x} = - \int \abs*{ \beta^{-1} \nabla \log \varrho + \kappa \nabla W \star \varrho}^2 \varrho \dx{x} . \end{equation} To analyse the trend to equilibrium, it will be convenient to consider the free energy difference $\ensuremath{\mathscr F}_\kappa(\varrho)-\ensuremath{\mathscr F}_\kappa(\varrho_{\infty})$ where \begin{align} \ensuremath{\mathscr F}_\kappa(\varrho_\infty) &= \beta^{-1}S(\varrho_\infty) + \frac{\kappa}{2}\mathcal{E}(\varrho_\infty, \varrho_\infty) = \beta^{-1}\log \varrho_\infty \ , \end{align} using the fact that $\mathcal{E}(\varrho_\infty, \varrho ) = 0$ for any $\varrho\in \mathcal{P}(U)$ and $W$ with mean zero. 
By introducing the relative entropy \begin{align}\label{e:def:RelEnt} \mathcal{H}(\varrho |\varrho_{\infty}) = \intom{\varrho \log\left(\frac{\varrho}{\varrho_\infty}\right)} \ , \end{align} we observe the following identity between the free energy gap and the relative entropy \begin{align}\label{FreeEnergyExcess} \ensuremath{\mathscr F}_\kappa(\varrho)-\ensuremath{\mathscr F}_\kappa(\varrho_{\infty}) = \beta^{-1}\mathcal{H}(\varrho | \varrho_\infty) + \frac{\kappa}{2}\mathcal{E}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \ . \end{align} The identity~\eqref{FreeEnergyExcess} provides us with an intuitive understanding of the two competing mechanisms in the dynamics of this system. On the one hand the relative entropy is minimised for the uniform state $\rho_\infty$ and therefore forces wants the mass to be spread out, where on the other hand the interaction energy with an attractive potential prefers clustered states. A crucial investigation of these two competing mechanisms is the main theme of this work. \end{comment} \subsection{Trend to equilibrium in relative entropy} In this section, we will use the free energy as defined in~\eqref{eq:FreeEnergy} to study the global asymptotic stability of the uniform state for the system~\eqref{eq:PDE}. By introducing the relative entropy \begin{align}\label{e:def:RelEnt} \mathcal{H}(\varrho |\varrho_{\infty}) = \intom{\varrho \log\left(\frac{\varrho}{\varrho_\infty}\right)} \ , \end{align} we observe the following identity between the free energy gap and the relative entropy \begin{align}\label{FreeEnergyExcess} \ensuremath{\mathscr F}_\kappa(\varrho)-\ensuremath{\mathscr F}_\kappa(\varrho_{\infty}) = \beta^{-1}\mathcal{H}(\varrho | \varrho_\infty) + \frac{\kappa}{2}\mathcal{E}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \ . 
\end{align} By directly differentiating the relative entropy~\eqref{e:def:RelEnt}, we obtain its rate of change along the flow \begin{align} \frac{\dx{}\mathcal{H}(\varrho | \varrho_\infty)}{\dx{t}}= - \beta^{-1}\intom{|\nabla\log \varrho|^2 \varrho} -\kappa \intom{\nabla( W \star \varrho) \cdot \nabla \varrho} \, . \label{eq:relen} \end{align} \begin{comment} We now state two preliminary facts that are useful in the study of convergence to equilibrium. On the torus, the heat semigroup is hypercontractive and satisfies a logarithmic Sobolev inequality~\cite{emery1987simple} of the form \begin{align} \mathcal{H}(\varrho | \varrho_\infty ) \leq \frac{L^2}{4 \pi^2}\intom{|\nabla \log \varrho|^2 \varrho } \ , \label{eq:LSI} \end{align} where the term on the right hand side is the Fisher information of the measure $\varrho$. It will be convenient to compare the $L^1$ norm with the relative entropy, which is granted by the Csisz\'ar--Kullback--Pinsker (CKP) inequality~\cite{bolley2005weighted} \begin{align} \norm*{\varrho - \varrho_\infty}_{1} \leq \sqrt{2 \mathcal{H}(\varrho|\varrho_\infty)} \label{eq:CKP} \, . \end{align} These preliminary inequalities allow us to obtain exponential convergence to equilibrium in relative entropy. \end{comment} \begin{prop}[Exponential stability and convergence in relative entropy]\label{prop:review} Let $\varrho_0 \in \mathcal{P}_{\mathup{ac}}(U)\cap \ensuremath{{H}}^{3+d}(U)$ with $S(\varrho_0) <\infty$ and $W \in \ensuremath{{\cW}}^{2,\infty}(U)$. Then the classical solution $\varrho$ of \eqref{eq:PDE} is exponentially stable in relative entropy and satisfies \begin{equation} \ensuremath{\mathcal H}(\varrho(\cdot,t) | \varrho_\infty) \leq \exp\pra*{\bra*{-\frac{4 \pi^2}{\beta L^2} +2 \kappa \norm*{\Delta W_\ensuremath{\mathup{u}}}_{\infty}} t } \ \ensuremath{\mathcal H}(\varrho_0 | \varrho_\infty) \, .
\end{equation} In particular, if $W \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}$ (for any $\beta,\kappa >0$), or if $W\notin \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}$ and $\beta \kappa < \frac{2\pi^2}{L^2 \norm*{\Delta W_\ensuremath{\mathup{u}}}_{\infty}}$, we have exponentially fast convergence to the uniform state in relative entropy for any initial condition $\varrho_0 \in \mathcal{P}_{\mathup{ac}}(U)\cap \ensuremath{{H}}^{3+d}(U)$. \end{prop} \begin{proof}[Proof of~\cref{thm:m1}\ref{thm:m1b}] We know the solution $\varrho$ is classical, thus $\mathcal{H}(\varrho(\cdot,t) | \varrho_\infty) \in C^1(0,\infty)$. Using~\eqref{eq:relen}, we obtain with another integration by parts \begin{align} \frac{\dx{}}{\dx{t}} \mathcal{H}(\varrho|\varrho_\infty) = - \beta^{-1}\intom{|\nabla\log \varrho|^2 \varrho} + \kappa \intom{\Delta W \star \varrho\ \varrho}, \quad \forall t \in (0,\infty) \, . \label{eq:reldec} \end{align} The first term is the Fisher information and can be controlled by a log-Sobolev inequality of the form \begin{align} \mathcal{H}(\varrho | \varrho_\infty ) \leq \frac{L^2}{4 \pi^2}\intom{|\nabla \log \varrho|^2 \varrho } \ . \label{eq:LSI} \end{align} Now, we rewrite the interaction term in its Fourier series by~\eqref{Fourier:Interaction}, estimate it in terms of the unstable modes, and transform it back to position space \begin{align} \intom{\Delta W \star \varrho\ \varrho} &= -\frac{ 4\pi^2}{L^2} \sum_{k \in {\mathbb N}^d} \abs{k}^2 \tilde{W}(k)\frac{1}{N_k}\sum_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{g}(\sigma(k))|^2 \\ &\leq - \frac{ 4\pi^2}{L^2} \sum_{k \in {\mathbb N}^d} \abs{k}^2 \tilde{W}_\ensuremath{\mathup{u}}(k)\frac{1}{N_k}\sum_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{g}(\sigma(k))|^2 \\ &= \int \Delta W_\ensuremath{\mathup{u}} \star \varrho \ \varrho \dx{x} \, .
\end{align} Now, we use the fact that $\Delta W_\ensuremath{\mathup{u}}$ has mean zero to replace $\varrho$ by $\varrho- \varrho_\infty$ and estimate \begin{align} \int \Delta W_\ensuremath{\mathup{u}} \star \varrho \ \varrho \dx{x} &\leq \norm*{\Delta W_\ensuremath{\mathup{u}} \star (\varrho -\varrho_\infty)}_\infty \norm*{ \varrho - \varrho_\infty}_1 \leq \norm*{\Delta W_\ensuremath{\mathup{u}}}_\infty \norm*{ \varrho - \varrho_\infty}_1^2 \,. \end{align} The right-hand side can be controlled using the Csisz\'ar--Kullback--Pinsker (CKP) inequality \begin{align} \norm*{\varrho - \varrho_\infty}_{1} \leq \sqrt{2 \mathcal{H}(\varrho|\varrho_\infty)} \label{eq:CKP} \, . \end{align} In combination with~\eqref{eq:LSI} and~\eqref{eq:CKP}, we obtain the bound \[ \frac{\dx{}}{\dx{t}} \mathcal{H}(\varrho|\varrho_\infty) \leq \bra*{- \frac{4 \pi^2}{\beta L^2} + 2\kappa\norm*{\Delta W_\ensuremath{\mathup{u}}}_\infty} \ensuremath{\mathcal H}(\varrho | \varrho_\infty) \, . \] Finally, Gronwall's inequality yields the desired result. \end{proof} \begin{rem} For the case of the noisy Hegselmann--Krause model studied in~\cite{chazelle2017well}, we have $W(x)=\int_{0}^x \phi(|y|)\, y \dx{y}$ with $\phi(|x|)={\mathbbm{1}}_{|x|\leq R}$. By the same arguments, we can estimate $\norm*{W_\ensuremath{\mathup{u}}''(x)}_{\infty}\leq \norm*{W''(x)}_{\infty}=1$. Thus, for $\kappa<\frac{2 \pi^2}{\beta L^2}$, we have exponential convergence to equilibrium. See~\autoref{S:Hegselmann} for a detailed analysis of this model. \end{rem} \begin{rem} By the improved entropy defect estimate of \cref{lem:entdef}, the above statement could be slightly improved under more specific assumptions on the unstable modes of the potential. For the moment, we want to keep the presentation as concise as possible and refer to \autoref{S:thermodynamic} for the details.
\end{rem} \subsection{Linear stability analysis}\label{ssec:lsa} We start this subsection by linearising the stationary McKean--Vlasov equation around some stationary solution, $\varrho_\kappa$. We obtain the following linear integrodifferential operator: \begin{align} \ensuremath{\mathcal L} w := \beta^{-1} \Delta w+ \kappa\dive \bra[\big]{ \varrho_\kappa \nabla (W \star w)} + \kappa \dive\bra[\big]{w \nabla(W \star \varrho_\kappa)}\, . \end{align} If we pick $\varrho_\kappa$ to be the uniform state $\varrho_\infty$, the above expression reduces to \begin{align} \ensuremath{\mathcal L} w := \beta^{-1} \Delta w+ \kappa\varrho_\infty \Delta(W \star w) \, . \end{align} We are now interested in studying the spectrum of this operator over mean-zero $\ensuremath{{L}}^2(U)$ functions, $\ensuremath{{L}}^2_0(U)$. From the classical theory for symmetric elliptic operators, it follows that the eigenfunctions of this system form an orthonormal basis of $\ensuremath{{L}}^2_0(U)$ given by $\{ L^{-\frac{d}{2}} e^{i \frac{2 \pi}{L} k' \cdot x}\}_{k' \in {\mathbb Z}^d\setminus\set{\mathbf{0}}}$ with the eigenvalues given by \begin{align} \lambda_{k'}= \bra*{-\beta^{-1}\bra*{\frac{2 \pi |k'|}{L}}^2 - \kappa L^{-d/2} \bra*{\frac{2 \pi |k'|}{L}}^2 \hat{W}(k') } \, , \end{align} where $\hat{W}(k')= L^{-\frac{d}{2}}\intom{W(x) e^{-i \frac{2 \pi}{L} k' \cdot x}}$. One can check that we have the following relationship \begin{align} \hat{W}(k') = \frac{1}{\Theta(k)} \tilde{W}(k) \,, \qquad k_\ell= |k'_\ell|, \ k \in {\mathbb N}^d \, , \end{align} where $\Theta(k)$ is as defined in~\eqref{e:def:thetak}.
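The dispersion relation above is straightforward to verify numerically. The following minimal sketch (not part of the analysis; the single-mode potential, the grid size, and all parameter values are illustrative choices) computes $\hat{W}(k')$ by quadrature in one dimension and checks that the eigenvalue of the mode $k'=1$ changes sign exactly at the predicted critical value of $\kappa$:

```python
import numpy as np

# Illustrative 1-d check of the eigenvalue formula for the linearisation
# around the uniform state:
#   lambda_k = -(2*pi*k/L)^2 * (1/beta + kappa * L^(-1/2) * W_hat(k)),
# with W_hat(k) = L^(-1/2) * integral over U of W(x) exp(-i 2 pi k x / L).
L, beta = 2.0 * np.pi, 1.0
N = 4096
x = -L / 2 + L * np.arange(N) / N          # uniform periodic grid on U

def W(x):
    # attractive single-mode potential: only the k = 1 cosine mode is active
    return -np.cos(2.0 * np.pi * x / L)

def W_hat(k):
    # rectangle rule, spectrally accurate for smooth periodic integrands
    vals = W(x) * np.exp(-1j * 2.0 * np.pi * k * x / L)
    return vals.sum().real * (L / N) / np.sqrt(L)

def eigenvalue(k, kappa):
    return -(2.0 * np.pi * k / L) ** 2 * (1.0 / beta + kappa * W_hat(k) / np.sqrt(L))

# the k = 1 eigenvalue crosses zero at kappa = -L^(1/2) / (beta * W_hat(1));
# here W_hat(1) = -sqrt(L)/2, so the crossing occurs at kappa = 2/beta
kappa_crit = -np.sqrt(L) / (beta * W_hat(1))
print(kappa_crit)                          # ~2.0
print(eigenvalue(1, 0.5 * kappa_crit))     # negative: uniform state linearly stable
print(eigenvalue(1, 2.0 * kappa_crit))     # positive: uniform state unstable
```

The rectangle rule is exact (up to machine precision) for this trigonometric integrand, so the computed crossing matches the closed-form value $2/\beta$.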
To obtain the relation between $\hat{W}(k')$ and $\tilde{W}(k)$, we have used the fact that $W$ is coordinate-wise even, which implies that \begin{align} \intom{W(x) e^{-i \frac{2 \pi}{L} k' \cdot x}}&=\intom{W(x) \prod\limits_{\ell=1}^d \bra*{\cos\bra*{\frac{2 \pi k'_\ell x_\ell}{L} } + i \sin\bra*{\frac{2 \pi k'_\ell x_\ell}{L} } }} \\ &= \intom{W(x) \prod\limits_{\ell=1}^d \bra*{\cos\bra*{\frac{2 \pi k'_\ell x_\ell}{L} } }} \, . \end{align} Thus, we have the following expression for the value of the parameter $\kappa_\sharp$ at which the first eigenvalue of~$\ensuremath{\mathcal L}$ crosses the imaginary axis: \begin{align} \kappa_\sharp = -\frac{L^{d/2}}{\beta \min_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \tilde{W}(k)/\Theta(k)} \, . \label{eq:koc} \end{align} We will refer to $\kappa_\sharp$ as the \textbf{point of critical stability}. We denote by $k^\sharp$ the critical wave number (if it is unique) and define it as: \begin{align} k^\sharp:= \argmin_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \frac{\tilde{W}(k)}{\Theta(k)} \, . \label{eq:poc} \end{align} \section{Bifurcation theory}\label{S:lbt} For the local bifurcation analysis, it is convenient to rewrite the fixed point equation~\eqref{eq:fixedPoint} of the nonlinear mapping~\eqref{def:T} by making the parameter $\kappa \in (0,\infty)$ explicit. Hence, in this section we consider the nonlinear map $F:\ensuremath{{L}}^2_s(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2_s(U)$ defined as \begin{align} F(\varrho,\kappa)=F_\kappa(\varrho) = \varrho - \frac{1}{Z} e^{-\beta \kappa W \star \varrho}, \qquad\text{where}\qquad Z =\intT{e^{-\beta \kappa W \star \varrho}} \ , \label{eq:kirkwoodmonroe} \end{align} where $\beta > 0$ is fixed, and $W \in \ensuremath{{L}}^2_s(U)$ with $\ensuremath{{L}}^2_s(U)$ the space of coordinate-wise even and square integrable functions as defined in~\eqref{eq:def:L2s}. The purpose of this section is to study the bifurcation problem: \begin{align} F(\varrho,\kappa)=0 \, .
\end{align} Any zero of $F(\varrho,\kappa)$ is also a coordinate-wise even fixed point of $\ensuremath{\mathcal T}:\mathcal{P}_{\mathup{ac}} \to \mathcal{P}_{\mathup{ac}}$. The converse is true if~$W$ satisfies Assumption~\eqref{ass:B}. We do not make this assumption for the whole section, as we want the bifurcation theory to be valid for more singular potentials, e.g., the Keller--Segel model, which we treat in a later section. It is also clear that the map $F(\varrho,\kappa)$ is translation invariant on the whole space $\ensuremath{{L}}^2(U)$, i.e., if $\varrho$ is a zero of $F(\varrho,\kappa)$, then so is any translate $\varrho(\cdot - y)$ of $\varrho(\cdot)$ for any $y\in U$. This is the motivation for the restriction of $F$ to the space $\ensuremath{{L}}^2_s(U)$. We will further justify our choice of the space $\ensuremath{{L}}^2_s(U)$ in~\cref{lem:lbfour}. The first result is an easy consequence of the characterisation of stationary solutions from~\autoref{S:stationary}, but it could also be derived by a standard contraction mapping argument on the map $F$, as done in~\cite[Theorem 4.1]{tamura1984asymptotic} and~\cite[Theorem 3]{messer1982statistical}. \begin{prop} Assume $W(x)$ satisfies Assumption~\eqref{ass:B}. Then, for $\kappa$ sufficiently small, the uniform state $\varrho_\infty$ is the only solution of $F(\varrho,\kappa)=0$. \begin{proof} \cref{prop:con} implies that $\ensuremath{\mathscr F}_\kappa(\varrho)$ is strictly convex for $\kappa< \kappa_{\mathrm{con}} = \beta^{-1} \norm*{W_\ensuremath{\mathup{u}}}_\infty^{-1}$. Hence, by~\cref{thm:dirmet}, it has a unique minimiser and exactly one critical point. By \cref{prop:tfae}, this implies that $F(\varrho,\kappa)=0$ has a unique solution.
\end{proof} \end{prop} We use the trivial branch of solutions $F(\varrho_\infty,\kappa)=0, \kappa \in(0,\infty)$ with $\varrho_\infty \equiv 1/L^d$ to centre the map and define for any $u\in \ensuremath{{L}}^2_s(U)$ \begin{equation}\label{eq:hatF} \hat{F}(u,\kappa)=F(u + \varrho_\infty,\kappa) \, . \end{equation} In this way, we have $\hat{F}(0,\kappa)=0$. We compute the Fr\'echet derivatives of this map for variations $w_1,w_2, w_3\in \ensuremath{{L}}_s^2(U)$ \begin{align} D_\varrho(\hat{F}(0,\kappa))[w_1] &= w_1 + \beta \kappa \varrho_\infty (W \star w_1) - \beta \kappa \varrho_\infty^2 \intT{(W \star w_1)(x)} \, , \label{eq:dr} \displaybreak[0]\\ D_\kappa(\hat{F}(0,\kappa)) &=0 \, ,\label{eq:dk} \displaybreak[0]\\ D^2_{\varrho \kappa}(\hat{F}(0,\kappa))[w_1] &= \varrho_\infty (W \star w_1) - \varrho_\infty^2 \intT{(W \star w_1) (x)} -\varrho_\infty^2 W \star D_\varrho ( \hat{F}(0,\kappa))[w_1] \, , \label{eq:drk} \displaybreak[0]\\ D^2_{\varrho\varrho}(\hat{F}(0,\kappa))[w_1,w_2]&= \beta \kappa (w_2- D_\varrho( \hat{F}(0,\kappa))[w_2])(W \star w_1) \label{eq:drr} \\ &-\beta \kappa \varrho_\infty (w_2- D_\varrho ( \hat{F}(0,\kappa)) [w_2]) \intT{W \star w_1(x)} \nonumber \\ &-\beta \kappa \varrho_\infty \intT{W \star w_1(x) (w_2- D_\varrho ( \hat{F}(0,\kappa)) [w_2])(x)} \, , \nonumber \displaybreak[0]\\ D^3_{\varrho \varrho \varrho}\hat{F}(0,\kappa)[w_1,w_2,w_3] &= -\beta \kappa D^2_{\varrho \varrho}\hat{F}(0,\kappa)[w_2,w_3](W \star w_1) \label{eq:drrr} \\ &+ \beta \kappa \varrho_\infty (D^2_{\varrho \varrho}\hat{F}(0,\kappa)[w_2,w_3]) \intT{(W \star w_1)(x)} \nonumber \\ &- \beta \kappa (w_2 - D_\varrho \hat{F}(0,\kappa)[w_2]) \intT{(W \star w_1)(x) (w_3 - D_\varrho \hat{F}(0,\kappa)[w_3])(x)} \\ \nonumber &- \beta \kappa (w_3 - D_\varrho \hat{F}(0,\kappa)[w_3]) \intT{(W \star w_1)(x) (w_2 - D_\varrho \hat{F}(0,\kappa)[w_2])(x)} \\ \nonumber &+ \beta \kappa \varrho_\infty \intT{(W \star w_1)(x)(D^2_{\varrho \varrho}\hat{F}(0,\kappa)[w_2,w_3])(x)} \, . 
\end{align} We have the following characterisation of the local bifurcations of $\hat{F}$: \begin{thm} \label{thm:c1bif} Consider $\hat{F}: \ensuremath{{L}}^2_s(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2_s(U)$ as defined in~\eqref{eq:kirkwoodmonroe} with $W \in \ensuremath{{L}}^2_s(U)$. Assume there exists $k^* \in {\mathbb N}^d$ such that: \begin{enumerate} \item $\operatorname{card}\set*{k : \frac{\tilde{W}(k)}{\Theta(k)}=\frac{\tilde{W}(k^*)}{\Theta(k^*)}}=1$\,. \item $\tilde{W}(k^*) <0$\,. \end{enumerate} Then $(0,\kappa_*)\in \ensuremath{{L}}^2_s(U) \times {\mathbb R}^+ $ is a bifurcation point of $\hat{F}(\varrho,\kappa)=0$, where \begin{align} \kappa_*=-\frac{L^{\frac{d}{2}} \Theta(k^*)}{\beta \tilde{W}(k^*)} \label{eq:thetak}\, . \end{align} In addition, there exists a branch of solutions of the form \begin{align} \varrho_{*}(s)&= \varrho_\infty + s w_{k^*} + r(s w_{k^*},\kappa(s)) \, , \label{eq:branchstructure} \end{align} where $w_{k^*} \in \ensuremath{{L}}^2_s(U)$ is as defined in~\eqref{e:def:wk}, $s \in (-\delta,\delta)$ for some $\delta>0$, and $\kappa:(-\delta,\delta) \to V$ is a twice continuously differentiable function in a neighbourhood $V$ of $\kappa_*$ with $\kappa(0)=\kappa_*$. Moreover, it holds $\kappa'(0)=0$, $\kappa''(0)=\frac{2\beta \kappa_*}{3 \varrho_\infty}>0$, and $\varrho_*$ is the only nontrivial solution in a neighbourhood of $(0,\kappa_*)$ in $\ensuremath{{L}}^2_s(U) \times {\mathbb R}$.
Specifically, the error $r:\operatorname{span} [w_{k^*}] \times V \to (\operatorname{span} [w_{k^*}])^{\perp}\subset \ensuremath{{L}}^2_s(U)$ is a map satisfying \begin{gather}\label{eq:rprop} \forall s \in (-\delta,\delta): \quad r(s w_{k^*},\kappa(s)) \in \ensuremath{{L}}^{\infty}(U) \qquad \text{with}\qquad r(0,0)=0 \, , \\ \text{and}\qquad \lim_{|s| \to 0} \frac{\norm*{r(s w_{k^*},\kappa(s))}_2}{|s|}=0 \, .\nonumber \end{gather} \end{thm} \begin{proof}[Proof of~\cref{thm:m2}] The proof of this theorem relies on the Crandall--Rabinowitz theorem~\cite{crandall1971bif}, which for the convenience of the reader is included in Appendix \ref{S:lyapschmid}. Before we proceed, it is convenient to rewrite $D_\varrho\hat{F}$ from~\eqref{eq:dr} as \begin{align}\label{eq:hatFhatT} D_\varrho(\hat{F}(0,\kappa))&= I - \kappa \hat{T} \ , \end{align} where $\hat{T}: \ensuremath{{L}}^2_s(U) \to \ensuremath{{L}}^2_s(U)$ is defined for $w\in \ensuremath{{L}}^2_s(U)$ by \begin{align}\label{def:Th} (\hat{T} w)(x) = \beta \bra*{ -\varrho_\infty (W \star w)(x) + \varrho_\infty^2 \int (W \star w)(y) \dx{y} } \, . \end{align} Using the above expression, one checks that the linear operator $\hat{T}$ is Hilbert--Schmidt with $\norm*{\hat{T}}_{\mathrm{HS}}^2 = \sum_{k \in \mathbb{N}^d}\norm*{\hat{T}w_k}_2^2 < \infty$, where $\{w_k\}_{k \in {\mathbb N}^d}$ is the orthonormal basis of $\ensuremath{{L}}^2_s(U)$ as defined earlier. Thus, $I - \kappa \hat{T}$ is Fredholm by~\cite[Corollary 4.3.8]{davies2007linear}. Since the index of a Fredholm operator is homotopy invariant (cf.~\cite[Theorem 4.3.11]{davies2007linear}), it suffices to note that the mapping $\kappa \mapsto I- \kappa\hat{T}$ is norm-continuous: \begin{align} \norm*{I- \kappa_1 \hat{T} -I + \kappa_2 \hat{T}} = |\kappa_2 - \kappa_1| \norm*{\hat{T}} \, . \end{align} Thus, the index satisfies $\ind\bra{I-\kappa\hat{T}}= \ind\bra*{I}=0$.
We diagonalise $I -\kappa\hat{T}$ with respect to $\set{w_k}_{k\in {\mathbb N}^d}$ \begin{align} \label{eq:diagT} (I-\kappa\hat{T})w_k(x)= \begin{cases} \qquad w_k(x) &, \forall i=1 \dots d: k_i =0 \, , \\ \left( 1 + \beta \kappa \frac{\tilde{W}(k)}{L^{d/2} \Theta(k)}\right) w_k(x) &, \text{ else.} \end{cases} \end{align} Now it is easy to see that if Condition (1) in the statement of the theorem is satisfied, then $\dim \ker (I - \kappa \hat{T})=1$ for $\kappa= \kappa_*$. Indeed, if Condition~(1) is satisfied, we have $\ker(I-\kappa_* \hat{T})= \mathrm{span}[w_{k^*}]$ and Condition~(2) ensures that $\kappa_*$ is positive. Thus Condition~(1) of Theorem \ref{thm:cr} is satisfied. Since $\Ima (I - \kappa \hat{T})$ is closed, we have that $\Ima(I-\kappa \hat{T})=\ker (I - \kappa \hat{T}^*)^{\perp}$, with $\hat{T}^*$ denoting the adjoint. It is easy to check that if $v_0 \in \ker (I-\kappa \hat{T})$, $v_0 \not\equiv 0 $, then $v_0 \in \ker (I-\kappa \hat{T}^*) $. Then, by differentiating \eqref{eq:hatFhatT} in $\kappa$ and using $v_0 \in \ker (I-\kappa \hat{T})$, we get the identity \begin{align} \skp*{D^2_{\varrho\kappa}(\hat{F}(0, \kappa))[v_0],v_0}&= -\skp*{\hat{T} v_0,v_0} =-\kappa^{-1} \norm*{v_0}_2^2 \neq 0 \, , \end{align} since $v_0 \not\equiv 0$ by assumption. This implies that $D^2_{\varrho \kappa}(\hat{F}(0, \kappa))[v_0] \notin \ker (I-\kappa \hat{T}^*)^{\perp}$. Thus, Condition~(2) of \cref{thm:cr} is also satisfied. We can now apply \cref{thm:cr} and use \eqref{eq:dk} to obtain \eqref{eq:branchstructure}. Before proceeding, it is useful to characterise $\Ima (I -\kappa_* \hat{T})$. By using \eqref{eq:diagT}, we can see that we have the following orthogonal decomposition of $\ensuremath{{L}}^2_s(U)$ into \begin{align} \ensuremath{{L}}^2_s(U)= \mathrm{span}[w_{k^*}] \oplus \Ima(I -\kappa_* \hat{T} ) \, .
\end{align} Using the identity~\cite[(I.6.3)]{kielhofer2006bifurcation}, it follows that $\kappa'(0)=0$ provided that $D^2_{\varrho \varrho}\hat{F}(0,\kappa)[w_{k^*},w_{k^*}] \in \Ima (I -\kappa_* \hat{T} )$. Thus it is sufficient to check that \begin{align} \skp*{D^2_{\varrho \varrho}\hat{F}(0,\kappa)[w_{k^*},w_{k^*}],w_{k^*}}= \skp*{\beta \kappa \tilde{W}(k^*)\left[w_{k^*}^2\left(\frac{L}{2}\right)^{d/2}-\left(\frac{1}{2L}\right)^{d/2}\right], w_{k^*}} =0 \ , \end{align} where we have used \eqref{eq:drr} and the fact that $\intT{w_{k^*}^3}=0$. Thus we conclude that $\kappa'(0)=0$. Likewise, from~\cite[(I.6.11)]{kielhofer2006bifurcation}, we also have that \begin{align} \kappa''(0)&= -\frac{\skp*{D^3_{\varrho \varrho \varrho}\hat{F}(0,\kappa_*)[w_{k^*},w_{k^*},w_{k^*}],w_{k^*}}}{3\skp*{D^2_{\varrho \kappa}\hat{F}(0,\kappa_*)[w_{k^*}],w_{k^*}}} = \frac{2 \beta \kappa_* \tilde{W}(k^*)(L/2)^{d/2}}{3 \varrho_\infty \tilde{W}(k^*)(L/2)^{d/2}} = \frac{2\beta \kappa_*}{3\varrho_\infty} >0 \ , \end{align} where we have used \eqref{eq:drk} and \eqref{eq:drrr}. The first two properties of \eqref{eq:rprop} follow from \cref{thm:cr}. To prove the third property in \eqref{eq:rprop}, we observe that \begin{align} \lim_{|s| +|\kappa(s)-\kappa_*| \to 0} \frac{\norm*{r(s w_{k^*},\kappa(s))}_2}{|s| +|\kappa(s)-\kappa_*|}=0 \, . \end{align} Since $\kappa'(0)=0$, we also have $\lim_{|s|\to 0}\frac{|\kappa(s)-\kappa_*|}{|s|}=0$. Thus, we conclude \begin{align} \lim_{|s| \to 0} \frac{\norm*{r(s w_{k^*},\kappa(s))}_2}{|s|}&= \lim_{|s| \to 0} \frac{\norm*{r(s w_{k^*},\kappa(s))}_2}{|s| +|\kappa(s)-\kappa_*|}\left(\lim_{|s|\to 0}\frac{|s| +|\kappa(s)-\kappa_*|}{|s|}\right) =0 \ , \end{align} where we have used the fact from~\cref{thm:cr} that $\kappa$ is continuously differentiable. This completes the proof.
\end{proof} The statement of~\cref{thm:c1bif} becomes more transparent in one dimension: \begin{cor} Fix $U=(-L/2,L/2)$ and consider $\hat{F}: \ensuremath{{L}}^2_s(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2_s(U)$ as defined in~\eqref{eq:kirkwoodmonroe} with $W \in \ensuremath{{L}}^2_s(U)$. Assume that there exists $k^* \in {\mathbb N}$ such that: \begin{enumerate} \item $\operatorname{card}\set*{k : \tilde{W}(k)=\tilde{W}(k^*)}=1$\,. \item $\tilde{W}(k^*) <0$\,. \end{enumerate} Then $(0,\kappa_*)$ is a bifurcation point of $\hat{F}(\varrho,\kappa)=0$, where \begin{align} \kappa_*=-\frac{ (2L)^{\frac{1}{2}} }{\beta \tilde{W}(k^*)} \, ; \end{align} that is, there exists a branch of solutions of the form \begin{align} \varrho_{*}(s)&= \frac{1}{L}+ s \sqrt{\frac{2}{L}}\cos\bra*{\frac{2 \pi k^* x }{L}} + o(s), \quad s \in(-\delta,\delta) \, , \end{align} with all the other properties of the branch being the same as in~\cref{thm:c1bif}. \end{cor} \begin{rem} It should also be noted that one can obtain the existence of bifurcations with higher-dimensional kernels as well, i.e., when $\dim \ker (I - \kappa_* \hat{T}) >1$. Since $\hat{T}$ is self-adjoint, for any eigenvalue its algebraic and geometric multiplicities are the same. From~\cite[Theorem 28.1]{deimling1985nonlinear} it follows that any characteristic value (the reciprocal of an eigenvalue of $\hat{T}$) of odd algebraic multiplicity corresponds to a bifurcation point. This implies that we could replace Condition (1) in~\cref{thm:c1bif} with $\operatorname{card}\set*{k : \frac{\tilde{W}(k)}{\Theta(k)}=\frac{\tilde{W}(k^*)}{\Theta(k^*)}}=m$, where $m$ is odd. However, it is not easy to obtain detailed information about the structure and regularity of the bifurcating branches in this case. \end{rem} \begin{rem}~\label{rem:inj} Condition (1) of \cref{thm:c1bif} is in particular satisfied for an interaction potential $W \in \ensuremath{{L}}^2_s(U)$ if the map $\tilde{W} : {\mathbb N}^d \to {\mathbb R}$ is injective.
In this case, every $k_\alpha \in {\mathbb N}^d$ such that $\tilde{W}(k_\alpha) <0$ corresponds to a unique bifurcation point $ \kappa_\alpha$ of $F(\varrho,\kappa)$ through the relation~\eqref{eq:thetak}. For example, consider the interaction potential $W(x)=x^2/2$. In this case $\tilde{W}$ is injective and therefore the system has infinitely many bifurcation points. On the other hand, when $W(x)=-w_k(x)$ for some $k \in {\mathbb N}^d$, the system has only one bifurcation point. \end{rem} \begin{rem} \label{rem:l2pi} In dimensions higher than one, the space $\ensuremath{{L}}^2_s(U)$ may not be small enough for our purposes, i.e., it is possible that the potential may have additional symmetries. For instance, the potential could be exchangeable, that is, $W(x)=W(\Pi(x))$ for all possible permutations~$\Pi$ of the $d$ coordinates. In this case it is easy to check that $\skp*{W,w_k}= \skp*{W,w_{\Pi(k)}}$ for all $k \in {\mathbb N}^d$. We can then define the equivalence relation $k \sim k'$ if $k' = \Pi(k)$ for some permutation $\Pi$ and write $\pra*{k}$ for the corresponding equivalence class. Thus, the consequence of $W(x)$ having this symmetry is that the value $\tilde{W}(k)/\Theta(k)$ is constant on $\pra{k}$. This implies that the kernel of $D_\varrho\hat{F}$ can never be one-dimensional. We can quotient out this symmetry by defining the space $\ensuremath{{L}}^2_{\operatorname{ex}}(U)=\operatorname{span}\set{w_{\pra{k}}}$, where $\{w_{\pra{k}}\}$ is an orthonormal basis defined by \begin{align} w_{\pra{k}}= \frac{1}{\sqrt{\sharp\pra{k}}}\sum_{\ell \in \pra{k}} w_{\ell}(x), \quad k \in {\mathbb N}^d \, , \end{align} where $\sharp\pra{k}$ denotes the cardinality of the equivalence class $\pra{k}$. Then $\hat{F}:\ensuremath{{L}}^2_{\operatorname{ex}}(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2_{\operatorname{ex}}(U)$ is a well-defined mapping.
Then, the results of \cref{thm:c1bif} carry over to $\hat{F}$ defined this way for $W \in \ensuremath{{L}}^2_{\operatorname{ex}}(U)$ and the corresponding orthonormal basis $\{w_{\pra{k}}\}_{k\in {\mathbb N}}$. In this case the conditions read as follows: \begin{enumerate} \item $\operatorname{card}\set*{ \pra*{k} : \frac{\tilde{W}([k])}{\Theta([k])} = \frac{\tilde{W}([k^*])}{\Theta([k^*])}} = 1 $ \, , \item $\tilde{W}([k^*]) <0$ \, , \end{enumerate} with $\tilde{W}([k])=\tilde{W}(k)$ and $\Theta([k])=\Theta(k)$ for any $k \in \pra{k}$. The bifurcation point is given by \begin{align} \kappa_*=-\frac{L^{\frac{d}{2}} \Theta(\pra{k^*})}{\beta \tilde{W}(\pra{k^*})} \,. \end{align} \end{rem} \begin{rem} Consider the following interaction potential \begin{align} W_s(x)= -\sum_{k=1}^\infty \frac{1}{|k|^{2s}}w_k(x), \quad s \geq 1 \,. \end{align} It is straightforward to check that $W_s(x)$ belongs to $\ensuremath{{H}}^s(U)$ and thus to $C(\bar{U})$. Additionally, $W_s(x) \to -w_1(x)$ uniformly as $s \to \infty$. One can now check that, for any $s>1$, $W_s(x)$ satisfies the conditions of~\cref{thm:c1bif} for all $k \in {\mathbb N}, k \neq 0$, and thus the trivial branch of the system has infinitely many bifurcation points. However, as mentioned in~\cref{rem:inj}, the system with $W(x)=-w_1(x)$ has only one bifurcation point. This can be explained by the fact that, as $s\to \infty$, all bifurcation points of $W_s(x)$ except one are pushed to infinity. This example illustrates, however, that two potentials may ``look'' similar while their associated bifurcation structures are entirely different. Therefore, approximating potentials, even uniformly, by some dense subset may not reveal all the information about the bifurcation structure of the limiting system.
\end{rem} If we now assume that $W$ satisfies Assumption~\eqref{ass:B}, we can see that the zeros of $F(\varrho,\kappa)$ are fixed points of the map $\ensuremath{\mathcal T}$, which by~\cref{prop:tfae} are equivalent to smooth solutions of the stationary McKean--Vlasov equation. \cref{thm:c1bif} also provides us with information about the structure of the branches, i.e., if $w_k(x)$ is the mode whose index $k\in {\mathbb N}^d$ satisfies the conditions of \cref{thm:c1bif}, then to leading order the nontrivial solution is of the form $\varrho_\infty + s w_k(x)$. One may think of this as a ``proto-cluster'', with the nodes of $w_k(x)$ corresponding to the positions of the peaks and valleys of the cluster. So far the analysis in this section has been local. We conclude this section by providing a characterisation of the global structure of the bifurcation diagram for $\hat{F}$ as defined in~\eqref{eq:hatF}. \begin{prop} Let $V$ be an open neighbourhood of $(0,\kappa_*)$ in $\ensuremath{{L}}^2_s(U) \times {\mathbb R}$, where $(0,\kappa_*)$ is a bifurcation point of the map $\hat{F}$ in the sense of~\cref{thm:c1bif}. We denote by $\ensuremath{\mathcal C}_V$ the set of nontrivial solutions of $\hat{F}(\varrho,\kappa)=0$ in $V$ and by $\ensuremath{\mathcal C}_{V,\kappa_*}$ the connected component of $\overline{\ensuremath{\mathcal C}_V}$ containing $(0,\kappa_*)$. Then $\ensuremath{\mathcal C}_{V,\kappa_*}$ has at least one of the following two properties: \begin{enumerate} \item $\ensuremath{\mathcal C}_{V,\kappa_*} \cap \partial V \neq \emptyset$\,. \item $\ensuremath{\mathcal C}_{V,\kappa_*}$ contains an odd number of characteristic values of $\hat{T}$, $(0,\kappa_i) \neq (0,\kappa_*)$, which have odd algebraic multiplicity\,. \end{enumerate} \end{prop} \begin{proof} The proof follows from the direct application of the so-called Rabinowitz alternative~\cite[Theorem 29.1]{deimling1985nonlinear}, which we have included as~\cref{thm:rabalt} for the convenience of the reader.
It is easy to check that the map $\hat{F}$ can be written in the following form, \begin{align} \hat{F}(\varrho,\kappa)= \varrho - \kappa \hat{T}\varrho + G(\varrho,\kappa) \, , \end{align} with $\hat{T}$ as defined in~\eqref{def:Th}, and \begin{align} G(\varrho,\kappa)= \varrho_\infty - \frac{1}{Z}e^{-\beta \kappa W \star \varrho} + \kappa \hat{T} \varrho \, . \end{align} We now need to show that $G$ is completely continuous and $o\bra*{\norm*{\varrho}_2}$ uniformly in $\kappa$ as $\norm*{\varrho}_2 \to 0 $. For the first result, it is enough to show that $G$ is compact, since $\ensuremath{{L}}^2_s(U)$ is reflexive. We establish the following estimate: \begin{align} \norm*{G(\varrho_1,\kappa) - G(\varrho_2,\kappa)}_2 &\leq\frac{1}{Z(\varrho_2)} \norm*{e^{-\beta \kappa W \star \varrho_2} -e^{-\beta \kappa W \star \varrho_1}}_2 + \frac{\norm*{e^{-\beta \kappa W \star \varrho_1}}_\infty}{Z(\varrho_1) Z(\varrho_2)} L^{d/2}\abs*{ Z(\varrho_2)-Z(\varrho_1)} \\ &\quad +\kappa\norm*{\hat{T}(\varrho_2-\varrho_1)}_2 \\ & \leq \frac{\beta \kappa}{L^{d/2}}e^{\beta \kappa \norm*{W}_2 \norm*{\varrho_2}_2 }\bra*{1+ e^{2\beta \kappa \norm*{W}_2 \norm*{\varrho_1}_2 }} \norm*{W \star (\varrho_2-\varrho_1)}_\infty \\ &\quad +\frac{2\beta \kappa}{L^{d/2}} \norm*{W \star (\varrho_2-\varrho_1)}_\infty \,. \end{align} Now, setting $\varrho_2=\varrho$ and $\varrho_1 = \tau\varrho$, where $\tau$ denotes the translation operator $(\tau f)(x)=f(x+\tau)$ for a shift $\tau\in U$, and using that $\tau G(\varrho,\kappa)= G(\tau \varrho,\kappa)$, we obtain \begin{align} \norm*{G(\varrho,\kappa) - \tau G( \varrho,\kappa)}_2& \leq C_\kappa \norm*{W \star\varrho - \tau W \star\varrho }_\infty \, \label{eq:l2comp}. \end{align} Similarly, we can deduce the following estimate by bounding $W \star (\varrho_2-\varrho_1)$ from above: \begin{align} \norm*{G(\varrho_1,\kappa) - G(\varrho_2,\kappa)}_2 &\leq C_\kappa \norm*{W}_2 \norm*{\varrho_1-\varrho_2}_2 \, . \label{eq:spohn} \end{align} In the above two expressions, $C_\kappa$ is a constant which tends to $0$ as $\kappa \to 0$.
Setting $\varrho_2=0$ in~\eqref{eq:spohn}, it follows that $G$ is a bounded map on $\ensuremath{{L}}^2(U)$. Together with~\eqref{eq:l2comp} and the fact that the convolution is uniformly continuous, one can check that $G(A)$ satisfies the conditions of the Kolmogorov--Riesz theorem, where $A$ is any bounded subset of $\ensuremath{{L}}^2_s(U)$. Thus $G$ is compact. The fact that $G$ is $o\bra*{\norm*{\varrho}_2}$ follows by Taylor expanding $e^{-\beta \kappa W \star \varrho}/Z$. One can now check that if Condition (1) of~\cref{thm:c1bif} is satisfied for some $k \in {\mathbb N}^d$, the associated eigenvalue $\kappa^{-1}$ (which could be negative) of $\hat{T}$ is simple, i.e., it has algebraic multiplicity one. This implies that all bifurcation points predicted by \cref{thm:c1bif} are associated with simple eigenvalues of $\hat{T}$. Thus, we can apply \cref{thm:rabalt} to complete the proof. \end{proof} \section{Phase transitions for the McKean--Vlasov equation}\label{S:thermodynamic} We know from~\cref{prop:con} that $\varrho_\infty$ is the unique minimiser of the free energy for $\kappa$ sufficiently small. We are interested in studying under what criteria there is a change in the qualitative structure of the set of minimisers of $\ensuremath{\mathscr F}_\kappa$. For the rest of this section, we will assume that $W$ satisfies Assumption~\eqref{ass:B}, i.e., $W \in \ensuremath{{H}}^1(U)$ and $W$ is bounded below. We build on and extend the notions introduced in~\cite{chayes2010mckean}. The first definition introduces what we mean by a transition point. \begin{defn}[Transition point] \label{defn:tp} A parameter value $\kappa_c >0$ is said to be a \emph{transition point} of $\ensuremath{\mathscr F}_\kappa$ if it satisfies the following conditions: \begin{enumerate} \item For $0<\kappa < \kappa_c$, $\varrho_\infty$ is the unique minimiser of $\ensuremath{\mathscr F}_\kappa(\varrho)$\,.
\item For $\kappa=\kappa_c$, $\varrho_\infty$ is a minimiser of $\ensuremath{\mathscr F}_\kappa(\varrho)$\,. \item For $\kappa>\kappa_c$, there exists some $ \varrho_\kappa \in \mathcal{P}_{\mathup{ac}}^+(U)$, not equal to $\varrho_\infty$, such that $\varrho_\kappa$ is a minimiser of $\ensuremath{\mathscr F}_\kappa(\varrho)$\,. \end{enumerate} \end{defn} In the present work, we are only interested in the first transition point encountered as $\kappa$ increases from~$0$, also called the lower transition point. To convince the reader that the above definition makes sense, we include the following result from~\cite{chayes2010mckean}. \begin{prop}[{\cite[Proposition 2.8]{chayes2010mckean}}]~\label{prop:2.8cp} Assume $W \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$ and suppose that for some $\kappa_T <\infty$ there exists $\varrho_{\kappa_T} \in \mathcal{P}_{\mathup{ac}}^+(U)$ not equal to $\varrho_\infty$ such that: \begin{align} \ensuremath{\mathscr F}_{\kappa_T}(\varrho_{\kappa_T}) \leq \ensuremath{\mathscr F}_{\kappa_T}(\varrho_\infty)\, . \end{align} Then, for all $\kappa>\kappa_T$, $\varrho_\infty$ no longer minimises the free energy. \end{prop} In addition, the following result from~\cite{gates1970van} shows that $H$-stability of the potential is a necessary and sufficient condition for the nonexistence of a transition point. \begin{prop}[{\cite{gates1970van}}] \label{prop:tpex} $\ensuremath{\mathscr F}_\kappa$ has a transition point at some $\kappa=\kappa_c<\infty$ if and only if $W \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$. Additionally, for $\kappa>\kappa_\sharp$, with $\kappa_\sharp$ the point of critical stability as defined in~\eqref{eq:koc} in~\autoref{ssec:lsa}, $\varrho_\infty$ is no longer a minimiser of $\ensuremath{\mathscr F}_\kappa$. \end{prop} From this result, it follows directly that if the system possesses a transition point $\kappa_c$, then $\varrho_\infty$ can no longer be a minimiser beyond this point.
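The $H$-stability criterion can be tested mechanically: in the stable/unstable mode decomposition used throughout, $W$ fails to be $H$-stable exactly when some coefficient $\tilde{W}(k)$ is strictly negative. The sketch below is a one-dimensional numerical illustration of this check; the cosine normalisation $w_k(x)=\sqrt{2/L}\cos(2\pi k x/L)$, the mode cut-off $k_{\max}$, and the sample potentials are illustrative assumptions, not part of the analysis:

```python
import numpy as np

# Sketch: decide whether a 1-d coordinate-wise even potential possesses a
# strictly negative cosine mode. In the decomposition used in the text this
# is exactly the failure of H-stability, hence (by the Gates--Penrose
# criterion quoted above) the existence of a transition point.
L = 1.0
N = 4096
x = -L / 2 + L * np.arange(N) / N          # uniform periodic grid on U

def cosine_modes(W, kmax=32):
    # coefficients against the (assumed) basis w_k = sqrt(2/L) cos(2 pi k x / L)
    dx = L / N
    return np.array([
        np.sum(W(x) * np.sqrt(2.0 / L) * np.cos(2.0 * np.pi * k * x / L)) * dx
        for k in range(1, kmax + 1)
    ])

def has_transition_point(W):
    # some mode strictly negative <=> W not H-stable (up to the truncation kmax)
    return bool(np.min(cosine_modes(W)) < -1e-10)

print(has_transition_point(lambda x: -np.cos(2.0 * np.pi * x / L)))  # True
print(has_transition_point(lambda x: +np.cos(2.0 * np.pi * x / L)))  # False
```

The attractive mode of the first potential makes it fail $H$-stability, so a transition point exists; flipping the sign removes all negative modes.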
We are also interested in understanding how this transition occurs. In the infinite-dimensional setting, it is not always possible to obtain a well-defined order parameter for the system characterising first- and second-order phase transitions in the sense of statistical physics. For this reason, it may be better to define such transitions in terms of a discontinuity in some norm or metric. \begin{defn}[Continuous and discontinuous transition point] \label{defn:ctp} A transition point $\kappa_c >0$ is said to be a \emph{continuous transition point} of $\ensuremath{\mathscr F}_\kappa$ if it satisfies the following conditions: \begin{enumerate} \item For $\kappa=\kappa_c$, $\varrho_\infty$ is the unique minimiser of $\ensuremath{\mathscr F}_\kappa(\varrho)$\,. \item Given any family of minimisers, $\{\varrho_\kappa|\kappa> \kappa_c \}$, we have that \begin{align} \limsup_{\kappa \downarrow \kappa_c} \norm*{\varrho_\kappa-\varrho_\infty}_1=0 \, . \end{align} \end{enumerate} A transition point $\kappa_c$ which is not continuous is said to be \emph{discontinuous}. \end{defn} We now include a series of results from~\cite{chayes2010mckean} that we need for our subsequent analysis. \begin{prop}[\cite{chayes2010mckean}]\label{prop:cpni} $\min\limits_{\varrho \in \mathcal{P}_{\mathup{ac}}(U)} \ensuremath{\mathscr F}_\kappa(\varrho) - \frac{1}{2}\kappa \ensuremath{\mathcal E}(\varrho_\infty,\varrho_\infty)$ is nonincreasing in $\kappa$. \end{prop} \begin{prop}[\cite{chayes2010mckean}]\label{prop:cpdc} Assume $W \in \ensuremath{{\mathbb H}}_s^c$ and that condition (2) of~\cref{defn:ctp} is violated. Then there exists a discontinuous transition point $\kappa_c < \infty$ and some $\varrho_{\kappa_c} \neq \varrho_\infty$ such that $\ensuremath{\mathscr F}_{\kappa_c}(\varrho_{\kappa_c})=\ensuremath{\mathscr F}_{\kappa_c}(\varrho_\infty)$.
\end{prop} \begin{prop}[\cite{chayes2010mckean}]\label{prop:cpc} Assume $W \in \ensuremath{{\mathbb H}}_s^c$ and that the free energy $\ensuremath{\mathscr F}_\kappa$ exhibits a continuous transition point at some $\kappa_c < \infty$. Then it follows that $\kappa_c=\kappa_\sharp$. \end{prop} By combining certain properties of transition points with the previous analysis on critical stability in~\autoref{ssec:lsa}, we obtain more streamlined sufficient conditions for the identification of transition points, which is the basis for the proof of \cref{thm:m3}, or more precisely~\cref{thm:dctp} and~\cref{thm:spgap}. \begin{prop}\label{prop:CharactTP} Let $\ensuremath{\mathscr F}_\kappa$ have a transition point at some $\kappa_c<\infty$ and let $\kappa_\sharp$ denote the point of critical stability defined in~\autoref{ssec:lsa}. \begin{tenumerate} \item If $\varrho_\infty$ is the unique minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}$, then $\kappa_c = \kappa_{\sharp}$ is a continuous transition point. \label{prop:CharactTP:cont} \item If $\varrho_\infty$ is not a global minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}$, then $\kappa_c < \kappa_{\sharp}$ and $\kappa_c$ is a discontinuous transition point. \label{prop:CharactTP:discont} \end{tenumerate} \end{prop} \begin{rem} The statements of~\cref{prop:CharactTP}\ref{prop:CharactTP:cont} and~\cref{prop:CharactTP}\ref{prop:CharactTP:discont} provide only sufficient conditions for the characterisation of transition points. In particular, they are not logical complements of each other, i.e., $\varrho_\infty$ could be a global minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}$ without being the unique one or vice versa. \end{rem} \begin{proof} A consequence of the assumption in the first statement~\ref{prop:CharactTP:cont} of the proposition is that $\varrho_\infty$ is the unique minimiser for all $\kappa\leq \kappa_\sharp$.
Indeed, since $\ensuremath{\mathscr F}_\kappa(\varrho_\infty)-\frac{1}{2}\kappa \ensuremath{\mathcal E}(\varrho_\infty,\varrho_\infty)$ is constant in $\kappa$, \cref{prop:cpni} implies that $\kappa \mapsto \min_{\varrho \in \mathcal{P}_{\mathup{ac}}(U)}\ensuremath{\mathscr F}_\kappa(\varrho) - \ensuremath{\mathscr F}_{\kappa}(\varrho_\infty)$ is nonincreasing. Thus, since $\varrho_\infty$ is a minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}$, it must be a minimiser of $\ensuremath{\mathscr F}_\kappa$ for all $\kappa \leq \kappa_\sharp$. In fact, using~\cref{prop:2.8cp} we can assert that $\varrho_\infty$ is the unique minimiser of $\ensuremath{\mathscr F}_\kappa$ for all $\kappa \leq \kappa_\sharp$. Indeed, if this were not the case, there would exist some $\varrho_{\kappa_T}\in \mathcal{P}_{\mathup{ac}}^+(U)$ not equal to $\varrho_\infty$ such that $\ensuremath{\mathscr F}_{\kappa_T}(\varrho_{\kappa_T}) = \ensuremath{\mathscr F}_{\kappa_T}(\varrho_\infty)$ for some $\kappa_T<\kappa_\sharp$. \cref{prop:2.8cp} then tells us that $\varrho_\infty$ can no longer be a minimiser for any $\kappa>\kappa_T$, which is a contradiction. It follows that conditions (1) and (2) from \cref{defn:tp} are satisfied. That condition (3) is satisfied follows directly from~\cref{prop:tpex}. This implies that $\kappa_\sharp$ satisfies the three conditions of being a transition point. Now, we have to verify condition (2) of \cref{defn:ctp} (condition (1) is already satisfied by the assumption of the proposition). Assume condition (2) does not hold, i.e., there exists a family of minimisers $\{\varrho_\kappa|\kappa> \kappa_c \}$ of $\ensuremath{\mathscr F}_\kappa(\varrho)$ such that $\limsup_{\kappa \downarrow \kappa_c} \norm*{\varrho_\kappa-\varrho_\infty}_1\neq0$. Then we know from~\cref{prop:cpdc} that there exists some $\varrho_{\kappa_c} \in \mathcal{P}_{\mathup{ac}}^+(U)$ not equal to $\varrho_\infty$ which is a minimiser of the free energy at $\kappa=\kappa_c$.
Applied in the present setting with $\kappa_c=\kappa_\sharp$, we would deduce that $\varrho_\infty$ is no longer the unique minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho)$, in contradiction to the assumption of statement~\ref{prop:CharactTP:cont} of the proposition. Thus both conditions (1) and (2) of~\cref{defn:ctp} are satisfied, from which it follows that $\kappa_c=\kappa_\sharp$ is a continuous transition point. To prove the second statement~\ref{prop:CharactTP:discont} of the proposition, let $\varrho$ be such that $\ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho) < \ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho_\infty)$; such a $\varrho$ exists because $\varrho_\infty$ is not a global minimiser of $\ensuremath{\mathscr F}_{\kappa_\sharp}$. Then, for any $\kappa$ close enough to $\kappa_\sharp$, we also have $\ensuremath{\mathscr F}_{\kappa}(\varrho) < \ensuremath{\mathscr F}_{\kappa}(\varrho_\infty)$. Hence, by a combination of \cref{prop:2.8cp} and \cref{prop:tpex}, there exists a transition point $\kappa_c < \kappa_\sharp$ and, in particular, $\kappa_\sharp$ cannot be a transition point. From~\cref{prop:cpc}, if $\kappa_c$ is a continuous transition point of $\ensuremath{\mathscr F}_{\kappa}$, then necessarily $\kappa_c =\kappa_\sharp$. This implies that $\kappa_c< \kappa_\sharp$ cannot be a continuous transition point. \end{proof} Before proceeding to present the main results of this section, we remind the reader that for the rest of the paper $\kappa_c$ denotes a transition point, $\kappa_\sharp$ denotes the point of critical stability, and $\kappa_*$ denotes a bifurcation point. \subsection{Discontinuous transition points} We provide below a characterisation of potentials which exhibit discontinuous transition points, which proves~\cref{thm:m3}\ref{thm:m3a}.
\begin{defn}\label{def:del} Assume $W\in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$ and let $K^{\delta}:=\left\{k' \in {\mathbb N}^d\setminus\set{\mathbf{0}}: \frac{\tilde{W}(k')}{\Theta(k')}\leq \min_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \frac{\tilde{W}(k)}{\Theta(k)} +\delta \right\}$ for some $\delta \geq 0$. We define $\delta_*$ to be the smallest value of $\delta$, if it exists, for which the following condition is satisfied: \begin{equation}\label{eq:c1} \text{there exist } k^a,k^b, k^c \in K^{\delta_*}, \text{ such that } k^a=k^b + k^c \, \tag{\textbf{C1}}. \end{equation} \end{defn} \begin{thm}\label{thm:dctp} Let $W(x)$ be as in~\cref{def:del}. Then, if $\delta_*$ exists and is sufficiently small, $\ensuremath{\mathscr F}_\kappa$ exhibits a discontinuous transition point at some $\kappa_c<\kappa_\sharp$. \end{thm} \begin{proof} We know already from~\cref{prop:tpex} that the system possesses a transition point $\kappa_c$. We are going to use \cref{prop:CharactTP}\ref{prop:CharactTP:discont} and construct a competitor $\varrho \in \mathcal{P}_{\mathup{ac}}^+(U)$ which has a lower value of the free energy than $\varrho_\infty$ at $\kappa=\kappa_\sharp$. Let \begin{align} \varrho= \varrho_\infty\bra[\bigg]{1 + \epsilon \sum_{k \in K^{\delta_*} }w_{k}} \in \mathcal{P}_{\mathup{ac}}^+(U) \ , \end{align} for some sufficiently small $\epsilon >0$. We denote by $|K^{\delta_*}|$ the cardinality of $K^{\delta_*}$, which is necessarily finite as $W \in \ensuremath{{L}}^2(U)$.
Expanding about $\varrho_\infty$ we obtain \begin{align} \beta^{-1}S(\varrho)&= \beta^{-1}\left(S(\varrho_\infty) +\frac{|K^{\delta_*}|}{2}\varrho_\infty \epsilon^2 -\frac{\varrho_\infty}{3}\intom{\epsilon^3\bra[\bigg]{\sum_{k \in K^{\delta_*} }w_{k}}^3 } + o(\epsilon^3) \right) \\ \textrm{and} \qquad \frac{\kappa_\sharp}{2}\mathcal{E}(\varrho,\varrho)&\leq\frac{\kappa_\sharp}{2}\mathcal{E}(\varrho_\infty,\varrho_\infty) + \frac{\kappa_\sharp\epsilon^2 |K^{\delta_*}| \varrho_\infty^2}{2} \min_{k \in {\mathbb N}^d \setminus\set{\mathbf{0}}} \frac{\tilde{W}(k)}{\Theta(k)} L^{d/2} +\frac{\kappa_\sharp\epsilon^2 |K^{\delta_*}| \delta_*}{2 L^{3d/2}} \, . \end{align} Using the fact that $\kappa_\sharp \min\limits_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}}\frac{\tilde{W}(k)}{\Theta(k)}=-\beta^{-1}L^{d/2}\ $, we obtain, \begin{align} \ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho)&\leq\ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho_\infty) -\frac{\epsilon^3 \varrho_\infty}{3 \beta}\intom{\bra[\bigg]{\sum\limits_{k \in K^{\delta_*} }w_{k}}^3 } - \frac{\epsilon^2 \delta_* \varrho_\infty |K^{\delta_*}| }{2\beta} \bra*{\min\limits_{k \in {\mathbb N}^d \setminus\set{\mathbf{0}}} \frac{\tilde{W}(k)}{\Theta(k)}}^{-1} + o(\epsilon^3) \, . \end{align} Setting $\epsilon=\delta_*^{\frac{1}{2}}$ (if $\delta_* >0$, otherwise we stop here), we obtain \begin{align} \ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho)&\leq\ensuremath{\mathscr F}_{\kappa_\sharp}(\varrho_\infty) -\frac{\delta_*^{\frac{3}{2}}\varrho_\infty}{3 \beta}\intom{\bra[\bigg]{\sum\limits_{k \in K^{\delta_*} }w_{k}}^3 } + \frac{\delta_*^{2}\varrho_\infty |K^{\delta_*}| }{2\beta} \abs*{\min\limits_{k \in {\mathbb N}^d \setminus\set{\mathbf{0}}} \frac{\tilde{W}(k)}{\Theta(k)}}^{-1} + o(\delta_*^{\frac{3}{2}}) \, . 
\label{eq:defup} \end{align} One can now check that under condition~\eqref{eq:c1}, it holds that \[ \intom{ \bra[\bigg]{\sum\limits_{k \in K^{\delta_*}}w_{k}}^3 } > a>0 \, , \] where the constant $a$ is independent of $\delta_*$. Indeed, the cube of the sum of $n$ numbers $a_i$, $i=1, \dots, n$, consists of only three types of terms, namely $a_i^3$, $a_i^2 a_j$, and $a_i a_j a_k$. Setting $a_i=w_{s(i)}$, with $s(i) \in K^{\delta_*}$, one can check that the first type of term always integrates to zero. The other two take nonzero, and in fact positive, values if and only if condition~\eqref{eq:c1} is satisfied. This follows from the fact that \[ \int_{-\pi}^{\pi} \,\cos({\ell x})\cos(m x) \cos(nx) \! \dx{x}= \frac{\pi}{2}\bra*{\delta_{\ell+m,n}+\delta_{m+n,\ell} + \delta_{n+\ell,m}}\, . \] Thus, for $\delta_*$ sufficiently small, using that $|K^{\delta_*}| \geq 2$ and that $|K^{\delta_*}|$ is nonincreasing as $\delta_*$ decreases, $\varrho$ has smaller free energy than $\varrho_\infty$, so that $\varrho_\infty$ is not a minimiser at $\kappa=\kappa_\sharp$. \end{proof} \begin{rem} The case $\delta_*=0$ of the above result can be thought of as the pure resonance case, in which the set $K^0$ is the set of all resonant modes. Similarly, the case of $\delta_*$ small but positive can be thought of as the near resonance case. \end{rem} The corollary below tells us that if we have a sequence of potentials whose Fourier modes grow closer to each other, then the associated free energies eventually exhibit a discontinuous transition point, as long as the potentials do not lose mass too fast. \begin{cor}\label{cor:gamma} Let $\{W^n\}_{n \in {\mathbb N}} \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$ be a sequence of interaction potentials such that $\delta_*(n) \to 0$ as $n \to \infty$, where $\delta_*$ is as defined in~\cref{def:del}.
Assume further that for all $n$ greater than some $N \in {\mathbb N}$, there exists a constant $C>0$ such that $\abs*{\min\limits_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \frac{\tilde{W^n}(k)}{\Theta(k)}} \geq C \delta_*(n)^{ \gamma}$ for some $\gamma<1/2$. Then, for $n$ sufficiently large, the associated free energy $\ensuremath{\mathscr F}^n_\kappa(\varrho)$ possesses a discontinuous transition point at some $\kappa_c^n < \kappa_\sharp^n$. \end{cor} \begin{proof} We return to estimate~\eqref{eq:defup} from the proof of~\cref{thm:dctp}: \begin{align} \ensuremath{\mathscr F}^n_{\kappa_\sharp}(\varrho)&\leq\ensuremath{\mathscr F}^n_{\kappa_\sharp}(\varrho_\infty) -\frac{\delta_*^{\frac{3}{2}}\varrho_\infty}{3 \beta}\intom{\bra[\bigg]{\sum\limits_{k \in K^{\delta_*} }w_{k}}^3 } + \frac{\delta_*^{2}\varrho_\infty |K^{\delta_*}| }{2\beta} \abs*{\min\limits_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}} } \frac{\tilde{W^n}(k)}{\Theta(k)}}^{-1} + o(\delta_*^{\frac{3}{2}}) \, , \end{align} where we have suppressed the dependence of $\delta_*$ on $n$. We also note that the error term is independent of the potential $W^n$. Using our assumption on the potential (for $n>N$), we have \begin{align} \ensuremath{\mathscr F}^n_{\kappa_\sharp}(\varrho)&<\ensuremath{\mathscr F}^n_{\kappa_\sharp}(\varrho_\infty) -\frac{\delta_*^{\frac{3}{2}}\varrho_\infty}{3 \beta}\intom{\bra[\bigg]{\sum\limits_{k \in K^{\delta_*} }w_{k}}^3 } + \frac{\delta_*^{2- \gamma}\varrho_\infty |K^{\delta_*}| }{2\beta} + o(\delta_*^{\frac{3}{2}}) \, . \end{align} Since $\gamma<1/2$ and $\delta_* \to 0 $ as $n \to \infty$, the result follows. \end{proof} To conclude our discussion of discontinuous transition points, we present the following corollary to provide some more intuition about the types of interaction potentials that exhibit a discontinuous transition point.
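The resonance mechanism behind~\cref{thm:dctp} rests on the triple-cosine identity quoted in its proof. As a quick numerical sanity check (an illustrative Python sketch, not part of the formal argument):

```python
import numpy as np

# Check: int_{-pi}^{pi} cos(lx)cos(mx)cos(nx) dx
#        = (pi/2)(delta_{l+m,n} + delta_{m+n,l} + delta_{n+l,m}).
# The trapezoidal rule on a uniform periodic grid is exact (up to rounding)
# for trigonometric polynomials of degree below the grid size.
N = 4096
x = -np.pi + 2 * np.pi * np.arange(N) / N

def triple(l, m, n):
    return np.sum(np.cos(l * x) * np.cos(m * x) * np.cos(n * x)) * (2 * np.pi / N)

def predicted(l, m, n):
    return (np.pi / 2) * ((l + m == n) + (m + n == l) + (n + l == m))

for modes in [(1, 2, 3), (2, 3, 5), (1, 1, 2), (1, 2, 4), (3, 4, 5)]:
    assert abs(triple(*modes) - predicted(*modes)) < 1e-10
```

In particular, a cubic term $w_{k^a} w_{k^b} w_{k^c}$ integrates to a positive value precisely when a resonance $k^a = k^b + k^c$ as in condition~\eqref{eq:c1} holds.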
\begin{cor}\label{cor:delta} Let $\{W^n\}_{n \in {\mathbb N}} $ be a sequence of interaction potentials with $ \norm{W^n}_1 =C>0$ for all $ n \in {\mathbb N}$ such that $W^n\to -C \delta_0$ in the sense of distributions as $n \to \infty$. Then, for $n$ large enough, the associated free energy $\ensuremath{\mathscr F}^n_\kappa(\varrho)$ possesses a discontinuous transition point at some $\kappa_c^n < \kappa_\sharp^n$. \end{cor} \begin{proof} Note first that we have not included the assumption $W^n \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$, as eventually this must be the case if the potentials converge to a negative Dirac measure. Now we just need to check that the other conditions of~\cref{cor:gamma} hold true. We have the following estimate \begin{align} \tilde{W^n}(k) \geq -C N_k \implies \frac{\tilde{W^n}(k)}{\Theta(k)} \geq -C L^{-d/2} \, \label{eq:minWb}, \end{align} for all $k \in {\mathbb N}^d \setminus \set{\mathbf{0}}$. From the convergence to the Dirac measure it follows that for any $\epsilon>0$ we can find an $N$ large enough such that $\frac{\tilde{W^n}(k)}{\Theta(k)},\frac{\tilde{W^n}(2 k)}{\Theta(2 k)} \in \bra*{-C L^{-d/2},-C L^{-d/2}+\epsilon}$ for all $n >N$, for some $k \in {\mathbb N}^d \setminus \set{\mathbf{0}}$. This and~\eqref{eq:minWb} tell us that $\delta_* \leq \epsilon$ and, since $\epsilon$ is arbitrary, $\delta_* \to 0$ as $n \to \infty$. From similar arguments we assert that, for all $n>N$, $\bra[\bigg]{\min\limits_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \frac{\tilde{W^n}(k)}{\Theta(k)}}< -C L^{-d/2}+\epsilon$. Thus we have that $\abs*{\bra[\bigg]{\min\limits_{k \in {\mathbb N}^d\setminus\set{\mathbf{0}}} \frac{\tilde{W^n}(k)}{\Theta(k)}}}> \frac{C}{L^{d/2}}-\epsilon$ for $n >N$. Since the conditions of~\cref{cor:gamma} are satisfied, we have the desired result.
\end{proof} \begin{rem} As examples of potentials that satisfy the conditions of~\cref{cor:delta}, we have the negative Dirichlet kernel $W^n(x)=-1 -2 \sum_{k=1}^{n} w_k(x)$, the negative Fej\'er kernel $W^n(x)=-\frac{1}{n}\bra*{\frac{1- w_n(x)}{1- w_1(x)}}$, and any appropriately scaled negative mollifier. \end{rem} \subsection{Continuous transition points} We now present a couple of technical lemmas, starting with a functional inequality that bounds the defect in the Gibbs inequality from below by the size of individual Fourier modes. These will be useful for the characterisation of continuous transition points provided in~\cref{thm:spgap} and, in particular, in the proof of \cref{thm:m3}\ref{thm:m3b}. \begin{lem} \label{lem:entdef} Let $(\Omega, \Sigma,\mu)$ be a probability space and $\left\{ w_k\right\}_{k\in {\mathbb N}}$ be any orthonormal basis for $\ensuremath{{L}}^2(\Omega,\mu)$. Assume that $f\in \ensuremath{{L}}^2(\Omega, \mu)$ is a probability density with respect to $\mu$, that is, $f$ is nonnegative and $\int f \dx{\mu}=1$. Then we have, for any $b\in {\mathbb R}$ and any $k\in {\mathbb N}$, the estimate \begin{align}\label{RelEnt:L2:bound} \ensuremath{\mathcal H}(f \mu | \mu ) \geq -\log \int_{\Omega} \exp\bra[\big]{ b \skp{f,w_k }_\mu w_k(x)}\dx{\mu} + b \abs{\skp{ f,w_k}_\mu}^2 \ . \end{align} In particular, let $\Omega= U$, $\mu=\varrho_\infty$, and let $w_k$ be as defined in~\eqref{e:def:wk}. Moreover, for any $k \in {\mathbb Z}^d\setminus\set{\mathbf{0}}$ let $n=n(k) = \abs*{\set{i: k_i \ne 0}}$ denote the number of nonzero entries.
Then, there exists a strictly increasing function $\ensuremath{\mathcal G}: {\mathbb R}^+\to {\mathbb R}^+$ with $\ensuremath{\mathcal G}(0)=0$ such that \begin{align}\label{H:bound} \mathcal{H}(\varrho| \varrho_\infty)- C(n(k)) \tfrac{L^d }{2}|\tilde{\varrho}(k)|^2 \geq \ensuremath{\mathcal G}(|\tilde{\varrho}(k)|) \ , \end{align} where the constant $C(n) > 0$ is given by $C(1)=C(2)=1$ and for $n>2$ by \begin{equation} C(n) = \frac{(n/2)^n}{(n-1)^{n-1}} < 1 \, . \end{equation} \end{lem} \begin{defn} \label{defn:astable} Assume that $W \in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c $ has one dominant negative mode, i.e., there exists a unique $ k^\sharp \in {\mathbb N}^d$ such that $\frac{\tilde{W}(k^\sharp)}{\Theta(k^\sharp)}=\min_{k \in {\mathbb N}^d}\frac{\tilde{W}(k)}{\Theta(k)}$ (as defined in~\eqref{eq:poc}). We define the $\alpha$-stabilised potential $W_\alpha(x)$ as follows \begin{align} W_\alpha(x)=\skp*{W,w_{k^\sharp}}w_{k^\sharp}(x)+\alpha(W_\ensuremath{\mathup{u}}(x)- \skp*{W,w_{k^\sharp}}w_{k^\sharp}(x)) + W_\ensuremath{\mathup{s}}(x) \, , \end{align} where $\alpha \in [0,1]$, $W_\ensuremath{\mathup{s}}(x), W_\ensuremath{\mathup{u}}(x)$ are as defined in~\cref{def:Hstable}, and $W_1(x)=W(x)$. \end{defn} The above definition puts into context the discussion around Figure~\ref{fig:dcctp}(a) in~\autoref{S:intro}, i.e., the $\alpha$-stabilised potential $W_\alpha$ pushes all negative modes except the dominant one into some small neighbourhood of $0$. We define the fixed point equation associated with the interaction potential $W_\alpha$ to be \begin{align} F_\kappa(\varrho,\alpha)= \varrho(x)- \frac{1}{Z}e^{-\beta \kappa W_\alpha \star \varrho} \,. \end{align} \begin{lem}\label{lem:lbfour} Let $W_\alpha(x)$ be as in \cref{defn:astable} and let $\ensuremath{\mathcal C} \subset \mathcal{P}_{\mathup{ac}}^+(U)$ denote the set of nontrivial solutions of $F_{\kappa_\sharp}(\varrho,\alpha)=0$ for $\alpha \in [0,\alpha^*) \subset [0,1]$.
Then, for $\alpha^*$ sufficiently small, we have the uniform lower bound $\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k^\sharp))|^2 >c$ for all $\varrho \in \ensuremath{\mathcal C}$ and for some $c>0$ independent of $\alpha \in [0,\alpha^*)$. \end{lem} We are now in a position to give the precise statement of \cref{thm:m3}\ref{thm:m3b} and prove it. We present the proofs of~\cref{lem:entdef} and~\cref{lem:lbfour} after the proof of~\cref{thm:spgap}. \begin{thm}\label{thm:spgap} Let $W_\alpha(x)$ be as in \cref{defn:astable} such that $\Theta(k^\sharp) \leq 2$, where $\Theta(k)$ is as defined in~\eqref{e:def:thetak}. Assume further that $W_\ensuremath{\mathup{u}}$ and $W_\ensuremath{\mathup{s}}$ are bounded below. Then, for $\alpha$ sufficiently small, the system exhibits a continuous transition point at $\kappa_c=\kappa_\sharp$. \end{thm} \begin{proof} By~\cref{prop:CharactTP}\ref{prop:CharactTP:cont}, it is sufficient to show that at the point of critical stability $\kappa_\sharp$, i.e., at \begin{align} \kappa_\sharp=\kappa_c= -\frac{L^{\frac{d}{2}}\Theta(k^\sharp)}{\beta \tilde{W_\alpha}(k^\sharp)}=-\frac{L^{\frac{d}{2}}\Theta(k^\sharp)}{\beta \tilde{W}(k^\sharp)} \, , \end{align} the uniform state $\varrho_\infty$ is the unique minimiser, for $\alpha$ small enough. Let $\varrho$ be any solution of $F_{\kappa_\sharp}(\varrho,\alpha)=0$, i.e., a critical point of $\ensuremath{\mathscr F}_{\kappa_\sharp}$ (cf.~\cref{prop:tfae}).
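As an aside, nontrivial solutions of the fixed point equation $F_\kappa(\varrho,\alpha)=0$ can also be explored by direct iteration. The following toy sketch (with an assumed potential $W(x)=-\cos(2\pi x)$ on the unit torus and $\beta=1$, conventions under which the uniform state loses linear stability at $\kappa=2$) illustrates the passage from the uniform to a clustered state:

```python
import numpy as np

# Iterate rho -> Z^{-1} exp(-beta*kappa*(W * rho)) on the unit torus for the
# assumed toy potential W(x) = -cos(2*pi*x), beta = 1 (illustration only).
N = 512
h = 1.0 / N
x = h * np.arange(N)
W_hat = np.fft.rfft(-np.cos(2 * np.pi * x))

def iterate(kappa, n_iter=500):
    rho = 1.0 + 0.1 * np.cos(2 * np.pi * x)          # perturbed uniform state
    for _ in range(n_iter):
        conv = np.fft.irfft(np.fft.rfft(rho) * W_hat, n=N) * h   # (W * rho)(x)
        rho = np.exp(-kappa * conv)
        rho /= rho.sum() * h                          # normalise to a density
    return rho

def first_mode(rho):
    # modulus of the first Fourier coefficient, invariant under translations
    return abs(np.sum(rho * np.exp(-2j * np.pi * x)) * h)

assert first_mode(iterate(1.0)) < 1e-8   # below the threshold: uniform state
assert first_mode(iterate(4.0)) > 0.5    # above the threshold: clustered state
```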
Then we have \begin{align} \ensuremath{\mathscr F}(\varrho)-\ensuremath{\mathscr F}(\varrho_\infty)&= \beta^{-1}\mathcal{H}(\varrho|\varrho_\infty) + \frac{\kappa_\sharp}{2}\mathcal{E}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \\ &= \beta^{-1}\mathcal{H}(\varrho|\varrho_\infty) + \frac{\kappa_\sharp}{2} L^{d/2} \frac{\tilde{W}(k^\sharp)}{\Theta(k^\sharp)}\left(\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k^\sharp))|^2 \right) \\ \nonumber &\qquad + \frac{\kappa_\sharp}{2} L^{d/2} \sum\limits_{k \in {\mathbb N}^d, k \neq k^\sharp }\frac{\tilde{W_\alpha}(k)}{\Theta(k)}\left(\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k))|^2 \right) \, . \end{align} We can translate $\varrho$ w.l.o.g.\ so that $\tilde{\varrho}(\sigma(k^\sharp))=0$ for all $\sigma \in \mathrm{Sym}(\Lambda)\setminus\set{e}$, and we may discard all terms with positive $\tilde{W_\alpha}(k)$. A consequence of this is that $|\tilde{\varrho}(k^\sharp)|^2 = \sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k^\sharp))|^2$. Thus we obtain \begin{align} \ensuremath{\mathscr F}(\varrho)-\ensuremath{\mathscr F}(\varrho_\infty)&\geq \beta^{-1}\left(\mathcal{H}(\varrho|\varrho_\infty) - \frac{L^d}{2}|\tilde{\varrho}(k^\sharp)|^2\right) \\ &\qquad +\frac{\beta^{-1}L^d}{2} \sum\limits_{k \in {\mathbb N}^d, k \neq k^\sharp }\bra*{\frac{\tilde{W_\alpha}(k)\Theta(k^\sharp)}{\Theta(k)\tilde{W}(k^\sharp)}}_-\left(\sum\limits_{ \sigma \in \mathrm{Sym}(\Lambda)} |\tilde{\varrho}(\sigma(k))|^2 \right) \, .
\end{align} Since $\tilde{W}_\alpha(k)=\alpha\tilde{W}(k)$ for all $k \in {\mathbb N}^d, k \neq k^\sharp$ with $\tilde{W}(k)<0$, and by definition $\tilde{W}(k)/\Theta(k) \geq \tilde{W}(k^\sharp)/\Theta(k^\sharp)$, we obtain the estimate \begin{align} \ensuremath{\mathscr F}(\varrho)-\ensuremath{\mathscr F}(\varrho_\infty)&\geq \beta^{-1} \left(\mathcal{H}(\varrho|\varrho_\infty) - \frac{ L^d}{2}|\tilde{\varrho}(k^\sharp)|^2\right) -\frac{ \alpha\beta^{-1} L^d}{2} \sum\limits_{k \in {\mathbb N}^d, k \neq k^\sharp}\left(\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k))|^2 \right) \, . \end{align} We apply \cref{lem:entdef} to the first term on the right-hand side: \begin{align} \ensuremath{\mathscr F}(\varrho)-\ensuremath{\mathscr F}(\varrho_\infty)&> \beta^{-1} \bra*{ \ensuremath{\mathcal G}(|\tilde\varrho(k^\sharp)|) - \frac{\alpha L^d}{2} \norm*{\varrho}_2^2} \, . \end{align} Here, we have used the fact that the assumption $ \Theta(k^\sharp) \leq 2$ is equivalent to $n(k^\sharp)\leq 2$, where $n(k^\sharp)$ is the number of nonzero components of $k^\sharp$, as defined in the statement of~\cref{lem:entdef}. Now, we use the result of~\cref{lem:lbfour} with the constant $c$ and the monotonicity of the function $\ensuremath{\mathcal G}$ to further estimate \begin{align} \ensuremath{\mathscr F}(\varrho)-\ensuremath{\mathscr F}(\varrho_\infty)&> \beta^{-1}\left( \ensuremath{\mathcal G}(c) -\frac{ \alpha L^d}{2}\norm*{\varrho}_2^2\right) \, , \end{align} where $c$ is precisely the constant from \cref{lem:lbfour} for $\alpha \in [0,\alpha^*)$. Since $\varrho$ is a zero of $F_{\kappa_\sharp}(\cdot,\alpha)$, we have the following estimate \begin{align} \norm*{\varrho}_2^2 \leq \norm*{\varrho}_\infty \stackrel{\eqref{eq:linfT}}{\leq} \exp \bra*{\beta \kappa_\sharp \bra*{\norm*{W_{\alpha-}}_\infty + \norm*{W_{\alpha}}_1 }} \leq \exp \bra*{\beta \kappa_\sharp \bra*{\norm*{W_{\alpha-}}_\infty + L^{-d}\norm*{W_{\alpha}}_2 }} \, .
\end{align} If we restrict $\alpha$ to $[0,\alpha^*)$ as in~\cref{lem:lbfour}, we can obtain the following estimates on the norms of $W_\alpha$: \begin{align} \norm*{W_{\alpha -}}_\infty &\leq \norm*{W_{\ensuremath{\mathup{s}}-}}_\infty + \norm*{W_{\ensuremath{\mathup{u}}-}}_\infty +(\alpha +1 )|\tilde{W}(k^\sharp)| \\ & \leq \norm*{W_{\ensuremath{\mathup{s}} -}}_\infty + \norm*{W_{\ensuremath{\mathup{u}} -}}_\infty +(\alpha^* +1 )|\tilde{W}(k^\sharp)| \,, \\ \text{and}\qquad \norm*{W_{\alpha}}_2^2 &= \norm*{W_\ensuremath{\mathup{s}}}_2^2 + \alpha^2\norm*{W_\ensuremath{\mathup{u}}}_2^2 + (1-\alpha^2) |\tilde{W}(k^\sharp)|^2 \\ &\leq \norm*{W_\ensuremath{\mathup{s}}}_2^2 + (\alpha^*)^2\norm*{W_\ensuremath{\mathup{u}}}_2^2 + |\tilde{W}(k^\sharp)|^2 \, . \end{align} Hence, for $\alpha \in [0,\alpha^*)$, we have $\norm*{\varrho}_2^2<c_1$ for some positive constant $c_1$ independent of $\alpha$. Thus, for $\alpha < \frac{2\ensuremath{\mathcal G}(c)}{L^dc_1}$, the result holds. \end{proof} \begin{proof}[Proof of~\cref{lem:entdef}] Via its Fenchel dual, the relative entropy admits the formulation \begin{align}\label{eq:RelEntDual} \ensuremath{\mathcal H}(f \mu | \mu) = \sup_{g \in \ensuremath{{L}}^2(\Omega,\mu)} \set*{ \int f g \dx\mu : \int e^{g} \dx\mu \leq 1 } \, . \end{align} From here a lower bound is obtained by choosing, for $b\in {\mathbb R}$ arbitrary, \[ g(x) =b \skp{f,w_k}_\mu w_k(x) - \log \int \exp\bra[\big]{b \skp{f,w_k}_\mu w_k(x)} \dx{\mu}. \] It is easy to check that $\int e^{g} \dx\mu = 1$ and hence $g$ is admissible in~\eqref{eq:RelEntDual}. The estimate~\eqref{RelEnt:L2:bound} follows by plugging this specific choice of $g$ into~\eqref{eq:RelEntDual}: \begin{align}\label{RelEnt:L2:bound:p} \ensuremath{\mathcal H}(f \mu | \mu) & \geq -\log \int \exp\bra[\big]{b \skp{f,w_k}_\mu w_k(x)} \dx{\mu} + b |\skp{f,w_k}_\mu|^2 \,.
\end{align} In the special case $\Omega=U$ and $\mu=\varrho_\infty$, setting $f= \frac{\varrho}{\varrho_\infty}$, we obtain from~\eqref{RelEnt:L2:bound:p} the lower bound \begin{align} \mathcal{H}(\varrho|\varrho_\infty) \geq -\log \int \exp\bra*{b \tilde{\varrho}(k)w_k(x)} \varrho_{\infty}\dx{x} + b |\tilde{\varrho}(k)|^2 \, . \end{align} We can pick $b=\alpha L^d$ for some $\alpha>0$ and set $y=L^{d/2} 2^{n/2} \tilde{\varrho}(k)$. We thus obtain, \begin{align} \mathcal{H}(\varrho|\varrho_\infty) \geq \frac{\alpha y^2 }{2^n} - \log\bra[\Bigg]{\varrho_\infty \intom{e^{\alpha y \prod_{i=1}^{n} \cos(2 \pi k_i x_i /L) }} } \ , \end{align} where the $w_{k_i}(x_i)$ are as defined previously and $n \geq 1$ represents the number of $k_i \neq 0$. Setting $x_i=\frac{L}{2 \pi k_i} \theta_i$ for all $k_i \neq 0$, we arrive at \begin{equation} \mathcal{H}(\varrho|\varrho_\infty) \geq \frac{\alpha y^2 }{2^n} - \log \bra*{\frac{1}{2^n \pi^n} \int_{[0,2 \pi]^n} \! \exp\bra*{\alpha y \prod_{i=1}^{n} \cos (\theta_i)} \prod\limits_{j=1}^{n} \dx{\theta_j}} \, . \label{H:bound:p0} \end{equation} We introduce the function \begin{align} \mathcal{I}_n(z)&=\frac{1}{2^n \pi^n} \int_{[0,2 \pi]^n} \exp\bra*{z \prod_{i=1}^{n} \cos (\theta_i)} \prod\limits_{j=1}^{n} \dx{\theta_j} = \sum_{l=0}^\infty \frac{z^{2l}}{(2l)!} \bra*{\frac{1}{\pi} \int_0^{\pi} \cos(\theta)^{2l} \dx{\theta}}^n \\ &= \sum_{l=0}^\infty z^{2l} \frac{((2l)!)^{n-1}}{(l!)^{2n} 2^{2 l n} } \, . \end{align} We will show that \begin{equation}\label{e:mathcalI:lb} \tilde\ensuremath{\mathcal G}(z) =\frac{\lambda z^2}{2^{n+1}} - \log \mathcal{I}_n(z) \qquad\text{with}\qquad \lambda = \lambda(n) = \begin{cases} 1 &, n\in \set{1,2} \\ \frac{(n-1)^{n-1}}{(n/2)^n} &, n >2 \end{cases} \, , \end{equation} is strictly increasing in $z$ with $\tilde\ensuremath{\mathcal G}(0)=0$. 
Once we have shown~\eqref{e:mathcalI:lb}, the proof concludes by combining this with~\eqref{H:bound:p0} to deduce that \[ \mathcal{H}(\varrho|\varrho_\infty) - \tilde\ensuremath{\mathcal G}(\alpha y) \geq \frac{\alpha y^2 }{2^n} - \frac{\lambda \alpha^2 y^2}{2^{n+1}} = \bra*{2- \alpha \lambda}\alpha \frac{y^2}{2^{n+1}} \stackrel{\alpha=\lambda^{-1}}{=} \frac{y^2}{\lambda \, 2^{n+1}} , \] from where the result~\eqref{H:bound} follows by setting $\ensuremath{\mathcal G}(y) = \tilde\ensuremath{\mathcal G}(y/\lambda)$. It remains to show~\eqref{e:mathcalI:lb}. For its validity, it is sufficient to note that $\ensuremath{\mathcal I}_n(0)=1$ and to show that $\exp\bra*{\lambda z^2/ 2^{n+1}}/\mathcal{I}_n(z)$ is strictly increasing in $z$. A sufficient condition for the monotonicity of this quotient is that the quotients of the coefficients of the individual power series expansions of numerator and denominator are also increasing (cf.~\cite[Theorem 4.4]{Heikkala2009},~\cite{Biernacki1955}). First of all, we observe that the odd coefficients are zero. We are left to investigate \begin{align} \frac{\bra*{\exp\bra*{\lambda z^2/2^{n+1}}}_{2l}}{(\mathcal{I}_n(z))_{2l}} \ &= \ \frac{(l!)^{2n} 2^{2ln} \lambda^{l}}{((2l)! )^{n-1} 2^{(n+1)l} l!} = \frac{(l!)^{2n-1} 2^{l(n-1)}\lambda^{l}}{((2l)!)^{n-1}}\\ &= \begin{cases} \hspace{.5in}l! &, n= 1 \\ \bra*{ \frac{(l!)^{1+\frac{n}{n-1}} 2^l \lambda^{l/(n-1)}}{(2l)!}}^{n-1} =: (a_l)^{n-1} &, n>1 \end{cases}\, . \end{align} In the case $n=1$, the monotonicity follows from the above representation. For $n>1$, we consider \begin{align} \frac{a_{l+1}}{a_l} = \frac{\lambda^{1/(n-1)} (l+1)^{1+\frac{n}{n-1}} 2}{(2l+2)(2l+1)} = \frac{\lambda^{1/(n-1)} (l+1)^{\frac{n}{n-1}}}{2l+1}. \end{align} We need to find a $\lambda$ such that the above expression is greater than or equal to $1$.
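The choice of $\lambda(n)$ in~\eqref{e:mathcalI:lb} accomplishes exactly this, as can also be verified numerically (illustration only):

```python
# Check that lambda(n) = (n-1)^{n-1}/(n/2)^n makes the coefficient ratio
#   a_{l+1}/a_l = lambda^{1/(n-1)} (l+1)^{n/(n-1)} / (2l+1)
# at least 1 for every integer l >= 0, with equality at l = (n-2)/2 for even n.
for n in range(2, 12):
    lam_root = (n - 1) / (n / 2) ** (n / (n - 1))   # lambda^{1/(n-1)}
    ratios = [lam_root * (l + 1) ** (n / (n - 1)) / (2 * l + 1)
              for l in range(200)]
    assert min(ratios) >= 1 - 1e-12
    if n % 2 == 0:                                  # the equality case
        assert abs(ratios[(n - 2) // 2] - 1) < 1e-12
```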
Hence, we obtain \[ \lambda^{1/(n-1)} = \sup_{l\geq 0} \frac{2l+1}{(l+1)^{\frac{n}{n-1}}} = \frac{n-1}{\bra*{n/2}^{\frac{n}{n-1}}}, \] where the supremum is taken over real $l \geq 0$ and is attained at $l = \frac{n-2}{2}$, hence proving~\eqref{e:mathcalI:lb}. \end{proof} \begin{proof}[Proof of~\cref{lem:lbfour}] For the first part of the proof, we fix $\alpha \in [0,\alpha^*)$. Then we know that $\kappa=\kappa_\sharp$, which is independent of $\alpha$, is a bifurcation point, i.e., it satisfies the conditions of \cref{thm:c1bif}. One can check that the same set of arguments can be applied in the larger space~$\ensuremath{{L}}^2_{k^\sharp}(U)$ instead of $\ensuremath{{L}}^2_s(U)$, where $\ensuremath{{L}}^2_{k^\sharp}=\{f \in \ensuremath{{L}}^2(U): \skp*{f,w_{\sigma(k^\sharp)}}=0, \forall \sigma \in \mathrm{Sym}(\Lambda), \sigma \neq e \}$ and $e$ represents the identity element. For fixed $\alpha$, we consider the map $\overline{F} : \ensuremath{{L}}^2_{k^\sharp}(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2(U)$, $(\varrho ,\kappa) \mapsto F_\kappa(\varrho,\alpha)$, and note that any $\varrho$ such that $\overline{F}(\varrho,\kappa)=0$ is obviously in $\ensuremath{{L}}^2_{k^\sharp}(U)$. Additionally, any zero of $\overline{F}$ defined above is also a zero of $F^* :\ensuremath{{L}}^2_{k^\sharp}(U) \times {\mathbb R}^+ \to \ensuremath{{L}}^2_{k^\sharp}(U)$, which is defined as \begin{align} F^*(\varrho,\kappa)=\overline{F}(\varrho,\kappa) - \sum\limits_{\sigma \in \mathrm{Sym}(\Lambda),\sigma\neq e}\skp*{\overline{F}(\varrho,\kappa), w_{\sigma(k^\sharp)}(x)}w_{\sigma(k^\sharp)}(x)\, .
\end{align} One can also check that $F^*$ does not change any of the local properties of $\overline{F}$ near $\varrho_\infty$, i.e., $D_\varrho F^*(\varrho_\infty,\kappa)=\left.D_\varrho \overline{F}(\varrho_\infty,\kappa)\right|_{\ensuremath{{L}}^2_{k^\sharp}}$ and $D^2_{\varrho\kappa} F^*(\varrho_\infty,\kappa)=\left.D^2_{\varrho\kappa} \overline{F}(\varrho_\infty,\kappa)\right|_{\ensuremath{{L}}^2_{k^\sharp}}$. The advantage of defining $F^*$ in this way is that the Fr\'echet derivative of the map is then Fredholm with index zero, which is not the case for $\overline{F}$. We also know from \cref{thm:c1bif} that $\overline{F}$ has at least one nontrivial solution $\varrho_\kappa \in \ensuremath{{L}}^2_s(U)$ in a neighbourhood of $(\varrho_\infty,\kappa_\sharp)$. We can now apply the same bifurcation argument to $F^*$ to obtain that $F^*$ has exactly one nontrivial solution in some neighbourhood of $(\varrho_\infty,\kappa_\sharp)$. Since every zero of $\overline{F}$ is a zero of $F^*$, it follows that $\varrho_\kappa$ is this nontrivial zero in some neighbourhood of $(\varrho_\infty,\kappa_\sharp)$ and that $\overline{F}$ has only one nontrivial solution in this neighbourhood. Thus the problem of studying bifurcations of $\overline{F}$ is reduced to that of studying bifurcations of $F^*$. This justifies our choice in~\autoref{S:lbt} to study the bifurcations of $\hat{F}$ in the space $\ensuremath{{L}}^2_s(U)$, as all bifurcations from the trivial branch lie either in this space or in its translates. Now, since we need a lower bound which is uniform in $\alpha$, we redefine $F^*$ to be a function of $\alpha$, i.e., $F^*: X \times {\mathbb R}^+ \to \ensuremath{{L}}^2_{k^\sharp}(U)$, where $X:= \ensuremath{{L}}^2_{k^\sharp}(U) \times {\mathbb R} $ is a Banach space equipped with the norm $\norm*{\cdot}_2 + |\cdot|$ and $f=(\varrho,\alpha) \in X$ denotes a typical element of the space.
We will now show that, due to the particular structure of the problem, one can still apply a Crandall--Rabinowitz type argument and obtain existence of local bifurcations. What follows below is a description of the Lyapunov--Schmidt decomposition for the map $F^*$ and a slightly modified version of the proof of the Crandall--Rabinowitz theorem as presented in~\cite{kielhofer2006bifurcation}. We recentre the map as in the proof of~\cref{thm:c1bif} and linearise the map $F^*$ about $((0,0),\kappa_\sharp)$. We also note that $F^*((0,\alpha),\kappa)=0$ for all $\kappa \in (0,\infty)$ and $\alpha \in[0,\alpha^*)$, and it is precisely this fact that will allow us to apply a Crandall--Rabinowitz type argument. Before we start our analysis, we write out the exact form of $F^*$ for the convenience of the reader: \begin{align} F^*(f,\kappa)= \varrho(x) +\varrho_\infty - \frac{1}{Z}e^{-\beta \kappa W_\alpha \star \varrho} - \sum\limits_{\sigma \in \mathrm{Sym}(\Lambda),\sigma\neq e}\skp*{\varrho(x) - \frac{1}{Z}e^{-\beta \kappa W_\alpha \star \varrho} , w_{\sigma(k^\sharp)}(x)}w_{\sigma(k^\sharp)}(x) \, . \end{align} It is clear that $D_fF^*(f,\kappa)= \begin{pmatrix}D_\varrho F^* & D_\alpha F^* \end{pmatrix} \in L(X,\ensuremath{{L}}^2_{k^\sharp}(U))$, the space of bounded linear operators from $X$ to $\ensuremath{{L}}^2_{k^\sharp}(U)$, with \begin{align} D_\varrho F^*((0,0),\kappa_\sharp)[w_1] &= w_1 + \beta \kappa_\sharp \varrho_\infty (W_0 \star w_1) - \beta \kappa_\sharp \varrho_\infty^2 \intT{(W_0 \star w_1)(x)} \, ,\\ D_\alpha F^*((0,0),\kappa_\sharp) &=0 \, , \end{align} where $w_1 \in \ensuremath{{L}}^2_{k^\sharp}(U)$.
We will also need $D^2_{f \kappa}F^*(f,\kappa)= \begin{pmatrix}D_{\varrho\kappa} F^* & D_{\alpha \kappa}F^* \end{pmatrix}$, with \begin{align} D^2_{\varrho \kappa} F^*((0,0),\kappa_\sharp)[w_1] &= \varrho_\infty (W_0 \star w_1) - \varrho_\infty^2 \intT{(W_0 \star w_1) (x)} \\ &\qquad -\varrho_\infty^2 W_0 \star D_\varrho ( F^*((0,0),\kappa_\sharp))[w_1] \, , \label{eq:drks}\\ D^2_{\alpha \kappa} F^*((0,0),\kappa_\sharp) &=0 \, . \end{align} Then, by using the arguments of \cref{thm:c1bif}, we see that $N:=\ker (D_f F^*((0,0),\kappa_\sharp))=\mathrm{span}[w_{k^\sharp}] \times {\mathbb R} \cong {\mathbb R}^2$ and $Z_0 := R^\perp= (\Ima (D_f F^*((0,0),\kappa_\sharp)))^\perp= \mathrm{span}[w_{k^\sharp}]$. Thus, $D_f F^*((0,0),\kappa_\sharp)$ is Fredholm and we have the following decompositions into complementary subspaces, \begin{align} X&= N \oplus X_0 \, ,\\ \ensuremath{{L}}^2_{k^\sharp}(U)&= R \oplus Z_0 \, . \end{align} Given these decompositions, we define the following projection operators, \begin{align} P&:X \to N , & (\varrho,\alpha) &\mapsto \bra*{\tilde{\varrho}(k^\sharp)w_{k^\sharp}(x),\alpha}\, ,\\ Q&:\ensuremath{{L}}^2_{k^\sharp}(U) \to Z_0 \ , & \varrho &\mapsto \tilde{\varrho}(k^\sharp) w_{k^\sharp}(x) \, . \end{align} By introducing the splitting $v=Pf$, $w= (I -P)f$, we can solve $F^*(f,\kappa)=0$ individually on the complementary subspaces \begin{align} G(v,w,\kappa)&:=(I -Q) F^*(v + w,\kappa)=0 \, ,\\ \Phi(v,w,\kappa)&:=Q F^*(v + w,\kappa)=0 \, . \end{align} As in \cref{thm:lyapschmid}, one can check that $D_wG((0,0),(0,0),\kappa_\sharp)= (I-Q)D_fF^*((0,0),\kappa_\sharp): X_0 \to R$ is a homeomorphism. Thus, applying the implicit function theorem, there exist neighbourhoods $U$ of $((0,0),\kappa_\sharp)$ in $N \times {\mathbb R}$ and $V$ of $(0,0)$ in $X_0$, along with a $C^1$ function $\Psi:U \to V$, such that every solution of $G(v,w,\kappa)=0$ in $U\times V$ is of the form $(v,\kappa,\Psi(v,\kappa))$ with $\Psi((0,0),\kappa_\sharp)= (0,0)$.
Thus, in $U$ we are left to solve \begin{align} \Phi(v,\kappa)&:=Q F^*(v + \Psi(v,\kappa),\kappa)=0 \, . \end{align} It is also straightforward to show that $D_\kappa \Psi((0,0),\kappa_\sharp)=0$. Indeed, \begin{align} D_\kappa(I-Q) F^*(v + \Psi(v,\kappa),\kappa)&=0 \, ,\\ (I-Q) (D_\kappa F^*(v + \Psi(v,\kappa),\kappa) + D_\varrho F^*(v + \Psi(v,\kappa),\kappa)D_\kappa \Psi(v,\kappa) ) &=0 \, . \end{align} Setting $v=(0,0)$ and $\kappa=\kappa_\sharp$, one can see that $D_\kappa F^*((0,0),\kappa_\sharp)=0$; since $D_\kappa \Psi((0,0),\kappa_\sharp) \in X_0$, which is complementary to $N$, this gives $D_\kappa \Psi((0,0),\kappa_\sharp)=0$. Using an argument similar to the above one, one can show that $D_{\tilde{\varrho}(k^\sharp)} \Psi((0,0),\kappa_\sharp)=0 \in L(N,X_0)$. Since a typical element of $N$ can be represented by $(\tilde{\varrho}(k^\sharp),\alpha)=(s,\alpha)$, we proceed by rewriting $\Phi$ as follows, \begin{align} \tilde{\Phi}((s,\alpha),\kappa)= \int_0^1 \! \frac{d}{dt} \Phi((t s w_{k^\sharp} ,\alpha),\kappa)\,\dx{t} = \int_0^1 \! D_s \Phi((t s w_{k^\sharp} ,\alpha),\kappa) w_{k^\sharp}\,\dx{t} \,, \end{align} where we have used the fact that $\Phi((0,\alpha),\kappa)=0$, since $\varrho_\infty$ is always a trivial solution. Now, $\tilde{\Phi}:{\mathbb R}^2 \times {\mathbb R} \to {\mathbb R}$ is the map which we analyse in the neighbourhood $U$, and nontrivial solutions correspond to $s \neq 0$.
Let $\hat{v}= (ts w_{k^\sharp},\alpha) \in N$, then we compute, \begin{align} D_\kappa D_s\Phi(\hat{v},\kappa)w_{k^\sharp}&=D_\kappa \bra*{Q D_\varrho F^*(\hat{v}+\Psi(\hat{v},\kappa),\kappa)(w_{k^\sharp} + D_s \Psi(\hat{v},\kappa))w_{k^\sharp}} \\ &= QD^2_{\varrho\varrho} F^*(\hat{v}+\Psi(\hat{v},\kappa),\kappa) [(w_{k^\sharp} + D_s \Psi(\hat{v},\kappa))w_{k^\sharp},D_\kappa \Psi(\hat{v},\kappa)] \\ &\qquad + Q D_\varrho F^*(\hat{v}+\Psi(\hat{v},\kappa),\kappa)D^2_{\varrho\kappa} \Psi(\hat{v},\kappa) w_{k^\sharp} \\ &\qquad + Q D^2_{\varrho\kappa} F^*(\hat{v}+\Psi(\hat{v},\kappa),\kappa) (w_{k^\sharp} + D_s \Psi(\hat{v},\kappa))w_{k^\sharp} \, . \end{align} Setting $\hat{v}=(0,0)$ and $\kappa=\kappa_\sharp$, we see that the first term of the above expression is zero because $D_\kappa \Psi((0,0),\kappa_\sharp)=0$, and the second term is zero because $Q$ maps the range of $ D_\varrho F^*((0,0),\kappa_\sharp)$ to zero. Noting that $D_s \Psi((0,0),\kappa_\sharp)=D_{\tilde{\varrho}(k^\sharp)} \Psi((0,0),\kappa_\sharp)=0$, we finally have \begin{align} \frac{\dx{}}{\dx{\kappa}}\tilde{\Phi}((0,0),\kappa_\sharp) &= Q D^2_{\varrho\kappa} F^*((0,0),\kappa_\sharp) w_{k^\sharp} \neq 0 \, . \end{align} Thus, we can apply the implicit function theorem to obtain a $C^1$ function $\varphi:V_1 \to V_2$, $(s,\alpha) \mapsto \varphi(s,\alpha)$, such that $\tilde{\Phi}((s,\alpha),\varphi(s,\alpha))=0$, where $V_1$ and $V_2$ are neighbourhoods of $(0,0)$ and $\kappa_\sharp$, respectively, and $V_1 \times V_2 \subset U$. Additionally, in $V_1 \times V_2$ every solution of $\tilde{\Phi}=0$ (and hence of $\Phi=0$) is of the form $((s,\alpha),\varphi(s,\alpha))$, and $\varphi(0,\alpha)=\kappa_\sharp$. We know, however, from \cref{thm:c1bif} that we could apply the same set of arguments for fixed $\alpha \in [0,1]$ to obtain single locally increasing branches which, at least for some small neighbourhood around $0$, must coincide with $\varphi(s,\alpha)$.
Thus, we now know that for each $\alpha \in[0,1]$, we can find $\epsilon_\alpha>0$ such that $\varphi(s,\alpha)>\kappa_\sharp$ for $0<|s|<\epsilon_\alpha$. Now, let $\alpha \in [0,\alpha^*)=A$. If we show that $\inf_A \epsilon_\alpha =\epsilon'>0$ for $\alpha^*$ small enough, we can conclude the proof. To see this, set $V_1'= V_1 \cap (-\epsilon',\epsilon')\times[0,\alpha^*)$ and observe that $((s,\alpha),\varphi(s,\alpha))$ are the only solutions in $V_1' \times V_2$ and that $\varphi(s,\alpha)=\kappa_\sharp$ implies $(s,\alpha)=(0,\alpha)$. Thus, in $V_1'$, $(0,\alpha)$ is the only solution of the bifurcation equation, which provides the desired result. Assume now that there exists no $\alpha^*$ such that $\inf_A \epsilon_\alpha >0$. It is straightforward to check that this would violate the continuity of $\varphi$, since $\epsilon_0>0$. \end{proof} As an immediate consequence of~\cref{thm:spgap} we have: \begin{cor}\label{cor:unest} Let $W_\alpha(x)$ be as in \cref{defn:astable} such that $W_\ensuremath{\mathup{u}}$ and $W_\ensuremath{\mathup{s}}$ are bounded below. Then, for $\alpha$ sufficiently small, $\varrho_\infty$ is the unique minimiser of the free energy $\mathcal{F}_\kappa(\varrho)$ for $\kappa \in (0, C(n)\kappa_\sharp]$, where $C(n)$ is as defined in~\cref{lem:entdef}. \end{cor} \begin{proof} The proof follows the same arguments as~\cref{thm:spgap} with $\kappa_\sharp$ replaced by $C(n)\kappa_\sharp$. \end{proof} A natural question to ask now is how the estimate from~\cref{cor:unest} compares to the one obtained in~\cref{prop:con} by the convexity argument, i.e., how $C(n)\kappa_\sharp$ compares to $\kappa_{\mathrm{con}}$. It is easier to make this comparison whenever we can explicitly compute $\norm*{W_{\ensuremath{\mathup{u}}-}}_\infty$.
Assume $W=W_0$, i.e., $W$ has only one negative mode, say $w_{k^\sharp}$; then we have \begin{align} \frac{C(n)\kappa_\sharp}{\kappa_{\mathrm{con}}}=2^{n}C(n)= \begin{cases} 2^n & n=1,2 \\ \frac{n^n}{(n-1)^{n-1}} & n>2 \end{cases}\, , \end{align} with $n=n(k^\sharp)$ as defined in~\cref{lem:entdef}. Thus, for all $n\geq 1$, we have that $C(n)\kappa_\sharp>\kappa_{\mathrm{con}}$. From this we conclude that, for this choice of $W$, \cref{cor:unest} provides a sharper estimate on the range of $\kappa$ for which the uniform state is the unique minimiser of the free energy. \begin{rem} \cref{thm:spgap} indicates that if the linearised McKean--Vlasov operator $\ensuremath{\mathcal L}$ has a sufficiently large spectral gap $\lambda$, then (assuming all other conditions are satisfied) the system exhibits a continuous transition point. Indeed, the spectral gap of $\ensuremath{\mathcal L}:\ensuremath{{L}}^2_0(U) \to \ensuremath{{L}}^2_0(U)$ at $\kappa=\kappa_\sharp$ associated with the interaction potential $W_\alpha$ can be computed as \begin{align} \lambda=\min_{k \in {\mathbb N}^d, k \neq k^\sharp}\bra*{-\beta^{-1}\bra*{\frac{2 \pi |k|}{L}}^2 - \kappa_\sharp L^{-d/2} \bra*{\frac{2 \pi |k|}{L}}^2 \frac{\tilde{W_\alpha}(k)}{\Theta(k)} } \, . \end{align} Let us assume that $|\lambda|>C_1$ for some constant $C_1>0$. This implies that for all $k \in {\mathbb N}^d$ such that $\tilde{W}(k)<0$ it must hold that \begin{align} \alpha <\frac{\bra*{ \beta^{-1} - C_1\frac{L^2}{4 \pi^2 |k|^2} }\Theta(k)}{\kappa_\sharp L^{-d/2} \abs{\tilde{W}(k)}} \, . \end{align} It is easy to see then that $\lambda$ being sufficiently large is equivalent to $\alpha$ being sufficiently small. \end{rem} We conclude this section with the following useful proposition, which provides us with a comparison principle for interaction potentials to check if they possess continuous transition points.
\begin{prop}\label{lem:comp} Let $W\in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$ be an interaction potential such that the associated free energy $\ensuremath{\mathscr F}^W_\kappa(\varrho)$ has a continuous transition point. Additionally, assume that $G\in \ensuremath{{\mathbb H}}_\ensuremath{\mathup{s}}^c$ is such that $\argmin_{k \in {\mathbb N}^d/ \set{\mathbf{0}}}\tilde{G}(k)=\argmin_{k \in {\mathbb N}^d/ \set{\mathbf{0}}}\tilde{W}(k)= k^\sharp$ and $\tilde{G}(k^\sharp)=\tilde{W}(k^\sharp)$ with $\tilde{G}(k)\geq\tilde{W}(k)$ for all $k \neq k^\sharp,k \in {\mathbb N}^d$. Then $\ensuremath{\mathscr F}^G_{\kappa}(\varrho)$ exhibits a continuous transition point. \end{prop} \begin{proof} As in the proof of~\cref{thm:spgap}, it is sufficient to show that at $\kappa=\kappa_\sharp$, the free energy $\ensuremath{\mathscr F}^G_{\kappa_\sharp}(\varrho)$ has $\varrho_\infty$ as its unique minimiser. Noting that given the assumptions on $G$, the value of $\kappa_\sharp$ is the same for $G$ and $W$, we have for $\varrho \neq \varrho_\infty, \varrho \in \ensuremath{{L}}^2(U) \cap \mathcal{P}_{\mathup{ac}}(U)$ \begin{align} \ensuremath{\mathscr F}^G_{\kappa_\sharp}(\varrho)-\ensuremath{\mathscr F}^G_{\kappa_\sharp}(\varrho_\infty) &= \beta^{-1}\ensuremath{\mathcal H}(\varrho|\varrho_\infty) + \frac{\kappa_\sharp}{2}\ensuremath{\mathcal E}^G(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \\ &=\beta^{-1}\ensuremath{\mathcal H}(\varrho|\varrho_\infty) + \frac{\kappa_\sharp}{2}\ensuremath{\mathcal E}^W(\varrho-\varrho_\infty,\varrho-\varrho_\infty) + \frac{\kappa_\sharp}{2}\ensuremath{\mathcal E}^{G-W}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \\ &= \bra*{\ensuremath{\mathscr F}^W_{\kappa_\sharp}(\varrho)-\ensuremath{\mathscr F}^W_{\kappa_\sharp}(\varrho_\infty)}+ \frac{\kappa_\sharp}{2}\ensuremath{\mathcal E}^{G-W}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \, , \end{align} where $\ensuremath{\mathcal E}^W(\varrho,\varrho)=\iint W(x-y ) \varrho(x) \varrho(y) 
\dx{x} \dx{y}$. Using the fact that the term in the brackets must be strictly positive, since the free energy $\ensuremath{\mathscr F}^W_{\kappa_\sharp}(\varrho)$ associated to $W$ possesses a continuous transition point, we obtain \begin{align} \ensuremath{\mathscr F}^G_{\kappa_\sharp}(\varrho)-\ensuremath{\mathscr F}^G_{\kappa_\sharp}(\varrho_\infty) &>\frac{\kappa_\sharp}{2}\ensuremath{\mathcal E}^{G-W}(\varrho-\varrho_\infty,\varrho-\varrho_\infty) \\ &=\frac{\kappa_\sharp}{2} \sum\limits_{k \in {\mathbb N}^d, k \neq k^\sharp }\frac{\tilde{G}(k)-\tilde{W}(k)}{N_k}\left(\sum\limits_{\sigma \in \mathrm{Sym}(\Lambda)}|\tilde{\varrho}(\sigma(k))|^2 \right) \geq 0 \,. \end{align} In the above estimate we have used the fact that $\tilde{G}(k^\sharp)=\tilde{W}(k^\sharp)$ and that $\tilde{G}(k)\geq\tilde{W}(k)$ for all other $k \in {\mathbb N}^d$. Thus, we have the desired result. \end{proof} \section{Applications}\label{S:app} \subsection{The generalised Kuramoto model}\label{ss:kura} Let $W(x)=-w_k(x)$, for some $k \in {\mathbb N}, k \neq 0$, as defined in~\eqref{e:def:wk}. Then we refer to the corresponding McKean SDE given by \begin{align} d X_t^i&= \frac{\kappa}{N}\sum_{j=1}^N w_k'(X_t^i-X_t^j) \dx{t} + \sqrt{2 \beta^{-1}}dB_t^i \quad i=1, \dots,N \, , \end{align} as the generalised Kuramoto model. For $k=1$, it corresponds to the so-called noisy Kuramoto system (also referred to as the Kuramoto--Shinomoto--Sakaguchi model, cf.~\cite{kuramoto1981rhythms,sakaguchi1988phase,ABPRS}) which models the synchronisation of noisy oscillators interacting through their phases. For infinitely many oscillators, we obtain a mean field approximation of the underlying particle dynamics given precisely by the McKean--Vlasov equation with $W(x)=-w_1(x)$. It is well known that this system exhibits a phase transition for some critical $\kappa_c$ (cf.~\cite{bertini2010dynamical}).
For $k=2$, it corresponds to the Maier--Saupe system, which is a model for the alignment of liquid crystal molecules (cf.~\cite{constantin2004remarks,constantin2005note}). Again, in the mean field limit we obtain the McKean--Vlasov equation with the effective interaction potential $W(x)= -w_2(x)$. The system exhibits a continuous transition point which represents the nematic--isotropic phase transition as the temperature is lowered, i.e., as $\kappa$ is increased. Finally, let us mention that there is a larger picture in the Kuramoto model when oscillators with different frequencies are allowed; see~\cite{ABPRS} for a nice review of the subject and~\cite{CCP} for recent numerical work on phase transitions for this problem. Although it is possible to directly apply \cref{thm:spgap} to prove the existence of a continuous phase transition for this system, we employ an alternative approach that gives us more qualitative information about the structure of the nontrivial solutions. \begin{prop}\label{prop:kura} The generalised Kuramoto model exhibits a continuous transition point at $\kappa_c=\kappa_\sharp$. Additionally, for $\kappa>\kappa_c$, the equation $F(\varrho,\kappa)=0$ has only two solutions in $\ensuremath{{L}}^2(U)$ (up to translations). The nontrivial one, $\varrho_\kappa$, minimises $\ensuremath{\mathscr F}_\kappa$ for $\kappa>\kappa_c$ and converges in the narrow topology, as $\kappa \to \infty$, to a normalised linear sum of equally weighted Dirac measures centred at the minima of $W(x)$. \end{prop} \begin{proof} The strategy of proof is similar to that of \cref{thm:spgap}, i.e., we show that at $\kappa=\kappa_\sharp$, $\varrho_\infty$ is the unique minimiser of the free energy. We do this by showing that $F(\varrho,\kappa)=0$ has a unique solution at $\kappa=\kappa_\sharp$, which implies, by \cref{prop:tfae} (since $W$ satisfies Assumption~\eqref{ass:B}), uniqueness of the minimiser.
For $W(x)=-w_{k^\sharp}(x)$, we can explicitly compute \begin{align} F(\varrho,\kappa)=\varrho - \frac{e^{\beta \kappa\sqrt{L/2}(\tilde{\varrho}(k^\sharp)w_{k^\sharp}+ \tilde{\varrho}(-k^\sharp)w_{-k^\sharp}) }}{\int_{-L/2}^{L/2} e^{\beta \kappa\sqrt{L/2}(\tilde{\varrho}(k^\sharp)w_{k^\sharp}+ \tilde{\varrho}(-k^\sharp)w_{-k^\sharp}) } \dx{x}}=0 \, . \end{align} Since $F(\varrho,\kappa)$ is translation invariant, one can always translate $\varrho$ so that $\tilde{\varrho}(-k^\sharp)=0$. Thus, we obtain the following simplified equation, \begin{align} F(\varrho,\kappa)=\varrho - \frac{e^{\beta \kappa\sqrt{L/2}\tilde{\varrho}(k^\sharp)w_{k^\sharp} }}{\int_{-L/2}^{L/2} e^{\beta \kappa\sqrt{L/2}\tilde{\varrho}(k^\sharp)w_{k^\sharp} } \dx{x}}=0 \, . \label{eq:kmkm} \end{align} Taking the inner product with $w_{k^\sharp}(x)$ we obtain, \begin{align} \tilde{\varrho}(k^\sharp) - \frac{\int_{-L/2}^{L/2} e^{\beta \kappa\tilde{\varrho}(k^\sharp)\cos(2 \pi k^\sharp x /L) }w_{k^\sharp} \dx{x} }{\int_{-L/2}^{L/2} e^{\beta \kappa\tilde{\varrho}(k^\sharp)\cos(2 \pi k^\sharp x /L) } \dx{x}}=0 \, . \end{align} After a change of variables we obtain, \begin{align} \tilde{\varrho}(k^\sharp) - \sqrt{\frac{2}{L}}\frac{\int_{0}^{\pi} e^{\beta \kappa\tilde{\varrho}(k^\sharp)\cos(y) }\cos(y) \dx{y} }{\int_{0}^{\pi} e^{\beta \kappa\tilde{\varrho}(k^\sharp)\cos(y) } \dx{y} }=0 \, . \end{align} We can express the above equation in the following form, \begin{align} M(a,\kappa):=\sqrt{\frac{2}{L}}\beta \kappa \frac{I_1(a)}{I_0(a)}=\sqrt{\frac{2}{L}}\beta \kappa r_0(a)= a\label{eq:besfp} \ , \end{align} where the $I_n$ denote the modified Bessel functions of the first kind of order $n$, $r_n(a):=\frac{I_{n+1}(a)}{I_n(a)}$, and $a= \beta \kappa \tilde{\varrho}(k^\sharp)$. This equation is similar to the one derived in Section VI of~\cite{bavaud1991eq} (cf.~\cite{messer1982statistical,battle1977phase}). It is also qualitatively similar to the self-consistency equation associated with the two-dimensional Ising model.
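As a numerical sanity check (not part of the argument), the reduction to the Bessel-ratio form can be verified directly: the right-hand side of the equation before~\eqref{eq:besfp} equals $I_1(a)/I_0(a)$ up to the prefactor. The Python sketch below (helper names are ours) computes the quotient of the two integrals by quadrature and compares it with the Bessel ratio obtained from the power series of $I_n$, using only the standard library.

```python
import math

def bessel_i_series(n, a, terms=60):
    # I_n(a) via its power series: sum_m (a/2)^(2m+n) / (m! (m+n)!).
    return sum((a / 2) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def simpson(f, lo, hi, n=2000):
    # Composite Simpson rule on [lo, hi] with n (even) subintervals.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def ratio_quadrature(a):
    # The quotient appearing in the self-consistency relation:
    # (∫_0^π e^{a cos y} cos y dy) / (∫_0^π e^{a cos y} dy), which equals I_1(a)/I_0(a).
    num = simpson(lambda y: math.exp(a * math.cos(y)) * math.cos(y), 0.0, math.pi)
    den = simpson(lambda y: math.exp(a * math.cos(y)), 0.0, math.pi)
    return num / den

def r0(a):
    # Bessel ratio r_0(a) = I_1(a)/I_0(a), computed independently from the series.
    return bessel_i_series(1, a) / bessel_i_series(0, a)

if __name__ == "__main__":
    for a in (0.5, 1.0, 2.0, 5.0):
        print(a, ratio_quadrature(a), r0(a))
```

The two computations agree to quadrature accuracy, and the values increase towards $1$ with $a$, consistent with the monotonicity of $r_0$ used below.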
For $\varrho=\varrho_\infty$, we know that $\tilde{\varrho}(k^\sharp)=0$. We argue that any nontrivial solution of $F(\varrho,\kappa)=0$ must have $\tilde{\varrho}(k^\sharp) \neq0$. Assume this is not the case, i.e., there exists $ \varrho_\kappa \neq \varrho_\infty$ which satisfies $F(\varrho_\kappa,\kappa)=0$ and $\tilde{\varrho_\kappa}(k^\sharp) =0$; then from \eqref{eq:kmkm} we have that $\varrho_\kappa=\varrho_\infty$, a contradiction. Thus, $F(\varrho,\kappa)$ has nontrivial solutions if and only if \eqref{eq:besfp} has nonzero solutions. One should note that since $I_1$ is odd and $I_0$ is even, nonzero solutions to \eqref{eq:besfp} come in pairs, i.e., if $a$ is a solution so is $-a$. However, these two solutions are simply translates of each other. We now show that if $\kappa \leq \kappa_\sharp= \sqrt{2L}/\beta$, \eqref{eq:besfp} has no nonzero solutions. As mentioned earlier, it is sufficient to study the problem on the half line. Note first that for $a>0$, $r_0(a)$ is increasing, i.e., $r_0'(a) >0$ (cf.~\cite[(15)]{amos1974computation}). Additionally, we have that \begin{align} r_0'(a)&= \frac{1}{2} + \frac{I_0(a)I_2(a)-I_1(a)^2}{2I_0(a)^2}-\frac{r_0(a)^2}{2} \, , \end{align} and so $r_0'(0) = \frac{1}{2}$. We can now use the so-called Tur\'an-type inequalities (cf.~\cite{thiruvenkatachar1951inequalities,baricz2013turan}) to assert that $I_0(a)I_2(a)-I_1(a)^2<0$ for $a>0$. This tells us that \begin{align} r_0'(a)&< \frac{1}{2} -\frac{r_0(a)^2}{2} \ , \end{align} with $r_0(a)>0$ for $a>0$. Using the fact that $\kappa \leq \kappa_\sharp$, we obtain \begin{align} \frac{\partial {M}}{\partial {a}}(a,\kappa)& < 1- r_0(a)^2 \, . \end{align} We know now that $M(a,\kappa)$ is increasing for $a>0$, $M(0,\kappa)=0$, $\frac{\partial {M}}{\partial {a}}(0,\kappa)\leq 1$, and $\frac{\partial {M}}{\partial {a}}(a,\kappa)$ is strictly less than $1$ for $a>0$. Thus, the curve $y=M(a,\kappa)$ cannot intersect $y=a$ for any $a>0$.
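This fixed-point picture is easy to illustrate numerically. The sketch below is our own check, not part of the proof; it evaluates $I_n$ through the integral representation $I_n(a)=\pi^{-1}\int_0^\pi e^{a\cos\theta}\cos(n\theta)\,\dx{\theta}$, takes $L=2\pi$ and $\beta=1$ (so that $\kappa_\sharp=\sqrt{2L}/\beta=2\sqrt{\pi}$), and confirms that $a-M(a,\kappa_\sharp)>0$ for $a>0$, while for $\kappa>\kappa_\sharp$ a positive fixed point of $M(\cdot,\kappa)$ appears.

```python
import math

L, BETA = 2 * math.pi, 1.0
KAPPA_SHARP = math.sqrt(2 * L) / BETA  # critical value from the proof

def bessel_i(n, a, pts=4000):
    # I_n(a) = (1/pi) * integral_0^pi e^{a cos t} cos(n t) dt (trapezoid rule).
    h = math.pi / pts
    total = 0.5 * (math.exp(a) + math.exp(-a) * math.cos(n * math.pi))
    total += sum(math.exp(a * math.cos(i * h)) * math.cos(n * i * h)
                 for i in range(1, pts))
    return total * h / math.pi

def M(a, kappa):
    # M(a, kappa) = sqrt(2/L) * beta * kappa * I_1(a)/I_0(a), as in eq:besfp.
    return math.sqrt(2 / L) * BETA * kappa * bessel_i(1, a) / bessel_i(0, a)

def positive_root(kappa, hi=20.0, tol=1e-10):
    # Bisection for a - M(a, kappa) = 0 on (0, hi); returns None if no sign change.
    f = lambda a: a - M(a, kappa)
    lo = 1e-8
    if f(lo) > 0 or f(hi) < 0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # At kappa = kappa_sharp the identity dominates M: no nonzero fixed point.
    gap = min(a - M(a, KAPPA_SHARP) for a in [0.1 * j for j in range(1, 101)])
    print("min of a - M(a, kappa_sharp) on (0,10]:", gap)
    print("root for kappa = 1.5 kappa_sharp:", positive_root(1.5 * KAPPA_SHARP))
```

For $\kappa=1.5\kappa_\sharp$ the bisection locates the unique positive solution of~\eqref{eq:besfp}, in line with the uniqueness argument that follows.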
Thus, $\varrho_\infty$ is the unique minimiser for $\kappa\leq \kappa_\sharp$, which implies by \cref{prop:CharactTP}~\eqref{prop:CharactTP:cont} that $\kappa_c=\kappa_\sharp$ is a continuous transition point. We will now show that for $\kappa>\kappa_\sharp$, \eqref{eq:besfp} has at most one solution for $a>0$. We know that \begin{align} \frac{\partial {M}}{\partial {a}}(0,\kappa)& >1 \, . \end{align} Also, for $a$ large enough, $a>M(a,\kappa)$ (since $r_0(a) \to 1$ as $a \to \infty$, so that $M(a,\kappa)$ is bounded in $a$). Thus, by the intermediate value theorem, there exists at least one positive $a$ such that \eqref{eq:besfp} holds for every $\kappa>\kappa_\sharp$. One can now show that $\frac{\partial {M}}{\partial {a}}(a,\kappa)$ is strictly decreasing for $a>0$. This is equivalent to showing that $r_0''(a)$ is strictly negative. We have \begin{align} -r_0''(a) &= \frac{3}{4}r_0 + \frac{3 }{2}r_0^2 r_1 -2 r_0^3 -\frac{1}{4}r_0r_1r_2 =r_0 \bra*{\frac{3}{4} + \frac{3 }{2}r_0 r_1 -2 r_0^2 -\frac{1}{4}r_1r_2 } \, , \end{align} where we have used the formula $\frac{\dx{}}{\dx{a}}I_n=\frac{1}{2}\bra*{I_{n+1}+I_{n-1}}, n\geq1$. The ratios $r_n$ enjoy the following monotonicity and separation properties (cf.~\cite[(10),(11)]{amos1974computation}): \begin{align} r_n &\leq r_{n+1} \label{eq:rmon} \, ,\\ \text{and}\qquad \frac{a}{n +1+ \sqrt{a^2 +(n+1)^2}}&\leq r_n \leq \frac{a}{n + \sqrt{a^2 +(n+2)^2}}, \qquad a \geq 0, n \geq0 \, .
\label{eq:ruplo} \end{align} Using these we obtain \begin{align} -r_0''(a) &\stackrel{\mathclap{\eqref{eq:rmon}}}{\ \geq\ } r_0 \bra*{\frac{3}{4} + \frac{3 }{2}r_0 r_1 -\frac{5}{4}r_0^2 } =r_0 \bra*{\frac{3}{4}-\frac{3}{4}r_0 + r_0\bra*{ \frac{3 }{2}r_1 -\frac{1}{2}r_0} } \\ &\stackrel{\mathclap{r_0<1}}{\ >\ } r_0 \bra*{ r_0\bra*{ \frac{3 }{2}r_1 -\frac{1}{2}r_0} } \stackrel{\mathclap{\eqref{eq:ruplo}}}{\ \geq\ } \frac{r_0^2}{2} \bra*{ \frac{3a}{2+ \sqrt{a^2 +9}} - \frac{a}{ \sqrt{a^2 +4}}} \\ &\ =\ \frac{r_0^2}{2} \bra*{\frac{ (\sqrt{9 a^2 +36} -\sqrt{a^2 +9} -2) a }{(2+ \sqrt{a^2 +9})\sqrt{a^2 +4}}} >0, \qquad\text{ for } a>0 \, . \end{align} This implies that $\frac{\partial }{\partial {a}} \left(a- M(a,\kappa)\right) =1 -\frac{\partial {M}}{\partial {a}}(a,\kappa)$ changes sign only once. Thus~\eqref{eq:besfp} has only one solution, $a_\kappa$ for $a>0$ and $\kappa>\kappa_\sharp$. Additionally, $a<M(a,\kappa)$ if and only if $0<a<a_\kappa$ and $a>M(a,\kappa)$ if and only if $a>a_\kappa$. Now let $\kappa_2>\kappa_1> \kappa_\sharp$ with $a_{\kappa_1}$ and $a_{\kappa_2}$ the solutions of~\eqref{eq:besfp} at $\kappa_1$ and $\kappa_2$ respectively. We then have \begin{align} \frac{\kappa_2}{\kappa_1}a_{\kappa_1} &= \frac{\kappa_2}{\kappa_1} M(a_{\kappa_1},\kappa_1)= M(a_{\kappa_1},\kappa_2) < M\bra*{\frac{\kappa_2}{\kappa_1}a_{\kappa_1},\kappa_2} \, , \end{align} where we have used the fact that $\kappa_2>\kappa_1$, the linearity of $M(a,\kappa)$ in $\kappa$, and that $M(a,\kappa)$ is strictly increasing for positive $a$. Using previous arguments, the above inequality tells us that $0<\frac{\kappa_2}{\kappa_1}a_{\kappa_1}<a_{\kappa_2}$ which implies that $a_{\kappa} \to \infty$, as $\kappa \to \infty$. Finally, we have the following form for the solution \begin{align} \varrho(x,a_\kappa)=\frac{1}{L}\frac{e^{a_\kappa \cos(2 \pi k x /L)}}{ I_0(a_\kappa)} \, . 
\end{align} Let us denote by $\varrho(\dx{x},a_\kappa)$ the measure associated to the density $\varrho(x,a_\kappa)$. We will now show that for $k=1$, $\varrho(\dx{x},a_\kappa)$ converges to $ \delta_0$ as $a_\kappa \to \infty$ in the narrow topology, i.e., tested against bounded, continuous functions. The argument for other $k \in {\mathbb N}$ is then simply an extension of the $k=1$ case. Let $A$ be a continuity set of $\delta_0$; then if $0 \notin A$ it follows that $0 \notin \partial A$. By a large deviations argument (Laplace's principle), we have that \begin{align} \lim_{a_\kappa \to \infty} \bra*{\frac{1}{a_\kappa}\log\bra*{\frac{\pi}{L}\frac{\int_A e^{a_\kappa \cos(2 \pi x /L) }\dx{x}}{\int_0^\pi e^{a_\kappa \cos(y)} \dx{y}}}} = \sup_{x \in A} \cos(2 \pi x/L)-1 <0 \quad \textrm{ if } 0 \notin A \, . \end{align} Thus, $\varrho(\dx{x},a_\kappa)(A) \to 0$ for every such set $A$ not containing $0$, and thus $\varrho(\dx{x},a_\kappa)(A) \to 1$ for $0 \in A$. By the portmanteau theorem (cf.~\cite[Theorem 2.1]{billingsley1999convergence}), we have the desired convergence. For arbitrary $k$, one can apply the same argument on periods of the function $\cos(2 \pi k x /L)$; due to the periodicity/symmetry of the solution, the masses at each Dirac point are equal. \end{proof} \subsection{The noisy Hegselmann--Krause model for opinion dynamics}\label{S:Hegselmann} The noisy Hegselmann--Krause system (cf.~\cite{hegselmann2002opinion}) models the opinions of $N$ interacting agents such that each agent is only influenced by the opinions of its immediate neighbours. In the large $N$ limit, we obtain again the McKean--Vlasov PDE with the interaction potential $W_{\mathrm{hk}}(x)=-\frac{1}{2}\bra*{\bra*{|x|-\frac{R}{2}}_-}^2$ for some $R>0$. The ratio $R/L$ measures the range of influence of an individual agent, with $R/L=1$ representing full influence, i.e., any one agent influences all others.
In order to analyse this system further, we compute the Fourier transform of $W_{\mathrm{hk}}(x)$, given by \begin{align} \tilde{W}_{\mathrm{hk}}(k)=\frac{\left(-\pi^2 k^2 R^2+2 L^2\right) \sin \left(\frac{\pi k R}{L}\right)-2 \pi k L R \cos \left(\frac{\pi k R}{L}\right)}{4 \sqrt{2} \pi ^3 k^3 \sqrt{\frac{1}{L}}}, \quad k \in {\mathbb N}, k \neq 0 \, . \end{align} A simple consequence of the above expression is that the model has infinitely many bifurcation points for $R/L=1$. For other values of $R/L$ the problem reduces to a computational one, namely checking that the conditions of~\cref{thm:c1bif} are satisfied. Also, $W_{\mathrm{hk}}(x)$ is normalised and decays to $0$ uniformly as $R \to 0$, i.e., as the range of influence of an agent decreases so does its corresponding strength. We could define a rescaled version of the potential, $W_{\mathrm{hk}}^R(x)=-\frac{1}{2R^3}\bra*{\bra*{|x|-\frac{R}{2}}_-}^2$, which does not lose mass as $R \to 0$. We conclude this subsection with the following result. \begin{prop} For $R$ small enough, the rescaled noisy Hegselmann--Krause model possesses a discontinuous transition point. \end{prop} \begin{proof} We define $C:=\norm{W_{\mathrm{hk}}^R}_1$ and note that it is independent of $R$. The proof follows from the observation that $W_{\mathrm{hk}}^R \to - C \delta_0$ as $R \to 0$ and applying~\cref{cor:delta}. \end{proof} \subsection{The Onsager model for liquid crystals} In~\cref{ss:kura}, we discussed the Maier--Saupe model as a special case of the generalised Kuramoto model. In this subsection we discuss another model for the alignment of liquid crystals, i.e., the Onsager model, which has interaction potential $W(x)=\abs*{\sin\bra*{\frac{2 \pi}{L} x}}$.
As discussed in~\cite{chenstationary2010}, one can also study the potential $W_\ell(x)=\abs*{\sin\bra*{\frac{2 \pi}{L} x}}^\ell \in \ensuremath{{L}}^2_s(U) \cap C^\infty(\bar{U})$ with $\ell \in {\mathbb N}, \ell \geq 1$, so that the Onsager and Maier--Saupe potentials correspond to the cases $\ell=1$ and $\ell=2$, respectively. We have the following representation of $W_\ell(x)$ in Fourier space \begin{align} \tilde{W_\ell}(k)=\frac{\sqrt{\pi } 2^{\frac{1}{2}-\ell} \cos \left(\frac{\pi k}{2}\right) \Gamma (\ell+1)}{\Gamma \left(\frac{1}{2} (-k+\ell+2)\right) \Gamma \left(\frac{1}{2} (k+\ell+2)\right)} \, .\label{eq:onsf} \end{align} Any nontrivial solutions to the stationary dynamics correspond to the so-called nematic phases of the liquid crystals. We can obtain the following characterisation of the bifurcations associated to $W_\ell(x)$, and thus of the Onsager model. \begin{prop} We have the following results: \begin{tenumerate} \item The trivial branch of the Onsager model, $W_1(x)$, has infinitely many bifurcation points. \item The trivial branch of the Maier--Saupe model, $W_2(x)$, has exactly one bifurcation point. \item The trivial branch of the model $W_\ell(x)$ for $\ell$ even has at least $\frac{\ell}{4}$ bifurcation points if $\frac{\ell}{2}$ is even and $\frac{\ell}{4} + \frac{1}{2}$ bifurcation points if $\frac{\ell}{2}$ is odd. \item The trivial branch of the model $W_\ell(x)$ for $\ell$ odd has infinitely many bifurcation points if $\frac{\ell-1}{2}$ is even and at least $\frac{\ell +1}{4}$ bifurcation points if $\frac{\ell-1}{2}$ is odd. \end{tenumerate} \end{prop} \begin{proof} The proof of (b) follows from~\cref{prop:kura}, so we only need to show (a), (c), and (d). We start by noting that $\tilde{W}_\ell(0)\geq 0$ and $\tilde{W}_\ell(k)=0$ for all odd $k \in {\mathbb N}$.
We also note that $\frac{1}{\Gamma(z)}$ is an entire function with zeroes at all nonpositive integers and $\frac{1}{\Gamma(-(2n+1)/2)}, n \in {\mathbb N}$ is negative for all even $n$ and positive otherwise. For the rest of the proof we will always assume that $k>0$. We will now attempt to show that all nonzero values of $\tilde{W}_\ell(k)$ for $k>0$ are distinct. Assume $l$ is even, then we have the following explicit form of $\tilde{W}_\ell(k)$: \begin{align} \tilde{W_\ell}(k)=\frac{\sqrt{\pi } 2^{\frac{1}{2}-\ell} \cos \left(\frac{\pi k}{2}\right) \Gamma (\ell+1)}{ \left(\frac{1}{2} (-k+\ell)\right)! \left(\frac{1}{2} (\ell+k)\right)!} \, , \end{align} where $k$ is assumed to be even and $k < \ell+2$(since it is zero for $k$ odd or $k\geq l+2$). From the above expression one can check that the denominator is strictly increasing as $k$ increases from $2$ to $\ell$, thus $|\tilde{W}_\ell(k)|$ is strictly decreasing. Thus the nonzero values of $\tilde{W}_\ell(k)$ are distinct for $\ell$ even. For $\ell$ odd, we first note that by simple integration by parts we can derive the following recursion relation \begin{align} \tilde{W}_\ell(k)= -\frac{\ell(\ell-1)}{k^2 - \ell^2} \tilde{W}_{\ell-2}(k)\, \label{eq:onre}, \end{align} where again $k$ is even(and thus not equal to $\ell$). For $\ell=1$, we have the following alternative formula for $\tilde{W}_\ell(k)$ for even $k$: \begin{align} \tilde{W}_1(k)=\sqrt{\frac{2}{\pi }}\frac{ (\cos (\pi k)+1)}{1-k^2} \, \label{eq:onbif}. \end{align} It is clear now that for $\ell=1$, $\tilde{W}_1(k)$ has distinct(and in fact negative values) for $k$ even. From the recursion formula in~\eqref{eq:onre} it follows that this holds true for all odd $\ell$, i.e., $|\tilde{W}_\ell(k)|$ takes distinct values for $k$ even. Assume now that $\ell=1$(i.e. the Onsager model), then as mentioned earlier we can deduce from~\eqref{eq:onbif} that $\tilde{W}_1(k)$ is distinct and negative for all $k$ even. 
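Both the recursion~\eqref{eq:onre} and the formula~\eqref{eq:onbif} can be checked numerically against the closed form~\eqref{eq:onsf}. The following Python sketch is our own verification, not part of the proof; the helper `w_tilde` implements~\eqref{eq:onsf}, with $1/\Gamma$ set to zero at the poles of $\Gamma$.

```python
import math

def inv_gamma(x):
    # 1/Gamma(x); Gamma has poles at nonpositive integers, where 1/Gamma = 0.
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def w_tilde(ell, k):
    # Closed form (eq:onsf) for the Fourier modes of |sin(2 pi x / L)|^ell.
    return (math.sqrt(math.pi) * 2 ** (0.5 - ell) * math.cos(math.pi * k / 2)
            * math.gamma(ell + 1)
            * inv_gamma((-k + ell + 2) / 2) * inv_gamma((k + ell + 2) / 2))

def w1_closed(k):
    # Alternative formula (eq:onbif) for ell = 1 and even k.
    return math.sqrt(2 / math.pi) * (math.cos(math.pi * k) + 1) / (1 - k ** 2)

if __name__ == "__main__":
    # Recursion (eq:onre), used for odd ell: W_ell(k) = -ell(ell-1)/(k^2-ell^2) W_{ell-2}(k).
    for ell in (3, 5, 7):
        for k in (2, 4, 6, 8, 10):
            lhs = w_tilde(ell, k)
            rhs = -ell * (ell - 1) / (k ** 2 - ell ** 2) * w_tilde(ell - 2, k)
            print(ell, k, lhs, rhs)
    # For the Onsager case ell = 1, all even modes are negative.
    print([w_tilde(1, k) for k in (2, 4, 6, 8)])
```

The check also confirms that $\tilde W_2(k)$ has exactly one negative mode ($k=2$) and vanishes for even $k\geq 4$, consistent with (b).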
It follows that $\tilde{W}_1(k)$ satisfies the conditions of~\cref{thm:c1bif} for all even $k$, thus completing the proof of (a). Now let $\ell>2$ be even. It is clear from the expression in~\eqref{eq:onsf} that $\tilde{W}_\ell(k)$ can be negative only if $\cos(k \pi/2)/\Gamma \left(\frac{1}{2} (-k+\ell+2)\right)$ is negative. This happens if and only if $\frac{k}{2}$ is odd and $k<\ell+2$, since if $k \geq \ell+2$, $\frac{1}{\Gamma \left(\frac{1}{2} (-k+\ell+2)\right)}$ is evaluated at a nonpositive integer and thus $\tilde{W}_\ell(k)=0$. Since by the previous arguments each $\frac{k}{2}$ odd with $k<\ell+2$ corresponds to a distinct value of $\tilde{W}_\ell(k)$, we can apply~\cref{thm:c1bif} to deduce that such $k$ correspond to bifurcation points. Given an even $\ell>2$, there are $\frac{\ell}{4} + \frac{1}{2}$ such $k$ if $\frac{\ell}{2}$ is odd and $\frac{\ell}{4}$ if $\frac{\ell}{2}$ is even. This completes the proof of (c). Now let $\ell>2$ be odd. One can check again that, when $\frac{\ell-1}{2}$ is odd, $\tilde{W}_\ell(k)$ is negative if and only if $\frac{k}{2}$ is odd and $k < \ell +2$, while, when $\frac{\ell-1}{2}$ is even, it is negative for all even $k$, except that for $k < \ell+2$ one additionally requires $\frac{k}{2}$ to be odd. For $\frac{\ell-1}{2}$ odd there are $\frac{\ell +1}{4}$ such $k$, while for $\frac{\ell-1}{2}$ even there are infinitely many such $k$. Applying~\cref{thm:c1bif} again gives us (d). \end{proof} The above result provides us with a finer analysis than that presented in~\cite{chenstationary2010}, as we are able to count the solutions for general odd and even $\ell$, instead of just proving the existence of nontrivial solutions. The above result also generalises the work in~\cite{luciaexact2010}, which studied a truncated version of the Onsager model with only a finite number of modes and proved the existence of nontrivial solutions. It also partially recovers results from~\cite[Theorem 2]{niksirat2015stationary}, in which the non-truncated Onsager model is analysed. We refer the reader to~\cite{vollmer2017critical} for an analysis of the Onsager model in 2 dimensions, i.e., for liquid crystals that live in 3 dimensions with two degrees of freedom. \subsection{The Barr\'e--Degond--Zatorska model for interacting dynamical networks} The Barr\'e--Degond--Zatorska system~\cite{barre2017kinetic} models particles that interact through a dynamical network of links. Each particle interacts with its closest neighbours through cross-links modelled by springs which are randomly created and destroyed. Taking the combined mean field and overdamped limits, one obtains the McKean--Vlasov equation with the interaction potential given by \begin{align} W(x)= \begin{cases} (|x|-\ell)^2 -(R -\ell)^2 & |x| <R \\ 0 & |x|\geq R \end{cases} \, , \end{align} for two positive constants $0<\ell\leq R \leq L/2$. In~\cite[Theorem 6.1]{barre2017kinetic}, using formal asymptotic analysis, it was shown (and later numerically verified in~\cite{barre2018particle}) that one can provide conditions for continuous and discontinuous transitions for the above potential based on the values of the Fourier modes. We restate their result using our notation for the convenience of the reader.
\begin{prop}[{Sharp characterisation of the transition point by formal asymptotics~\cite[Theorem 6.1]{barre2017kinetic}}] Consider the Barr\'e--Degond--Zatorska model with $\ell, R,L$ chosen such that $\beta \kappa \tilde{W}(1)+ \sqrt{2L} <0$ and $\beta \kappa \tilde{W}(k)+ \sqrt{2L} >0$ for all $k \neq 1, k \in {\mathbb N}$. Then \begin{tenumerate} \item If $2 \tilde{W}(2) - \tilde{W}(1) >0$, then the system exhibits a continuous transition point. \item If $2 \tilde{W}(2) - \tilde{W}(1) <0$, then the system exhibits a discontinuous transition point. \end{tenumerate} \end{prop} The assumptions in the proposition essentially imply a separation of the Fourier modes. It follows immediately under these assumptions that $k=1$ satisfies the conditions of~\cref{thm:c1bif} and thus $\kappa_*=-\frac{ (2L)^{\frac{1}{2}} }{\beta \tilde{W}(1)} $ corresponds to a bifurcation point of the system. Additionally, looking at Figure~\ref{fig:dcctp}, one can see that conditions (a) and (b) from the above proposition are consistent with our analysis for the existence of continuous and discontinuous transition points. If $\tilde{W}(1)$ and $\tilde{W}(2)$ are resonating/near-resonating, then it follows that condition (b), i.e., $2 \tilde{W}(2) - \tilde{W}(1) <0$, must hold for $\delta_*$ small, where $\delta_*$ is as introduced in~\cref{def:del}. Indeed, let $k=1,2$ be elements of the set $K^{\delta_*}$; then we have $2 \tilde{W}(2) - \tilde{W}(1) =\tilde{W}(1) + 2(\tilde{W}(2)- \tilde{W}(1) ) \leq \tilde{W}(1) +2\delta_* <0$, for $\delta_*$ sufficiently small. Similarly, using~\cref{lem:comp} and comparing with an $\alpha$-stabilised potential, say $G_\alpha$, one can argue that if $\tilde{W}(1)$ is the dominant mode then condition (a), i.e., $2 \tilde{W}(2) - \tilde{W}(1) >0$, must hold for $\alpha$ small, where $\alpha$ is as defined in~\cref{defn:astable}.
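The conditions above are easy to evaluate for concrete parameter values. The following Python sketch is our own illustration: it assumes the normalisation $w_k(x)=\sqrt{2/L}\cos(2\pi k x/L)$, so overall constants may differ from the convention used here (the signs of the modes do not), and computes the modes $\tilde W(k)$ of the Barr\'e--Degond--Zatorska potential by quadrature before evaluating the sign of $2\tilde W(2)-\tilde W(1)$.

```python
import math

def simpson(f, lo, hi, n=4000):
    # Composite Simpson rule with n (even) subintervals.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def fourier_mode(W, k, L):
    # Assumed convention: W~(k) = ∫_{-L/2}^{L/2} W(x) sqrt(2/L) cos(2 pi k x / L) dx.
    return simpson(lambda x: W(x) * math.sqrt(2 / L) * math.cos(2 * math.pi * k * x / L),
                   -L / 2, L / 2)

def bdz_potential(ell, R):
    # Barre--Degond--Zatorska spring potential, supported on |x| < R.
    return lambda x: (abs(x) - ell) ** 2 - (R - ell) ** 2 if abs(x) < R else 0.0

if __name__ == "__main__":
    L, R, ell = 1.0, 0.25, 0.1  # sample parameters, 0 < ell <= R <= L/2
    W = bdz_potential(ell, R)
    w1, w2 = fourier_mode(W, 1, L), fourier_mode(W, 2, L)
    print("W~(1) =", w1, " W~(2) =", w2)
    print("2 W~(2) - W~(1) =", 2 * w2 - w1,
          "-> condition (a)" if 2 * w2 - w1 > 0 else "-> condition (b)")
```

For these sample parameters the potential is a nonpositive bump supported where $\cos(2\pi x/L)\geq 0$, so $\tilde W(1)<0$ as the proposition's standing assumption requires; varying $\ell$, $R$, and $L$ lets one explore which of (a) or (b) holds.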
\subsection{The Keller--Segel model for bacterial chemotaxis} The (elliptic-parabolic) Keller--Segel model is used to describe the motion of a group of bacteria under the effect of the concentration gradient of a chemical stimulus, whose distribution is determined by the density of the bacteria. This phenomenon is referred to as \emph{chemotaxis} in the biology literature~\cite{keller1971model}. For this system, $\varrho(x,t)$ represents the particle density of the bacteria and $c(x,t)$ represents the availability of the chemical resource. The dynamics of the system are then described by the following system of coupled PDEs: \begin{align} \label{eq:ksev} \begin{alignedat}{3} \partial_t \varrho& =\dive \bra*{\beta^{-1}\nabla \varrho + \kappa \varrho \nabla c} \qquad && (x,t) \in U \times (0,\infty)\, , \\ -(-\Delta)^{s} c &=\varrho \quad && (x,t) \in U \times [0,\infty) \, ,\\ \varrho(x,0) &= \varrho_0 \quad && x \in U \, ,\\ \varrho(\cdot, t) &\in C^2(U) \quad && t \in [0,\infty), \end{alignedat} \end{align} for $s \in(\frac{1}{2},1]$. The link between the model in~\eqref{eq:ksev} and the McKean--Vlasov equation is immediately noticed if one simply inverts $-(-\Delta)^s$ to obtain $c$. Thus, the stationary Keller--Segel equation is given by \begin{align} \dive \bra*{\beta^{-1}\nabla \varrho + \kappa \varrho \nabla \Phi^s \star \varrho}=0 \quad & x \in U \label{eq:sKS} \, , \end{align} with $\varrho \in C^2(\bar{U})$ and where $\Phi^s$ is the fundamental solution of $-(-\Delta)^s$. Since $\Phi^s$ does not, in general, satisfy assumption~\eqref{ass:B}, \cref{thm:wellpsPDE} does not apply directly. However, we can circumvent this issue to obtain the following result. \begin{thm} Consider the stationary Keller--Segel equation~\eqref{eq:sKS}. For $d\leq 2$ and $s \in (\frac{1}{2},1]$, it has smooth solutions and its trivial branch $(\varrho_\infty,\kappa)$ has infinitely many bifurcation points.
\end{thm} \begin{proof} \begin{figure}[t] \centering \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{ks2d.eps} \caption*{(a)} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{ks2dk.eps} \caption*{(b)} \end{minipage} \caption{(a) Contour plot of the Keller--Segel interaction potential $\Phi^s$ for $d=2$ and $s=0.51$. The orange lines indicate the positions at which the potential is singular. (b) The associated wave numbers which correspond to bifurcation points of the stationary system.} \label{fig:ks2d} \end{figure} $\Phi^s$ is given by the following formal Fourier series, \begin{align} \Phi^s(x) = -\bra*{\frac{2 \pi}{L}}^{2s} \sum_{k \in {\mathbb N}^d \setminus\set{0}} \frac{N_k}{|k|^{2s}} w_k \in \ensuremath{\mathcal D}(U)' \, . \end{align} The weak form of~\eqref{eq:sKS} is then given by \begin{align} -\beta^{-1}\intom{\nabla \varphi \cdot \nabla \varrho } -\kappa \intom{\varrho \nabla \varphi \cdot \nabla c}=0, \quad \forall \varphi \in \ensuremath{{H}}^1(U) \, , \label{eq:wsKS} \end{align} where we look for solutions $\varrho$ in $\ensuremath{{H}}^1(U)\cap \mathcal{P}_{\mathup{ac}}(U)$ and $c= \Phi^s \star \varrho$. We start by noticing that any fixed point of $\ensuremath{\mathcal T}^{ks}$ is a weak solution of~\eqref{eq:wsKS}, where the map $\ensuremath{\mathcal T}^{ks} : \ensuremath{{L}}^2(U)\to \ensuremath{{L}}^2(U)$ is defined as follows: \begin{align} \ensuremath{\mathcal T}^{ks}\varrho=\frac{1}{Z(c,\kappa,\beta)}e^{-\beta \kappa c}, \quad\text{where}\quad Z(c,\kappa,\beta)= \intom{e^{- \beta \kappa c}} \, .
\end{align} Indeed, let $\varrho$ be such a fixed point and $0<\epsilon<s-\frac{1}{2}$, then \begin{align} \sum\limits_{k \in {\mathbb Z}^d} |k|^{2 +2 \epsilon}|\tilde{c}(k)|^2&=\bra*{\frac{2 \pi}{L}}^{4s}\sum\limits_{k \in {\mathbb N}^d\setminus\set{0}} \frac{1}{|k|^{4s-2 -2 \epsilon}}\sum\limits_{k \in \mathrm{Sym}(\Lambda)} |\tilde{\varrho}(\sigma(k))|^2 < \infty \, . \end{align} Thus $c \in \ensuremath{{H}}^{1+ \epsilon}(U)$, which by the Sobolev embedding theorem for $d\leq 2$ implies that $c \in C^0(U)$. This tells us that $\varrho \in \ensuremath{{H}}^1(U)\cap \mathcal{P}_{\mathup{ac}}(U)$ with $\nabla \varrho = -\beta \kappa Z^{-1} \, e^{-\beta \kappa c}\nabla c $. Plugging $\varrho$ into~\eqref{eq:wsKS}, we see immediately that it is a solution. The reverse implication follows by arguments identical to those in~\cref{thm:wellpsPDE}. Since $\varrho_\infty$ is a solution to $\varrho=\ensuremath{\mathcal T}^{ks} \varrho$ for all $\kappa>0$, we need to check that any solution of the fixed point equation is smooth. Assume that $\varrho \in \ensuremath{{H}}^\ell(U)$, i.e., $\sum_{k \in {\mathbb Z}^d}|k|^{2\ell} | \tilde{\varrho}(k)|^2 < \infty$. Then for $0<\epsilon<s-\frac{1}{2}$ we have that \begin{align} \sum\limits_{k \in {\mathbb Z}^d} |k|^{2\ell+2 + 2\epsilon}|\tilde{c}(k)|^2 &=\bra*{\frac{2 \pi}{L}}^{4s}\sum\limits_{k \in {\mathbb N}^d\setminus\set{0}} \frac{|k|^{2\ell }}{|k|^{4 s -2 -2 \epsilon}}\sum\limits_{k \in \mathrm{Sym}(\Lambda)} |\tilde{\varrho}(\sigma(k))|^2 \, , \nonumber \\ &<\bra*{\frac{2 \pi}{L}}^{4s}\sum\limits_{k \in {\mathbb Z}^d} |k|^{2\ell } |\tilde{\varrho}(k)|^2 < \infty \, . \label{e:Gamma:est} \end{align} Thus $c \in \ensuremath{{H}}^{\ell+1 +\epsilon}(U)$ and by the Sobolev embedding theorem we have that $\ensuremath{{H}}^{\ell +1 +\epsilon}(U)$ is continuously embedded in $C^{\ell}(U)$. Thus for all multiindices $\alpha$ such that $|\alpha|\leq \ell$, we have that $\partial_\alpha c \in \ensuremath{{L}}^\infty(U)$\,.
Since $\varrho = Z^{-1} e^{-\beta \kappa c}$, computing $\partial_\alpha \varrho$ with $|\alpha|=\ell+1$ gives us: \begin{align} \partial_\alpha \varrho= -\beta \kappa \, Z^{-1} e^{-\beta \kappa c} \partial_\alpha c + F(Z^{-1},\beta \kappa,\partial_\xi c), \textrm{ for all } |\xi|\leq \ell \, . \end{align} Thus $\partial_\alpha c$ enters the expression for $\partial_\alpha \varrho$ linearly. Since all lower derivatives of $c(x)$ are bounded, one can then check that $\norm*{\partial_\alpha \varrho}_2 < \infty$ and thus $\varrho \in \ensuremath{{H}}^{\ell+1}(U)$. We can then bootstrap to obtain smooth solutions. Observe now that for $d \leq 2$ and $s \in (\frac{1}{2},1]$, $\Phi^s \in \ensuremath{{L}}^2_s(U)$. For $d=1$,~\cref{thm:c1bif} applies directly and the bifurcation points are given by: \begin{align} \kappa_*= \bra*{\frac{L}{2 \pi}}^{2s} \frac{|k|^{2s} L}{\beta}, \textrm{ for } d=1 \,. \end{align} For $d =2$ one can notice that $\Phi^s(x)=\Phi^s(\Pi(x))$ for any permutation $\Pi$ of the $d$ coordinates. Our strategy will be to apply~\cref{thm:c1bif} after reducing the problem to the symmetrised space $\ensuremath{{L}}^2_{\operatorname{ex}}(U)$ and then use the discussion in~\cref{rem:l2pi}. Then, showing that a particular $[k]$ corresponds to a bifurcation point reduces to the condition \begin{align} \mathrm{card}\set*{[k] : \frac{\tilde{W}([k])}{\Theta([k])}=\frac{\tilde{W}([k^*])}{\Theta([k^*])}}=\mathrm{card}\set*{ [k] : \frac{\tilde{W}([k])}{\Theta([k])}= -\bra*{\frac{2\pi}{L}}^{2s}\frac{1}{|k|^{2s}L}} =1 \ , \end{align} which holds for example for $[k]=\{(1,0),(0,1)\}$. We argue that $\kappa_*= -\frac{L^{\frac{d}{2}} \Theta(\pra{p^{n}})}{\beta \tilde{W}(\pra{p^{n}})} $, where $\pra{p^n}=\{(p^n,0),(0,p^n)\}$, $p$ is a prime, and $n \in {\mathbb N}$, satisfies the conditions of being a bifurcation point.
We need to check that \begin{align} \mathrm{card}\set*{ [k] : \frac{\tilde{W}([k])}{\Theta([k])}= -\bra*{\frac{2\pi}{L}}^{2s}\frac{1}{p^{2ns}L}} =1 \, , \end{align} which is equivalent to checking that, given a prime $p$, there is a unique way (up to permutations) of expressing $p^{2n}$ as the sum of two squares, namely $(p^n)^2 + 0^2$. Jacobi's two-square theorem tells us that the number of representations, $r(z)$, of a positive integer $z$ as the sum of two squares is given by the formula \begin{align} r(z)= (d_{1,4}(z)-d_{3,4}(z)) \, , \end{align} where $d_{\ell,4}(z)$ is the number of divisors of $z$ of the form $4 k +\ell, k\in {\mathbb N},\ell \geq 1$. If $p=2$, then $d_{1,4}(2^{2n})=1$ and $d_{3,4}(2^{2n})=0$ and thus $r(2^{2n})=1$. If $p$ is an odd prime of the form $4k+3$, the divisor $p^j$ of $p^{2n}$ is congruent to $1$ modulo $4$ for even $j$ and to $3$ modulo $4$ for odd $j$, so that $d_{1,4}(p^{2n})=1+n$ and $d_{3,4}(p^{2n})=n$ and thus $r(p^{2n})=1$. For primes of the form $4k+1$ uniqueness fails, since every divisor of $p^{2n}$ is congruent to $1$ modulo $4$ and $r(p^{2n})=2n+1$; for instance, $25=5^2+0^2=3^2+4^2$. We therefore restrict to $p=2$ and primes $p$ congruent to $3$ modulo $4$; by Dirichlet's theorem on arithmetic progressions there are infinitely many such primes, and hence still infinitely many bifurcation points. The expression for the bifurcation points then follows from the discussion in~\cref{rem:l2pi}. \end{proof}
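The divisor count entering the proof is elementary to confirm by brute force. The Python sketch below (the helper names are ours) counts representations $z=a^2+b^2$ with $0\leq a\leq b$ together with the divisor classes $d_{1,4}$ and $d_{3,4}$ appearing in Jacobi's formula.

```python
import math

def two_square_reps(z):
    # number of ways to write z = a^2 + b^2 with 0 <= a <= b
    count = 0
    a = 0
    while 2 * a * a <= z:
        b2 = z - a * a
        b = math.isqrt(b2)
        if b * b == b2 and b >= a:
            count += 1
        a += 1
    return count

def divisor_class_difference(z):
    # d_{1,4}(z) - d_{3,4}(z): divisors congruent to 1 minus those congruent to 3 (mod 4)
    d1 = sum(1 for d in range(1, z + 1) if z % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, z + 1) if z % d == 0 and d % 4 == 3)
    return d1 - d3

# p = 2 and p = 3 (mod 4): unique representation p^{2n} = (p^n)^2 + 0^2
unique = all(two_square_reps(p ** (2 * n)) == 1
             for p in (2, 3, 7, 11) for n in (1, 2))
# p = 1 (mod 4) fails: 25 = 5^2 + 0^2 = 3^2 + 4^2
extra = two_square_reps(25)
```

Running the checks confirms `unique` and that $25$ admits two essentially different representations, matching the divisor counts $d_{1,4}(25)-d_{3,4}(25)=3$.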
\section{Introduction} Dualities have vastly contributed towards a better understanding of string theory and beyond. A particular example is mirror symmetry \cite{Lerche:1989uy,Candelas:1989hd,Greene:1990ud,morrison-1993-6,Batyrev:1994hm,Batyrev:1994ju,Batyrev:1997tv,cox1999mirror,mirrorbook} which identifies two Type II superstring theories compactified on Calabi-Yau 3-folds whose Hodge numbers are swapped. A similar example, although only true at low energies, is \textit{toric (Seiberg) duality} \cite{Feng:2000mi,Feng:2001xr,Feng:2002zw,Seiberg:1994pq,Feng:2001bn,2001JHEP...12..001B,Franco:2003ea}. It relates supersymmetric worldvolume theories of D3-branes on singular toric Calabi-Yau 3-folds which have isomorphic mesonic moduli spaces. \begin{figure}[ht!!] \begin{center} \resizebox{0.801\hsize}{!}{ \includegraphics[trim=0cm 0cm 0cm 0cm,totalheight=16 cm]{ctree3.pdf} } \caption{\textit{The three dualities for Brane Tilings with Reflexive Toric Diagrams.} The arrows indicate toric duality (red), specular duality (blue), and reflexive duality (green) which is discussed in \cite{Hanany:2012hi}. The black nodes of the duality tree represent distinct brane tilings, where the labels are taken from \cite{Hanany:2012hi} and \fref{f_sumtoric2}. \label{ctree3}} \end{center} \end{figure} These $4d$ $\mathcal{N}=1$ supersymmetric field theories have mesonic moduli spaces which are toric Calabi-Yau 3-folds. Their geometry is encoded in a convex lattice polygon called the toric diagram. Furthermore, the theories are best expressed by periodic bipartite graphs on $\mathbb{T}^2$, otherwise known as brane tilings \cite{Hanany:2005ve,Franco:2005rj,Franco:2005sm,Hanany:2005ss,Hanany:2006nm,Kennaway:2007tq,Yamazaki:2008bt}. They represent the largest known class of supersymmetric field theories that are associated to toric Calabi-Yau 3-folds. The rich combinatorial structure of brane tilings led recently to new insights beyond toric duality. 
For instance, certain toric diagrams have a single interior point and exhibit the special property of appearing in polar dual pairs \cite{1997CMaPh.185..495K,Kreuzer:1998vb,Kreuzer:2000qv,Kreuzer:2000xy,2008arXiv0802.3376B,2008arXiv0809.4681C}. They are called reflexive toric diagrams and relate to a correspondence between brane tilings which was studied in \cite{Hanany:2012hi}. Given brane tilings A and B whose reflexive toric diagrams are a dual pair, the toric diagram of brane tiling A is the lattice of generators of the mesonic moduli space of brane tiling B, and vice versa. We call this correspondence \textit{reflexive duality}. In the following, we discuss a new correspondence that was named in \cite{Hanany:2012hi} \textbf{specular duality}. It identifies brane tilings which have isomorphic combined mesonic and baryonic moduli spaces, also known as master spaces $\mathcal{F}^\flat$. The following scenarios of brane tilings apply to this new duality: \begin{itemize} \item[1.] Dual brane tilings A and B are both on $\mathbb{T}^2$. They have reflexive toric diagrams. \item[2.] Brane tiling A is on $\mathbb{T}^2$ and dual brane tiling B is not on $\mathbb{T}^2$. Brane tiling A has a toric diagram which is not reflexive. \item[3.] Both brane tilings A and B are not on $\mathbb{T}^2$. \end{itemize} For brane tilings with reflexive toric diagrams, specular duality manifests itself not only as an isomorphism between master spaces. The additional properties are: \begin{itemize} \item The external/internal perfect matchings of brane tiling A are the internal/external perfect matchings of brane tiling B. \item The mesonic flavour symmetries of brane tiling A are the hidden or anomalous baryonic symmetries of brane tiling B, and vice versa. \end{itemize} The following paper studies specular duality restricted to brane tilings with reflexive toric diagrams. 
The Hilbert series of $\mathcal{F}^\flat$ is computed explicitly to illustrate its invariance under the new correspondence. The swap between external and internal perfect matchings, and mesonic and baryonic symmetries is explained. Moreover, we illustrate that specular duality is a reflection of the Calabi-Yau cone of $\mathcal{F}^\flat$ along a hyperplane. The new correspondence extends beyond brane tilings with reflexive toric diagrams. Accordingly, specular duality can lead to brane tilings on spheres or Riemann surfaces with genus $g\geq 2$. These have no known AdS dual and have mesonic moduli spaces which are not necessarily Calabi-Yau 3-folds \cite{Benvenuti:2004dw,Benvenuti:2005wi,Kennaway:2007tq}. Their quiver and superpotential however admit a master space which can be traced back to a brane tiling on $\mathbb{T}^2$. Specular duality for brane tilings that are not on $\mathbb{T}^2$ may lead to new insights into quiver gauge theories and Calabi-Yau moduli spaces. The work concludes with this observation and highlights the importance of future studies \cite{HananySeong11b}. \\ The paper is divided into the following sections. Section \sref{s2} reviews brane tilings and their mesonic moduli spaces and master spaces. They are analysed with the help of Hilbert series. Section \sref{s3} begins with a short review on toric duality and compares its properties with the characteristics of specular duality. The new correspondence between brane tilings is explained in terms of the untwisting map \cite{Feng:2005gw,Franco:2011sz,Stienstra:2007dy} and modified shivers \cite{Butti:2007jv,Franco:2007ii,Hanany:2008fj}. Section \sref{s4} studies and summarises the transformation of the brane tiling, the exchange of perfect matchings, and the swap of mesonic and baryonic symmetries under specular duality. 
The concluding section gives a short introduction on how specular duality relates brane tilings on $\mathbb{T}^2$ with tilings on spheres and Riemann surfaces of genus $g\geq 2$. \\ \section{Brane Tilings and their Moduli Spaces \label{s2}} The following section reviews brane tilings and their mesonic moduli spaces and master spaces. The method of calculating Hilbert series is reviewed. Readers who are familiar with these topics may skip the section and move on to the discussion of specular duality in Section \sref{s3}. \\ \subsection{Brane Tilings, F- and D-term charges, and the Toric Diagram \label{s2a}} A brane tiling represents a worldvolume theory of D3 branes on a singular non-compact Calabi-Yau 3-fold. It encodes the bifundamental matter content and superpotential of the theory. \\ A \textbf{quiver} is a graph which encodes as a set of $G$ nodes the $U(N)_i$ gauge groups and as a set of $e$ arrows the bifundamental fields $X_{ij}$ of the gauge theory. The number of incoming and outgoing arrows is the same at each node. The incidence matrix $d_{G\times e}$ of the graph encodes this property with its vanishing sum of rows.\footnote{$d_{G\times e}$ can therefore be reduced to its $G-1$ independent rows $\Delta_{(G-1)\times e}$.} This property is called the quiver's Calabi-Yau condition \cite{Feng:2002zw,Forcella:2008bb,Forcella:2008eh}. A \textbf{brane tiling} or dimer \cite{Hanany:2005ve,Franco:2005rj,Franco:2005sm,Kennaway:2007tq,2007arXiv0710.1898I} is a periodic bipartite graph on $\mathbb{T}^2$. It has the following components: \begin{itemize} \item \textbf{Faces} relate to $U(N)_i$ gauge groups. The ranks $N$ of all gauge groups are the same and equal to the number of probe D3-branes. \item \textbf{Edges} relate to the bifundamental fields. Every edge $X_{ij}$ in the tiling has two neighbouring faces $U(N)_i$ and $U(N)_j$. 
The orientation of the bifundamental field $X_{ij}$ is given by the orientation around the black and white nodes at the two ends of the corresponding edge. \item \textbf{White (resp. black) nodes} correspond to positive (negative) terms in the superpotential $W$. They have a clockwise (anti-clockwise) orientation. By following the orientation around a node, one can identify the fields of the related superpotential term in the correct cyclic order. \end{itemize} The geometry of the toric Calabi-Yau 3-fold is encoded in the brane tiling. A new basis of fields is defined from the set of quiver fields in order to describe both F-term and D-term constraints. The new fields are known as gauged linear sigma model (GLSM) fields \cite{Witten:1993yc} and are represented as perfect matchings \cite{Hanany:2005ve,Hanany:2005ss,Kennaway:2007tq,Forcella:2008bb} of the brane tiling:\begin{itemize} \item A \textbf{perfect matching} $p_\alpha$ is a set of bifundamental fields which connects to all nodes in the brane tiling precisely once. It corresponds to a point in the toric diagram of the Calabi-Yau 3-fold. A perfect matching which relates to an \textbf{extremal (corner)} point of the toric diagram has non-zero IR $U(1)_R$ charge. An \textbf{internal} point, as well as a non-extremal point on the perimeter of the toric diagram, has zero R-charge. We call all points on the perimeter \textbf{external}, including extremal ones. The number of internal, external and extremal perfect matchings is denoted by $n_i$, $n_e$ and $n_p$ respectively. All perfect matchings are summarized in a matrix $P_{e\times c}$ \cite{Forcella:2008bb}, where $e$ is the number of matter fields and $c$ the number of perfect matchings. The perfect matching matrix $P_{e\times c}$ takes the form \beal{es00_1cc00} P_{i\alpha}= \left\{ \begin{array}{ll} 1 & ~~\text{if} ~X_i \in p_\alpha \\ 0 & ~~\text{if} ~X_i \notin p_\alpha \end{array} \right. ~~, \end{eqnarray} where $i=1,\dots,e$ and $\alpha=1,\dots,c$.
\item \textbf{F-terms} $\partial_X W =0$ are encoded in $P_{e\times c}$. The charges under the F-term constraints are given by the kernel, \beal{es00_1c0} Q_{F~(c-G-2)\times c} = \ker{(P_{e \times c})}~~, \end{eqnarray} where $G$ is the number of gauge groups.\footnote{Note: $\ker$ used here takes the transpose of the matrix.} \item \textbf{D-terms} are encoded in the quiver incidence matrix $d_{G\times e}$. The charges $Q_{D~(G-1)\times c}$ under the D-term constraints are defined by \beal{es00_3} \Delta_{(G-1)\times e} = Q_{D~(G-1)\times c}.P^{t}_{c \times e}~~. \end{eqnarray} \end{itemize} The F- and D-term charge matrices are concatenated to form a total charge matrix \beal{es00_4} Q_{t~(c-3)\times c} = \left(\begin{array}{c} Q_F \\ Q_D\end{array}\right)~~. \end{eqnarray} The kernel of $Q_t$, \beal{es00_5} G_t = \ker(Q_t)~~, \end{eqnarray} corresponds to a matrix whose columns relate to perfect matchings; the entries of each column are the coordinates of the associated point in the toric diagram. \\ \subsection{The Master Space $\mathcal{F}^\flat$ and the Mesonic Moduli Space $\mathcal{M}^{mes}$ \label{s2b}} \noindent\textbf{Master Space $\mathcal{F}^\flat$.} The master space is the combined mesonic and baryonic moduli space. It has the following properties: \begin{itemize} \item The \textbf{master space} \cite{Forcella:2008bb,Forcella:2008eh,Hanany:2010zz} of the one D3-brane theory relates to the following quotient ring \beal{es2b_1} \mathbb{C}^{e}[X_{1},\dots,X_{e}]/\mathcal{I}_{\partial W =0}~~, \end{eqnarray} where $e$ is the number of bifundamental fields $X_i$. $\mathbb{C}^{e}[X_{1},\dots,X_{e}]$ is the complex ring over all bifundamental fields, and $\mathcal{I}_{\partial W=0}$ is the ideal formed by the F-terms. \item The master space in \eref{es2b_1} is usually reducible into components. The largest irreducible component is known as the \textbf{coherent component} ${}^{\text{Irr}}\mathcal{F}^\flat$ and is toric Calabi-Yau.
All other smaller components are generally linear pieces of the form $\mathbb{C}^l$. In our discussion, we will concentrate on the coherent component of the master space and for simplicity use $\mathcal{F}^\flat$ and ${}^{\text{Irr}}\mathcal{F}^\flat$ interchangeably. \item The coherent component of the master space is the following \textbf{symplectic quotient} \beal{es2b_2} {}^{\text{Irr}}\mathcal{F}^\flat = \mathbb{C}^{c}[p_1,\dots,p_c]// Q_F ~~, \end{eqnarray} where $c$ is the number of perfect matchings $p_\alpha$ in the brane tiling. The symplectic quotient captures invariants of the ring $\mathbb{C}^c[p_1,\dots,p_c]$ under the charges $Q_F$ in \eref{es00_1c0}. \item The \textbf{dimension} of the master space $\mathcal{F}^\flat$ is $G+2$. \comment{The mesonic symmetry is $U(1)^3$ or an enhancement of $U(1)^3$ with rank $3$ where one $U(1)$ is R and two $U(1)$'s are flavor symmetries. The baryonic symmetry is $U(1)^{G-1}$ or an enhancement of $U(1)^{G-1}$ with rank $G-1$ which includes both \textbf{anomalous} or enhanced \textbf{hidden} symmetries and \textbf{non-anomalous} baryonic symmetries.} \end{itemize} \noindent The master space exhibits the following symmetries: \begin{itemize} \item The \textbf{mesonic symmetry} is $U(1)^3$ or an enhancement with rank $3$. An enhancement is indicated by extremal perfect matchings which carry the same $Q_F$ charges. The mesonic symmetry contains the $U(1)_R$ symmetry and the flavor symmetries. It derives from the isometry of the toric Calabi-Yau 3-fold. \item The \textbf{baryonic symmetry} is $U(1)^{G-1}$ or an enhancement with rank $G-1$. An enhancement is indicated by non-extremal perfect matchings which carry the same $Q_F$ charges. It contains both anomalous and non-anomalous symmetries which have decoupling gauge dynamics in the IR. Non-Abelian extensions of these symmetries are known as \textbf{hidden symmetries} \cite{Forcella:2008bb,Forcella:2008eh,Hanany:2010zz}. 
\end{itemize} \noindent Let $I$ and $E$ denote respectively the number of internal and external points in the toric diagram.\footnote{Note: Points in the toric diagram can carry multiplicities according to the number of perfect matchings associated to them. $I$ and $E$ are counts that ignore multiplicities, and hence there is no direct correspondence to the number of perfect matchings $n_i$, $n_e$ and $n_p$.} They are used to define the following quantities: \begin{itemize} \item The number of \textbf{anomalous} $U(1)$ baryonic symmetries or the total rank of enhanced \textbf{hidden} baryonic symmetries is given by $2 I$. \item The number of \textbf{non-anomalous} baryonic $U(1)$'s is $E -3$. \item The total number of baryonic symmetries is, as stated above, $G-1$. Accordingly, \beal{es2c_10} G-1 = 2 I + E -3 ~\Rightarrow~ A=\frac{G}{2} = I + \frac{E}{2} - 1 \end{eqnarray} which is \textbf{Pick's theorem} generalised to toric diagrams. The unit square area $A$ of a toric diagram is scaled by a factor of 2 in order to relate it to the number of $U(N)$ gauge groups $G$. \end{itemize} Perfect matchings carry charges under the mesonic and baryonic symmetries. The choices of assigning charges on perfect matchings are subject to certain basic constraints which are summarized in appendix \sref{appch}. \\ \noindent\textbf{Mesonic Moduli Space $\mathcal{M}^{mes}$.} The mesonic moduli space is a subspace of the master space. It has the following properties: \begin{itemize} \item The \textbf{mesonic moduli space} for the one D3-brane theory is the following symplectic quotient \beal{es2c_20} \mathcal{M}^{mes}=(\mathbb{C}^c[p_1,\dots,p_c]//Q_F)//Q_D = \mathcal{F}^\flat // Q_D~~. \end{eqnarray} \item The mesonic moduli space is a toric Calabi-Yau $3$-fold and is generally lower dimensional than the master space. \end{itemize} \noindent\textbf{Hilbert Series.} The Hilbert series is a generating function which counts chiral gauge invariant operators.
It contains information on moduli space generators and their relations. A method known as \textbf{plethystics} \cite{Benvenuti:2006qr,Feng:2007ur,Hanany:2007zz} is used to extract the information from the Hilbert series. For charges $Q$ which are either $Q_F$ or $Q_t$ for ${}^{\text{Irr}}\mathcal{F}^\flat$ and $\mathcal{M}^{mes}$ respectively, the corresponding Hilbert series is given by the \textbf{Molien Integral} \beal{es2c_30} g_1(y_\alpha;\mathcal{M}) = \prod_{i=1}^{|Q|} \oint_{|z_i|=1} \frac{\mathrm{d} z_i}{2\pi i z_i} \prod_{\alpha=1}^{c} \frac{1}{1-y_\alpha \prod_{j=1}^{|Q|} z_j^{{Q}_{j\alpha}}} \end{eqnarray} where $c$ is the number of perfect matchings and $|Q|$ is the number of rows in the charge matrix $Q$. The fugacity $y_\alpha=t_\alpha$ counts extremal perfect matchings, while the fugacities $y_\alpha=s_m$ count all other perfect matchings. \\ \section{An introduction to Specular Duality \label{s3}} The following section reviews toric duality of brane tilings and compares it with specular duality. The section illustrates how the new correspondence is related to the untwisting map \cite{Feng:2005gw,Franco:2011sz,Stienstra:2007dy} and the shiver \cite{Butti:2007jv,Franco:2007ii,Hanany:2008fj}. We focus on the 30 brane tilings with reflexive toric diagrams. \\ \subsection{Toric Duality and Specular Duality \label{s2d}} \noindent\textbf{Toric Duality.} Two $4d$ quiver gauge theories with brane tilings are called toric dual \cite{Feng:2000mi,Feng:2001xr,Feng:2002zw,Seiberg:1994pq,Feng:2001bn,2001JHEP...12..001B,Franco:2003ea} if in the UV they have different Lagrangians with a different field content and superpotential, but flow to the same universality class in the IR. \begin{figure}[ht!] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=12 cm]{fseiberg.pdf} \caption{\textit{`Urban Renewal'.} Toric duality acts on the brane tiling of the zeroth Hirzebruch surface $F_0$. The points in the toric diagram correspond to perfect matchings and GLSM fields.
Perfect matchings are defined as sets of quiver fields.} \label{fseiberg} \end{center} \end{figure} Let us summarise the properties of toric duality for brane tilings: \begin{itemize} \item The \textit{mesonic moduli spaces} $\mathcal{M}^{mes}$ are the same, but the master spaces ${}^{\text{Irr}}\mathcal{F}^\flat$ are not \cite{Forcella:2008ng}. The mesonic Hilbert series are the same up to a fugacity map. \item The \textit{toric diagrams} of $\mathcal{M}^{mes}$ are $GL(2,\mathbb{Z})$ equivalent. However, multiplicities of internal toric points with zero R-charge can differ. \item The number of \textit{gauge groups} $G$ remains constant. \end{itemize} \vspace{0.5cm} \noindent\textit{Example.} The Hirzebruch $\mathbb{F}_0$ model has the superpotential \beal{es2c_50} W_{I} = \underbracket[0.1mm]{X_{14}^{1} X_{42}^{1} X_{23}^{1} X_{31}^{1}}_{A} + \underbracket[0.1mm]{X_{14}^{2} X_{42}^{2} X_{23}^{2} X_{31}^{2}}_{B} - \underbracket[0.1mm]{X_{14}^{2} X_{42}^{1} X_{23}^{2} X_{31}^{1}}_{C} - \underbracket[0.1mm]{X_{14}^{1} X_{42}^{2} X_{23}^{1} X_{31}^{2}}_{D} ~~, \end{eqnarray} with the corresponding brane tiling and toric diagram shown in the left panel of \fref{fseiberg}. By dualizing on the gauge group $U(N)_2$, the superpotential becomes \beal{es2c_51} W_{II} &=& \underbracket[0.1mm]{X_{14}^{1}X_{43}^{1}X_{31}^{1}}_{A} + \underbracket[0.1mm]{X_{14}^{2}X_{43}^{2}X_{31}^{2}}_{B} - \underbracket[0.1mm]{X_{14}^{2}X_{43}^{3}X_{31}^{1}}_{C} - \underbracket[0.1mm]{X_{14}^{1}X_{43}^{4}X_{31}^{2}}_{D} \nonumber\\ && + \underbracket[0.1mm]{X_{14}^{1}X_{43}^{3}X_{31}^{2}}_{E} + \underbracket[0.1mm]{X_{14}^{2}X_{43}^{4}X_{31}^{1}}_{F} - \underbracket[0.1mm]{X_{14}^{1}X_{43}^{1}X_{31}^{1}}_{G} - \underbracket[0.1mm]{X_{14}^{2}X_{43}^{2}X_{31}^{2}}_{H} \end{eqnarray} with the associated brane tiling and toric diagram shown in the right panel of \fref{fseiberg}. The figure labels toric points with the associated perfect matchings. 
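The chain $P \to Q_F,\, Q_D \to Q_t \to G_t$ of Section \sref{s2a} can be traced by hand for a tiling even simpler than either $\mathbb{F}_0$ phase, namely the conifold, whose standard toric data is $c=4$ perfect matchings, $G=2$, $Q_F$ empty and $Q_t=Q_D=(1,1,-1,-1)$. The Python sketch below checks a choice of integer kernel basis (the explicit basis is our own illustrative choice; any $GL(3,\mathbb{Z})$-equivalent one works) and recovers the unit-square toric diagram at height one.

```python
# Standard toric data of the conifold: c = 4 perfect matchings, G = 2,
# so Q_F is empty (c - G - 2 = 0) and Q_t = Q_D = (1, 1, -1, -1)
Q_t = [(1, 1, -1, -1)]

# One choice of integer basis for ker(Q_t); its columns are toric points
G_t = [
    (1, 1, 1, 1),  # the all-ones row exists because the rows of Q_t sum to zero
    (1, 0, 1, 0),
    (0, 1, 1, 0),
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# kernel condition: every row of G_t is annihilated by every row of Q_t
kernel_ok = all(dot(q, g) == 0 for q in Q_t for g in G_t)

# Calabi-Yau condition: all columns (toric points) lie on a height-one plane
points = list(zip(*G_t))
heights_ok = all(p[0] == 1 for p in points)

# dropping the height coordinate leaves the 2d toric diagram
diagram = sorted(p[1:] for p in points)  # the unit square for the conifold
```

The same bookkeeping, with larger matrices, underlies the perfect-matching labels in the toric diagrams of the figure above.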
\\ \noindent\textbf{Specular Duality.} The new correspondence has the following properties for dual brane tilings: \begin{itemize} \item ${}^{\text{Irr}}\mathcal{F}^\flat$ are isomorphic\footnote{Note: Specular duality extends to the full master space $\mathcal{F}^\flat$. We restrict the discussion to its largest component ${}^{\text{Irr}}\mathcal{F}^\flat$.} and the Hilbert series are the same up to a fugacity map. \item Except for self-dual cases, $\mathcal{M}^{mes}$ are not the same. \item The number of gauge groups $G$ remains invariant. \item The number of matter fields $E$ remains invariant. \end{itemize} There are 16 reflexive toric diagrams. They are summarized in \fref{f_sumtoric2} \cite{Hanany:2012hi} and relate to 30 brane tilings. Specular duality exhibits additional properties for this set of brane tilings: \begin{itemize} \item Internal/external perfect matchings of brane tiling A become external/internal perfect matchings of the dual brane tiling B. \item The mesonic flavour symmetries of brane tiling A become the anomalous or enhanced hidden baryonic symmetries of brane tiling B. \end{itemize} \begin{figure}[H] \begin{center} \resizebox{0.93\hsize}{!}{ \includegraphics[trim=0cm 0cm 0cm 0cm,totalheight=18 cm]{sumtoric2.pdf} } \caption{\textit{Reflexive Toric Diagrams.} The figure shows the $16$ reflexive toric diagrams which correspond to $30$ brane tilings. Each polygon is labelled by $(G|n_p:n_i|n_W)$, where $G$ is the number of $U(N)$ gauge groups, $n_p$ is the number of extremal perfect matchings, $n_i$ is the number of internal perfect matchings, and $n_W$ is the number of superpotential terms. A reflexive polygon can correspond to multiple brane tilings by toric duality. 
\label{f_sumtoric2}} \end{center} \end{figure} \begin{table}[H] \centering \begin{tabular}{|c|c|} \hline $d$ & Number of Polytopes\\ \hline\hline 1 & 1 \\ \hline 2 & 16\\ \hline 3 & 4319\\ \hline 4 & 473800776\\ \hline \end{tabular} \caption{\textit{Counting Reflexive Polytopes.} Number of distinct reflexive lattice polytopes in dimension $d\leq 4$. The number of polytopes forms a sequence which has the OEIS identifier A090045.} \label{tpolycount} \end{table} \noindent As noted above, specular duality exhibits additional properties for brane tilings with reflexive toric diagrams. Many of the 30 brane tilings which correspond to the 16 reflexive polygons are toric duals \cite{Hanany:2012hi}. Reflexive polytopes have the following properties: \begin{itemize} \item A \textbf{reflexive polytope} is a convex $\mathbb{Z}^{d}$ lattice polytope whose unique interior point is the origin $(0,\dots,0)$. \item A \textbf{dual (polar) polytope} exists for every reflexive polytope $\Delta$. The dual $\Delta^{\circ}$ is another lattice polytope with points \beal{es00_20} \Delta^{\circ}=\{ v^{\circ}\in\mathbb{Z}^d ~|~ \langle v^{\circ},v \rangle \geq -1 ~\forall v\in \Delta \} \end{eqnarray} $\Delta^{\circ}$ is another reflexive polytope. There are self-dual polytopes, $\Delta=\Delta^{\circ}$.\footnote{Note that this duality between reflexive polytopes does not correspond to specular duality.} \item A \textbf{classification of reflexive polytopes} \cite{Kreuzer:1998vb,Kreuzer:2000qv,Kreuzer:2000xy} is available for the dimensions $d\leq 4$ as shown in \tref{tpolycount}. \end{itemize} \vspace{0.5cm} \begin{figure}[ht!] \begin{center} \resizebox{1\hsize}{!}{ \includegraphics[trim=0cm 0cm 0cm 0cm,totalheight=18 cm]{ctree2d.pdf} } \caption{\textit{Toric and Specular Duality.} These are the duality trees of brane tilings (nodes) with reflexive toric diagrams. The brane tiling labels are taken from \cite{Hanany:2012hi} and \fref{f_sumtoric2}. 
Arrows indicate toric duality (red) and specular duality (blue). \label{fctree2}} \end{center} \end{figure} Specular duality preserves the reflexivity of the toric diagram and the set of $30$ brane tilings in \fref{f_sumtoric2}: \beal{es2c_40} & 1 \leftrightarrow 1 & \nonumber\\ & 2 \leftrightarrow 4d ~,~ 3a \leftrightarrow 4c ~,~ 3b \leftrightarrow 3b ~,~ 4a \leftrightarrow 4a ~,~ 4b \leftrightarrow 4b & \nonumber\\ & 5 \leftrightarrow 6c ~,~ 6a \leftrightarrow 6a ~,~ 6b \leftrightarrow 6b & \nonumber\\ & 7 \leftrightarrow 10d ~,~ 8a \leftrightarrow 10c ~,~ 8b \leftrightarrow 9c ~,~ 9a \leftrightarrow 10b ~,~ 9b \leftrightarrow 9b ~,~ 10a \leftrightarrow 10a & \nonumber\\ & 11 \leftrightarrow 12b ~,~ 12a \leftrightarrow 12a & \nonumber\\ & 13 \leftrightarrow 15b ~,~ 14 \leftrightarrow 14 ~,~ 15a \leftrightarrow 15a & \nonumber\\ & 16 \leftrightarrow 16 & ~. \end{eqnarray} \fref{fspecdual} shows the reflexive toric diagrams with specular dual brane tilings. \\ \noindent\textbf{Self-dual Brane Tilings.} Certain brane tilings with reflexive toric diagrams are self-dual. These are: \beal{es2c_41} 1~,~ 3b~,~ 4a~,~ 4b~,~ 6a~,~ 6b~,~ 9b~,~ 10a~,~ 12a~,~ 14~,~ 15a~,~ 16~, \end{eqnarray} which are summarized in \fref{fspecselfdual}. The toric diagram and brane tiling are invariant under specular duality. \\ \begin{landscape} \begin{figure}[ht!!] \begin{center} \includegraphics[trim=0.5cm 0.5cm 0cm 0.5cm,totalheight=14.6 cm]{specdual.pdf} \end{center} \caption{\textit{Arbor specularis}. The 30 reflexive toric diagrams with perfect matching multiplicities. The models are labelled with $(n_i,n_e)$, where $n_i$ and $n_e$ are the number of internal and external perfect matchings respectively. The $y$-axis is labelled by the number of gauge groups $G$ or the area of the polygon, and the position along the $x$-axis relates to the difference $n_i-n_e$. \label{fspecdual}} \end{figure} \end{landscape} \begin{figure}[ht!!] 
\begin{center} \includegraphics[trim=0.5cm 0.5cm 0cm 0.5cm,totalheight=14.5 cm]{specselfdual.pdf} \end{center} \caption{\textit{Self-duals under Specular Duality.} These are the 12 reflexive toric diagrams which have self-dual brane tilings. The models are labelled with $(n_i,n_e)$, where $n_i$ and $n_e$ are the number of internal and external perfect matchings respectively. \label{fspecselfdual}} \end{figure} \subsection{Specular Duality and `Fixing' Shivers \label{s4}} As illustrated in Section \sref{s2d}, toric duality has a natural interpretation on the brane tiling. The following section identifies the interpretation of specular duality on the brane tiling. A toric singularity has an associated \textbf{characteristic polynomial}, also known as the Newton polynomial, \beal{es4_1} P(w,z)= \sum_{(n_1,n_2)\in \Delta} a_{n_1,n_2} w^{n_1} z^{n_2} ~~, \end{eqnarray} where the sum runs over all points in the toric diagram, and $a_{n_1,n_2}$ is a complex number. The geometric description of the \textbf{mirror manifold} \cite{Hori:2000kt,Hori:2000ck,Feng:2005gw} is \beal{es4_2} Y&=&P(w,z)~,~ \nonumber\\ Y&=& u v~, \end{eqnarray} where $w,z\in\mathbb{C}^{*}$ and $u,v\in\mathbb{C}$. The curve $P(w,z)-Y=0$ describes a punctured \textbf{Riemann surface} $\Sigma_Y$ with \begin{itemize} \item the genus $g$ corresponding to the number $I$ of internal toric points \item the number of punctures corresponding to the number $E$ of external toric points. \end{itemize} The Riemann surface is fibered over each point in $Y$. Of particular interest to us is the Riemann surface $\Sigma$ fibered over the origin $Y=0$. It is related to the brane tiling on $\mathbb{T}^2$ under the \textbf{untwisting map} $\phi_u$ \cite{Feng:2005gw,Franco:2011sz,Stienstra:2007dy}. A brane tiling consists of \textbf{zig-zag paths} $\eta_i$ \cite{2003math.ph...5057K,Hanany:2005ss}. These are collections of edges in the tiling that form closed non-trivial paths on $\mathbb{T}^2$. 
Zig-zag paths maximally turn left at a black node and then maximally turn right at the next adjacent white node. The \textbf{winding numbers} $(p,q)$ of zig-zag paths relate to the $\mathbb{Z}^2$ direction of the corresponding leg in the $(p,q)$-web \cite{Aharony:1997bh}. The dual of the $(p,q)$-web is the toric diagram. By thickening the $(p,q)$-web, one obtains the punctured Riemann surface $\Sigma$. The untwisting map $\phi_u$ has the following action on the brane tiling: \beal{esu1} \phi_u ~:~\hspace{2cm} \text{brane tiling on}~\mathbb{T}^2 &\rightarrow& \text{shiver on}~\Sigma \nonumber\\ \text{zig-zag path}~\eta_i &\mapsto& \text{puncture}~\gamma_i \nonumber\\ \text{face/gauge group}~U(N)_a &\mapsto& \text{zig-zag path}~\tilde{\eta}_a \nonumber\\ \text{node/term}~w_k,~b_k &\mapsto& \text{node/term}~w_k,~b_k \nonumber\\ \text{edge/field}~X_{ab} &\mapsto& \text{edge/field}~X_{ij}~~, \end{eqnarray} where $a,b$ count $U(N)$ gauge groups/brane tiling faces, $i,j$ count zig-zag paths on the original brane tiling on $\mathbb{T}^2$, and $\tilde{\eta}_a$ are zig-zag paths of the shiver on $\Sigma$. An illustration of the untwisting map is in \fref{funtwist}. 
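The lattice data entering this construction can be made concrete with a small computation. The sketch below, a hypothetical helper rather than anything used in the text, takes the toric diagram of $\mathbb{F}_0$ (Model 15b) in a standard $GL(2,\mathbb{Z})$ frame, counts its internal and external points (so genus $g=I=1$ and $E=4$ punctures), checks the gauge group count $G=2\,\text{Area}$ via Pick's theorem, and computes the polar dual of \eref{es00_20} (which, as noted earlier, is a different duality from specular duality):

```python
# Lattice-point bookkeeping for a reflexive toric diagram (pure Python).
# Example: the toric diagram of F_0 (Model 15b) in a standard frame.
verts = [(1, 0), (0, 1), (-1, 0), (0, -1)]     # counter-clockwise vertices

def cross(o, a, p):
    """2D cross product of (a - o) and (p - o)."""
    return (a[0]-o[0])*(p[1]-o[1]) - (a[1]-o[1])*(p[0]-o[0])

def classify(p, verts):
    """Return 'in', 'bd' or 'out' for a convex ccw polygon."""
    cs = [cross(verts[i], verts[(i+1) % len(verts)], p)
          for i in range(len(verts))]
    if all(c > 0 for c in cs):
        return 'in'
    return 'bd' if all(c >= 0 for c in cs) else 'out'

box = range(-2, 3)
I = sum(classify((x, y), verts) == 'in' for x in box for y in box)
E = sum(classify((x, y), verts) == 'bd' for x in box for y in box)
print(I, E)   # 1 4 -> genus g = 1 and four punctures

# Pick's theorem: Area = I + E/2 - 1, and G = 2 * Area gauge groups.
assert 2*(I + E//2 - 1) == 4                   # Model 15b has G = 4

# Polar dual polytope: <v', v> >= -1 for all v in Delta (checked on vertices).
dual = [(a, b) for a in range(-3, 4) for b in range(-3, 4)
        if all(a*vx + b*vy >= -1 for (vx, vy) in verts)]
# The dual is the square [-1,1]^2, which is again reflexive.
assert len(dual) == 9 and (0, 0) in dual
```

Since the unique interior point sits at the origin, the diagram is reflexive in the sense defined above, consistent with Model 15b appearing in the list of 30 brane tilings with reflexive toric diagrams.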
\begin{figure}[H] \begin{center} \begin{tabular}{ccc} brane tiling on $\mathbb{T}^2$ & $\stackrel{\phi_u}{\rightarrow}$ & shiver on $\Sigma$\\ zig-zag path $\eta_i$ & $\mapsto$ & puncture $\gamma_i$\\ face/gauge group $U(N)_a$ & $\mapsto$ & zig-zag path $\tilde{\eta}_a$ \\ node/term $w_k,~b_k$ & $\mapsto$ & node/term $w_k,~b_k$\\ edge/field $X_{ab}$ & $\mapsto$ & edge/field $X_{ij}$\\ &&\\ \includegraphics[trim=0cm 0cm 0cm 0cm,width=5 cm]{untwist1.pdf} & & \includegraphics[trim=0cm 0cm 0cm 0cm,width=5 cm]{untwist2.pdf} \end{tabular} \end{center} \caption{\textit{The Untwisting Map $\phi_u$.} The untwisting map relates a brane tiling on $\mathbb{T}^2$ to a shiver on a punctured Riemann surface $\Sigma$.\label{funtwist}} \end{figure} The untwisted brane tiling on $\Sigma$ is known as a \textbf{shiver} \cite{Butti:2007jv,Franco:2007ii,Hanany:2008fj}. It is not associated to a quiver, a superpotential and a field theory moduli space, and therefore can be interpreted as a `pseudo-brane tiling' on a punctured Riemann surface. An interesting question to ask at this point is whether a shiver can be `fixed' by a map $\phi_f$ such that it becomes a consistent brane tiling. The main obstructions are the punctures of $\Sigma$ which have no interpretation in the quiver gauge theory context. Let the punctures therefore be identified with $U(N)$ gauge groups under the following definition of the \textbf{shiver fixing map}: \beal{esu2} \phi_f ~:~ \hspace{2cm} \text{shiver on}~\Sigma &\rightarrow& \text{brane tiling on}~\Sigma \nonumber\\ \text{puncture}~\gamma_i &\mapsto& \text{face/gauge group}~U(N)_i~~, \end{eqnarray} with the zig-zag paths $\tilde{\eta}_a$, nodes $w_k$ and $b_k$, and edges $X_{ij}$ on the shiver remaining invariant. 
Accordingly, using the shiver fixing map $\phi_f$ and the untwisting map $\phi_u$, \textbf{specular duality} on brane tilings can be defined as follows \beal{esu3} \phi_{\text{specular}} = \phi_f \circ \phi_u ~:~ \hspace{2cm} \text{brane tiling A on}~\mathbb{T}^2 &\rightarrow& \text{brane tiling B on}~\Sigma \nonumber\\ \text{zig-zag path}~\eta_i &\mapsto& \text{face/gauge group}~U(N)_i \nonumber\\ \text{face/gauge group}~U(N)_a &\mapsto& \text{zig-zag path}~\tilde{\eta}_a \nonumber\\ \text{node/term}~w_k,~b_k &\mapsto& \text{node/term}~w_k,~b_k \nonumber\\ \text{edge/field}~X_{ab} &\mapsto& \text{edge/field}~X_{ij}~~, \end{eqnarray} where $\phi_{\text{specular}}$ is invertible. A graphical illustration of $\phi_{\text{specular}}$ is in \fref{fspecularmap}. \begin{figure}[H] \begin{center} \resizebox{\hsize}{!}{ \begin{tabular}{ccccc} brane tiling A on $\mathbb{T}^2$ & $\stackrel{\phi_u}{\rightarrow}$ & shiver on $\Sigma$ & $\stackrel{\phi_f}{\rightarrow}$ & brane tiling B on $\Sigma$ \\ zig-zag path $\eta_i$ & $\mapsto$ & puncture $\gamma_i$ & $\mapsto$ & face/gauge group $U(N)_i$ \\ face/gauge group $U(N)_a$ & $\mapsto$ & zig-zag path $\tilde{\eta}_a$ & $\mapsto$ & zig-zag path $\tilde{\eta}_a$ \\ node/term $w_k,~b_k$ & $\mapsto$ & node/term $w_k,~b_k$ & $\mapsto$ & node/term $w_k,~b_k$ \\ edge/field $X_{ab}$ & $\mapsto$ & edge/field $X_{ij}$ & $\mapsto$ & edge/field $X_{ij}$ \\ &&\\ \includegraphics[trim=0cm 0cm 0cm 0cm,width=7 cm]{untwist1b.pdf} & & \includegraphics[trim=0cm 0cm 0cm 0cm,width=7 cm]{untwist2b.pdf} && \includegraphics[trim=0cm 0cm 0cm 0cm,width=7 cm]{untwist3.pdf} \end{tabular} } \end{center} \caption{\textit{Specular Duality on a Brane Tiling.} The map $\phi_{\text{specular}}=\phi_f \circ \phi_u$ which defines specular duality first untwists a brane tiling and then replaces punctures with $U(N)$ gauge groups.\label{fspecularmap}} \end{figure} For a brane tiling to have a Calabi-Yau 3-fold as its mesonic moduli space and to have a known AdS dual 
\cite{Benvenuti:2004dw,Benvenuti:2005wi,Kennaway:2007tq}, it needs to be on $\mathbb{T}^2$. Brane tilings with reflexive toric diagrams have a specular dual which is always on $\Sigma=\mathbb{T}^2$. This is because a reflexive toric diagram has by definition $I=1$, so the associated Riemann surface always has genus $g=1$. \\ \noindent\textbf{Invariance of the master space ${}^{\text{Irr}}\mathcal{F}^\flat$.} Specular duality has an important effect on a brane tiling's superpotential $W$, which can be demonstrated with the following example \beal{esuu1} W = \dots + A B C - A D E + \dots ~~, \end{eqnarray} where $A,\dots,E$ are quiver fields.\footnote{There is an overall trace in the superpotential which is not written down for simplicity.} The corresponding nodes in the brane tiling are illustrated along with zig-zag paths in the left panel of \fref{fcyclic}. \begin{figure}[H] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=15 cm]{untwist4.pdf} \end{center} \caption{\textit{Untwisting the Superpotential.} There are two equivalent ways of untwisting the brane tiling. The order of fields around either a white (clockwise) or black (anti-clockwise) node in the brane tiling is reversed under the untwisting. Either way results in the same brane tiling. \label{fcyclic}} \end{figure} Specular duality untwists the brane tiling in such a way that the order of quiver fields around either white (clockwise) nodes or black (anti-clockwise) nodes is reversed. For the example in \eref{esuu1}, the superpotential of the dual brane tiling has either the form \beal{esuu2} W_{\text{(a)}}= \dots + A C B - A D E + \dots \end{eqnarray} or the form \beal{esuu3} W_{\text{(b)}}= \dots + A B C - A E D + \dots \end{eqnarray} as illustrated in the right panel of \fref{fcyclic}. The options of reversing the orientation around white nodes or black nodes are equivalent up to an overall swap of node colours.
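For commuting fields the two orderings define the same polynomial, a point which can be checked symbolically. A minimal sketch in sympy, using only the schematic terms of \eref{esuu1} with the elided terms dropped (so this is an illustration, not a full superpotential):

```python
# Symbolic check (sympy) that reversing the cyclic order of a term is
# immaterial for commuting fields.  A, ..., E are the schematic field
# names of \eref{esuu1}; the elided terms of W are dropped.
import sympy as sp

A, B, C, D, E = sp.symbols('A B C D E')   # commuting (Abelian) fields
W  = A*B*C - A*D*E                        # schematic terms of W
Wa = A*C*B - A*D*E                        # order around white nodes reversed
Wb = A*B*C - A*E*D                        # order around black nodes reversed
assert sp.expand(W - Wa) == 0 and sp.expand(W - Wb) == 0

# The F-terms, and hence the master space relations, agree as well.
for X in (A, B, C, D, E):
    assert sp.expand(sp.diff(W - Wa, X)) == 0

# For non-commuting (non-Abelian) fields the orderings genuinely differ.
a, b, c = sp.symbols('a b c', commutative=False)
assert sp.expand(a*b*c - a*c*b) != 0
```

The same elementary observation, applied to the full Abelian superpotentials, is what underlies the invariance of the master space discussed below.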
For the case of single D3 brane theories with $U(1)$ gauge groups, the fields commute such that \beal{esuu4} W=W_{\text{(a)}}=W_{\text{(b)}}~~. \end{eqnarray} The $U(1)$ superpotential is invariant under specular duality. Since the master space ${}^{\text{Irr}}\mathcal{F}^\flat$ is defined in terms of F-terms, the observation in \eref{esuu4} implies that it is invariant under specular duality. \\ \noindent\textbf{No specific Quiver from an Abelian $W$.} In order to show that the master spaces of dual single brane theories are isomorphic, it is sufficient to show that the superpotentials are the same when the quiver fields commute. However, it is important to note that if the cyclic order of fields in a given superpotential is not recorded, its correspondence to a specific quiver and hence a brane tiling is not unique. A simple example would be the Abelian potential for $\mathbb{C}^3$ or the conifold $\mathcal{C}$ which is $W=0$. In contrast to the distinct non-Abelian superpotentials, the trivial Abelian superpotential for these models encodes no information about the field content of the associated brane tilings. Since specular duality is a well-defined map between brane tilings, not just between Abelian superpotentials, we study in the following sections the new correspondence with the help of \textit{characteristics of the mesonic moduli space}. An important observation is that specular duality exchanges internal and external perfect matchings for brane tilings with reflexive toric diagrams. The difference between internal and external perfect matchings is a property of the mesonic moduli space and its toric diagram. Perfect matchings as GLSM fields are used for the symplectic quotient description of ${}^{\text{Irr}}\mathcal{F}^\flat$. Since perfect matchings represent a choice of coordinates to identify the master space cone, one is free to introduce a new set of coordinates that correspond to the global symmetry of the field theory.
In the following sections, we identify coordinate transformations that relate the exchange of internal and external perfect matchings to the exchange of mesonic flavour symmetries and hidden or anomalous baryonic symmetries. Moreover, one can find a third set of coordinates which relate to the boundaries of the Calabi-Yau cone and are used to illustrate how an exchange of internal and external perfect matchings leads to a reflection of the ${}^{\text{Irr}}\mathcal{F}^\flat$ cone along a hyperplane. \\ \section{Model 13 ($Y^{2,2}$, $\mathbb{F}_{2}$, $\mathbb{C}^3/\mathbb{Z}_4$) and Model 15b ($Y^{2,0}$, $\mathbb{F}_{0}$, $\mathcal{C}/\mathbb{Z}_{2}$) \label{s5}} In the following section, we study specular duality with Model 13 which is known as $Y^{2,2}$, $\mathbb{F}_2$ or $\mathbb{C}^3/\mathbb{Z}_4$ with action $(1,1,2)$ in the literature, and Model 15b which is known as phase II of $Y^{2,0}$, $\mathbb{F}_0$ or $\mathcal{C}/\mathbb{Z}_2$ with action $(1,1,1,1)$. \subsection{Brane Tilings and Superpotentials \label{s5_1}} \begin{figure}[ht!] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=16 cm]{fm13d.pdf} \end{center} \caption{ \textit{Specular Duality between Models 13 and 15b.} The untwisting map $\phi_u$ acts on the brane tiling of Model 13 which results in a shiver. The shiver is then fixed with $\phi_f$ which results in the brane tiling of Model 15b. \label{fm13d}} \end{figure} \fref{fm13d} shows how the untwisting map $\phi_u$ acts on the brane tiling of Model 13 to give a shiver. The fixing map $\phi_f$ then takes the shiver to give the brane tiling of Model 15b.
Beginning with the superpotential of Model 13, \beal{esd13_1} W_{13}&=& + X_{12}^{1} X_{24} X_{41}^{1} + X_{31} X_{12}^{2} X_{23}^{2} + X_{41}^{2} X_{13} X_{34}^{1} + X_{34}^{2} X_{42} X_{23}^{1} \nonumber\\ && - X_{12}^{1} X_{23}^{1} X_{31} - X_{13} X_{34}^{2} X_{41}^{1} - X_{41}^{2} X_{12}^{2} X_{24} - X_{34}^{1} X_{42} X_{23}^{2} ~~, \end{eqnarray} the zig-zag paths are identified as follows \beal{esd13_2} \eta_1 &=& \{X_{12}^{1}, X_{23}^{1}, X_{34}^{2}, X_{41}^{1}\}~~, \nonumber\\ \eta_2 &=& \{X_{12}^{2}, X_{24}, X_{41}^{1}, X_{13}, X_{34}^{1}, X_{42}, X_{23}^{1}, X_{31}\}~~, \nonumber\\ \eta_3 &=& \{X_{23}^{2}, X_{34}^{1}, X_{41}^{2}, X_{12}^{2}\}~~, \nonumber\\ \eta_4 &=& \{X_{13}, X_{34}^{2}, X_{42}, X_{23}^{2}, X_{31}, X_{12}^{1}, X_{24}, X_{41}^{2}\} ~~. \end{eqnarray} The intersections of zig-zag paths highlighted in \fref{fm13d} are \beal{esd13_2b} && (A,B,C,D,E,F,G,H,I,J,K,L)= \nonumber\\ && \hspace{2cm} ( X_{31},X_{13},X_{12}^{2},X_{34}^{1},X_{41}^{2},X_{23}^{2},X_{24},X_{42}, X_{41}^{1},X_{23}^{1},X_{12}^{1},X_{34}^{2} )~~. \end{eqnarray} Under specular duality, the intersections are mapped to the ones for zig-zag paths on the brane tiling of Model 15b. In terms of intersections, the superpotential in \eref{esd13_1} takes the form \beal{esd13_3} W_{13} &=& + K G I + A C F + E B D + L H J \nonumber\\ && - K J A - B L I - E C G - D H F ~~. \end{eqnarray} The intersections are also fields in the dual brane tiling of Model 15b. Accordingly, the corresponding superpotential can be written as \beal{esd13_4} \widetilde{W_{13}} = W_{15b} &=& + X_{14}^{1} X_{42}^{1} X_{21}^{1} + X_{42}^{4} X_{23}^{2} X_{34}^{1} + X_{34}^{2} X_{42}^{3} X_{23}^{1} + X_{14}^{2} X_{42}^{2} X_{21}^{2} \nonumber\\ && - X_{14}^{1} X_{42}^{4} X_{21}^{2} - X_{42}^{3} X_{21}^{1} X_{14}^{2} - X_{34}^{2} X_{42}^{1} X_{23}^{2} - X_{23}^{1} X_{34}^{1} X_{42}^{2} \nonumber\\ &=& + K G I + A C F + E B D + L H J \nonumber\\ && - K A J - B I L - E G C - D F H ~~.
\end{eqnarray} We note that the two superpotentials are the same up to a reversal of cyclic order of negative terms in \eref{esd13_4}. For the Abelian single D3 brane theory, the superpotentials and the corresponding F-terms are the same and hence lead to the same master space ${}^{\text{Irr}}\mathcal{F}^\flat$. \\ \subsection{Perfect Matchings and the Hilbert Series \label{s5_2}} \begin{figure}[H] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=4.5 cm]{quiver13.pdf} \includegraphics[width=5 cm]{toric13.pdf} \includegraphics[width=5 cm]{tiling13.pdf} \\ \vspace{-0.5cm} \begin{eqnarray} W_{13}&=& + X_{12}^{1} X_{24} X_{41}^{1} + X_{31} X_{12}^{2} X_{23}^{2} + X_{41}^{2} X_{13} X_{34}^{1} + X_{34}^{2} X_{42} X_{23}^{1} \nonumber\\ && - X_{12}^{1} X_{23}^{1} X_{31} - X_{13} X_{34}^{2} X_{41}^{1} - X_{41}^{2} X_{12}^{2} X_{24} - X_{34}^{1} X_{42} X_{23}^{2} \nonumber \end{eqnarray} \vspace{-1cm} \caption{The quiver, toric diagram, brane tiling and superpotential of Model 13.} \label{f13} \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=4.5 cm]{quiver15b.pdf} \includegraphics[width=5 cm]{toric15b.pdf} \includegraphics[width=5 cm]{tiling15b.pdf} \\ \vspace{-0.5cm} \begin{eqnarray} W_{15b} &=& + X_{14}^{1} X_{42}^{1} X_{21}^{1} + X_{42}^{4} X_{23}^{2} X_{34}^{1} + X_{34}^{2} X_{42}^{3} X_{23}^{1} + X_{14}^{2} X_{42}^{2} X_{21}^{2} \nonumber\\ && - X_{14}^{1} X_{42}^{4} X_{21}^{2} - X_{42}^{3} X_{21}^{1} X_{14}^{2} - X_{34}^{2} X_{42}^{1} X_{23}^{2} - X_{23}^{1} X_{34}^{1} X_{42}^{2} \nonumber \end{eqnarray} \vspace{-1cm} \caption{The quiver, toric diagram, brane tiling and superpotential of Model 15b.} \label{f15b} \end{center} \end{figure} In order to illustrate that specular duality exchanges internal and external perfect matchings of brane tilings, we consider the symplectic quotient description of ${}^{\text{Irr}}\mathcal{F}^\flat$. It uses GLSM fields which relate to perfect matchings in a brane tiling. 
They are summarized in matrices which are for Model 13 and 15b respectively \noindent\makebox[\textwidth]{% \footnotesize $P^{13}= \left( \begin{array}{c|ccc|cc|cccc} \; & p_{1} & p_{2} & p_{3} & q_{1} & q_{2} & s_{1} & s_{2} & s_{3} & s_{4} \\ \hline I=X_{41}^{1} & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ E=X_{41}^{2} & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ J=X_{23}^{1} & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ F=X_{23}^{2} & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ C=X_{12}^{2} & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ K=X_{12}^{1} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ D=X_{34}^{1} & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ L=X_{34}^{2} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ H=X_{42} & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\ A=X_{31} & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ B=X_{13} & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ G=X_{24} & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \end{array} \right)~,~ P^{15b}= \left( \begin{array}{c|cccc|ccccc} \; & p_1 & p_2 & p_3 & p_4 & s_1 & s_2 & s_3 & s_4 & s_5 \\ \hline I=X_{21}^{1} & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ E=X_{34}^{2} & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ J=X_{21}^{2} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ F=X_{34}^{1} & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ C=X_{23}^{2} & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ K=X_{14}^{1} & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ D=X_{23}^{1} & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\ L=X_{14}^{2} & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ H=X_{42}^{2} & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ A=X_{42}^{4} & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ B=X_{42}^{3} & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ G=X_{42}^{1} & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \end{array} \right)~. 
$ } \\ \vspace{0.2cm} \noindent The corresponding F-term charge matrices are \noindent\makebox[\textwidth]{% \footnotesize $ Q_{F}^{13}= \left( \begin{array}{ccc|cc|cccc} p_{1} & p_{2} & p_{3} & q_{1} & q_{2} & s_{1} & s_{2} & s_{3} & s_{4} \\ \hline 0 & 0 & -1 & -1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & -1 & 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & -1 & -1 & 0 & 0 & 0 & 0 \end{array} \right) ~,~ Q_{F}^{15b}= \left( \begin{array}{cccc|ccccc} p_1 & p_2 & p_3 & p_4 & s_1 & s_2 & s_3 & s_4 & s_5 \\ \hline 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & -1 & 0 & -1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & -1 & -1 \end{array} \right)~. $ } \\ \noindent From the quiver incidence matrices, one obtains the following D-term charge matrices \noindent\makebox[\textwidth]{% \footnotesize $ Q_{D}^{13}= \left( \begin{array}{ccc|cc|cccc} p_{1} & p_{2} & p_{3} & q_{1} & q_{2} & s_{1} & s_{2} & s_{3} & s_{4} \\ \hline 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \end{array} \right) ~,~ Q_{D}^{15b}= \left( \begin{array}{cccc|ccccc} p_1 & p_2 & p_3 & p_4 & s_1 & s_2 & s_3 & s_4 & s_5 \\ \hline 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \end{array} \right)~. $ } \\ \noindent The kernel of the total charge matrix $Q_t$ leads to the coordinates of points in the toric diagram, \noindent\makebox[\textwidth]{% \footnotesize $ G_t^{13}= \left( \begin{array}{ccc|cc|cccc} p_{1} & p_{2} & p_{3} & q_{1} & q_{2} & s_{1} & s_{2} & s_{3} & s_{4} \\ \hline 0 & 0 & 2 & 0 & 0 & 1 & 1 & 1 & 1 \\ 2 & 0 & -1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array} \right) ~,~ G_t^{15b}= \left( \begin{array}{cccc|ccccc} p_1 & p_2 & p_3 & p_4 & s_1 & s_2 & s_3 & s_4 & s_5 \\ \hline 2 & 0 & 2 & 0 & 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array} \right)~. 
$ } \\ \noindent Note that the corresponding toric diagrams in \fref{f13} and \fref{f15b} are related by a $GL(2,\mathbb{Z})$ transformation. The columns in the $G_t$ matrices indicate the coordinates of points in the toric diagram with the associated perfect matchings. Using this information, one relates columns of the matrices $Q_F$, $Q_D$ and $P$ to either external or internal perfect matchings. Specular duality swaps external and internal perfect matchings as follows \beal{essx1} (p_1,p_2,p_3,q_1,q_2,s_1,s_2,s_3,s_4)_{13} \leftrightarrow (s_1,s_2,s_3,s_4,s_5,p_1,p_2,p_3,p_4)_{15b}~~. \end{eqnarray} Accordingly, the duality maps the perfect matching matrix $P^{13}$ to $P^{15b}$ as well as the F-term charge matrix $Q_{F}^{13}$ to $Q_{F}^{15b}$ by a swap of matrix columns. As a result, the following symplectic quotient descriptions of the master spaces ${}^{\text{Irr}}\mathcal{F}^\flat$ are isomorphic \beal{essx2} {}^{\text{Irr}}\mathcal{F}^\flat_{13}&=& \mathbb{C}^{9}[p_{1},p_{2},p_{3},q_{1},q_{2},s_{1},s_{2},s_{3},s_{4}]//Q_F^{13} ~,~\nonumber\\ {}^{\text{Irr}}\mathcal{F}^\flat_{15b}&=& \mathbb{C}^{9}[p_1,p_2,p_3,p_4,s_1,s_2,s_3,s_4,s_5]//Q_F^{15b}~. \end{eqnarray} Specular duality can therefore be observed on the level of the Hilbert series of ${}^{\text{Irr}}\mathcal{F}^\flat$.
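The column exchange in \eref{essx1} can be verified directly on the matrices given above. A small consistency check in Python, with the entries of $P$ and $Q_F$ transcribed in the column order as printed:

```python
# Check that the perfect matching swap of \eref{essx1} maps the matrices of
# Model 13 to those of Model 15b by a pure column permutation.
# Column orders as printed: Model 13:  (p1,p2,p3,q1,q2,s1,s2,s3,s4),
#                           Model 15b: (p1,p2,p3,p4,s1,s2,s3,s4,s5).
QF13 = [[0, 0, -1, -1, 0, 1, 1, 0, 0],
        [0, 0, -1, 0, -1, 0, 0, 1, 1],
        [1, 1, 0, -1, -1, 0, 0, 0, 0]]
QF15b = [[1, 1, 0, 0, 0, 0, -1, -1, 0],
         [0, 0, 1, 1, 0, 0, -1, 0, -1],
         [0, 0, 0, 0, 1, 1, 0, -1, -1]]
# Rows of P in the printed order I,E,J,F,C,K,D,L,H,A,B,G.
P13 = [[1,0,0,1,0,1,0,0,0],[0,1,0,1,0,1,0,0,0],[1,0,0,1,0,0,1,0,0],
       [0,1,0,1,0,0,1,0,0],[1,0,0,0,1,0,0,1,0],[0,1,0,0,1,0,0,1,0],
       [1,0,0,0,1,0,0,0,1],[0,1,0,0,1,0,0,0,1],[0,0,1,0,0,1,0,1,0],
       [0,0,1,0,0,1,0,0,1],[0,0,1,0,0,0,1,1,0],[0,0,1,0,0,0,1,0,1]]
P15b = [[1,0,0,0,1,0,0,1,0],[1,0,0,0,0,1,0,1,0],[0,1,0,0,1,0,0,1,0],
        [0,1,0,0,0,1,0,1,0],[0,0,1,0,1,0,0,0,1],[0,0,1,0,0,1,0,0,1],
        [0,0,0,1,1,0,0,0,1],[0,0,0,1,0,1,0,0,1],[1,0,1,0,0,0,1,0,0],
        [1,0,0,1,0,0,1,0,0],[0,1,1,0,0,0,1,0,0],[0,1,0,1,0,0,1,0,0]]

# (p1,p2,p3,q1,q2,s1,...,s4)_13 -> (s1,...,s5,p1,...,p4)_15b:
# old column j of Model 13 lands at position perm[j] of Model 15b's order.
perm = [4, 5, 6, 7, 8, 0, 1, 2, 3]

def permute_columns(M):
    out = [[0]*9 for _ in M]
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            out[i][perm[j]] = v
    return out

assert permute_columns(QF13) == QF15b
assert permute_columns(P13) == P15b
```

Both checks pass row by row, confirming that the two symplectic quotient descriptions in \eref{essx2} differ only by a relabelling of the GLSM fields.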
Starting with Model 15b, its symplectic quotient leads to the following refined Hilbert series \beal{essx3} g_1(t_i,y_{s_i};{}^{\text{Irr}}\mathcal{F}^\flat_{15b}) &=& \prod_{i=1}^{3}\oint_{|z_i|=1} \frac{\mathrm{d} z_i}{2\pi i z_i}~ \frac{1}{ (1-z_1 t_1) (1-z_1 t_2) (1-z_2 t_3) (1-z_2 t_4) (1-z_3 y_{s_1}) } \nonumber\\ && \hspace{3cm} \times \frac{1}{ (1-z_3 y_{s_2}) (1-\frac{1}{z_1 z_2} y_{s_3}) (1-\frac{1}{z_1 z_3} y_{s_4}) (1-\frac{1}{z_2 z_3} y_{s_5}) } \nonumber\\ &=& \frac{P(t_i,y_{s_i})}{ (1-t_1 t_3 y_{s_3}) (1-t_2 t_3 y_{s_3}) (1-t_1 t_4 y_{s_3}) (1-t_2 t_4 y_{s_3}) } \nonumber\\ && \times \frac{1}{ (1-t_1 y_{s_1} y_{s_4}) (1-t_2 y_{s_1} y_{s_4}) (1-t_1 y_{s_2} y_{s_4}) (1-t_2 y_{s_2} y_{s_4}) } \nonumber\\ && \times \frac{1}{ (1-t_3 y_{s_1} y_{s_5}) (1-t_4 y_{s_1} y_{s_5}) (1-t_3 y_{s_2} y_{s_5}) (1-t_4 y_{s_2} y_{s_5}) } ~~, \nonumber\\ \end{eqnarray} where the numerator $P(t_i,y_{s_i})$ is presented in appendix \sref{appnum1}. Fugacities $t_i$ and $y_{s_i}$ count external and internal perfect matchings $p_i$ and $s_i$ of Model 15b respectively.
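The symplectic quotient can also be checked degree by degree: assigning every perfect matching degree one (a grading convention assumed only for this check), the dimension of each graded piece of ${}^{\text{Irr}}\mathcal{F}^\flat_{15b}$ equals the number of $Q_F^{15b}$-neutral monomials of that degree. A brute-force sketch:

```python
# Count monomials in the 9 perfect matchings of Model 15b that are neutral
# under Q_F^{15b}; their number per total degree gives the coefficients of
# the unrefined Hilbert series of the master space.
from itertools import product

QF15b = [[1, 1, 0, 0, 0, 0, -1, -1, 0],
         [0, 0, 1, 1, 0, 0, -1, 0, -1],
         [0, 0, 0, 0, 1, 1, 0, -1, -1]]

def neutral_count(degree):
    count = 0
    for m in product(range(degree + 1), repeat=9):
        if sum(m) != degree:
            continue
        if all(sum(q*e for q, e in zip(row, m)) == 0 for row in QF15b):
            count += 1
    return count

counts = [neutral_count(d) for d in range(4)]
print(counts)   # [1, 0, 0, 12]
```

In this grading every non-trivial invariant has degree divisible by three (summing the three charge relations forces the total degree to be a multiple of three), and the twelve degree-three invariants are precisely the generator terms of the plethystic logarithm below.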
The plethystic logarithm of the Hilbert series is \beal{essx4} && PL[g_1(t_i,y_{s_i};{}^{\text{Irr}}\mathcal{F}^\flat_{15b})]= y_{s_1} y_{s_4} t_1 + y_{s_2} y_{s_4} t_1 + y_{s_1} y_{s_4} t_2 + y_{s_2} y_{s_4} t_2 + y_{s_1} y_{s_5} t_3 + y_{s_2} y_{s_5} t_3 \nonumber\\ && \hspace{0.5cm} + y_{s_1} y_{s_5} t_4 + y_{s_2} y_{s_5} t_4 + y_{s_3} t_1 t_3 + y_{s_3} t_2 t_3 + y_{s_3} t_1 t_4 + y_{s_3} t_2 t_4 - y_{s_1} y_{s_2} y_{s_4} y_{s_5} t_1 t_3 \nonumber\\ && \hspace{0.5cm} - y_{s_1} y_{s_2} y_{s_4} y_{s_5} t_2 t_3 - y_{s_1} y_{s_2} y_{s_4} y_{s_5} t_1 t_4 - y_{s_1} y_{s_2} y_{s_4} y_{s_5} t_2 t_4 - y_{s_1} y_{s_2} y_{s_4}^2 t_1 t_2 - y_{s_1} y_{s_2} y_{s_5}^2 t_3 t_4 \nonumber\\ && \hspace{0.5cm} - y_{s_1} y_{s_3} y_{s_4} t_1 t_2 t_3 - y_{s_2} y_{s_3} y_{s_4} t_1 t_2 t_3 - y_{s_1} y_{s_3} y_{s_4} t_1 t_2 t_4 - y_{s_2} y_{s_3} y_{s_4} t_1 t_2 t_4 - y_{s_1} y_{s_3} y_{s_5} t_1 t_3 t_4 \nonumber\\ &&\hspace{0.5cm} - y_{s_2} y_{s_3} y_{s_5} t_1 t_3 t_4 - y_{s_1} y_{s_3} y_{s_5} t_2 t_3 t_4 - y_{s_2} y_{s_3} y_{s_5} t_2 t_3 t_4 - y_{s_3}^2 t_1 t_2 t_3 t_4 +\dots~~. \end{eqnarray} The expansion does not terminate, indicating that the master space is not a complete intersection. By specular duality, we obtain the Hilbert series in terms of the perfect matching fugacities of Model 13. The perfect matching map in \eref{essx1} translates to the fugacity map \beal{essx5} (y_{s_i},t_{1,2,3},y_{q_{1,2}})_{13} \leftrightarrow (t_i,y_{s_{1,2,3}},y_{s_{4,5}})_{15b}~~, \end{eqnarray} where $(y_{s_i},t_{1,2,3},y_{q_{1,2}})$ are the fugacities for perfect matchings $(s_i,p_{1,2,3},q_{1,2})$ of Model 13 respectively. \\ \subsection{Global Symmetries and the Hilbert Series \label{s5_3}} In order to discuss global symmetries, let us introduce the notation of subscripts and superscripts on groups which refer to fugacities and model numbers respectively.
The F-term charge matrix for Model 13 indicates that the global symmetry is $SU(2)_{x}^{[13]} \times U(1)_f^{[13]} \times SU(2)_{h_1}^{[13]} \times SU(2)_{h_2}^{[13]} \times U(1)_{b}^{[13]} \times U(1)_R^{[13]}$, where $SU(2)_{x}^{[13]}\times U(1)_{f}^{[13]} \times U(1)_{R}^{[13]}$ represents the mesonic symmetry, $SU(2)_{h_1}^{[13]}\times SU(2)_{h_2}^{[13]}$ the hidden baryonic symmetry, and $U(1)_{b}^{[13]}$ the remaining baryonic symmetry. In comparison, for Model 15b, where internal and external perfect matchings are swapped under specular duality, the global symmetry is $SU(2)_{x}^{[15b]}\times SU(2)_{y}^{[15b]} \times SU(2)_{h_1}^{[15b]} \times U(1)_{h_2}^{[15b]} \times U(1)_{b}^{[15b]} \times U(1)_R^{[15b]}$. The mesonic symmetry is $SU(2)_{x}^{[15b]}\times SU(2)_{y}^{[15b]} \times U(1)_R^{[15b]}$, the hidden baryonic symmetry is $SU(2)_{h_1}^{[15b]}\times U(1)_{h_2}^{[15b]}$, and the remaining baryonic symmetry is $U(1)_{b}^{[15b]}$. Accordingly, we observe that the swap of external and internal perfect matchings under specular duality leads to the following correspondence between global symmetries \beal{ess2x1} SU(2)_{x}^{[13]}\times U(1)_{f}^{[13]} &\leftrightarrow& SU(2)_{h_1}^{[15b]}\times U(1)_{h_2}^{[15b]} \nonumber\\ SU(2)_{h_1}^{[13]}\times SU(2)_{h_2}^{[13]} &\leftrightarrow& SU(2)_{x}^{[15b]}\times SU(2)_{y}^{[15b]} \nonumber\\ U(1)_{b}^{[13]} &\leftrightarrow& U(1)_{b}^{[15b]} ~~. \end{eqnarray} It is a swap between mesonic flavour and hidden baryonic symmetries. Following the discussion in appendix \sref{appch}, one can find global charges on perfect matchings such that the swap of external and internal perfect matchings corresponds to a swap of mesonic flavour and hidden baryonic symmetry charges. A choice of such perfect matching charges for Model 13 and Model 15b is in \tref{t13} and \tref{t15b} respectively.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|c|l|} \hline \; & $SU(2)_{x}$ & $U(1)_{f}$ & $SU(2)_{h_1}$ & $SU(2)_{h_2}$ & $U(1)_{b}$ & $U(1)_R$ & fugacity \\ \hline \hline $p_1$ & +1 &+1 & 0 & 0 & 0 & 2/3 & $t_1$\\ $p_2$ & -1 &+1 & 0 & 0 & 0 & 2/3 & $t_2$\\ $p_3$ & 0 &-2 & 0 & 0 & 0 & 2/3 & $t_3$\\ $q_1$ & 0 & 0 & 0 & 0 & +1 & 0 & $y_{q_1}$\\ $q_2$ & 0 & 0 & 0 & 0 & -1 & 0 & $y_{q_2}$\\ $s_1$ & 0 & 0 &+1 & 0 & 0 & 0 & $y_{s_1}$\\ $s_2$ & 0 & 0 &-1 & 0 & 0 & 0 & $y_{s_2}$\\ $s_3$ & 0 & 0 & 0 & +1 & 0 & 0 & $y_{s_3}$\\ $s_4$ & 0 & 0 & 0 & -1 & 0 & 0 & $y_{s_4}$\\ \hline \end{tabular} \caption{Perfect matchings of Model 13 with global charge assignment.\label{t13}} \end{table} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|c|l|} \hline \; & $SU(2)_{x}$ & $SU(2)_{y}$ & $SU(2)_{h_1}$ & $U(1)_{h_2}$ & $U(1)_{b}$ & $U(1)_R$ & fugacity \\ \hline \hline $p_1$ & +1 & 0 & 0 & 0 & 0 & 1/2 & $t_1$\\ $p_2$ & -1 & 0 & 0 & 0 & 0 & 1/2 & $t_2$\\ $p_3$ & 0 & +1 & 0 & 0 & 0 & 1/2 & $t_3$\\ $p_4$ & 0 & -1 & 0 & 0 & 0 & 1/2 & $t_4$\\ $s_1$ & 0 & 0 & +1 & +1 & 0 & 0 & $y_{s_1}$\\ $s_2$ & 0 & 0 & -1 & +1 & 0 & 0 & $y_{s_2}$\\ $s_3$ & 0 & 0 & 0 & -2 & 0 & 0 & $y_{s_3}$\\ $s_4$ & 0 & 0 & 0 & 0 & +1 & 0 & $y_{s_4}$\\ $s_5$ & 0 & 0 & 0 & 0 & -1 & 0 & $y_{s_5}$\\ \hline \end{tabular} \caption{Perfect matchings of Model 15b with global charge assignment.\label{t15b}} \end{table} Starting from Model 15b, the following fugacity map \beal{ess2x2} & t = (y_{s_1} y_{s_2} y_{s_3} y_{s_4} y_{s_5} t_1 t_2 t_3 t_4)^{1/4} ~,~ x = t_1^{1/2} t_2^{-1/2} ~,~ y = t_3^{1/2} t_4^{-1/2} ~,~ & \nonumber\\ & b = (y_{s_4} y_{s_5})^{1/2}~ (t_1 t_2)^{1/4}~ (t_3 t_4)^{-1/4} ~,~ h_1 = y_{s_1}^{1/2} y_{s_2}^{-1/2} ~,~ h_2 = (y_{s_1} y_{s_2} y_{s_4} y_{s_5})^{1/4} ~ y_{s_3}^{-1/4} ~, &\nonumber\\ \end{eqnarray} leads to the refined Hilbert series in \eref{essx3} and the corresponding plethystic logarithm in \eref{essx4} in terms of characters of irreducible representations of the global symmetry. 
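The charge assignment of \tref{t15b} can be cross-checked against the generator content: evaluating the table's charges on the twelve generator monomials read off from \eref{essx4} reproduces the representations $[1;0;1]\,h_2 b$, $[0;1;1]\,h_2 b^{-1}$ and $[1;1;0]\,h_2^{-2}$ appearing below. A sketch, with the generator list transcribed from the plethystic logarithm:

```python
# Evaluate the global charges of Table t15b on the 12 generators of the
# master space of Model 15b (monomials taken from the plethystic logarithm).
# Charge order: (SU(2)_x, SU(2)_y, SU(2)_h1, U(1)_h2, U(1)_b).
from collections import defaultdict

charge = {'p1': (1, 0, 0, 0, 0), 'p2': (-1, 0, 0, 0, 0),
          'p3': (0, 1, 0, 0, 0), 'p4': (0, -1, 0, 0, 0),
          's1': (0, 0, 1, 1, 0), 's2': (0, 0, -1, 1, 0),
          's3': (0, 0, 0, -2, 0),
          's4': (0, 0, 0, 0, 1), 's5': (0, 0, 0, 0, -1)}
gens = [('s1','s4','p1'), ('s2','s4','p1'), ('s1','s4','p2'), ('s2','s4','p2'),
        ('s1','s5','p3'), ('s2','s5','p3'), ('s1','s5','p4'), ('s2','s5','p4'),
        ('s3','p1','p3'), ('s3','p2','p3'), ('s3','p1','p4'), ('s3','p2','p4')]

def total(g):
    return tuple(sum(charge[f][i] for f in g) for i in range(5))

# Group generators by their (h2, b) charges and record the (x, y, h1) weights.
groups = defaultdict(list)
for g in gens:
    x, y, h1, h2, b = total(g)
    groups[(h2, b)].append((x, y, h1))

assert sorted(groups[(1, 1)])  == [(-1,0,-1), (-1,0,1), (1,0,-1), (1,0,1)]
assert sorted(groups[(1, -1)]) == [(0,-1,-1), (0,-1,1), (0,1,-1), (0,1,1)]
assert sorted(groups[(-2, 0)]) == [(-1,-1,0), (-1,1,0), (1,-1,0), (1,1,0)]
```

The three groups of four weights are exactly the $[1]\otimes[1]$ doublet products of $SU(2)_x\times SU(2)_{h_1}$, $SU(2)_y\times SU(2)_{h_1}$ and $SU(2)_x\times SU(2)_y$ respectively.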
The expansion of the Hilbert series takes the form \beal{ess2x3} && g_{1}(t,x,y,h_i,b;{}^{\text{Irr}}\mathcal{F}^\flat_{15b}) = \nonumber\\ && \hspace{1cm} \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \sum_{n_3=0}^{\infty} ~ h_2^{n_1+n_2 -2n_3} b^{- n_1 + n_2} ~ [n_2+n_3; n_1+n_3; n_1+n_2] t^{n_1+n_2+2n_3}~, \nonumber\\ \end{eqnarray} where $[n_1;n_2;n_3]\equiv [n_1]_x [n_2]_y [n_3]_{h_{1}}$ is the combined character of representations of $SU(2)_x \times SU(2)_y \times SU(2)_{h_1}$.\footnote{cf. \cite{Forcella:2008ng} with a choice of charges on fields which relates to the choice presented here. The identification $F_1=SU(2)_x$, $F_2=SU(2)_y$, $A_2=SU(2)_{h_1}$, $A_1=U(1)_{h_2}$, $B=U(1)_{b}$ and $R=U(1)_R$ is made.} The corresponding plethystic logarithm is \beal{ess2x3b} PL[g_{1}(t,x,y,h_i,b;{}^{\text{Irr}}\mathcal{F}^\flat_{15b})]&=& [1;0;1] h_2 b t + [0;1;1] h_2 b^{-1} t + [1;1;0] h_2^{-2} t^2 \nonumber\\ && - [1;1;0] h_2^2 t^2 - [1;0;1] h_2^{-1} b^{-1} t^3 - [0;1;1] h_2^{-1} b t^3 \nonumber\\ && - h_2^2 b^2 t^2 - h_2^2 b^{-2} t^2 - h_2^{-4} t^4 +\dots ~~. 
\end{eqnarray} In comparison, in terms of global charges on perfect matchings of Model 13, the fugacity map \beal{ess2x4} & t = (y_{s_1} y_{s_2} y_{s_3} y_{s_4} y_{q_1} y_{q_2} t_1 t_2 t_3)^{1/3} ~,~ x = t_1^{1/2} t_2^{-1/2} ~,~ & \nonumber\\ & f = (y_{s_1} y_{s_2} y_{s_3} y_{s_4})^{-1/12} ~(y_{q_1} y_{q_2} t_1 t_2)^{1/6} ~t_3^{-1/3} ~,~ & \nonumber\\ & h_1 = y_{s_1}^{1/2} y_{s_2}^{-1/2} ~,~ h_2 = y_{s_3}^{1/2} y_{s_4}^{-1/2} ~,~ & \nonumber\\ & b = (y_{s_1} y_{s_2})^{1/4}~ (y_{s_3} y_{s_4})^{-1/4} ~y_{q_1}^{1/2} y_{q_2}^{-1/2} ~,& \end{eqnarray} leads to the following Hilbert series \beal{ess2x4b} && g_{1}(t,x,f,h_i,b;{}^{\text{Irr}}\mathcal{F}^\flat_{13}) = \nonumber\\ && \hspace{1cm} \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \sum_{n_3=0}^{\infty} f^{n_1+n_2-2n_3} b^{-n_1 + n_2} ~ [n_1 + n_2;n_2+n_3;n_1+n_3] ~ t^{n_1+n_2+n_3}~, \nonumber\\ \end{eqnarray} where $[n_1;n_2;n_3]\equiv [n_1]_x [n_2]_{h_1} [n_3]_{h_2}$ is the combined character of representations of $SU(2)_{x}\times SU(2)_{h_1} \times SU(2)_{h_2}$. The $U(1)_R$ charges on perfect matchings of Model 15b are not mapped by specular duality to $U(1)_R$ charges on perfect matchings of Model 13. This is mainly because only extremal perfect matchings carry non-zero R-charges. In order to illustrate specular duality in terms of the refined Hilbert series, one can mix the $U(1)_R$ symmetry with the remaining symmetries without losing track of the algebraic structure of the moduli space.
This effectively modifies the charge assignment under the global symmetry.\footnote{The algebraic structure of the moduli space is not lost when the orthogonality of global charges on perfect matchings is preserved as discussed in appendix \sref{appch}.} The modification is done via the fugacity map \beal{ess2x5} & \tilde{t} = (y_{s_1} y_{s_2} y_{s_3} y_{s_4} y_{q_1} y_{q_2} t_1 t_2 t_3)^{1/4} ~,~ x = t_1^{1/2} t_2^{-1/2} ~,~ & \nonumber\\ & \tilde{f} = (y_{q_1} y_{q_2} t_1 t_2)^{1/4}~ t_3^{-1/4} ~,~ & \nonumber\\ & h_1 = y_{s_1}^{1/2} y_{s_2}^{-1/2} ~,~ h_2 = y_{s_3}^{1/2} y_{s_4}^{-1/2} ~,~ & \nonumber\\ & b = (y_{s_1} y_{s_2})^{1/4}~ (y_{s_3} y_{s_4})^{-1/4}~ y_{q_1}^{1/2} y_{q_2}^{-1/2} ~,& \end{eqnarray} which leads to the Hilbert series \beal{ess2x6} && g_1(\tilde{t},x,\tilde{f},h_i,b;{}^{\text{Irr}}\mathcal{F}^\flat_{13}) = \nonumber\\ && \hspace{1cm} \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \sum_{n_3=0}^{\infty} \tilde{f}^{n_1+n_2-2n_3} b^{-n_1 + n_2} [n_1 + n_2; n_2 + n_3; n_1 + n_3] \tilde{t}^{n_1+n_2+2n_3} ~~,\nonumber\\ \end{eqnarray} where $[n_1;n_2;n_3]\equiv [n_1]_x [n_2]_{h_1} [n_3]_{h_2}$. One observes that the fugacity map equivalent to the exchange of mesonic flavour and hidden baryonic symmetries is \beal{ess2x7} (x,\tilde{f},\tilde{t},h_1,h_2,b)_{13} \leftrightarrow (h_1,h_2,t,x,y,b)_{15b}~~. \end{eqnarray} It relates the Hilbert series in \eref{ess2x3} to the one in \eref{ess2x6}. \\ \subsection{Generators, the Master Space Cone and the Hilbert Series \label{s5_4}} The master space is toric Calabi-Yau and has a conical structure. Since the dimension of the master space is $G+2=6$, the corresponding Hilbert series can be rewritten in terms of $6$ fugacities $T_i$ such that only non-negative exponents of $T_i$ appear. This means that all elements of the ring and the corresponding integral points of the moduli space cone relate to monomials of the form $\prod_i T_i^{m_i}$ with $m_i\geq 0$ in the Hilbert series expansion.
These monomials have the following interpretation: if $b$ of the exponents $m_i$ vanish in $\prod_i T_i^{m_i}$, the associated integral point lies on a codimension $b$ cone. All points associated to monomials $\prod_i T_i^{m_i}$ with $m_i>0$ for all $i$ lie within the codimension $0$ cone. The extremal rays bounding the codimension $0$ cone correspond to monomials of the form $T_i^{m_i}$ involving a single fugacity with $m_i>0$. \begin{figure}[ht!] \begin{center} \includegraphics[width=8 cm]{spec1315b.pdf} \caption{\textit{The Specular Axis.} This is a schematic illustration of the master space cone of Models 13 and 15b. The rays corresponding to the basis of the cone are labelled with the associated fugacities $T_i$ of the Hilbert series. The cone is symmetric along a hyperplane which we call the specular axis.} \label{fspec1315b} \end{center} \end{figure} Starting with the perfect matchings of Model 15b, the fugacity map \beal{ess3x1} & T_1 = x = t_1^{1/2} t_2^{-1/2}~,~ T_2 = y = t_3^{1/2} t_4^{-1/2}~,~ & \nonumber\\ & T_3 = b = (y_{s_4} y_{s_5})^{1/2}~ (t_1 t_2)^{1/4}~ (t_3 t_4)^{-1/4}~,~ & \nonumber\\ & T_4 = h_1 = y_{s_1}^{1/2} y_{s_2}^{-1/2}~,~ T_5 = h_2 = (y_{s_1} y_{s_2} y_{s_5})^{1/4} y_{s_3}^{-1/4} ~,~ & \nonumber\\ & T_6 = \frac{t}{x y b h_1 h_2} = (y_{s_1} y_{s_2} y_{s_3} y_{s_4} y_{s_5} t_1 t_2 t_3 t_4)^{1/4} ~,~ & \end{eqnarray} allows us to re-write the Hilbert series such that the corresponding plethystic logarithm in \eref{essx4} takes the form \beal{ess3x2} && PL[g(T_i;{}^{\text{Irr}}\mathcal{F}^\flat_{15b})] = T_1^2 T_2 T_3^2 T_4^2 T_5^2 T_6 +T_1^2 T_2 T_3^2 T_5^2 T_6 +T_2 T_3^2 T_4^2 T_5^2 T_6 +T_2 T_3^2 T_5^2 T_6 \nonumber\\ && \hspace{0.5cm} +T_1 T_2^2 T_4^2 T_5^2 T_6 +T_1 T_2^2 T_5^2 T_6 +T_1 T_4^2 T_5^2 T_6 +T_1 T_5^2 T_6 +T_1^3 T_2^3 T_3^2 T_4^2 T_6^2 +T_1 T_2^3 T_3^2 T_4^2 T_6^2 \nonumber\\ && \hspace{0.5cm} +T_1^3 T_2 T_3^2 T_4^2 T_6^2 +T_1 T_2 T_3^2 T_4^2 T_6^2 -T_1^3 T_2^3 T_3^2 T_4^2 T_5^4 T_6^2 -T_1 T_2^3 T_3^2 T_4^2 T_5^4 T_6^2 \nonumber\\ && \hspace{0.5cm} -T_1^3 T_2 T_3^2 T_4^2 
T_5^4 T_6^2 -T_1 T_2 T_3^2 T_4^2 T_5^4 T_6^2 -T_1^2 T_2^2 T_3^4 T_4^2 T_5^4 T_6^2 -T_1^2 T_2^2 T_4^2 T_5^4 T_6^2 \nonumber\\ && \hspace{0.5cm} -T_1^3 T_2^4 T_3^4 T_4^4 T_5^2 T_6^3 -T_1^3 T_2^4 T_3^4 T_4^2 T_5^2 T_6^3 -T_1^3 T_2^2 T_3^4 T_4^4 T_5^2 T_6^3 -T_1^3 T_2^2 T_3^4 T_4^2 T_5^2 T_6^3 \nonumber\\ && \hspace{0.5cm} -T_1^4 T_2^3 T_3^2 T_4^4 T_5^2 T_6^3 -T_1^4 T_2^3 T_3^2 T_4^2 T_5^2 T_6^3 -T_1^2 T_2^3 T_3^2 T_4^4 T_5^2 T_6^3 -T_1^2 T_2^3 T_3^2 T_4^2 T_5^2 T_6^3 \nonumber\\ && \hspace{0.5cm} -T_1^4 T_2^4 T_3^4 T_4^4 T_6^4 +\dots ~~. \end{eqnarray} As desired, the exponents of the fugacities $T_i$ in the plethystic logarithm, as in the Hilbert series, are all non-negative. In comparison, for the perfect matchings of Model 13, the fugacity map \beal{ess3x3} T_1 = x ~,~ T_2 = \tilde{f} ~,~ T_3 = b ~,~ T_4 = h_1 ~,~ T_5 = h_2 ~,~ T_6 = \frac{\tilde{t}}{x \tilde{f} b h_1 h_2} ~,~ \end{eqnarray} rewrites the Hilbert series and plethystic logarithm such that they are related to the ones from Model 15b via \beal{ess3x4} (T_1,T_2,T_3,T_4,T_5,T_6) \leftrightarrow (T_4,T_5,T_3,T_1,T_2,T_6)~~. \end{eqnarray} Note that the above map for fugacities $T_i$ corresponds to the map for global symmetry fugacities in \eref{ess2x7}. Given that the fugacities $T_i$ relate to the boundary of the Calabi-Yau cone, the above fugacity map can be interpreted as a reflection along a hyperplane which is associated to monomials of the form $T_3^{m_3} T_6^{m_6}$. We call this hyperplane the \textbf{specular axis}. It is schematically illustrated in \fref{fspec1315b}. The generators of the master space in terms of perfect matchings of Model 13 and Model 15b are shown with the corresponding global symmetry charges in \tref{t13gen} and \tref{t15bgen} respectively. The master space cone with a selection of generators and the specular axis are illustrated schematically in \fref{fspec1315bcone}. Specular duality maps generators into each other along the specular axis. 
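The exchange of generators under the map in \eref{ess3x4} can be verified directly from the generator fugacities listed in \tref{t13gen} and \tref{t15bgen}. The following minimal Python sketch (with the exponent vectors of $\prod_i T_i^{m_i}$ transcribed from the two tables) checks that the permutation $(T_1,T_2,T_3,T_4,T_5,T_6) \leftrightarrow (T_4,T_5,T_3,T_1,T_2,T_6)$ maps the two sets of generators onto each other:

```python
# Exponent vectors (m_1, ..., m_6) of the generator fugacities prod_i T_i^{m_i},
# transcribed from the generator tables of Model 13 and Model 15b.
gens_13 = {(2,0,2,3,3,2), (2,0,2,3,1,2), (2,0,2,1,3,2), (2,0,2,1,1,2),
           (2,2,2,2,1,1), (2,2,2,0,1,1), (0,2,2,2,1,1), (0,2,2,0,1,1),
           (2,2,0,1,2,1), (2,2,0,1,0,1), (0,2,0,1,2,1), (0,2,0,1,0,1)}
gens_15b = {(3,3,2,2,0,2), (3,1,2,2,0,2), (1,3,2,2,0,2), (1,1,2,2,0,2),
            (2,1,2,2,2,1), (0,1,2,2,2,1), (2,1,2,0,2,1), (0,1,2,0,2,1),
            (1,2,0,2,2,1), (1,0,0,2,2,1), (1,2,0,0,2,1), (1,0,0,0,2,1)}

def specular(m):
    """Action of the exchange (T1,...,T6) <-> (T4,T5,T3,T1,T2,T6) on an
    exponent vector; T3 and T6 are left invariant."""
    m1, m2, m3, m4, m5, m6 = m
    return (m4, m5, m3, m1, m2, m6)

# The duality maps the generators of one model onto those of the other.
assert {specular(m) for m in gens_15b} == gens_13
assert {specular(m) for m in gens_13} == gens_15b
```

The invariance of $T_3$ and $T_6$ under the permutation is consistent with the specular axis being associated to monomials of the form $T_3^{m_3} T_6^{m_6}$.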
\begin{table}[H] \centering \resizebox{.85\hsize}{!}{ \begin{tabular}{|l|l|c|c|c|c|c|c|l|} \hline generator & fields & $SU(2)_x$ & $U(1)_{f}$ &$SU(2)_{h_1}$ & $SU(2)_{h_2}$ & $U(1)_{b}$ & $U(1)_{R}$ & fugacity \\ \hline \hline $p_3 ~ s_1 s_3$ & $X_{24}$ & 0 & -2 & +1 & +1 & 0 & 1/3 & $T_1^2 T_3^2 T_4^3 T_5^3 T_6^2$ \nonumber\\ $p_3 ~ s_1 s_4$ & $X_{41}^{1}$ & 0 & -2 & +1 & -1 & 0 & 1/3 & $T_1^2 T_3^2 T_4^3 T_5 T_6^2$ \nonumber\\ $p_3 ~ s_2 s_3$ & $X_{41}^{1}$ & 0 & -2 & -1 & +1 & 0 & 1/3 & $T_1^2 T_3^2 T_4 T_5^3 T_6^2$ \nonumber\\ $p_3 ~ s_2 s_4$ & $X_{42}$ & 0 & -2 & -1 & -1 & 0 & 1/3 & $T_1^2 T_3^2 T_4 T_5 T_6^2$ \nonumber\\ $p_1 ~ q_1 ~ s_1$ & $X_{13}$ & +1 & +1 & +1 & 0 & +1 & 1/3 & $T_1^2 T_2^2 T_3^2 T_4^2 T_5 T_6$ \nonumber\\ $p_1 ~ q_1 ~ s_2$ & $X_{12}^{2}$ & +1 & +1 & -1 & 0 & +1 & 1/3 & $T_1^2 T_2^2 T_3^2 T_5 T_6$ \nonumber\\ $p_2 ~ q_1 ~ s_1$ & $X_{34}^{2}$ & -1 & +1 & +1 & 0 & +1 & 1/3 & $T_2^2 T_3^2 T_4^2 T_5 T_6$ \nonumber\\ $p_2 ~ q_1 ~ s_2$ & $X_{34}^{1}$ & -1 & +1 & -1 & 0 & +1 & 1/3 & $T_2^2 T_3^2 T_5 T_6$ \nonumber\\ $p_1 ~ q_2 ~ s_3$ & $X_{12}^{1}$ & +1 & +1 & 0 & +1 & -1 & 1/3 & $T_1^2 T_2^2 T_4 T_5^2 T_6$ \nonumber\\ $p_1 ~ q_2 ~ s_4$ & $X_{31}$ & +1 & +1 & 0 & -1 & -1 & 1/3 & $T_1^2 T_2^2 T_4 T_6$ \nonumber\\ $p_2 ~ q_2 ~ s_3$ & $X_{23}^{2}$ & -1 & +1 & 0 & +1 & -1 & 1/3 & $T_2^2 T_4 T_5^2 T_6$ \nonumber\\ $p_2 ~ q_2 ~ s_4$ & $X_{23}^{2}$ & -1 & +1 & 0 & -1 & -1 & 1/3 & $T_2^2 T_4 T_6$ \nonumber\\ \hline \end{tabular} } \caption{The generators of the master space of Model 13 with the corresponding charges under the global symmetry. 
\label{t13gen} } \end{table} \begin{table}[H] \centering \resizebox{.85\hsize}{!}{ \begin{tabular}{|l|l|c|c|c|c|c|c|l|} \hline generator & fields & $SU(2)_x$ & $SU(2)_y$ & $SU(2)_{h_1}$ & $U(1)_{h_2}$ & $U(1)_{b}$ & $U(1)_R$ & fugacity \\ \hline \hline $p_1 p_3 ~ s_3$ & $X_{42}^{2}$ & +1 & +1 & 0 & -2 & 0 & 1 & $T_1^3 T_2^3 T_3^2 T_4^2 T_6^2$ \nonumber\\ $p_1 p_4 ~ s_3$ & $X_{42}^{4}$ & +1 & -1 & 0 & -2 & 0 & 1 & $T_1^3 T_2 T_3^2 T_4^2 T_6^2$ \nonumber\\ $p_2 p_3 ~ s_3$ & $X_{42}^{3}$ & -1 & +1 & 0 & -2 & 0 & 1 & $T_1 T_2^3 T_3^2 T_4^2 T_6^2$ \nonumber\\ $p_2 p_4 ~ s_3$ & $X_{42}^{1}$ & -1 & -1 & 0 & -2 & 0 & 1 & $T_1 T_2 T_3^2 T_4^2 T_6^2$ \nonumber\\ $p_1 ~ s_1 s_4$ & $X_{21}^{1}$ & +1 & 0 & +1 & +1 & +1 & 1/2 & $T_1^2 T_2 T_3^2 T_4^2 T_5^2 T_6$ \nonumber\\ $p_2 ~ s_1 s_4$ & $X_{21}^{2}$ & -1 & 0 & +1 & +1 & +1 & 1/2 & $T_2 T_3^2 T_4^2 T_5^2 T_6$ \nonumber\\ $p_1 ~ s_2 s_4$ & $X_{34}^{2}$ & +1 & 0 & -1 & +1 & +1 & 1/2 & $T_1^2 T_2 T_3^2 T_5^2 T_6$ \nonumber\\ $p_2 ~ s_2 s_4$ & $X_{34}^{1}$ & -1 & 0 & -1 & +1 & +1 & 1/2 & $T_2 T_3^2 T_5^2 T_6$ \nonumber\\ $p_3 ~ s_1 s_5$ & $X_{23}^{2}$ & 0 & +1 & +1 &+1 & -1 & 1/2 & $T_1 T_2^2 T_4^2 T_5^2 T_6$ \nonumber\\ $p_4 ~ s_1 s_5$ & $X_{23}^{1}$ & 0 & -1 & +1 &+1 & -1 & 1/2 & $T_1 T_4^2 T_5^2 T_6$ \nonumber\\ $p_3 ~ s_2 s_5$ & $X_{14}^{1}$ & 0 & +1 & -1 &+1 & -1 & 1/2 & $T_1 T_2^2 T_5^2 T_6$ \nonumber\\ $p_4 ~ s_2 s_5$ & $X_{14}^{2}$ & 0 & -1 & -1 &+1 & -1 & 1/2 & $T_1 T_5^2 T_6$ \nonumber\\ \hline \end{tabular} } \caption{The generators of the master space of Model 15b with the corresponding charges under the global symmetry. \label{t15bgen} } \end{table} \begin{figure}[ht!] \begin{center} \includegraphics[width=12 cm]{fcone1513ii.pdf} \caption{\textit{The Specular Axis and Moduli Space Generators.} The schematic illustration shows a selection of master space generators of Model 15b and Model 13 which are highlighted in red and blue respectively. 
The dotted lines indicate the identifications of generators under specular duality.} \label{fspec1315bcone} \end{center} \end{figure} \section{Beyond the Torus and Conclusions \label{sconc}} Our work discusses specular duality between brane tilings which represent $4d$ $\mathcal{N}=1$ supersymmetric gauge theories with toric Calabi-Yau moduli spaces. Starting from the observations made in \cite{Hanany:2012hi}, this paper identifies the following properties of specular duality for brane tilings on $\mathbb{T}^2$ with reflexive toric diagrams: \begin{itemize} \item Dual brane tilings have the same master space ${}^{\text{Irr}}\mathcal{F}^\flat$. The corresponding Hilbert series are the same up to a fugacity map. \item The new correspondence swaps internal and external perfect matchings. \item Mesonic flavour and anomalous or hidden baryonic symmetries are interchanged. \item Specular duality acts as a reflection along a hyperplane, the specular axis, with respect to which the cone of ${}^{\text{Irr}}\mathcal{F}^\flat$ is symmetric. \end{itemize} The new duality is an automorphism of the set of $30$ brane tilings with reflexive toric diagrams \cite{Hanany:2012hi}. When specular duality acts on a brane tiling whose toric diagram is not reflexive, the dual brane tiling is either on a sphere or on a Riemann surface of genus 2 or higher. Such brane tilings have no known AdS duals and their mesonic moduli spaces are not necessarily Calabi-Yau 3-folds \cite{Benvenuti:2004dw,Benvenuti:2005wi,Kennaway:2007tq}. In general, the number of faces $G$ of a brane tiling relates to the number of faces $\tilde{G}$ of the dual tiling by \beal{esco_1} \tilde{G} = E = G - 2 I+2~, \end{eqnarray} where $I$ and $E$ are respectively the number of internal and external points of the toric diagram of the original brane tiling. 
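The counting in \eref{esco_1} can be illustrated on the orbifold family considered below. As a sketch in Python, we take the toric diagram of $\mathbb{C}^3/\mathbb{Z}_{2n}$ with action $(1,1,-2)$ to be, up to $GL(2,\mathbb{Z})$, the lattice triangle with vertices $(0,0)$, $(2,0)$ and $(1,n)$; this particular choice of vertices is an assumption made for this illustration. Internal and external lattice points are then counted via Pick's theorem:

```python
from math import gcd

def lattice_counts(verts):
    """Return (normalized area, boundary points E, internal points I) of a
    lattice polygon, using the shoelace formula and Pick's theorem."""
    k = len(verts)
    edges = [(verts[i], verts[(i + 1) % k]) for i in range(k)]
    # normalized area = twice the Euclidean area
    area2 = abs(sum(a[0] * b[1] - b[0] * a[1] for a, b in edges))
    E = sum(gcd(abs(b[0] - a[0]), abs(b[1] - a[1])) for a, b in edges)
    I = (area2 - E + 2) // 2     # Pick: Area = I + E/2 - 1
    return area2, E, I

for n in (1, 2, 3):
    G = 2 * n                          # gauge groups of C^3/Z_{2n}
    area2, E, I = lattice_counts([(0, 0), (2, 0), (1, n)])
    assert area2 == G                  # normalized area counts the tiling faces
    assert G - 2 * I + 2 == E == 4     # dual tiling has 4 faces
    assert I == n - 1                  # matches the genus of the dual surface
```

For every $n$ one finds $E=4$, in agreement with the four gauge groups of the dual superpotentials given below, while $I=n-1$ matches the genus of the Riemann surface on which the dual tiling lives.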
\begin{figure}[H] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=8 cm]{dualC3Z2quiver.pdf} \caption{The quiver of the specular dual of the brane tiling for the Abelian orbifold of the form $\mathbb{C}^3/\mathbb{Z}_{2n}$ with orbifold action $(1,1,-2)$ \cite{HananySeong11b}.} \label{C3Z2quiver} \end{center} \end{figure} First examples of brane tilings on Riemann surfaces can be generated from Abelian orbifolds of $\mathbb{C}^3$ \cite{Hanany:2010cx,Davey:2011dd,Hanany:2010ne,Davey:2010px,Hanany:2011iw}. Consider the brane tilings which correspond to the Abelian orbifolds of the form $\mathbb{C}^3/\mathbb{Z}_{2n}$ with orbifold action $(1,1,-2)$ and $n>0$. The dual brane tiling is on a Riemann surface of genus $n-1$. For the first few examples with $n=1,2,3$, the superpotentials are \beal{esc_1} W_{\widetilde{\mathbb{C}^3/\mathbb{Z}_{2,(1,1,0)}}} &=& X_{34}^{1} X_{41} X_{13} + X_{34}^{2} X_{42} X_{23} - X_{34}^{2} X_{41} X_{13} - X_{34}^{1} X_{42} X_{23} ~~, \\ W_{\widetilde{\mathbb{C}^3/\mathbb{Z}_{4,(1,1,2)}}} &=& X_{34}^{1} X_{41}^{1} X_{13}^{1} + X_{34}^{2} X_{42}^{1} X_{23}^{1} + X_{34}^{3} X_{41}^{2} X_{13}^{2} + X_{34}^{4} X_{42}^{2} X_{23}^{2} \nonumber\\ && - X_{34}^{4} X_{41}^{2} X_{13}^{1} - X_{34}^{1} X_{42}^{2} X_{23}^{1} - X_{34}^{2} X_{41}^{1} X_{13}^{2} - X_{34}^{3} X_{42}^{1} X_{23}^{2} ~~, \\ W_{\widetilde{\mathbb{C}^3/\mathbb{Z}_{6,(1,1,4)}}} &=& X_{34}^{1} X_{41}^{1} X_{13}^{1} + X_{34}^{2} X_{42}^{1} X_{23}^{1} + X_{34}^{3} X_{41}^{2} X_{13}^{2} + X_{34}^{4} X_{42}^{2} X_{23}^{2} \nonumber\\ && + X_{34}^{5} X_{41}^{3} X_{13}^{3} + X_{34}^{6} X_{42}^{3} X_{23}^{3} - X_{34}^{6} X_{41}^{3} X_{13}^{1} - X_{34}^{1} X_{42}^{3} X_{23}^{1} \nonumber\\ && - X_{34}^{2} X_{41}^{1} X_{13}^{2} - X_{34}^{3} X_{42}^{1} X_{23}^{2} - X_{34}^{4} X_{41}^{2} X_{13}^{3} - X_{34}^{5} X_{42}^{2} X_{23}^{3} \comment{ ~~, \\ W_{\widetilde{\mathbb{C}^3/\mathbb{Z}_{8,(1,1,6)}}} &=& X_{34}^{1} X_{41}^{1} X_{13}^{1} + X_{34}^{2} X_{42}^{1} X_{23}^{1} + X_{34}^{3} 
X_{41}^{2} X_{13}^{2} + X_{34}^{4} X_{42}^{2} X_{23}^{2} \nonumber\\ && + X_{34}^{5} X_{41}^{3} X_{13}^{3} + X_{34}^{6} X_{42}^{3} X_{23}^{3} + X_{34}^{7} X_{41}^{4} X_{13}^{4} + X_{34}^{8} X_{42}^{4} X_{23}^{4} \nonumber\\ && - X_{34}^{8} X_{41}^{4} X_{13}^{1} - X_{34}^{1} X_{42}^{4} X_{23}^{1} - X_{34}^{2} X_{41}^{1} X_{13}^{2} - X_{34}^{3} X_{42}^{1} X_{23}^{2} \nonumber\\ && - X_{34}^{4} X_{41}^{2} X_{13}^{3} - X_{34}^{5} X_{42}^{2} X_{23}^{3} - X_{34}^{6} X_{41}^{3} X_{13}^{4} - X_{34}^{7} X_{42}^{3} X_{23}^{4} } ~~. \end{eqnarray} \begin{figure}[H] \begin{center} \includegraphics[trim=0cm 0cm 0cm 0cm,width=12 cm]{specc3z6.pdf} \caption{\textit{Brane Tiling on a $g=2$ Riemann Surface.} The figure shows the octagonal fundamental domain of the brane tiling which is the specular dual of $\mathbb{C}^3/\mathbb{Z}_6$ with action $(1,1,4)$.} \label{fspecc3z6} \end{center} \end{figure} The corresponding quivers are shown in \fref{C3Z2quiver}. The Hilbert series of the master spaces are, \beal{esc_2} && g_{1}(t;\widetilde{\mathbb{C}^3/\mathbb{Z}_{2,(1,1,0)}}) = \frac{1 - t^4}{(1 - t) (1 - t^2)^4} ~~, \nonumber\\ && g_{1}(t;\widetilde{\mathbb{C}^3/\mathbb{Z}_{4,(1,1,2)}}) = \frac{1 + 6 t^3 + 6 t^6 + t^9}{(1 - t^3)^6} ~~, \nonumber\\ && g_{1}(t;\widetilde{\mathbb{C}^3/\mathbb{Z}_{6,(1,1,4)}}) = (1 + 3 t^2 + 7 t^4 + 18 t^6 + 38 t^8 + 72 t^{10} + 122 t^{12} + 186 t^{14} + 267 t^{16} \nonumber\\ && \hspace{2cm} + 363 t^{18} + 456 t^{20} + 537 t^{22} + 588 t^{24} + 603 t^{26} + 588 t^{28} + 537 t^{30} + 456 t^{32} \nonumber\\ && \hspace{2cm} + 363 t^{34} + 267 t^{36} + 186 t^{38} + 122 t^{40} + 72 t^{42} + 38 t^{44} + 18 t^{46} + 7 t^{48} \nonumber\\ && \hspace{2cm} + 3 t^{50} + t^{52}) \times \frac{ (1 - t^2)^3 (1 - t^4) }{ (1 - t^6)^7 (1 - t^8)^5 }~~. \end{eqnarray} The fundamental domain of the brane tiling for the specular dual of $\mathbb{C}^3/\mathbb{Z}_{6,(1,1,4)}$ is in \fref{fspecc3z6}. 
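One observes that the numerators of these Hilbert series have palindromic coefficients, which by Stanley's criterion is consistent with the corresponding rings being Gorenstein. A quick Python verification using the coefficients listed above:

```python
def is_palindromic(coeffs):
    """A Hilbert series numerator with palindromic coefficients signals a
    Gorenstein ring (Stanley's criterion)."""
    return coeffs == coeffs[::-1]

# numerator of g_1(t; dual of C^3/Z_4 (1,1,2)):  1 + 6 t^3 + 6 t^6 + t^9
z4_num = [1, 0, 0, 6, 0, 0, 6, 0, 0, 1]

# numerator coefficients (powers t^0, t^2, ..., t^52) of the series for the
# dual of C^3/Z_6 (1,1,4), as listed above
z6_num = [1, 3, 7, 18, 38, 72, 122, 186, 267, 363, 456, 537, 588, 603,
          588, 537, 456, 363, 267, 186, 122, 72, 38, 18, 7, 3, 1]

assert is_palindromic(z4_num) and is_palindromic(z6_num)
```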
It is of great interest to study such brane tilings on higher genus Riemann surfaces. Specular duality yields a new class of quivers and field theories, which will be the subject of a future investigation \cite{HananySeong11b}. \\ \section*{Acknowledgements} We would like to thank S. Cremonesi, S. Franco and G. Torri for fruitful discussions. We also thank J. Stienstra for interesting correspondence. A. H. thanks Stanford University and SLAC for the kind hospitality during various stages of this project. R.-K. S. is grateful to the Simons Center for Geometry and Physics at Stony Brook University and the Hebrew University of Jerusalem for kind hospitality.
\section{Introduction} The ability of modern deep learning techniques to process high-dimensional sensory inputs (e.g., vision or depth) provides a promising avenue for training autonomous robotic systems such as drones, robotic manipulators, or autonomous vehicles to operate in complex and real-world environments. However, one of the fundamental challenges with current learning-based approaches for controlling robots is their limited ability to \emph{generalize} beyond the specific set of environments they are trained on \cite{Sunderhauf18}. This lack of generalization is a particularly pressing problem for safety- or performance-critical systems for which one would ideally like to provide \emph{formal guarantees} on generalization to novel environments. A primary contributing factor to this challenge is the fact that real-world datasets for training robotic systems are often limited in size (e.g., in comparison to large-scale datasets available for training visual recognition models via supervised learning). Such datasets often have to be carefully and painstakingly curated, e.g., by scanning indoor environments using 3D cameras for creating a dataset for visual navigation tasks \cite{armeni_cvpr16, replica19arxiv, xia2020interactive}, or scanning objects and characterizing their physical properties (e.g., inertia, friction, and mass) for creating a dataset for robotic manipulation~\cite{mahler2017dex, calli2017yale, chang2015shapenet}. One way to address this challenge of scarce real-world data is to leverage data from a \emph{generative model} of environments. As an example, consider the problem of manipulating mugs (Fig.~\ref{fig:anchor}); one could hand-craft a generative model that produces shapes that are similar to mugs (e.g., hollow cylinders; Fig. \ref{fig:anchor}) or potentially train a generative model over shapes using a dataset of different objects (e.g., bowls). 
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/GenPAC_anchor.pdf} \caption{A schematic of our overall approach. We provide a framework for obtaining \emph{generalization guarantees} on novel environments by combining a (potentially inaccurate) generative model of environments (e.g., a distribution that generates hollow cylinders) with a finite dataset of real-world environments (e.g., a dataset of mugs). We validate our approach on hardware using the Franka Panda arm, and through multiple simulation experiments. } \label{fig:anchor} \vspace{-5mm} \end{figure} The two sources of data outlined above have complementary features: real-world data is scarce but representative, while data from a generative model is plentiful but potentially different from environments the robot will encounter when deployed. Thus, relying entirely on the small real-world dataset can pose the risk of overfitting, while relying entirely on the generative model may cause the robot to overfit to the specific features of this model and prevent generalization to real-world environments. How can we effectively combine these two sources in order to \emph{guarantee} that the robot will generalize to novel real-world environments? \textit{Statement of contributions: } We present a framework for providing \emph{formal guarantees on generalization} to novel environments for robotic systems with rich sensory inputs by leveraging a combination of \emph{finite real-world data} and a (potentially inaccurate) \emph{generative model} of environments. To our knowledge, the approach presented here is the first to leverage these two sources of data while providing generalization guarantees for robotic systems. The key technical insight behind our approach is to utilize the generative model for specifying a \emph{prior} over control policies. In order to achieve this, we develop a technique for \emph{implicitly} parameterizing policies via datasets of environments. 
We then train a \emph{posterior distribution} over policies using the real-world dataset; this posterior is trained to minimize an \emph{upper bound} on the expected cost across novel environments derived via \emph{Probably Approximately Correct (PAC)-Bayes} generalization theory. Minimizing the PAC-Bayes bound allows us to automatically trade-off reliance on the real-world dataset and the generative model, while resulting in policies with a guaranteed bound on expected performance in novel environments. We demonstrate our approach on two examples which use vision inputs: (i) navigation of an unmanned aerial vehicle (UAV) through obstacle fields, and (ii) grasping of mugs by a robotic manipulator. For both examples, we obtain PAC-Bayes bounds that guarantee successful completion of the task in 80--95\% of novel environments. Comparisons with prior work demonstrate the ability of our approach to obtain stronger generalization guarantees by utilizing generative models. We also present hardware experiments for validating our bounds on the grasping task. \subsection{Related Work} {\bf Domain randomization and data augmentation.} Domain randomization (DR) is a popular technique for improving the generalization of policies learned via reinforcement learning (RL). DR generates new training environments by randomizing specified dynamics and environmental parameters, e.g., object textures, friction properties, and lighting conditions \cite{tobin2017domain, peng2018sim, tan2018sim, akkaya2019solving, mehta2020active}, or generating new objects for manipulation by combining different shape primitives \cite{tobin2018domain}. Similarly, data augmentation techniques such as random cutout and cropping \cite{kostrikov2020image, laskin2020reinforcement} seek to improve generalization for vision-based RL tasks by performing transformations on the observation space. 
While these techniques have been empirically shown to improve generalization, they do not provide any guarantees on generalization (which is the focus of our work). {\bf Generative modeling of environments.} Domain randomization techniques do not necessarily generate realistic environments for training. Consequently, another line of work seeks to address this challenge by generating environments with more realistic structure, e.g., via scene grammars and variational inference \cite{izatt2020generative, qi2018human, kar2019meta}, procedural generation \cite{cobbe2020leveraging}, or evolutionary algorithms \cite{morrison2020egad}. Adversarial techniques have also been developed for generating challenging environments \cite{wang2019adversarial, ren2021distributionally}. Prior work has also explored augmenting real-world training with large amounts of procedurally generated environments via domain adaptation techniques \cite{bousmalis2018using}, transfer learning \cite{kang2019generalization}, or fine-tuning \cite{kulhanek2021visual}. We highlight that none of the methods above provide guarantees on generalization to real-world environments. In this work, we provide a framework based on PAC-Bayes generalization theory in order to combine environments from a generative model with real-world environments and provide generalization guarantees for the resulting policies. Our work is thus complementary to the above techniques and could potentially leverage advances in generative modeling. {\bf Generalization theory.} Generalization theory provides a framework for learning hypotheses (in supervised learning) with guaranteed bounds on the true expected loss on new examples drawn from the underlying (but unknown) data-generating distribution, given only a finite number of training examples. Early frameworks include Vapnik-Chervonenkis (VC) theory \cite{Vapnik68} and Rademacher complexity \cite{Shalev14}. 
However, these methods often provide vacuous generalization bounds for high-dimensional hypothesis spaces (e.g., neural networks). Bounds based on PAC-Bayes generalization theory \cite{Shawe-Taylor97, McAllester99, Seeger02} have recently been shown to provide strong generalization guarantees for neural networks in a variety of supervised learning settings \cite{Dziugiate17, Langford03, Germain09, Bartlett17, Jiang20, Perez-Ortiz20}, and have been significantly extended and improved \cite{Catoni04, Catoni07, McAllester13, Rivasplata19, Thiemann17, Dziugaite18}. PAC-Bayes has also recently been extended to learn policies for robots with guarantees on generalization to novel environments \cite{Majumdar18, Veer20, Ren20, Majumdar21}. In this paper, we build on this work and provide a framework for leveraging generative models as a form of prior knowledge within PAC-Bayes. Comparisons with the approaches presented in \cite{Veer20, Ren20} demonstrate that this leads to stronger generalization guarantees and empirical performance (see Section \ref{sec:examples} for numerical results). \section{Problem Formulation}\label{sec:prob-form} {\bf Dynamics, environments, and sensing.} Consider a robotic system with discrete-time dynamics given by: \begin{equation} x_{t+1} = f_E(x_t, u_t), \end{equation} where $x_t \in$ $\mathcal{X} \subseteq \mathbb{R}^{n_x}$ is the state of the robot at time-step $t$, $u_t \in \mathcal{U} \subseteq \mathbb{R}^{n_u}$ is the control input, and $E \in \mathcal{E}$ is the environment that the robot is operating in. The term ``environment" is used broadly to represent all external factors which influence the evolution of the state of the robot, e.g., an obstacle field that a UAV has to avoid, external disturbances such as wind gusts, or an object that a robotic manipulator is grasping. The dynamics of the robot may be nonlinear/hybrid. 
Let $\mathcal{O} \subseteq \mathbb{R}^{n_o}$ denote the space corresponding to the robot's sensor observations (e.g., the space of images for a camera). {\bf Policies and cost functions.} Let $\pi: \mathcal{O} \rightarrow \mathcal{U}$ be a policy that maps observations (or potentially a history of observations) to actions, and let $\Pi$ denote the space of policies (e.g., neural networks with a certain architecture). The robot's task is specified via a cost function; we let $C(\pi; E)$ denote the cost incurred by policy $\pi$ when deployed in environment $E$ over a time horizon $T$. As an example in the context of UAV navigation, the cost function can assign 1 if the UAV collides with an obstacle, or 0 if it successfully reaches its goal. We assume that the cost is bounded, and without loss of generality assume that $C(\pi; E) \in [0,1]$. Importantly, we make no further assumptions on the cost function (e.g., we \emph{do not} assume continuity or Lipschitzness). {\bf Dataset of real-world environments.} We assume that there is an underlying distribution $\mathcal{D}$ from which real-world environments that the robot operates in are drawn (e.g., an underlying distribution over obstacle environments for UAV navigation, or objects for grasping). Importantly, we \emph{do not} assume that we have explicit knowledge of $\mathcal{D}$ or the space $\mathcal{E}$ of real-world environments. Instead, we assume access to a finite dataset $S := \{E_1, E_2, ..., E_N\}$ of $N$ real-world environments drawn independently from $\mathcal{D}$. 
Indeed, the space $\mathcal{E}_\text{gen}$ will typically be significantly simpler than the space $\mathcal{E}$ of real-world environments. For example, in the context of manipulation (Fig. \ref{fig:anchor}), $\mathcal{E}$ may correspond to the space of all mugs while $\mathcal{E}_\text{gen}$ may correspond to the space of hollow cylinders (described by a small number of geometric and physical parameters). {\bf Goal.} Our goal is to learn a policy that \emph{provably generalizes} to novel real-world environments drawn from $\mathcal{D}$. In this paper, we will employ a slightly more general formulation where we choose a \emph{distribution} $P$ over policies (instead of choosing a single policy). This allows for the use of PAC-Bayes generalization theory. Our goal is then to tackle the following optimization problem: \begin{equation}\label{eq:OPT} \min_{P \in \mathcal{P}} \ C_\mathcal{D}(P), \ \text{where} \ C_\mathcal{D}(P) := \mathop{\mathbb{E}}_{E \sim \mathcal{D}} \mathop{\mathbb{E}}_{\pi \sim P}[C(\pi; E)]. \end{equation} The primary challenge in tackling this problem is that the distribution $\mathcal{D}$ is \emph{unknown} to us. Instead, we have access to a finite number of real-world environments and a (potentially inaccurate) generative model. In the next section, we describe how to leverage these two sources of data in order to learn a distribution $P$ over policies with a \emph{guaranteed bound} on the expected cost $C_\mathcal{D}(P)$, i.e., a provable guarantee on generalization to novel environments drawn from $\mathcal{D}$. \section{Generalization Guarantees with \\ Generative Models} In this section, we describe how to combine generative models with a finite amount of real data in order to produce strong generalization guarantees via PAC-Bayes theory. \subsection{PAC-Bayes Control} Our objective is to solve the optimization problem \eqref{eq:OPT}. 
However, the lack of an explicit characterization of $\mathcal{D}$ prohibits us from directly minimizing $C_\mathcal{D}(P)$. PAC-Bayes generalization bounds \cite{McAllester99} provide a high-confidence upper bound on $C_\mathcal{D}(P)$ in terms of the empirical cost on the training environments $S$ that are drawn from $\mathcal{D}$ and a regularizer. As both these terms can be computed, we minimize the PAC-Bayes upper bound in order to indirectly minimize $C_\mathcal{D}(P)$. Additionally, the PAC-Bayes bound serves as a certificate of generalization to novel environments drawn from $\mathcal{D}$. Let $\Pi:=\{\pi_\theta~|~\theta\in\Theta\subseteq\mathbb{R}^{n_\theta}\}$ denote the space of policies parameterized by the vector $\theta$; as an example, $\theta$ could be the weights and biases of a neural network. For a ``posterior'' policy distribution $P$ on $\Pi$ and a real-world dataset $S \coloneqq \{E_{1}, E_{2}, \cdots ,E_{N} \}$ of $N$ environments drawn i.i.d.\ from $\mathcal{D}$, we define the \emph{empirical cost} as the expected cost across the environments in $S$: \begin{align} C_{S}(P) \coloneqq \frac{1}{N} \sum_{E \in S} \mathop{\mathbb{E}}_{\theta \sim P} [C(\pi_\theta, E)]. \label{eq:emp_cost} \end{align} Let $P_0$ be a ``prior'' distribution over $\Pi$ which is specified before the training dataset $S$ is observed. The PAC-Bayes theorem below then provides an upper bound on the true expected cost $C_\mathcal{D}(P)$ which holds with high probability. 
\begin{theorem}[adapted from \cite{Veer20}]\label{thm:pac-bayes} For any $\delta\in(0,1)$ and posterior $P$, with probability at least $1-\delta$ over sampled environments $S\sim \mathcal{D}^N$, the following inequality holds: \begin{align} ~C_{\mathcal{D}}(P) & \leq C_{PAC}(P,P_0) \nonumber \\ & := \big(\sqrt{C_S(P) + R(P,P_0)} + \sqrt{R(P,P_0)}\big)^2 , \label{eq:quad-pac-bound} \end{align} where $R(P,P_0)$ is a regularization term defined as: \begin{equation}\label{eq:R} R(P,P_0):=\frac{\textrm{KL}(P||P_0) + \log\big(\frac{2\sqrt{N}}{\delta}\big) }{2N} \enspace. \end{equation} \end{theorem} It is challenging to specify good priors $P_0$ on the policy space $\Pi$ in general (e.g., specifying a prior on neural network weights); our previous approaches resorted to techniques such as data splitting \cite{Veer20} and imitation learning \cite{Ren20} to obtain priors. On the other hand, generative models offer an intuitive approach for embedding prior domain knowledge in learning \cite{Sunderhauf18,izatt2020generative,qi2018human,kar2019meta,cobbe2020leveraging}. Motivated by this, we will leverage generative models (based on inductive bias or other data) as priors for the PAC-Bayes theorem. \subsection{Policy Parameterization With Datasets} \label{subsec:policy-param} The posterior $P$ and the prior $P_0$ distributions in Theorem~\ref{thm:pac-bayes} are on the space $\Pi$ of policies. Our key idea for leveraging generative models to provide generalization guarantees is to provide an approach for \emph{implicitly} parameterizing policies $\pi_\theta$ via synthetic datasets drawn from the generative model. This parameterization is then used in Theorem~\ref{thm:pac-bayes} such that the PAC-Bayes bound is specified in terms of the posterior $Q$ and the prior $Q_0$ on the space $\mathcal{E}_\text{gen}$ of synthetic environments. 
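Both terms on the right-hand side of \eqref{eq:quad-pac-bound} are computable: the empirical cost from rollouts on $S$, and the regularizer from the KL divergence between the posterior and the prior. A minimal numerical sketch of Theorem~\ref{thm:pac-bayes} in Python (the cost, KL, and dataset-size values below are purely illustrative and not taken from our experiments):

```python
import math

def regularizer(kl, N, delta):
    """R(P, P0) = (KL(P || P0) + log(2 sqrt(N) / delta)) / (2 N)."""
    return (kl + math.log(2.0 * math.sqrt(N) / delta)) / (2.0 * N)

def pac_bayes_bound(C_S, kl, N, delta=0.01):
    """Quadratic PAC-Bayes bound (sqrt(C_S + R) + sqrt(R))^2 >= C_D(P),
    holding with probability at least 1 - delta over the draw of S."""
    R = regularizer(kl, N, delta)
    return (math.sqrt(C_S + R) + math.sqrt(R)) ** 2

# Illustrative values: empirical cost 0.1 on N = 1000 environments, KL = 5.
bound = pac_bayes_bound(C_S=0.1, kl=5.0, N=1000)
assert bound >= 0.1   # the bound always dominates the empirical cost
```

The bound tightens as $N$ grows and as the KL divergence to the prior shrinks, which is precisely the trade-off exploited in the training procedure of Section \ref{sec:training}.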
Let $\hat{S}$ be a synthetic (i.e., generated) dataset of cardinality $l$ and let $L:\Pi\times\mathcal{E}_\text{gen}^l\to [0,\infty)$ be a loss function; e.g., $L$ can be the average cost of deploying a policy $\pi_\theta$ in environments in $\hat{S}$. Then, let $A:\mathcal{E}_\text{gen}^l\to\Theta$ be an arbitrary \emph{deterministic algorithm} for (approximately) solving the optimization problem: \begin{align}\label{eq:parameterization-obj} \arg\inf_{\theta\in\Theta} L(\pi_\theta, \hat{S}) \enspace. \end{align} Any such algorithm then provides a way to parameterize policies $\pi_{A(\hat{S})}$ implicitly via datasets $\hat{S}$. We note that we do not impose any additional conditions on $A$ (e.g., $A$ need not solve \eqref{eq:parameterization-obj} to global/local optimality). Moreover, although we require $A$ to be deterministic, we can use stochastic optimization approaches --- such as stochastic gradient descent --- by fixing a random seed (this ensures deterministic outputs for a given input). The algorithm $A$ gives rise to a push-forward measure for distributions from the synthetic environment space $\mathcal{E}_\text{gen}$ to the policy space $\Pi$. We overload the notation to express the push-forward distribution on the policy space as $A(Q)$. \subsection{PAC-Bayes Bounds With Generative Models} \label{subsec:pac-bayes-gen} In order to provide PAC-Bayes bounds using generative models, we encode the posterior $P$ and the prior $P_0$ on the policy space via posterior $Q$ and prior $Q_0$ generative models as follows: $P = A(Q)$, and $P_0 = A(Q_0)$. We are now ready to present the PAC-Bayes bound with generative models. \begin{theorem}\label{thm:pac-bayes-gen} Let $A$ be a deterministic algorithm as defined above. 
For any $\delta\in(0,1)$ and posterior generative model $Q$ on $\mathcal{E}_\text{gen}$, with probability at least $1-\delta$ over sampled real-world environments $S\sim \mathcal{D}^N$, the following holds: \begin{align} ~C_{\mathcal{D}}(A(Q)) & \leq C_{PAC}(Q,Q_0) \nonumber \\ & := \big(\sqrt{C_S(A(Q)) + R(Q,Q_0)} + \sqrt{R(Q,Q_0)}\big)^2 , \label{eq:gen-quad-pac-bound} \end{align} where \begin{align}\label{eq:data-gen-pac-bayes-emp} C_S(A(Q)) := \frac{1}{N} \sum_{E \in S} \mathop{\mathbb{E}}_{\hat{S} \sim Q} [C(\pi_{A(\hat{S})}, E)] \end{align} and $R(Q,Q_0)$ is the same as \eqref{eq:R}. \end{theorem} \begin{proof} The proof follows by choosing $A(Q)$ as the posterior policy distribution $P$ and $A(Q_0)$ as the prior policy distribution $P_0$ in \eqref{eq:quad-pac-bound}, giving us the following bound: \begin{align} C_{\mathcal{D}}(A(Q)) & \leq \big(\sqrt{C_S(A(Q)) + R(A(Q),A(Q_0))} \nonumber \\ & \phantom{\leq} + \sqrt{R(A(Q),A(Q_0))}\big)^2 . \label{eq:inter-1} \end{align} The empirical cost can be expressed as: \begin{align} C_{S}(A(Q)) = \frac{1}{N} \sum_{E \in S} \mathop{\mathbb{E}}_{\theta\sim A(Q)} [C(\pi_\theta, E)]. \end{align} Sampling $\theta$ from the push-forward measure $A(Q)$ is equivalent to sampling $\hat{S}$ from $Q$ and then computing $A(\hat{S})$. Therefore, the empirical cost can be expressed as \eqref{eq:data-gen-pac-bayes-emp}. Using the data-processing inequality \cite{Cover99}, we have $KL(A(Q)||A(Q_0)) \leq KL(Q||Q_0)$, which further implies $R(A(Q),A(Q_0)) \leq R(Q,Q_0)$. Using this in \eqref{eq:inter-1} completes the proof. \end{proof} Minimizing $C_{PAC}$ provides us with a policy distribution $A(Q)$ with a guaranteed bound on the expected cost $C_\mathcal{D}$ on novel environments, thereby tackling the optimization problem \eqref{eq:OPT}. \section{Training} \label{sec:training} In this section, we present our training pipeline for combining a generative model with real-world data in order to provide strong generalization guarantees.
First, we describe the algorithm $A$ used for parameterizing policies through datasets (Sec.~\ref{subsec:policy-param}). Then we provide the algorithm for minimizing the PAC-Bayes upper bound in Theorem~\ref{thm:pac-bayes-gen}. \subsection{Policy Parameterization With Datasets} As discussed in Sec. \ref{subsec:policy-param}, we require a deterministic algorithm $A$ (that attempts to minimize a loss $L$) in order to implicitly parameterize policies $\pi_{A(\hat{S})}$ via datasets $\hat{S}$. For the results in this paper, we use $L$ as the average cost of deploying a policy $\pi_\theta$ in environments contained in $\hat{S}$: \begin{align} L(\pi_\theta,\hat{S}):= \frac{1}{l} \sum_{E_\text{gen} \in \hat{S}} C(\pi_\theta, E_\text{gen}). \end{align} To minimize $L$, we choose the algorithm $A$ to be Evolutionary Strategies (ES) \cite{Wierstra14} with an a priori fixed random seed; fixing the random seed ensures that the algorithm is deterministic. ES belongs to a family of black-box optimizers which train a distribution on the policy space. The choice of ES is driven by our use of black-box simulators through which the gradient of the loss cannot be backpropagated (e.g., due to the loss being non-differentiable or due to the dynamics of the robot being hybrid). Additionally, ES permits a high degree of parallelization, thereby allowing us to effectively exploit cloud computing resources. In the interest of space, we omit further details of our ES implementation; they can be found in \cite[Sec.~4.1]{Veer20}. \subsection{Training a PAC-Bayes Generative Model} We assume the availability of a generative model expressed by a distribution $\mathcal{D}_\text{gen}$ on $\mathcal{E}_\text{gen}$ (see Sec.~\ref{sec:prob-form}); this model could be hand-specified based on prior knowledge or constructed using other data.
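A generic ES update with a fixed seed can be sketched as follows; this is a simplified stand-in for the implementation in \cite[Sec.~4.1]{Veer20}, with a toy quadratic loss in place of the rollout cost:

```python
import numpy as np

def es_minimize(loss, theta0, seed=0, iters=200, pop=20, sigma=0.1, lr=0.05):
    """Basic Evolution Strategies with a fixed RNG seed, so the whole
    optimization is a deterministic function of its inputs (the role of A)."""
    rng = np.random.default_rng(seed)          # a priori fixed seed
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))   # Gaussian perturbations
        losses = np.array([loss(theta + sigma * e) for e in eps])
        # Score-function gradient estimate with a mean baseline
        grad = ((losses - losses.mean())[:, None] * eps).mean(axis=0) / sigma
        theta -= lr * grad
    return theta

# Toy quadratic stand-in for L(pi_theta, S_hat): average cost over two
# hypothetical "environments" (the real loss is a black-box rollout cost)
targets = np.array([[1.0, -2.0], [1.2, -1.8]])
loss = lambda th: float(np.mean(np.sum((th - targets) ** 2, axis=1)))
theta_star = es_minimize(loss, theta0=[0.0, 0.0])
```

Because the seed is fixed, calling `es_minimize` twice with the same inputs returns bitwise-identical parameters, which is exactly the determinism required of $A$.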
Leveraging $\mathcal{D}_\text{gen}$, we first construct a prior generative model and then train a posterior generative model by minimizing the PAC-Bayes bound in Theorem~\ref{thm:pac-bayes-gen}. As has been shown in \cite{Veer20} and \cite{Majumdar21}, PAC-Bayes minimization takes the form of an \emph{efficiently-solvable convex program} for discrete probability distributions. To exploit this convex formulation (which allows one to optimize the PAC-Bayes bound in a computationally efficient manner), we construct a prior generative model $q_0$ which takes the form of a discrete probability distribution as follows:\\ \emph{Let $\mathcal{D}_\text{gen}$ be a generative model which takes the form of a distribution on the synthetic environment space $\mathcal{E}_\text{gen}$, as discussed in Sec.~\ref{sec:prob-form}. Sample $m$ datasets of cardinality $l$ each from $\mathcal{D}_\text{gen}$ to construct the set of datasets $\tilde{S}:=\{\hat{S}_1,\cdots,\hat{S}_m~|~\hat{S}_i\sim \mathcal{D}_\text{gen}^l\}$. The prior generative model $q_0$ is then defined as the uniform distribution on $\tilde{S}$.} To train a posterior generative model $q$ (which is a discrete probability distribution on the set $\tilde{S}$ of synthetic datasets), we minimize the PAC-Bayes upper bound in Theorem~\ref{thm:pac-bayes-gen}. To transform this minimization into a convex program, we first compute a cost vector $C\in\mathbb{R}^m$. Each entry $C_i$ of this vector corresponds to the expected cost of deploying the policy $\pi_{A(\hat{S}_i)}$, parameterized by the synthetic dataset $\hat{S}_i$, in the real-world training dataset $S$. Therefore, the empirical cost $C_S(A(Q))$ can be expressed as $Cq$ (which is linear in the generative model posterior $q$). Leveraging this, we can express the PAC-Bayes bound minimization as follows: \begin{align} \min_{q\in\mathbb{R}^m} \quad & \big(\sqrt{Cq + R(q,q_0)} + \sqrt{R(q,q_0)}\big)^2 \label{eq:REP}\\ \textrm{s.t.} \quad & \sum_{i=1}^m q_i = 1, \quad 0\leq q_i \leq 1.
\nonumber \end{align} Using the epigraph trick, as detailed in \cite{Veer20}, \eqref{eq:REP} can be further transformed to a convex program. In the interest of space, we direct the reader to \cite[Sec.~4.2]{Veer20} for complete details of the algorithm to solve \eqref{eq:REP}. We provide a sketch of our entire training pipeline in Alg.~\ref{alg:train-pipeline}. \begin{algorithm}[h] \caption{Training Pipeline \label{alg:train-pipeline}} \small \begin{algorithmic}[1] \State \textbf{Input:} Generative model: $\mathcal{D}_\text{gen}$; real-world dataset: $S\sim\mathcal{D}^N$ \State \textbf{Input:} Number of synthetic datasets: $m$ \State \textbf{Input:} Cardinality of each synthetic dataset: $l$ \State \textbf{Input:} Deterministic algorithm for \eqref{eq:parameterization-obj}: $A$ \State Sample $\hat{S}_1,\cdots, \hat{S}_m\sim\mathcal{D}_\text{gen}^l$ \State $q_0 \gets [1/m, \cdots, 1/m] $ \State $q\gets \text{PAC-Bayes}(S,A,q_0,\{\hat{S}_i\}_{i=1}^m)$ by solving \eqref{eq:REP} \\ \Return $q$ \end{algorithmic} \normalsize \end{algorithm} \section{Examples} \label{sec:examples} We demonstrate the ability of our framework to provide strong generalization guarantees for two robotic systems with nonlinear/hybrid dynamics and rich sensory inputs: a drone navigating obstacle fields using onboard vision, and a manipulator grasping mugs using an external depth camera. All training is conducted on a \texttt{Lambda Blade} server with \texttt{2x Intel Xeon Gold 5220R} (96 threads), 760 GB of RAM, and 8 \texttt{NVIDIA GeForce RTX 2080}, each with 12 GB memory. We compare our bounds against those in previous works with similar examples. 
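To illustrate the bound minimization in \eqref{eq:REP}, the following Python sketch optimizes a discrete posterior $q$ over $m=4$ hypothetical synthetic datasets using exponentiated-gradient descent on the simplex; our actual implementation instead uses the epigraph-based convex program of \cite{Veer20}, and the cost vector here is made up for illustration:

```python
import numpy as np

def reg(q, q0, n, delta):
    # R(q, q0) from Eq. (eq:R), with a discrete KL divergence
    kl = float(np.sum(q * np.log(q / q0)))
    return (kl + np.log(2.0 * np.sqrt(n) / delta)) / (2.0 * n)

def pac_bound(q, C, q0, n, delta):
    # Objective of the minimization: (sqrt(Cq + R) + sqrt(R))^2
    r = reg(q, q0, n, delta)
    return float((np.sqrt(C @ q + r) + np.sqrt(r)) ** 2)

def minimize_bound(C, n, delta, steps=500, lr=0.5, h=1e-6):
    """Exponentiated-gradient descent on the probability simplex; a simple
    stand-in for the epigraph-based convex program."""
    m = C.size
    q0 = np.full(m, 1.0 / m)      # uniform prior over the m synthetic datasets
    q = q0.copy()
    for _ in range(steps):
        g = np.zeros(m)
        for i in range(m):        # numerical gradient of the bound
            dq = np.zeros(m)
            dq[i] = h
            g[i] = (pac_bound(q + dq, C, q0, n, delta)
                    - pac_bound(q, C, q0, n, delta)) / h
        q = q * np.exp(-lr * g)   # multiplicative update stays positive
        q /= q.sum()              # renormalize onto the simplex
    return q, q0

C = np.array([0.9, 0.2, 0.5, 0.1])  # hypothetical per-dataset empirical costs
q, q0 = minimize_bound(C, n=1000, delta=0.01)
```

As expected, the optimizer shifts probability mass toward the synthetic datasets whose induced policies incur low cost on the real environments, while the KL term keeps $q$ from drifting too far from the uniform prior.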
\subsection{Vision-based obstacle avoidance with a drone} \begin{figure}[t] \centering \subfigure[] { \includegraphics[trim={14cm 3cm 14cm 0cm},clip,width=0.23\textwidth]{figures/quadrotor-navigation.png} \label{fig:UAV-env} } \hspace{-3mm} \subfigure[] { \includegraphics[width=0.23\textwidth]{figures/quadrotor-primitives.png} \label{fig:UAV-prim} } \vskip -10pt \caption{Vision-based navigation with a UAV. \textbf{(a)} Environment with randomly generated obstacles. \textbf{(b)} Primitive library for the UAV. \label{fig:UAV} } \vspace{-5mm} \end{figure} \textbf{Overview.} In this example, we train a quadrotor equipped with an onboard depth camera to navigate across obstacle fields. The obstacle course is a tunnel populated by cylindrical obstacles as shown in Fig. \ref{fig:UAV-env}. The dynamics and sensor are simulated using \texttt{PyBullet} \cite{pybullet}. \textbf{Environments.} The distribution $\mathcal{D}$ over environments samples the radii, locations, and orientations of 23 obstacles in order to generate an environment; the radii are drawn from a uniform distribution over [5cm, 30cm], the locations of the center of the cylinders are drawn from [-5m, 5m]$\times$[0m, 14m], and the orientations are quaternion vectors drawn from a normal distribution. \textbf{Generative model.} The generative model $\mathcal{D_{\rm gen}}$ samples radii, locations, and orientations of obstacles from the same distributions as $\mathcal{D}$. However, the number of obstacles in each environment drawn from $\mathcal{D_{\rm gen}}$ is different from the number of obstacles in environments drawn from $\mathcal{D}$. In our experiments, we will study the effects of degrading the quality of the generative model by varying this parameter. \textbf{Motion primitives and planning policy.} We pre-compute a library of 25 motion primitives (Fig. 
\ref{fig:UAV-prim}), each of which is generated by connecting the initial position of the robot to a desired final position by a smooth sigmoidal trajectory. The robot's policy takes a $50 \times 50$ depth image from the onboard camera as input and selects a motion primitive to execute. This policy is applied in a receding-horizon manner (i.e., the robot selects a primitive, executes it, selects another primitive, etc.). The policy is parameterized using a deep neural network ($\sim$14K parameters) and is based on the policy architecture for vision-based UAV navigation presented in \cite{Veer20}. \textbf{Training.} We choose the cost $1 - \frac{k}{K}$ where $k$ is the number of motion primitives successfully executed before colliding with an obstacle and $K$ is the total possible primitive executions; in our example $K = 12$. We train policies via the pipeline described in Section \ref{sec:training}. We choose $m=50$ datasets in $\tilde{S}$, and each dataset $\hat{S}_i \in \tilde{S}$ has cardinality $l=50$. With 6 GPUs and 48 CPUs, it takes $\sim$ 6-8 hours to train the priors and $\sim$ 200-1000 seconds to train the posterior (depending on the number of real environments used). \textbf{Results.} We consider different generative models $\mathcal{D}_\text{gen}$ by varying the number of obstacles $N_\text{O}$ sampled in any generated environment; we vary this parameter in the set $\{10, 15, 20, 23, 25, 30\}$. Generalization guarantees are obtained using each variation of the generative model. We set $\delta = 0.01$ to have bounds that hold with probability 0.99. \begin{figure} \centering \includegraphics[scale = 0.52]{figures/UAV_Comparison.png} \caption{PAC-Bayes bounds for different choices of the generative model (obtained by varying the number $N_\text{O}$ of obstacles sampled in each environment). Bounds generally become stronger as we increase $N_\text{O}$. 
Comparisons with \cite{Veer20} (dotted lines) demonstrate the benefits of our approach, particularly for smaller values of $N$ (the number of available real-world environments).} \label{fig:UAV_Comparison} \end{figure} \begin{table}[h] \centering \renewcommand{\arraystretch}{1.2} \begin{adjustbox}{width=1\columnwidth,center} \begin{tabular}{|c|c|p{1.5cm}|c|p{1.5cm}|} \hline \multirow{2}{0.7 cm}{\textbf{Envs (N)}} & \multicolumn{2}{c|}{\textbf{Using generative model (ours)}} & \multicolumn{2}{c|}{\textbf{Approach from \cite{Veer20}}}\\ \cline{2-5} & \textbf{PAC Bound} & \textbf{True Cost (Estimate)} & \textbf{PAC Bound} & \textbf{True Cost (Estimate)}\\ \hline 980 & 20.92 \% & 13.81 \% & 29.82 \% & 19.7 \% \\ \hline 1480 & 19.60 \% & 13.81 \% & 26.02 \% & 18.34 \% \\ \hline 4480 & 16.76 \% & 13.86 \% & 21.52 \% & 18.43 \% \\ \hline \end{tabular} \end{adjustbox} \caption{\footnotesize{Comparison of PAC-Bayes bounds with true expected cost on novel environments (estimated via exhaustive sampling). The framework presented here provides both stronger guarantees and empirical performance on novel environments as compared to \cite{Veer20}.}} \label{tab:table_1} \end{table} Figure \ref{fig:UAV_Comparison} plots the PAC-Bayes bounds on the expected cost for different choices of $N_\text{O}$. For example, when $N_\text{O}=30$ and $N=4480$, the PAC-Bayes bound using our approach is $0.1676$. Thus, we can guarantee that on average the quadrotor will successfully navigate through at least 83.24\% ($100\% - 16.76\%$) of novel real-world environments. We also compare these bounds with those provided by the method in \cite{Veer20} (plotted with dotted lines), which splits a given dataset of $N$ real-world environments into two portions; the first portion is used to train a prior over policies and the second portion is used to obtain a posterior distribution over policies by minimizing the PAC-Bayes bound in Theorem \ref{thm:pac-bayes}. 
We provide our approach with the same number of real-world environments (i.e., $N$) as used in \cite{Veer20} in order to ensure a fair comparison. For each $N$, the bounds generally become stronger as we increase the number of obstacles $N_\text{O}$ sampled by the generative model. Figure \ref{fig:UAV_Comparison} demonstrates that the approach presented here is able to produce stronger bounds than the ones provided by \cite{Veer20}, with significant differences when $N_\text{O} = 30$. Interestingly, the benefits of our approach become more apparent when the number $N$ of available real-world environments is small. For example, when $N=980$, the bounds provided by our approach are stronger for all choices of $N_\text{O}$. When $N$ is small, the prior information provided by the generative model becomes important (as one would intuitively expect). Table \ref{tab:table_1} compares the theoretical generalization bounds obtained for the case when $N_\text{O}=30$ with the true expected cost on novel environments (estimated via exhaustive sampling of novel environments). Results are presented for different numbers $N$ of real-world environments for both our method and the one from \cite{Veer20}. As the table illustrates, our approach results in significantly improved performance on novel environments for all values of $N$. \vspace{-2mm} \subsection{Grasping a diverse set of mugs} \textbf{Overview.} This example aims to train a Franka Panda arm to grasp and lift a mug (Fig. \ref{fig:anchor}). The arm has an overhead camera which provides a 128 $\times$ 128 depth image. The simulation environment for this system is implemented using \texttt{PyBullet} \cite{pybullet}, and we also present hardware results on the Franka arm shown in Fig. \ref{fig:anchor} (right). \textbf{Environments.} The real-world environments used for training are drawn from a set of mugs with diverse shapes and sizes collected from the ShapeNet dataset \cite{chang2015shapenet}. 
The initial x-y position of these mugs is sampled from the uniform distribution over $[0.45~\text{m},~0.55~\text{m}] \times [-0.05~\text{m},~0.05~\text{m}]$, and yaw orientations are sampled from the uniform distribution over $[-\pi~\text{rad},\pi~\text{rad}]$. All mugs are placed upright. \textbf{Generative model.} The generative model $\mathcal{D}_\text{gen}$ comprises hollow cylinders which are generated using \texttt{trimesh} \cite{trimesh}. The inner radii, outer radii, and heights of the cylinders are sampled from uniform distributions. The ratio of the maximum possible outer radius to inner radius is 2, and the height ranges from twice the maximum inner radius to twice the maximum outer radius. The initial location and yaw are sampled from the same distributions as $\mathcal{D}$. \textbf{Policy.} The robot's policy is parameterized using a deep neural network (DNN) which takes a depth map of an object and a latent state $z\in\mathbb{R}^{10}$ sampled from a Gaussian distribution as input and outputs a grasp location and orientation. We keep the weights of the DNN fixed and update the distribution on the latent space. Effectively, the latent space acts as the space of policy parameters $\Theta$ and the Gaussian distribution on it is the policy distribution $P$; further details of the policy's architecture can be found in \cite{Ren20}. \textbf{Training.} If the arm is able to grasp and lift a mug by 10 cm, we consider the rollout to be successful and assign a cost of 0; otherwise we assign a cost of 1. We follow the pipeline in Alg.~\ref{alg:train-pipeline} for training. We choose $m=50$ datasets in $\tilde{S}$, with each dataset $\hat{S}_i\in\tilde{S}$ having cardinality $l=50$. With 80 CPUs, the priors train in $\sim$ 3 hours, and the posterior takes $\sim$ 900 seconds.
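Sampling from this cylinder generative model can be sketched as follows; only the ratio constraints and the pose ranges come from the description above, while the absolute inner-radius range is a hypothetical choice, and the \texttt{trimesh} mesh construction is omitted:

```python
import numpy as np

def sample_cylinder_env(rng, r_in_max=0.04):
    """Sample one hollow-cylinder 'mug' from D_gen. Only the ratio
    constraints and pose ranges come from the text; the absolute
    inner-radius range (r_in_max, in meters) is a hypothetical choice."""
    r_out_max = 2.0 * r_in_max                 # max outer radius = 2x max inner
    r_in = rng.uniform(0.5 * r_in_max, r_in_max)
    r_out = rng.uniform(r_in, r_out_max)       # outer radius encloses inner
    height = rng.uniform(2.0 * r_in_max, 2.0 * r_out_max)
    x = rng.uniform(0.45, 0.55)                # same pose distribution as D
    y = rng.uniform(-0.05, 0.05)
    yaw = rng.uniform(-np.pi, np.pi)
    return dict(r_in=r_in, r_out=r_out, height=height, pose=(x, y, yaw))

rng = np.random.default_rng(0)
synthetic_dataset = [sample_cylinder_env(rng) for _ in range(50)]  # l = 50
```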
\textbf{Simulation results.} We obtain theoretical generalization guarantees using the generative model described above and compare them with the theoretical guarantees obtained in \cite{Ren20}. We use the same set of 500 mugs from ShapeNet used by \cite{Ren20} as our real dataset in order to train the posterior and obtain the PAC-Bayes bound. Our resulting PAC-Bayes bound (with $\delta = 0.01$) is 0.054. Thus, our policy is guaranteed to have a success rate of at least 94.6\%, which is higher than the 93\% guaranteed success rate in \cite{Ren20} (despite using the same real-world dataset of mugs for training). \textbf{Hardware results.} The posterior policy distribution trained in simulation is deployed on the hardware setup shown in Fig. \ref{fig:anchor} without additional training (i.e., zero-shot sim-to-real transfer). Ten mugs with diverse shapes are used (Fig.~\ref{fig:mugs}). Among three sets of experiments with different seeds (for sampling the latent $z$), the success rates are 100\% (10/10), 100\% (10/10), and 90\% (9/10). The overall success rate is 96.67\% (29/30), which is consistent with the PAC-Bayes guarantee of at least $94.6\%$ obtained in simulation. \begin{figure}[t] \centering \includegraphics[trim={0cm 18cm 0cm 18cm},clip,width=0.45\textwidth]{figures/mugs2.jpg} \caption{Mugs used for hardware validation of the grasping policy. \label{fig:mugs} } \vspace{-7mm} \end{figure} \section{Conclusions and Future Work} We have presented an approach for learning policies for robotic systems with \emph{guarantees on generalization} to novel environments by leveraging a finite dataset of real-world environments in combination with a (potentially inaccurate) generative model of environments. The key idea behind our approach is to use the generative model in order to implicitly specify a prior over policies, which is then updated using the real-world environments by optimizing generalization bounds derived via PAC-Bayes theory.
Our simulation and hardware results demonstrate the ability of our approach to provide strong generalization guarantees for systems with nonlinear/hybrid dynamics and rich sensing modalities, and obtain stronger guarantees and empirical performance than prior methods that do not leverage generative models. Exciting directions for future work include (i) obtaining stronger guarantees by going beyond the hand-crafted generative models used here and using state-of-the-art techniques for generative modeling, (ii) directly optimizing a posterior generative model $Q$ in Theorem \ref{thm:pac-bayes-gen} (without performing the finite sampling described in Section \ref{sec:training}), and (iii) implementing the UAV navigation example on a hardware platform. \bibliographystyle{IEEEtran}
\section{Lower Bound for Anonymous One-Shot Agreement} \label{m-conc-lower-sec} \indent We now turn to anonymous algorithms, where processes are not equipped with identifiers and are programmed identically. We also assume that the domain of possible input values is \nat. In this section, we show that any $n$-process anonymous algorithm for $m$-obstruction-free (one-shot) $k$-set agreement requires $\Omega(\sqrt{\frac{nm}{k}})$ registers. Note that this bound on space complexity reflects all three parameters: increasing $n$ or $m$ makes the problem harder and increasing $k$ makes the problem easier. It also generalizes the anonymous result of Fich, Herlihy and Shavit \cite{FHS98} (which is the special case when $m=k=1$) by showing the dependence on two additional parameters $m$ and $k$. The assumption of anonymity allows us to add {\it clones} to an execution. A clone of a process $p$ is another process $p'$ that has the same input as $p$. Whenever $p$ takes a step, $p'$ takes an identical step immediately afterwards. Let $A$ be an anonymous algorithm that solves $m$-obstruction-free $k$-set agreement among $n$ processes using finitely many registers. For each set $V$ of $m$ distinct input values, fix an execution $\alpha(V)$ such that at most $m$ processes take steps during $\alpha(V)$ and output all values in~$V$. (Such an execution exists, by Lemma \ref{lem:m-val}.) Let $\mathbf{R}(V)$ be the sequence of distinct registers written during $\alpha(V)$ in the order they are first written in $\alpha(V)$.
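To preview how the $\Omega(\sqrt{\frac{nm}{k}})$ rate arises, here is a back-of-the-envelope version of the arithmetic in Theorem~\ref{m-conc-lower-thm} below: a correct algorithm using $r$ registers must falsify the hypothesis of Lemma~\ref{anonymous-gluing}, so
\[
n \;<\; \ceil{\frac{k+1}{m}}\left(m+\frac{r^2-r}{2}\right) \;\leq\; \frac{2k}{m}\left(m+\frac{r^2}{2}\right) \;=\; 2k+\frac{kr^2}{m},
\]
using $\ceil{\frac{k+1}{m}}\leq\frac{k+m}{m}\leq\frac{2k}{m}$ for $m\leq k$. Rearranging gives $r>\sqrt{m(\frac{n}{k}-2)}$; in particular, for $m=k=1$ this recovers the $\Omega(\sqrt{n})$ bound of \cite{FHS98}.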
For any sequence $\mathbf{R}$ of distinct registers, define $\textswab{V}(\mathbf{R}) = \{V \subset \nat : |V|=m \mbox{ and }\mathbf{R} \mbox{ is a prefix of } \mathbf{R}(V)\}$. \begin{lemma} \label{anonymous-gluing} Let $r>0$ and suppose $n\geq \ceil{\frac{k+1}{m}}(m+\frac{r^2-r}{2})$. Then, for $i=0,\ldots,r+1$, there is a sequence $\mathbf{R}_i$ of length $i$ such that $\textswab{V}(\mathbf{R}_i)$ is an infinite set. \end{lemma} \begin{proof} We prove the claim by induction on $i$. Base case ($i=0$): $\mathbf{R}_0$ is the empty sequence and $\textswab{V}(\mathbf{R}_0) = \{V\subset \nat : |V|=m\}$ is infinite. Induction step: Let $i\in \{1,2,\ldots,r+1\}$. Assume there is a sequence $\mathbf{R}_{i-1}=\langle R_1, R_2, \ldots, R_{i-1}\rangle$ such that $\textswab{V}(\mathbf{R}_{i-1})$ is infinite. The induction step is technical, so we begin with an informal overview. Let $c=\ceil{\frac{k+1}{m}}$. We first show that there cannot be $c$ disjoint sets $V_1,\ldots,V_c$ in $\textswab{V}(\mathbf{R}_{i-1})$ such that each $\alpha(V_\ell)$ writes only to registers in $\mathbf{R}_{i-1}$; otherwise, we could glue together the $\alpha(V_\ell)$'s so that each $\alpha(V_\ell)$ is invisible to all the others, and the number of output values in this glued-together execution would be $|V_1\cup V_2\cup \cdots \cup V_c| = mc >k$. Then, the rest of the argument is easy: infinitely many sets in $\textswab{V}(\mathbf{R}_{i-1})$ must have register sequences of length at least $i$. Since there are only finitely many registers, infinitely many of those sets have the same register $R$ in position $i$ of their sequence. These form the infinite set $\textswab{V}(\mathbf{R}_i)$, where $\mathbf{R}_i = \mathbf{R}_{i-1}\cdot R$. To derive a contradiction, assume that (*) there exist $c$ disjoint sets $V_1,\ldots,V_c$ in $\textswab{V}(\mathbf{R}_{i-1})$ such that for all $\ell$, $\alpha(V_\ell)$ writes only to registers in $\mathbf{R}_{i-1}$. 
Let $P_1,\ldots,P_c$ be $c$ disjoint sets of $m$ processes each. The following claim describes how we can glue together the $\alpha(V_\ell)$'s. If $\beta$ is an execution and $P$ is a set of processes, $\beta|P$ denotes the subsequence of $\beta$ consisting of steps taken by processes in $P$. \medskip {\sc Claim:} For $j=0,1,\ldots,i-1$, there exists an execution $\beta_j$ with the following properties. \begin{enumerate} \item \label{process-bound} Exactly $\frac{cj(j-1)}{2}$ processes outside of $P_1\cup \ldots\cup P_c$ take steps during $\beta_j$. \item \label{write-exists} For $\ell = 1,\ldots,c$, there is a write by some process in $P_\ell$ to each of $R_1,R_2,\ldots,R_j$ during $\beta_j$. \item \label{writes-contained} No process writes to any register outside of $\{R_1,R_2,\ldots,R_j\}$ during $\beta_j$. \item \label{indistinguishable} For $\ell= 1,\ldots,c$, $\beta_j | P_\ell$ is the prefix of $\alpha(V_\ell)$ up to but not including the first write to $R_{j+1}$ (or the entire execution $\alpha(V_\ell)$ if $j=i-1$). \end{enumerate} We prove the claim by inductively constructing the executions $\beta_j$. {\sc Base case} ($j=0$): We build $\beta_0$ by concatenating the maximal prefixes of $\alpha(V_1), \alpha(V_2), \ldots, \alpha(V_c)$ that do not contain any writes, performed by process sets $P_1, \ldots, P_c$, respectively. No processes outside $P_1\cup \cdots\cup P_c$ take steps in $\beta_0$. Property \ref{write-exists} is vacuously satisfied. Properties \ref{writes-contained} and \ref{indistinguishable} follow immediately from the definition of $\beta_0$. {\sc Inductive step}: Let $j\in \{1,\ldots,i-1\}$. Assume that there is a $\beta_{j-1}$ satisfying the four properties. We describe how to construct $\beta_j$. For each $\ell$, we insert $j-1$ clones of processes in $P_\ell$, and we pause one clone just before the last write by a process in $P_\ell$ to each of $R_1,\ldots,R_{j-1}$. 
Such a write exists, by property \ref{write-exists} of the induction hypothesis. Moreover, there are enough processes to create these clones, since the number of processes that take steps in $\beta_{j-1}$ plus the $c(j-1)$ additional clones needed to construct $\beta_j$ total at most $mc+\frac{c(j-1)(j-2)}{2} + c(j-1) = mc + \frac{cj(j-1)}{2} \leq mc + \frac{c(i-1)(i-2)}{2} \leq mc+\frac{cr(r-1)}{2} = \ceil{\frac{k+1}{m}}(m+\frac{r^2-r}{2})$ and by the hypothesis of the lemma, there are this many processes in the system. Let $\beta_{j-1}'$ be the execution that results from adding all of the clones to $\beta_{j-1}$. We add some more steps to the end of $\beta_{j-1}'$ as follows. For each $\ell = 1,\ldots, c$, we add a block write by the clones of processes in $P_\ell$ followed by steps of processes in $P_\ell$ continuing the steps of $\alpha(V_\ell)$ until some process is poised to write to $R_{j+1}$ for the first time (or until the end of $\alpha(V_\ell)$ if $j=i-1$). (This is legal, because the block write ensures that all registers have the same state as they would have after $\beta_{j-1}|P_\ell$, which is a prefix of $\alpha(V_\ell)$, by induction hypothesis \ref{indistinguishable}.) Thus, we ensure that $\beta_j$ satisfies property~\ref{indistinguishable}. By property \ref{indistinguishable} of the inductive hypothesis, the first newly added step by a process in $P_\ell$ writes to $R_j$. Combined with induction hypothesis \ref{write-exists}, this proves property \ref{write-exists}. For $j<i-1$, property \ref{writes-contained} holds because we stop the processes in $P_\ell$ just before they write to any register outside of $\{R_1, \ldots, R_j\}$. For $j=i-1$, property \ref{writes-contained} follows from our assumption (*) that $\alpha(V_\ell)$ writes only to registers in $\mathbf{R}_{i-1}$. 
The processes outside $P_1\cup \cdots \cup P_c$ that take steps in $\beta_j$ are the $\frac{c(j-1)(j-2)}{2}$ processes that take steps in $\beta_{j-1}$ plus the $c(j-1)$ clones that we added when constructing $\beta_{j-1}'$. So the total number of such processes is $\frac{cj(j-1)}{2}$, satisfying property \ref{process-bound}. This completes the proof of the claim. \medskip In $\beta_{i-1}$ processes in $P_\ell$ output all $m$ values in $V_\ell$ (for all $\ell$). Since $V_1,\ldots,V_c$ are disjoint sets, there are at least $cm = \ceil{\frac{k+1}{m}}\cdot m \geq k+1$ different output values in $\beta_{i-1}$. This contradicts the $k$-agreement property. Thus, assumption (*) is false, so there are fewer than $c$ disjoint sets in $\textswab{V}(\mathbf{R}_{i-1})$ such that $\alpha(V_\ell)$ writes only to registers in $\mathbf{R}_{i-1}$. Thus, there are infinitely many sets $V$ in $\textswab{V}(\mathbf{R}_{i-1})$ such that $\alpha(V)$ writes outside of $\mathbf{R}_{i-1}$. Since there are only finitely many registers, there must be infinitely many of these sets $V$ such that the first register outside of $\mathbf{R}_{i-1}$ written during $\alpha(V)$ is the same for all $V$. Call that register $R$. Let $\mathbf{R}_{i}$ be obtained by concatenating $R$ to the end of $\mathbf{R}_{i-1}$. Then, there are infinitely many sets $V$ such that $\mathbf{R}_i$ is a prefix of $\mathbf{R}(V)$. This completes the proof. \end{proof} \begin{theorem} \label{m-conc-lower-thm} Any anonymous algorithm that solves $m$-obstruction-free $k$-set agreement among $n$ processes using registers must use more than $\sqrt{m(\frac{n}{k}-2)}$ registers. \end{theorem} \begin{proof} Assume an algorithm solves the problem using $r$ registers where $r\leq \sqrt{m(\frac{n}{k}-2)}$. 
Then, \begin{align*} r \leq \sqrt{m\left(\frac{n}{k}-2\right)} & \leq \sqrt{m\left(\frac{2n}{k+m}-2\right)} & & \left(\mbox{since } m\leq k \Rightarrow \tfrac{2n}{k+m}\geq\tfrac{n}{k}\right)\\ & = \sqrt{\frac{2m(n-k-m)}{k+m}}\\ \Rightarrow \quad r^2-r & \leq \frac{2m(n-k-m)}{k+m}\\ \Rightarrow \quad \frac{k+m}{m}\cdot\frac{r^2-r}{2} & \leq n-k-m\\ \Rightarrow \quad n & \geq \frac{k+m}{m}\left(m+\frac{r^2-r}{2}\right) \geq \ceil{\frac{k+1}{m}}\left(m+\frac{r^2-r}{2}\right). \end{align*} So, by Lemma \ref{anonymous-gluing} there exists a sequence of $r+1$ registers used in some executions of $A$, which is impossible since there are only $r$ registers. \end{proof} \section{Anonymous Algorithm for Repeated Set Agreement} \label{m-conc-alg-sec} \begin{theorem} \label{thm:anon-alg} There is an algorithm that solves $m$-obstruction-free repeated $k$-set agreement among $n$ processes (for $m\leq k$) using $(m+1)(n-k)+m^2+1$ registers.
\end{theorem} The anonymous algorithm presented in Figure~\ref{majority} solves $m$-obstruction-free repeated $k$-set agreement among $n$ processes (for $m\leq k$) using $(m+1)(n-k)+m^2+1$ registers. The algorithm uses the same basic idea as the one in Section \ref{linear-space-alg}. It uses a snapshot object with $r=(m+1)(n-k)+m^2$ components, which can be built anonymously and non-blocking using $r$ registers \cite{GR07}. Again, the idea is to allow the first $\ell = n+m-k$ processes to choose arbitrary outputs and then ensure that the last $n-\ell=k-m$ processes output at most $m$ different values, for a total of at most $k$ different values. \begin{figure*} \setcounter{linenum}{0} \begin{code} \stepcounter{linenum}{\scriptsize \arabic{linenum}}\> Shared variables:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $A$: snapshot object with $r= (m+1)(n-k)+m^2$ components, each initially $\bot$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $H$: register, initially the empty sequence\\[-1.5mm]\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} Persistent local variables:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $i \leftarrow 0$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $t \leftarrow 0$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\textit{history}\leftarrow$ empty sequence\\[-1.5mm]\>\hspace*{\value{ind}mm} \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} {\sc Propose}($v$)\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} write $history$ into $H$\\\stepcounter{linenum}{\scriptsize 
\arabic{linenum}}\>\hspace*{\value{ind}mm} $t\leftarrow t+1$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $|\textit{history}| \geq t$ then\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} output the $t$th value in \textit{history} and halt\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} run the following two threads in parallel until one of them produces an output\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} thread 1:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} {\it pref} $\leftarrow v$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\ell \leftarrow n+m-k$ \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} loop\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} update $i$th component of $A$ with value $(\mbox{\it pref},t,\textit{history})$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $s\leftarrow $ scan of $A$ \label{snap-anon}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $\exists j$ such that $s[j]=(w,t',his)$ with $t'> t$ then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $\textit{history} \leftarrow his$ \label{line:anon-hist1}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} output the $t$-th value in $his$ and halt \label{line:anon-exit1}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} if $|\{s[j] : 0\leq j < r\}| \leq m$ and every entry of $s$ is a $t$-tuple then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} 
\addtocounter{ind}{7}\hspace*{7mm} let $w$ be the most frequent value in $s$ \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\textit{history} \leftarrow \textit{history}\cdot w$ \label{line:anon-hist2}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} output $w$ and halt \label{output-anon} \label{line:anon-exit2} \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} if $|\{j : s[j]=(\mbox{\it pref},t,*)\}| < \ell$ and $\exists new$ such that $|\{j:s[j]=(new,t,*)\}|\geq \ell$ then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} {\it pref} $\leftarrow new$\label{change-pref-anon}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} $i\leftarrow (i+1)\mbox{ mod } r$ \label{change-index-anon}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end loop\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} thread 2:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} loop \label{line:anon-t2begin}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} if $|H|\geq t$ then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} let $w$ be the $t$th element of $H$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\textit{history} \leftarrow \textit{history}\cdot w$ \label{line:anon-hist3}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} output $w$ and halt \label{output-anon-2} \label{line:anon-exit3}\\\stepcounter{linenum}{\scriptsize
\arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end loop \label{line:anon-t2end}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} \addtocounter{ind}{-7}\hspace*{-7mm} end {\sc Propose} \end{code} \caption{Anonymous algorithm for $m$-obstruction-free repeated $k$-set agreement.\label{majority}} \end{figure*} For one-shot $k$-set agreement, processes alternate between storing their preferred value in a component of the snapshot object $A$ and performing a scan of $A$. The conditions for outputting a value and adopting a new preference differ from the algorithm in Section \ref{linear-space-alg} to compensate for the lack of identifiers. Whenever a process observes $m$ or fewer different values in a scan, it can output the one that occurs most frequently. Otherwise, if a process sees fewer than $\ell$ copies of its own preference and at least $\ell$ copies of another value, it adopts this other value as its preference. The adaptation of this algorithm to repeated $k$-set agreement is similar to the technique used for the non-anonymous case. There is one additional complication: there is no known space-efficient wait-free anonymous snapshot implementation from registers, so we use a non-blocking implementation. Therefore, some processes may \emph{starve} while accessing the snapshot object, although at least one process is guaranteed to complete infinitely many instances of $k$-set agreement. To ensure that starving processes also complete their {\sc Propose} operations, we use one additional register $H$ where ``fast'' processes write their outputs. Every process periodically checks $H$ in a parallel thread (lines~\lref{line:anon-t2begin}--\lref{line:anon-t2end}) and if it finds that $|H|\geq t$, where $t$ is the instance of agreement the process is working on, it outputs the $t$th value found in $H$.
As in the non-anonymous case, the sequence of values that have been output in the instances of repeated $k$-set agreement the process has completed so far is stored in a local variable $\textit{history}$. To ensure that $\textit{history}$ is updated exactly once per instance of $k$-set agreement, we require that the threads of a process are scheduled so that the pairs of lines~\lref{line:anon-hist1}--\lref{line:anon-exit1}, \lref{line:anon-hist2}--\lref{line:anon-exit2}, and \lref{line:anon-hist3}--\lref{line:anon-exit3} are executed without interruption from the process's other thread. The proof of correctness of our algorithm is given in Appendix~\ref{anonymous-alg}. \section{Proof of correctness of anonymous repeated set agreement} \label{anonymous-alg} \indent To prove Theorem~\ref{thm:anon-alg}, consider our algorithm in Figure~\ref{majority}. The algorithm actually uses a non-blocking snapshot object with $r=(m+1)(n-k)+m^2$ components, which can be built anonymously using $r$ registers \cite{GR07}, plus one additional register. Each component of the snapshot object is initially $\bot$. In this algorithm, a process stores tuples of the form $(v,t,\textit{history})$, where $v$ is the process's preferred value, $t$ indicates which instance of set agreement the process is currently working on, and $\textit{history}$ is a sequence of output values for instances of set agreement. We refer to a tuple whose second element is $t$ as a $t$-tuple. As an invariant, it is easy to see that each of the following can only store input values of some process's $t$th invocation of {\sc Propose}: \begin{itemize} \item a process's {\it pref} variable during the process's $t$th invocation of {\sc Propose}, \item the first component of a $t$-tuple appearing in $A$, and \item the $t$th element of any sequence that is stored in a process's {\it history} variable, in the shared variable $H$ or inside a tuple in $A$. \end{itemize} {\bf Validity} follows. Next, we prove the {\bf $k$-agreement} property.
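For concreteness, the following numerical instance (illustrative only; the specific values play no role in the proof) may help in tracking the counting arguments below. For $n=10$, $k=4$ and $m=2$,
\[
\ell = n+m-k = 8 \qquad \mbox{and} \qquad r = (m+1)(n-k)+m^2 = 3\cdot 6+2^2 = 22,
\]
so the first $n-\ell=2$ deciding processes may output arbitrary values, while the remaining $\ell=8$ may output at most $m=2$ different values, for a total of at most $2+2=4=k$ outputs.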
A process is \emph{$t$-deciding} if it outputs a value on line \lref{output-anon}. Any other process that produces an output for its $t$th {\sc Propose} operation outputs the same result as some $t$-deciding process, so it suffices to show that the $t$-deciding processes output at most $k$ different values. As in Section \ref{linear-space-alg}, we show that the last $\ell=n-k+m$ $t$-deciding processes output at most $m$ different values, so that the total number of outputs for instance $t$ is at most $n-\ell+m=k$ values. If at most $n-\ell$ processes are $t$-deciding, then $k$-agreement is trivial for the $t$th instance of set agreement, since $n-\ell = k-m < k$. So, consider an execution in which more than $n-\ell$ processes are $t$-deciding. Order the $t$-deciding processes according to the time that they perform their last scan in their $t$th invocations of {\sc Propose}, and let $q_0$ be the $(n-\ell+1)$th process in this ordering. Let $X$ be the set of tuples that appear in $q_0$'s final scan. Let $V$ be the set of values that appear in tuples in $X$. We prove that $q_0$ and all $t$-deciding processes that come later in the ordering output values in $V$. We call an update of $A$ performed after $q_0$'s final scan a {\it bad update} if it stores a $t'$-tuple with $t'<t$ or a $t$-tuple whose value is not in $V$. \begin{lemma} After $q_0$ performs its final scan in its $t$th {\sc Propose} operation, each process performs bad updates to at most one component of $A$. \end{lemma} \begin{proof} To derive a contradiction, assume that some process performs bad updates to two components of $A$ after $q_0$'s final scan $scan_0$. Consider the first process $p$ to do a bad update on a second location. Let $s_p$ be the vector returned by the last scan that $p$ performs before its bad update to the second location. This scan causes $p$ to execute line \lref{change-index-anon} so that it can perform an update on the second location. Thus $s_p$ does not contain any $t'$-tuple with $t'>t$.
Since $n-\ell+1$ processes have performed their final scan of their $t$th {\sc Propose} operation at or before $scan_0$, at most $\ell-1$ processes can perform updates that store $t'$-tuples with $t'\leq t$ after $scan_0$. By definition of $p$, none of those $\ell-1$ processes have performed bad updates on two different locations between $scan_0$ and $p$'s scan that returned $s_p$. Since $scan_0$ returned a vector that contained only $t$-tuples, $s_p$ must contain at most $\ell-1$ components that are either $t'$-tuples with $t'<t$ or $t$-tuples with values not in $V$. So there are at least $r-\ell+1 = (m+1)(\ell-1)+1-(\ell-1)=m(\ell-1)+1$ locations of $s_p$ that contain $t$-tuples with values in $V$. Since $|V|\leq m$, one of the values in $V$ must appear in $t$-tuples stored in at least $\ell$ locations. Thus $p$ must adopt a value in $V$ after it obtains the scan $s_p$, contradicting the fact that $p$'s next update after this scan uses a value not in $V$. \end{proof} It follows that at any time after $q_0$'s final scan, there are at most $\ell-1$ $t$-tuples in $A$ with values that are not in $V$. Any $t$-deciding process ordered after $q_0$ performs a final scan that returns only $t$-tuples, so one of the values in $V$ must appear in at least $\ell$ of them, and is therefore the most frequent value in the scan. Thus, the value output by any such process must be in $V$. Hence, the total number of values output is at most $(n-\ell) + |V| \leq n-(n-k+m)+m = k$, ensuring that $k$-agreement is satisfied. Finally, we prove {\bf $m$-obstruction-freedom}. For this part of the proof, it is convenient to consider lines \lref{snap-anon} to \lref{change-index-anon} to be a single atomic action. 
Since there is only one shared-memory access in this block of code, there is no loss of generality in this assumption: every execution has an equivalent execution where this block is executed atomically, so if we prove $m$-obstruction-freedom for executions that satisfy this assumption then it also holds for all executions. Consider an execution where at most $m$ processes continue to take steps forever. Let $P$ be the set of processes that complete infinitely many accesses to the snapshot object. $P$ is non-empty, since the snapshot implementation we use is non-blocking, and $|P|\leq m$. To derive a contradiction, assume that some process in $P$ never completes one of its {\sc Propose} operations. Let $t$ be the smallest number such that some process in $P$ does not complete its $t$th {\sc Propose}. Let $P'$ be the set of processes in $P$ that do not complete their $t$th {\sc Propose} operation. Let $\mu$ be a time after \begin{itemize} \item every process outside $P$ has stopped performing updates on $A$, \item every process in $P'$ has begun its $t$th {\sc Propose} operation, \item every process in $P-P'$ has begun its $(t+1)$th {\sc Propose} operation, and \item no component of $A$ contains a $t'$-tuple for any $t'<t$. \end{itemize} It is possible to choose $\mu$ to satisfy the last condition because each process in $P'$ completes infinitely many iterations of the loop and therefore updates every location of $A$ infinitely often; thus, all $t'$-tuples with $t'<t$ are eventually overwritten. Note that after $\mu$, no component of $A$ ever contains a $t'$-tuple with $t'<t$. We say that a value $v$ is a {\it candidate} in a configuration $C$ if it is either the {\it pref} value of some process in $P'$ or it appears in a $t$-tuple in $A$. We shall prove that there is a configuration after $\mu$ with at most $m$ candidates. After that point, only those $m$ values can appear in $t$-tuples in the snapshot object.
It follows that every process in $P'$ completes its $t$th {\sc Propose} when it next performs a scan, which contradicts the definition of $P'$. \begin{lemma} \label{disappear} If, in some configuration $C$ after $\mu$, a value $v$ is not the {\it pref} of any process in $P'$ and $t$-tuples with value $v$ appear in fewer than $\ell$ components of the snapshot object, then, after some time, $v$ will no longer be a candidate. \end{lemma} \begin{proof} To derive a contradiction, assume that some process in $P'$ changes its local {\it pref} variable to $v$ in some step after $C$. Consider the first such step after $C$, taken by some process $p$. Let $scan$ be the scan performed in that step. Between $C$ and $scan$, no process executing its $t$th {\sc Propose} stores a $t$-tuple with value $v$, so the result of $scan$ contains $t$-tuples with value $v$ in at most $\ell-1$ components, contradicting the fact that $p$ adopts the value $v$ in the step when it performs $scan$. Thus, no process in $P'$ ever has $v$ as its preferred value after $C$. So, no $t$-tuple with value $v$ is ever stored in $A$ after $C$. Since each process in $P'$ executes infinitely many steps of its $t$th {\sc Propose} operation, and increments its index $i$ in every iteration of the loop, it eventually overwrites every component of $A$. Thus, there is a time (after $C$), after which no component of $A$ contains a $t$-tuple with value $v$. After this time, $v$ is never a candidate. \end{proof} \begin{lemma} \label{choice-available} Whenever a process in $P'$ performs a scan after $\mu$, there is some value $v$ that appears in $t$-tuples in at least $\ell$ components of $A$. \end{lemma} \begin{proof} To derive a contradiction, suppose there is no such value $v$. Consider the configuration $C$ immediately after the scan. By Lemma \ref{disappear}, only the values stored in {\it pref} variables of processes in $P'$ remain candidates forever. There are at most $m$ such values.
Thus, there is a time after which every $t$-tuple in $A$ contains only those values. Whenever a process in $P'$ performs a scan after that time, it will terminate, contradicting our assumption that no process in $P'$ ever completes its $t$th {\sc Propose}. \end{proof} For any configuration $C$ and value $v$, let $mult(C,v)$ be the number of components of $A$ that contain $t$-tuples with value $v$ in $C$ plus the number of processes that are poised to perform an update with {\it pref} $v$ in $C$. The following lemma generalizes Lemma \ref{disappear}. \begin{lemma} \label{disappear2} Consider a value $v$. If, in some configuration $C$ after $\mu$, $mult(C,v) < \ell$, then after some time, $v$ will no longer be a candidate. \end{lemma} \begin{proof} We first show that if a single step $st$ takes the system from a configuration $C_1$ to another configuration $C_2$ and $mult(C_1,v)<\ell$, then $mult(C_2,v)<\ell$. If $st$ is a step by a process in $P-P'$, it can only decrease $mult$. If $st$ is an update by a process in $P'$, $st$ may increase by one the number of components of $A$ containing a $t$-tuple with value $v$, but then $st$ will also decrease the number of processes poised to store a $t$-tuple with value $v$ by one, so the value of $mult$ cannot be increased by $st$. Finally, suppose $st$ is an atomic execution of lines \lref{snap-anon}--\lref{change-index-anon}. In $C_1$, fewer than $\ell$ components of $A$ contain $t$-tuples with value $v$ (since $mult(C_1,v)<\ell$). Moreover, by Lemma \ref{choice-available}, there is a value $v'$ such that $t$-tuples with value $v'$ appear in at least $\ell$ components of the scan performed during $st$. Thus, the process performing $st$ adopts some value different from $v$ as its {\it pref}. So, $st$ cannot increase $mult$ for $v$. Thus, $mult(C',v)<\ell$ in every configuration $C'$ reachable from $C$. As argued above, any process in $P'$ that performs a scan after $C$ adopts a value different from $v$.
Thus, eventually, no process will have its {\it pref} equal to $v$, and at that time, $v$ will be in at most $\ell-1$ components of $A$, so Lemma \ref{disappear} ensures that $v$ will eventually cease to be a candidate. \end{proof} Now, consider a configuration $C$ immediately after some process has performed an update (after $\mu$). There are $(m+1)(\ell-1)+1$ components of $A$ and at most $m-1$ processes in $P'$ poised to perform an update. Thus, $\sum\limits_v mult(C,v) \leq (m+1)\ell-1$. Therefore, at most $m$ values $v$ have $mult(C,v) \geq \ell$. All other values will eventually cease to be candidates, by Lemma \ref{disappear2}, so eventually there will be at most $m$ candidates. All processes in $P'$ will then terminate when they next perform a scan, which contradicts our definition of $P'$. Thus, we have shown that every process in $P$ completes infinitely many {\sc Propose} operations. There remains one more thing to show. There may be some processes not in $P$ that take infinitely many steps. (These are processes that starve in the non-blocking implementation of the snapshot object.) We must show that each such process $p$ also completes all of its {\sc Propose} operations. Processes in $P$ write longer and longer sequences to $H$ infinitely often and processes not in $P$ eventually stop writing to $H$. Thus, for all $t$, $p$ will eventually see a sequence in $H$ of length at least $t$, and will then complete its $t$th {\sc Propose} operation. This completes the proof of Theorem \ref{thm:anon-alg}. We remark that for the one-shot case, the register $H$ is not required, so we can solve the one-shot version using one fewer register. \section{Proof of correctness of repeated set agreement} \label{app:non-anon-algorithm} In this section, we prove Theorem~\ref{thm:repeated-alg}. The pseudocode for our repeated $k$-set agreement algorithm appears in Figure~\ref{mconc-rep-kset}.
It essentially follows the pseudocode of the one-shot algorithm (Figure~\ref{two-copies}), with additional ``shortcuts'' that a process may use to adopt a value output previously by another process that has already reached a higher instance of repeated set agreement. Also, a value stored by a process in a lower instance is treated as $\bot$. Thus, a process decides in instance $t$ only if all tuples found in $A$ are stored by processes in instance $t$ and there are at most $m$ distinct tuples, or if another process has reached an instance higher than $t$. The local variable {\it history} initially stores an empty sequence and the local variable $t$ is initially 0. The local variable {\it i} stores the location that the process updates and is initially 0. The values of these three local variables persist from one invocation of {\sc Propose} to the next. In particular, this means that the first location updated during a {\sc Propose} is the same location as the last one updated during the previous {\sc Propose}. A process updates components of the shared snapshot object with tuples of the form $(v,id,t,\textit{history})$, where $v$ is the process's preferred value, $id$ is the identifier of the process, $t$ indicates which instance of set agreement the process is currently working on, and \textit{history} is a sequence of output values for instances of set agreement. We refer to a tuple whose third element is $t$ as a $t$-tuple. To see that the algorithm satisfies {\bf validity}, first observe that when a process invokes {\sc Propose} for the $t$th time, the length of its \textit{history} variable is at least $t-1$. The value in every $t$-tuple in $A$, and thus every value put in the $t$th position of a process's local variable $\textit{history}$, is the input value of some process's $t$th invocation of {\sc Propose}. The following lemma reformulates Lemma~\ref{invOne} for $t$-tuples, showing that $A$ cannot contain more than one distinct $t$-tuple for a given process.
\begin{lemma}\label{invOne:rep} Let $id$ be a process identifier and $t$ be a positive integer. In any reachable configuration, all $t$-tuples with identifier $id$ in $A$ are identical. \end{lemma} \begin{proof} To derive a contradiction, assume that in some reachable configuration $C$, $A[i_1]=(v_1,id,t,his_1)$ and $A[i_2]=(v_2,id,t,his_2)$ such that $(v_1,his_1)\neq (v_2,his_2)$. Let $p_{id}$ be the process with identifier $id$. By the algorithm, $p_{id}$ changes its $\textit{history}$ variable only when it switches to a higher instance of repeated agreement. Thus, $his_1=his_2$ and we must have $v_1\neq v_2$. Let $C$ be reached in some execution at time $\mu$. Let $u_1$ and $u_2$ be the last update steps before $\mu$ in which $p_{id}$ updates $A[i_1]$ and $A[i_2]$, respectively. Without loss of generality, assume that $u_1$ occurred before $u_2$. Then, at some time between $u_1$ and $u_2$, $p_{id}$ changes its {\it pref} variable in instance $t$ (at line~\lref{change-pref-rep}). Consider the first time after $u_1$ when $p_{id}$ performs such a change, and let $i^*$ and $s^*$ be the values of $p_{id}$'s local variables $i$ and $s$ at that time. Since (1) $A[i_1]=(v_1,id,t,his_1)$ at all times between $u_1$ and $\mu$ and (2) $s^*$ is obtained between $u_1$ and $\mu$, $s^*[i_1]$ must be equal to $(v_1,id,t,his_1)$. By the algorithm, $i^*=i_1$; otherwise, the test in line~\lref{cond-rep} would not be satisfied, and $p_{id}$ would not change {\it pref} in line~\lref{change-pref-rep}. Therefore, in the next iteration of the loop, $p_{id}$ will update location $A[i_1]$. This update is after $u_1$ and no later than $u_2$ (and hence before $\mu$), which contradicts the definition of $u_1$ as the last update performed by $p_{id}$ to $A[i_1]$ before~$\mu$. \end{proof} To show {\bf $k$-agreement}, we use arguments similar to those in the proof for the one-shot algorithm. Let $\ell=n-k+m$.
We call a process \emph{$t$-deciding} if it outputs a value at line \lref{output-rep} (i.e., without adopting a value from another process's \textit{history}) during its $t$th invocation of {\sc Propose}. If, for a given instance $t$, at most $n-\ell$ processes are $t$-deciding, then $k$-agreement for instance $t$ is immediate since $n-\ell=k-m<k$. Otherwise, consider an execution in which more than $n-\ell$ processes are $t$-deciding. Order these processes according to the time that they perform their last scan in instance $t$, and let $q_0$ be the $(n-\ell+1)$th process in this ordering. Let $X$ be the set of at most $m$ different tuples that appear in $q_0$'s final scan and $V$ be the set of values in $X$. Then, $|V|\leq |X|\leq m$. We shall show that $q_0$ and all processes that come later in the ordering output values in $V$. Thus, the total number of output values in instance $t$ is at most $(n-\ell) + |V| \leq n-(n-k+m)+m = k$. \begin{lemma}\label{lem:safety:rep} After $q_0$ performs its final scan in instance $t$, only $t$-tuples with values in $V$ can appear twice in $A$. \end{lemma} \begin{proof} This proof is analogous to the proof of Lemma \ref{lem:safety} for the one-shot algorithm. Let $C_0$ be the configuration just after $q_0$'s last scan. We shall show by induction that, in each configuration reachable from $C_0$, only $t$-tuples with values in $V$ can appear in two or more locations of $A$. For the base case, consider the configuration $C_0$. By the definition of $V$, $A$ contains only tuples with values in $V$, so the claim holds. For the induction step, suppose the claim holds in all configurations from $C_0$ to some configuration $C_1$ reachable from $C_0$. Let $st$ be a step that takes the system from $C_1$ to another configuration $C_2$. We must show that the claim holds in configuration $C_2$. We need only consider steps $st$ in which some process $p_{id}$ stores a tuple $(v,id,t,his)$ in $A$.
{\sc Case 1}: $st$ is the first time $p_{id}$ stores a $t$-tuple after $C_0$. If $v\in V$, then $st$ cannot cause a violation of the claim. If $v\notin V$, then $A$ contains exactly one copy of $(v,id,t,his)$ in configuration $C_2$, so again $st$ preserves the claim. {\sc Case 2}: $st$ is not the first time $p_{id}$ stores a $t$-tuple after $C_0$. Let $s_{id}$ be the vector obtained by $p_{id}$'s last scan (at line \lref{snap-rep}) before $st$. Since this scan is not performed in the last iteration of the loop during instance $t$, $s_{id}$ must not satisfy the conditions on line \lref{halt-cond1-rep} or \lref{halt-cond2-rep}. We show that $v\in V$, and hence $st$ preserves the claim, by considering two subcases. {\sc Case 2a}: $s_{id}$ satisfies the condition on line \lref{cond-rep}. Since the condition on line \lref{halt-cond1-rep} is not satisfied and the condition on line \lref{cond-rep} is satisfied, every tuple in $s_{id}$ is a $t$-tuple. Then, $p_{id}$ updates its {\it pref} variable at line \lref{change-pref-rep}, so the value $v$ is the value of a $t$-tuple that appears twice in $s_{id}$. By the induction hypothesis, $v\in V$. {\sc Case 2b}: $s_{id}$ does not satisfy the condition on line \lref{cond-rep}. We call an update after $C_0$ {\it bad} if it stores either a $t'$-tuple with $t'<t$ or a $t$-tuple that is not in $X$. We first argue that each process can do bad updates to at most one location. To derive a contradiction, suppose some process does bad updates to two different locations after $C_0$. Consider the first process $p$ to do a bad update to a second location. Process $p$'s last bad update to one location and its first bad update to the second location must be in the same instance of {\sc Propose}, because $p$ must execute line \lref{change-ind-rep} between them. Let $s_p$ be the vector returned by the scan that $p$ performs at line \lref{snap-rep} during the iteration of the loop when it executes line \lref{change-ind-rep}.
Then, $s_p$ must not satisfy the conditions on line \lref{halt-cond1-rep} or \lref{cond-rep}. Recall that at least $n-\ell+1$ processes have updated $A$ for the last time during instance $t$ prior to $C_0$. So at most $\ell-1$ processes can do bad updates. Since no process has done bad updates to two locations before $p$'s scan obtained the vector $s_p$, and no location of $s_p$ contains a tuple with instance number greater than $t$, at least $r-\ell+1 = m+1$ locations of $s_p$ contain $t$-tuples in $X$. Since $|X|\leq m$, at least two locations of $s_p$ contain the same $t$-tuple. This contradicts the fact that $s_p$ does not satisfy the condition on line \lref{cond-rep}. Thus, each process can do bad updates to at most one location. Hence, at all times after $C_0$, at least $r-(\ell-1) = m+1$ locations have not had any bad updates performed on them. Since $s_{id}$ did not satisfy the condition on line \lref{halt-cond1-rep}, $s_{id}$ must contain at least $m+1$ $t$-tuples in $X$, and therefore $s_{id}$ contains at least two identical $t$-tuples. Moreover, process $q_0$ satisfied the condition on line \lref{halt-cond2-rep} prior to the scan that returned $s_{id}$, so no component of $s_{id}$ contains $\bot$. Thus, the only reason $s_{id}$ does not satisfy the condition on line \lref{cond-rep} must be that for some $j$ different from $p_{id}$'s position $i$, $s_{id}[j]=(v,id,t,his)$. Just before taking the scan that returned $s_{id}$, $p_{id}$ updates location $i$ with $(v,id,t,his)$. This update occurs after $C_0$, since $st$ is not the first update by $p_{id}$ after $C_0$. Thus, in the configuration in which this scan occurs, both $A[i]$ and $A[j]$ contain $(v,id,t,his)$. So, by the induction hypothesis, $v\in V$. \end{proof} Lemma~\ref{lem:safety:rep} implies that all $t$-deciding processes after the $(n-\ell)$th output values in $V$ and, thus, a total of at most $n-\ell+m=k$ values are output by $t$-deciding processes.
The {\bf $k$-agreement} property follows. To prove {\bf $m$-obstruction-freedom}, consider an execution where the set $P$ of processes that take infinitely many steps has size at most $m$. To derive a contradiction, assume that some process in $P$ completes only a finite number of {\sc Propose} operations. Let $t$ be the smallest number such that a process in $P$ does not complete its $t$th {\sc Propose}. Let $P'$ be the set of processes in $P$ that do not complete the $t$th {\sc Propose}. By the algorithm, no process in $P'$ ever witnesses the presence of a process in a higher instance; otherwise, it would output a value decided in instance $t$ at line \lref{halt-rep}. Eventually, processes stop storing tuples with instance numbers $t'<t$ in $A$. Below we reuse the arguments of the proof of Lemma~\ref{term-dontchange} to show that at least one process in $P'$ updates each component of $A$ infinitely often. Recall that each time a process in $P'$ executes the loop in instance $t$, it either keeps its preferred value and increments $i$ (the next location to update) modulo $r$ or changes its preferred value without modifying $i$. Let $NS$ denote the set of processes in $P'$ that increment $i$ infinitely often and let $S$ denote the rest of the processes in $P'$, i.e., the processes that eventually get stuck updating the same location forever. \begin{lemma}\label{term-dontchange:rep} $NS\neq \emptyset$. \end{lemma} \begin{proof} The proof is by contradiction. Assume, to the contrary, that $P'=S$. Let $M$ be the set of at most $m$ locations that processes in $S$ eventually settle on. Note that no process in $P-P'$ can update a location outside of $M$ infinitely often because then the processes in $P'$ would eventually see a tuple with instance number greater than $t$ and complete their $t$th {\sc Propose} operation. Let $\mu$ be a time after which only processes in $P$ take steps and no process updates a location outside of $M$.
Let $NM$ be the set of at least $n+m-k\ge 2$ locations that are never changed after $\mu$. Since all positions in $NM$ that contain tuples of earlier instances are ignored, we simply reuse the arguments of the proof of Lemma~\ref{term-dontchange} to derive a contradiction. \end{proof} By Lemma~\ref{term-dontchange:rep}, (1)~there is a time after which only tuples stored by processes in $P'$ are found in scans performed by processes in $P'$, and all of them are $t$-tuples. By~Lemma~\ref{invOne:rep}, (2)~all $t$-tuples in $A$ of the same process are identical. Finally, (3)~$|P'|\leq|P|\leq m$. (1), (2) and (3) imply that there is a time after which, whenever a process $p\in P'$ performs a scan, it finds at most $m$ different $t$-tuples in the returned vector and, thus, decides, contradicting the definition of $P'$. This completes the proof of the {\bf $m$-obstruction-freedom} property. Thus, we have shown that the algorithm solves repeated $k$-set agreement using a snapshot object with $n+2m-k$ components, which can be implemented using $\min(n,n+2m-k)$ registers, as described in the proof of Theorem \ref{one-shot-alg}. This completes the proof of Theorem \ref{thm:repeated-alg}. \section{Lower Bound for Repeated Set Agreement} \label{sec-repeated-lower} \indent In this section, we prove that solving $m$-obstruction-free repeated $k$-set agreement among $n$ processes requires at least $n+m-k$ registers. Since the proof is technical, we first provide a brief overview. For simplicity, assume for now that $k+1$ is a multiple of $m$. We assume that there is an algorithm that uses fewer than $n+m-k$ registers, and construct an execution in which processes return $k+1$ different values in some instance of set agreement, contradicting the $k$-agreement property.
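To illustrate the counting behind this overview (with illustrative values only), take $k=5$ and $m=2$, so that $k+1=6$ is a multiple of $m$:
\[
c = \frac{k+1}{m} = 3 \qquad \mbox{and} \qquad c\cdot m = 6 = k+1 > k,
\]
i.e., three disjoint groups of two processes each, outputting pairwise disjoint sets of two values in a single instance, already force $k+1$ different outputs.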
The proof first constructs $c=\frac{k+1}{m}$ disjoint sets $Q_1,Q_2, \ldots, Q_{c}$ of $m$ processes each, and an execution $\alpha$ that passes through a sequence of configurations $D_1,D_2, \ldots, D_{c}$ with the following property. For $1\leq i<c$, {\it every possible} execution fragment by the processes in $Q_i$ starting from $D_i$ writes only to registers that are overwritten immediately after $D_i$ in $\alpha$. Moreover, processes in $Q_i$ take no more steps after $D_i$ in $\alpha$. We can then splice into $\alpha$ any execution fragment by processes in $Q_i$ at $D_i$, knowing that the rest of $\alpha$ will not be affected, since all evidence of the inserted steps will be overwritten. For each group $Q_i$, the fragment we splice into $\alpha$ accesses a ``fresh'' instance of set agreement that was never accessed during $\alpha$. (In each fragment that is spliced in, only the $m$ processes in $Q_i$ take steps, so all {\sc Propose} operations terminate and the processes will eventually reach and complete the fresh instance of set agreement.) We ensure that these groups of $m$ processes output disjoint sets of $m$ different values each for this one instance of set agreement, for a total of $c\cdot m = k+1$ different outputs, a contradiction. \begin{theorem} \label{repeated-lower-bound} Any algorithm for $m$-obstruction-free repeated $k$-set agreement among $n$ processes requires at least $n+m-k$ registers. \end{theorem} \begin{proof} To derive a contradiction, assume there exists an algorithm for $m$-obstruction-free repeated $k$-set agreement using $r=n+m-k-1$ registers. Let $c=\ceil{\frac{k+1}{m}}$. Since $k\geq m$, we have $c\geq 2$. We define a {\it block write} to a set $A$ of registers by a set $P$ of processes to be an execution fragment in which each process of $P$ takes a single step, such that the set of registers written during the fragment is $A$. 
We first construct an execution \begin{equation} \label{execution} C_0 \goes{\alpha_1} D_1 \goes{\beta_1} C_1 \goes{\alpha_2} D_2 \goes{\beta_2} C_2 \goes{\alpha_3} \cdots \goes{\beta_{c-1}} C_{c-1} \end{equation} and sets $A_1,\ldots,A_{c-1}$ of registers such that $C_0$ is the initial configuration and for all $j$, \begin{enumerate} \item \label{alphaCD} $\alpha_j$ is an execution fragment containing only steps by two disjoint sets $P_j$ and $Q_j$ of processes that goes from configuration $C_{j-1}$ to configuration $D_j$, \item \label{betaDC} $\beta_j$ is a block write to $A_j$ by $P_j$ that goes from configuration $D_j$ to configuration $C_j$, \item \label{Q1size} $|Q_1| = k+1-(c-1)m$, \item \label{Qjsize} if $j>1$, $|Q_j|=m$, \item \label{PAsize} $|P_j|=|A_j|$, \item \label{Qdisjoint} $Q_j \cap Q_{j'} =\emptyset$ for $j'\neq j$, \item \label{PQdisjoint} $Q_j\cap P_{j'} =\emptyset$ for $j'>j$, and \item \label{no-continuation} there is no execution fragment starting from $D_j$ in which only processes in $Q_j$ take steps and some process writes outside $A_j$. \end{enumerate} {\sc Base case} ($j=0$): Let $C_0$ be the initial configuration. \smallskip {\sc Inductive step}: Let $1\leq j\leq c-1$. Assume we have constructed the execution from $C_0$ to $C_{j-1}$ satisfying all the properties. The algorithm in Figure \ref{construction-alg} constructs the execution fragment $\alpha_j$ and the sets $P_j$, $Q_j$ and $A_j$. Then, let $\beta_j$ be the execution fragment starting from $D_j$ where each process in $P_j$ takes a single step and let $C_j$ be the resulting configuration. (Once a process is added to $P_j$, it takes no further steps in $\alpha_j$, so in $D_j$ it is still poised to write the register of $A_j$ it was about to write when it was added; hence $\beta_j$ is indeed a block write to $A_j$.) 
\begin{figure*} \setcounter{linenum}{0} \begin{code} \stepcounter{linenum}{\scriptsize \arabic{linenum}}\> let $\alpha_j$ be the empty execution fragment\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $D_j \leftarrow C_{j-1}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $P_j\leftarrow \emptyset$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $A_{j}\leftarrow \emptyset$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $j>1$ then $size \leftarrow m$ else $size \leftarrow k+1-(c-1)m$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} let $Q_j$ be a set of $size$ processes disjoint from $Q_1 \cup Q_2 \cup \cdots \cup Q_{j-1}$ \label{choose-proc1}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} loop until no execution fragment starting from $D_j$ by $Q_j$ writes outside $A_j$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} let $\delta$ be an execution fragment starting from $D_j$ by $Q_j$ until some process $q\in Q_j$ is poised for\\\>\hspace*{\value{ind}mm} \> the first time to write to a register that is not in $A_j$ and let $R$ be that register\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\alpha_j \leftarrow \alpha_j \cdot \delta$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} let $D_j$ be the configuration reached from $C_{j-1}$ by performing $\alpha_j$\label{updateDj}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} let $q'$ be some process outside $Q_1\cup Q_2\cup \cdots \cup Q_j \cup P_j$\label{choose-proc2}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $A_j \leftarrow A_j \cup \{R\}$\label{updateAj}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $P_j \leftarrow 
P_j \cup \{q\}$\label{updatePj}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $Q_j \leftarrow (Q_j - \{q\}) \cup \{q'\}$\label{updateQj}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end loop\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} output $\alpha_j,D_j,P_j,Q_j,A_j$ \end{code} \caption{Algorithm used in the proof of Theorem \ref{repeated-lower-bound} to construct $\alpha_j, D_j, P_j, Q_j$ and $A_j$.\label{construction-alg}} \end{figure*} Observe that the construction algorithm terminates: each loop iteration adds a new register to $A_j$, so it terminates after at most $r$ iterations. We next check that the required processes on lines \lref{choose-proc1} and \lref{choose-proc2} exist. When $j=1$, we have $size=k+1-(c-1)m =k+1-\ceil{\frac{k+1}{m}}\cdot m +m\leq m < n$, so one can choose the required processes on line~\lref{choose-proc1}. For $j>1$, one can choose the processes on line~\lref{choose-proc1} because \begin{eqnarray*} |Q_1\cup\cdots\cup Q_{j-1}| &=& k+1-(c-1)m + (j-2)m \\ &&\mbox{(by induction hypothesis \ref{Q1size}, \ref{Qjsize} and \ref{Qdisjoint})}\\ &\leq& k+1 - (c-1)m + (c-3)m\\ && (\mbox{since }j\leq c-1)\\ &=& k+1-2m \leq n-2m\\ &&(\mbox{since }k<n). \end{eqnarray*} Similarly, one can choose the required process $q'$ at line \lref{choose-proc2} because \vspace*{-1mm} \begin{eqnarray*} &&|Q_1\cup \cdots\cup Q_j\cup P_j|\\ &\leq& k+1-2m +|Q_j| + |P_j| \\ &&(\mbox{since } |Q_1\cup\cdots\cup Q_{j-1}|\leq k+1-2m)\\ &\leq & k+1-m + r - 1 \\ &&(\mbox{since $|Q_j|=m$ and }|P_j|=|A_j| \leq r-1)\\ &=& n -1 \\ &&(\mbox{since }r=n+m-k-1). \end{eqnarray*} We verify that the construction satisfies all of the properties. Line \lref{updateDj} of the algorithm updates $D_j$ each time $\alpha_j$ is updated, to ensure property \ref{alphaCD}. Property \ref{betaDC} is true by definition of $\beta_j$ and $C_j$. 
$Q_j$ is initialized to a set whose size satisfies property \ref{Q1size} or \ref{Qjsize} on line \lref{choose-proc1} and the size of this set is preserved whenever $Q_j$ is altered on line \lref{updateQj}. $P_j$ and $A_j$ are initialized to be empty, and both are updated by adding one element to each on lines \lref{updateAj} and \lref{updatePj}, so they remain equal in size after every iteration of the loop, as property \ref{PAsize} requires. (Note that $P_j$ and $Q_j$ are disjoint at the beginning of each iteration of the loop, so line \lref{updatePj} does add a new process to $P_j$.) Every process placed in $Q_j$ at line \lref{choose-proc1} or \lref{updateQj} was chosen to be outside $Q_1\cup \ldots\cup Q_{j-1}$, guaranteeing property \ref{Qdisjoint}. Similarly, processes added to $P_j$ are always outside $Q_1\cup\ldots\cup Q_{j-1}$, and whenever a process is added to $P_j$, it is removed from $Q_j$, so property \ref{PQdisjoint} is satisfied. Finally, property \ref{no-continuation} is guaranteed by the exit condition of the loop. This completes the inductive construction. \smallskip Now, let $s$ be the maximum number of invocations of {\sc Propose} by any process in the execution that takes the system to configuration $C_{c-1}$. Let $Q_c$ be a set of $m$ processes disjoint from $Q_1\cup\cdots \cup Q_{c-1}$. (These $m$ processes exist since $|Q_1\cup \cdots\cup Q_{c-1}| = k+1-m \leq n-m$.) Let $D_c=C_{c-1}$. For each $j\in\{1,\ldots,c\}$, we now construct an execution fragment $\gamma_j$ by the processes in $Q_j$ starting from $D_j$. Since $|Q_j|\leq m$, each {\sc Propose} in $\gamma_j$ must terminate. First, the processes in $Q_j$ run one by one until each completes its first $s$ invocations of {\sc Propose}. Then, the processes of $Q_j$ run their $(s+1)$th invocation of {\sc Propose}, each using its own id as its input value so that they decide $|Q_j|$ different output values. By Lemma~\ref{lem:m-val}, such an execution fragment exists. 
Note that for $j<c$, $\gamma_j$ cannot write outside of $A_j$, by property \ref{no-continuation}. So, all traces of $\gamma_j$'s activity are obliterated by the block write $\beta_j$. Thus, we can insert $\gamma_1,\ldots,\gamma_{c}$ into execution (\ref{execution}) at $D_1,\ldots,D_c$, respectively, and the resulting execution is still legal. In the resulting execution, the number of distinct outputs for the $(s+1)$th instance of set agreement is $\sum\limits_{j=1}^c|Q_j| = k+1$, violating $k$-agreement. This completes the proof. \end{proof} \section{Algorithm for Repeated Set Agreement} \label{linear-space-alg} \subsection{One-shot $k$-set agreement} We first give an algorithm that uses a snapshot object of $r=n+2m-k$ components to solve (one-shot) $m$-obstruction-free $k$-set agreement, and then describe how to extend it to solve repeated set agreement. The one-shot algorithm is shown in Figure~\ref{two-copies}. Roughly speaking, the first $k-m$ processes to decide can output arbitrary values, but we ensure that the last $\ell=n-k+m$ processes all agree on at most $m$ different values (for a total of at most $k$ different values). Each process stores its preferred value in its local variable {\it pref}. Initially, it prefers its own input value. Each process executes a loop in which it stores its {\it pref} and identifier into a component of the snapshot object, takes a scan of the snapshot object and updates its {\it pref} variable based on the scan. The location $i$ that the process updates advances in each iteration of the loop, as long as the process's {\it pref} value remains the same. When the process updates its {\it pref}, it does not advance to the next location: instead it updates the same location during the next iteration of the loop. The process repeats this loop until a scan returns a vector containing at most $m$ different value-id pairs, at which point it returns one of those values. 
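Before giving the pseudocode, the update/scan loop just described can be mocked up as a toy Python simulation (our own sketch, not part of the paper): the snapshot object is modeled as a plain list, a scan as a copy (legitimately atomic here, since the simulation runs processes one step at a time), and the round-robin scheduler, process ids and input values are all illustrative assumptions.

```python
BOT = None  # models the initial value "bottom" of each snapshot component

class Proc:
    """Local state of one process running Propose(v) (sketch)."""
    def __init__(self, pid, v):
        self.id, self.pref, self.i, self.decided = pid, v, 0, None

def first_dup(s):
    """Smallest index j1 that has a later duplicate j2 > j1, or None."""
    dups = [j for j in range(len(s)) if s[j] in s[j + 1:]]
    return min(dups) if dups else None

def step(p, A, m):
    """One iteration of the Propose loop: update component i, then scan."""
    r = len(A)
    A[p.i] = (p.pref, p.id)           # store (pref, id) in component i
    s = list(A)                       # scan (atomic in this sequential model)
    j1 = first_dup(s)
    if BOT not in s and len(set(s)) <= m:
        # at most m distinct pairs over r > m components: a duplicate exists
        p.decided = s[j1][0]
        return
    if all(s[j] not in (BOT, (p.pref, p.id)) for j in range(r) if j != p.i) \
            and j1 is not None:
        p.pref = s[j1][0]             # adopt the value of a duplicated pair
    else:
        p.i = (p.i + 1) % r           # advance to the next component

def run(n, k, m, inputs):
    """Round-robin schedule of m processes; returns their decisions."""
    A = [BOT] * (n + 2 * m - k)       # r = n + 2m - k components
    procs = [Proc(pid, inputs[pid]) for pid in range(m)]
    for _ in range(10 * len(A)):      # round cap, ample for these examples
        for p in procs:
            if p.decided is None:
                step(p, A, m)
    return [p.decided for p in procs]
```

For instance, `run(3, 2, 2, [100, 101, 102])` schedules two processes (so at most $m=2$ take steps); both terminate and output at most $k=2$ distinct input values, matching the termination and agreement arguments developed for the algorithm.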
In each iteration, a process updates its {\it pref} value when it does not see any copies of its current value-id pair anywhere in the vector returned by the scan, except for the component it just updated, {\it and} it does see two copies of some other pair. In this case, it adopts the value of the pair that appears twice as its {\it pref}. The algorithm in Figure \ref{two-copies} is an improvement on the algorithm of \cite{DFGR13}, which was designed for the special case where $m=1$ and uses $2(n-k)$ registers, compared to the $n-k+2$ registers used by ours. \setcounter{linenum}{0} \begin{figure*} \begin{code} \stepcounter{linenum}{\scriptsize \arabic{linenum}}\> Shared variable:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $A$: snapshot object with $r=n+2m-k$ components, each initially $\bot$\\[-1.5mm]\>\hspace*{\value{ind}mm}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} {\sc Propose}($v$)\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} {\it pref} $\leftarrow v$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $i\leftarrow 0$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} loop \label{beginloop}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} update $i$th component of $A$ with $(\mbox{\it pref},id)$\label{write-pref}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $s\leftarrow $ scan of $A$ \label{snap}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $|\{s[j] : 0\leq j < r\}| \leq m$ and $\forall j$, $s[j]\neq\bot$ then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} let $j_1 \leftarrow \min\{j_1 : \exists j_2>j_1 
\mbox{ such that }s[j_1]=s[j_2]\}$, output value in $s[j_1]$ and halt\label{output}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} if $\forall j\neq i, s[j]\notin\{\bot, (\mbox{\it pref},id)\}$ and $\exists j_1\neq j_2$ such that $s[j_1]=s[j_2]$ then \label{cond}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $j_1\leftarrow \min\{j_1 : \exists j_2>j_1 \mbox{ such that }s[j_1]=s[j_2]\}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} {\it pref} $\leftarrow$ value in $s[j_1]$ \label{change-pref}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} else $i\leftarrow (i+1) \mbox{ mod } r \label{change-ind}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end loop\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end {\sc Propose} \end{code} \caption{Algorithm for $m$-obstruction-free $k$-set agreement. Code for a process with identifier $id$.\label{two-copies}} \end{figure*} We now prove that the algorithm in Figure~\ref{two-copies} indeed solves $m$-obstruction-free $k$-set agreement. It is easy to see that {\bf validity} holds: the only values that can appear in the snapshot object or in a process's local {\it pref} variable are input values. Thus, only input values can be produced as outputs. Before proving $k$-agreement and termination, we first establish the following invariant. \begin{lemma}\label{invOne} For each process identifier $id$, all the pairs in $A$ with identifier $id$ have the same value. \end{lemma} \begin{proof} To derive a contradiction, assume there is an execution that reaches a configuration $C$ in which $A[i_1]=(v_1,id)$ and $A[i_2]=(v_2,id)$ where $v_1\neq v_2$. 
Let $p_{id}$ be the process with identifier $id$. Let $u_1$ and $u_2$ be the last steps before $C$ in which $p_{id}$ updates $A[i_1]$ and $A[i_2]$, respectively. Without loss of generality, assume $u_1$ is before $u_2$. Then, between $u_1$ and $u_2$, $p_{id}$ changes its {\it pref} variable at line~\lref{change-pref}. Consider the first time after $u_1$ that $p_{id}$ performs such a change, and let $i^*$ and $s^*$ be the values of $p_{id}$'s local variables $i$ and $s$ at that time. Since $s^*$ was obtained from a scan between $u_1$ and $C$ and $A[i_1]=(v_1,id)$ throughout that interval, $s^*[i_1]$ is $(v_1,id)$. Thus, $i^*=i_1$; otherwise the test on line \lref{cond} would not be satisfied, and $p_{id}$ would not change {\it pref} at line~\lref{change-pref}. Therefore, in the next iteration of the loop, $p_{id}$ will update location $A[i_1]$. This update is after $u_1$ and no later than $u_2$ (and hence before $C$), which contradicts the definition of $u_1$ as the last update performed by $p_{id}$ on $A[i_1]$ before~$C$. \end{proof} To prove {\bf $k$-agreement}, let $\ell=n-k+m$. If at most $n-\ell$ processes decide, then $k$-agreement is trivial since $n-\ell = k-m < k$. So, consider an execution in which more than $n-\ell$ processes decide. Order the processes that decide according to the times when each performs its last scan, and let $q_0$ be the $(n-\ell+1)$th process in this ordering. Let $X$ be the set of at most $m$ different pairs that appear in the vector that $q_0$'s final scan returns. Let $V$ be the set of values that appear in pairs of $X$. Then, $|V|\leq|X|\leq m$. We prove that $q_0$ and all processes that come later in the ordering output values in~$V$. Thus, the total number of values output is at most $(n-\ell) + |V| \leq n-(n-k+m)+m = k$. \begin{lemma}\label{lem:safety} In any configuration after $q_0$ performs its final scan, only pairs with values in $V$ can appear in two or more locations of $A$. 
\end{lemma} \begin{proof} Let $C_0$ be the configuration just after $q_0$'s final scan. We shall show by induction that in each configuration reachable from $C_0$, only pairs with values in $V$ can appear in two or more locations of~$A$. For the base case, consider the configuration $C_0$. By the definition of $V$, $A$ contains only pairs with values in $V$, so the claim holds. For the induction step, suppose the claim holds in all configurations from $C_0$ to some configuration $C_1$ reachable from $C_0$. Let $st$ be a step that takes the system from $C_1$ to another configuration $C_2$. We show that the claim holds in configuration $C_2$. We need only consider the case where $st$ is an update by some process $p_{id}$. Let $(v,id)$ be the pair that $st$ stores in a component of $A$. {\sc Case 1}: $st$ is the first update by $p_{id}$ after $C_0$. If $v\in V$, then $st$ cannot cause a violation of the claim. If $v\notin V$, then $A$ contains exactly one copy of $(v,id)$ in configuration $C_2$, since $(v,id)\notin X$, so again $st$ preserves the claim. {\sc Case 2}: $st$ is not the first update by $p_{id}$ after $C_0$. Let $s_{id}$ be the vector obtained by $p_{id}$'s last scan before $st$. We show that $v\in V$, and hence $st$ preserves the claim, by considering two subcases. {\sc Case 2a}: $s_{id}$ satisfies the condition on line \lref{cond}. Then, $p_{id}$ updates its {\it pref} variable at line \lref{change-pref}, so the value $v$ is the value of a pair that appears twice in $s_{id}$. By the induction hypothesis, $v\in V$. {\sc Case 2b}: $s_{id}$ does not satisfy the condition on line \lref{cond}. We first argue that at least one pair appears twice in $s_{id}$. Recall that there are at most $\ell-1$ undecided processes in $C_0$. 
Since $A$ contains at most $m$ distinct pairs ($|X|\leq m$) in $C_0$ and at most $\ell-1$ processes update $A$ after $C_0$, Lemma~\ref{invOne} implies that, when the scan $s_{id}$ is performed, $A$ contains at most $m+\ell-1=n+2m-k-1$ distinct pairs. Since there are $r=n+2m-k$ locations in $A$, at least one pair appears twice in $s_{id}$. Since $q_0$ has previously output a value, $s_{id}$ contains no $\bot$ elements. Thus, the reason that $s_{id}$ does not satisfy the condition on line \lref{cond} must be that for some $j$ different from $p_{id}$'s position $i$, $s_{id}[j]=(\textit{pref},id)$. Just before taking the scan $s_{id}$, $p_{id}$ stores $(v,id)$ in location $i$. This update occurs after $C_0$, since $st$ is not the first update by $p_{id}$ after $C_0$. In the configuration after this update of location $i$, both $A[j]$ and $A[i]$ contain $(v,id)$. So, by the induction hypothesis, $v\in V$. \end{proof} Lemma~\ref{lem:safety} implies that all processes after the $(n-\ell)$th in the ordering can only decide one of the (at most) $m$ values in $V$ and, thus, {\bf $k$-agreement} is ensured. To prove {\bf $m$-obstruction-freedom}, consider an execution where the set $P$ of processes that take infinitely many steps has size at most $m$. To derive a contradiction, assume some process in $P$ never decides. In each loop iteration, a process either keeps its preferred value and increments $i$ (its location to update) modulo $r$ or sets its preferred value without modifying $i$. We partition $P$ into two subsets: the set $NS$ of ``non-stabilizing'' processes that modify $i$ infinitely often and the set $S$ of ``stabilizing'' processes that eventually get stuck updating the same location $i$ forever. \journalversion{Every process in $NS$ updates each component of $A$ infinitely often, and there is a time after which each process in $S$, whenever it executes the loop, changes its preferred value and stores it in the same location. 
} \begin{lemma}\label{term-dontchange} There is at least one process in $NS$. \end{lemma} \begin{proof} To derive a contradiction, assume the claim is false (i.e., $P=S$). Let $\mu$ be a time after which only processes in $P$ take steps and no process changes its local variable $i$. Then there is a set $M$ of at most $m$ locations whose contents are updated after $\mu$. Let $NM$ be the set of at least $n+m-k\ge 2$ locations that are not updated after $\mu$. Let $\mu'$ be any time when each process in $P$ has performed at least one update after $\mu$. Thus, at $\mu'$, every location in $M$ contains a pair stored by a process in $P$. Let $p$ be a process in $P$ that performs a scan that returns a vector $s_p$ after $\mu'$. By the hypothesis, $p$ changes its preferred value in every iteration after $\mu'$, so $s_p$ satisfies the condition on line \lref{cond}. Process $p$ then changes {\it pref} to a value $v$ in a pair $(v,q)$ that appears twice in $s_p$. Since no process updates more than one component of $M$ after $\mu$, no two components of $M$ can contain the same pair after $\mu'$. We consider two cases. {\sc Case 1:} in $s_p$, $(v,q)$ appears in one component of $M$ and one of $NM$. As $(v,q)$ is read from a component in $M$ after $\mu'$, $p_q\in P$. Consider the time (after $\mu$) at which $p_q$ stores $(v,q)$ in a component in $M$. Since no register in $NM$ ever changes its value after $\mu$, in $p_q$'s subsequent scan, $(v,q)$ is in some register of $NM$ and $p_q$ will not change its preferred value, contradicting the fact that $P=S$. {\sc Case 2:} in $s_p$, $(v,q)$ appears in two components of $NM$. By the definitions of $NM$ and $\mu'$, $(v,q)$ is found twice in $NM$ at all times after $\mu$. As $p$ changes its preferred value after its next update, it must have found another pair that appears twice and was not in $A$ previously. Then this new pair cannot be in two locations in $NM$. 
The pair cannot be in two locations in $M$ either, because no process updates more than one location of $M$ after $\mu$. Thus, this new pair is in one location of $M$ and one location of $NM$. But, as we have seen in Case 1, this leads to a contradiction. \end{proof} Thus, some process in $NS$ updates each component of $A$ infinitely often, yielding the following corollary. \begin{corollary} \label{term-cor} There is a time after which $A$ contains only pairs stored by processes in $P$. \end{corollary} By Corollary~\ref{term-cor}, there is a time $\nu$ after which (1)~$A$ contains only pairs stored by processes in $P$. By Lemma~\ref{invOne}, (2)~all pairs in $A$ with the same id have the same value. By the assumption, (3)~$|P|\leq m$. (1), (2) and (3) imply that after $\nu$, each time a process $p\in P$ performs a scan it finds at most $m$ different pairs in the snapshot and decides. This contradiction establishes the {\bf $m$-obstruction-freedom} property. \begin{theorem} \label{one-shot-alg} For $1\leq m\leq k<n$, there is an $m$-obstruction-free algorithm that solves $k$-set agreement among $n$ processes using $\min(n+2m-k,n)$ registers. \end{theorem} \begin{proof} We established above that the algorithm in Figure~\ref{two-copies} solves the problem using a snapshot object of $n+2m-k$ components. If $n+2m-k\leq n$, the snapshot object can be implemented from $n+2m-k$ registers~\cite{EFR07}. Otherwise, the snapshot can be implemented from $n$ single-writer registers \cite{AAD93,VA86}. \journalversion{If the set of process ids is known a priori, then this completes the proof. Otherwise, the $n$ single-writer registers can be implemented from $n$ multi-writer registers in a non-blocking manner \cite{DFGR15}.} \end{proof} \medskip \subsection{Repeated $k$-set agreement} The one-shot $k$-set agreement algorithm can be transformed into an algorithm for repeated set agreement with the same space complexity to prove the following theorem. 
Since it is quite similar to the one-shot algorithm, we describe it briefly. \setcounter{linenum}{0} \begin{figure*} \begin{code} \stepcounter{linenum}{\scriptsize \arabic{linenum}}\> Shared variable:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $A$: snapshot object with $r=n+2m-k$ components, each initially $\bot$\\[-1.5mm]\>\hspace*{\value{ind}mm}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} Persistent local variables:\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $i\leftarrow 0$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $t \leftarrow 0$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\textit{history} \leftarrow $ empty sequence\\[-1.5mm]\>\hspace*{\value{ind}mm}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} {\sc Propose}($v$)\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $t \leftarrow t+1$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $|\textit{history}|\geq t$ then \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} output the $t$-th value in \textit{history} and halt\label{output-old-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} {\it pref} $\leftarrow v$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} loop \label{beginloop-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} update $i$th component of $A$ with $(\mbox{\it 
pref},id,t,\textit{history})$\label{write-pref-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $s\leftarrow $ scan of $A$ \label{snap-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} if $\exists j$ such that $s[j]=(w,id',t',his)$ with $t' > t$ then \label{halt-cond1-rep} \\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} $\textit{history} \leftarrow his$, output the $t$-th value in $his$ and halt \label{halt-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} if $|\{s[j] : 0\leq j < r\}| \leq m$ and $\forall j$, $s[j]$ is neither $\bot$ nor of the form $(w,q,t',his)$ with $t'<t$ then \label{halt-cond2-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm} let $j_1 \leftarrow \min\{j_1 : \exists j_2>j_1 \mbox{ such that }s[j_1]=s[j_2]\}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} let $w$ be value in $s[j_1]$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} $\textit{history} \leftarrow \textit{history}\cdot w$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} output $w$ and halt\label{output-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} if $\forall j\neq i, s[j]\notin\{\bot, (\mbox{\it pref},id,t,\textit{history})\}$ and $\exists j_1\neq j_2$ such that $s[j_1]$ and $s[j_2]$ contain \\\>\hspace*{\value{ind}mm} \addtocounter{ind}{7}\hspace*{7mm}\n identical $t$-tuples then \label{cond-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} $j_1\leftarrow \min\{j_1 : \exists j_2>j_1 \mbox{ such that $s[j_1]$ and $s[j_2]$ contain identical 
$t$-tuples}\}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} {\it pref} $\leftarrow$ value in $s[j_1]$ \label{change-pref-rep}\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} else $i\leftarrow (i+1) \mbox{ mod } r \label{change-ind-rep}$\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end loop\\\stepcounter{linenum}{\scriptsize \arabic{linenum}}\>\hspace*{\value{ind}mm} \addtocounter{ind}{-7}\hspace*{-7mm} end {\sc Propose} \end{code} \caption{Algorithm for $m$-obstruction-free repeated $k$-set agreement. \label{mconc-rep-kset}} \end{figure*} The pseudocode for our repeated $k$-set agreement algorithm is given in Figure~\ref{mconc-rep-kset}. It essentially follows the pseudocode of the one-shot algorithm (Figure~\ref{two-copies}), with additional ``shortcuts'' which a process may use to adopt a value output previously by another process that has already reached a higher instance of repeated set agreement. Also, a value stored by a process in a lower instance is treated as $\bot$. Thus, a process decides in instance $t$ only if all tuples found in $A$ are stored by processes in instance $t$ and there are at most $m$ distinct tuples, or if another process has reached an instance higher than $t$. Each process $p$ maintains a local variable $\textit{history}$ that stores the sequence of output values that $p$ has produced in the instances it has completed so far. In the current instance $t$, $p$ essentially follows the one-shot algorithm (Figure~\ref{two-copies}), except that it appends the current instance number $t$ and $\textit{history}$ to each value it stores in the shared memory. Thus, each element of the vector returned by a scan of $A$ contains either $\bot$ or a tuple of the form $(v,id,t',his)$. 
If $t'>t$, then $p_{id}$ has already completed instance $t$ and $his$ contains the corresponding output value. If this is the case, $p$ adopts all the values output by $p_{id}$ for instances from $t$ to $t'-1$. If $t'<t$, indicating that $p_{id}$ has not yet reached instance $t$, then that component of $A$ is treated as if it contained $\bot$ in the one-shot algorithm. To prove {\bf $k$-agreement}, we focus on processes that produce their output for instance $t$ without adopting a value from the history that another process stored in $A$. We call these \emph{$t$-deciding processes}. Since every other process that completes its $t$th {\sc Propose} adopts the value output by some $t$-deciding process, it suffices to prove that $t$-deciding processes output at most $k$ different values. As in the proof for the one-shot case, we show that the last $\ell=n-k+m$ $t$-deciding processes output at most $m$ values. There is one complication in the argument: after the $(n-\ell+1)$th $t$-deciding process performs its last scan during instance $t$, processes may store a $t'$-tuple with $t'<t$. We show that each process can do this only in a single location, which ensures the agreement property for instance $t$ is not disrupted. To show {\bf $m$-obstruction-freedom}, consider an execution where the set $P$ of processes that take infinitely many steps has size at most $m$. To derive a contradiction, assume some process in $P$ does not complete a {\sc Propose}. Let $t$ be the smallest number for which some process does not complete its $t$th {\sc Propose} and let $P'$ be the set of processes that do not complete their $t$th {\sc Propose}. Since the processes in $P'$ never witness the presence of a process in a higher instance of set agreement, the argument for the one-shot case can be applied to this set $P'$ to obtain the desired contradiction. A detailed proof of the algorithm can be found in Appendix~\ref{app:non-anon-algorithm}. 
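The instance bookkeeping can be made concrete with a small illustrative helper (our own Python sketch, not the paper's pseudocode) showing how a process in instance $t$ interprets one scanned component of $A$; the tuple layout $(\textit{pref},id,t',his)$ follows Figure~\ref{mconc-rep-kset}, where $his$ lists the writer's outputs for instances $1,\dots,t'-1$.

```python
def interpret(entry, t):
    """Classify one scanned component for a process in instance t (sketch).

    entry is None (bottom) or a tuple (pref, pid, t2, hist), where hist
    holds the writer's outputs for instances 1 .. t2-1.
    """
    if entry is None:
        return ("bottom", None)
    pref, pid, t2, hist = entry
    if t2 > t:
        # The writer has already finished instance t: adopt its t-th output.
        return ("adopt", hist[t - 1])
    if t2 < t:
        # Stale tuple from an earlier instance: treated like bottom.
        return ("bottom", None)
    return ("current", (pref, pid))   # a t-tuple that counts toward deciding
```

For example, a tuple written in instance 5 lets a process still in instance 3 take the shortcut and output the third entry of the attached history.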
\begin{theorem} \label{thm:repeated-alg} For $1\leq m\leq k <n$, there is an $m$-obstruction-free algorithm that solves repeated $k$-set agreement among $n$ processes using $\min(n+2m-k,n)$ registers. \end{theorem} \section{Introduction} \indent Algorithms that allow processes to reach agreement are one of the central concerns of the theory of distributed computing, since some kind of agreement underlies many tasks that require processes to coordinate with one another. In the classical consensus problem, each process begins with an input value, and all processes must agree to output one of those input values. Chaudhuri \cite{Cha93} introduced the $k$-set agreement problem, which generalizes the consensus problem by allowing processes to output up to $k$ different input values in any execution. Consensus is the special case where $k=1$. Set agreement is trivial for $n$ processes if $k\geq n$: each process can simply output its own input value. We consider the $k$-set agreement problem for $k<n$ in an asynchronous system equipped with shared read/write registers. To satisfy {\it wait-free} termination, non-faulty processes must terminate even if an arbitrary number of processes fail. The impossibility of solving wait-free $k$-set agreement using registers was a landmark result proved by three groups of researchers \cite{BG93b,HS99,SZ00}. However, Herlihy, Luchangco and Moir~\cite{HLM03} observed that $k$-set agreement {\it is} solvable (even for $k=1$) under a weaker termination property, known as {\it obstruction-freedom} or {\it solo-termination}, which requires that a process must eventually terminate if it takes enough steps without interruption from other processes. Obstruction-freedom was introduced as a way of separating concerns: obstruction-free algorithms maintain safety properties in all possible executions, but make progress only when one process can run for long enough without encountering contention.
Various scheduling mechanisms designed to reduce contention (such as backing off) can then be used to satisfy this condition. Taubenfeld \cite{Tau09a} introduced the $m$-obstruction-freedom progress property, which requires that, in any execution where at most $m$ processes take infinitely many steps, each process that continues to take steps will eventually terminate successfully. Wait-freedom and obstruction-freedom are special cases, with the extreme values $m=n$ and $m=1$, respectively. Like ordinary obstruction-freedom, $m$-obstruction-free algorithms guarantee safety in all runs. However, $m$-obstruction-freedom provides a stronger progress property: larger values of $m$ require less rigid constraints on the scheduler in order to ensure progress. Since $k$-set agreement has no wait-free solution among $k+1$ processes, it follows that there is no $m$-obstruction-free solution when $m>k$. The converse follows from the work of Yang, Neiger and Gafni \cite{YNG98}: $m$-obstruction-free $k$-set agreement {\it can} be solved if $m\leq k$. In this paper, we study how the number of registers required to solve $m$-obstruction-free $k$-set agreement among $n$ processes depends on the parameters $m,k$ and $n$. Previously, the only non-trivial space lower bound was for the very special case where $m=k=1$. In this case, Fich, Herlihy and Shavit~\cite{FHS98} showed $\Omega(\sqrt{n})$ registers are needed. The best upper bound for this case is the trivial one of $n$ registers, which comes from the fact that $n$ (large) single-writer registers can implement any number of multi-writer registers \cite{VA86}. Closing the gap between the linear upper bound and the $\Omega(\sqrt{n})$ lower bound is a major open problem. Unfortunately, there has been no progress on this gap in the past two decades. We first prove nearly tight linear upper and lower bounds on the number of registers required for {\it repeated} set agreement.
In many applications, such as Herlihy's universal construction \cite{Her91}, there is a sequence of (independent) agreement tasks that must be solved, rather than just one. We define the repeated $k$-set agreement problem to model this situation. Processes access an infinite sequence of instances of $k$-set agreement in order. For all executions and for all $i$, processes accessing the $i$th instance of $k$-set agreement may output at most $k$ of the values that are used as inputs to that instance. We prove that any $m$-obstruction-free solution to repeated $k$-set agreement among $n$ processes requires at least $n+m-k$ registers. We also give a novel algorithm for this task using $\min(n+2m-k,n)$ registers. Previously, the only known set agreement algorithm that uses fewer than $n$ registers was a 1-obstruction-free $k$-set agreement algorithm that uses $2n-2k$ registers \cite{DFGR13}. Our algorithm generalizes that algorithm (to handle any value of $m$) and improves the number of registers used in the case where $m=1$ from $2(n-k)$ to $n-k+2$. For the case where $m=k=1$, our results establish that obstruction-free repeated consensus requires exactly $n$ registers. Thus, the gap between the $\Omega(\sqrt{n})$ lower bound and the $O(n)$ upper bound is closed when we consider the {\it repeated} version of the problem. For the one-shot version of $k$-set agreement, we focus on the restricted case of anonymous systems, where processes do not have unique identifiers and are all programmed identically. We prove that any anonymous algorithm must use more than $\sqrt{m(\frac{n}{k}-2)}$ registers. The $\Omega(\sqrt{n})$ lower bound of Fich, Herlihy and Shavit~\cite{FHS98} (for the anonymous case) is a special case of our result with $m=k=1$, but the new result gives additional insight into the problem by showing a dependence on $m$ and $k$.
Moreover, the technique used in our proof is somewhat different, since it requires building an execution involving many different input values where each process is prevented from learning about any input value different from its own. We also prove that it is possible to solve the problem anonymously. Our algorithm for the repeated version of the problem uses $(m+1)(n-k)+m^2+1$ registers. (The usual construction using $n$ single-writer registers is not applicable, since it presupposes unique identifiers.) Figure \ref{summary} summarizes our results. Our four main results are in boldface; the others are corollaries. \begin{figure*} \begin{center} \begin{tabular}{|c|ll|ll|}\hline & \hspace*{25mm}Repeated&& \hspace*{28mm}One-shot &\\\hline \rule{0pt}{3ex}% Non- & {\bf lower}: $n+m-k$ & (Section~\ref{sec-repeated-lower}) & lower: 2 & \cite{DFGR13}\\ \raisebox{-2ex}{\rule{0pt}{5ex}}% Anonymous & {\bf upper}: $\min(n+2m-k,n)$ & (Section~\ref{linear-space-alg}) & upper: $\min(n+2m-k,n)$ & (Section~\ref{linear-space-alg})\\\hline \rule{0pt}{3ex}% Anonymous & lower: $n+m-k$ & (Section~\ref{sec-repeated-lower}) & {\bf lower}: $\sqrt{m(\frac{n}{k}-2)}$ for $D=\nat$ & (Section~\ref{m-conc-lower-sec}) \\ \raisebox{-2ex}{\rule{0pt}{5ex}} & {\bf upper}: $(m+1)(n-k)+m^2+1$ & (Section~\ref{m-conc-alg-sec}) & upper: $(m+1)(n-k)+m^2$ & (Section~\ref{m-conc-alg-sec})\\\hline \end{tabular} \end{center} \caption{\label{summary}Lower and upper bounds on the number of registers to solve $m$-obstruction-free $k$-set agreement among $n$ processes, where $1\leq m\leq k<n$ and input values are from domain $D$ (with $|D|>k$). Our main results appear in boldface; the others are corollaries.} \end{figure*} \section{Preliminaries} \indent We consider the standard asynchronous shared-memory model, in which $n>1$ processes $p_1,\ldots , p_n$ communicate by applying read and write operations to shared \emph{registers}. 
The registers are \emph{multi-writer} and \emph{multi-reader}, i.e., there are no restrictions on which processes may access which registers. Each process has a local state that consists of the values stored in its local variables and a programme counter. A computation of the system proceeds in {\it steps} performed by the processes. Each step is one of the following: (1)~an invocation of an operation, (2)~a read or write operation on a shared register, (3)~local computation that results in a change of a process's state, or (4)~a response of an operation. Writes update the state of a shared register. Each step may update the local state of the process that performs it. A \emph{configuration} specifies the state of each register and the local state of each process at one moment. In an \emph{initial configuration}, all registers have the initial values specified by the algorithm and all processes are in their initial states. A process is \emph{active} if an operation has been invoked on the process but the operation has not yet produced a matching response; otherwise the process is called \emph{idle}. We assume that an operation can only be invoked on an idle process and only active processes take steps. We focus on deterministic algorithms. Thus, given the current local state of an active process, the algorithm for this process stipulates the unique next \emph{step} the process can perform. An \emph{execution fragment} of an algorithm is a (possibly infinite) sequence of steps starting from some configuration that ``respects'' the algorithm for each process. An \emph{execution} is an execution fragment that starts from the initial configuration. An operation is completed if its invocation is followed by a matching response. In an infinite execution, a process is \emph{correct} if it takes an infinite number of steps or is idle from some point on. Our algorithms make use of multi-writer \emph{snapshot objects}~\cite{AAD93}, which can be implemented from registers.
A snapshot object stores a vector of $r$ values and provides two atomic operations: $\textit{update}(i,v)$ ($i\in\{1,\ldots,r\}$), which writes value $v$ to component $i$, and $\textit{scan}()$, which returns the vector of values most recently written to components $1,\ldots,r$. \medskip \subsection{Set agreement} We begin with a formal definition of the \emph{repeated $k$-set agreement} problem. Processes may perform {\sc Propose}($v$) operations, where $v$ is drawn from an input domain $D$. Each {\sc Propose} operation outputs a response from $D$ when it terminates. For an execution $\alpha$, let $In_i(\alpha)$ be the set of values that are used as the argument to some process's $i$th invocation of {\sc Propose} and let $Out_i(\alpha)$ be the set of values that are the response of some process's $i$th {\sc Propose} operation. Then, in every execution $\alpha$ of an algorithm that solves repeated $k$-set agreement the following properties must hold. \begin{itemize} \item \emph{Validity}: $\forall i$, $Out_i(\alpha)\subseteq In_i(\alpha)$. \item \emph{$k$-Agreement}: $\forall i$, $|Out_i(\alpha)|\leq k$. \end{itemize} An $m$-obstruction-free algorithm must additionally satisfy the following termination condition. \begin{itemize} \item \emph{$m$-Obstruction-Freedom}: in every execution in which at most $m$ processes take infinitely many steps, every correct process completes each of its operations. \end{itemize} The special case when $k=1$ is called \emph{consensus}. In the \emph{(one-shot) $k$-set agreement problem}, every process invokes {\sc Propose} at most once. \ignore{ \begin{theorem}\label{thm:kset-mof} There is no algorithm solving $m$-obstruction-free $k$-set-agreement using registers if $k<m$.
\end{theorem} \begin{proof} The fact that $(k+1)$-process $k$-set agreement cannot be solved~\cite{HS99,BG93b,SZ00} implies that $(k+1)$-obstruction-free (and, thus, $k'$-obstruction-free for $k'\geq k$) cannot be solved: it is sufficient to consider the set of executions in which only the first $k+1$ processes are active. \end{proof} It is then straightforward to derive that: } It is known that wait-free $(k+1)$-process $k$-set agreement cannot be solved using registers~\cite{BG93b,HS99,SZ00}. This implies the following lemma, which we shall use to prove our space lower bounds. \begin{lemma}\label{lem:m-val} Let $A$ be any algorithm that solves $m$-obstruction-free $k$-set agreement using registers. For any set $V$ of $m$ input values and any set $Q$ of $m$ processes, there is an execution of $A$ in which only processes in $Q$ take steps and all values in $V$ are output. \end{lemma} \begin{proof} Suppose the opposite for some sets $V$ and $Q$ and consider all executions of $A$ in which only processes in $Q$ with inputs in $V$ take steps. By the assumption, at most $m-1$ distinct values are decided in each of these executions, which implies a wait-free $m$-process $(m-1)$-set agreement algorithm, violating~\cite{BG93b,HS99,SZ00}. \end{proof} Lemma~\ref{lem:m-val} implies that no algorithm can solve $m$-obstruction-free $k$-set agreement using registers if $k<m$. In the rest of the paper, we derive lower and upper bounds on the space complexity of $m$-obstruction-free $k$-set agreement for $n$ processes, where $m\leq k < n$. (If $k\geq n$, the problem is trivial and no registers are required: each process can simply output its own input value.) \input{non-anon-tr.tex} \input{anon-tr.tex} \ignore{ \section{Related Work} The problem of $k$-set agreement, a generalization of consensus~\cite{FLP85}, was introduced by Chaudhuri~\cite{Cha93}. 
Borowsky and Gafni \cite{BG93b}, Herlihy and Shavit~\cite{HS99}, and Saks and Zaharoglou~\cite{SZ00} showed that no algorithm can solve $k$-set agreement using registers for $k+1$ or more processes, which implies that no $m$-obstruction-free $k$-set agreement algorithm exists for $m>k$. Using the $k$-converge algorithm of Yang et al.~\cite{YNG98}, $k$-obstruction-free $k$-set agreement can be solved using $n$ single-writer registers (one per process). However, little has been known about the space complexity of $k$-set agreement algorithms so far. For the special case of obstruction-free consensus ($m=k=1$), Fich, Herlihy and Shavit~\cite{FHS98} showed $\Omega(\sqrt{n})$ registers are needed, while the only known upper bound for this case remains $n$. Delporte et al.~\cite{DFGR13} proposed an algorithm for obstruction-free $k$-set agreement ($m=1$) that uses $2n-2k$ registers. In this paper, we improve and generalize the algorithm in~\cite{DFGR13} to all $m\leq k$, using $\min\{n+2m-k,n\}$ registers. We also present a \emph{repeated} $m$-obstruction-free $k$-set agreement algorithm with the same space complexity and provide a very close lower bound of $n+m-k$ registers. The technique we used to derive the lower bound is inspired by the result of Burns and Lynch on the space complexity of mutual exclusion~\cite{BL93}. In the case of anonymous algorithms, the only lower bound known so far was $\Omega(\sqrt{n})$ for obstruction-free consensus~\cite{FHS98} and all known upper bounds use $O(n)$ registers. } \section{Concluding Remarks} \indent \journalversion{Could mention that lower bounds apply even for the weaker termination condition of $m$-obstacle-freedom (also defined by Taubenfeld) which says that if at most m processes take infinitely many steps, then some process completes its operation (but starvation of individual processes is allowed).} A small gap remains between the upper and lower bounds for non-anonymous repeated set agreement.
The one-shot algorithm of \cite{DFGR13} uses fewer registers than ours for one special case: when $m=1$ and $k=n-1$, it uses two registers compared to our three. This suggests the upper bound could perhaps be improved to $n+m-k$. The gaps are larger for the other scenarios shown in Figure \ref{summary}. It would be interesting to see if there is an anonymous algorithm that uses linear space, rather than quadratic space. Another natural continuation of this work would be to extend the one-shot anonymous lower bound to the non-anonymous setting. However, closing the gap for the one-shot setting eludes us still.
\end{document}
\section{Introduction} From genes and proteins that govern our cellular function, to our everyday use of the Internet, Nature and our lives are surrounded by interconnected systems \cite{barabasi2016network}. Network science aims to study these complex networks, and provides a powerful framework to understand their structure, function, dynamics, and growth. Studies in network science typically have a substantial computational component, borrowing tools from graph theory to extract relevant information about the underlying system. With the advent of quantum computation, a natural question to ask is which problems in network science can be explored with this new computing paradigm, and what benefits it can yield. This question can be interpreted in at least two different ways. First, there is a large body of work in quantum algorithms for graph theoretical problems, some examples being Refs. \cite{durr2006quantum, ambainis2006quantum, chakraborty2016spatial, chakraborty2017optimal}, which may have their own applications in network science problems. However, network science algorithms often look for specific patterns or organizing principles based on empirical observations from the real underlying systems, which may warrant the development of problem-specific quantum algorithms. This constitutes a novel research direction, different from the development of more general graph-theoretical algorithms. Previous connections have been made between quantum phenomena and complex networks, both by using quantum tools to study complex networks \cite{tsomokos2011quantum, sanchez2012quantum, faccin2013degree, mukai2020discrete} and by using complex network tools to study quantum systems \cite{faccin2014community}. Nevertheless, to our knowledge, potential quantum speedups for network science problems have not been addressed. 
In this work, we propose a quantum algorithm for the problem of link prediction in complex networks based on Continuous-Time Quantum Walks (CTQW) \cite{farhi1998quantum, kempe2003quantum}, and discuss potential quantum speedups over classical algorithms. The objective in link prediction is to identify unknown connections in a network \cite{Liben:2007, wang2015link, albert2004conserved, getoor2005link, Lu:2011}. For example, in social networks, we aim to predict which individuals will develop shared friendships, professional relations, exchange of goods and services or others \cite{Liben:2007, wang2015link}. In biological networks, the main focus is the issue of data incompleteness, which hinders our understanding of complex biological function. For example, in protein-protein interaction (PPI) networks link prediction methods have already proven to be a valuable tool in mapping out the large amount of missing data \cite{kovacs2019network, luck2020interactome}. While there are many approaches to the problem of link prediction, such as using machine learning techniques \cite{al2006link} or studying global perturbations \cite{lu2015toward}, other methods focus on simple topological features like paths of different length between nodes, which we describe next. \subsection{Classical Link Prediction} \label{sec:clp} Network-based link prediction methods take as input a graph $G(\mathcal{V},\,\mathcal{E})$, where $\mathcal{V}$ is the set of nodes with size $N=|\mathcal{V}|$ and $\mathcal{E}$ is the set of undirected links, and output a matrix of predictions $P\in\mathbf{R}^{N\times N}$ where each entry $p_{ij}$ is a score value quantifying the likelihood of a link existing between nodes $i$ and $j$ (see Figure \ref{fig:1}). Each method computes $P$ differently, depending on the assumptions made about the network and its emergent topological features. 
Most methods are based on the Triadic Closure Principle (TCP), assuming that two nodes are more likely to connect the more similar they are \cite{Lu:2011, kovacs2019network}. Similarity is often quantified based on the number of shared connections, i.e., paths of length two between two nodes, or in general of even length. It has been shown that, despite its dominant use in biological networks, the TCP approach is not valid for most protein pairs \cite{kovacs2019network}. Instead, in \cite{kovacs2019network}, a link prediction method (L3) is proposed without the assumption that node similarity correlates with connectivity. L3 is based on the assumption that a candidate partner is similar to the existing partners of a node, $P=AS$, as illustrated in Figure \ref{fig:1} c). These results \cite{kovacs2019network}, and follow-up studies \cite{Cannistraci:2018, pech2019link, kitsak2020latent}, show that the L3 method significantly outperforms other link prediction methods, for example, in some complementarity driven networks. \begin{figure}[h!] \centering \includegraphics[width=0.8\columnwidth]{Figures/fig1.pdf} \caption{\footnotesize\textbf{Classical network-based link prediction.} \textbf{a)} Given a complex network described by a graph with a corresponding adjacency matrix $A$, \textbf{b)} one can predict new links by associating a prediction value $p_{ij}$, or \textit{score}, to every pair of nodes \{i, j\}, such that a higher value $p_{ij}$ correlates to a higher probability of the link \{i, j\} appearing. \textbf{c)} Predictions based on TCP rely on similarity (matrix $S$) between nodes, quantified in the simplest case as $P\sim S\sim A^2$, counting paths of length 2 between pairs of nodes. As an alternative, proteins often connect to others that are similar to their neighbours, but not necessarily similar to themselves \cite{kovacs2019network}. 
In the simplest case, the authors in \cite{kovacs2019network} quantify this principle by taking $P\sim AS\sim A^3$, counting paths of length 3 between nodes. A possible extension of these principles is to quantify direct similarity with even powers of $A$ and neighbour similarity with odd powers of $A$. \textbf{d)} Most classical link prediction methods output the full matrix $P$, often dense, organized in a ranked list of scores from highest to lowest, where the relevant top $l$ predictions are those where the precision is above a user-determined threshold $\delta$, contributing to the final inferred network (\textbf{e)}). The need to calculate all pairwise scores constitutes a computational burden which can become intractable as we scale the methods to larger networks.} \label{fig:1} \end{figure} Our quantum approach takes inspiration from both these paradigms, utilizing even-length (TCP) and odd-length (L3-like) paths. One of the main reasons why link prediction may prove suitable to be tackled with a quantum computer is the realisation that in practice we are not interested in knowing the scores of all pairs of nodes, but we simply wish to know which ones have the highest score up to a certain cut-off threshold, as illustrated in Figure \ref{fig:1} d) and e). By encoding the prediction scores in the amplitudes of a quantum superposition and performing quantum measurements on the system, the predictions with the highest score will be naturally sampled with higher probability, which can potentially be advantageous compared to the classical case of explicitly computing all scores. We proceed now in Section \ref{sec:quantum} with the description of the quantum method and discuss in Section \ref{sec:results} the comparison with classical path-based methods both in terms of prediction precision and resource complexity. 
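The simplest path-counting scores described above can be sketched in a few lines (a minimal illustration with NumPy, not the degree-normalized L3 score of \cite{kovacs2019network}; existing links and self-pairs are masked out, as in Figure \ref{fig:1} d)):

```python
import numpy as np

def path_scores(A, length):
    """Score every node pair by the number of paths of the given length:
    length 2 is the simplest TCP-style similarity, length 3 the simplest
    L3-style score.  Existing links and the diagonal are masked out."""
    P = np.linalg.matrix_power(A, length).astype(float)
    np.fill_diagonal(P, 0.0)       # ignore self-predictions
    P[A > 0] = 0.0                 # ignore links that already exist
    return P

# Toy path graph 0-1-2-3: length-3 paths predict the link (0, 3),
# length-2 paths predict (0, 2) and (1, 3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
P2 = path_scores(A, 2)
P3 = path_scores(A, 3)
```

Ranking the nonzero entries of $P$ from highest to lowest then yields the candidate list of Figure \ref{fig:1} d).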
\section{Quantum Link Prediction} \label{sec:quantum} We now describe our method for quantum link prediction, denoted as QLP, which we summarize at the end. We base our approach on a Continuous-Time Quantum Walk (CTQW) \cite{farhi1998quantum, kempe2003quantum}, where the Hilbert space of the quantum walker is defined by the orthonormal basis set $\{\ket{j}\}_{j\in\mathcal V}$, with each $\ket{j}$ corresponding to a localized state at a node $j$. For simplicity we consider the Hamiltonian of the evolution as the adjacency matrix of the graph, $H=A$, but later extend this selection to include a degree normalization. In Figure \ref{fig:2} we show the main structure of the QLP circuit using a qubit representation. In the simplest case, we require $n=\log_2{N}$ qubits to add a binary label to each of the $N$ nodes, hereafter marked by the subscript $n$, and we consider an extra ancilla qubit $q_a$ that doubles the Hilbert space of the quantum walk, such that any node $j$ has two associated basis states $\ket{0}_a\ket{j}_n$ and $\ket{1}_a\ket{j}_n$. For an initial state $\ket{\psi_j(0)}=\ket{0}_a\ket{j}_n$, the first step in the circuit of Figure \ref{fig:2} is to apply an Hadamard gate (Figure \ref{fig:2} c) to $q_a$, which creates the superposition $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})_a\ket{j}_n$. A conditional CTQW is then applied which evolves the $q_a=\ket{0}$ subspace with $e^{-iAt}$ and the $q_a=\ket{1}$ subspace with $e^{+iAt}$. Finally, a second Hadamard gate is applied to $q_a$ to interfere the two quantum walks in the computational basis, leading to the state \begin{equation} \ket{\psi_j(t)}=\ket{0}_a\left(\frac{e^{-iAt}+e^{iAt}}{2}\right)\ket{j}_n + \ket{1}_a\left(\frac{e^{-iAt}-e^{iAt}}{2}\right)\ket{j}_n. 
\label{eq:qlpstate0} \end{equation} To make the connection with link prediction more evident, it is useful to rewrite the previous expression as \begin{equation} \ket{\psi_j(t)}=\ket{0}_a\left(\sum_{k=0}^{+\infty}c_\text{even}(k, t)\,A^{2k}\right)\ket{j}_n + i\ket{1}_a\left(\sum_{k=0}^{+\infty}c_\text{odd}(k, t)\,A^{2k+1}\right)\ket{j}_n, \label{eq:qlpstate} \end{equation} where we have replaced the exponential terms with their respective power series, and defined the time-dependent coefficients as $c_\text{even}(k, t)=(-1)^kt^{2k}/(2k)!$ and $c_\text{odd}(k, t)=(-1)^{k+1}t^{2k+1}/(2k+1)!$. A detailed calculation leading to Eq. \ref{eq:qlpstate} can be found in Supplementary Note 1. Given some initial node $j$, Eq. \ref{eq:qlpstate} describes the state that is created following the QLP evolution. This state has two entangled components, one with a linear combination of even powers of $A$ for $q_a=\ket{0}$, and another with odd powers of $A$ for $q_a=\ket{1}$. The time $t$ of the quantum walk defines the linear weights, and acts as a hyperparameter in the model. This describes the unitary part of the protocol. To obtain relevant predictions from this state we must perform repeated measurements on the system to draw multiple samples, as we now describe. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Figures/fig2.pdf} \caption{\footnotesize\textbf{Quantum link prediction (QLP).} \textbf{a)} A set of $n=\log_2(N)$ qubits creates a Hilbert space that encodes each of the $N$ nodes as a basis state. With an extra ancilla qubit $q_a$ in a superposition of $\ket{0}$ and $\ket{1}$, created by an Hadamard gate (\textbf{c)}), and an initial state marking a node $j$, the $q_a=\ket{0}$ subspace is evolved with a $e^{-iAt}$ quantum walk, and the $q_a=\ket{1}$ subspace is evolved with the conjugate $e^{+iAt}$ quantum walk (\textbf{c)}). 
A second Hadamard gate applied to the ancilla qubit mixes the two subspaces together and creates an interference between the two quantum walks. \textbf{b)} Finally, by measuring $q_a$ the state of the network collapses to one of two possible cases, imposing either a sum or subtraction of the two conjugate evolutions, which encodes even powers of $A$ (even predictions) for $q_a=\ket{0}$ and odd powers of $A$ (odd predictions) for $q_a=\ket{1}$. The measurement of the remaining $n$ qubits returns a bit string marking a certain node $i$, which together with the initial node $j$ forms a sample of a link $(i,\,j)$.} \label{fig:2} \end{figure} The first step is to measure $q_a$, yielding $\ket{0}$ or $\ket{1}$ and collapsing the state of the remaining qubits to $\ket{\psi_{j}(t)}_n^\text{even}\propto\left(\sum_{k}c_\text{even}(k, t)\,A^{2k}\right)\ket{j}$ or $\ket{\psi_{j}(t)}_n^\text{odd}\propto\left(\sum_{k}c_\text{odd}(k, t)\,A^{2k+1}\right)\ket{j}$, respectively, where we omitted the normalization. This effectively selects whether the link sampled will be drawn from a distribution encoding even or odd powers of $A$. The last step is then to measure the remaining qubits, yielding a bit string corresponding to a sample of some node $i$ with probability \begin{equation} p_{ij}^\text{even}\propto\left|\mel{i}{\left(\sum_{k=0}^{+\infty}c_\text{even}(k, t)\,A^{2k}\right)}{j}\right|^2\quad\text{or}\quad p_{ij}^\text{odd}\propto\left|\mel{i}{\left(\sum_{k=0}^{+\infty}c_\text{odd}(k, t)\,A^{2k+1}\right)}{j}\right|^2,\label{eq:qlpscores} \end{equation} which together with the initial node $j$ forms a sample of a link $(i,\,j)$. The values $p_{ij}^\text{even}$ and $p_{ij}^\text{odd}$ encode the prediction scores of the link $(i,\,j)$, but these can not be directly extracted from the algorithm. 
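Since the bracketed series are the Taylor expansions of cosine and sine, the two distributions of Eq. \ref{eq:qlpscores} admit compact closed forms (a direct resummation, using the coefficients $c_\text{even}$ and $c_\text{odd}$ defined above):

```latex
\begin{equation*}
\sum_{k=0}^{+\infty}c_\text{even}(k, t)\,A^{2k}=\cos(At),
\qquad
\sum_{k=0}^{+\infty}c_\text{odd}(k, t)\,A^{2k+1}=-\sin(At),
\end{equation*}
```

so that $p_{ij}^\text{even}\propto\left|\mel{i}{\cos(At)}{j}\right|^2$ and $p_{ij}^\text{odd}\propto\left|\mel{i}{\sin(At)}{j}\right|^2$.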
Instead, what this algorithm allows is the repeated sampling of these distributions, yielding pairs of nodes $(i,\,j)$ with probability proportional to $p_{ij}^\text{even}$ or $p_{ij}^\text{odd}$. This is analogous to sampling entries $(i,\,j)$ from the matrix of prediction scores $P$ with probability proportional to $|P_{ij}|^2$. As discussed in Section \ref{sec:clp}, predictions coming from even or odd powers of $A$ are typically useful in different types of networks. For a given network application of QLP, whether each sample obtained corresponds to an even or odd prediction depends on the value measured in the ancilla qubit, and this postselection can only be done probabilistically \cite{kothari2014efficient}. This is a potential sampling overhead, as unwanted predictions need to be discarded. Another overhead to consider is the possibility of sampling the initial node, or of sampling already existing links, given the contribution of the identity $I$ in $p_{ij}^\text{even}$ and $A$ in $p_{ij}^\text{odd}$, which must also be discarded. As stated, QLP uses a linear combination of powers of $A$ weighted by the time $t$. A classical prediction method with a linear combination of odd powers of $A$ was presented in \cite{pech2019link}, which was shown to sometimes improve the prediction precision compared to the original L3 method from \cite{kovacs2019network} by also fitting an additional model parameter. Another popular link prediction method is the Katz index \cite{katz1953new}, which uses a linear combination of all powers of $A$. We can now summarize the QLP algorithm. Firstly, an initial state $\ket{\psi_j(0)}=\ket{0}_a\ket{j}_n$ is prepared for a node $j$ in the network. Secondly, the QLP evolution leading to Eq. \ref{eq:qlpstate} is performed for a specific time $t$. Finally, the ancilla and node qubits are measured to obtain a sample of a link $(i,j)$ corresponding to an even or odd prediction, and the procedure is repeated.
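As a classical cross-check of the distributions being sampled (our own validation sketch, not the quantum circuit; it diagonalizes the symmetric $A$, which is only feasible for small networks), the even and odd series of Eq. \ref{eq:qlpscores} resum to $\cos(At)$ and $\mp\sin(At)$, giving:

```python
import numpy as np

def qlp_distributions(A, j, t):
    """Exact QLP sampling distributions for seed node j:
    p_even ~ |<i|cos(At)|j>|^2 and p_odd ~ |<i|sin(At)|j>|^2,
    computed via the eigendecomposition of the symmetric matrix A."""
    w, V = np.linalg.eigh(A)                 # A = V diag(w) V^T
    cos_At = (V * np.cos(w * t)) @ V.T       # matrix function cos(At)
    sin_At = (V * np.sin(w * t)) @ V.T       # matrix function sin(At)
    p_even = np.abs(cos_At[:, j]) ** 2
    p_odd = np.abs(sin_At[:, j]) ** 2
    return p_even / p_even.sum(), p_odd / p_odd.sum()

# Toy path graph 0-1-2-3, seeded at node 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p_even, p_odd = qlp_distributions(A, j=0, t=1.0)
```

Samples corresponding to the seed node itself or to already existing links would then be discarded, as discussed above.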
The number of samples that output a certain link $(i,j)$ will follow the distributions described by Eq. \ref{eq:qlpscores}, and thus represents a score for link $(i,j)$. Once predictions associated with node $j$ are sufficiently characterized, the procedure can be repeated for other relevant nodes in the network. \section{Results and Discussion} \label{sec:results} \subsection{Complexity analysis} To identify a potential quantum advantage, we briefly discuss how link prediction scales on a classical computer. Complex networks are typically sparse \cite{barabasi2016network} with the average degree $k_\mathrm{av}\ll N$, and thus there are $\mathcal{O}(N^2)$ potentially missing links. Hence, the general case of computing all possible scores leads to a classical complexity of at least $\mathcal{O}(N^2)$. Different methods scale differently depending on the assumptions made about the solution. For example, the scaling of simple length-2 based methods is $\mathcal{O}(N\langle k^2\rangle)$ and the scaling of L3 \cite{kovacs2019network} is upper bounded by $\mathcal{O}(N\langle k^3\rangle)$, where $\langle k^n\rangle$ is the average of the $n$-th power of the degrees. For certain realistic values of $\gamma$, the exponent in the power-law degree distribution of a scale-free network, the moments $\langle k^2\rangle$ and $\langle k^3\rangle$ diverge with growing $N$, as we estimate in Supplementary Note 2. These methods do not calculate a score for every possible missing link, only for those corresponding to nodes at distance 2 or 3. However, other methods also exceed the $\mathcal{O}(N^2)$ scaling, as is the case of LO \cite{pech2019link}, which uses a matrix inversion to represent a linear combination of odd powers of $A$, and is one of the best performing classical methods tested.
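For concreteness, a standard back-of-the-envelope estimate (our sketch, assuming the natural cutoff $k_\text{max}\sim N^{1/(\gamma-1)}$; the paper's own estimate is in Supplementary Note 2) shows how the second moment diverges for a scale-free degree distribution $p(k)\sim k^{-\gamma}$:

```latex
\begin{equation*}
\langle k^2\rangle \sim \int^{k_\text{max}} k^{2}\,k^{-\gamma}\,dk
\sim k_\text{max}^{\,3-\gamma}
\sim N^{\frac{3-\gamma}{\gamma-1}}
\qquad (2<\gamma<3),
\end{equation*}
```

so that, e.g., $\gamma=2.5$ gives $\langle k^2\rangle\sim N^{1/3}$ and a total cost $\mathcal{O}(N\langle k^2\rangle)\sim N^{4/3}$, superlinear already for length-2 based methods.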
Complex networks can easily reach sizes of up to millions or billions of nodes, consider for example online social and e-commerce networks, or the neuronal network in the human brain \cite{azevedo2009equal}. Improving these scalings may thus be decisive in the application of link prediction methods to larger networks in the future. In order to estimate the complexity of QLP, we must estimate the number of samples required for a given network. It is often reasonable to assume the number of missing links at any given node $j$ will be proportional to its observed degree $k_j$, which happens for instance when the missing links are removed randomly from the network. Then, each initial node $j$ will require a number of repetitions of QLP proportional to $k_j$ to sufficiently characterize the predictions associated with node $j$. This leads to $\mathcal{O}(Nk_\mathrm{av})$ total samples for a network of $N$ nodes and average degree $k_\mathrm{av}$. To provide a more detailed estimate we must analyse the cost of implementing the unitary $e^{-iHt}$ on a quantum computer, representing the CTQW used to obtain each sample. For this we can look at results from quantum simulation using a $d-$sparse Hamiltonian model, meaning that $H$ has at most $d$ entries in any given row. A state of the art result \cite{low2017optimal} shows that implementing $e^{-iHt}$ scales as $\mathcal{O}\left(d\,t\|H\|_\text{max} + \frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\right)$, where $t$ is the time interval of the evolution, $\epsilon$ is the allowed error, and $\|H\|_\text{max}$ is the maximum entry in absolute value. In our case, $d=k_\text{max}$, the maximum degree of the network, and $\|H\|_\text{max}=1$ for $H=A$, which allows us to write the complexity of implementing $e^{-iHt}$ as $\mathcal{\tilde{O}}(k_\text{max}t)$, omitting logarithmic factors. We can thus write a direct complexity estimate for QLP as $\mathcal{\tilde{O}}(Nk_\text{av}k_\text{max}t)$. 
The most meaningful complexity comparison we can make is between methods that make similar assumptions. In that sense, both QLP and LO assume the solution is a linear combination of powers of the adjacency matrix, and as we will see in the next section, these methods are often the best performing. Here, we can see that QLP has a potential quantum speedup given the polynomially lower dependence on $N$ but with an extra $k_\text{av}k_\text{max}$ factor. Comparing QLP to simple length-2 and length-3 based methods is less straightforward, as the difference is solely based on the degree factors. As mentioned earlier, while the moments $\langle k^2\rangle$ and $\langle k^3\rangle$ diverge with growing $N$, the average degree remains finite. This hints at a potential quantum speedup, but it is not clear if the dependence on $k_\text{max}$ will spoil the difference. We note that the dependence on $k_\text{max}$ comes from assuming a circuit-based simulation of the quantum walk. However, our method is general and can also admit an analog quantum walk implementation, which would require a different complexity analysis. \subsection{Cross-validation tests} In Figure \ref{fig:3} we compare the prediction precision of QLP with classical link prediction methods using the standard link prediction benchmark of cross-validation on a selection of networks from different applications. Here we used a degree-normalized adjacency matrix as the Hamiltonian for QLP, $H=D^{-\frac{1}{4}}AD^{-\frac{1}{4}}$, with the final predictions mapped back to $A$ as $P=D^{\frac{1}{4}}\tilde{P}D^{\frac{1}{4}}$, where $D$ is a diagonal matrix with each entry $k_i$ being the degree of node $i$. This penalizes the counting of paths that go through hubs in the network \cite{barabasi1999emergence}, which, for the purpose of link prediction, can introduce superfluous shortcuts in the network \cite{kovacs2019network}. The scores used for QLP were an exact calculation of the distributions in Eq. 
\ref{eq:qlpscores} by computing the full evolution of the quantum walkers. For each network, we selected the time $t$ that maximizes the prediction precision in the first 10\% of the plotted ranks. As shown in Figure \ref{fig:3}, we can conclude that QLP matches the best performing classical link prediction methods tested in terms of prediction precision for a range of real life complex networks \cite{luck2020interactome, biogrid, messel, hamsterster, facebook, wikivote}. In most cases, we observe that both QLP-Odd and LO stand out as the best performing methods, a result which further affirms the case that there can be advantages in including higher order powers of the adjacency matrix in the predictions \cite{pech2019link}. In some cases, QLP-Odd has a slight advantage over LO. We highlight the results for the Facebook dataset as the only case where even power methods stand out (RA-L2 and QLP-Even), although matched by LO, and also the only case where there is a clear difference between QLP-Odd and LO. While QLP-Odd and LO, when expressed as a power series, have a similar form, the predictions they produce are in fact different, as shown in the Supplementary Note 3 and Table 3. Further results for the cross-validation benchmark are shown in Supplementary Figure 1, as well as detailed results for each of the experimental screens that contribute to the full HI-III-20 network in Supplementary Figure 2. Here, we predict interactions that have been obtained by independent, full experimental screens, simulating the case of real life performance against future experiments. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Figures/fig3.pdf} \caption{\footnotesize\textbf{Computational cross-validation.} Cumulative precision over the top 5\% ranked predictions out of $Nk_\text{av}$ scores for each network, averaged over a 10-fold cross validation procedure. The shaded regions correspond to the standard deviation. 
In each trial 10\% of the links were randomly removed and the remainder used as input to the link prediction methods. The networks used correspond to the PPI networks HI-III-20, the most recent PPI mapping of the human interactome \cite{luck2020interactome}, Yeast-Bio, a PPI network of a yeast organism \cite{biogrid}, Messel, a food web \cite{messel}, Hamsterster \cite{hamsterster} and Facebook \cite{facebook}, two online social networks, and Wiki-Vote, a vote network between users for adminship of Wikipedia \cite{wikivote}. For comparison, we implemented five classical link prediction methods: the L3 method \cite{kovacs2019network}, the LO method \cite{pech2019link}, the CH-L3 method \cite{Cannistraci:2018}, and two even power methods, RA-L2 (resource allocation) \cite{Zhou:2009}, and CH-L2 \cite{cannistraci2013link, Cannistraci:2018}. The dataset parameters characterizing each network are shown in Supplementary Table 1, and the values selected for the optimal parameters $t$ in the QLP method and $\alpha$ in the LO method are shown in Supplementary Table 2.} \label{fig:3} \end{figure} \section{Conclusions} To the best of our knowledge, QLP is the first quantum algorithm for link prediction in complex networks, and the first to potentially offer a quantum speedup for a practical network science problem. Furthermore, the inclusion of even and odd paths allows QLP to make both TCP-like and L3-like predictions, thus encompassing all types of networks where these topological patterns play a role. Our results serve as a proof of principle for potential future applications of QLP in large complex networks using quantum hardware. Recently, a 62-node network CTQW was demonstrated experimentally \cite{gong2021quantum}, an important first step towards this goal. 
\section{Introduction} From genes and proteins that govern our cellular function, to our everyday use of the Internet, Nature and our lives are surrounded by interconnected systems \cite{barabasi2016network}. Network science aims to study these complex networks, and provides a powerful framework to understand their structure, function, dynamics, and growth. Studies in network science typically have a substantial computational component, borrowing tools from graph theory to extract relevant information about the underlying system. 
With the advent of quantum computation, a natural question to ask is which problems in network science can be explored with this new computing paradigm, and what benefits it can yield. This question can be interpreted in at least two different ways. First, there is a large body of work in quantum algorithms for graph-theoretical problems, some examples being Refs. \cite{durr2006quantum, ambainis2006quantum, chakraborty2016spatial, chakraborty2017optimal}, which may have their own applications in network science problems. Second, network science algorithms often look for specific patterns or organizing principles based on empirical observations from the real underlying systems, which may warrant the development of problem-specific quantum algorithms. This constitutes a novel research direction, different from the development of more general graph-theoretical algorithms. Previous connections have been made between quantum phenomena and complex networks, both by using quantum tools to study complex networks \cite{tsomokos2011quantum, sanchez2012quantum, faccin2013degree, mukai2020discrete} and by using complex network tools to study quantum systems \cite{faccin2014community}. Nevertheless, to our knowledge, potential quantum speedups for network science problems have not been addressed. In this work, we propose a quantum algorithm for the problem of link prediction in complex networks based on Continuous-Time Quantum Walks (CTQW) \cite{farhi1998quantum, kempe2003quantum}, and discuss potential quantum speedups over classical algorithms. The objective in link prediction is to identify unknown connections in a network \cite{Liben:2007, wang2015link, albert2004conserved, getoor2005link, Lu:2011}. For example, in social networks, we aim to predict which individuals will develop friendships, professional relations, exchanges of goods and services, or other ties \cite{Liben:2007, wang2015link}. 
In biological networks, the main focus is the issue of data incompleteness, which hinders our understanding of complex biological function. For example, in protein-protein interaction (PPI) networks, link prediction methods have already proven to be a valuable tool in mapping out the large amount of missing data \cite{kovacs2019network, luck2020interactome}. While there are many approaches to the problem of link prediction, such as using machine learning techniques \cite{al2006link} or studying global perturbations \cite{lu2015toward}, other methods focus on simple topological features such as paths of different lengths between nodes, which we describe next. \subsection{Classical Link Prediction} \label{sec:clp} Network-based link prediction methods take as input a graph $G(\mathcal{V},\,\mathcal{E})$, where $\mathcal{V}$ is the set of nodes with size $N=|\mathcal{V}|$ and $\mathcal{E}$ is the set of undirected links, and output a matrix of predictions $P\in\mathbf{R}^{N\times N}$ where each entry $p_{ij}$ is a score value quantifying the likelihood of a link existing between nodes $i$ and $j$ (see Figure \ref{fig:1}). Each method computes $P$ differently, depending on the assumptions made about the network and its emergent topological features. Most methods are based on the Triadic Closure Principle (TCP), assuming that two nodes are more likely to connect the more similar they are \cite{Lu:2011, kovacs2019network}. Similarity is often quantified based on the number of shared connections, i.e., paths of length two between two nodes, or in general of even length. It has been shown that, despite its dominant use in biological networks, the TCP approach is not valid for most protein pairs \cite{kovacs2019network}. Instead, in \cite{kovacs2019network}, a link prediction method (L3) is proposed without the assumption that node similarity correlates with connectivity. 
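As a minimal illustration of these path-count scores (a sketch on an assumed toy network, not the full scoring of either method), the entries of $A^2$ and $A^3$ count the even-length (TCP-style) and odd-length (L3-style) paths directly:

```python
import numpy as np

# Toy undirected network: the path 0-1-2-3 plus the shortcut link 0-2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Entry (i, j) of A^k counts the paths of length k between i and j.
A2 = A @ A        # simplest TCP-style (even-path) scores
A3 = A @ A @ A    # simplest L3-style (odd-path) scores

# Nodes 0 and 3 are not linked; one length-2 path (0-2-3) and one
# length-3 path (0-1-2-3) connect them.
print(A2[0, 3], A3[0, 3])  # -> 1 1
```

Full methods refine these raw counts (e.g., with degree normalizations), but the matrix-power structure is the common core.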
L3 is based on the assumption that a candidate partner is similar to the existing partners of a node, $P=AS$, as illustrated in Figure \ref{fig:1} c). These results \cite{kovacs2019network}, and follow-up studies \cite{Cannistraci:2018, pech2019link, kitsak2020latent}, show that the L3 method significantly outperforms other link prediction methods, for example, in some complementarity-driven networks. \begin{figure}[h!] \centering \includegraphics[width=0.8\columnwidth]{Figures/fig1.pdf} \caption{\footnotesize\textbf{Classical network-based link prediction.} \textbf{a)} Given a complex network described by a graph with a corresponding adjacency matrix $A$, \textbf{b)} one can predict new links by associating a prediction value $p_{ij}$, or \textit{score}, to every pair of nodes \{i, j\}, such that a higher value $p_{ij}$ correlates with a higher probability of the link \{i, j\} appearing. \textbf{c)} Predictions based on TCP rely on similarity (matrix $S$) between nodes, quantified in the simplest case as $P\sim S\sim A^2$, counting paths of length 2 between pairs of nodes. As an alternative, proteins often connect to others that are similar to their neighbours, but not necessarily similar to themselves \cite{kovacs2019network}. In the simplest case, the authors in \cite{kovacs2019network} quantify this principle by taking $P\sim AS\sim A^3$, counting paths of length 3 between nodes. A possible extension of these principles is to quantify direct similarity with even powers of $A$ and neighbour similarity with odd powers of $A$. \textbf{d)} Most classical link prediction methods output the full matrix $P$, often dense, organized in a ranked list of scores from highest to lowest, where the relevant top $l$ predictions are those where the precision is above a user-determined threshold $\delta$, contributing to the final inferred network (\textbf{e)}). 
The need to calculate all pairwise scores constitutes a computational burden which can become intractable as we scale the methods to larger networks.} \label{fig:1} \end{figure} Our quantum approach takes inspiration from both these paradigms, utilizing even-length (TCP) and odd-length (L3-like) paths. One of the main reasons why link prediction may be well suited to a quantum computer is the realisation that in practice we are not interested in knowing the scores of all pairs of nodes, but we simply wish to know which ones have the highest score up to a certain cut-off threshold, as illustrated in Figure \ref{fig:1} d) and e). By encoding the prediction scores in the amplitudes of a quantum superposition and performing quantum measurements on the system, the predictions with the highest score will be naturally sampled with higher probability, which can potentially be advantageous compared to the classical case of explicitly computing all scores. We proceed now in Section \ref{sec:quantum} with the description of the quantum method and discuss in Section \ref{sec:results} the comparison with classical path-based methods both in terms of prediction precision and resource complexity. \section{Quantum Link Prediction} \label{sec:quantum} We now describe our method for quantum link prediction, denoted as QLP, which we summarize at the end. We base our approach on a Continuous-Time Quantum Walk (CTQW) \cite{farhi1998quantum, kempe2003quantum}, where the Hilbert space of the quantum walker is defined by the orthonormal basis set $\{\ket{j}\}_{j\in\mathcal V}$, with each $\ket{j}$ corresponding to a localized state at a node $j$. For simplicity we consider the Hamiltonian of the evolution as the adjacency matrix of the graph, $H=A$, but later extend this selection to include a degree normalization. In Figure \ref{fig:2} we show the main structure of the QLP circuit using a qubit representation. 
In the simplest case, we require $n=\log_2{N}$ qubits to add a binary label to each of the $N$ nodes, hereafter marked by the subscript $n$, and we consider an extra ancilla qubit $q_a$ that doubles the Hilbert space of the quantum walk, such that any node $j$ has two associated basis states $\ket{0}_a\ket{j}_n$ and $\ket{1}_a\ket{j}_n$. For an initial state $\ket{\psi_j(0)}=\ket{0}_a\ket{j}_n$, the first step in the circuit of Figure \ref{fig:2} is to apply an Hadamard gate (Figure \ref{fig:2} c) to $q_a$, which creates the superposition $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})_a\ket{j}_n$. A conditional CTQW is then applied which evolves the $q_a=\ket{0}$ subspace with $e^{-iAt}$ and the $q_a=\ket{1}$ subspace with $e^{+iAt}$. Finally, a second Hadamard gate is applied to $q_a$ to interfere the two quantum walks in the computational basis, leading to the state \begin{equation} \ket{\psi_j(t)}=\ket{0}_a\left(\frac{e^{-iAt}+e^{iAt}}{2}\right)\ket{j}_n + \ket{1}_a\left(\frac{e^{-iAt}-e^{iAt}}{2}\right)\ket{j}_n. \label{eq:qlpstate0} \end{equation} To make the connection with link prediction more evident, it is useful to rewrite the previous expression as \begin{equation} \ket{\psi_j(t)}=\ket{0}_a\left(\sum_{k=0}^{+\infty}c_\text{even}(k, t)\,A^{2k}\right)\ket{j}_n + i\ket{1}_a\left(\sum_{k=0}^{+\infty}c_\text{odd}(k, t)\,A^{2k+1}\right)\ket{j}_n, \label{eq:qlpstate} \end{equation} where we have replaced the exponential terms with their respective power series, and defined the time-dependent coefficients as $c_\text{even}(k, t)=(-1)^kt^{2k}/(2k)!$ and $c_\text{odd}(k, t)=(-1)^{k+1}t^{2k+1}/(2k+1)!$. A detailed calculation leading to Eq. \ref{eq:qlpstate} can be found in Supplementary Note 1. Given some initial node $j$, Eq. \ref{eq:qlpstate} describes the state that is created following the QLP evolution. This state has two entangled components, one with a linear combination of even powers of $A$ for $q_a=\ket{0}$, and another with odd powers of $A$ for $q_a=\ket{1}$. 
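The two interfering walks can be checked in a classical simulation: the even and odd series above resum to $\cos(At)$ and $-i\sin(At)$, respectively. A brief numpy/scipy sketch on an assumed toy network:

```python
import numpy as np
from scipy.linalg import expm

# Toy adjacency matrix and walk time (illustrative values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
t = 1.0

# Interference of the two conjugate walks splits the evolution into
# even and odd power series in At.
even_part = (expm(-1j * A * t) + expm(1j * A * t)) / 2  # even powers of A
odd_part = (expm(-1j * A * t) - expm(1j * A * t)) / 2   # odd powers of A

# The resummed series: even -> cos(At), odd -> -i sin(At), evaluated
# here through the eigendecomposition of the symmetric matrix A.
w, V = np.linalg.eigh(A)
cos_At = V @ np.diag(np.cos(w * t)) @ V.T
sin_At = V @ np.diag(np.sin(w * t)) @ V.T
assert np.allclose(even_part, cos_At)
assert np.allclose(odd_part, -1j * sin_At)
```

The eigendecomposition route is also how the exact score distributions are computed classically in the benchmarks below.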
The time $t$ of the quantum walk defines the linear weights, and acts as a hyperparameter in the model. This describes the unitary part of the protocol. To obtain relevant predictions from this state we must perform repeated measurements on the system to draw multiple samples, as we now describe. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Figures/fig2.pdf} \caption{\footnotesize\textbf{Quantum link prediction (QLP).} \textbf{a)} A set of $n=\log_2(N)$ qubits creates a Hilbert space that encodes each of the $N$ nodes as a basis state. With an extra ancilla qubit $q_a$ in a superposition of $\ket{0}$ and $\ket{1}$, created by an Hadamard gate (\textbf{c)}), and an initial state marking a node $j$, the $q_a=\ket{0}$ subspace is evolved with a $e^{-iAt}$ quantum walk, and the $q_a=\ket{1}$ subspace is evolved with the conjugate $e^{+iAt}$ quantum walk (\textbf{c)}). A second Hadamard gate applied to the ancilla qubit mixes the two subspaces together and creates an interference between the two quantum walks. \textbf{b)} Finally, by measuring $q_a$ the state of the network collapses to one of two possible cases, imposing either a sum or subtraction of the two conjugate evolutions, which encodes even powers of $A$ (even predictions) for $q_a=\ket{0}$ and odd powers of $A$ (odd predictions) for $q_a=\ket{1}$. The measurement of the remaining $n$ qubits returns a bit string marking a certain node $i$, which together with the initial node $j$ forms a sample of a link $(i,\,j)$.} \label{fig:2} \end{figure} The first step is to measure $q_a$, yielding $\ket{0}$ or $\ket{1}$ and collapsing the state of the remaining qubits to $\ket{\psi_{j}(t)}_n^\text{even}\propto\left(\sum_{k}c_\text{even}(k, t)\,A^{2k}\right)\ket{j}$ or $\ket{\psi_{j}(t)}_n^\text{odd}\propto\left(\sum_{k}c_\text{odd}(k, t)\,A^{2k+1}\right)\ket{j}$, respectively, where we omitted the normalization. 
This effectively selects whether the link sampled will be drawn from a distribution encoding even or odd powers of $A$. The last step is then to measure the remaining qubits, yielding a bit string corresponding to a sample of some node $i$ with probability \begin{equation} p_{ij}^\text{even}\propto\left|\mel{i}{\left(\sum_{k=0}^{+\infty}c_\text{even}(k, t)\,A^{2k}\right)}{j}\right|^2\quad\text{or}\quad p_{ij}^\text{odd}\propto\left|\mel{i}{\left(\sum_{k=0}^{+\infty}c_\text{odd}(k, t)\,A^{2k+1}\right)}{j}\right|^2,\label{eq:qlpscores} \end{equation} which together with the initial node $j$ forms a sample of a link $(i,\,j)$. The values $p_{ij}^\text{even}$ and $p_{ij}^\text{odd}$ encode the prediction scores of the link $(i,\,j)$, but these cannot be directly extracted from the algorithm. Instead, what this algorithm allows is the repeated sampling of these distributions, yielding pairs of nodes $(i,\,j)$ with probability proportional to $p_{ij}^\text{even}$ or $p_{ij}^\text{odd}$. This is analogous to sampling entries $(i,\,j)$ from the matrix of prediction scores $P$ with probability proportional to $|P_{ij}|^2$. As discussed in Section \ref{sec:clp}, predictions coming from even or odd powers of $A$ are typically useful in different types of networks. For a given network application of QLP, whether each sample obtained corresponds to an even or odd prediction depends on the value measured in the ancilla qubit, and this postselection can only be done probabilistically \cite{kothari2014efficient}. This is a potential sampling overhead, as unwanted predictions need to be discarded. Another overhead to consider is the possibility of sampling the initial node, or of sampling already existing links, given the contribution of the identity $I$ in $p_{ij}^\text{even}$ and $A$ in $p_{ij}^\text{odd}$, which must also be discarded. As stated, QLP uses a linear combination of powers of $A$ weighted by the time $t$. 
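This sampling step can be mimicked classically on an assumed toy network: here the scores are the resummed odd distribution $p_{ij}^\text{odd}\propto|\mel{i}{\sin(At)}{j}|^2$, with samples of the initial node and of existing links discarded as described:

```python
import numpy as np

# Toy network (path 0-1-2-3 plus link 0-2), walk time and initial node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
t, j = 1.0, 0

# Odd-prediction amplitudes: the odd component resums to -i sin(At).
w, V = np.linalg.eigh(A)
sin_At = V @ np.diag(np.sin(w * t)) @ V.T
p_odd = np.abs(sin_At[:, j]) ** 2

# Discard samples of the initial node and of already existing links.
candidates = (A[:, j] == 0) & (np.arange(len(A)) != j)
scores = np.where(candidates, p_odd, 0.0)
scores /= scores.sum()

# Repeated measurement = sampling links (i, j) from this distribution.
rng = np.random.default_rng(0)
samples = rng.choice(len(A), size=1000, p=scores)
# In this toy graph node 3 is the only candidate link for j=0,
# so every retained sample predicts the link (3, 0).
```

On a real network many candidates compete, and the tally of samples per node estimates the ranking of the odd scores.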
A classical prediction method with a linear combination of odd powers of $A$ was presented in \cite{pech2019link}, which was shown to sometimes improve the prediction precision compared to the original L3 method from \cite{kovacs2019network} by also fitting an additional model parameter. Another popular link prediction method is the Katz index \cite{katz1953new}, which uses a linear combination of all powers of $A$. We can now summarize the QLP algorithm. Firstly, an initial state $\ket{\psi_j(0)}=\ket{0}_a\ket{j}_n$ is prepared for a node $j$ in the network. Secondly, the QLP evolution leading to Eq. \ref{eq:qlpstate} is performed for a specific time $t$. Finally, the ancilla and node qubits are measured to obtain a sample of a link $(i,j)$ corresponding to an even or odd prediction, and the procedure is repeated. The number of samples that output a certain link $(i,j)$ will follow the distributions described by Eq. \ref{eq:qlpscores}, and thus represent a score for link $(i,j)$. Once predictions associated with node $j$ are sufficiently characterized, the procedure can be repeated for other relevant nodes in the network. \section{Results and Discussion} \label{sec:results} \subsection{Complexity analysis} To identify a potential quantum advantage, we briefly discuss how link prediction scales on a classical computer. Complex networks are typically sparse \cite{barabasi2016network} with the average degree $k_\mathrm{av}\ll N$, and thus there are $\mathcal{O}(N^2)$ potentially missing links. Thus, the general case of computing all possible scores leads to a classical complexity of at least $\mathcal{O}(N^2)$. Different methods scale differently depending on the assumptions made about the solution. 
For example, the scaling of simple length-2 based methods is $\mathcal{O}(N\langle k^2\rangle)$ and the scaling of L3 \cite{kovacs2019network} is upper bounded by $\mathcal{O}(N\langle k^3\rangle)$, where $\langle k^n\rangle$ is the average of the $n$-th power of the degrees. For certain realistic values of $\gamma$, the exponent in the power-law degree distribution of a scale-free network, the moments $\langle k^2\rangle$ and $\langle k^3\rangle$ diverge with growing $N$, as we estimate in Supplementary Note 2. These methods do not calculate a score for every possible missing link, only for those corresponding to nodes at distance 2 or 3. However, other methods also exceed the $\mathcal{O}(N^2)$ scaling, as is the case of LO \cite{pech2019link}, which uses a matrix inversion to represent a linear combination of odd powers of $A$, and is one of the best performing classical methods tested. Complex networks can easily reach sizes of up to millions or billions of nodes; consider, for example, online social and e-commerce networks, or the neuronal network in the human brain \cite{azevedo2009equal}. Improving these scalings may thus be decisive in the application of link prediction methods to larger networks in the future. In order to estimate the complexity of QLP, we must first estimate the number of samples required for a given network. It is often reasonable to assume that the number of missing links at any given node $j$ will be proportional to its observed degree $k_j$, which happens for instance when the missing links are removed randomly from the network. Then, each initial node $j$ will require a number of repetitions of QLP proportional to $k_j$ to sufficiently characterize the predictions associated with node $j$. This leads to $\mathcal{O}(Nk_\mathrm{av})$ total samples for a network of $N$ nodes and average degree $k_\mathrm{av}$. 
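To illustrate why the degree factors matter (a sketch assuming power-law degree samples with a toy exponent $\gamma=2.5$, not any of the benchmarked datasets), one can compare the factors entering $\mathcal{O}(N\langle k^2\rangle)$, $\mathcal{O}(N\langle k^3\rangle)$ and $\mathcal{O}(Nk_\mathrm{av})$:

```python
import numpy as np

# Draw N degrees from a (discretized) power law p(k) ~ k^(-gamma)
# by inverse-transform sampling; toy values, for illustration only.
rng = np.random.default_rng(1)
gamma, N = 2.5, 100_000
u = rng.random(N)
k = np.floor((1.0 - u) ** (-1.0 / (gamma - 1.0)))

k_av = k.mean()       # finite as N grows (for gamma > 2)
k2 = (k ** 2).mean()  # diverges with N for gamma < 3
k3 = (k ** 3).mean()  # diverges with N for gamma < 4

# The sample-count factor k_av stays modest while the higher
# moments are dominated by the few largest-degree hubs.
print(k_av, k2, k3)
```

Re-running with larger $N$ makes the gap between $k_\mathrm{av}$ and the higher moments grow, mirroring the divergence estimated in Supplementary Note 2.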
To provide a more detailed estimate we must analyse the cost of implementing the unitary $e^{-iHt}$ on a quantum computer, representing the CTQW used to obtain each sample. For this we can look at results from quantum simulation using a $d$-sparse Hamiltonian model, meaning that $H$ has at most $d$ entries in any given row. A state-of-the-art result \cite{low2017optimal} shows that implementing $e^{-iHt}$ scales as $\mathcal{O}\left(d\,t\|H\|_\text{max} + \frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\right)$, where $t$ is the time interval of the evolution, $\epsilon$ is the allowed error, and $\|H\|_\text{max}$ is the maximum entry in absolute value. In our case, $d=k_\text{max}$, the maximum degree of the network, and $\|H\|_\text{max}=1$ for $H=A$, which allows us to write the complexity of implementing $e^{-iHt}$ as $\mathcal{\tilde{O}}(k_\text{max}t)$, omitting logarithmic factors. We can thus write a direct complexity estimate for QLP as $\mathcal{\tilde{O}}(Nk_\text{av}k_\text{max}t)$. The most meaningful complexity comparison we can make is between methods that make similar assumptions. In that sense, both QLP and LO assume that the solution is a linear combination of powers of the adjacency matrix, and as we will see in the next section, these methods are often the best performing. Here, we can see that QLP has a potential quantum speedup given the polynomially lower dependence on $N$, but with an extra $k_\text{av}k_\text{max}$ factor. Comparing QLP to simple length-2 and length-3 based methods is less straightforward, as the difference is solely based on the degree factors. As mentioned earlier, while the moments $\langle k^2\rangle$ and $\langle k^3\rangle$ diverge with growing $N$, the average degree remains finite. This hints at a potential quantum speedup, but it is not clear if the dependence on $k_\text{max}$ will spoil the difference. We note that the dependence on $k_\text{max}$ comes from assuming a circuit-based simulation of the quantum walk. 
However, our method is general and can also admit an analog quantum walk implementation, which would require a different complexity analysis. \subsection{Cross-validation tests} In Figure \ref{fig:3} we compare the prediction precision of QLP with that of classical link prediction methods using the standard link prediction benchmark of cross-validation on a selection of networks from different applications. Here we used a degree-normalized adjacency matrix as the Hamiltonian for QLP, $H=D^{-\frac{1}{4}}AD^{-\frac{1}{4}}$, with the final predictions mapped back to $A$ as $P=D^{\frac{1}{4}}\tilde{P}D^{\frac{1}{4}}$, where $D$ is a diagonal matrix with each entry $k_i$ being the degree of node $i$. This penalizes the counting of paths that go through hubs in the network \cite{barabasi1999emergence}, which, for the purpose of link prediction, can introduce superfluous shortcuts in the network \cite{kovacs2019network}. The scores used for QLP were an exact calculation of the distributions in Eq. \ref{eq:qlpscores} by computing the full evolution of the quantum walkers. For each network, we selected the time $t$ that maximizes the prediction precision in the first 10\% of the plotted ranks. As shown in Figure \ref{fig:3}, we can conclude that QLP matches the best performing classical link prediction methods tested in terms of prediction precision for a range of real-life complex networks \cite{luck2020interactome, biogrid, messel, hamsterster, facebook, wikivote}. In most cases, we observe that both QLP-Odd and LO stand out as the best performing methods, a result which further supports the case that there can be advantages in including higher-order powers of the adjacency matrix in the predictions \cite{pech2019link}. In some cases, QLP-Odd has a slight advantage over LO. 
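The degree normalization used in these benchmarks can be written down directly; a brief sketch (toy network assumed, and `P_tilde` is only a stand-in for the score matrix that QLP would produce from the normalized walk):

```python
import numpy as np

# Toy network; nodes of degree zero would need special handling.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)  # node degrees k_i (diagonal of D)

# Degree-normalized Hamiltonian H = D^{-1/4} A D^{-1/4}:
# each step through a high-degree hub is damped by its degree.
H = np.diag(k ** -0.25) @ A @ np.diag(k ** -0.25)

# Stand-in for the QLP score matrix computed under H
# (here simply an odd power of H, for illustration only).
P_tilde = np.abs(H @ H @ H) ** 2

# Map the predictions back to the original adjacency-matrix scale.
P = np.diag(k ** 0.25) @ P_tilde @ np.diag(k ** 0.25)
```

The symmetric form of $H$ keeps the walk Hamiltonian Hermitian while still penalizing hub-mediated shortcuts.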
We highlight the results for the Facebook dataset as the only case where even-power methods stand out (RA-L2 and QLP-Even), although matched by LO, and also the only case where there is a clear difference between QLP-Odd and LO. While QLP-Odd and LO, when expressed as a power series, have a similar form, the predictions they produce are in fact different, as shown in Supplementary Note 3 and Table 3. Further results for the cross-validation benchmark are shown in Supplementary Figure 1, as well as detailed results for each of the experimental screens that contribute to the full HI-III-20 network in Supplementary Figure 2. Here, we predict interactions that have been obtained by independent, full experimental screens, simulating the case of real-life performance against future experiments. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Figures/fig3.pdf} \caption{\footnotesize\textbf{Computational cross-validation.} Cumulative precision over the top 5\% ranked predictions out of $Nk_\text{av}$ scores for each network, averaged over a 10-fold cross-validation procedure. The shaded regions correspond to the standard deviation. In each trial 10\% of the links were randomly removed and the remainder used as input to the link prediction methods. The networks used correspond to the PPI networks HI-III-20, the most recent PPI mapping of the human interactome \cite{luck2020interactome}, Yeast-Bio, a PPI network of a yeast organism \cite{biogrid}, Messel, a food web \cite{messel}, Hamsterster \cite{hamsterster} and Facebook \cite{facebook}, two online social networks, and Wiki-Vote, a vote network between users for adminship of Wikipedia \cite{wikivote}. 
For comparison, we implemented five classical link prediction methods: the L3 method \cite{kovacs2019network}, the LO method \cite{pech2019link}, the CH-L3 method \cite{Cannistraci:2018}, and two even power methods, RA-L2 (resource allocation) \cite{Zhou:2009}, and CH-L2 \cite{cannistraci2013link, Cannistraci:2018}. The dataset parameters characterizing each network are shown in Supplementary Table 1, and the values selected for the optimal parameters $t$ in the QLP method and $\alpha$ in the LO method are shown in Supplementary Table 2.} \label{fig:3} \end{figure} \section{Conclusions} To the best of our knowledge, QLP is the first quantum algorithm for link prediction in complex networks, and the first to potentially offer a quantum speedup for a practical network science problem. Furthermore, the inclusion of even and odd paths allows QLP to make both TCP-like and L3-like predictions, thus encompassing all types of networks where these topological patterns play a role. Our results serve as a proof of principle for potential future applications of QLP in large complex networks using quantum hardware. Recently, a 62-node network CTQW was demonstrated experimentally \cite{gong2021quantum}, an important first step towards this goal. Besides the potential improvement in complexity when sampling from the quantum solution, we should also note that a classical simulation of QLP relies on the diagonalization of the adjacency matrix, and thus its classical complexity is comparable to that of other classical link prediction methods. This makes QLP easier to develop further with a focus on immediately relevant real-world applications, while at the same time exploring other ways in which quantum features of QLP can be advantageous when quantum hardware becomes more widely available. These findings open the way to explore this novel frontier of quantum-computational advantage for complex network applications.
\section*{Acknowledgements} The authors thank Albert-László Barabási for the useful discussion, and acknowledge the support from the JTF project \textit{The Nature of Quantum Networks} (ID 60478). JPM, BC and YO thank the support from Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT, Portugal), namely through project UIDB/50008/2020, as well as from projects TheBlinQC and QuantHEP supported by the EU H2020 QuantERA ERA-NET Cofund in Quantum Technologies and by FCT (QuantERA/0001/2017 and QuantERA/0001/2019, respectively), and from the EU H2020 Quantum Flagship project QMiCS (820505). JPM acknowledges the support of FCT through scholarships SFRH/BD/144151/2019, and BC acknowledges the support of FCT through project CEECINST/00117/2018/CP1495.
\section{Introduction} Density matrices traced out in real space are becoming a fundamental tool in characterizing different states of matter in condensed matter systems~\cite{op-2,Amico:rmp08,Vidal:prl03,Pollmann:prb10,Qi:prl12,Kitaev:prl06,Flammia:prl09}. While the interest in spatial reduced density matrices (RDMs) is relatively recent, the use of density matrices in general is ubiquitous~\cite{szabo:book}. Particle RDMs are calculated and used extensively in quantum Monte Carlo (QMC) simulations, and many techniques have been developed to calculate such density matrices~\cite{holzmann-1,holzmann-2,reptation1}. Recent QMC entanglement studies have focused on determining the Renyi entropies\cite{renyi-3,renyi-4}. Calculations of spatial Renyi entanglement with QMC include lattice calculations of topological systems~\cite{renyi-2} and continuum calculations of Fermi liquids~\cite{tubman2,tubman3} and molecules~\cite{tubman1}. The Renyi entropies calculated in these works are generally determined with the $\text{Swap}$ operator, which can be applied to interacting systems. In the community of \textit{ab initio} research, there has also been much recent work on using QMC to make highly accurate calculations of the momentum distribution of realistic materials~\cite{holzmann-2}. It turns out that the estimators used to calculate the momentum distribution can be seen as a form of the $\text{Swap}$ operator. In this work we use the best techniques that have been developed in both of these communities to introduce a generalization of the $\text{Swap}$ operator, making it an efficient tool to calculate the entanglement spectrum of spatial RDMs. The spatial entanglement spectrum is derived from a density matrix $\rho_{A}$ in which a system is split into two regions (A and B), and the degrees of freedom in region B are traced out. The matrix elements of such a density matrix can be expanded in any basis that is complete in region A.
However, in our numerical calculations we can in general consider only a finite number of basis elements. Therefore practical calculations of the entanglement spectrum are basis dependent, and a carefully selected basis is required. The algorithm presented here differs from recent proposals~\cite{lau1,assad1,grover1,sling1} for calculating the entanglement spectrum in several ways. First, there is no calculation of high-order Renyi entropies, and no need to use Maximum Entropy techniques to project out the spectrum. Additionally, because $\rho_{A}$ is expanded in a basis set as part of our approach, it is possible to select a good basis set to reduce the size of the matrix that needs to be constructed. \textit{Generalized Swap Operator}: Our approach for calculating the entanglement spectrum is to expand the usage of the $\text{Swap}$ operator, which was first used in QMC for calculating the Renyi entropy of spatial RDMs on a spin lattice~\cite{renyi-1}. It is based on the replica trick, in which coordinates are swapped between copies of a trial wave function. The $\text{Swap}$ operator was originally defined in a Hilbert space that has been enlarged as a tensor product with itself (although smaller enlargements can be and are used in this work), and we consider its effect when applied to a trial wave function written in the form $\Psi_\text{T} = \sum_{\alpha\beta} C_{\alpha_{}\beta_{}}|\alpha_{}\rangle |\beta_{} \rangle$, where $\alpha$ and $\beta$ are orthonormal basis elements in regions A and B respectively.
With this form of the wave function the $\text{Swap}$ operator is defined as \begin{align} \widehat{\swapFn}_{A} & \left( \sum_{\alpha_{1}\beta_{1}} C_{\alpha_{1}\beta_{1}}|\alpha_{1}\rangle |\beta_{1} \rangle \right) \otimes \left( \sum_{\alpha_{2}\beta_{2}}D_{\alpha_{2}\beta_{2}} |\alpha_{2}\rangle |\beta_{2} \rangle \right) \notag \\ {} = & \sum_{\alpha_{1}\beta_{1}} C_{\alpha_{1}\beta_{1}} \sum_{\alpha_{2}\beta_{2}}D_{\alpha_{2}\beta_{2}} |\alpha_{2}\rangle |\beta_{1} \rangle \otimes |\alpha_{1}\rangle |\beta_{2} \rangle \; . \label{eqn:swap1} \end{align} Taking the expectation value of the $\text{Swap}$ operator gives \begin{align} & \left\langle \Psi_{\text{T}} \otimes \Psi_{\text{T}} \left| \widehat{\swapFn}_{A} \right| \Psi_{\text{T}} \otimes \Psi_{\text{T}} \right\rangle \notag \\ {} & \qquad = \sum_{\alpha_{1}\beta_{1}\alpha_{2}\beta_{2}}C_{\alpha_{1}\beta_{1}}C_{\alpha_{2}\beta_{1}}^* C_{\alpha_{2}\beta_{2}} C_{\alpha_{1}\beta_{2}}^* \notag \\ {} & \qquad = \sum_{\alpha_{1}\alpha_{2}}(\rho_{A})_{\alpha_{1}\alpha_{2}}(\rho_{A})_{\alpha_{2}\alpha_{1}} = \text{Tr}(\rho_{A}^{2}) \; . \label{eqn:swapexp} \end{align} The degrees of freedom over which a wave function can be partitioned are not limited to spatial degrees of freedom, and are in fact quite general, as there has been some recognition that particle and spatial RDMs can be calculated in similar ways~\cite{sling1,herdman1}. More generally, in a QMC calculation one can imagine swapping coordinates of spin, space, particle and momentum~\cite{mom1,mom2} in some cases. Combinations of such degrees of freedom, corresponding to a ``hybrid reduced density matrix'', are easily accessible, although we are unaware of this having been exploited as of yet. Thus the techniques developed here are not limited to the spatial RDM. The term \emph{Swap operator} has not traditionally been used by the QMC community for particle RDM calculations, but the evaluation of such quantities can be thought of as its generalization.
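Equation~(\ref{eqn:swapexp}) can be checked numerically for a toy state with random coefficients $C_{\alpha\beta}$ (a self-contained sketch of ours, not the paper's QMC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 3, 4
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
C /= np.linalg.norm(C)                  # normalize the trial state

rho_A = C @ C.conj().T                  # (rho_A)_{a1 a2} = sum_b C_{a1 b} C*_{a2 b}
purity = np.trace(rho_A @ rho_A).real   # Tr(rho_A^2)

# swap-operator form: sum_{a1 b1 a2 b2} C_{a1 b1} C*_{a2 b1} C_{a2 b2} C*_{a1 b2}
swap_value = np.einsum('ab,cb,cd,ad->', C, C.conj(), C, C.conj()).real

assert np.isclose(purity, swap_value)
```

The same four-index contraction is what the replica-trick sampling estimates stochastically in a QMC run.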
Of particular interest to this work is a form most recently used in the accurate calculations of the momentum distribution~\cite{holzmann-1}, \begin{align} \label{eqn:mom} \rho_{1}(k) & = \left\langle \Psi_\text{T}(\mathbf{R'}) \otimes e^{i\mathbf{k}\cdot\mathbf{r'}} \left| \widehat{\swapFn} \right| e^{-i\mathbf{k}\cdot\mathbf{r}} \otimes \Psi_\text{T}(\mathbf{R}) \right\rangle . \end{align} This expectation value is evaluated as an exchange of one particle of the many body trial wave function, $\Psi_\text{T}$, with an electron sampled from a single particle plane wave basis element. What is important to note here is that the $\text{Swap}$ operator is being used to project the 1-RDM onto a basis set of interest, the plane wave basis. By comparing Equations~(\ref{eqn:swapexp}) and (\ref{eqn:mom}), one might expect that we can use the $\text{Swap}$ formalism to project the spatial entanglement matrix into a basis separately for both region A and region B. We can see that this can be done explicitly by considering $\widehat{\swapFn}$ acting on a single basis element $|\alpha_{1}\rangle$ in region A, \begin{align} \widehat{\swapFn}_{A} & \left[ |\alpha_{1}\rangle \otimes \left( \sum_{\alpha_{2}\beta_{}} C_{\alpha_{2}\beta_{}}|\alpha_{2}\rangle |\beta_{} \rangle \right) \right] \notag \\ {} & = \sum_{\alpha_{2}\beta_{}}C_{\alpha_{2}\beta_{}} |\alpha_{2}\rangle \otimes | \alpha_{1}\rangle |\beta_{} \rangle \; . \end{align} We can then evaluate the expectation of the $\text{Swap}$ operator to calculate the following matrix element, \begin{align} & \left\langle \Psi_{\text{T}} \otimes \alpha_{2} \left| \widehat{\swapFn}_{A} \right| \alpha_{1} \otimes \Psi_{\text{T}} \right\rangle \notag \\ {} & \qquad = \sum_{\beta_{}}C_{\alpha_{2}\beta_{}}C_{\alpha_{1}\beta_{}}^* = (\rho_{A})_{\alpha_{1}\alpha_{2}} \; . \label{eqn:final} \end{align} These are the matrix elements for the spatial RDM, whose eigenvalues make up the entanglement spectrum.
The $\alpha$ basis elements are different from the plane waves in Equation~(\ref{eqn:mom}) in that they can involve multiple particles, and they only have support in region A. This equation was derived with a basis set for region A such that $\langle \alpha_{1}|\alpha_{2} \rangle = \delta_{\alpha_{1},\alpha_{2}}$. The estimator in QMC for these matrix elements can be derived as \begin{widetext} \begin{align} & \left\langle \Psi_\text{T} \otimes \alpha_{i} \left| \widehat{\swapFn}_{A} \right| \alpha_{j} \otimes \Psi_\text{T} \right\rangle = \int d\mathbf{x}_{1} \cdots d\mathbf{x}_{N} d\mathbf{x}_{N+1} \cdots d\mathbf{x}_{\alpha(N)} \Psi^{*}_\text{T}(x_{A_{1}},x_{B_{}})\alpha^{*}_{i}(x_{A_{2}})\alpha_{j}(x_{A_{1}})\Psi_\text{T}(x_{A_{2}},x_{B_{}}) \nonumber \\ & \qquad \qquad = \int d\mathbf{x}_{1} \cdots d\mathbf{x}_{N} d\mathbf{x}_{N+1} \cdots d\mathbf{x}_{\alpha(N)} |\Psi_\text{T}(x_{A_{1}},x_{B_{}})|^{2}|\alpha_{i}(x_{A_{2}})|^{2}\frac{\Psi_\text{T}(x_{A_{2}},x_{B_{}})\alpha_{j}(x_{A_{1}})} {\Psi_\text{T}(x_{A_{1}},x_{B_{}})\alpha_{i}(x_{A_{2}})} \label{eqn:est} , \end{align} \end{widetext} where $x_{A_{1}}$, $x_{B}$ are the coordinates of electrons in regions A and B sampled from $|\Psi_\text{T}|^{2}$ and $x_{A_{2}}$ are coordinates sampled from $|\alpha^{}_{i}|^{2}$. Thus, if one had a complete basis set in region A, Equation~(\ref{eqn:final}) is all that is needed in principle to calculate the full entanglement spectrum. For practical calculations it is expensive to use large basis sets and thus finding a rapidly convergent basis set is important. In our implementation we calculate all the matrix elements with equation (\ref{eqn:est}) in a single variational Monte Carlo (VMC) calculation in which the wave function $\Psi_{\text{T}}$ and all the $\alpha_{i}$ are all sampled simultaneously. At each step a walker position is sampled from $|\Psi_{\text{T}}|^{2}$, exactly as in standard VMC. 
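The structure of the estimator in Equation~(\ref{eqn:est}) can be illustrated in a discrete toy model (our own sketch: with a discrete orthonormal basis, sampling from $|\alpha_{i}|^{2}$ pins the swapped coordinate, and the reweighted average converges to the matrix element $(\rho_{A})_{ij}$):

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 4
# coefficients bounded away from zero keep the ratio weights well behaved
C = rng.uniform(0.5, 1.5, size=(dA, dB))
C /= np.linalg.norm(C)

# sample composite configurations (a1, b) from |Psi_T|^2
idx = rng.choice(dA * dB, size=200_000, p=(C ** 2).ravel())
a1, b = np.unravel_index(idx, (dA, dB))

def swap_estimator(i, j):
    # discrete analog of the estimator: the weight reduces to
    # delta_{j, a1} * C[i, b] / C[a1, b], averaged over the samples
    return np.where(a1 == j, C[i, b] / C[a1, b], 0.0).mean()

rho_A = C @ C.T   # exact reduced density matrix for comparison
```

As the sample grows, `swap_estimator(i, j)` converges to `rho_A[i, j]`; in the continuum version, the same ratio of wave function evaluations plays the role of the weight.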
We then identify all the $\alpha_{i}$ that are compatible with this walker, in that they must have exactly the same number of spin-up and spin-down electrons in region A. For all compatible $\alpha_{i}$, where each $\alpha_{i}$ has its own walker, we perform a VMC step that samples from $|\alpha_{i}|^{2}$. We use the wave function evaluations to calculate the denominator of the estimator in Equation~(\ref{eqn:est}), and then we swap the region A coordinates ($x_{A}$) between the walkers of $\alpha_{i}$ and $\Psi_{\text{T}}$ to calculate the numerator of the estimator. \begin{figure}[tbp] \centering \includegraphics[scale=0.1]{orbitals1.png} \caption{The five orbitals with the largest eigenvalues of the effective entanglement Hamiltonian for H$_{2}$. The orbitals, unlike normal single particle orbitals, exist only in the half space of region A. \label{fig:orbs}} \end{figure} \textit{Efficient basis set generation}: Creating a rapidly converging basis in which to expand the spatial entanglement spectrum is similar to the problem of creating multi-determinant wave functions for continuum Hamiltonians~\cite{szabo:book}, where there are many possible basis states and one has to select which determinants to include in the diagonalization of the Hamiltonian. A common basis used for a multi-determinant expansion is the natural orbital basis. The natural orbitals are the eigenvectors of the 1-RDM of a many body trial wave function. Whether or not this is the most compact basis for a multi-determinant expansion has been discussed extensively~\cite{szabo:book}, but it cannot be proved rigorously, as there are notable exceptions~\cite{natorb}. Regardless, it is used in many techniques to diagonalize continuum Hamiltonians~\cite{gamess-1}. In a manner similar to generating natural orbitals, we suggest that a good basis set for such an expansion of the spatial RDM can be generated with the correlation method~\cite{corr-2,redmat-1}.
The correlation method was originally developed to calculate the spatial entanglement for single determinant wave functions. However, an effective entanglement Hamiltonian from the correlation method can be generated and diagonalized for a multi-determinant wave function. These eigenvectors can be considered the spatial natural orbitals. This is a natural definition to adopt, as the correlation matrix which is used to determine the effective entanglement Hamiltonian is given by ${C}_{\alpha_{1}\alpha_{2}} = \text{Tr}({\rho}_\text{1}c^{\dag}_{\alpha_{1}}c_{\alpha_{2}}) $, where $\alpha$ represents degrees of freedom that exist only in region A. In other words, we are using matrix elements from the 1-RDM that exist only in region A to generate our spatial natural orbitals. The effective entanglement Hamiltonian can be constructed from the correlation matrix as follows, \begin{equation} \mathbf{H}^{(1)}_\text{ent}= \ln \left( \frac{\mathbf{I}-\mathbf{C}}{\mathbf{C}} \right) \; . \end{equation} The rank of the effective entanglement Hamiltonian is arbitrarily large since it was derived from $\rho_{1}$ of an interacting system. We will generally have to limit ourselves to a subset of the eigenvectors of the entanglement Hamiltonian to create our basis. We pick this subset based on the eigenvalues of the entanglement Hamiltonian. The eigenvalues, even for a multi-determinant wave function, range between 0 and 1. For selecting orbitals in our truncated expansion, we observe that orbitals with eigenvalues close to 1 are the most important to retain, which is what we expect, as this is the rule of thumb for selecting natural orbitals for multi-determinant expansions. The construction of the $|\alpha\rangle$ basis elements is straightforward once the entanglement Hamiltonian is diagonalized~\cite{peschel-func} and a subset of the eigenvectors is selected.
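As a free-fermion illustration of this construction (a minimal sketch of ours following the standard correlation-matrix approach of \cite{peschel-func}; the tight-binding chain and all parameter choices are our own):

```python
import numpy as np
from scipy.linalg import eigh

L, LA = 12, 6                           # chain length and size of region A
h = -np.eye(L, k=1) - np.eye(L, k=-1)   # tight-binding hopping matrix (open chain)
_, phi = eigh(h)
occ = phi[:, : L // 2]                  # fill the lowest L/2 single-particle modes
Ccorr = (occ @ occ.T)[:LA, :LA]         # correlation matrix C restricted to region A

zeta = eigh(Ccorr, eigvals_only=True)   # eigenvalues lie between 0 and 1
eps = np.log((1.0 - zeta) / zeta)       # spectrum of H_ent^(1) = ln((I - C)/C)
```

The many-body entanglement spectrum then follows by occupying subsets of these single-particle levels, mirroring the construction of the $|\alpha\rangle$ basis elements.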
The $|\alpha\rangle$ are single determinant states that differ from familiar determinant basis sets in that all particle sectors are present. All single determinant states of $0$ to $N$ particles that are consistent with the physical number of particles in each spin species should be constructed. \textit{Spectrum of a covalent bond}: Ultimately we want to apply these techniques to condensed matter systems, but in this work we first look at something interesting that has not yet been studied with the entanglement spectrum: molecular bonding, and in particular, the H$_{2}$ molecule. The H$_{2}$ molecule is the prototypical covalent bond, and we are interested in whether the entanglement spectrum can be used to characterize the properties of bonding in molecules~\cite{tubman1}. For this system, we take the half space as our spatial partition, dividing the space equally between the two hydrogen atoms. We note that it is possible to use the algorithms here with any spatial partition of interest. For a single determinant the spatial entanglement has at most $N$ degrees of freedom, which determine how electrons fluctuate through the spatial partition of interest. However, when interactions are introduced, the entanglement spectrum can take a more complex form. In Figure~\ref{fig:h2-spec}, we show the fully interacting entanglement spectrum for $\text{H}_{2}$ and compare it to the Hartree-Fock result. The multi-determinant wave function is a full configuration interaction (CI) calculation with a correlation-consistent basis of penta-zeta quality (cc-pV5Z)~\cite{c2-2,gamess-1}. \begin{figure}[tbp] \centering \includegraphics[scale=0.4]{h2ent2.png} \caption{The entanglement spectrum of the full configuration interaction (blue) ground state wave function of H$_{2}$. The red dashed line represents the location of the four Hartree-Fock entanglement eigenvalues (0.25). The four sectors (columns) represent the number of particles and spins in region A.
From left to right the sectors are ``zero'', ``spin-up'', ``spin-down'', and ``two electrons''. \label{fig:h2-spec}} \end{figure} For a Hartree-Fock wave function, there are four eigenvalues of the entanglement spectrum, each equal to 0.25, where each state can be labeled by the number of electrons and spins in region A. These entanglement eigenvalues can be directly interpreted as a probability, and we can then say there is a 25\% chance for each of the following states in region A: zero electrons, one spin-up electron, one spin-down electron and two electrons. For the full CI wave function, the electron correlations are such that the electrons avoid each other, which can be easily seen in Figure~\ref{fig:h2-spec}. In particular we can identify how the four states from the non-interacting system evolved into the many body spectrum. The spectral weight for the 0- and 2-particle sectors is reduced in the interacting entanglement spectrum from 0.25 to 0.20. This reflects that the electrons should stay away from each other to reduce the Coulomb energy of the system. On the other hand, the 1-particle states have had their probabilities increased to 0.28. Additionally there is some extra spectral weight, separated by a gap in the spectrum, that represents other modes of the one-electron sectors. We emphasize that although the spectral weight for the higher energy states in H$_{2}$ is small, these states are required to have statistical correlations between the electrons in regions A and B. We can extend this analysis by considering the single particle orbitals that are used to construct an eigenvector of interest. In Figure~\ref{fig:orbs} are the five single particle orbitals with the largest eigenvalues of the entanglement Hamiltonian. Despite being defined only on the half space, we can identify the first three orbitals as having the symmetry that would go into $\sigma$ bonds, and orbitals 4 and 5 as having the character of $\pi$ bonds.
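The Hartree-Fock counting above follows from the single-determinant product rule: each spin mode is found in region A with some probability $\zeta$, and the sector probabilities are products over the modes (a sketch of ours, with $\zeta = 1/2$ for the symmetric half-space cut of H$_{2}$):

```python
import numpy as np
from itertools import product

zeta = [0.5, 0.5]   # one spin-up and one spin-down mode, each with P(in A) = 1/2
spectrum = {occ: float(np.prod([z if n else 1.0 - z for z, n in zip(zeta, occ)]))
            for occ in product([0, 1], repeat=2)}
# sectors (n_up, n_down): (0,0) empty, (1,0) one up, (0,1) one down, (1,1) two electrons
# each sector carries probability 0.25, matching the four Hartree-Fock eigenvalues
```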
In the H$_{2}$ entanglement spectrum there are two degenerate eigenvectors in the high-energy part above the gap which consist of orbitals 4 and 5. Thus as far as bonding properties are concerned, the entanglement spectrum yields a description of many body fluctuations and correlations between two regions, which can further be organized by the symmetries of the orbitals that create the $\alpha$ basis set. What is new here, and not evident in other bonding descriptions, is the effect of many body correlations. A nice picture of this can be seen by noting that the eigenvalues/eigenvectors of the entanglement spectrum (for both $\rho_{A}$ and $\rho_{B}$) can be used to generate a Schmidt decomposition of the wave function, $\psi_\text{T} = \sum_{i} c_{i}|\alpha_{i}\rangle |\beta_{i} \rangle$. This is an exact representation of the wave function in which each basis state represents statistically uncorrelated electrons between regions A and B. Thus whenever there are single states in a sector separated by a large gap from the rest of the spectrum, as is the case for H$_{2}$, we can build a picture of bonding from these corresponding low energy states. This is to say we characterize the bonding as electrons fluctuating between regions A and B, as in the traditional pictures of bonding. However, in the case of a small or vanishing gap, many body correlations become important between the electrons in the different regions. \begin{figure}[tbp] \centering \includegraphics[scale=0.3]{nspec2.png} \caption{The 64 largest values of the entanglement spectrum of a multi-determinant ground state wave function of $\text{N}_{2}$. The red dashed line represents the location of the 64 largest Hartree-Fock entanglement eigenvalues (0.0156). The $x$-axis serves as an index for 16 different sectors which are distinguished by particle number and spin polarization in region A. Sectors 1-8 are symmetric with respect to sectors 9-16 within error bars.
\label{fig:N2-spec}} \end{figure} As a demonstration of the method applied to a larger system, we show in Figure~\ref{fig:N2-spec} the entanglement spectrum of $\text{N}_{2}$, which has 14 electrons. There are many interesting effects represented in the spectrum, the most important of which is the clear formation of bands. The Schmidt decomposition requires that pairs of sectors have equivalent eigenvalues, such as (8,~9) and (5,~12), but it is surprising that all 4 of these sectors (5,~8,~9,~12) are equivalent. Even more surprising is that several eigenvalues show up multiple times in the different sectors, effectively forming bands. We suggest that these bands can be considered many body resonances that are analogous to delocalized bonding orbitals. A full description of this spectrum, as well as the underlying physics, will be given in a future publication. We comment briefly on the quality of our spatial natural orbital basis set. For both H$_{2}$ and Li$_{2}$ (another molecule we tested), we find that the first four basis elements generated in our method capture more than 99\% of the spectral weight of $\rho_{A}$, and that the next few basis elements capture a large majority of the remaining weight. For N$_{2}$ the first 1024 basis elements capture more than 97\% of the spectral weight. In the limiting case of Hartree-Fock wave functions, the spatial natural orbital basis set consists of the exact eigenvectors of $\rho_{A}$ and is therefore the most efficient basis set that can be used. Thus whenever a system is only weakly interacting, we expect the method to work especially well for generating basis sets. In the case of Fermi liquids this means the basis sets are likely to be efficient in the high density limit~\cite{tubman2,tubman3}.
For strongly interacting systems, such as transition metal molecules and low-density Fermi liquids, the spatial natural orbitals are likely to remain a good single determinant basis set for the expansion, but the number of basis elements required to converge the spectrum might be large. Of course, if methods are developed to build better multi-determinant basis sets, they could be used directly in the method presented here. In this letter we have proposed a method by which the entanglement spectrum can be calculated in QMC. Furthermore, we have shown that this method can be used efficiently with a spatial natural orbital basis. We expect that this method will be useful not only for condensed matter systems but also for chemistry and the study of bonding. We have demonstrated our method on the $\text{H}_{2}$ and $\text{N}_2$ molecules and have shown for the first time what a covalent bond looks like through the entanglement spectrum. In addition, it is clear there is still much to be explored with these methods. Entanglement partitioning and multi-determinant localized orbitals (from the eigenvectors of the spatial RDM) may be useful as many-body generalizations of Bader analysis\cite{bader:book} and Wannier orbitals, respectively. The techniques described here can be used to study and benchmark density matrix embedding theory techniques in the continuum\cite{dmet1,dmet2}. Although we apply this method only in VMC here, in principle it can be applied with a mixed estimator in fixed-node diffusion Monte Carlo~\cite{holzmann-1} and release-node QMC~\cite{tubman4,tubman5}, in which fermion solutions can be sampled exactly. Additionally, a pure estimator can be sampled with forward walking\cite{mcbook} and reptation Monte Carlo\cite{reptation1}. \textit{Acknowledgments}: We would like to thank David Ceperley, Jeremy McMinis, and Lucas Wagner for useful discussions. This work was supported by the DARPA-OLE program and DOE DE-NA0001789.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. OCI-1053575.
\section{Introduction} \label{sec:intro} The modern perspective on Effective Field Theories (EFTs) takes them to be well-defined quantum field theories in their own right, with all the attendant structure that implies. In particular, one can define an EFT at the quantum level using a path integral. However, the standard approach to extracting observables is to sidestep this intimidating object by organizing perturbation theory through the use of Feynman diagrams. The recent reintroduction~\cite{Henning:2014wua} of the \emph{covariant derivative expansion} (CDE)~\cite{Gaillard:1985uh,Chan:1986jq,Cheyette:1987qz} in a form tailored to tackling a broader set of EFTs has sparked a renaissance in the extraction of physics directly from the path integral using functional techniques. A noteworthy example of this progress has been a general matching formula at one-loop~\cite{Henning:2016lyp, Fuentes-Martin:2016uol} that captures the far off-shell fluctuations of heavy fields. This result can be used for the extraction of the effective operators and their Wilson coefficients that characterize the effects of the heavy physics, encoded in the so-called Universal One-Loop Effective Action (UOLEA)~\cite{Henning:2014wua, Drozd:2015rsp, Ellis:2016enq, Zhang:2016pja, Ellis:2017jns}; one of its features is that it elegantly packages together what would be multiple independent Feynman diagram calculations. This has seen ready application to beyond the Standard Model (SM) scenarios by facilitating one-loop matching onto the so-called SMEFT. However, there exist a large number of EFTs whose relationship to UV physics cannot be captured by simply integrating out an entire heavy field. Here, we present the first application of the modern functional approach to such EFTs. 
Specifically, we work in a particular low-energy limit of the SM~\cite{Nussinov:1986hw,Shifman:1987rj,Isgur:1989vq,Isgur:1989ed} expanded around a non-trivial background, the Heavy Quark Effective Theory (HQET)~\cite{Isgur:1989vq, Eichten:1989zv, Georgi:1990um, Grinstein:1990mj}, see also the reviews~\cite{Georgi:1991mr, Neubert:1993mb, Shifman:1995dn, Wise:1997sg, Manohar:2000dt}. There are many novel features of HQET when compared to the SMEFT. Perhaps the most striking is that --- as a model of the long distance fluctuations of a heavy particle --- HQET is a non-relativistic EFT. Obviously, the path integral is a valid description of a non-relativistic theory (after all, it was invented as an alternative description of quantum mechanics~\cite{Feynman:1942us, Feynman:1948ur}), but the concrete demonstration that the functional approach is useful for precision non-relativistic field theory computations has been lacking until now.\footnote{To our knowledge, the only time the path integral has been discussed in the context of HQET was in~\cite{Mannel:1991mc} where the tree-level formulation of the theory was first derived, and in \cite{Chen:1993np, Kilian:1994mg, Sundrum:1997ut} where RPI was briefly discussed in the context of the path integral.} Another intriguing aspect of HQET is that it invokes the concept of a mode expansion. A single heavy quark field is decomposed into a pair of fields, which model short and long distance fluctuations. Then, \emph{assuming a particular kinematic configuration}, only the long distance modes can be accessed as external states. The short distance fluctuations can therefore be integrated out, which generates a tower of local EFT operators. It is this description that is called HQET. This type of one-to-many correspondence plays a role in many modern formulations of EFTs, and for the first time here we put such models on even firmer theoretical footing by computing observables directly from the path integral. 
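Concretely, the mode expansion referenced above takes the standard HQET form (textbook notation supplied here for orientation, not quoted from a specific source: $v$ is the heavy-quark velocity, $m_Q$ the heavy mass, and $P_\pm = (1\pm\slashed{v})/2$ are the projectors):

```latex
\begin{equation}
  Q(x) = e^{-i m_Q v\cdot x} \left[ h_v(x) + H_v(x) \right] ,
  \qquad
  h_v(x) = e^{i m_Q v\cdot x}\, \frac{1+\slashed{v}}{2}\, Q(x) ,
  \qquad
  H_v(x) = e^{i m_Q v\cdot x}\, \frac{1-\slashed{v}}{2}\, Q(x) ,
\end{equation}
```

so that $h_v$ describes the long distance fluctuations kept as external states, while $H_v$ collects the short distance fluctuations that are integrated out to produce the HQET operator tower.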
Furthermore, a theoretically appealing aspect of this work is that the covariant derivative expansion of the functional integral manifests the symmetries of the theory in a transparent way. This is obviously true for gauge invariance, but we will also be able to approach the residual Lorentz symmetry known as Reparameterization Invariance (RPI) from a novel vantage point. In particular, we will identify an intermediate stage in our calculations where RPI becomes manifest. This provides a nice contrast to the Feynman diagram approach, where this invariance only holds once one sums the full set of relevant diagrams. Through our application of the functional approach to HQET, we will expose some important features of the general formalism. Matching a UV theory onto an EFT, first done for HQET in \Ref{Falk:1990yz}, can be performed diagrammatically by equating matrix elements computed with the two descriptions of the theory at a common kinematic point. This requires knowing the EFT operator expansion and identifying the operators that are relevant to the EFT matrix element calculation. By contrast, using functional methods the generation of operators and matching of Wilson coefficients occurs in a single step. There is no need to specify the structure of the EFT before performing a matching calculation. Working through the example of HQET will show that the problem of how to marry the mode decomposition with functional methods has a nearly universal solution. Specifically, given an implementation of the mode decomposition using operator-valued projectors, one can derive a functional equation of motion for the short distance modes. This can be used to integrate out these modes, yielding a non-local EFT description that encodes the complete dynamics of the full theory in the relevant kinematic limit. This justifies the construction of the EFT as a full path integral over a well defined field.
Deriving concrete predictions using conventional techniques typically involves several steps and many subtleties, but a functional approach makes the procedure more algorithmic. The complicated multi-mode matching calculation has a simple form whose structure is elucidated by analyzing the resulting integrals using the method of regions~\cite{Beneke:1997zp, Smirnov:2002pj}. As is well known, matching in HQET only receives support from diagrams that have loops with both a short distance mode and a mode that propagates in the EFT. Understanding how to access this exact class of diagrams using functional methods has been the subject of some confusion. Showing that we can derive matching and running in HQET with these methods is a conclusive demonstration that functional methods provide a complete framework at one-loop. These results will be summarized below as a simple \emph{master formula} that encodes the matching of QCD onto HQET at one-loop and to any order in the heavy mass expansion, see \cref{eqn:SHQET1loop}. There are important practical implications of this work. One of the primary purposes of HQET is to provide efficient methods for precision calculations \emph{within} the SM. There are a number of EFTs widely used to facilitate precise predictions, including Soft Collinear Effective Theory~\cite{Bauer:2000yr,Bauer:2001ct,Bauer:2001yt}, theories of non-relativistic bound states such as nrQCD~\cite{Caswell:1985ui, Bodwin:1994jh, Grinstein:1997gv}, and others. Improving on the precision of a calculation often requires the application of novel theoretical approaches, with the goal of providing computational benefits over a naive perturbative expansion evaluated using Feynman diagrams. Specifically, functional technology has seen little use in this context, despite the fact that some of the simplifications it provides are arguably most relevant to the questions these kinematic EFTs are designed to answer.
By showing we can reproduce non-trivial matching and running results for HQET here, we open the door to understanding how to apply functional techniques in these other contexts. We additionally lay the foundation for performing new calculations within HQET itself. In particular, one can now perform matching calculations to higher order in the heavy mass expansion, which would be relevant to high precision measurements made at experiments such as LHCb and Belle II, and for connecting lattice gauge theory calculations performed in the heavy quark limit to their continuum limits~\cite{Heitger:2003nj, Blossier:2010jk}. The rest of the paper is organized as follows. In the rest of \cref{sec:intro}, we provide a summary of the known results that we reproduce in a novel way throughout this paper. Next, \cref{sec:HQET} provides a condensed introduction to HQET. \Cref{sec:DiagrammaticResidue} provides a summary of the traditional method of calculating the simplest piece of the one-loop HQET matching, the residue matching. (Readers familiar with HQET could skip these two sections.) In \cref{sec:FunctionalMethods}, we review how to extract matching and running from a functional determinant. (Readers familiar with functional methods could skip this section.) In \cref{sec:MatchingHQET}, we introduce the use of functional integration to construct kinematic EFTs, and clarify how the method of regions simplifies the derivation of the resulting master matching formula. \Cref{sec:FunctionalMatching} is then dedicated to an explanation of how such matching is performed to one-loop order. This is followed by \cref{sec:Examples}, which strengthens the case for functional methods by providing additional matching calculations, along with an example that shows how operator running can be derived in the formalism as well. \Cref{sec:Conc} then concludes. An extensive set of pedagogical appendices provide an introduction to many of the relevant technical details. 
The ``covariant derivative expansion'' technique used here was originally invented in the 1980s~\cite{Gaillard:1985uh,Chan:1986jq,Cheyette:1987qz}; we refer to this as ``original CDE'' in \cref{appsec:CDE}. This was reintroduced in the context of modern EFT calculations in~\Ref{Henning:2014wua}, and has been applied by~\Refs{Drozd:2015rsp, Ellis:2016enq, Zhang:2016pja, Ellis:2017jns, Summ:2018oko} to develop one-loop universal effective actions. A closely related variant of the original CDE, which we call ``simplified CDE'' in \cref{appsec:CDE}, was proposed in \Ref{Henning:2016lyp}. This streamlined version of the CDE turns out to be significantly more convenient for extracting operators that do not involve a gauge field strength. Most of the results in the main text of this paper were derived using this simplified CDE. In \cref{appsec:CDE}, we clarify the relations between these two versions of the CDE. For completeness, in \cref{appsec:CDE} we also provide some simple universal results (tabulated in \cref{appsubsubsec:functionaltraces}) for functional traces derived using the CDE. These include the famous elliptic operator, see \cref{eqn:UniversalDU}, which is the central object of study in the development of the UOLEA. Additionally, we provide the first computation of the contributions to the UOLEA from a term with an open covariant derivative (truncated at dimension four). This result is provided in \cref{eqn:UniversalDUJ}, and was previously unknown, as emphasized in \Refs{Ellis:2017jns, Brivio:2017vri, Kramer:2019fwz}. A variety of RGE example calculations are given in \cref{appsec:ExamplesRGE}, and a generalization of the Heavy-Heavy matching calculation is provided in \cref{sec:HHMatch2}. \subsection{Summary of Results} \label{sec:Summary} In what follows, we will present our methods by way of a few canonical matching and running calculations.
First, we derive the high scale HQET Wilson coefficients by matching QCD onto the EFT for the \emph{heavy--light} currents $\bar{q}\, \gamma^\mu (\gamma^5) \,Q$ and the \emph{heavy--heavy} currents $\bar{Q}_1\, \gamma^\mu (\gamma^5)\, Q_2$ using purely functional methods. We also present a functional derivation of running effects, using the first subleading operators in the HQET Lagrangian as a concrete example. In order to fix our notation and make comparison to standard results straightforward, we provide a brief compendium of the results we will reproduce in this paper. The conventional derivation is presented in much more detail in standard references, \eg, \Refs{Neubert:1993mb, Manohar:2000dt}.\footnote{\textbf{Notation:} We have mostly chosen to follow the notation of \Ref{Manohar:2000dt}, but have made a number of minor changes, which is partially why we provide this summary here. For dimensional regularization (dim.\ reg.), our convention is to work in $d=4 - 2\eps$ dimensions. We have also chosen a different notation for the EFT heavy quark fields. Additionally, we reserve a parenthesized superscript numeral to denote loop order: $R^{(0)}$ is tree-level, $R^{(1)}$ is the one-loop correction, and so on. This differs from the standard notation in the HQET community, \eg in \Ref{Manohar:2000dt} the one-loop correction to the residue is denoted as $R_1$. Additionally, we note that we will only denote the loop order of the renormalized terms, \ie, counterterms are implicit.} This list provides useful context for our goals here. \subsubsection*{Propagator Residue} When performing a matching calculation in any off-shell scheme, such as \MSbar, it is critical to track the difference in propagator residues (necessary for obtaining the desired $S$-matrix elements using the LSZ reduction procedure) when moving from the full theory to the EFT.
Taking the functional point of view, the resulting effect shows up as a rescaling of the kinetic terms for the EFT fields. This can then be moved to its canonical position in the Wilson coefficients by a field redefinition. Thus, the first step performed in what follows is to derive the one-loop corrections to the kinetic terms from QCD: \begin{equation} \Lag_\text{HQET} \supset \Big(1- \Delta R^{(1)}\hspace{0.8pt} \aS\Big) \, \bar{h}_v\, (i\hspace{0.8pt} v\cdot D)\, h_v \,, \end{equation} where $h_v$ models the long distance fluctuations of the heavy quark field, $v_\mu$ is the reference vector defined in \cref{eq:pmuHQET} below, $D_\mu$ is the covariant derivative, $\alpha_s$ is the strong coupling constant, and $\Delta R^{(1)}$ is the matching correction for the propagator residue. In a traditional matching calculation, the correction comes from the difference of the residues between the full and EFT descriptions: \begin{equation} \Delta R^{(1)} \hspace{0.8pt} \aS \equiv \bigg(R_Q^{(1)} - R_h^{(1)}\bigg)\, \aS = - \frac{1}{3} \frac{\aS}{\pi} \pqty{ 3\ln\frac{\mu^2}{m_Q^2} + 4} \,. \label{eqn:DeltaRdef} \end{equation} While this result is sensitive to the choice of matching scale $\mu$ due to the presence of a UV divergence, the cancellation of terms that depend on the IR regulator serves as a check that the IR behavior of the two theories is identical.
\subsubsection*{Heavy--Light Current} At leading order in the heavy-mass expansion, two HQET operators have the same quantum numbers as the heavy--light vector current of QCD, implying that they can appear in the matching: \begin{equation} \bar{q}\, \gamma^\mu \, Q = C_{V,1}\pqty{\frac{m_Q}{\mu}, \aS(\mu)}\, \bar{q}\, \gamma^\mu\, h_v + C_{V,2}\pqty{\frac{m_Q}{\mu}, \aS(\mu)}\, \bar{q}\, v^\mu\, h_v \,, \label{eq:C1V} \end{equation} where $C_{V,i}$ is the to-be-calculated matching coefficient for the vector current, which is a function of the heavy quark mass $m_Q$ and the strong coupling; there is an analogous expression for the axial current derived by replacing $C_{V,i} \rightarrow C_{A,i}$, $\gamma^\mu \rightarrow \gamma^\mu \hspace{0.8pt}\gamma^5$, and $v^\mu \rightarrow v^\mu \hspace{0.8pt}\gamma^5$ in \cref{eq:C1V}. The answer up to one-loop (see Eq.~(3.48) in \Ref{Manohar:2000dt}) can be written as \begin{subequations} \label{eqn:HLresult} \begin{align} \setlength{\jot}{7pt} C_{V,1} &= 1 + \Bigg[ \frac{1}{2} \Delta R^{(1)} + V_{\text{HL},1}^{(1)} - V_\text{eff}^{(1)} \Bigg]\, \aS + \dotsb \,, \\ C_{V,2} &= V_{\text{HL},2}^{(1)}\, \aS + \dotsb \,, \end{align} \end{subequations} where $\Delta R^{(1)}$ is defined in \cref{eqn:DeltaRdef}, and the other terms (see Eqs.~(3.66) and (3.73) in \Ref{Manohar:2000dt}, respectively) are \begin{subequations} \label{eqn:HLcomponents} \begin{align} \setlength{\jot}{5pt} V_{\text{HL},1}^{(1)}\, \aS &= - \frac{1}{3} \frac{\aS}{\pi} \pqty{\MSbardiv + 2} \,, \\ V_{\text{HL},2}^{(1)}\, \aS &= + \frac{2}{3} \frac{\aS}{\pi} \,, \\ V_\text{eff}^{(1)}\, \aS &= - \frac{1}{3} \frac{\aS}{\pi} \pqty{\MSbardiv} \,, \end{align} \end{subequations} where $\MSbardiv$ is the standard factor that is subtracted when using the $\overline{\text{MS}}$ scheme, with $\gamma_\text{E}$ denoting the Euler-Mascheroni constant. Then the axial matching coefficients are given by $C_{A,1} = C_{V,1}$ and $C_{A,2} = - C_{V,2}$. 
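As a quick numerical cross-check (not part of the standard derivation), the assembly of \cref{eqn:HLresult,eqn:HLcomponents} can be scripted: the $\MSbardiv$ poles are identical in $V_{\text{HL},1}^{(1)}$ and $V_\text{eff}^{(1)}$ and cancel in their difference, so only finite pieces enter. The numerical inputs below are arbitrary illustrative values.

```python
import math

def C_V1_assembled(alpha_s, m_Q, mu):
    """Assemble C_{V,1} from Delta R^(1) and (V_HL1 - V_eff); the 1/eps
    pieces of V_HL1 and V_eff are identical and drop out of the difference."""
    delta_R = -(1.0 / (3.0 * math.pi)) * (3.0 * math.log(mu**2 / m_Q**2) + 4.0)
    v_hl1_minus_v_eff = -(2.0 / (3.0 * math.pi))   # finite part only
    return 1.0 + (0.5 * delta_R + v_hl1_minus_v_eff) * alpha_s

def C_V1_closed(alpha_s, m_Q, mu):
    """Compact one-loop form: 1 + (alpha_s/pi) (ln(m_Q/mu) - 4/3)."""
    return 1.0 + (alpha_s / math.pi) * (math.log(m_Q / mu) - 4.0 / 3.0)
```

The two expressions agree identically for any choice of $m_Q$ and $\mu$, which also makes the residual $\mu$ dependence of $C_{V,1}$ explicit.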
The one-loop matching coefficients are thus (compare with Eq.~(3.74) in \Ref{Manohar:2000dt}) \begin{subequations} \begin{align} C_{V,1} &= 1 + \frac{\aS}{\pi} \pqty{\ln\frac{m_Q}{\mu} - \frac{4}{3}} + \ord\Big(\aS^2\Big) \,, \\ C_{V,2} &= \frac{2}{3} \frac{\aS}{\pi}+ \ord\Big(\aS^2\Big) \,. \end{align} \end{subequations} \subsubsection*{Heavy--Heavy Current} In the case of the heavy--heavy currents, for simplicity we will take the special kinematic choice of zero recoil, corresponding to $v_1 = v_2$ in the EFT (the generalization to $v_1 \neq v_2$, first done in \Ref{Neubert:1992tg}, is discussed in \cref{sec:HHMatch2}). In this limit, all possible HQET operators at leading order in the mass expansion are equal by the equations of motion, and the matching between QCD and HQET is simply\footnote{The choice of notation for the HQET fields here is made for ease of legibility in later sections, but deserves comment here so as to not mislead. The labels $v_{1,2}$ differ only to keep track of finite-mass corrections. They do not indicate different velocities, and the relation between the QCD and HQET operators is not valid except in the zero-recoil limit. (Compare with Eq.~(3.89) in \Ref{Manohar:2000dt}.)} \begin{equation} \bar{Q}_1\, \gamma^\mu\, Q_2 = \eta_{V}\, \bar{h}_{v_1} \, \gamma^\mu\, h_{v_2} \,, \end{equation} with an analogous expression for the axial current given by the replacement $\eta_V \rightarrow \eta_A$ and $\gamma^\mu \rightarrow \gamma^\mu \hspace{0.8pt} \gamma^5$.
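Anticipating the one-loop expressions quoted in the following equations, a short numerical sketch confirms that the component form of $\eta_V$ (built from $\Delta R^{(1)}_{1,2}$ and $\Delta V^{(1)}_\text{HH}$) resums to the compact closed form, and that the matching scale $\mu$ cancels in the combination; all numerical inputs are arbitrary.

```python
import math

def eta_V_components(alpha_s, m1, m2, mu):
    """eta_V built from (Delta R_1 + Delta R_2)/2 and Delta V_HH."""
    def delta_R(m):
        return -(1.0 / (3.0 * math.pi)) * (3.0 * math.log(mu**2 / m**2) + 4.0)
    delta_V_HH = -(2.0 / (3.0 * math.pi)) * (
        1.0 + 3.0 / (m1 - m2) * (m1 * math.log(m2 / mu) - m2 * math.log(m1 / mu)))
    return 1.0 + (0.5 * (delta_R(m1) + delta_R(m2)) + delta_V_HH) * alpha_s

def eta_V_closed(alpha_s, m1, m2):
    """Compact form: 1 + (alpha_s/pi) (-2 + (m1+m2)/(m1-m2) ln(m1/m2))."""
    return 1.0 + (alpha_s / math.pi) * (
        -2.0 + (m1 + m2) / (m1 - m2) * math.log(m1 / m2))
```

Note that $\eta_V \to 1$ in the equal-mass limit, as required by current conservation at zero recoil.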
The results can be parameterized as (compare with Eqs.~(3.98) and~(3.101) in \Ref{Manohar:2000dt}) \begin{subequations}\label{eqn:hhresult} \begin{align} \eta_V &= 1 + \frac{1}{2}\, \pqty{ \Delta R^{(1)}_1 + \Delta R^{(1)}_2}\,\aS + \Delta V_\text{HH}^{(1)}\, \aS \,, \\ \eta_A &= \eta_V - \frac{2}{3}\frac{\aS}{\pi} \,, \end{align} \end{subequations} where $\Delta R_1^{(1)}$ and $\Delta R_2^{{(1)}}$ are \cref{eqn:DeltaRdef} for the heavy quarks $Q_1$ and $Q_2$ respectively, and \begin{equation} \label{eqn:hhcomponents} \Delta V_\text{HH}^{(1)} = - \frac{2}{3} \frac{\aS}{\pi} \, \bqty{ 1 + \frac{3}{m_1-m_{2}} \pqty{m_1\ln\frac{m_{2}}{\mu} - m_{2}\ln\frac{m_1}{\mu}} } \,, \end{equation} where $m_{1,2}$ corresponds to the mass of $Q_{1,2}$. The matching coefficients then take the form \begin{subequations} \begin{align} \eta_V &= 1 + \frac{\aS}{\pi}\, \pqty{ -2 + \frac{m_1+m_{2}}{m_1-m_{2}} \,\ln\frac{m_1}{m_{2}}} \,, \\[2pt] \eta_A &= 1 + \frac{\aS}{\pi}\, \pqty{ -\frac{8}{3} + \frac{m_1+m_{2}}{m_1-m_{2}}\, \ln\frac{m_1}{m_{2}}} \,, \end{align} \end{subequations} in agreement with Eqs.~(3.97), (3.99), and (3.101) of \Ref{Manohar:2000dt}. \subsubsection*{\boldmath $\beta$-functions} Finally, we reproduce the expressions for the running of the HQET matching coefficients at one-loop and at $\ord(1/m_Q)$ (compare with Eq.~(4.8) in \Ref{Manohar:2000dt}): \begin{equation} \Lag_1 = -\, c_\text{kin}(\mu)\, \bar{h}_v\, \frac{D_\perp^2}{2\hspace{0.8pt} m_Q}\, h_v - c_\text{mag}(\mu)\, g_s\, \bar{h}_v\, \frac{\sigma_{\mu\nu}\hspace{0.8pt} G^{\mu\nu}}{4\hspace{0.8pt} m_Q}\, h_v \,. 
\end{equation} The Renormalization Group Equations (RGEs) are~\cite{Luke:1992cs,Eichten:1990vp,Falk:1990pz} \begin{subequations} \label{eq:RGESummary} \begin{align} \setlength{\jot}{7pt} \mu\hspace{0.8pt}\dv{\mu}\hspace{0.8pt} c_\text{kin} &= 0 \,,\\ \mu\hspace{0.8pt}\dv{\mu}\hspace{0.8pt} c_\text{mag} &= \frac{\aS}{4\hspace{0.8pt}\pi}\, 2\hspace{0.8pt} C_A\, c_\text{mag} \,, \end{align} \end{subequations} where $C_A$ denotes the Casimir factor for the adjoint representation. In what follows, we will show how to reproduce all of these results using functional methods equipped with the CDE technique. \section{Heavy Quark Effective Theory} \label{sec:HQET} One of our main goals here is to initiate the study of functional methods for higher-order calculations in EFTs that are derived by performing a multi-mode decomposition of full theory fields. Such EFTs are quite common; they occur when the kinematics of the process being studied selects a preferred reference frame due to, \eg a conservation law preventing the decay of a heavy particle, or a measurement function that forces the external states into a particular region of phase space. These restrictions imply that there are full theory modes which cannot be put on-shell within the EFT regime of validity, and so it is sensible to integrate them out. This procedure breaks the full theory space-time symmetries to some subgroup, while potentially also introducing new internal ones. A theory of this type is the Heavy Quark Effective Theory (HQET), which describes the fluctuations of a heavy quark ($m_Q \gg \LQCD$) in the presence of light QCD charged degrees of freedom. The simplifications and universal behavior of QCD in the heavy-mass limit were first appreciated by \Refs{Nussinov:1986hw,Shifman:1987rj} and especially \Refs{Isgur:1989vq,Isgur:1989ed}.
A non-covariant EFT making this behavior manifest was later developed and shown to be well-behaved in perturbation theory~\cite{Eichten:1989zv,Grinstein:1990mj}, and finally given a covariant formulation~\cite{Georgi:1990um}. We provide a review of HQET here, with a particular emphasis on the equation of motion due to the critical role it plays in what follows. The reader familiar with HQET can skip ahead to \cref{sec:FunctionalMethods}, while more details can be found in, \eg Sec.~4.1 of \Ref{Manohar:2000dt}. The full theory (QCD) Lagrangian for a heavy quark includes \begin{equation} \label{eq:QLag} \Lag_\text{QCD} \supset \bar{Q}\,\big (i\hspace{0.8pt}\slashed{D} -m_Q\big)\, Q \,, \end{equation} where $Q$ is our heavy quark, and the covariant derivative only includes the QCD interactions, $D_\mu = \partial_\mu - i\hspace{0.8pt} g_s \,G_\mu^a\hspace{0.8pt} T^a$. In the following, additional interactions of the heavy quark will be modeled through the introduction of current operators as needed. Naively, it does not seem that the Lagrangian of \cref{eq:QLag} has a good expansion about the $m_Q \to \infty$ limit. The key insight that allows one to circumvent this issue lies in an appropriately chosen phase redefinition of the field, whose purpose is to cancel the mass term for certain components. The full quark field can then be separated into so-called short distance and long distance fields, where the latter become approximately mass-independent. Physically, this is motivated by the realization that the heavy quark cannot be pushed very far off-shell by degrees of freedom for which $|q| \lesssim \LQCD$. To make this manifest, we decompose a heavy quark's momentum as \begin{equation} \label{eq:pmuHQET} p^\mu = m_Q\hspace{0.8pt} v^\mu + k^\mu \,, \end{equation} where $v^\mu$ is a unit time-like vector, and $k^\mu$ is the residual heavy quark momentum which models small fluctuations about its mass shell.
For kinematic configurations such that all Lorentz invariants depending on $k^\mu$ are small compared to $m_Q$, a truncation at finite order in the $|k|/m_Q$ expansion is justified.\footnote{When we expand assuming $|k| \ll m_Q$, we take each element of the $k^\mu$ vector to be much smaller than $m_Q$.} Since the theory is expanded around the $m_Q \to \infty$ limit, the structure of HQET does not know about the mass of the heavy quark except through various non-dynamical quantities. In particular, no sensitivity to $m_Q$ appears in any of the calculations beyond what is encoded in the structure of the matching coefficients. As a brief aside, we note that the decomposition in \cref{eq:pmuHQET} is not unique. In particular, a simultaneous transformation of $k^\mu$ and $v^\mu$ by a fixed vector: \begin{equation} \label{eq:RPIdef} k^\mu \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, k^\mu + \delta k^\mu \qquad \text{and} \qquad v^\mu \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, v^\mu - \frac{\delta k^\mu}{m_Q} \,, \end{equation} leaves $p^\mu$ unchanged. Enforcing that $v^\mu$ remains a unit vector implies that $\delta k^\mu$ must satisfy $v \cdot \delta k = \delta k^2/(2\hspace{0.8pt} m_Q)$. Reparameterization invariance (RPI) is then the statement that physical observables cannot depend on $\delta k^\mu$, thereby enforcing the residual Lorentz invariance of the underlying full theory. We will explore the interplay between RPI and the functional approach in \cref{sec:RPI} below. 
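Concretely, the statement that \cref{eq:RPIdef} preserves both $p^\mu$ and the normalization $v^2 = 1$, given the constraint $v \cdot \delta k = \delta k^2/(2\hspace{0.8pt} m_Q)$, can be verified with a few lines of arithmetic. The momenta below are arbitrary hypothetical values.

```python
import math

def mdot(a, b):
    """Minkowski product with signature (+,-,-,-)."""
    return a[0] * b[0] - sum(x * y for x, y in zip(a[1:], b[1:]))

m_Q = 4.8                                  # hypothetical heavy mass (units arbitrary)
v = (1.0, 0.0, 0.0, 0.0)                   # rest-frame reference vector
k = (0.05, 0.02, -0.01, 0.03)              # arbitrary residual momentum
p = tuple(m_Q * vi + ki for vi, ki in zip(v, k))

# Choose a spatial shift and solve v.dk = dk^2/(2 m_Q) for its time component:
# dk0 = (dk0^2 - |dk_vec|^2)/(2 m_Q)  <=>  dk0 = m_Q - sqrt(m_Q^2 + |dk_vec|^2)
dk_vec = (0.3, -0.1, 0.2)
s = sum(x * x for x in dk_vec)
dk = (m_Q - math.sqrt(m_Q**2 + s),) + dk_vec

# RPI-transformed pair leaves p^mu untouched and keeps v^2 = 1 exactly
k_rpi = tuple(ki + di for ki, di in zip(k, dk))
v_rpi = tuple(vi - di / m_Q for vi, di in zip(v, dk))
p_rpi = tuple(m_Q * vi + ki for vi, ki in zip(v_rpi, k_rpi))
```

The invariance of $v^2$ is exact (not merely order-by-order) once the constraint on $\delta k^\mu$ is imposed, which is the content of the quadratic relation $v \cdot \delta k = \delta k^2/(2\hspace{0.8pt} m_Q)$.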
In order to make use of \cref{eq:pmuHQET}, we decompose the heavy quark field into two fields by first extracting the rapidly-varying phase that remains unchanged by low-energy interactions, and then using $v^\mu$-dependent projectors to split the remaining field into short distance and long distance components: \begin{subequations}\label{eq:QProjections} \begin{align} \setlength{\jot}{7pt} h_v(x) &= e^{i\hspace{0.8pt} m_Q\hspace{0.8pt} v\cdot x}\, \frac{1 + \slashed{v}}{2} Q(x) \,, \\ H_v(x) &= e^{i\hspace{0.8pt} m_Q\hspace{0.8pt} v\cdot x}\, \frac{1 - \slashed{v}}{2}\, Q(x)\,, \end{align} \end{subequations} or equivalently \begin{align} Q(x) = e^{-i\hspace{0.8pt} m_Q\hspace{0.8pt} v\cdot x}\, \Big[h_v(x) + H_v(x)\Big]\,. \label{eq:QSplit} \end{align} Plugging \cref{eq:QSplit} into \cref{eq:QLag} yields \begin{equation} \Lag \supset \bar{h}_v \,\big(i\hspace{0.8pt} v \cdot D\big)\, h_v - \bar{H}_v\, \big(i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q\big)\, H_v + \bar{H}_v\, i\hspace{0.8pt} \slashed{D}_\perp\, h_v + \bar{h}_v\, i\hspace{0.8pt}\slashed{D}_\perp\, H_v\,, \label{eqn:LQCD} \end{equation} since the projectors enforce $\slashed{v}\, h_v = h_v$ and $\slashed{v}\, H_v = -H_v$ (and hence $\bar{H}_v\, \slashed{v}\, h_v = \bar{h}_v\, \slashed{v}\, H_v = 0$), and we have defined \begin{align} D_\perp^\mu \equiv D^\mu - v^\mu\, (v\cdot D)\,. \label{eq:Dperp} \end{align} The Lagrangian in \cref{eqn:LQCD} makes the interpretation of $H_v$ as the heavy mode manifest --- this field has an effective mass of $2\hspace{0.8pt} m_Q$, permitting a description at lower energies in terms of $h_v$ alone. 
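The algebra underlying \cref{eq:QProjections} is that $P_\pm = (1 \pm \slashed{v})/2$ are orthogonal, complete projectors whenever $v^2 = 1$. This is easy to check with an explicit representation of the Dirac matrices; the sketch below uses the Dirac basis and a hypothetical boosted vector $v^\mu = (\cosh\eta,\, 0,\, 0,\, \sinh\eta)$.

```python
import math

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def lin(cA, A, cB, B):
    return [[cA * A[i][j] + cB * B[i][j] for j in range(4)] for i in range(4)]

def maxdiff(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(4) for j in range(4))

I4 = [[float(i == j) for j in range(4)] for i in range(4)]
g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]    # gamma^0, Dirac basis
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]    # gamma^3, Dirac basis

eta = 0.7                                                # hypothetical rapidity
vslash = lin(math.cosh(eta), g0, -math.sinh(eta), g3)    # slash(v) = v_mu gamma^mu

P_plus = lin(0.5, I4, 0.5, vslash)                       # (1 + vslash)/2
P_minus = lin(0.5, I4, -0.5, vslash)                     # (1 - vslash)/2
```

Since $\slashed{v}^2 = v^2 = 1$, one finds $P_\pm^2 = P_\pm$, $P_+ P_- = 0$, and $P_+ + P_- = 1$, which is exactly what allows $Q$ to be split cleanly into $h_v$ and $H_v$.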
To formally integrate out $H_v$ at tree-level, we solve for its equation of motion \begin{equation} H_v = \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\, i\hspace{0.8pt} \slashed{D}_\perp\, h_v\,, \end{equation} and plug it into \cref{eqn:LQCD}, which yields \begin{equation} \mathcal{L}_\text{HQET}^\text{non-local} \supset \bar{h}_v\, \Bigg( i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt} \slashed{D}_\perp\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\, i\hspace{0.8pt} \slashed{D}_\perp \Bigg)\, h_v\,. \label{eqn:LHQETnonlocal} \end{equation} Provided that the momentum of the field $h_v$ satisfies $|k| \ll m_Q$, \cref{eqn:LHQETnonlocal} can be expanded in a convergent series of local terms \begin{equation} \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} = \frac{1}{2\hspace{0.8pt} m_Q} - \frac{1}{\big(2\hspace{0.8pt} m_Q\big)^2} (i\hspace{0.8pt} v \cdot D) + \frac{1}{\big(2\hspace{0.8pt} m_Q\big)^3}(i\hspace{0.8pt} v \cdot D)^2 + \dotsb\;. \label{eq:HQETExpansion} \end{equation} Since the leading term is then insensitive to the Dirac structure of the spinor components of the field, the theory has gained an approximate $SU(2)$ heavy-spin symmetry, which enlarges to $SU(2\hspace{0.8pt} n_h)$ in the presence of $n_h$ heavy flavors. This is an example of the emergent EFT symmetries mentioned above. To understand how this procedure for deriving the EFT Lagrangian can be interpreted at the path integral level, we note that the projection operators in \cref{eq:QProjections} imply that $h_v$ and $H_v$ can be treated as two orthogonal projections of $Q$. As such, the path-integral measure factorizes: \begin{equation} \int \fdd{Q} = \int\fdd{h_v}\, \int\fdd{H_v}\,. \end{equation} Since the resulting Lagrangian is quadratic in $H_v$, the Gaussian integral over the short-distance field immediately yields \cref{eqn:LHQETnonlocal}.
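Since only the single operator $i\hspace{0.8pt} v \cdot D$ appears in the denominator, no ordering subtleties arise and \cref{eq:HQETExpansion} is an ordinary geometric series. Its convergence for $|k| \ll m_Q$ can be illustrated numerically; the rational inputs below are arbitrary stand-ins (so the comparison is exact arithmetic).

```python
from fractions import Fraction

def propagator_series(x, m_Q, order):
    """Partial sum of 1/(x + 2 m_Q) = sum_n (-x)^n / (2 m_Q)^(n+1), valid for |x| < 2 m_Q."""
    return sum((-x)**n / (2 * m_Q)**(n + 1) for n in range(order + 1))

x = Fraction(1, 10)      # stand-in eigenvalue of i v.D, playing the role of |k| ~ 0.1
m_Q = Fraction(5)        # heavy mass in the same (arbitrary) units
exact = 1 / (x + 2 * m_Q)
```

Each successive order shrinks the truncation error by a factor of $|x|/(2\hspace{0.8pt} m_Q)$, mirroring the suppression of higher terms in the $1/m_Q$ expansion.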
Therefore, the procedure described here allows us to literally integrate out the short distance modes of our original quark. This elegant derivation of the tree-level HQET Lagrangian by way of the path-integral measure was first presented in \Ref{Mannel:1991mc}. Furthermore, it also justifies the use of the functional approach to matching and running developed in \Ref{Henning:2016lyp}. Note that there would be no significant obstructions even if our projectors were quantum operators instead of simply being functions of the kinematics as above, since all objects in the path integral are treated as operators acting on fields. As long as the mode decomposition that is used to define an EFT can be written in terms of operator-valued projectors, we expect the methods developed here to be universally applicable. \subsection{Decoupling the Heavy Quark} \label{sec:Decoupling} The decomposition of the full QCD Lagrangian given in \cref{eqn:LQCD} makes it clear that $H_v$ can be identified as a short distance mode which one can integrate out in the limit that $|k| \ll m_Q$ for all fields in a given process. This procedure yields the non-local Lagrangian given in \cref{eqn:LHQETnonlocal}, which makes predictions that are equivalent to QCD for processes involving only $h_v$ modes in the external states. Expanding the non-local Lagrangian using \cref{eq:HQETExpansion} and truncating the series at some order in $1/m_Q$ yields an EFT which is valid for momenta $|k| \ll m_Q$. In this subsection, we will briefly review the connection between this procedure and the fact that the heavy quark should not contribute to the running of the QCD gauge coupling at scales below $m_Q$. This has a conceptual benefit, and will also be of practical importance since similar arguments will be used below when we derive our master formula for one-loop matching given in \cref{eqn:SHQET1loop}.
For this argument, we will work directly with the $h_v$ and $H_v$ fields, and we will use the terms that are diagonal in these fields to derive propagators, while the mixed terms will be treated as interactions. Two features are of critical importance: the kinetic terms are linear in the momentum of the state, and there is a relative minus sign between the $h_v$ and the $H_v$ kinetic terms. The latter fact implies that the $i\hspace{0.8pt} \eps$ factors in the propagators will take opposite signs, such that the same Wick rotation can be used to Euclideanize diagrams involving both $h_v$ and $H_v$ propagators. The QCD Lagrangian written as \cref{eqn:LQCD} has three types of couplings between the heavy quark modes and the gluon: diagonal couplings $h_v h_v G_\mu$, $H_v H_v G_\mu$, and an off-diagonal coupling $H_v h_v G_\mu$. The integrals that result when computing the contributions of the two diagonal loops to the gluon vacuum polarization schematically take the form \begin{align} I_\text{diag} \sim \int \ddp{q} \left(\frac{1}{v\cdot q + m \pm i\hspace{0.8pt}\eps}\right) \left( \frac{1}{v\cdot (q+p) + m \pm i\hspace{0.8pt}\eps}\right) \,, \label{eqn:Idiagonal} \end{align} where $m$ is an IR regulator for the $h_v$ loop and is equal to $2\hspace{0.8pt} m_Q$ for the $H_v$ loop. Due to the linear nature of the kinetic terms and the sign on the factor of $i\hspace{0.8pt}\eps$, when integrating over $q^0$, the poles reside on only one side of the real axis, and one can deform the contour away from all of them, yielding zero contribution. However, the situation is different for the mixed loop, where the integral takes the form \begin{align} I_\text{off-diag} \sim \int \ddp{q} \left(\frac{1}{v\cdot q + m + i\hspace{0.8pt}\eps}\right) \left( \frac{1}{v\cdot (q+p) + 2\hspace{0.8pt} m_Q - i\hspace{0.8pt}\eps}\right)\, .
\label{eqn:Ioffdiagonal} \end{align} Now we see that the opposite sign on the $i\hspace{0.8pt} \eps$ terms implies that there is a pole in both the positive and negative $\Im q^0$ half-planes. The contour will enclose a pole for any possible deformation, yielding a non-zero contribution to the $\beta$-function. The argument above makes it clear why in HQET the heavy quark does not contribute to the RGEs of the gauge coupling. Once we construct HQET by integrating out $H_v$ and expanding, the heavy field $H_v$ is non-propagating. Therefore, there are no diagrams that yield contributions of the type in \cref{eqn:Ioffdiagonal}. The mode $h_v$ only yields potentially relevant integrals of the type in \cref{eqn:Idiagonal}, which vanish as we have argued. Similar reasoning will be critical to the derivation of the master formula for matching in \cref{eqn:SHQET1loop} below. \section{Residues Using Feynman Diagrams} \label{sec:DiagrammaticResidue} In this section, we will review the diagrammatic approach to calculating matching coefficients, using the simple example of the residue of the on-shell propagator for concreteness. Along the way, we will encounter an order of limits issue, which is a manifestation of the IR divergence structure of QCD. We will then revisit the calculation by relying on the so-called method of regions~\cite{Beneke:1997zp, Smirnov:2002pj} that will avoid the need to deal with this subtlety directly. This has the additional benefit of providing a familiar setting to review the method of regions, which will be a critical tool in the derivation of our master formula for HQET matching coefficients below. When performing matching calculations, one typically equates matrix elements as calculated in the full theory and the EFT. Particular care must be taken to include the appropriate residue factors for the external states to ensure that the LSZ reduction is correctly implemented. 
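As a brief numerical aside, the contour argument surrounding \cref{eqn:Idiagonal,eqn:Ioffdiagonal} can be made concrete in a one-dimensional toy integral: with both $i\eps$ factors on the same side of the real axis the integral vanishes (up to finite-range truncation effects), while opposite signs trap a pole. All parameter values below are arbitrary, with $\eps$ kept finite for numerical stability.

```python
import math

a, b, eps = 1.0, 2.0, 0.5    # arbitrary offsets; eps finite for numerics

def integrate(f, R=2000.0, n=200000):
    """Midpoint rule on [-R, R]; the integrands fall off like 1/q^2,
    so the truncation error is O(1/R)."""
    h = 2.0 * R / n
    return sum(f(-R + (j + 0.5) * h) for j in range(n)) * h

same_sign = integrate(lambda q: 1.0 / ((q + a + 1j * eps) * (q + b + 1j * eps)))
mixed = integrate(lambda q: 1.0 / ((q + a + 1j * eps) * (q + b - 1j * eps)))

# Closing the contour in the upper half-plane encloses only the pole at
# q = -b + i*eps in the mixed case: 2 pi i times its residue.
expected_mixed = 2j * math.pi / (a - b + 2j * eps)
```

The same-sign integral is consistent with zero, while the mixed-sign integral reproduces the residue-theorem prediction, mirroring why only mixed $h_v$--$H_v$ loops contribute to matching.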
One way to extract the residue for the heavy quark $Q$ is to take the derivative of the 1PI corrections to the propagator $-i\hspace{0.8pt} \Sigma(\slashed{p})$, and evaluate it on the mass shell. We will compute this factor for a quark in QCD $\RQCD =1 + \RQCD^{(1)} \aS + \dotsb$, and for a quark in HQET $\RHQET = 1 + \RHQET^{(1)} \aS + \dotsb$, from which we get the quantity that appears in matching calculations $\Delta R^{(1)} = \RQCD^{(1)} - \RHQET^{(1)}$, see \cref{eqn:DeltaRdef}. \subsection{Residue in QCD} The one-loop QCD residue $\RQCD^{(1)}$ can be obtained by computing the two-point 1PI function \begin{align} -i\hspace{0.8pt}\Sigma _\text{QCD}(\slashed{p}) &= -\frac{4}{3}\, g_s^2\, \mu^{2\eps} \int \ddp{q} \frac{(2-d)(\slashed{q} + \slashed{p}) + d\,m_Q}{q^2\bqty{(q+p)^2 - m_Q^2}} - \text{c.t.} \notag\\ &= -i\bqty{ (A - A_\text{ct}) \,m_Q + (B - B_\text{ct})\,\slashed{p} }\, . \label{eqn:SigmaQCDIntegral} \end{align} Here as well as throughout this paper, ``c.t.'' denotes the counter term contributions, and we take the Feynman gauge $\xi=1$ for gauge boson propagators. 
Performing standard manipulations, we derive\footnote{See Eqs.~(3.53), (3.55), and~(3.57) in \Ref{Manohar:2000dt}, noting again that we use $d = 4-2\hspace{0.8pt}\eps$, while \Ref{Manohar:2000dt} uses $d = 4-\eps$.} \begin{subequations} \begin{align} A\big(\hspace{0.8pt} p^2\hspace{0.8pt} \big) &= \frac{\aS}{3\hspace{0.8pt}\pi} \Big(4\hspace{0.8pt}\pi\,\mu^2\Big)^\eps\, \Gamma(\eps)\, 4\left(1 - \frac{\eps}{2}\right) \int_0^1 \text{d} x \left[m_Q^2\, x - p^2\, x(1-x)\right]^{-\eps}\, , \\ B\big(\hspace{0.8pt} p^2\hspace{0.8pt} \big) &= -\frac{\aS}{3\hspace{0.8pt}\pi} \Big(4\hspace{0.8pt}\pi\,\mu^2\Big)^\eps\, \Gamma(\eps)\, 2\,(1 - \eps) \int_0^1 \text{d} x \,(1-x) \left[ m_Q^2\, x - p^2\, x\hspace{0.8pt}(1-x) \right]^{-\eps}\, , \\ A_\text{ct} &= \frac{\aS}{3\hspace{0.8pt}\pi}\, 4\left(\MSbardiv\right)\, , \\ B_\text{ct} &= -\frac{\aS}{3\hspace{0.8pt}\pi} \left(\MSbardiv\right) \,, \end{align} \end{subequations} where the \MSbar counter terms $A_\text{ct}$ and $B_\text{ct}$ are derived by taking the $\eps$ expansion of $A\big(\hspace{0.8pt} p^2\hspace{0.8pt}\big)$ and $B\big(\hspace{0.8pt} p^2\hspace{0.8pt}\big)$. This yields \begin{subequations} \begin{align} \lim_{\eps \to 0} (A - A_\text{ct}) &= \frac{4}{3}\,\frac{\aS}{\pi}\, \pqty{ \frac{3}{2} - \frac{m_Q^2}{p^2} \ln\frac{m_Q^2}{\mu^2} + \frac{m_Q^2-p^2}{p^2} \ln\frac{m_Q^2 - p^2}{\mu^2} }\,, \\[3pt] \lim_{\eps \to 0} (B - B_\text{ct}) &= -\frac{\aS}{3\hspace{0.8pt}\pi}\, \pqty{ 1 - \frac{m_Q^4}{p^4} \ln\frac{m_Q^2}{\mu^2} + \frac{m_Q^4-p^4}{p^4} \ln\frac{m_Q^2 - p^2}{\mu^2} + \frac{m_Q^2}{p^2} }\,. \end{align} \end{subequations} These results are finite but non-analytic at $p^2=m_Q^2$. In particular, their derivatives with respect to $p^2$ are divergent when evaluated at $p^2=m_Q^2$. This non-analyticity is a manifestation of the IR divergences that appear for on-shell kinematics. One way to sidestep this issue, thereby allowing us to extract the residue, is to keep $\eps \neq 0$ until after taking the derivative.
It is then straightforward to derive \begin{align} \RQCD^{(1)}\aS &= \eval{\dv{\Sigma_\text{QCD}(\slashed{p})}{\slashed{p}}}_{\slashed{p}=m_Q}\notag\\[3pt] &= 2\,m_Q^2 \eval{\bqty{ \dv{(A - A_\text{ct})}{p^2} + \dv{(B - B_\text{ct})}{p^2} }}_{p^2 = m_Q^2} + \eval{(B - B_\text{ct})}_{p^2 = m_Q^2} \notag\\ &= -\frac{\aS}{3\hspace{0.8pt}\pi} \bqty{ {2\,\pqty{\MSbardiv} + 3\,\ln\frac{\mu^2}{m_Q^2} + 4} }\,, \label{eqn:RQCDresult} \end{align} where $\eps$ is specifically regulating the IR divergence. Next, we will derive the residue in HQET, where we will see that the same IR divergent terms appear. Therefore, the object of interest $\Delta R^{(1)}$ is IR finite. This is to be expected: a standard test that one has correctly implemented the matching procedure, \ie that one is working with the correct low energy description, is to check that the full theory and the EFT have the same IR divergence structure. We will provide a procedure for directly extracting $\Delta R^{(1)}$ that avoids this IR subtlety by utilizing the method of regions, see \cref{subsec:ResidueRegions} below. \subsection{Residue in HQET} \label{sec:ResInHQET} Next, we perform the two-point function calculation in HQET.
Diagrammatically, the structure is identical to the QCD calculation where the relativistic quark propagator is replaced by the HQET propagator, yielding \begin{align} -i\hspace{0.8pt}\Sigma_\text{HQET}(v \cdot k) &= -\frac{4}{3}\, g_s^2\, \mu^{2\eps} \int \ddp{q} \frac{1}{q^2\hspace{0.8pt} \big(v \cdot (q + k)\big)} - \text{c.t.} \notag\\ &= - i\hspace{0.8pt}\Big[ C(v \cdot k) - C_\text{ct}(v \cdot k)\Big]\,, \label{eqn:SigmaHQETIntegral} \end{align} which evaluates to\footnote{See Eqs.~(3.67) and~(3.69) in \Ref{Manohar:2000dt}.} \begin{subequations} \begin{align} C(v \cdot k) &= -\frac{2}{3}\frac{\aS}{\pi}\hspace{0.8pt} \pqty{4\hspace{0.8pt}\pi\,\mu^2}^\eps\, \Gamma(\eps)(-v \cdot k)^{1 - 2\eps}\,\frac{\Gamma(1 - \eps)\, \Gamma\Big(\frac{1}{2} + \eps\Big)}{(1 - 2\hspace{0.8pt}\eps)\,\Gamma\Big(\frac{1}{2}\Big)}\,, \\ C_\text{ct}(v \cdot k) &= \frac{2}{3}\frac{\aS}{\pi}\hspace{0.8pt} v \cdot k\,\pqty{\MSbardiv}\,,\label{eq:Cct} \end{align} \end{subequations} where the counter term is again determined in the $\overline{\text{MS}}$ scheme.\footnote{Note that when computing the counter term in dim.\ reg., one must be careful to isolate the UV divergence. This is done here by keeping $v\cdot k \neq 0$ as an IR regulator at intermediate steps. This is why we must take a derivative of \cref{eq:Cct} before sending $v\cdot k \to 0$ to derive \cref{eq:dCct}. 
If instead we took $v\cdot k \to 0$ first, we would effectively be using dim.\ reg.\ to regulate the IR divergence as well.} Noting that to zeroth order in $1/m_Q$ the on-shell condition for $h_v$ is $v\cdot k =0$, we evaluate \begin{subequations} \begin{align} \eval{ \dv{C(v \cdot k)}{(v\cdot k)} }_{v \cdot k = 0} &= \eval{ \frac{2}{3}\frac{\aS}{\pi}\hspace{0.8pt} \pqty{4\hspace{0.8pt}\pi\,\mu^2}^\eps\, \Gamma(\eps)\, (-v \cdot k)^{-2\eps}\, \frac{\Gamma(1-\eps)\Gamma\Big( \frac{1}{2} + \eps\Big)}{\Gamma(\frac{1}{2})} }_{v \cdot k = 0} = \,0\,, \\[5pt] \eval{ \dv{C_\text{ct}(v \cdot k)}{(v\cdot k)} }_{v \cdot k = 0} &= \frac{2}{3}\frac{\aS}{\pi}\hspace{0.8pt}\pqty{\MSbardiv}\,, \label{eq:dCct} \end{align} \end{subequations} which yields \begin{equation} \RHQET^{(1)} \aS = \frac{\dd C(v \cdot k) - \dd C_\text{ct}(v \cdot k)}{\dd (v \cdot k)}\bigg|_{v \cdot k = 0} = -\frac{2}{3}\frac{\aS}{\pi}\hspace{0.8pt} \pqty{\MSbardiv}\,. \label{eqn:RHQETresult} \end{equation} As in the QCD case, $\Sigma_\text{HQET}$ is not analytic at $v \cdot k=0$ in the $\eps \to 0$ limit due to an IR divergence, and so we had to defer taking $\eps \to 0$ until after taking the derivative with respect to $v\cdot k$. \subsection{Residue Difference from the Method of Regions} \label{subsec:ResidueRegions} The IR divergences in \cref{eqn:RQCDresult,eqn:RHQETresult} are the same, as they must be if the EFT correctly captures the dynamics of the full theory below a certain scale. This implies that the residue difference is IR finite: \begin{equation} \Delta R^{(1)} \aS = \pqty{\RQCD^{(1)} - \RHQET^{(1)}}\, \aS = -\frac{\aS}{3\hspace{0.8pt}\pi} \pqty{ 3\hspace{0.8pt}\ln\frac{\mu^2}{m_Q^2} + 4}\,. \label{eqn:DeltaRresult} \end{equation} Operationally, we were forced to maintain $\epsilon \neq 0$ until we evaluated this difference; it would therefore be convenient to have an approach that allows us to compute $\Delta R^{(1)}$ directly.
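As a quick numerical sanity check of the pole bookkeeping in \cref{eqn:DeltaRresult}, the following sketch (our own illustration, taking the standard $\overline{\text{MS}}$ combination $1/\eps - \gamma_E + \ln 4\pi$ for the divergence denoted $\MSbardiv$) confirms that the bracketed expressions in \cref{eqn:RQCDresult,eqn:RHQETresult} individually blow up as $\eps \to 0$, while their difference is $\eps$-independent and equal to $3 \ln(\mu^2/m_Q^2) + 4$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def msbar_div(eps):
    # Standard MS-bar divergence: 1/eps - gamma_E + ln(4 pi).
    return 1.0 / eps - EULER_GAMMA + math.log(4.0 * math.pi)

def r_qcd_bracket(eps, log_ratio):
    # Bracket of eqn:RQCDresult (R_QCD^(1) in units of -alpha_s / (3 pi)).
    return 2.0 * msbar_div(eps) + 3.0 * log_ratio + 4.0

def r_hqet_bracket(eps):
    # Bracket of eqn:RHQETresult in the same units.
    return 2.0 * msbar_div(eps)

L = math.log(2.0)  # an arbitrary value of ln(mu^2 / m_Q^2)
for eps in (1e-2, 1e-4, 1e-6):
    diff = r_qcd_bracket(eps, L) - r_hqet_bracket(eps)
    print(f"eps={eps:.0e}  QCD={r_qcd_bracket(eps, L):.4e}  "
          f"HQET={r_hqet_bracket(eps):.4e}  diff={diff:.6f}")
# The individual brackets diverge as eps -> 0; the difference stays at 3*L + 4.
```

The cancellation of the $1/\eps$ poles in the difference is exact, which is the numerical counterpart of the statement that the QCD and HQET residues share the same IR divergence.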
This can be accomplished by exploiting a technique known as the method of regions~\cite{Beneke:1997zp,Smirnov:2002pj}. To begin, we rewrite some expressions in order to make the comparison between $\RQCD^{(1)}$ and $\RHQET^{(1)}$ more obvious. Within QCD, we set $p^\mu = m_Q\, v^\mu + k^\mu$ and define \begin{equation} \Xi_\text{QCD}(k) \equiv \Sigma_\text{QCD}(\slashed{p})\, \frac{1 + \slashed{v}}{2}\,, \end{equation} so that \begin{align} \eval{ v^\mu\, \dv{\Xi_\text{QCD}(k)}{k^\mu} }_{v \cdot k = 0} &= \slashed{v}\, \eval{ \dv{\Sigma_\text{QCD}(\slashed{p})}{\slashed{p}} }_{\slashed{p} = m_Q} \frac{1 + \slashed{v}}{2} \,+ \,\ord\pqty{\frac{1}{m_Q^2}} \notag\\ &= \RQCD^{(1)} \aS\, \frac{1 + \slashed{v}}{2} + \ord\pqty{\frac{1}{m_Q^2}}\,. \end{align} Note that we have changed the evaluation condition from $v \cdot k = 0$ to $\slashed{p} = m_Q$ in the first line, which is valid up to $\ord(1/m_Q)$. Similarly, we define the HQET quantity \begin{equation} \Xi_\text{HQET}(k) \equiv \Sigma_\text{HQET}(v \cdot k)\, \frac{1 + \slashed{v}}{2}\,, \end{equation} which is related to $\RHQET^{(1)}$ as \begin{equation} \eval{ v^\mu\, \dv{\Xi_\text{HQET}(k)}{k^\mu} }_{v \cdot k = 0} = \eval{ \dv{\Xi_\text{HQET}(v \cdot k)}{(v \cdot k)} }_{v \cdot k = 0} \frac{1 + \slashed{v}}{2} = \RHQET^{(1)}\, \aS \frac{1 + \slashed{v}}{2}\,. \end{equation} This allows us to simply express the difference as \begin{equation} \Delta R^{(1)} \aS\, \frac{1 + \slashed{v}}{2} = \eval{ v^\mu\, \dv{\bqty{\Xi_\text{QCD}(k) - \Xi_\text{HQET}(k)}}{k^\mu} }_{v \cdot k = 0} \,+\, \ord\pqty{\frac{1}{m_Q^2}}\,.
\label{eqn:DelRXiSubtract} \end{equation} In order to evaluate this difference, we take the integral expressions for $\Sigma_\text{QCD}$ and $\Sigma_\text{HQET}$ given in \cref{eqn:SigmaQCDIntegral,eqn:SigmaHQETIntegral} to write \begin{subequations} \begin{align} -i\hspace{0.8pt}\Xi_\text{QCD}(k) &= -\frac{4}{3}\, g_s^2\, \mu^{2\eps} \int \ddp{q} \frac{(2 - d)(\slashed{q} + \slashed{p}) + d\,m_Q}{q^2\,\bqty{ (q + p)^2 - m_Q^2}} \frac{1 + \slashed{v}}{2} - \text{c.t.} \,, \label{eq:XiQCD} \\ -i\hspace{0.8pt}\Xi_\text{HQET}(k) &= -\frac{4}{3}\, g_s^2\, \mu^{2\eps} \int \ddp{q} \frac{1}{q^2\,\bqty{v \cdot (q + k)}} \frac{1 + \slashed{v}}{2} - \text{c.t.} \,. \label{eq:XiHQET} \end{align} \end{subequations} Note that the integrands of $\Xi_\text{QCD}$ and $\Xi_\text{HQET}$ are equal in the heavy quark limit: \begin{equation} \lim_{m_Q \to \infty} \frac{(2 - d)(\slashed{q} + \slashed{p}) + d\,m_Q}{q^2\, \bqty{(q + p)^2 - m_Q^2}} \frac{1 + \slashed{v}}{2} = \frac{(2 - d) \slashed{v} + d}{2\,q^2\, \bqty{v \cdot (q + k)}} \frac{1 + \slashed{v}}{2} = \frac{1}{q^2\,\bqty{v \cdot (q + k)}} \frac{1 + \slashed{v}}{2}\,. \label{eqn:ExpansionNaive} \end{equation} Naively, one might be tempted to conclude that $\Xi_\text{QCD} = \Xi_\text{HQET}$ at leading order in $1/m_Q$, which would imply that $\Delta R^{(1)}$ vanishes. This would be in conflict with the result derived in \cref{eqn:DeltaRresult}. The discrepancy is due to an order-of-limits issue: since the integral is taken over all momenta, one cannot expand the integrand for large $m_Q$ before integrating. The method of regions~\cite{Beneke:1997zp,Smirnov:2002pj} is a technique for consistently expanding the integrands such that non-analytic dependence on small parameters is correctly reproduced after integration. The key is to isolate regions of the integration domain that are dominated by single scale contributions. For example, the QCD integral contains two physical scales that are separated by a large hierarchy $|k| \ll m_Q$.
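Before setting up the region decomposition for the QCD integral, it may help to see the idea at work on an elementary toy integral (entirely our own illustration, with hypothetical scales $a \ll b$ playing the roles of $|k|$ and $m_Q$): for $I(a, b) = \int_0^\infty \dd{x}\, [(x+a)(x+b)]^{-1}$, splitting at an intermediate cutoff $a \ll \Lambda \ll b$ and keeping only the leading term of the integrand in each region reproduces the exact result $\ln(b/a)/(b-a)$ at leading power:

```python
import math

def exact(a, b):
    # Closed form of I = \int_0^inf dx / ((x+a)(x+b)), via partial fractions.
    return math.log(b / a) / (b - a)

def soft(a, b, lam):
    # Soft region x < Lambda: expand 1/(x+b) -> 1/b, then integrate exactly.
    return math.log((lam + a) / a) / b

def hard(a, b, lam):
    # Hard region x > Lambda: expand 1/(x+a) -> 1/x, then integrate exactly.
    return math.log((lam + b) / lam) / b

a, b = 1e-8, 1.0
for lam in (1e-5, 1e-4, 1e-3):  # any cutoff with a << Lambda << b works
    total = soft(a, b, lam) + hard(a, b, lam)
    print(f"Lambda={lam:.0e}  soft+hard={total:.6f}  exact={exact(a, b):.6f}")
```

Each region by itself depends on the arbitrary cutoff $\Lambda$; only the sum is $\Lambda$-independent, up to power corrections in $a/\Lambda$ and $\Lambda/b$.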
To expand the integrand, we introduce an intermediate cutoff scale $\Lambda$ such that $|k| \ll \Lambda \ll m_Q$, which can be used to split the integral over $q^\mu$ into two pieces. The first receives support from the soft region $|q| < \Lambda$, while the second is non-zero due to the hard region $|q| > \Lambda$: \begin{equation} \Xi_\text{QCD}(k) = \Xi_\text{QCD,soft}(k,\Lambda) + \Xi_\text{QCD,hard}(k,\Lambda)\,. \end{equation} The domain of integration is now bounded, so that one can expand the integrand according to the assumed scaling in each region, while maintaining $|k| \ll m_Q$. The soft region can be isolated by assuming $|q| < \Lambda \ll m_Q$ holds, while the hard region where $|k| \ll \Lambda < |q|$ yields a different expansion. Once the integrand has been expanded, the domain of integration can be restored to infinity when using dimensional regularization so that the explicit cutoff $\Lambda$ no longer appears --- the contribution from the extended integration limits is scaleless and therefore vanishes. This is reflected by our notation since we drop the $\Lambda$ dependence for the expressions where the domain of the integral is taken to infinity. Applying this procedure to the two-point function integrals, we recognize that the soft expansion is identical to the naive approach in \cref{eqn:ExpansionNaive}. Therefore, we conclude that \begin{equation} \Xi_\text{QCD,soft}(k) = \Xi_\text{HQET}(k) \,. \label{eqn:Cancellation} \end{equation} This tells us that the residue difference is fully determined by the hard region from QCD: \begin{equation} \Delta R^{(1)} \aS\hspace{0.8pt} \frac{1 + \slashed{v}}{2} = \eval{ v^\mu\, \dv{\bqty{\Xi_\text{QCD}(k) - \Xi_\text{HQET}(k)}}{k^\mu} }_{v \cdot k = 0} = \eval{ v^\mu\, \dv{\Xi_\text{QCD,hard}(k)}{k^\mu} }_{v \cdot k = 0}\,. \label{eqn:DeltaRnonzero} \end{equation} Note that \cref{eqn:Cancellation} holds to all orders in $1/m_Q$.
In \cref{eqn:ExpansionNaive}, this was demonstrated only at leading order, \textit{i.e.}, in the $m_Q\to\infty$ limit. To explicitly verify this at higher orders in $1/m_Q$, one needs to include additional diagrams that contribute to \cref{eq:XiHQET} due to higher order operators in HQET. Then the result in \cref{eqn:DeltaRnonzero} still holds, with an appropriate modification of the evaluation condition $v\cdot k=0$. The practical implication of \cref{eqn:DeltaRnonzero} is that we no longer have to track the individual IR divergences. Instead, the residue difference $\Delta R^{(1)}$ is determined by a single loop integral $\Xi_\text{QCD,hard}$, and the only divergences that appear are from the UV, which can be subtracted using counter terms. For completeness, we evaluate this hard region integral. Starting with \cref{eq:XiQCD}, we expand the integrand assuming $|k| \ll |q|$ and $|k| \ll m_Q$: \begin{align} -i\hspace{0.8pt}\Xi_\text{QCD,hard}(k) &= -\frac{4}{3}\, g_s^2\, \mu^{2\eps} \int \ddp{q} \frac{1}{q^2}\, \Bigg\{ \frac{2\,m_Q + (2 - d)(\slashed{q} + \slashed{k})}{(q + m_Q\, v)^2 - m_Q^2} \notag\\ &\hspace{80pt} - \frac{\bqty{2\,m_Q + (2 - d)\slashed{q}} 2\,(q + m_Q v) \cdot k}{\bqty{(q + m_Q\, v)^2 - m_Q^2}^2} \Bigg\} \frac{1 + \slashed{v}}{2} - \text{c.t.} \,, \end{align} where we have truncated the expansion at linear order in $k$, since this is all that is required to isolate the residue.
Evaluating the integral yields \begin{align} \Delta R^{(1)} \aS\hspace{0.8pt} \frac{1 + \slashed{v}}{2} &= \eval{ v^\mu\, \dv{\Xi_\text{QCD,hard}(k)}{k^\mu} }_{v \cdot k = 0} \notag\\[5pt] &= -\frac{\aS}{3\hspace{0.8pt}\pi} \pqty{\frac{4\hspace{0.8pt}\pi\,\mu^2}{m_Q^2}}^\eps \Gamma(\eps)\hspace{0.8pt} \bqty{1 - \frac{4\hspace{0.8pt}\eps}{(1 - 2\hspace{0.8pt}\eps)(-2\hspace{0.8pt}\eps)}} \frac{1 + \slashed{v}}{2} - \text{c.t.} \notag\\[5pt] &= -\frac{\aS}{3\hspace{0.8pt}\pi} \pqty{3\hspace{0.8pt}\ln\frac{\mu^2}{m_Q^2} + 4}\, \frac{1 + \slashed{v}}{2} \,, \end{align} which reproduces the result in \cref{eqn:DeltaRresult}. \section{Matching and Running Using Functional Methods} \label{sec:FunctionalMethods} This section reviews how to utilize the functional approach for calculating matching coefficients and RGEs. To set the stage, we first briefly review the one-particle irreducible (1PI) effective action, a key object in functional methods that can be computed directly by evaluating the path integral. The general matching condition can be compactly expressed by equating the 1PI effective actions of the UV theory and the EFT, whose solution gives us a direct link between the Lagrangians of the two theories, \eg see~\Ref{Henning:2016lyp}. Finally, we briefly review how to extract RGEs from the 1PI effective action; more details for RGE calculations are provided in \cref{appsec:ExamplesRGE}. \subsection{1PI Effective Action from a Functional Determinant} \label{subsec:1PIGamma} In modern functional methods, the central object of study is the so-called 1PI effective action $\Gamma[\phi]$, where $\phi$ collectively denotes all the fields in the theory. It is a generating functional for the 1PI correlation functions: \begin{equation} \Big\langle \phi(x_1) \dotsm \phi(x_n)\Big\rangle_\text{1PI} = \, i\hspace{0.8pt} \frac{\var[n]{\Gamma[\phi]}}{\var{\phi(x_1)} \dotsi \var{\phi(x_n)}} \qquad \,\, \text{for}\qquad n > 2 \,.
\end{equation} One can in principle extract any perturbative quantum field theoretic prediction from $\Gamma[\phi]$. Concretely, the 1PI effective action up to one-loop order can be organized as a loop expansion: \begin{equation} \Gamma[\phi] \supset \Gamma^{(0)}[\phi] + \Gamma^{(1)}[\phi] \,, \end{equation} where each piece can be extracted from the Lagrangian as \begin{subequations} \begin{align} \Gamma^{(0)}[\phi] &= S[\phi] \\ \Gamma^{(1)}[\phi] &= \frac{i}{2} \ln\Sdet\pqty{- \fdv[2]{S[\phi]}{\phi}} \,, \end{align} \end{subequations} with $S[\phi] = \int \dd[4]{x} \Lag(\phi)$, and ``$\Sdet$'' is the so-called super-determinant, which tracks the minus sign difference between fermionic and bosonic loops. The normalization factor $\frac{i}{2}$ assumes that we are tracing over ``real'' degrees of freedom: a complex scalar field should be separated into its real and imaginary parts, and Dirac fermions should be decomposed as discussed in Sec.~4 of \Ref{Henning:2016lyp}. \subsection{Matching from a Functional Determinant} \label{subsec:ReviewFunctionalMatching} In general, one performs a matching calculation by equating the EFT $\mathcal{L}_\text{EFT}(\phi)$ with a UV theory $\mathcal{L}_\text{UV}(\phi, \Phi)$ at a matching scale, where $\phi$ ($\Phi$) denotes the light (heavy) particles. The general matching condition is that all the one-light-particle irreducible (1LPI) diagrams agree at the matching scale \cite{Georgi:1991ch,Georgi:1992xg}. The natural definition of this statement when using functional methods is to enforce that the so-called 1LPI effective action $\Gamma_\text{L}[\phi]$ --- the generating functional for all the 1LPI correlation functions --- of each theory coincides at the matching scale: \begin{equation} \Gamma_\text{L,EFT}[\phi] = \Gamma_\text{L,UV}[\phi] \,. 
\label{eqn:MatchingCondition} \end{equation} One can relate the 1LPI to the 1PI effective action for the UV theory and the EFT: \begin{subequations} \begin{align} \Gamma_\text{L,EFT}[\phi] &= \Gamma_\text{EFT}[\phi] \,, \\ \Gamma_\text{L,UV}[\phi] &= \Gamma_\text{UV}[\phi,\Phi]\big |_{\Phi = \Phi_c[\phi]} \,. \end{align} \end{subequations} The relation for the EFT is trivial. By contrast, to derive $\Gamma_\text{L, UV}$, one must integrate out the heavy field by plugging in the solution $\Phi=\Phi_c[\phi]$ to its equation of motion. Solving \cref{eqn:MatchingCondition} in general is nontrivial. However, \Ref{Henning:2016lyp} was able to make significant progress towards this goal by deriving the following general expression for the EFT Lagrangian up to one-loop order:\footnote{Note that as before, we use superscript numbers in parentheses to denote loop order. We need not specify the loop order for the full theory action, since we treat the counterterm contributions implicitly. However, matching results in nontrivial loop orders in the EFT Lagrangian.} \begin{subequations}\label{eqn:SEFT} \begin{align} S^{(0)}_\text{EFT} &= \int \dd[4]{x} \sum_i C_i^{(0)}\, \oper_i(\phi) = S_\text{UV}[\phi,\Phi]\big |_{\Phi = \Phi_c[\phi]} \,, \label{eqn:SEFT0} \\ S^{(1)}_\text{EFT} &= \int \dd[4]{x} \sum_i \pqty{ C_{i,\text{heavy}}^{(1)} + C_{i,\text{mixed}}^{(1)} }\, \oper_i(\phi) \,, \label{eqn:SEFT1} \end{align} \end{subequations} where \begin{subequations}\label{eqn:cLoop} \begin{align} \int \dd[4]{x} \sum_i C_{i,\text{heavy}}^{(1)}\, \oper_i(\phi) &= \frac{i}{2} \ln\Sdet \pqty{ \eval{ -\fdv[2]{S_\text{UV}[\phi,\Phi]}{\Phi} }_{\Phi = \Phi_c[\phi]} \,} \,, \label{eqn:cheavy} \\[5pt] \int \dd[4]{x} \sum_i C_{i,\text{mixed}}^{(1)}\, \oper_i(\phi) &= \frac{i}{2} \ln\Sdet \pqty{ -\fdv[2]{S_\text{EFT}^\text{non-local}[\phi]}{\phi} } - \frac{i}{2} \ln\Sdet \pqty{ -\fdv[2]{S_\text{EFT}^{(0)}[\phi]}{\phi} } \notag\\ &= \eval{ \frac{i}{2} \ln\Sdet \pqty{
-\fdv[2]{S_\text{EFT}^\text{non-local}[\phi]}{\phi} } }_\text{hard} \,. \label{eqn:cmixed} \end{align} \end{subequations} A few clarifications are in order: \begin{itemize} \item In this approach, we do not need to know the EFT operators $\oper_i(\phi)$ in advance. We simply obtain them (with the appropriate one-loop coefficient) by evaluating the right-hand sides of \cref{eqn:SEFT,eqn:cLoop}. \item The one-loop contributions to the Wilson coefficients derive from two classes of loops in the UV theory. The ``heavy'' Wilson coefficients $C_{i,\text{heavy}}^{(1)}$ collect contributions from loops where only $\Phi$ appears, while the ``mixed'' Wilson coefficients $C_{i,\text{mixed}}^{(1)}$ are generated by loops with both $\Phi$ and $\phi$. \item The non-local Lagrangian appearing in \cref{eqn:cmixed} is given by \begin{equation} S_\text{EFT}^\text{non-local}[\phi] = S_\text{UV}[\phi,\Phi]\big |_{\Phi = \Phi_c[\phi]}\,, \label{eq:SEFTnonlocal} \end{equation} where the heavy field is integrated out using the tree-level solution to its equations of motion $\Phi_c[\phi]$. No expansion in the heavy masses is performed at this stage, and hence the resulting Lagrangian contains non-local terms, \eg see~\cref{eqn:LHQETnonlocal}. This is in contrast to the tree-level EFT action $S_\text{EFT}^{(0)}[\phi]$, which is derived by expanding the non-local Lagrangian in the heavy mass limit; therefore, it only contains local terms, \eg see~\cref{eq:HQETExpansion}. As we have discussed extensively in \cref{subsec:ResidueRegions}, expanding the action in the heavy particle limit yields a critical difference between the two descriptions, where the non-trivial effects result from being careful about the order of limits --- performing the heavy mass expansion does not commute with taking the functional determinant (which is essentially equivalent to performing the loop integration).
Although the two terms in the first line of \cref{eqn:cmixed} are not equal, they are intimately related in a way that allows for the following simplification: using the method of regions,\footnote{Simplifying the functional matching calculation with the method of regions was also discussed in \Ref{Fuentes-Martin:2016uol,Zhang:2016pja}, where a different (but equivalent) approach to the row reduction procedure was used to diagonalize the functional determinant matrix in the space of $(\Phi, \phi)$.} we can identify the second term as equivalent to the soft region of the first term. Their difference leaves behind the hard region alone, as shown in the second line of \cref{eqn:cmixed}. \end{itemize} \subsection{RGEs from the 1PI Effective Action}\label{subsec:ReviewFunctionalRunning} In this section, we briefly review the general procedure for using the 1PI effective action to compute the RGEs~\cite{Henning:2016lyp}. Given a Lagrangian \begin{equation} \Lag(\phi) \supset \oper_K(\phi) + \lambda\, \oper_\lambda(\phi) + \dotsb \,, \end{equation} we are interested in deriving the RGEs for a coupling $\lambda$, where $\oper_\lambda$ is the corresponding operator, $\oper_K$ denotes the kinetic terms, and the ``$\dotsb$'' allow for the possibility of additional interactions. The first step is to compute the 1PI effective action, which takes the form \begin{equation} \Gamma[\phi] \supset \int \dd[4]{x} \Big[(1 + a_K)\, \oper_K(\phi) + (\lambda + a_\lambda)\, \oper_\lambda(\phi)\Big] \,, \end{equation} where $a_K$ and $a_\lambda$ encode the one-loop corrections. Next, we renormalize the kinetic terms to their canonical value by rescaling the fields $\phi$: \begin{equation} \Gamma[\phi] \,\,\longrightarrow\,\, \int \dd[4]{x} \Big[\oper_K(\phi) + (\lambda + b_\lambda)\, \oper_\lambda(\phi)\Big] \,, \end{equation} where $b_\lambda$ is derived by Taylor expanding the version of $\Gamma[\phi]$ with canonical kinetic terms as a series in $\lambda$.
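To make the field-rescaling step concrete, consider a toy case (our own illustration, not tied to HQET) where $\oper_\lambda = \phi^4$: canonically normalizing via $\phi \to (1 + a_K)^{-1/2}\, \phi$ turns the quartic coefficient into $(\lambda + a_\lambda)(1 + a_K)^{-2}$, so that at one loop $b_\lambda = a_\lambda - 2\, \lambda\, a_K$ up to terms quadratic in the corrections. A quick numerical check of this expansion:

```python
lam = 0.5  # toy tree-level quartic coupling

def rescaled_coupling(a_K, a_lam):
    # Exact coefficient of phi^4 after the rescaling phi -> (1 + a_K)**(-1/2) phi.
    return (lam + a_lam) / (1.0 + a_K) ** 2

def b_lam(a_K, a_lam):
    # First-order (one-loop) expansion of the shift in the coupling.
    return a_lam - 2.0 * lam * a_K

for a_K, a_lam in [(1e-3, 2e-3), (1e-4, -5e-4)]:
    exact_shift = rescaled_coupling(a_K, a_lam) - lam
    print(exact_shift, b_lam(a_K, a_lam))  # agree up to O(a^2)
```

The exact shift and its linearization agree up to terms of second order in $a_K$ and $a_\lambda$, which is all that matters at one loop.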
Finally, the RGE for $\lambda$ is obtained: \begin{equation} \mu\dv{\mu} \big(\lambda + b_\lambda\big) = 0 \,. \label{eqn:RGE1PIgeneral} \end{equation} For reference, a number of standard RGEs are derived using this method in \cref{appsec:ExamplesRGE}; additionally, functional methods were used to compute the bosonic dimension-6 SMEFT RGEs in \Ref{Buchalla:2019wsc}. In \cref{sec:Examples}, we will apply this formalism in HQET to show that the kinetic term does not run, and will derive the RGEs for the Wilson coefficient of the HQET magnetic dipole moment operator. \section{HQET Matching from a Functional Determinant}\label{sec:MatchingHQET} In this section, we apply the general matching result \cref{eqn:SEFT,eqn:cLoop} to HQET. As we will see, salient features of HQET will result in further simplifications for matching calculations, which will be summarized by our master formula \cref{eqn:SHQET1loop} for HQET one-loop matching coefficients. In this case, we identify $\Phi = \big(\bar{H}_v, H_v\big)$ as the short distance modes of the heavy quark, and $\phi$ collectively as all the propagating long distance degrees of freedom modeled by HQET. There are two special features of matching QCD onto HQET that simplify the resulting master formula. First, taking second functional derivatives of \cref{eqn:LQCD} with respect to $H_v$ and $\bar{H}_v$, we see that the matching coefficient $C_{i,\text{heavy}}^{(1)}$ vanishes at one loop: \begin{align} \int \dd[4]{x} \sum_i C_{i,\text{heavy}}^{(1)}\, \oper_i(\phi) &= \frac{i}{2} \ln\Sdet \pqty{ \eval{ -\fdv[2]{S_\text{QCD}[\phi,\Phi]}{\Phi} }_{\Phi = \Phi_c[\phi]} \,} \notag\\ &\propto -i \ln \Sdet \big( i\hspace{0.8pt} v\cdot D + 2\, m_Q\big) = 0 \,.
\end{align} This functional determinant is zero due to the same contour arguments presented in \cref{sec:Decoupling} (see also~\cite{Mannel:1991mc}), where we discussed decoupling the heavy quark.\footnote{There is an even simpler (albeit gauge dependent) argument also given in \Ref{Mannel:1991mc}: if one takes the $v\cdot A = 0$ gauge, the determinant no longer depends on any field, and its evaluation simply yields a constant that is absorbed into the path integral measure.} This indicates that all the one-loop matching contributions for HQET are mixed loop contributions of the type given in \cref{eqn:cmixed}. Therefore, the non-local HQET Lagrangian \begin{equation} S_\text{HQET}^\text{non-local}[\phi] = S_\text{QCD}[\phi,\Phi]\big |_{\Phi = \Phi_c[\phi]}\,, \label{eq:SHQETnonlocal} \end{equation} which is given explicitly in \cref{eqn:LHQETnonlocal}, is equivalent to QCD with respect to the dynamics of the light fields $\phi$. Another useful feature of HQET is that loop corrections are scaleless~\cite{Finkemeier:1997re}, and therefore they vanish when using dim.\ reg. This means that the second term in the first line of \cref{eqn:cmixed} is also zero: \begin{equation} \frac{i}{2} \ln\Sdet \pqty{ -\fdv[2]{S_\text{HQET}^{(0)}[\phi]}{\phi} } =0 \,. \label{eqn:HQETLoopVanish} \end{equation} We emphasize that this expression is valid when (i) we are computing on-shell $S$-matrix elements, and (ii) when dim.\ reg.\ is used to regularize both UV and IR divergences (see \cref{sec:ResInHQET} for a calculation where we regulated the IR by keeping the long distance fluctuations off shell). Taking both simplifications into account, we arrive at our master formula for computing the Wilson coefficients of HQET\footnote{There is one caveat to keep track of when applying this formula, which is that it is valid when one sets all the light masses that appear in a loop integral to zero.
If one is interested in computing power corrections proportional to a light mass, then \cref{eqn:cmixed} should be used: one should keep the light masses non-zero, and the hard region must be isolated before integrating.} \begin{equation} \tcboxmath{ S_\text{HQET}^{(1)} = \frac{i}{2} \ln\Sdet \pqty{ -\fdv[2]{S_\text{HQET}^\text{non-local}[\phi]}{\phi} }\,.} \label{eqn:SHQET1loop} \end{equation} In particular, note that we have restored the vanishing region, implying that when using this result we do not need to perform any method of regions style expansion of the integrals that result from taking the functional trace, and can simply evaluate the integrals that result from this procedure directly. In \cref{sec:FunctionalMatching,sec:Examples}, we will demonstrate how to apply this formula by working out a number of explicit examples. We will additionally see that this formula makes symmetry properties such as gauge invariance and RPI more manifest. \section{Residue Difference from a Functional Determinant} \label{sec:FunctionalMatching} This section provides a first non-trivial example of the procedure for matching between QCD and HQET at one loop using functional methods: the calculation of the difference between the propagator residues of QCD and HQET. Along the way, we will highlight many of the simplifying benefits of performing these calculations directly from the path integral. Then, before moving on to a number of additional (more complicated) examples in the next section, we briefly discuss how RPI manifests in the functional language. Before getting into the details, we provide a simple road map highlighting the critical steps of the calculation: \begin{enumerate}[label=(\roman*)] \item Beginning with the UV Lagrangian, we will integrate out the short distance mode using the tree-level equations of motion.
This will result in a non-local Lagrangian (using background fields as necessary to implement gauge fixing) that encodes a complete description of the dynamics of the light modes. See~\cref{eq:HQETNL}. \item Given this non-local Lagrangian, we will then derive the matrix of the second-order functional derivatives (see~\cref{eqn:ResidueVariationMatrix}), whose determinant is the central object to evaluate according to our master formula \cref{eqn:SHQET1loop}. \item Next, we will rewrite the determinant into an efficient form for explicit evaluation via row reduction. See~\cref{eq:residueDeterminant}. \item Finally, we will evaluate this trace using the methodology of the covariant derivative expansion as reviewed in \cref{appsec:CDE}. From this expansion, we will isolate the operators of interest. This yields the second line of \cref{eq:EvalResInt}, which is an expression for one-loop Wilson coefficients multiplied by the appropriate operators. The last step will be to explicitly evaluate the loop integral. See the final line of \cref{eq:EvalResInt}. \end{enumerate} Note that along the way, for simplicity we will drop any terms that do not contribute to the operator of interest. The rest of this section is devoted to the explicit calculation of the residue difference. Our goal is to apply \cref{eqn:SHQET1loop} in order to derive the one-loop correction to the two-point function of heavy quarks. This implies that we only need to take variations of the Lagrangian with respect to the gluon and the heavy quark field. The first step is to derive the non-local HQET action.
Following \cref{sec:HQET}, where we reviewed the derivation of the HQET Lagrangian, we start with QCD (including the gluon kinetic term, gauge fixing, and ghost contributions) and integrate out the short distance quark modes: \begin{equation} \mathcal{L}_\text{HQET}^\text{non-local} = \bar{h}_v\, \pqty{ i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt}\slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt}\slashed{D}_\perp }\, h_v - \frac{1}{4}\hspace{0.8pt} G_{\mu\nu}^a \, G^{\mu\nu,a} + \Lag_\text{gf} + \Lag_\text{gh} \,, \label{eq:HQETNL} \end{equation} where the explicit gauge-fixing term $\Lag_\text{gf}$ and ghost term $\Lag_\text{gh}$ are specified in \cref{eq:gaugefix,eq:ghost}. Next, we take functional variations of $S_\text{HQET}^\text{non-local}$. We will treat gauge bosons using the background field prescription described in \cref{appsubsec:BackgroundField}: \begin{align} G_\mu &= G_{B,\mu} + A_\mu \,, \end{align} where $G_{B,\mu}$ is a background field and $A_\mu$ encodes the fluctuations. Covariant derivatives are expanded similarly: \begin{align} D_\mu &= D_{B,\mu } - i\hspace{0.8pt} g_s\hspace{0.8pt} A_\mu \,, \end{align} where $D_{B,\mu}$ is the covariant derivative that includes the background gauge field. After taking functional variations with respect to $A_\mu$, we replace \begin{subequations} \begin{align} G_{B,\mu} &\,\,\longrightarrow\,\, G_\mu \,, \\ D_{B,\mu} &\,\,\longrightarrow\,\, D_\mu \,. 
\end{align} \end{subequations} We only need to take the second variation of the Lagrangian in \cref{eq:HQETNL} with respect to $A_\mu^a$, $h_v$, and $\bar{h}_v$ to compute the residue difference: \begin{equation} \delta^2 S_\text{HQET}^\text{non-local} \supset \pmqty{ \delta A_\mu^a \,\,& \delta h_v^T\,\, & \delta \bar{h}_v} \pmqty{ C^{\mu\nu,ab} & \bar{\Gamma}^{\mu,a} & -\Big(\Gamma^{\mu,a}\Big)^T \\ -\Big(\bar{\Gamma}^{\nu,b}\Big)^T & 0 & -B^T \\[6pt] \Gamma^{\nu,b} & B & 0} \pmqty{\delta A_\nu^b \\[6pt] \delta h_v \\[7pt] \delta \bar{h}_v^T} \,, \label{eqn:ResidueVariationMatrix} \end{equation} where terms in $U^{\mu\nu,ab}$, $\Gamma^{\mu,a}$, and $\bar{\Gamma}^{\mu,a}$ that are not relevant for the residue operator have been discarded: \begin{subequations}\label{eqn:ResidueDetparameters} \begin{align} C^{\mu\nu,ab} &= \eta^{\mu\nu} \big(D^2\big)^{ab} - 2\hspace{0.8pt} U^{\mu\nu,ab} \,, \\ U^{\mu\nu,ab} &= g_s\, f^{abc}\, G_B^{\mu\nu,c} - g_s^2\,\bar{h}_v \gamma_\perp^\mu\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} \gamma_\perp^\nu\, T^a\hspace{0.8pt} T^b\, h_v \,, \\ B &= i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt} \slashed D_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt} \slashed D_\perp \,, \\ \Gamma^{\mu,a} &= g_s\hspace{0.8pt} \bqty{ v^\mu + i\hspace{0.8pt} \slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} \gamma_\perp^\mu}\, T^a\, h_v \,, \\ \bar{\Gamma}^{\mu,a} &= g_s\, \bar{h}_v\, T^a\, \bqty{ v^\mu + \gamma_\perp^\mu \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt} \slashed D_\perp } \,. \end{align} \end{subequations} Here $\gamma_\perp^\mu\equiv\gamma^\mu-v^\mu\slashed{v}$ is defined in parallel with $D_\perp^\mu$.
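The two properties of $\gamma_\perp^\mu$ used repeatedly below, orthogonality $v_\mu\, \gamma_\perp^\mu = 0$ and the anticommutation relation $\{\gamma_\perp^\mu, \slashed{v}\} = 0$, both of which rely on $v^2 = 1$, are easy to verify with an explicit Dirac representation. The following sketch (our own cross-check, using an arbitrary boosted $v^\mu$) does so numerically:

```python
import math

I = 1j
# Dirac-representation gamma matrices gamma^0 .. gamma^3
g = [
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]],
    [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]],
    [[0, 0, 0, -I], [0, 0, I, 0], [0, I, 0, 0], [-I, 0, 0, 0]],
    [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]],
]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def lin(*terms):
    # Linear combination c1*M1 + c2*M2 + ... of 4x4 matrices.
    out = [[0] * 4 for _ in range(4)]
    for c, M in terms:
        for i in range(4):
            for j in range(4):
                out[i][j] += c * M[i][j]
    return out

def is_zero(M, tol=1e-12):
    return all(abs(M[i][j]) < tol for i in range(4) for j in range(4))

eta_boost = math.atanh(0.6)  # an arbitrary boost; v^2 = 1 by construction
v = [math.cosh(eta_boost), math.sinh(eta_boost), 0.0, 0.0]
# vslash = v_mu gamma^mu, with mostly-minus metric (+,-,-,-)
vslash = lin((v[0], g[0]), (-v[1], g[1]), (-v[2], g[2]), (-v[3], g[3]))

# gamma_perp^mu = gamma^mu - v^mu vslash
gperp = [lin((1, g[mu]), (-v[mu], vslash)) for mu in range(4)]

vdotgperp = lin((v[0], gperp[0]), (-v[1], gperp[1]), (-v[2], gperp[2]), (-v[3], gperp[3]))
print(is_zero(vdotgperp))  # v . gamma_perp = 0
print(all(is_zero(lin((1, mul(gperp[mu], vslash)), (1, mul(vslash, gperp[mu]))))
          for mu in range(4)))  # {gamma_perp^mu, vslash} = 0
```

Both checks print \texttt{True}; in the rest frame $v^\mu = (1,0,0,0)$ they reduce to the familiar statements $\gamma_\perp^0 = 0$ and $\{\gamma_\perp^i, \gamma^0\} = 0$.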
The reality of the Lagrangian implies that $B^\dagger = \gamma^0\hspace{0.8pt} B\hspace{0.8pt} \gamma^0$ and $\bar\Gamma^{\mu,a} = (\Gamma^{\mu,a})^\dagger \gamma^0$ must hold, and it is straightforward to verify these with the explicit expressions in \cref{eqn:ResidueDetparameters}. Next, we evaluate the functional determinant by row reducing \cref{eqn:ResidueVariationMatrix}: \begin{align} \fdv[2]{S_\text{HQET}^\text{non-local}}{(A_\mu^a,\bar{h}_v,h_v)} &= \pmqty{ C^{\mu\nu,ab} & \bar{\Gamma}^{\mu,a} & -\Big(\Gamma^{\mu,a}\Big)^T \\[3pt] -\Big(\bar{\Gamma}^{\nu,b}\Big)^T & 0 & -B^T \\[6pt] \Gamma^{\nu,b} & B & 0} \nonumber \\[7pt] &= \pmqty{ C^{\mu\nu,ab} - \bar{\Gamma}^{\mu,a} \hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\nu,b} - \bar{\Gamma}^{\nu,b}\hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\mu,a} & 0 & 0 \\[3pt] -\Big(\bar{\Gamma}^{\nu,b}\Big)^T & 0 & -B^T \\[6pt] \Gamma^{\nu,b} & B & 0} , \label{eqn:ResidueRowReduction} \end{align} where in the last line we have used $\big (\Gamma^{\mu,a}\big)^T \big(B^{-1}\big)^T \big(\bar{\Gamma}^{\nu,b}\big)^T = -\bar{\Gamma}^{\nu,b}\hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\mu,a}$. 
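The row reduction in \cref{eqn:ResidueRowReduction} is an instance of the standard Schur complement manipulation, which states that the determinant of a block matrix with blocks $A$, $B$, $C$, $D$ (and $D$ invertible) equals $\det(D)\,\det\pqty{A - B\hspace{0.8pt} D^{-1} C}$. A minimal numerical sketch of this generic identity (our own illustration with random $2\times 2$ blocks, unrelated to the specific operators above):

```python
import random

def det(M):
    # Determinant by Laplace expansion along the first row (fine for small matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def inv2(D):
    d = D[0][0] * D[1][1] - D[0][1] * D[1][0]
    return [[D[1][1] / d, -D[0][1] / d], [-D[1][0] / d, D[0][0] / d]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

random.seed(7)
rnd = lambda: [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]
A, B, C, D = rnd(), rnd(), rnd(), rnd()
for i in range(2):
    D[i][i] += 2.0  # make D safely invertible (diagonally dominant)

full = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
BDC = mul(mul(B, inv2(D)), C)
schur = [[A[i][j] - BDC[i][j] for j in range(2)] for i in range(2)]
print(det(full), det(schur) * det(D))  # identical up to roundoff
```

In the functional calculation the blocks are operators rather than finite matrices, and the fermionic sub-block contributes with the opposite sign in the super-determinant, but the algebraic structure of the reduction is the same.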
Following the prescription in \cref{subsec:ReviewFunctionalMatching}, we extract the residue operator part of the one-loop HQET Lagrangian using \begin{align} S_\text{HQET}^{(1)} &= \frac{i}{2} \ln\Sdet\bqty{ -\fdv[2]{S_\text{HQET}^\text{non-local}}{(A_\mu^a, \bar{h}_v, h_v)} } \notag \\[5pt] &= \frac{i}{2} \ln\det\nolimits_G \Big[-C^{\mu\nu,ab} + \bar{\Gamma}^{\mu,a}\hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\nu,b} + \bar{\Gamma}^{\nu,b} B^{-1} \Gamma^{\mu,a}\Big] - \frac{i}{2}\ln\det\nolimits_h \pmqty{ 0 & B^T \\ -B & 0} \notag \\[5pt] &\supset \frac{i}{2} \ln\det\nolimits_G \Big[ -\eta^{\mu\nu} \big(D^2\big)^{ab} + 2\hspace{0.8pt} U^{\mu\nu,ab} + \bar{\Gamma}^{\mu,a}\hspace{0.8pt} B^{- 1}\hspace{0.8pt} \Gamma^{\nu,b} + \bar{\Gamma}^{\nu,b} \hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\mu,a} \Big] \notag \\[5pt] &\supset -i\Tr \bqty{ \pqty{\frac{1}{D^2}}^{ba} \eta_{\mu\nu}\hspace{0.8pt} \Big(U^{\mu\nu,ab} + \bar{\Gamma}^{\mu,a}\hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\nu,b}\Big) }\,, \label{eq:residueDeterminant} \end{align} where the subscripts denote the field that is being traced over, \ie, the state that is running in the loop, and starting with the third line, we have truncated the series and dropped terms that do not contribute to the residue operator. Note that in the last line, the trace over the gluon space is taken (\textit{i.e.} $\mu\nu$ and $ab$ indices are contracted). 
Next, we evaluate this expression using the explicit objects given in \cref{eqn:ResidueDetparameters}, and simplify the two traces as follows: \begin{align} -i\Tr \bqty{ \pqty{\frac{1}{D^2}}^{ba} \eta_{\mu\nu}\hspace{0.8pt} U^{\mu\nu,ab} } &= -i\hspace{0.8pt} g_s^2\hspace{0.8pt} C_F\, \Tr \bqty{ (d-1) \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} h_v } \,, \end{align} where we used $T_h^a\hspace{0.8pt} T_h^a = C_F = 4/3$, and \begin{align} \hspace{-20pt} -&i\Tr \bqty{ \pqty{\frac{1}{D^2}}^{ba} \eta_{\mu\nu}\hspace{0.8pt} \bar{\Gamma}^{\mu,a}\hspace{0.8pt} B^{-1}\hspace{0.8pt} \Gamma^{\nu,b} } \\[5pt] &\quad\supset i\hspace{0.8pt} g_s^2\, C_F\, \Tr \Bigg\{ \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_v\, \frac{1}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_Q\,( i\hspace{0.8pt} v \cdot D)} (i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q)\, h_v \notag\\[5pt] &\quad\qquad + (d - 1) \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} \frac{1}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_Q\, (i\hspace{0.8pt} v \cdot D)} \bqty{ (i\hspace{0.8pt} D)^2 - (i\hspace{0.8pt} v \cdot D)^2}\, h_v \Bigg\}\,.\notag \end{align} In performing these manipulations, we set $\comm{D_\mu}{D_\nu} = 0$ since this commutator returns the field strength $G_{\mu\nu}^a$, which does not contribute to the residue difference. Putting these two results together, we obtain \begin{align} S_\text{HQET}^{(1)} &= -i\hspace{0.8pt} g_s^2\, C_F\, \Tr \Bqty{ \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_v \,\frac{1}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_Q \, (i\hspace{0.8pt} v \cdot D)} \Big[ (d-2) (i\hspace{0.8pt} v \cdot D) - 2\hspace{0.8pt} m_Q \Big]}\, h_v\,. 
\label{eqn:ResidueTrace} \end{align} Finally, we evaluate this functional trace using the simplified CDE technique described in~\cref{appsubsubsec:naiveCDE} to derive\footnote{ Since we treat $D_\mu$ as a commuting object here, the simplified CDE approach is sufficient. If one were interested in extracting Wilson coefficients for operators that involve the field strength, the more sophisticated original CDE approach described in~\cref{appsubsubsec:elegantCDE} would be more convenient.} \begin{align} S_\text{HQET}^{(1)} &\supset -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\,\int \ddx{x} \int \ddp{q} \frac{1}{(i\hspace{0.8pt} D + q)^2} \,\bar{h}_v\, \frac{1}{(i\hspace{0.8pt} D + q)^2 + 2\hspace{0.8pt} m_Q\, v \cdot (i\hspace{0.8pt} D + q)}\notag\\[-2pt] &\hspace{210pt}\times \Big[ (d-2)\, v \cdot (i\hspace{0.8pt} D + q) - 2\hspace{0.8pt} m_Q\Big]\, h_v - \text{c.t.} \notag \\ &\supset -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddx{x} \bqty{\bar{h}_v\, i\hspace{0.8pt} D_\mu\, h_v}\notag\\[-2pt] &\hspace{70pt} \times \int \ddp{q} \frac{(d-2)\, q^2\, v^\mu - 2\hspace{0.8pt} (d-2)\, q^\mu\, v \cdot q + 4\hspace{0.8pt} m_Q\, q^\mu + 4\hspace{0.8pt} m_Q^2\, v^\mu}{q^2\, \pqty{q^2 + 2\hspace{0.8pt} m_Q\, v \cdot q}^2} - \text{c.t.} \notag \\ &= \int \ddx{x} \bqty{\bar{h}_v \, i\hspace{0.8pt} v \cdot D\, h_v}\, \frac{\aS}{3\hspace{0.8pt}\pi} \pqty{\frac{4\hspace{0.8pt}\pi\,\mu^2}{m_Q^2}}^\eps\, \Gamma(\eps) \bqty{ 1 - \frac{4\hspace{0.8pt} \eps}{(1-\eps)(-2\hspace{0.8pt}\eps)} } - \text{c.t.}\notag \\ &= \int \ddx{x} \bqty{\bar{h}_v \,i\hspace{0.8pt} v \cdot D\, h_v}\, \frac{\aS}{3\hspace{0.8pt}\pi}\, \pqty{ 3\,\ln\frac{\mu^2}{m_Q^2} + 4 }\, . \label{eq:EvalResInt} \end{align} The first line follows from \cref{eqn:ResidueTrace} by taking the steps shown in \cref{eqn:T0naiveCDE}. In the second line, we have expanded in the covariant derivative $D_\mu$ --- this is the explicit step that uses the Covariant Derivative Expansion. 
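The $\eps$ expansion in the last two lines can be cross-checked symbolically; the following \texttt{sympy} sketch (with $x$ standing in for $4\pi\mu^2/m_Q^2$) verifies the $1/\eps$ pole and the coefficient of the logarithm, without checking the additive constant:

```python
import sympy as sp

eps, x = sp.symbols('eps x', positive=True)  # x stands in for 4*pi*mu^2/m_Q^2

# The epsilon-dependent factor appearing in the second-to-last line above.
bracket = 1 - 4*eps/((1 - eps)*(-2*eps))
f = sp.gamma(eps) * x**eps * bracket

ser = sp.series(f, eps, 0, 1).removeO().expand()

# The 1/eps pole has residue 3, fixing the MS-bar counterterm ...
assert sp.simplify(ser.coeff(eps, -1) - 3) == 0

# ... and the finite part carries 3*log(x), i.e. the 3*ln(mu^2/m_Q^2)
# of the final line (the constant term is not checked here).
finite = ser.coeff(eps, 0)
assert sp.simplify(finite.coeff(sp.log(x)) - 3) == 0
```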
Note that in taking this expansion we need to invoke $\big|D\big| \ll m_Q$ to justify keeping only the residue operator. Its Wilson coefficient is given by a loop integral with a nonzero hard region but a vanishing soft region. This verifies our general statement made in \cref{sec:MatchingHQET}, in particular \cref{eqn:HQETLoopVanish}. In the third line, we have evaluated the loop integral and subtracted the \MSbar counterterms. The result agrees with \cref{eqn:DeltaRdef}, providing our first demonstration of using functional methods to compute HQET loop effects. Before moving on to additional examples in \cref{sec:Examples}, we will briefly discuss RPI in the context of the residue difference calculation. \subsection{When Does RPI Become Manifest?} \label{sec:RPI} In this section, we briefly explore the interplay between RPI and the calculation detailed in the previous section.\footnote{RPI relations between coefficients are typically obscured in the course of a conventional calculation, since different combinations of Feynman diagrams will contribute to the matching of operators at different orders in the mass expansion. In practice, RPI relations are derived independently and then externally imposed to minimize the number of matching calculations to be performed~\cite{Neubert:1993za}.} Our goal here is to understand at what point RPI holds when computing with the functional approach. For simplicity, we will only explore this question to leading order in $1/m_Q$ in the RPI transformations. The full RPI transformations are significantly more complicated, since the eigenstates $h_v$ and $H_v$ rotate into each other at subleading order; see~\cite{Chen:1993np, Kilian:1994mg, Sundrum:1997ut, Heinonen:2012km, Hill:2012rh} for a discussion. The RPI transformation shifts\footnote{To reduce notational clutter, we have opted to use $k$ as opposed to $\delta k$ here to track the change induced by RPI when comparing with \cref{eq:RPIdef}.
Given the form of the relevant expressions, there is no ambiguity.} \begin{align} v \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, v^\prime &= v - \frac{k}{{{m_Q}}} \,, \notag\\[2pt] {h_v} \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {h_{v^\prime}} &= {e^{ - i\hspace{0.8pt} k \cdot x}}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} + O\left( {\frac{1}{{{m_Q^2}}}} \right) \,. \label{eq:RPIonhnew} \end{align} We will now show that this is a good symmetry of HQET by noting that the non-local HQET Lagrangian is built using the operator $B$ defined in~\cref{eqn:ResidueDetparameters}. In particular, this object transforms covariantly as \begin{align} B &\equiv i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt} \slashed D_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt}\slashed D_\perp \notag\\ \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, B' &= B - \frac{{i\hspace{0.8pt} k \cdot D}}{{{m_Q}}} + O\left( {\frac{1}{{{m_Q^2}}}} \right) = {e^{ - i\hspace{0.8pt} k \cdot x}}B{e^{i\hspace{0.8pt} k \cdot x}} + O\left( {\frac{1}{{{m_Q^2}}}} \right) \,, \label{eq:BDefAgain} \end{align} and hence \begin{align} \mathcal{L}_\text{HQET}^\text{non-local} &\supset {{\bar h}_v}\,B\,{h_v}\notag\\ \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {{\bar h}_{v'}}\,B'\,{h_{v'}} &= {{\bar h}_v}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right)B\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} = {{\bar h}_v}\,B\,{h_v} + O\left(\frac{1}{m_Q^2}\right) \,, \label{eq:RPILNL} \end{align} where we used the fact that $\slashed{v}\hspace{0.8pt} h_v = h_v$, $\slashed{v}\hspace{0.8pt} \slashed{k} + \slashed{k}\hspace{0.8pt} \slashed{v} = 2\hspace{0.8pt} v\cdot k$, and $v \cdot k = k^2/(2\hspace{0.8pt} m_Q)$. Alternatively, one can check the interplay between the two terms in the first line of~\cref{eq:BDefAgain}. We begin by analyzing the explicit transformation of the local term. 
Keeping all terms to $\ord(1/m_Q)$, we find \begin{align} {{\bar h}_v}\left( {i\hspace{0.8pt} v \cdot D} \right){h_v} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {{\bar h}_v}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){e^{i\hspace{0.8pt} k \cdot x}}\left( {i\hspace{0.8pt} v \cdot D - \frac{{i\hspace{0.8pt} k \cdot D}}{{{m_Q}}}} \right){e^{ - i\hspace{0.8pt} k \cdot x}}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} \nonumber \\ &\hspace{15pt}= {{\bar h}_v}\left( {i\hspace{0.8pt} v \cdot D} \right){h_v} - {{\bar h}_v}\left( {\frac{{i\hspace{0.8pt} k \cdot D}}{{{m_Q}}} + \frac{{{k^2}}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} + O\left(\frac{1}{m_Q^2}\right)\,. \end{align} This change is compensated by a corresponding shift in the non-local piece \begin{align} &{{\bar h}_v}\hspace{0.8pt} i\hspace{0.8pt} {{\slashed D}_ \bot }\frac{1}{{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}}\hspace{0.8pt} i\hspace{0.8pt} {{\slashed D}_ \bot }{h_v} \notag\\ \,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\,\hspace{10pt}& {{\bar h}_v}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){e^{i\hspace{0.8pt} k \cdot x}}\left( {i\hspace{0.8pt}{{\slashed D}_ \bot }\frac{1}{{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}}\hspace{0.8pt} i\hspace{0.8pt}{{\slashed D}_ \bot }} \right){e^{ - i\hspace{0.8pt} k \cdot x}}\left( {1 - \frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} \nonumber \\ &= {{\bar h}_v}\hspace{0.8pt} i\hspace{0.8pt} {{\slashed D}_ \bot }\frac{1}{{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}}\hspace{0.8pt} i\hspace{0.8pt}{{\slashed D}_ \bot }{h_v} + {{\bar h}_v}\left( {\frac{{i\hspace{0.8pt} k \cdot D}}{{{m_Q}}} + \frac{{{k^2}}}{{2\hspace{0.8pt} m_Q}}} \right){h_v} + O\left(\frac{1}{m_Q^2}\right)\,. 
\end{align} where we used the identities \begin{align} {i\hspace{0.8pt} {\slashed k}_ \bot }{{\slashed D}_ \bot } + i\hspace{0.8pt}{{\slashed D}_ \bot }{{\slashed k}_ \bot } &= 2\hspace{0.8pt} i\,{k_ \bot } \cdot {D_ \bot } = 2\hspace{0.8pt} {k_\mu }\, i\Big( {\hspace{0.8pt} {D^\mu } - {v^\mu } \hspace{0.8pt} v \cdot D} \Big) + O\left( {\frac{1}{{{m_Q}}}} \right) = 2\hspace{0.8pt} i\, k \cdot D + O\left( {\frac{1}{{{m_Q}}}} \right) \,, \notag \\ {{\slashed k}_\bot }{{\slashed k}_\bot } &= {\left( {\slashed k - \slashed v\hspace{0.8pt} v \cdot k} \right)^2} = {k^2} + O\left( {\frac{1}{{{m_Q}}}} \right)\,. \end{align} Then clearly \cref{eq:RPILNL} holds. The loop-level Lagrangian is given by the super-determinant of the second variation of \cref{eq:HQETNL}. Therefore, it is also invariant under RPI. Let us explicitly check this to the order $\ord(1/m_Q)$ for the residue difference calculation presented in the previous section. The various components defined in \cref{eqn:ResidueDetparameters} shift under RPI as \begin{subequations} \begin{align} {U^{\mu \nu ,ab}} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {U^{\mu \nu ,ab}} + O\left( {\frac{1}{{m_Q^2}}} \right) \,, \\ B &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {e^{ - i\hspace{0.8pt} k \cdot x}}\,B\,{e^{i\hspace{0.8pt} k \cdot x}} + O\left( {\frac{1}{{m_Q^2}}} \right) \,, \\ {\Gamma ^{\mu ,a}} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, {e^{ - i\hspace{0.8pt} k \cdot x}}\left[ {{\Gamma ^{\mu ,a}} - {g_s}\left( {{\gamma ^\mu } + 2\hspace{0.8pt}{v^\mu }} \right)\frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}\,{T^a}\hspace{0.8pt}{h_v}} \right] + O\left( {\frac{1}{{m_Q^2}}} \right) \,, \\ {{\bar \Gamma }^{\mu ,a}} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, \left[ {{{\bar \Gamma }^{\mu ,a}} - {g_s}\,{{\bar h}_v}\,{T^a}\frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}\Big( {{\gamma ^\mu } + 2\hspace{0.8pt}{v^\mu }} \Big)} \right]{e^{i\hspace{0.8pt} k \cdot x}} + O\left( 
{\frac{1}{{m_Q^2}}} \right) \,. \end{align} \end{subequations} From here, we can check the transformations of each term that appears in the argument of the functional trace in \cref{eq:residueDeterminant}: \begin{subequations} \begin{align} \left(\frac{1}{D^2}\right)^{ab} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, \left(\frac{1}{D^2}\right)^{ab} \,, \\ {\eta_{\mu\nu}\hspace{0.8pt} U^{\mu \nu ,ab}} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, \eta_{\mu\nu}\hspace{0.8pt}{U^{\mu \nu ,ab}} + O\left( {\frac{1}{{m_Q^2}}} \right) \,, \\ {\eta_{\mu\nu}\hspace{0.8pt}{\bar \Gamma }^{\mu ,a}}\,{B^{ - 1}}\,{\Gamma ^{\nu ,b}} &\,\,\xrightarrow[\hspace{5pt}\text{RPI}\hspace{5pt}]{}\,\, \eta_{\mu\nu} \left[ {{{\bar \Gamma }^{\mu ,a}} - {g_s}\,{{\bar h}_v}\,{T^a}\frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}\Big( {{\gamma ^\mu } + 2\hspace{0.8pt}{v^\mu }} \Big)} \right]{B^{ - 1}}\notag\\ &\hspace{70pt}\times\left[ {{\Gamma ^{\nu ,b}} - {g_s}\,\Big( {{\gamma ^\nu } + 2\hspace{0.8pt}{v^\nu }} \Big)\frac{{\slashed k}}{{2\hspace{0.8pt} m_Q}}\,{T^b}\,{h_v}} \right] \nonumber \\ &\hspace{45pt}=\eta_{\mu\nu}\hspace{0.8pt}{{\bar \Gamma }^{\mu ,a}}\,{B^{ - 1}}\,{\Gamma ^{\nu ,b}} + O\left( {\frac{1}{{m_Q^2}}} \right) \,, \end{align} \end{subequations} where in the second derivation we used \begin{equation} \eta_{\mu\nu}\hspace{0.8pt}{{\bar h}_v}\,{T^a}\,{v^\mu }\,{B^{ - 1}}\Big( {{\gamma ^\nu } + 2\hspace{0.8pt}{v^\nu }} \Big)\,\frac{{\slashed k}}{{2\hspace{0.8pt}{m_Q}}}\,{T^b}\,{h_v} = {{\bar h}_v}\,{T^a}\,{B^{ - 1}}\,\Big( {\slashed v + 2} \Big)\frac{{{k^2}}}{{4\hspace{0.8pt} m_Q^2}}\,{T^b}\,{h_v} \,. \end{equation} This demonstrates an elegant feature of functional methods when applied to HQET. Specifically, in the last line of \cref{eq:residueDeterminant} the terms within the square brackets are manifestly invariant under RPI \emph{before} evaluating the trace. 
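The Clifford-algebra identities used in these RPI checks hold exactly in any representation and are easy to test numerically; a minimal sketch with explicit Dirac matrices (a generic vector $q$ stands in for the covariant derivative):

```python
import numpy as np

# Dirac representation, metric eta = diag(+1, -1, -1, -1).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(a):
    # a-slash = a_mu gamma^mu
    return sum((eta @ a)[mu] * gam[mu] for mu in range(4))

def dot(a, b):
    return a @ eta @ b

rng = np.random.default_rng(1)
v = np.array([1.0, 0.0, 0.0, 0.0])     # rest frame, v^2 = 1
k = rng.standard_normal(4)
q = rng.standard_normal(4)             # generic vector standing in for D
k_perp = k - v * dot(v, k)
q_perp = q - v * dot(v, q)

# slash(k_perp)^2 = k_perp^2 = k^2 - (v.k)^2, exactly
assert np.allclose(slash(k_perp) @ slash(k_perp),
                   (dot(k, k) - dot(v, k)**2) * np.eye(4))

# {slash(k_perp), slash(q_perp)} = 2 k_perp . q_perp, exactly
acomm = slash(k_perp) @ slash(q_perp) + slash(q_perp) @ slash(k_perp)
assert np.allclose(acomm, 2 * dot(k_perp, q_perp) * np.eye(4))
```

Together with $v \cdot k = k^2/(2\hspace{0.8pt} m_Q)$, these exact identities are all that the integrand-level invariance relies on.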
This is in contrast with the Feynman diagram approach, where one must sum the full set of Feynman diagrams before the RPI symmetry becomes apparent. \section{More Matching and Running Calculations} \label{sec:Examples} We have provided an explicit demonstration of a functional calculation in HQET for the simplest non-trivial example above. However, we have not yet made contact with an actual observable quantity. The purpose of this section is to do so by exploring more examples of matching calculations and a derivation of the RGEs for some Wilson coefficients. This will provide overwhelming evidence that these techniques capture all of the relevant physics. Using the formalism developed in \Refs{Falk:1990yz,Falk:1991nq}, once the one-loop matching of operators is known, one-loop relations between exclusive quantities, such as decay constants and form factors, are straightforward to extract. Furthermore, working through these examples will provide us with the opportunity to highlight some additional subtle aspects of applying functional methods. \subsection{Heavy-Light Current Matching} This section is devoted to our next example, matching the heavy-light current. The heavy-light operator in QCD is defined as \begin{equation} \oper^\mu = \bar{q}\, \gamma^\mu\, Q \,, \end{equation} where $q$ is a light quark and $Q$ is a heavy quark that will be treated as an HQET field. Since one should use the reference vector $v^\mu$ when constructing HQET operators, two operators can be written down at leading order in the $1/m_Q$ expansion that manifest the same symmetry properties as the heavy-light QCD operator: \begin{subequations} \begin{align} \oper_1^\mu &= \bar{q}\, \gamma^\mu\, h_v \,,\\ \oper_2^\mu &= \bar{q}\, v^\mu\, h_v \,. \end{align} \end{subequations} The matching condition requires that matrix elements derived using QCD and HQET at a convenient matching scale $\mu$ are equal,\footnote{Note that due to confinement, the matching is being done with unphysical external states.
This does not cause any problems since the matching condition between the two theories is universal.} \begin{equation} \Big\langle q\big(0,s'\big)\,\Big|\, \oper^\mu \,\Big|\, Q\big(p,s\big)\Big\rangle_\text{QCD} = \Big\langle q\big(0,s'\big) \,\Big|\, C_1\, \oper_1^\mu + C_2\, \oper_2^\mu \,\Big|\, h_v(k,s)\Big\rangle_\text{HQET}\,\,, \end{equation} where the Wilson coefficients for the HQET operators $C_{1,2} = C_{1,2}(m_Q/\mu, \alpha_s(\mu))$ are functions of $\mu$ and $\alpha_s$. At one-loop order, the two matrix elements can be expressed schematically: \begin{subequations} \label{eq:HLoperatorsExpanded} \begin{align} &\hspace{-7pt}\Big\langle q\big(0,s'\big)\,\Big|\, \oper^\mu \,\Big|\, Q\big(p,s\big)\Big\rangle_\text{QCD} \! = \sqrt{ \RQCD\, R_{q} }\, \bar{u}\big(0,s'\big) \Big[ \Big(1 + V_{\text{HL},1}^{(1)}\,\aS\Big)\, \gamma^\mu + V_{\text{HL},2}^{(1)}\,\aS\, v^\mu\Big]\, u\big(p,s\big)\,, \\[3pt] &\hspace{-7pt} \Big\langle q\big(0,s'\big) \,\Big|\, C_1\, \oper_1^\mu + C_2\, \oper_2^\mu \,\Big|\, h_v(k,s)\Big\rangle_\text{HQET} = \sqrt{ \RHQET\, R_{q} }\, \bar{u}\big(0,s'\big)\, \Big(1+V_\text{eff}^{(1)}\,\aS\Big) \notag\\[-3pt] &\hspace{273pt}\times \Big(C_1\,\gamma^\mu + C_2\, v^\mu\Big)\, u\big(k,s\big)\,, \end{align} \end{subequations} where the one-loop vertex corrections are $V^{(1)}_{\text{HL},i}$, the one-loop residue corrections are $\RQCD^{(1)} \aS = R_Q-1$ and $\RHQET^{(1)} \aS = R_h-1$, and $R_{q}$ denotes the residue for the light quark propagator, which is the same in QCD and HQET. Equating the two expressions in \cref{eq:HLoperatorsExpanded} leads to the simplified form given in \cref{eqn:HLresult}. In particular, $R_q$ drops out and the matching coefficient only depends on the residue difference $\Delta R^{(1)}=\RQCD^{(1)} - \RHQET^{(1)}$, computed in the previous section.
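The cancellation of $R_q$ can be made explicit with a short symbolic expansion; the following \texttt{sympy} sketch (with illustrative symbol names for the one-loop coefficients) expands the ratio of the two sides to $\ord(\aS)$:

```python
import sympy as sp

# Schematic one-loop bookkeeping (symbol names are illustrative):
# R_Q = 1 + rQ*a, R_h = 1 + rh*a, R_q = 1 + rq*a, with a = alpha_s.
a, rQ, rh, rq, V1, Veff = sp.symbols('a r_Q r_h r_q V_1 V_eff')

ratio = sp.sqrt((1 + rQ*a)*(1 + rq*a)) * (1 + V1*a) \
      / (sp.sqrt((1 + rh*a)*(1 + rq*a)) * (1 + Veff*a))

c1 = sp.expand(sp.series(ratio, a, 0, 2).removeO().coeff(a))

# The light-quark residue rq cancels between the two sides ...
assert not c1.has(rq)
# ... leaving the vertex difference plus half the residue difference.
assert sp.simplify(c1 - (V1 - Veff + sp.Rational(1, 2)*(rQ - rh))) == 0
```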
In order to extract the heavy-light matching coefficient from the master matching formula given in~\cref{eqn:SHQET1loop}, we follow the same steps outlined at the beginning of \cref{sec:FunctionalMatching}. The set of fluctuating fields that contribute are $A_\mu^a, h_v, \bar{h}_v, q,$ and $\bar{q}$. We need the equation of motion for the $H_v$ field, which can be derived from the UV Lagrangian (where sources $J_\mu^\pm$ for the heavy-light current are now included) \begin{align} \Lag_\text{QCD} \supset \,&\bar{Q} \,\Big(i\hspace{0.8pt} \slashed D - m_Q\Big)\, Q + \bar{q}\, i\hspace{0.8pt} \slashed{D} q + \Big( \bar{q}\, \gamma^\mu\, Q\, J^-_\mu + \text{h.c.}\Big) - \frac{1}{4}\, G_{\mu\nu}^a\hspace{0.8pt} G^{\mu\nu,a} + \Lag_\text{gf} + \Lag_\text{gh} \,. \label{eq:LHeavyLight} \end{align} Note that the sources $J_\mu^\pm$ are not dynamical fields, but must be included to ensure that the desired operators appear when matching the full theory and EFT actions. We split the heavy quark field as in \cref{eq:QSplit} above, and derive the equation of motion for the short distance mode: \begin{equation} H_v = \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} \pqty{ i\hspace{0.8pt} \slashed{D}_\perp h_v + J^+_\mu\, \gamma^\mu\, e^{i\hspace{0.8pt} m_Q\, v \cdot x}\,q}\,. \label{eq:HvHeavyLight} \end{equation} Since $J_\mu^+$ is merely a source, we are free to redefine it to absorb the phase, $J^+_\mu \to J^+_\mu e^{-i\hspace{0.8pt} m_Q\, v \cdot x}$. 
Plugging the equation of motion \cref{eq:HvHeavyLight} into the UV Lagrangian \cref{eq:LHeavyLight} yields the tree-level non-local HQET Lagrangian: \begin{align} \mathcal{L}_\text{HQET}^\text{non-local} &= \bar{h}_v \, \pqty{ i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt} \slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}i\hspace{0.8pt} \slashed{D}_\perp }\, h_v -\frac{1}{4}\, G_{\mu\nu}^a\hspace{0.8pt} G^{\mu\nu,a} + \Lag_\text{gf} + \Lag_\text{gh} \notag\\ &\hspace{20pt} + \bar{q}\, \big(i\hspace{0.8pt} \slashed{D}\big)\, q + \pqty{ \bar{q}\, \gamma^\mu\, J^-_\mu\, h_v + \bar{q}\, \gamma^\mu\, J^-_\mu\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt}\slashed{D}_\perp\, h_v + \text{h.c.} } \,. \end{align} We then take the second variation of the non-local action with respect to the relevant fluctuating fields, again following the prescription described in \cref{appsubsec:BackgroundField} for the gluons: \begin{align} \hspace{-5pt}\delta^2 S_\text{HQET}^\text{non-local} \supset \pmqty{ \var{A_\mu^a} & \var{h_v^T} & \var{\bar{h}_v} & \var{q^T} & \var{\bar{q}}\, } \pmqty{ C^{\mu\nu,ab} & \bar{\Gamma}_1^{\mu,a} & -\Big(\Gamma_1^{\mu,a}\Big)^T & \bar{\Gamma}_2^{\mu,a} & -\Big(\Gamma_2^{\mu,a}\Big)^T \\[5pt] -\Big(\bar{\Gamma}_1^{\nu,b}\Big)^T & 0 & -B_1^T & 0 & -S_2^T \\[8pt] \Gamma _1^{\nu,b} & B_1 & 0 & S_1 & 0 \\[5pt] -\Big(\bar{\Gamma}_2^{\nu,b}\Big)^T & 0 & -S_1^T & 0 & -B_2^T \\[8pt] \Gamma_2^{\nu,b} & S_2 & 0 & B_2 & 0 } \pmqty{\var{A_\nu^b} \\[11pt] \var{h_v} \\[11pt] \var{\bar{h}_v^T} \\[10pt] \var{q} \\[10pt] \var{\bar{q}^T} }\,,\notag\\ \label{eq:secondVarHeavyLight} \end{align} where terms not relevant for the heavy-light operator are discarded in $\Gamma_i^{\mu,a}$ and $\bar{\Gamma}_i^{\mu,a}$ \begingroup \allowdisplaybreaks \begin{subequations}\label{eqn:HLDetParameters} \begin{align} C^{\mu\nu,ab} &= \eta^{\mu\nu}\hspace{0.8pt} \Big(D^2\Big)^{ab} \,, \\ B_1 &= i\hspace{0.8pt} v \cdot D + 
i\hspace{0.8pt} \slashed{D}_\perp\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\, i\hspace{0.8pt} \slashed{D}_\perp \,, \\ B_2 &= i\hspace{0.8pt} \slashed D \, , \\[5pt] \Gamma_1^{\mu,a} &= g_s\, T^a\, \bqty{ v^\mu + i\hspace{0.8pt} \slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} \gamma_\perp^\mu }\, h_v \,, \\ \bar{\Gamma}_1^{\mu,a} &= g_s\, \bar{h}_v\, T^a\, \bqty{ v^\mu + \gamma_\perp^\mu \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} i\hspace{0.8pt} \slashed{D}_\perp } \,, \\ \Gamma_2^{\mu,a} &= g_s\, T^a\, \bqty{ \gamma^\mu\, q + \gamma^\alpha\, J^-_\alpha\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\gamma_\perp^\mu\, h_v } \,, \\ \bar{\Gamma}_2^{\mu,a} &= g_s\, \bqty{ \bar{q}\, \gamma^\mu + \bar{h}_v\, \gamma_\perp^\mu \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\,\gamma^\alpha \,J^+_\alpha } \,T^a \, , \\ S_1 &= \bqty{ 1 + i\hspace{0.8pt} \slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q} } \,\gamma^\alpha \, J^+_\alpha\, , \\ S_2 &= \gamma^\alpha \, J^-_\alpha \, \bqty{ 1 + \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_Q}\, i\hspace{0.8pt} \slashed{D}_\perp } \,. \end{align} \end{subequations} \endgroup Again, only relevant terms are kept here. Since the Lagrangian is real, we expect \begin{equation} B_i^\dag = \gamma^0\, B_i\, \gamma^0 \, \qc\quad \bar{\Gamma}_i^{\mu,a} = \Big(\Gamma_i^{\mu,a}\Big)^\dag\, \gamma^0\, \qc\quad S_1^\dag = \gamma^0\, S_2\, \gamma^0\,, \end{equation} which are explicitly satisfied by the expressions in \cref{eqn:HLDetParameters}.
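These reality conditions all descend from the Dirac-hermiticity property $\gamma^0\big(\gamma^\mu\big)^\dagger\gamma^0 = \gamma^\mu$, which is easily verified numerically (a minimal sketch in the Dirac representation):

```python
import numpy as np

# Dirac representation of the gamma matrices.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]

# gamma^0 (gamma^mu)^dagger gamma^0 = gamma^mu: the Dirac-hermiticity
# property behind B^dag = gamma^0 B gamma^0 and bar-Gamma = Gamma^dag gamma^0.
for g in gam:
    assert np.allclose(g0 @ g.conj().T @ g0, g)
```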
In order to evaluate this functional determinant, we row reduce the matrix and obtain the one-loop HQET action \begin{align} S_\text{HQET}^{(1)} \supset i\Tr\bqty{ \frac{1}{(i\hspace{0.8pt} D)^2}\, \delta^{ab}\, \eta_{\mu\nu} \Big(\bar{\Gamma}_2^{\mu,a}\, B_2^{-1}\, \Gamma_2^{\nu,b} - \bar{\Gamma}_1^{\mu,a}\, B_1^{-1}\, S_1 \,B_2^{-1}\, \Gamma_2^{\nu,b} - \bar{\Gamma}_2^{\mu,a}\, B_2^{-1}\, S_2\, B_1^{-1}\, \Gamma_1^{\nu,b} \Big) }\,.\notag\\[2pt] \end{align} Next, we use \cref{eqn:HLDetParameters} to derive \begin{align} S_\text{HQET}^{(1)} &\supset -i\hspace{0.8pt} g_s^2\, C_F\, \Tr\left[ \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{q}\, \gamma^\mu\, \frac{1}{i\hspace{0.8pt}\slashed{D}}\, \gamma^\alpha\, J_\alpha^-\, \right.\notag\\[5pt] &\hspace{70pt}\left. \times \frac{\bqty{ 2m_Q + (1 + \slashed{v})i\slashed{D} } v_\mu + \bqty{ i\hspace{0.8pt} \slashed{D} - \big(1 + \slashed{v}\big)\, i\hspace{0.8pt} v \cdot D}\, \gamma_\mu} {(i\hspace{0.8pt} D)^2 + 2\, m_Q\, i\hspace{0.8pt} v \cdot D} \, h_v + \text{h.c.} \right]\,. \end{align} This functional trace can be converted into integral expressions using the method described in \cref{appsubsubsec:naiveCDE} (since we do not need any operators involving the field strength). This procedure yields \begin{align} S_\text{HQET}^{(1)} &\supset -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddx{x} \int \ddp{p} \left[ \frac{1}{\big(i\hspace{0.8pt} D + p\big)^2}\, \bar{q} \, \gamma^\mu \frac{1}{i\hspace{0.8pt} \slashed{D} + \slashed{p}}\, \gamma^\alpha\, J_\alpha^-\right. \notag\\[5pt] &\hspace{25pt}\left. 
\times\frac{\bqty{2\hspace{0.8pt} m_Q + \big(1 + \slashed{v}\big)\big(i\hspace{0.8pt}\slashed{D} + \slashed{p}\big) }\, v_\mu + \bqty{ i\hspace{0.8pt} \slashed{D} + \slashed{p} - \big(1 + \slashed{v}\big)\, v \cdot \big(i\hspace{0.8pt} D + p\big) }\, \gamma_\mu} {\big(i\hspace{0.8pt} D + p\big)^2 + 2\hspace{0.8pt} m_Q \, v \cdot \big(i\hspace{0.8pt} D + p\big)}\, h_v + \text{h.c.} \right] \notag\\[5pt] &\supset \int \ddx{x} \bqty{ \bar{q}\, \pqty{-i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddp{p}\, \gamma^\mu\, \slashed{p}\, \gamma^\alpha\, \frac{\slashed{p}\, \gamma_\mu + 2\hspace{0.8pt} m_Q \, v_\mu}{p^4\, \Big(p^2 + 2\hspace{0.8pt} m_Q\, v \cdot p\Big)} }\, J_\alpha^-\, {h_v} + \text{h.c.}} \notag\\[5pt] &\supset \int \ddx{x} \pqty{ \bar{q}\, I_\text{HL}^\alpha\, J_\alpha^-\, h_v + \text{h.c.} } \,, \end{align} where we have implicitly defined the integral $I_\text{HL}^\alpha$ in the last line. Evaluating it yields \begin{align} I_\text{HL}^\alpha &= -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddp{p}\,\frac{-(d - 2)\, \slashed{p}\, \gamma^\alpha\, \slashed{p} + 2\hspace{0.8pt} m_Q\, \slashed{v}\, \slashed{p}\, \gamma^\alpha} {p^4 \Big(p^2 + 2\hspace{0.8pt} m_Q v \cdot p\Big)} \notag\\[5pt] &= -\frac{\aS}{3\hspace{0.8pt} \pi} \pqty{\frac{4\hspace{0.8pt} \pi\,\mu^2}{m_Q^2}}^\eps\, \Gamma(\eps)\, \frac{2\hspace{0.8pt} \eps}{1 - 2\hspace{0.8pt} \eps} \Big(\gamma^\alpha - v^\alpha\Big) \,\,\longrightarrow\,\, -\frac{2}{3}\frac{\aS}{\pi} \Big(\gamma^\alpha - v^\alpha\Big)\,, \end{align} and hence \begin{equation} S_\text{HQET}^{(1)} \supset -\frac{2}{3}\frac{\aS}{\pi} \int \ddx{x} \bqty{ \bar{q}\, \Big(\gamma^\alpha - v^\alpha\Big)\, J^-_\alpha\, h_v + \text{h.c.}} \,. \end{equation} This reproduces the one-loop matching coefficient for the heavy-light current, see \cref{eqn:HLresult,eqn:HLcomponents}. 
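Note that no \MSbar subtraction was needed here: the explicit factor of $\eps$ in the numerator cancels the pole of $\Gamma(\eps)$, so the limit indicated by the arrow is finite and $\mu$-independent. A one-line \texttt{sympy} check (with $x$ standing in for $4\pi\mu^2/m_Q^2$):

```python
import sympy as sp

eps, x = sp.symbols('eps x', positive=True)  # x stands in for 4*pi*mu^2/m_Q^2

# Epsilon-dependent factor in I_HL: the explicit eps in the numerator
# cancels the pole of Gamma(eps), so the d -> 4 limit is finite and
# carries no logarithm of mu.
factor = sp.gamma(eps) * 2*eps/(1 - 2*eps) * x**eps
assert sp.limit(factor, eps, 0) == 2
```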
\subsection{Heavy-Heavy Current Matching} \label{sec:HHMatch} The last matching example we will present is the derivation of the coefficient for the heavy-heavy current. The steps are essentially identical to the previous calculation, but the details are a bit more involved here since there are two flavors of heavy quarks to track. Note that while we will allow the two heavy quark masses $m_1$ and $m_2$ to differ, we will make the simplifying kinematic assumption that $v_1 = v_2 = v$. The velocity labels on the fields are thus to be taken as flavor indices to keep track of masses only, and not to be taken as arbitrary. Otherwise, various simplification relations, \eg $\bar{h}_{v_1} v^\mu h_{v_2} = \bar{h}_{v_1} \gamma^\mu h_{v_2}$, are no longer valid. In \cref{sec:HHMatch2}, we work out the generalization allowing $v_1 \neq v_2$. We start with the UV Lagrangian: \begin{align} \label{eq:LagUVHH} \Lag_\text{QCD} =\,& \bar{Q}_1\, \big(i\hspace{0.8pt} \slashed{D} - m_1\big)\, Q_1 + \bar{Q}_2\, \big(i\hspace{0.8pt} \slashed{D} - m_{2}\big)\, Q_2 + \Big[\bar{Q}_1\, \Big(J^+_\alpha\, \gamma^\alpha + J^+_{5\alpha}\,\gamma^\alpha\,\gamma^5\Big)\, Q_2 + \text{h.c.}\Big] \notag\\ & - \frac{1}{4} G_{\mu\nu}^a G^{\mu\nu,a} + \Lag_\text{gf} + \Lag_\text{gh} \,, \end{align} where $J^\pm_\alpha$ and $J^\pm_{5\alpha}$ are sources for the vector and axial heavy-heavy currents, respectively. To reduce clutter and absorb the phase, we introduce the shorthand \begin{align} J^\pm \equiv \Big(J^\pm_\alpha\,\gamma^\alpha + J^\pm_{5\alpha}\,\gamma^\alpha\,\gamma^5\Big)\, e^{\pm i\hspace{0.8pt}\Delta m\, v\cdot x}\,, \end{align} where $\Delta m = m_1 - m_{2}$. Note that, due to the gamma matrices, the conjugation relation between the sources is $\Big(J^+\Big)^\dag=\gamma^0 J^- \gamma^0$.
A new feature of the heavy-heavy current matching calculation is that now the two heavy quarks mix: \begin{subequations} \begin{align} 0 &= \fdv{S}{\bar{H}_{v_1}} = -\big(i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_1\big)\,H_{v_1} + i\hspace{0.8pt} {\slashed D_{\perp}}\,h_{v_1} + J^+\, \big(h_{v_2} + H_{v_2}\big)\,, \\[6pt] 0 &= \fdv{S}{\bar{H}_{v_2}} = -\big(i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2}\big)\,H_{v_2} + i\hspace{0.8pt} {\slashed D_{\perp}}\,h_{v_2} + J^-\, \big(h_{v_1} + H_{v_1}\big)\,. \end{align} \end{subequations} For our purposes here, we only need to solve these equations to linear order in $J$: \begin{align}\label{eq:HSolUVHH} \pmqty{ H_{v_1} \\ H_{v_2} } =\,& \pmqty{ \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_1}\, i\hspace{0.8pt}\slashed{D}_\perp\, h_{v_1} \\[4pt] \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2}}\, i\hspace{0.8pt}\slashed{D}_\perp\, h_{v_2} } + \pmqty{ \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_1}\, J^+ \pqty{ 1 + \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2}}\, i\hspace{0.8pt}\slashed{D}_\perp }\, h_{v_2} \\[4pt] \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2}}\, J^- \pqty{ 1 + \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_1}\, i\hspace{0.8pt}\slashed{D}_\perp }\, h_{v_1} } + \ord\Big(J^2\Big)\,.
\end{align} Plugging this solution into the UV Lagrangian and taking the relevant second variations yields \begin{small} \begin{align} \delta^2 S_\text{HQET}^\text{non-local} = \pmqty{ \var{A_\mu^a} & \var{h_{v_1}^T} & \var{\bar{h}_{v_1}} & \var{h_{v_2}^{T}} & \var{\bar{h}_{v_2}} } \pmqty{ C^{\mu\nu,ab} & \bar{\Gamma}_1^{\mu,a} & -\Big(\Gamma_1^{\mu,a}\Big)^T & \bar{\Gamma}_2^{\mu,a} & -\Big(\Gamma_2^{\mu,a}\Big)^T \\[5pt] -\Big(\bar{\Gamma}_1^{\nu,b}\Big)^T & 0 & -B_1^T & 0 & -S_2^T \\[8pt] \Gamma_1^{\nu,b} & B_1 & 0 & S_1 & 0 \\[5pt] -\Big(\bar{\Gamma}_2^{\nu,b}\Big)^T & 0 & -S_1^T & 0 & -B_2^T \\[8pt] \Gamma_2^{\nu,b} & S_2 & 0 & B_2 & 0} \pmqty{ \var{A_\nu^b} \\[10pt] \var{h_{v_1}} \\[10pt] \var{\bar{h}^T_{v_1}} \\[10pt] \var{h_{v_2}} \\[9pt] \var{\bar{h}^{T}_{v_2}} }\,,\notag\\[3pt] \end{align} \end{small}\noindent where again terms irrelevant for the heavy-heavy operator are discarded in $\Gamma_i^{\mu,a}$ and $\bar{\Gamma}_i^{\mu,a}$ \begingroup \allowdisplaybreaks \begin{subequations} \begin{align} \hspace{-7pt} C^{\mu\nu,ab}\! &= \eta^{\mu\nu} \big(D^2\big)^{ab} - 2\,\Big(U_1^{\mu\nu,ab} + 1 \leftrightarrow 2\Big)\, , \\[5pt] \hspace{-7pt} U_{1,2}^{\mu\nu,ab}\!
&= -g_s^2\, \bar{h}_{v_{1,2}}\, T^a\hspace{0.8pt} T^b\, \frac{\gamma_\perp^\mu}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} \,J^\pm\, \frac{\gamma_\perp^\nu}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2,1}}\, h_{v_{2,1}}\,, \\[5pt] \hspace{-7pt} B_{1,2} &= i\hspace{0.8pt} v \cdot D + i\hspace{0.8pt} \slashed{D}_\perp\, \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} i\hspace{0.8pt} \slashed{D}_\perp\,, \\[5pt] \hspace{-7pt} \Gamma_{1,2}^{\mu,a} &= g_s\, T^a\, \Bigg\{ \bqty{ v^\mu + i\hspace{0.8pt} \slashed{D}_\perp \frac{\gamma_\perp^\mu}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} }\, h_{v_{1,2}} \notag\\[2pt] &\hspace{50pt}+ \bqty{ 1 + i\slashed{D}_\perp \frac{1}{iv \cdot D + 2\hspace{0.8pt} m_{1,2}} } \,J^\pm\, \frac{\gamma_\perp^\mu}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2,1}}\, h_{v_{2,1}} \Bigg\} \,, \\[5pt] \hspace{-7pt} \bar{\Gamma}_{1,2}^{\mu,a} &= g_s\, \Bigg\{ \bar{h}_{v_{1,2}}\, \bqty{ v^\mu + \frac{\gamma_\perp^\mu}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} i\slashed{D}_\perp }\notag\\[2pt] &\hspace{50pt} + \bar{h}_{v_{2,1}} \frac{\gamma_\perp^\mu}{iv \cdot D + 2\hspace{0.8pt} m_{2,1}} \,J^\mp\, \bqty{ 1 + \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} i\hspace{0.8pt} \slashed{D}_\perp } \Bigg\}\hspace{0.8pt} T^a\,, \\[5pt] \hspace{-7pt} S_{1,2} &= \bqty{ 1 + i\hspace{0.8pt} \slashed{D}_\perp \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{1,2}} } \,J^\pm\, \bqty{ 1 + \frac{1}{i\hspace{0.8pt} v \cdot D + 2\hspace{0.8pt} m_{2,1}} i\hspace{0.8pt} \slashed{D}_\perp }\,. \end{align} \end{subequations} \endgroup Note that this matrix takes the same form as \cref{eq:secondVarHeavyLight}, so the row reduction is identical. 
The functional determinant is then \begin{align} \MoveEqLeft S^{(1)}_\text{HQET} \supset i\Tr\bqty{ \frac{1}{(i\hspace{0.8pt} D)^2}\, \delta^{ab}\, \eta_{\mu\nu}\, \Big(U_1^{\mu\nu,ab} + \bar{\Gamma}_1^{\mu,a}\, B_1^{-1}\, \Gamma_1^{\nu,b} - \bar{\Gamma}_1^{\mu,a}\, B_1^{-1}\, S_1\, B_2^{-1}\, \Gamma_2^{\nu,b}\Big) } + 1 \leftrightarrow 2 \notag\\[8pt] &\supset -i\hspace{0.8pt} g_s^2\, C_F\, \Tr\left\{\frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_{v_1}\, \frac{i\hspace{0.8pt} \slashed{D} + 2\hspace{0.8pt} m_1}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_1\, i\hspace{0.8pt} v \cdot D}\,J^+\, \frac{i\hspace{0.8pt} \slashed{D} + 2\hspace{0.8pt} m_{2}}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_{2}\, i\hspace{0.8pt} v \cdot D}\, h_{v_2} \right.\notag \\[5pt] &\hspace{30pt}- \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_{v_1}\, \frac{i\hspace{0.8pt} \slashed{D} - 2\hspace{0.8pt} i\hspace{0.8pt} v \cdot D}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_1\, i\hspace{0.8pt} v \cdot D} \,J^+\, \frac{i\hspace{0.8pt} \slashed{D} - 2\hspace{0.8pt} i\hspace{0.8pt} v \cdot D}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_{2}\, i\hspace{0.8pt} v \cdot D}\, h_{v_2} \notag\\[5pt] &\hspace{30pt} \left.+ \frac{1}{(i\hspace{0.8pt} D)^2}\, \bar{h}_{v_1}\, \gamma^\mu\, \frac{i\hspace{0.8pt} \slashed{D}_\perp - i\hspace{0.8pt} v \cdot D}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_1\, i\hspace{0.8pt} v \cdot D} \,J^+\, \frac{i\hspace{0.8pt} \slashed{D}_\perp - i\hspace{0.8pt} v \cdot D}{(i\hspace{0.8pt} D)^2 + 2\hspace{0.8pt} m_{2}\, i\hspace{0.8pt} v \cdot D}\, \gamma_\mu\, h_{v_2} \right \} \notag\\[5pt] &\hspace{13pt}+ \left( 1 \leftrightarrow 2 ,\, J^+ \leftrightarrow J^-\right) . \label{eq:HHSDetForm} \end{align} Our notation $\left( 1 \leftrightarrow 2 ,\, J^+ \leftrightarrow J^-\right)$ here (as well as later in the paper) represents only one term, which results from exchanging both $1 \leftrightarrow 2$ and $J^+ \leftrightarrow J^-$ in the previous term. 
Applying the CDE prescription in~\cref{appsubsubsec:naiveCDE} to evaluate these functional traces, we get \begin{align} \label{eq:HHIntForm} &S^{(1)}_\text{HQET} \supset -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddx{x} \int \ddp{p}\notag\\[6pt] &\hspace{10pt} \times \left[\frac{1}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2}}} \,\bar{h}_{v_1}\, \frac{i\hspace{0.8pt} \slashed{D} + \slashed{p} + 2\hspace{0.8pt} m_1}{(\hspace{0.8pt} iD + p)^2 + 2\hspace{0.8pt} m_1\, v \cdot (i\hspace{0.8pt} D + p)}\, J^+\, \frac{{i\hspace{0.8pt} \slashed{D} + \slashed{p} + 2\hspace{0.8pt} m_2}}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2} + 2\hspace{0.8pt} {m_2}\, v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}\, h_{v_2} \right.\notag\\[6pt] &\hspace{20pt} - \frac{1}{{{{\left( {iD + p} \right)}^2}}}{\bar{h}_{v_1}}\frac{{i\slashed D + \slashed p - 2\hspace{0.8pt} v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2} + 2\hspace{0.8pt} {m_1}\,v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}J^+\frac{{i\hspace{0.8pt} \slashed D + \slashed p - 2\hspace{0.8pt} v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2} + 2\hspace{0.8pt} {m_2}\,v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{h_{v_2}}\notag \\[6pt] & \hspace{20pt} \left. + \frac{1}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2}}}{\bar{h}_{v_1}}{\gamma ^\mu }\frac{{i\hspace{0.8pt} \slashed D_\perp + \slashed p_\perp - v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2} + 2\hspace{0.8pt} {m_1}\,v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}\,J^+\,\frac{{i\hspace{0.8pt} \slashed D_\perp + \slashed p_\perp - v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{{{{\left( {i\hspace{0.8pt} D + p} \right)}^2} + 2\hspace{0.8pt} {m_2}\,v \cdot \left( {i\hspace{0.8pt} D + p} \right)}}{\gamma _\mu }{h_{v_2}} \right]\notag\\[6pt] &\hspace{10pt} + \left( 1 \leftrightarrow 2 ,\, J^+ \leftrightarrow J^-\right) \,. 
\end{align} This can be simplified down to \begin{align} S^{(1)}_\text{HQET}&\supset \int \ddx{x} \bqty{ \bar{h}_{v_1} \, \big(J^+_\alpha\, I_\text{HH}^\alpha + J^+_{5\alpha}\, I_{\text{HH},5}^\alpha\big)\, h_{v_2} } + \left( 1 \leftrightarrow 2 ,\, J^+ \leftrightarrow J^-\right) \,, \end{align} where the loop integrals are \begin{subequations} \begin{align} I_\text{HH}^\alpha &\equiv -i\hspace{0.8pt} g_s^2\, \mu^{2\eps}\, C_F\, \int \ddp{p}\,\frac{1}{p^2 \big(p^2 + 2\hspace{0.8pt} m_1\, v \cdot p\big) \big(p^2 + 2\hspace{0.8pt} m_{2}\, v \cdot p\big)}\notag\\[2pt] &\hspace{107pt}\times\bigg\{\big(\slashed{p} + 2\hspace{0.8pt} m_1\big)\, \gamma^\alpha\, \big(\slashed{p} + 2\hspace{0.8pt} m_{2}\big) - \big(\slashed{p} - 2\hspace{0.8pt} v \cdot p\big) \,\gamma^\alpha\, \big(\slashed{p} - 2\hspace{0.8pt} v \cdot p\big)\notag\\[-3pt] &\hspace{145pt} +\gamma^\mu \big( \slashed{p}_\perp - v \cdot p\big) \gamma^\alpha \big( \slashed{p}_\perp - v \cdot p\big)\, \gamma_\mu \bigg\} \,, \\[15pt] I_{\text{HH},5}^\alpha &\equiv I_\text{HH}^\alpha\big|_{\gamma^\alpha \rightarrow\, \gamma^\alpha\,\gamma^5}\,. \end{align} \end{subequations} Evaluating $I_\text{HH}^\alpha$ gives \begin{align} I_\text{HH}^\alpha &= \gamma^\alpha\, \frac{\aS}{\pi}\, \bqty{ \MSbardiv - \frac{2}{3} - \frac{2}{\Delta m} \pqty{ m_1 \ln\frac{m_{2}}{\mu} - m_{2} \ln\frac{m_1}{\mu} }} \notag\\[7pt] &\longrightarrow\,\, -\gamma^\alpha\, \frac{2}{3} \frac{\aS}{ \pi} \bqty{ 1 + \frac{3}{\Delta m} \pqty{ m_1 \ln\frac{m_{2}}{\mu} - m_{2} \ln\frac{m_1}{\mu} } }\,, \end{align} and $I_{\text{HH},5}^\alpha$ can be evaluated in the same way.
Putting it all together yields \begin{align} S^{(1)}_\text{HQET} &\supset - \frac{2}{3}\frac{\aS}{ \pi} \int \ddx{x} \Bigg\{ \bqty{ 1 + \frac{3}{\Delta m} \pqty{ m_1 \ln\frac{m_{2}}{\mu} - m_{2} \ln\frac{m_1}{\mu} } }\, \bar{h}_{v_1}\, J^+_\alpha\, \gamma^\alpha\, h_{v_2} \notag\\[5pt] &\hspace{25pt}+ \bqty{ 2 + \frac{3}{\Delta m} \pqty{ m_1 \ln\frac{m_{2}}{\mu} - m_{2} \ln\frac{m_1}{\mu} } } \, \bar{h}_{v_1}\, J^+_{5\alpha} \,\gamma^\alpha\, \gamma^5\, h_{v_2} +\text{h.c.} \Bigg\} \,,\notag\\[-3pt] \end{align} which agree with \cref{eqn:hhresult} and \cref{eqn:hhcomponents}. We note that the case of heavy-heavy matching at finite recoil ($v_1 \ne v_2$) is provided in \cref{sec:HHMatch2}. The next section turns to calculation of the RGEs, focusing on the particular examples of the Wilson coefficients for two operators that appear at $\ord(1/m_Q)$. \subsection{HQET RGEs} In this section, we will derive the RGEs for the two operators that appear at subleading order in the $1/m_Q$ HQET expansion: \begin{align} \Lag_\text{HQET} \supset\,& \bar{h}_v\, (i\hspace{0.8pt} v \cdot D)\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, \bar{h}_v\,(i\hspace{0.8pt} D_\perp)^2\, h_v + \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\,g_s\, \bar{h}_v\,\sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, h_v \notag \\[3pt] & -\frac{1}{4}\, G_{\mu\nu}^a\, G^{\mu\nu,a} + \Lag_\text{gf} + \Lag_\text{gh}\,, \end{align} with \begin{equation} D_\perp^\mu \equiv D^\mu - v^\mu (v \cdot D) \qquad \text{and} \qquad \sigma^{\mu\nu} \equiv \frac{i}{2}\, \Big[\gamma^\mu, \gamma^\nu\Big]\,. \end{equation} The Wilson coefficient $c_\text{kin}$ is related to the leading order kinetic term through RPI, and as such it will not be renormalized; we will derive this explicitly at one-loop in this section. 
The Wilson coefficient for the chromomagnetic moment operator $c_\text{mag}$ does run, and has phenomenological consequences such as predicting the mass splitting between the ground-state vector and pseudoscalar mesons that contain a heavy quark. A precision determination of this mass splitting requires evolving $c_\text{mag}$ between the bottom mass and the charm mass. It is straightforward to match onto these Wilson coefficients at tree-level by expanding~\cref{eqn:LHQETnonlocal}. Note that because \begin{equation} {{\bar h}_v}\, \Big[ {{\sigma _{\mu \nu }}\,{v^\mu }\,( {i\hspace{0.8pt} v \cdot D} )} \Big] \,{h_v} = {{\bar h}_v}\,\bigg[ {\frac{i}{2}\,\big( {\slashed v\, {\gamma _\nu } - {\gamma _\nu }\,\slashed v} \big)\,\big( {i\hspace{0.8pt} v \cdot D} \big)} \bigg]\,{h_v} = 0 \,, \end{equation} we can replace $D_\perp^\mu$ with $D^\mu$ when it is multiplied by $\sigma_{\mu\nu}$: \begin{align} {i\hspace{0.8pt}{{\slashed D}_ \bot }\,i\hspace{0.8pt}{{\slashed D}_ \bot }} &= {\big( {i\hspace{0.8pt} {D_ \bot }} \big)^2} - \frac{i}{2}\,{\sigma _{\mu \nu }}\Big[ {i\hspace{0.8pt} D_ \bot ^\mu ,i\hspace{0.8pt} D_ \bot ^\nu } \Big] \nonumber \\[5pt] &\longrightarrow\,\, {\big( {i\hspace{0.8pt}{D_ \bot }} \big)^2} - \frac{i}{2}\,{\sigma _{\mu \nu }}\,\Big[ {i\hspace{0.8pt}{D^\mu },i\hspace{0.8pt}{D^\nu }} \Big] = {\big( {i\hspace{0.8pt} {D_ \bot }} \big)^2} + \frac{1}{2}\,g_s\,{\sigma ^{\mu \nu }}\,G_{\mu \nu }^a\,{T^a} \,. \end{align} This yields the tree-level matching conditions \begin{equation} c_\text{kin}^{(0)} = c_\text{mag}^{(0)} = 1\,. \end{equation} These results serve as the boundary conditions for integrating the RGEs whose derivation is the subject of what follows. We follow the procedure outlined in \cref{subsec:ReviewFunctionalRunning}. The first step is to compute the 1PI effective action. We need the second variation of the tree-level action with respect to $A_\mu$, $h_v$ and $\bar{h}_v$.
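The vanishing of $\bar{h}_v\, \sigma_{\mu\nu}\, v^\mu\, (i\hspace{0.8pt} v \cdot D)\, h_v$ used above rests on the projector identity $P_+\, \sigma_{\mu\nu}\, v^\mu\, P_+ = 0$ with $P_+ = (1 + \slashed{v})/2$ and $\slashed{v}\, h_v = h_v$. This can be checked numerically with explicit Dirac matrices; a minimal sketch (the Dirac basis and the rest-frame choice of $v^\mu$ are illustrative, not unique):

```python
import numpy as np

# Dirac matrices in the Dirac basis, mostly-minus metric (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def sigma_mn(m, n):
    """sigma^{mu nu} = (i/2) [gamma^mu, gamma^nu]."""
    return 0.5j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])

v = np.array([1.0, 0.0, 0.0, 0.0])   # heavy-quark rest frame
vslash = sum(eta[m, m] * v[m] * gamma[m] for m in range(4))
Pplus = 0.5 * (np.eye(4) + vslash)   # projector with vslash h_v = h_v

# sigma_{mu nu} v^mu sandwiched between projectors vanishes for every nu,
# which is what allows D_perp -> D next to sigma_{mu nu}.
for n in range(4):
    M = sum(eta[m, m] * v[m] * sigma_mn(m, n) for m in range(4))
    assert np.allclose(Pplus @ M @ Pplus, 0.0)
```

The same check goes through for any timelike unit $v^\mu$, since the statement is frame covariant.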
This yields an equation of the same form as \cref{eqn:ResidueVariationMatrix}. As opposed to the residue difference calculation, which uses the non-local form of the Lagrangian, here we are computing the 1PI effective action for HQET directly. Therefore, we must Taylor expand in $1/m_Q$ and truncate. In particular, we only keep terms up to order $1/m_Q$, and we drop everything that does not include quark fields: \begingroup \allowdisplaybreaks \begin{subequations} \label{eqn:HQETDetParameters} \begin{align} C^{\mu\nu,ab} &= \eta^{\mu\nu}\, \big(D^2\big)^{ab} - 2\hspace{0.8pt} U^{\mu\nu,ab}\,, \\[5pt] U^{\mu\nu,ab} &= g_s\, f^{abc}\, G^{\mu\nu,c} - g_s^2\, \bar{h}_v\, \bqty{ \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \big(\eta^{\mu\nu} - v^\mu v^\nu\big)\, T^a\, T^b + \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, \sigma^{\mu\nu}\, f^{abc}\, T^c } h_v\,, \\[5pt] B &= i\hspace{0.8pt} v \cdot D + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \big(i\hspace{0.8pt} D_\perp\big)^2 + \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q} \, g_s\, \sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, , \\[5pt] \Gamma^{\mu,a} &= g_s\, T^a\, \bqty{ v^\mu\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, i\hspace{0.8pt} D_\perp^\mu\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \big(i\hspace{0.8pt} D_\perp^\mu\, h_v\big)_x - \frac{c_\text{mag}}{2\hspace{0.8pt} m_Q}\,\sigma^{\mu\nu}\, h_v\, D_\nu}\,, \\[5pt] \bar{\Gamma}^{\mu,a} &= g_s\, \bqty{ \bar{h}_v\, v^\mu + \frac{c_\text{kin}}{2m_Q}\, \bar{h}_v\, i\hspace{0.8pt} D_\perp^\mu - \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \big(i\hspace{0.8pt} D_\perp^\mu\, \bar{h}_v\big)_x + \frac{c_\text{mag}}{2\hspace{0.8pt} m_Q}\, D_\nu\, \bar{h}_v \,\sigma^{\mu\nu} }\, T^a \,. \end{align} \end{subequations} \endgroup Note that we have used a non-standard notation in the above --- a bracket with a subscript $x$ is to indicate that the covariant derivative is \emph{closed}. This is in contrast with the other terms, where the covariant derivatives are \emph{open}. 
A detailed elaboration of this notation is given in \cref{appsubsubsec:Territory}, in particular \cref{eqn:defDUix}. Reusing the row reduction result \cref{eqn:ResidueRowReduction}, we follow the same intermediate steps given in \cref{eq:residueDeterminant} to obtain the one-loop effective action \begin{align} \Gamma^{(1)}_\text{HQET} &= \frac{i}{2} \ln\Sdet\bqty{ -\fdv[2]{S_\text{HQET}}{(A_\mu^a, \bar{h}_v, h_v)} } \notag\\[5pt] &\supset -i\Tr \Bigg\{ \eta_{\mu\nu}\, \pqty{\frac{1}{D^2}}^{ba} \Big(U^{\mu\nu,ab} + \bar{\Gamma}^{\mu,a}\, B^{-1}\, \Gamma^{\nu,b}\Big) + \pqty{\frac{1}{D^2}}^{bc} \, U_{\nu\mu}^{cd}\, \pqty{\frac{1}{D^2}}^{da}\, U^{\mu\nu,ab} \notag\\[3pt] &\qquad\qquad\quad + \pqty{\frac{1}{D^2}}^{bc}\, U_{\nu\mu}^{cd}\, \pqty{\frac{1}{D^2}}^{da} \Big(\bar{\Gamma}^{\mu,a}\, B^{-1}\, \Gamma^{\nu,b} + \bar{\Gamma}^{\nu,b}\, B^{-1}\, \Gamma^{\mu,a}\Big) \Bigg\} \,, \end{align} where again we emphasize that we do not include $1/m_Q$ terms of higher order than those kept above. Next, we use the expressions in \cref{eqn:HQETDetParameters} to derive \begin{align} \Gamma^{(1)}_\text{HQET} &\supset i\Tr \Bigg\{ \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, g_s^2\, (d-1) \,C_F\, \frac{1}{D^2}\, \bar{h}_v\, h_v - g_s^2\, C_F\, \frac{1}{D^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D}\, h_v \notag\\[3pt] &\hspace{30pt} + g_s^2\, C_F\, \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, \frac{1}{D^2}\,\bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D}\big(i\hspace{0.8pt} D_\perp\big)^2 \frac{1}{i\hspace{0.8pt} v \cdot D} \,h_v \notag\\[3pt] &\hspace{30pt} + g_s^2\, \pqty{ C_F - \frac{1}{2}\, C_A } \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, \frac{1}{D^2}\, \bar{h}_v\,\frac{1}{i\hspace{0.8pt} v \cdot D}\, g_s\, \sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, \frac{1}{i\hspace{0.8pt} v \cdot D}\, h_v \notag\\[3pt] &\hspace{30pt} + 2\,g_s^2\, C_A\, \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, \frac{1}{D^2}\, g_s\, G_{\mu\nu}^a\, \frac{1}{D^2}\, \pqty{ \bar{h}_v\, \sigma^{\mu\nu} \,T^a\, h_v -
i\hspace{0.8pt} D_\alpha\, \bar{h}_v\, v^\nu\, \sigma^{\mu\alpha}\, \frac{1}{i\hspace{0.8pt} v \cdot D} \,T^a\, h_v } \Bigg\} \notag\\[5pt] &\equiv \Gamma_1 + \Gamma_2 + \Gamma_3 + \Gamma_4 + \Gamma_5 + \Gamma_6\,, \label{eqn:GammaHQET1} \end{align} where we have used \begin{equation} T_F^a\, T_F^a = C_F \,,\qquad f^{abc}\, f^{abd} = C_A \,\delta ^{cd}\,,\qquad\text{and} \qquad T_F^a\, T_F^b\, T_F^a = \pqty{C_F - \frac{1}{2}\, C_A} \,T_F^b\,, \end{equation} where $C_F$ and $C_A$ denote the Casimir factors for the fundamental and adjoint representations, respectively. Note that we defined the six terms in \cref{eqn:GammaHQET1} as $\Gamma_{i=1,\cdots,6}$ in the order that they appear. Since we want to derive the RGEs, we need to isolate the UV divergence. To this end, we regulate the IR using a non-zero gluon mass $m$, which is sufficient for any of the loop integrals we encounter below. For the first two terms we use \cref{eqn:T1,eqn:T1v} to obtain \begin{subequations} \begin{align} \Gamma_1 &\equiv \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, g_s^2\, (d-1)\, C_F\, i\Tr \pqty{ \frac{1}{D^2 + m^2}\, \bar{h}_v\, h_v } \supset 0\,, \\[5pt] \Gamma_2 &\equiv -g_s^2\, C_F\, i\Tr \pqty{ \frac{1}{D^2 + m^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D}\, h_v }\notag\\[3pt] &\supset -2\,g_s^2\, C_F \int \dd[4]{x} \frac{1}{(4\hspace{0.8pt} \pi)^2} \left(\ln\frac{\mu^2}{m^2}\right) \, \pqty{\bar{h}_v\, i\hspace{0.8pt} v \cdot D\, h_v} \,.
\end{align} \end{subequations} For the third term, none of the results in~\cref{appsubsubsec:functionaltraces} directly apply, so we provide some explicit steps: \begin{align} \Gamma_3 &\equiv g_s^2\, C_F\, \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, i\Tr \bqty{ \frac{1}{D^2 + m^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D} \big(i\hspace{0.8pt} D_\perp\big)^2 \frac{1}{i\hspace{0.8pt} v \cdot D}\, h_v } \nonumber \\[5pt] &= -g_s^2\, C_F\, \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, i\int \ddx{x} \int \ddp{q} \tr \Bigg[ \frac{1}{(i\hspace{0.8pt} D - q)^2 - m^2} \notag\\[-3pt] &\hspace{180pt} \times \bar{h}_v\, \frac{1}{v \cdot (i\hspace{0.8pt} D - q)}(i\hspace{0.8pt} D - q)^2 \frac{1}{v \cdot (i\hspace{0.8pt} D - q)}\, h_v \Bigg] \notag\\[5pt] &\supset g_s^2\, C_F\, \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \int \dd[4]{x} \Big(\eta^{\mu\nu}\, V_{1,2} - 4\,V_{2,2}^{\mu\nu}\Big) \Big(\bar{h}_v\, D_\mu\, D_\nu\, h_v\Big) \nonumber \\[5pt] &= \int \dd[4]{x} \frac{1}{(4\hspace{0.8pt}\pi)^2} \ln\frac{\mu^2}{m^2} \Bqty{ -2\,g_s^2\, C_F\, \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \bqty{ \bar{h}_v\, (i\hspace{0.8pt} D_\perp)^2\, h_v - 3\bar{h}_v (i\hspace{0.8pt} v \cdot D)^2\, h_v } } \,. \end{align} In the above, the loop integrals $V_{1,2}$ and $V_{2,2}^{\mu\nu}$ can be found in \cref{appsubsubsec:HLIntegralsHQET}. 
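As a sanity check, the color identities used above can be verified numerically with explicit Gell-Mann matrices (normalized so that $\Tr\, T^a\, T^b = \delta^{ab}/2$); a self-contained sketch:

```python
import numpy as np

# Gell-Mann matrices; T^a = lambda^a / 2 generate the fundamental of SU(3).
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1;   l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

CF, CA = 4/3, 3.0

# T^a T^a = C_F * 1
assert np.allclose(sum(T[a] @ T[a] for a in range(8)), CF * np.eye(3))

# f^{abc} from [T^a, T^b] = i f^{abc} T^c, using Tr(T^a T^b) = delta^{ab}/2
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(8):
            f[a, b, c] = (-2j * np.trace(comm @ T[c])).real

# f^{abc} f^{abd} = C_A delta^{cd}
assert np.allclose(np.einsum('abc,abd->cd', f, f), CA * np.eye(8))

# T^a T^b T^a = (C_F - C_A/2) T^b
for b in range(8):
    lhs = sum(T[a] @ T[b] @ T[a] for a in range(8))
    assert np.allclose(lhs, (CF - CA/2) * T[b])
```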
For $\Gamma_4$ and $\Gamma_5$ we can use \cref{eqn:T2} and \cref{eqn:T1vv} to find \begin{subequations} \begin{align} \Gamma _4 &\equiv g_s^2\, \pqty{C_F - \frac{1}{2}\,C_A}\, \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, i\Tr \pqty{\frac{1}{D^2 + m^2}\, \bar{h}_v\, \frac{1}{i\hspace{0.8pt} v \cdot D}\, g_s \,\sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, \frac{1}{i\hspace{0.8pt} v \cdot D}\,h_v }\notag \\[5pt] &\supset \int \dd[4]{x} \frac{1}{(4\hspace{0.8pt} \pi)^2} \ln\frac{\mu^2}{m^2} \bqty{ g_s^2\, \big(C_A - 2\,C_F\big)\, \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q} \,g_s \bar{h}_v \,\sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, h_v } \,, \\[12pt] \Gamma_5 &\equiv 2\,g_s^2\, C_A\, \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q} i\Tr \pqty{ \frac{1}{D^2 + m^2}\, g_s\, G_{\mu\nu}^a\, \frac{1}{D^2 + m^2}\, \bar{h}_v\, \sigma^{\mu\nu}\, T^a\, h_v } \notag\\[5pt] &\supset \int \dd[4]{x} \frac{1}{(4\hspace{0.8pt} \pi)^2} \ln\frac{\mu^2}{m^2} \bqty{ -2\,g_s^2\, C_A \,\frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, g_s\, \bar{h}_v\, \sigma^{\mu\nu}\,G_{\mu\nu}^a\, T^a\, h_v}\,. \end{align} \end{subequations} For the sixth term, we again provide the details: \begin{align} \Gamma_6 &\equiv -2\,g_s^2\, C_A\, \frac{c_\text{mag}}{2\hspace{0.8pt} m_Q}\, i\Tr \pqty{ \frac{1}{D^2 + m^2}\, g_s\, G_{\mu\nu}^a\, \frac{1}{D^2 + m^2} i\hspace{0.8pt} D_\alpha \, \bar{h}_v\, v^\nu\, \sigma^{\mu\alpha}\, \frac{1}{i\hspace{0.8pt} v \cdot D}\, T^a\, h_v } \nonumber \\[5pt] &= -2\,g_s^2\, C_A\, \frac{c_\text{mag}}{2\hspace{0.8pt} m_Q} i\int \ddx{x} \int \ddp{q} \left[ \frac{1}{(i\hspace{0.8pt} D - q)^2 - m^2}\, g_s\, G_{\mu\nu}^a\, \frac{1}{(i\hspace{0.8pt} D - q)^2 - m^2} \right.\notag\\[5pt] &\hspace{165pt}\left. 
\times(i\hspace{0.8pt} D - q)_\alpha\, \bar{h}_v\, v^\nu\, \sigma^{\mu\alpha}\, \frac{1}{v \cdot (i\hspace{0.8pt} D - q)}\, T^a\, h_v \right] \nonumber \\[5pt] &\supset -2\,g_s^2\, C_A \,\frac{c_\text{mag}}{2\hspace{0.8pt} m_Q} i\int \ddx{x} \int \ddp{q}\, \frac{v^\nu \, q_\alpha\,v \cdot q}{\big(q^2 - m^2\big)^2} \,g_s\, \bar{h}_v\, \sigma^{\mu\alpha}\, G_{\mu\nu}^a\,T^a\, h_v \nonumber \\[5pt] &= \int \dd[4]{x} \bqty{ 2\,g_s^2\, C_A\, \frac{c_\text{mag}}{2\hspace{0.8pt} m_Q}\, v^\nu\, \eta_{\alpha\beta} \,V_{2,1}^\beta\, \big( g_s\, \bar{h}_v\, \sigma^{\mu\alpha} G_{\mu\nu}^a \,T^a\, h_v\big) } \nonumber \\[5pt] &= \int \dd[4]{x} \frac{1}{(4\hspace{0.8pt}\pi)^2} \ln\frac{\mu^2}{m^2} \,2\,g_s^2\, C_A \,\frac{c_\text{mag}}{2\hspace{0.8pt} m_Q}\, v^\nu\, g_s\, \bar{h}_v\, v_\alpha \,\sigma^{\mu\alpha}\, G_{\mu\nu}^a\, T^a\, h_v = 0 \,. \end{align} Summing all the six $\Gamma_i$ yields the one-loop level effective action: \begin{align} \hspace{-6pt}\Gamma_\text{HQET}^{(1)} &\supset \Gamma_1 + \Gamma_2 + \Gamma_3 + \Gamma_4 + \Gamma_5 + \Gamma_6\notag \\[5pt] &\supset \int \dd[4]{x} \frac{\alpha_s}{4\hspace{0.8pt}\pi} \ln\frac{\mu^2}{m^2} \Bigg\{ - C_A\,\frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, g_s\, \bar{h}_v \,\sigma^{\mu\nu} \,G_{\mu\nu}^a\, T^a\, h_v \notag\\[-3pt] &\hspace{18pt} - 2\, C_F \left[ \bar{h}_v\, (i\hspace{0.8pt} v \cdot D)\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, \bar{h}_v\, (i\hspace{0.8pt} D_\perp)^2\, h_v + \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q} \,g_s\, \bar{h}_v\, \sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, h_v \right] \Bigg\} \,. \label{eq:Gamma1RGE} \end{align} Note that in the last line, we have dropped the operator $\bar{h}_v \left(iv\cdot D\right)^2 h_v$ since it is redundant due to the equations of motion. 
Combining this result with the tree-level effective action $\Gamma_\text{HQET}^{\left(0\right)} = S_\text{HQET}$ yields \begin{align} \Gamma_\text{HQET} &\supset \int \dd[4]{x} \left\{ \bqty{ 1 - \frac{2\,\alpha_s\, C_F}{4\hspace{0.8pt}\pi} \ln\frac{\mu^2}{m^2} } \right.\notag\\[-3pt] &\hspace{60pt}\times \bqty{ \bar{h}_v\, (i\hspace{0.8pt} v \cdot D)\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} \bar{h}_v\,\big(i\hspace{0.8pt} D_\perp\big)^2\, h_v + \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\, g_s\, \bar{h}_v \,\sigma^{\mu\nu}\, G_{\mu\nu}^a\,T^a\, h_v }\notag\\[-3pt] &\left. \hspace{60pt} - \frac{\alpha_s}{4\hspace{0.8pt}\pi} \ln\frac{\mu^2}{m^2}\, C_A\, \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\,g_s\, \bar{h}_v\, \sigma^{\mu\nu}\, G_{\mu\nu}^a\, T^a\, h_v \right\} \notag\\[8pt] \longrightarrow\,\,&\hspace{12pt} \int \dd[4]{x} \Bigg[ \bar{h}_v (i\hspace{0.8pt} v \cdot D)\, h_v + \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q}\, \bar{h}_v\, \big(i\hspace{0.8pt} D_\perp\big)^2\, h_v\notag\\[-3pt] &\hspace{2cm} +\left(1- \frac{\alpha_s}{4\hspace{0.8pt}\pi}\, C_A \ln\frac{\mu^2}{m^2}\right) \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\,g_s\, \bar{h}_v\, \sigma^{\mu\nu} \,G_{\mu\nu}^a\, T^a\, h_v \Bigg], \end{align} where in the second line, we have performed the field redefinition \begin{equation} h_v \,\,\longrightarrow\,\, \bqty{ 1 - \frac{2\,C_F\, \aS}{4\hspace{0.8pt} \pi} \ln\frac{\mu^2}{m^2} }^{-1/2}\, h_v\,, \end{equation} to canonically normalize the tree-level kinetic term $\bar{h}_v\, (i\hspace{0.8pt} v\cdot D)\, h_v$. 
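The $\ln\mu^2$ dependence multiplying the chromomagnetic operator above translates into leading-log running of $c_\text{mag}$, while $c_\text{kin}$ stays fixed. Once a model for $\alpha_s(\mu)$ is chosen, the running can be integrated in closed form; a minimal numerical sketch, assuming one-loop running of $\alpha_s$ and purely illustrative inputs ($\alpha_s(m_b) \approx 0.22$, $m_b \approx 4.2$~GeV, $m_c \approx 1.3$~GeV, $n_f = 4$):

```python
import math

CA, beta0 = 3.0, 11 - 2*4/3          # n_f = 4 active flavors (illustrative)
mb, mc, as_mb = 4.2, 1.3, 0.22       # GeV; illustrative inputs

def alpha_s(mu):
    """One-loop running coupling, with boundary condition at m_b."""
    return as_mb / (1 + as_mb * beta0 / (2*math.pi) * math.log(mu/mb))

# Closed-form leading-log solution of  mu dc/dmu = (alpha_s/2pi) C_A c :
c_closed = (alpha_s(mc) / alpha_s(mb))**(-CA/beta0)

# Cross-check by integrating the RGE numerically (RK4 in t = ln mu).
def rhs(t, c):
    return CA * alpha_s(math.exp(t)) / (2*math.pi) * c

t, c, n = math.log(mb), 1.0, 10000   # boundary: c_mag(m_b) = 1
h = (math.log(mc) - t) / n
for _ in range(n):
    k1 = rhs(t, c);       k2 = rhs(t + h/2, c + h*k1/2)
    k3 = rhs(t + h/2, c + h*k2/2); k4 = rhs(t + h, c + h*k3)
    c += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    t += h

assert abs(c - c_closed) < 1e-6      # ODE agrees with the closed form
print(f"c_mag(m_c) ~ {c:.3f}   (c_kin stays exactly 1)")
```

The closed form follows from $\mathrm{d}\ln c_\text{mag} = -(C_A/\beta_0)\,\mathrm{d}\ln\alpha_s$ at one loop; the RK4 integration is only a cross-check of the algebra, and the quoted numbers should not be read as a precision prediction.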
Finally, we can read off the RGE equations: \begin{subequations} \begin{alignat}{3} 0 &= \mu\dv{\mu} \frac{c_\text{kin}}{2\hspace{0.8pt} m_Q} &\qquad\Longrightarrow\qquad&& \mu\dv{\mu} c_\text{kin} &= 0\,, \\[5pt] 0 &= \mu\dv{\mu} \bqty{ \frac{c_\text{mag}}{4\hspace{0.8pt} m_Q}\left(1 - \frac{\alpha_s}{4\hspace{0.8pt}\pi}\, C_A \ln\frac{\mu^2}{m^2}\right) } &\qquad\Longrightarrow\qquad&& \mu\dv{\mu} c_\text{mag} &= \frac{\alpha_s}{2\hspace{0.8pt}\pi}\,C_A \,c_\text{mag} \,. \end{alignat} \end{subequations} These results agree with the literature as summarized in \cref{eq:RGESummary} above. \section{Conclusions} \label{sec:Conc} This paper provides the first application of functional methods augmented by the covariant derivative expansion to a kinematic EFT. In particular, we studied HQET --- a description that emerges in the limit that a Dirac fermion is very heavy compared to the scale being probed by the propagating fluctuations. We focused on the particular example of heavy quarks coupled to QCD, and reproduced a variety of matching and running results that had been previously computed using Feynman diagrammatic methods. A critical component was the existence of an operator valued set of projectors that are used to derive an equation of motion for the long distance modes that is valid to all-orders in the heavy mass expansion. Furthermore, the point at which RPI becomes manifest was emphasized. The primary result of this work was the matching master formula given in \cref{eqn:SHQET1loop}. There are two clear directions for future progress. First, one can now use these efficient methods to take one-loop matching calculations to higher order in the heavy mass limit. This would be relevant for high precision applications to either experiments such as LHCb and Belle~II, or to theory explorations comparing with lattice QCD calculations in the heavy mass limit~\cite{Heitger:2003nj, Blossier:2010jk}. 
Another interesting direction would be to understand how to use these methods for other kinematic EFTs, such as SCET, NRQCD, and others. Furthermore, these methods provide a compelling impetus to revisit the issue of higher-order functional integration directly in the path integral. If the relevant technology could be developed to the stage where two-loop (or higher) calculations could be implemented in a straightforward manner,\footnote{For promising work in this direction, see \Ref{Abbott:1980hw,Capper:1982tf}.} then these techniques could also help extend the precision of $\aS$ determinations. This paper makes the benefits of the functional approach to HQET clear, and furthermore opens the door to using these methods across a broader class of EFTs than had been previously explored. \acknowledgments We thank Zoltan Ligeti, Gil Paz, and Dean Robinson for useful discussions. TC and XL are supported by the U.S. Department of Energy (DOE), under grant DE-SC0011640. MF is supported by the DOE under grant DE-SC0010008 and partially by the Zuckerman STEM Leadership Program. TC and MF performed some of this work at the Munich Institute for Astro- and Particle Physics (MIAPP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2094 -- 390783311. MF thanks the Aspen Center for Physics, supported by National Science Foundation grant PHY-1607611, for hospitality while parts of this work were carried out.
\section{Introduction} \label{sec:intro} The identification of dark matter (DM) is a forefront problem in physics, as DM makes up most of the mass in the universe but its particle properties are unknown~\cite{Jun95, WMAP03, Ber04, Clo06, Fen10, Pet12}. As DM is only known to interact gravitationally, we have measured only its mass density averaged on astronomical distance scales. While the GeV-scale weakly interacting massive particle (WIMP) is a popular DM candidate, still quite allowed~\cite{Lea18}, other candidates from axions to primordial black holes, with masses spanning many orders of magnitude, are also allowed~\cite{Pre82, Cov01, CAST08, ADMX09, Wan09, Bir16, Car16, Car17, ADMX18, Mon19}. An important test of DM is its scattering cross section with nuclei. Direct-detection experiments are sensitive down to extremely small cross sections. In this limit, the coherent spin-independent cross section of DM with a nucleus is related to that with a nucleon by the scaling relation $\sigma_{\chi A} = A^2\,(\mu_{\chi A}/\mu_{\chi N})^2\sigma_{\chi N}$, where $\mu_{\chi A}$ and $\mu_{\chi N}$ are the DM-nucleus and DM-nucleon reduced masses. (For large $m_\chi$, this becomes $\sigma_{\chi A} \simeq A^4 \sigma_{\chi N}$.) But what are the {\it largest} cross sections that can be probed by direct-detection experiments? \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{limits_format_orange.pdf} \caption{Parameter space for spin-independent pointlike DM scattering, with previously claimed constraints overlaid by the new results of Ref.~\cite{Dig19} (we reproduce their Fig.~1). In the middle region, pointlike DM is allowed but the constraints must be recalculated. In the upper region, pointlike DM is not possible and the constraints are invalid. 
Strong interactions remain possible for composite DM, our focus below.} \label{fig:intro} \vspace{-0.25cm} \end{figure} Until recently, the maximum cross section to which a direct-detection search is sensitive --- the so-called ceiling --- was computed based on DM attenuation in the atmosphere and Earth~\cite{Sta90, Col93, Alb03, Foo14, Kou14, Emk17b, Kav17, Mah17, Sid18}. If the DM cross section with nuclei is too large, the DM will not reach the detector with enough energy, or at all. This has been modeled either analytically or numerically, typically assuming the aforementioned scaling relation. However, Ref.~\cite{Dig19} showed that this scaling fails for large cross sections, due to the breakdown of the first Born approximation, and even more significantly, that pointlike DM with a contact interaction cannot generally have a cross section much larger than the geometric size of the nucleus. (For pointlike DM and the lightest nuclear targets, model-dependent $s$-wave resonances with cross sections above this limit may be broad enough to significantly affect the event rate \cite{Dig19}, but these targets are typically not relevant.) This invalidates most prior calculations of the ceiling, and more generally of the excluded regions for high-cross section DM (e.g., Refs.~\cite{Alb03, Mac07, Khl07, Eri07, Khl08, Far17, Kav17, Mah17, Bho18b, Bra18, Emk18, Far20}). Figure~\ref{fig:intro} provides an overview. However, large cross sections remain possible for composite DM, for which the open parameter space is large (see, e.g., Refs.~\cite{Sta90, Jac14, Gra18, Sid18}). The properties of composite DM candidates are obviously more model-dependent than those of pointlike DM candidates. One can make the simple but reasonable assumption that the DM particle is opaque to nuclei, with a scattering cross section equal to the DM particle's geometric size, regardless of the nuclear target~\cite{Sta90, Jac14, Gra18, Sid18}. 
As shown in Ref.~\cite{Dig19}, this is roughly accurate in the limit of strong coupling. (Note that in this limit, the usual factors that relate the DM-nucleus and DM-nucleon cross sections --- the $A^2$ for coherence and the $\mu^2$ for the reduced mass squared --- are no longer present, so that the DM-nucleon cross section is equal to any DM-nucleus cross section.) We view this as the most model-independent approach, and it is conservative in a sense explained in Sec.~\ref{sec:results_results}; in the context of specific models, other possibilities could be explored. For elastic scattering of either pointlike or composite DM via a contact interaction, the total cross section is velocity-independent, although the kinematic range of the differential cross section does depend on velocity. Using the new framework discussed below, it would be interesting to reanalyze prior constraints on pointlike DM, but now for composite DM; this, however, is beyond our scope. In this paper, we derive new limits on composite strongly interacting DM using a novel detector operated at shallow depth at the University of Chicago. This setup, previously used to set limits on sub-GeV moderately interacting DM~\cite{Col18, carlos}, consists of two liquid-scintillator modules separated by 50 cm, one directly above the other. As we assume that the DM cross section is independent of the target nucleus, using a hydrogen-rich target material is ideal for a composite DM search because it maximizes the number of target nuclei per detector mass. Using hydrogen also allows us to probe DM-nucleon scattering directly, making it straightforward to translate our results to other models.
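For contrast with the strongly coupled regime just described, the Born-level scaling relation $\sigma_{\chi A} = A^2\,(\mu_{\chi A}/\mu_{\chi N})^2\,\sigma_{\chi N}$ quoted in the introduction can be made concrete numerically; a sketch with a xenon-like target (the GeV-scale masses are illustrative):

```python
# Born-level scaling of the coherent spin-independent cross section,
# sigma_{chi A} = A^2 (mu_{chi A} / mu_{chi N})^2 sigma_{chi N},
# and its heavy-DM saturation near A^4 (valid only for weak coupling).

def mu(a, b):
    """Reduced mass of a two-body system."""
    return a * b / (a + b)

m_N = 0.938                 # nucleon mass, GeV
A = 131                     # xenon-like mass number (illustrative target)
m_A = A * 0.931             # nuclear mass, GeV (amu approximation)

def enhancement(m_chi):
    """sigma_{chi A} / sigma_{chi N} under the Born-level scaling."""
    return A**2 * (mu(m_chi, m_A) / mu(m_chi, m_N))**2

assert enhancement(1e6) / A**4 > 0.9     # heavy DM: saturates near A^4
assert enhancement(1.0) < 0.01 * A**4    # light DM: far below A^4
```

This $\sim\!A^4$ enhancement is exactly what fails at large cross sections, where the first Born approximation breaks down.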
For the large masses and cross sections we probe, DM passing through both modules would interact many times in each with minimal change in direction, leading to a coincidence signal with time separation $\sim 2~\mu{\rm s}$, due to the low RMS velocity, $\sim 10^{-3} ~c$, of DM in the standard halo model (SHM)~\cite{Lew96, Cho13, Gre17}. Backgrounds due to cosmic rays and relativistic secondaries can also trigger both modules, but the time separation for these is prompt, $\sim 2$~ns. Fast neutrons can cause nuclear recoils in both modules with larger time separations, mimicking a DM signal. As shown empirically below, the total background rate is very low. The origins of the backgrounds are discussed further in Ref.~\cite{Col18}. In Sec.~\ref{sec:setup}, we describe our experimental setup, data collection, and relevant backgrounds. In Sec.~\ref{sec:results}, we analyze our data and report our constraints. In Sec.~\ref{sec:future}, we detail ways to probe additional parameter space with new experiments. In Sec.~\ref{sec:conclusions}, we summarize our conclusions. \section{Experimental Setup and Data} \label{sec:setup} The detector setup at the University of Chicago employs two low-background EJ-301 modules~\cite{eljen}, each containing 1.5 liters of xylene-based, hydrogen-rich liquid scintillator (with density 0.874 g/cm$^3$ and $4.82 \times 10^{22}$ H atoms/cm$^3$), to provide new experimental sensitivity to high-mass, strongly-interacting DM particles. Using EJ-301 allows us to discriminate between events arising from interactions involving electron recoils (ER), like those produced by gammas and minimum-ionizing particles, and those involving nuclear recoils (NR), as expected from the elastic scattering of neutral particles, be they fast neutrons or the sought-after DM candidates. This discrimination ability arises from the dissimilar scintillation decay constants for ERs and NRs in EJ-301, the latter favoring delayed emission~\cite{decays}.
For particles producing signals in both modules, we exploit the time-of-flight (TOF, $\Delta$t) between the modules. Slow-moving DM candidates obeying the kinematics expected from the SHM would generate highly characteristic signatures in $\Delta$t~\cite{Col18}. The methodology, experimental arrangement, detector calibrations, environmental backgrounds, as well as the active and passive shielding surrounding the modules in the shallow underground laboratory (6.25 m.w.e.), are described in full detail in Ref.~\cite{Col18}. Dedicated runs using this setup were performed for the present search. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{ErvsNr.pdf} \caption{Separation between ER-like (red) and NR-like (blue) depositions in individual EJ-301 modules, measured using gamma and neutron sources via the IRT method. See Ref.~\cite{Col18} for further details. This separation improves with increasing energy; above 100 keV$_{ee}$, it remains nearly identical to that shown in the bottom panel.} \label{fig:irt} \end{figure} Figure~\ref{fig:irt} shows the results of calibrations to measure the separation between ER-like and NR-like events. The energy calibration of the modules was obtained using full energy depositions from $^{241}$Am 59.5 keV gammas, the Compton edge from 2,091 keV $^{124}$Sb gammas, and distinct muon traversals along the vertical axis of the detectors, which deposit approximately 22 MeV in organic liquid scintillator modules of this size. NR calibrations are obtained using neutrons from a Pb-shielded AmBe source. Photomultiplier gain was reduced with respect to that employed in Ref.~\cite{Col18}, allowing for the detection of electron-equivalent (ee) energies ranging from 10 keV$_{ee}$ up to approximately 25 MeV$_{ee}$. For events depositing energies above 100 keV$_{ee}$, the promptest fraction of the scintillation pulse saturates the range of the 8-bit digitizer employed. 
This precluded a search for the characteristic pulse-shape distortion expected from a slow-moving particle losing energy continuously within a scintillating medium. However, this did not affect our ability to efficiently discriminate between particles losing energy via ERs or NRs, achieved over the full energy range via a modified integrated rise-time (IRT) method~\cite{luo, ronchi}. (For further details, see Ref.~\cite{Col18}.) Similarly, an ad-hoc modification of the energy scale above 100 keV$_{ee}$ was used to determine the magnitude of energy depositions partially saturating the digitizer. For purposes of extracting limits, the calculated nuclear recoil energies imparted by heavy DM were converted to electron-equivalent energies using a model of the quenching factor for proton recoils in EJ-301, described and validated in Ref.~\cite{awe}. The measured compound efficiency for identifying NR-like depositions in both modules (IRT $>$ 50 ns in each) is nominally 36.9\% for events with 10--35 keV$_{ee}$ in each module, and 85.9\% for higher energies (Fig.~\ref{fig:irt}). However, these efficiencies are conservative lower limits, as DM particles continuously losing energy via NRs would produce larger IRT values than the neutron scatters used in these calibrations, a result of their $\sim 300$-ns TOF through each module. In our analysis, we conservatively adopt these lower limits, with the remaining fractions of events being misidentified as electron-recoil events and rejected. Figure~\ref{fig:expresults} displays the distribution in $\Delta$t of events with NR-like scintillation deposits of greater than 10 keV$_{ee}$ in each module. In the first run, 69 days long (top panel), the lead shield entirely surrounding the modules was as described in Ref.~\cite{Col18}. In the second run, 58.2 days long (bottom panel), six lead bricks were removed from the top of the shield. 
In this case, all but a negligible fraction of straight trajectories traversing both modules go through the hexagonal opening created by this action, removing the attenuation caused by 15 cm of lead. The spike of events with $\lvert \Delta t \rvert < 1~\mu{\rm s}$ consists of prompt coincidences induced by relativistic particles; we exclude this background via a time cut. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{observedevents.pdf} \caption{TOF distributions for events above 10 keV$_{ee}$ collected during the two runs. As the heights of the four central bins (corresponding to prompt coincidences between modules) are outside the vertical range of the plot, we note their values. Red histograms show the maximum contributions from DM candidates (which are slow-moving) allowed by the data. Upward-moving DM particles are assumed to be stopped by the Earth, limiting our search to negative TOF values~\cite{Col18}.} \label{fig:expresults} \end{figure} We searched for DM particles with vertically downward trajectories through both detectors (as in Ref.~\cite{Col18}). The region of interest (ROI) is $-3.5~\mu{\rm s} < \Delta t < -0.8~\mu{\rm s}$, corresponding to the velocity distribution of dark matter particles in an Earth-bound laboratory \cite{Col18}. In the first and second runs, the numbers of viable candidate events in the ROI were zero and one, respectively (Fig.~\ref{fig:expresults}). This ROI corresponds to DM velocities between approximately 140 and 625 km/s, which includes over 90\% of the DM velocity distribution at Earth (that is, when the Sun's velocity through the DM halo is included). The $\lesssim$10\% inefficiency has a negligible effect on our results.
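As a quick cross-check of the quoted numbers, converting the ROI endpoints to speeds, taking the 50 cm separation between the modules as the flight baseline, reproduces the $\sim$140--625 km/s range:

```python
# TOF-to-speed conversion for the region of interest, using the 50 cm
# separation between the two modules as the flight baseline.
d = 0.50                            # module separation, m
v_slow = d / 3.5e-6 / 1e3           # km/s, from the |dt| = 3.5 us endpoint
v_fast = d / 0.8e-6 / 1e3           # km/s, from the |dt| = 0.8 us endpoint
assert abs(v_slow - 143) < 1 and abs(v_fast - 625) < 1
print(f"ROI covers ~{v_slow:.0f}-{v_fast:.0f} km/s")
```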
The mean expected background counts in this $\sim\!2.7~\mu{\rm s}$-wide ROI can be calculated based on the counts with $\lvert \Delta t \rvert > 1~\mu{\rm s}$, keeping in mind that random coincidences between uncorrelated energy depositions in each module give rise to a flat distribution of events~\cite{Col18}. (This sideband includes the ROI itself; the Feldman-Cousins approach accounts for possible signal contamination of the background estimate.) In the first and second runs, the numbers of such events were one and four, respectively. The background counts expected in the ROI are thus 1 event/18 $\mu{\rm s} \times 2.7~\mu{\rm s} = 0.15$ events for the first run, and 4 events/18 $\mu{\rm s} \times 2.7~\mu{\rm s} = 0.6$ events for the second (the 18-$\mu$s factor is the 20-$\mu$s width of the TOF search window minus the range $\lvert \Delta t \rvert < 1~\mu{\rm s}$ that is dominated by relativistic-particle backgrounds). Using a standard Feldman-Cousins approach~\cite{feldman}, these background expectations and the observed numbers of viable candidates can be translated into 90$\%$ confidence level (C.L.) intervals for the maximum number of DM candidates allowed by the datasets, yielding 2.9 and 4.4 events, respectively. Their expected characteristic TOF distributions~\cite{Col18} and the maximum daily rates of DM interactions allowed by the data are shown in Fig.~\ref{fig:expresults}. \section{Analysis and Results} \label{sec:results} Following the assumption above that DM is opaque to nuclei, we compute the event rate in the detector as a function of mass and cross section. To trigger the detector, the DM must scatter many times in each module. \subsection{Incident Dark Matter Rate} We assume that DM has the bulk properties predicted by the SHM, with a Maxwellian velocity distribution, a velocity dispersion of 270 km/s, and a local density of 0.3 GeV/cm$^3$~\cite{Lew96, Cho13, Gre17}.
The impact of altering these assumptions is minimal, as discussed below. We consider only DM particles arriving from above and passing through both detector modules (Earth is opaque to strongly interacting DM). Specifically, we require that a DM particle reaching the center bottom of the lower module pass through the top of the upper module. The cylindrical modules are 10 cm in height, 10 cm in diameter, and have 50 cm of space between them, so this requirement means the experimental setup is sensitive to a fraction $1.3 \times 10^{-3}$ of $4\pi$, and we thus accept this fraction of the total incoming DM flux. While this solid angle is small, for large enough cross sections, every DM particle passing through both modules would interact, so the event rate can still be high. If there were no attenuation above the detector and downgoing DM particles that passed through both modules triggered the detector, the rate would be $10^{11}\,({\rm GeV}/m_{\chi})$/day. This is lowered by more realistic assumptions, which we calculate below. \subsection{Attenuation} DM reaching the detector must pass through about 10 meters of water equivalent (m.w.e.) of atmosphere, as well as 6 feet ($\sim$6.25 m.w.e.) of concrete shielding above the laboratory. We model attenuation by assuming that DM particles travel along straight-line trajectories, with each particle suffering the average energy loss in each collision ($\cos\theta = 0$ in the CM frame), taking into account the loss of energy for the DM particle as it propagates. This formalism is widely used for computing DM attenuation, e.g., in Refs.~\cite{Sta90, Alb03, Kou14, Emk17b, Kav17, Sid18}, and is an excellent approximation for heavy DM.
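Both the acceptance fraction and the no-attenuation rate quoted above can be checked in a few lines. This is an illustrative sketch; the mean DM speed of $3\times 10^{7}$ cm/s ($\sim$300 km/s) used for the flux is our assumption:

```python
import math

RADIUS_CM, HEIGHT_CM, GAP_CM = 5.0, 10.0, 50.0

# A particle reaching the bottom center of the lower module must have
# entered through the top face of the upper module, 70 cm above it.
lever_arm = 2 * HEIGHT_CM + GAP_CM                 # 70 cm
half_angle = math.atan(RADIUS_CM / lever_arm)
fraction = (1.0 - math.cos(half_angle)) / 2.0      # Omega / (4*pi) ~ 1.3e-3

# No-attenuation rate for m_chi = 1 GeV: n * v * A * (Omega / 4*pi)
n_per_cm3 = 0.3                 # local DM density / (1 GeV)
v_cm_s = 3.0e7                  # assumed mean speed of ~300 km/s
area_cm2 = math.pi * RADIUS_CM**2
rate_per_day = n_per_cm3 * v_cm_s * area_cm2 * fraction * 86400.0
```

The solid-angle fraction evaluates to $\approx 1.3\times 10^{-3}$ and the rate to order $10^{11}$/day for a 1-GeV mass, consistent with the numbers in the text.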
The energy loss rate is then \begin{equation} \frac{dE}{dx} = - \sum_j n_j \sigma_{\chi} \langle \Delta E_j \rangle\,, \end{equation} where the sum is over different nuclei, $n_j$ is the number density of the $j$th nuclear species, and $\langle \Delta E_j \rangle$ is the average energy loss in a collision with a nucleus of species $j$. For $m_{\chi} \gg m_j$, and $E$ being the initial DM kinetic energy, the average energy change per scatter is \begin{equation}\label{DE} \langle \Delta E_j \rangle \simeq 2 E \, \frac{m_j}{m_{\chi}}\,. \end{equation} From the above equations, and because the cross section is set by the geometric size of the DM and therefore does not depend on the nuclear species, the energy loss depends only on the total mass column density of nuclei encountered by the DM, and not the elemental composition. Heavier nuclei have lower number density per unit mass density (by $1/A$), but also have more energy loss per collision (by $A$), such that the total stopping power depends just on the mass column density. Thus, in contrast to the usual case, a given mass column density of lead does not have significantly higher stopping power than the same mass column density of concrete. Approximating the energy loss as equal for all DM particles is well motivated for the high cross sections we consider. The DM particles reaching the detector would scatter hundreds or thousands of times while traversing the atmosphere on near-vertical trajectories, meaning the fractional 1$\sigma$ Poisson fluctuation in the number of collisions is at most a few percent. Similarly, for the high masses of concern, it is an excellent approximation to treat particle trajectories as straight lines. For $m_{\chi} \gg m_j$, the DM lab-frame trajectory is deflected by an angle $\theta \sim m_j/m_{\chi}$ in one scattering~\cite{Kav16}. As we show below, our analysis is sensitive to $m_{\chi} \gtrsim 10$ TeV, so $\theta \lesssim 10^{-3}$ for typical nuclei in the overburden.
The cumulative RMS deflection angle is then $\theta/\sqrt{N_{\rm scatt}} \lesssim 10^{-4}$, where $N_{\rm scatt} \sim 100$ is appropriate for the minimum cross sections of our allowed region. The amount of kinetic energy a DM particle retains after propagation through the atmosphere and detector shielding is a decaying exponential in $\sigma_{\chi}$, so a small change in $\sigma_{\chi}$ leads to a large fractional change in the amount of energy loss. For this reason, although we take into account the energy dependence of attenuation, simpler calculations would give similar results. That is, defining the ceiling to be the cross section where DM loses $10\%$, $50\%$, or $90\%$ of its energy would all give similar results to our procedure, where we compute the energy a DM particle deposits in the detector even if it has lost nearly all its energy before reaching the detector. This is different from calculations for lower masses, where large scattering angles and fluctuations in the number of collisions make detailed propagation codes necessary \cite{Emk17c, DMATIS, Cap19b}. \subsection{Signals in the Detector} For interactions in the detector modules, we conservatively consider only DM scattering with protons. With an RMS DM velocity of $\sim 10^{-3}c$, and DM much heavier than a proton, the typical individual proton recoil energy is $\sim 1$ keV (as can be seen from Eq.~\ref{DE} and derived elsewhere, e.g., Ref.~\cite{Mac07}). The equivalent light yield is reduced from this by a multiplicative quenching factor, which we take to be $20\%$ based on the EJ-301 response model from Ref.~\cite{awe}. In fact, the quenching factor depends on the recoil energy, but the fit in Ref.~\cite{awe} shows that it is actually above $20\%$ for the vast majority of collisions we consider (recoil energies from about 0.15--4 keV), so our assumption is conservative.
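The exponential attenuation and the typical recoil-energy scale both follow directly from Eq.~(\ref{DE}). The sketch below is our illustration; the gram-to-GeV conversion factor and the example cross section in the test are our choices, not values from the analysis:

```python
import math

GRAM_IN_GEV = 5.61e23    # 1 gram expressed in GeV (mass-energy units)


def energy_fraction_retained(sigma_cm2, m_chi_gev, column_g_cm2):
    """Integrate dE/dx = -2 sigma rho E / m_chi through a mass column
    density X (in g/cm^2): E(X) = E0 * exp(-2 sigma X / m_chi)."""
    return math.exp(-2.0 * sigma_cm2 * column_g_cm2 * GRAM_IN_GEV / m_chi_gev)


# Typical single proton recoil: <dE> = 2 E m_p/m_chi with E = m_chi v^2 / 2,
# i.e. <dE> = m_p v^2, independent of the DM mass for m_chi >> m_p.
M_P_KEV = 938.0e3                          # proton mass in keV
recoil_kev = M_P_KEV * (1.0e-3) ** 2       # v ~ 1e-3 c  ->  ~0.94 keV
quenched_kevee = 0.2 * recoil_kev          # ~0.19 keV_ee per proton recoil
```

The steep dependence on $\sigma_{\chi}$ is visible immediately: doubling the cross section squares the retained energy fraction.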
Including carbon recoils would have only a small effect on our results: although carbon recoils have about 12 times more kinetic energy, their scintillation emission is much more quenched, with a quenching factor of $\sim$1\%~\cite{awe}. Including carbon recoils would increase the total light yield in the detector by less than a factor of 2, which would hardly change the excluded region relative to the huge scale of the plot. For a DM particle passing through the detector, we sum the calculated electron-equivalent energy of all individual proton recoils to compute the total scintillation signal. We require a minimum total energy deposition of 10 keV$_{ee}$ in each module, which corresponds to $\sim 50$ collisions. It would take DM about 300 ns to pass through each module, which is comparable to the long component of the scintillation decay constant ($\sim 270$ ns) for EJ-301~\cite{eljen}, so we integrate the total deposited energy. As described in the previous section, we then use the NR-like particle identification and the time-of-flight criteria to discriminate between DM events and background events, ruling out DM parameter space that corresponds to too large an event rate. At the highest cross sections we consider, the distance between collisions becomes small enough that it is no longer a good approximation to sum the quenched energies of individual collisions, as the energy-deposition regions would overlap; however, in this limit, the energy deposited is vastly above threshold. \subsection{Results} \label{sec:results_results} Figure~\ref{fig:results} shows our constraints on composite DM. The red triangular region (``This Work'') is ruled out by the null results of our search. Parameter space to the left of the dashed line is excluded by both the run with lead shielding and the run without it. Parameter space to the right of the dashed line is excluded only by the run with full shielding.
Because the run with shielding had fewer observed events, we can exclude slightly more massive (and thus lower flux) DM. The slight indentation in the bottom-right corner is due to the aforementioned difference in discrimination efficiency for events above and below 35 keV$_{ee}$: DM particles depositing more than 35 keV$_{ee}$ are more efficiently distinguished from electron-recoil events than lower-energy DM particles, so at higher cross section our analysis is sensitive to slightly lower flux (and thus higher mass). The constraints we show are on the geometric size of an opaque, composite DM particle. They should not necessarily be thought of in the usual framework of spin-dependent and spin-independent scattering. (In any case, we directly probe the DM-proton cross section.) Those terms typically refer to specific operators used in nonrelativistic effective field theory, where many other operators are possible~\cite{Fit12}. We consider a DM particle with physical size much larger than the wavelength associated with the momentum transfer of the scattering, so the nucleus does not ``see" the entire DM state. However, we assume that the cross section saturates to the geometric size of the DM particle due to strong internal couplings of the DM composite state. For increasing DM size, there may be a model-dependent (e.g., see Refs.~\cite{App13, Har15, Chu18}) form factor that prevents this saturation. However, the moderate cross sections at the bottom edge of our exclusion region are already sufficient to cause an excess rate, and the rate nominally increases linearly with increasing cross section, so the rate should be quite high even with a form factor. In addition, with a form factor the diagonal edge could become much higher, as it has a strong exponential dependence on the cross section. In light of this, we view our assumptions as conservative. 
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{ExclusionRegion.pdf} \caption{Constraints (90\% C.L.) on composite DM. The red solid boundary is the exclusion region from our search (for the dashed red line, see the text). Also shown are other limits based on composite DM scattering with nuclei, in the Skylab satellite~\cite{Sta90} and in ancient underground mica~\cite{Pri86, Jac14}, plus one cosmological constraint based on how DM-proton scattering would affect the structure formation of Milky Way satellites~\cite{Nad20}. The usual limits on pointlike DM from underground detectors are shown, but we cut them off by the maximum allowed DM-nucleon cross section~\cite{Dig19}. The dotted line shows, just for comparison, where the internal density of the DM would be comparable to that of typical nuclei.} \label{fig:results} \end{figure} The underground-detector limits, derived assuming that $\sigma_{\chi A} \simeq A^4 \sigma_{\chi N}$, are shown cut off at the maximum allowed DM-nucleon cross section for typical nuclear targets~\cite{Dig19}, though even below this line their limits may be improperly derived. For pointlike DM, cross sections above this limit can be obtained for $s$-wave resonances, but these model-dependent resonances are quite narrow in energy range for all but the lightest nuclei, and are not usually considered for direct detection experiments. Our limits on composite DM should not be directly compared to limits on pointlike DM, but we show these pointlike limits for orientation. Really, the results from underground direct-detection experiments should be reanalyzed for composite DM both below this line (where the scaling is uncertain) and above it (where there may still be some sensitivity). The underlying issue is that those analyses used a DM-nucleus cross section that was far too large due to using the coherent scattering scaling relation between DM-nucleus and DM-nucleon cross sections.
However, instead using just the geometric cross section of the DM may still allow signal rates that are detectable above backgrounds. New work is needed to determine the actual exclusion regions for composite DM. The Skylab and mica results were explicitly reported as limits on composite DM, also assuming that the DM is opaque to nuclei~\cite{Sta90, Pri86, Jac14}. They are more directly constraints on $\sigma_{\chi A}$, but were recast as limits on $\sigma_{\chi N}$ under the assumption that $\sigma_{\chi A}$ = $\sigma_{\chi N}$ = $\sigma_{\chi}$, the geometric size of the DM particle. The cosmological constraints are derived by considering only DM-proton scattering~\cite{Nad20}. The right edge of our region is set by the DM density; beyond the edge, the number density of DM ($\rho_\chi/m_\chi$) becomes so low that there are simply no signal events within this exposure. The maximum mass we are sensitive to is roughly $m_\chi \simeq \rho v (A \Omega t) \simeq 4 \times 10^{12}$ GeV, where the latter factor is the total exposure, with $A$ being the cross sectional area of the top of the detector and $\Omega$ being the solid angle of acceptance. The diagonal edge is set by attenuation above the detector; beyond the edge, the flux of DM with any appreciable energy (or at all) vanishes exponentially. This is roughly determined by the cross section for which the total mass of scattered nuclei is comparable to the DM mass: $\sigma_{\chi} n_A m_A d\simeq m_{\chi}$, where $n_A$ is the average number density of nuclei of species $A$ and $d$ is the distance traveled through the atmosphere or shielding. With the atmosphere being about 10 m.w.e., we get $\sigma_{\chi}/m_{\chi} \simeq 10^{-27}$ cm$^2$/GeV. The bottom edge is set by the detector threshold; beyond the edge, the DM collisions are too few to deposit enough energy to trigger the detector at that threshold (which is set by the background rates). 
This is determined by the requirement that the DM scatter at least 50 times with protons in each module, so $\sigma_{\chi} \simeq 50/(n_H L) \simeq 10^{-22}$ cm$^2$, with $L$ being the length of a detector cell and $n_H$ being the proton density in the cell. The bottom edge is not flat, as explained in the next paragraph. Within the excluded region, the calculated DM event rates are generally enormous. For example, for a point near the center (mass $10^9$ GeV and cross section $10^{-20}$ cm$^2$), every DM particle would deposit about 8 MeV in each module and the rate of such depositions would be about 10$^4$/day. The bottom edge is not flat due to Poisson fluctuations. To trigger the detector, DM must deposit 50 keV in each module to produce 10 keV$_{ee}$ of scintillation light, taking into account the 20\% quenching factor. With an average proton recoil energy of 1 keV, this means a DM particle must scatter $\sim$50 times in each module. At the right end of the bottom edge, where the flux of DM is smallest, the minimum excluded cross section is determined by the cross section for which the Poisson expectation is indeed about 50 collisions. The exact value is calculated by requiring a certain number of signal events in the exposure, following the details given above. At lower mass, however, the flux increases as $1/m_\chi$, giving vastly more trials, making it probable that a smaller cross section and Poisson expectation can still give a Poisson yield of 50 or more collisions. At the left end of the bottom edge, roughly $10^8$ DM particles pass through both modules in $\sim$60 days, allowing a factor of a few improvement in the cross section (i.e., a Poisson expectation of $\sim$20 collisions is enough). Our results are stable to deviations within uncertainties of the assumed bulk properties of DM. A change in the DM density would affect only the right edge, and would not be visible on the scale of the figure. 
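The order-of-magnitude estimates for the three edges above can be collected in one place. This is a sketch under stated assumptions: the mean speed of $3\times 10^{7}$ cm/s, the $\sim$60-day run, and the hydrogen density of EJ-301 ($n_H \approx 4.8\times 10^{22}$ cm$^{-3}$) are our inputs for illustration:

```python
GRAM_IN_GEV = 5.61e23          # 1 gram expressed in GeV (mass-energy units)

# Right edge: m_max ~ rho * v * (A * Omega * t)
rho, v = 0.3, 3.0e7            # GeV/cm^3; cm/s (~300 km/s, assumed)
A, omega_frac, t = 80.0, 1.3e-3, 60 * 86400.0
m_max_gev = rho * v * A * omega_frac * t           # ~5e12 GeV

# Diagonal edge: sigma * rho * d ~ m_chi, with ~10 m.w.e. of atmosphere
column = 1000.0                                    # g/cm^2
sigma_over_m = 1.0 / (column * GRAM_IN_GEV)        # ~2e-27 cm^2/GeV

# Bottom edge: ~50 proton scatters over one module length
n_H, L = 4.8e22, 10.0                              # cm^-3 (EJ-301, assumed); cm
sigma_min = 50.0 / (n_H * L)                       # ~1e-22 cm^2
```

All three estimates reproduce the quoted edge values to within a factor of a few, which is the precision these scaling arguments are meant to provide.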
A change in the DM RMS velocity would primarily affect the bottom edge, but only slightly. A change in the shape of the DM velocity distribution for the same RMS velocity would affect primarily the diagonal edge, but again only slightly. Thus changes in the DM bulk properties like those discussed in Refs.~\cite{Lis11, Mao14, Boz16, Slo16, Nec18, Nec18b, Eva18, OHa19, Bes19}, the subject of much recent work, can be neglected. We assumed that DM arrives isotropically, which is not strictly true, as it arrives preferentially from the direction in which the Sun is moving around the galaxy (the so-called WIMP wind, e.g., Refs.~\cite{Sci09, Kav17}). However, the overhead direction in Chicago actually points into the WIMP wind for part of the day, making this assumption conservative \cite{Sci09}. Because changing the incoming flux by even a factor of a few would not visibly affect Fig.~\ref{fig:results}, we neglect this effect. For future work, we note some considerations that may become important eventually. Near the top right of our exclusion region, the diameter of the DM particle becomes comparable to interatomic spacing, meaning the DM may scatter with multiple nuclei at the same time. While this would not change the physical deposited energy, it could change the quenched energy. However, in this region of parameter space, the energy depositions are vastly above threshold. Near the bottom right of our exclusion region, below the line $\sigma_{\chi}/m_{\chi} \simeq 10^{-33}$ cm$^2$/GeV, DM particles may pass through Earth, allowing upgoing events, which would only increase the signal rate. \section{Extensions for Future Work} \label{sec:future} What would be needed to significantly expand coverage of the composite DM parameter space? We first focus on how a detector like ours could be improved, and then consider other options. 
The diagonal edge of the sensitivity region cannot be substantially improved for detectors near Earth's surface, as the DM flux is exponentially depleted by attenuation. (Some improvement should be possible by using a better digitizer for larger pulses, which would enable new capabilities to recognize slow-moving tracklike events.) The right edge, in contrast, could be greatly improved. In the setup above, the top area of each detector was $\sim 80$ cm$^2$, the runtime was $\sim 0.2$ yr, and the solid-angle penalty was $\sim 10^{-3}$. It is easy to imagine, as discussed below, how the exposure could be improved by several orders of magnitude. If the backgrounds can be kept near-negligible, this would improve the sensitivity of the right edge by the same factor. For the bottom edge, there is room to improve by lowering the threshold energy, which would require reducing backgrounds. This would be especially important for covering the gap between our sensitivity region and that of conventional underground detectors. Reanalyses of the latter taking into account the breakdown of the usual scaling relation would also help. A potential setup could consist of a horizontal checkerboard array of modules, with this array replicated at multiple heights. This would increase the area and solid angle exposure, and of course one could choose to run for a longer time. With modules at more than two heights, the required multiple coincidences would reduce uncorrelated backgrounds, which could allow a lower threshold per module. Greatly improved sensitivity could be achieved for moderate costs. Similar detectors probing lower cross sections may be sensitive not just to downgoing DM, but also to upgoing DM, in which case the angle-dependent attenuation in Earth must be treated properly and the sign of the time delay accounted for, though this would increase the signal rate by at most a factor of two. Other setups could help cover more of the parameter space. 
Significantly moving the diagonal edge would require a satellite or high-altitude balloon experiment. The column density of the overburden could realistically be reduced by a factor of $\sim 10^3$, which would increase the sensitivity by the same factor. (For example, note that the Skylab diagonal edge is about this much higher than ours.) Our exclusion region is already partially covered by the cosmological constraints~\cite{Nad20}, though direct experimental sensitivity is always preferred. Another low-threshold surface detector, a special run of CRESST~\cite{Ang17}, should be reanalyzed for composite DM, as should DAMA's search for strongly interacting DM~\cite{Ber99}, and constraints from planetary heat flow~\cite{Mac07, Bra19} or heating of celestial bodies~\cite{Kou10, Bar17, Bel19, Das19, Bel20, Das20}. It may be fruitful to consider large underground detectors, as done in Ref.~\cite{Bra18b}. While their diagonal edges could be as much as $\sim 10^2$ times lower than ours (for detectors at thousands of m.w.e.\ depth), those could still nearly meet the cosmological constraints~\cite{Nad20}. Such detectors would have the advantage of a huge exposure. In the ideal situation, a highly segmented large detector would be an expanded version of the stacked arrays of modules discussed above. Data from other detectors with tracking capability, such as the proposed MATHUSLA~\cite{Lub19}, could be analyzed for such a search. It may be possible to effectively achieve imaging of tracklike events in homogeneous detectors with excellent position resolution, such as the time-projection chambers used in many large DM experiments~\cite{Akerib_2017, Aprile_2017, Calvo_2017, Agnes_2018, Akerib_2020}, bubble chambers like PICO-60~\cite{Amo17} or perhaps even DUNE~\cite{abi2020deep}. Also potentially sensitive to this parameter space are certain sub-detectors of the MACRO experiment~\cite{Amb02}.
In some other detectors such as Borexino~\cite{Alimonti_2009} or SNO+ \cite{And15}, the isotropized scintillation light usually makes directional reconstruction difficult, but that changes for DM particles that leave long, distinctive tracks that develop $\sim 10^3$ times more slowly than muon tracks~\cite{bramante_2019}. \section{Conclusions} \label{sec:conclusions} To discover DM, we must search broadly. In an interesting class of models, DM has evaded detection by interacting too strongly with ordinary matter, instead of too weakly, as commonly assumed. The parameter space of pointlike strongly interacting DM has largely been eliminated by the theoretical considerations of Ref.~\cite{Dig19}, superseding much prior work~\cite{Alb03, Mac07, Khl07, Eri07, Khl08, Far17, Kav17, Mah17, Bho18b, Bra18, Emk18, Far20}. However, it also highlights the open parameter space for composite strongly interacting DM~\cite{Sta90, Jac14, Gra18, Sid18}, for which prior results on pointlike DM could be reanalyzed. In this paper, we present experimental tests of composite DM based on a dedicated search using a novel detector at the University of Chicago. This detector, while small and near the surface, has powerful sensitivity, covering large regions of the parameter space that had not yet been probed. We did not find DM candidates, and accordingly set limits. Our analysis shows that for composite DM, terrestrial detectors can probe cross sections above cosmological constraints~\cite{Nad20}, reducing the need for rocket- or space-based experiments. At moderate costs, an improved version of our detector could have greatly improved sensitivity, potentially extending significantly lower in cross section and several orders of magnitude higher in DM mass. We have also detailed how progress could be made more generally, so that strongly interacting composite DM could be probed over a much wider parameter space. It is important to do this systematically. 
While a strongly interacting DM particle might seem unlikely relative to typical theoretical prejudices, those prejudices have also not led us to any discovery of DM. A strongly interacting DM particle could also enable a host of new laboratory studies. New analyses of existing data will be an important part of this, and we encourage them. \section*{Acknowledgments} We are grateful to Carlos Blanco, Joseph Bramante, Matthew Digman, Timon Emken, Brian Fields, Chris Hirata, Bradley Kavanagh, Jason Kumar, Ben Lillard, Annika Peter, Nirmal Raj, Anupam Ray, Juri Smirnov, and Xingchen Xu for helpful comments and discussions. We also thank the anonymous referee for helpful comments that improved the paper. CVC and JFB were supported by NSF grant Nos.\ PHY-1714479 and PHY-2012955. JIC was supported by NSF grant No.\ PHY-1506357.
\section{Introduction} The efficient modeling of the long length- and time-scale dynamics of complex liquids such as colloidal and polymeric suspensions requires a simplified, coarse-grained description of the solvent degrees of freedom. A recently introduced particle-based simulation technique \cite{male_99}---often called stochastic rotation dynamics (SRD) \cite{ihle_01,ihle_03a,ihle_04,tuze_03,kiku_03,pool_05} or multi-particle collision dynamics \cite{lamu_01,ripo_04}---is a very promising algorithm for mesoscale simulations of this type. In addition to its numerical advantages, the algorithm enables simulations in the microcanonical ensemble, and fully incorporates both thermal fluctuations and hydrodynamic interactions. Furthermore, its simplicity has made it possible to obtain analytic expressions for the transport coefficients which are valid for both large and small mean free paths, something that is often very difficult to do for other mesoscale particle-based algorithms. This algorithm is particularly well suited for studying phenomena with Reynolds and Peclet numbers of order one, and it has been used to study the behavior of polymers \cite{kiku_02,ripo_04}, colloids \cite{male_99,lee_04,padd_04,hecht_05}, vesicles in shear flow \cite{nogu_04}, and complex fluids \cite{hash_00,saka_02a}. The original SRD algorithm models a fluid with an ideal gas equation of state. The fluid is therefore very compressible, and the speed of sound, $c_s$, is low. In order to have negligible compressibility effects, as in real liquids, the Mach number has to be kept small, which means that there are limits on the flow velocity in the simulation. It is therefore important to explore ways to extend the algorithm to model dense fluids. Our approach starts from what has been a common theme of most liquid theories, namely the separation of intermolecular forces into short- and long-range parts, which are then treated differently.
The short-range component is a strong repulsion when molecules are close together; it leads to excluded volume effects which cause a decrease in the fluid's compressibility and eventual crystallization at low temperatures or high density. The long-range component is a weak attraction which can lead to a liquid-gas transition. The generic reference system for the short-range repulsive component of the force is the hard sphere system. In this letter we show how the SRD algorithm can be modified to model excluded volume effects, allowing for a more realistic modeling of dense gases and liquids. This is done in a thermodynamically consistent way by introducing generalized excluded volume interactions between the fluid particles. The algorithm can be thought of as a coarse-grained multi-particle collision generalization of a hard sphere fluid, since, just as for hard spheres, there is no internal energy. In order to simplify the analysis of the equation of state and the transport coefficients, and enhance computational efficiency, the cell structure of the original SRD algorithm is retained. This work is a first step towards developing consistent particle-based algorithms for modeling, in the microcanonical ensemble, more general liquids with additional attractive interactions and a liquid-gas phase transition. \section{Model} As in the original SRD algorithm, the solvent is modeled by a large number $N$ of point-like particles of mass $m$ which move in continuous space with a continuous distribution of velocities. The system is coarse-grained into $(L/a)^d$ cells of a $d$-dimensional cubic lattice of linear dimension $L$ and lattice constant $a$. The algorithm consists of individual streaming and collision steps.
In the free-streaming step, the coordinates, ${\bf r}_i(t)$, of the solvent particles at time $t$ are updated according to ${\bf r}_i(t+\tau)= {\bf r}_i(t) + \tau {\bf v}_i(t)$, where ${\bf v}_i(t)$ is the velocity of particle $i$ at time $t$ and $\tau$ is the value of the discretized time step. In order to define the collision, we introduce a second grid with sides of length $2a$ which (in $d=2$) groups four adjacent cells into one ``supercell''. As discussed in Refs. \cite{ihle_01} and \cite{ihle_03a}, a random shift of the particle coordinates before the collision step is required to ensure Galilean invariance. All particles are therefore shifted by the {\it same} random vector with components in the interval $[-a,a]$ before the collision step (because of the supercell structure, this is a larger interval than in the conventional SRD algorithm). Particles are then shifted back by the same amount after the collision. To initiate a collision, pairs of cells in every supercell are randomly selected. As shown in Fig. 1, three different choices are possible: a) horizontal (with $\mbox{\boldmath $\sigma$}_1=\hat x$), b) vertical ($\mbox{\boldmath $\sigma$}_2 =\hat y$), and c) diagonal collisions (with $\mbox{\boldmath $\sigma$}_3= (\hat x+\hat y)/\sqrt{2}$ and $\mbox{\boldmath $\sigma$}_4=(\hat x-\hat y)/\sqrt{2}$). Note that diagonal collisions are essential to equilibrate the kinetic energies in the $x-$ and $y-$directions. \begin{figure} \twofigures[width=2.5in]{supercollision.eps}{D_vs_taukT_inset.eps} \caption{Schematic of collision rules. Momentum is exchanged in four ways, a) horizontally along $\mbox{\boldmath $\sigma$}_1$, b) vertically along $\mbox{\boldmath $\sigma$}_2$, c) diagonally and d) off-diagonally along $\mbox{\boldmath $\sigma$}_3$ and $\mbox{\boldmath $\sigma$}_4$ respectively, according to Eq. (\ref{NONID2}).
$w$ and $w_d$ denote the probabilities of choosing collisions a), b) and c), d) respectively.} \label{fig_rules} \caption{Diffusion coefficient as a function of $\tau$. The data points in the inset show the shear viscosity, measured using Green-Kubo relations, as a function of $\tau k_BT$. The solid line shows the analytical result, Eq. (\ref{EXACT_DIFF}). Parameters: $L=64a$, $M=5$, $k_BT=1.0$ and $A=1/60$.} \label{fig_diffnu} \end{figure} In every cell, we define the mean particle velocity, \begin{equation} {\bf u}_n={1\over M_n}\,\sum_{i=1}^{M_n}\,{\bf v}_i, \end{equation} where the sum runs over all particles, $M_n$, in the cell with index $n$. The projection of the difference of the mean velocities of the selected cell-pairs on ${\bf \sigma}_j$, $\Delta u={\bf \sigma}_j\cdot ({\bf u}_1-{\bf u}_2)$, is then used to determine the probability of collision. If $\Delta u<0$, no collision will be performed. For positive $\Delta u$, a collision will occur with an acceptance probability which depends on $\Delta u$ and the number of particles in the two cells, $M_1$ and $M_2$. This rule mimics a hard-sphere collision on a coarse-grained level: For $\Delta u>0$ clouds of particles collide and exchange momenta. For reasons discussed in the following, we have used the acceptance probability \begin{equation} \label{NONID0} p_A(M_1,M_2,\Delta u)=\Theta(\Delta u)\,\,{\rm tanh}(\Lambda) \ \ \ \ {\rm with}\ \ \ \ \Lambda = A\,\Delta u\, M_1M_2 , \end{equation} where $\Theta$ is the unit step function and $A$ is a parameter which allows us to tune the equation of state. The hyperbolic tangent function was chosen in (\ref{NONID0}) in order to obtain a probability which varies smoothly between 0 and 1. Once it is decided to perform a collision, an explicit form for the momentum transfer between the two cells is needed.
The collision should conserve the total momentum and kinetic energy of the cell-pairs participating in the collision, and in analogy to the hard-sphere liquid, the collision should primarily transfer the component of the momentum which is parallel to the connecting vector $\mbox{\boldmath $\sigma$}_j$. In the following, this component will be called the parallel or longitudinal momentum. There are many different rules which fulfill these conditions. Our goal here is to obtain a large speed of sound. We therefore use a collision rule which leads to the maximum transfer of the parallel component of the momentum and does not change the transverse momentum. The rule is quite simple; it exchanges the parallel component of the mean velocities of the two cells, which is equivalent to a ``reflection'' of the relative velocities, \begin{equation} \label{NONID2} v_i^{\Vert}(t+\tau)-u^{\Vert}=-(v_i^{\Vert}(t)-u^{\Vert})\,, \end{equation} where $u^{\Vert}$ is the parallel component of the mean velocity of the particles of {\it both} cells. The perpendicular component remains unchanged, \begin{equation} v_i^{\perp}(t+\tau)=v_i^{\perp}(t). \end{equation} It is easy to verify that these rules conserve momentum and energy in the cell pairs. Because of $x-y$ symmetry, the probabilities for choosing cell pairs in the $x-$ and $y-$ directions (with unit vectors $\mbox{\boldmath $\sigma$}_1$ and $\mbox{\boldmath $\sigma$}_2$ in Fig. 1) are equal, and will be denoted by $w$. The probability for choosing diagonal pairs ($\mbox{\boldmath $\sigma$}_3$ and $\mbox{\boldmath $\sigma$}_4$ in Fig. 1) is given by $w_d=1-2w$. $w$ and $w_d$ must be chosen so that the hydrodynamic equations are isotropic and do not depend on the orientation of the underlying grid. This can be done by considering the temporal evolution of the lowest moments of the velocity distribution function. 
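As a concrete illustration, the acceptance test of Eq. (\ref{NONID0}) and the reflection rule of Eq. (\ref{NONID2}) can be sketched in a few lines. This is a minimal sketch, not a full implementation: array layout, function names, and the handling of random numbers are our own choices.

```python
import numpy as np

def collide_pair(v1, v2, sigma, A, rng):
    """Attempt one stochastic collision between two cells of a supercell.

    v1, v2 : (M1, 2) and (M2, 2) arrays of particle velocities
    sigma  : unit vector connecting the two cells (sigma_1 ... sigma_4)
    A      : prefactor of the acceptance probability, Eq. (NONID0)
    """
    M1, M2 = len(v1), len(v2)
    u1, u2 = v1.mean(axis=0), v2.mean(axis=0)
    du = sigma @ (u1 - u2)          # projected mean-velocity difference
    if du <= 0.0:                   # Theta(du): only approaching clouds collide
        return v1, v2
    if rng.random() >= np.tanh(A * du * M1 * M2):   # acceptance probability p_A
        return v1, v2
    # parallel component of the mean velocity of *both* cells
    u_par = (M1 * (u1 @ sigma) + M2 * (u2 @ sigma)) / (M1 + M2)
    v1, v2 = v1.copy(), v2.copy()
    for v in (v1, v2):
        # reflect the relative parallel velocity, v_par -> 2*u_par - v_par;
        # the transverse component is left untouched
        v += np.outer(2.0 * (u_par - v @ sigma), sigma)
    return v1, v2
```

Because the parallel velocity components are merely reflected about their common mean, one can check directly that the rule conserves the total momentum and kinetic energy of the pair.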
It is sufficient to consider the following three moments for a single particle $i$, \begin{equation} \Psi_i(t)= \left( \begin{array}{c} \langle v_{ix}^2(t) \rangle \\ \langle v_{iy}^2(t) \rangle \\ \langle v_{ix}(t)\,v_{iy}(t)\rangle \end{array} \right) . \end{equation} Assuming molecular chaos, the collision rules can be used to determine the eigenvalues of the relaxation matrix, $R$, defined by $\Psi_i(t+\tau)=R\,\Psi_i(t)$. Because of the conservation of energy, one of the three eigenvalues of $R$ is equal to one; the other two are given by $\lambda_1=w_d+2w(2/M-1)$ and $\lambda_2=2w+w_d(2/M-1)$, where $M$ is the average number of particles per cell. Isotropy requires that $\lambda_1=\lambda_2$, a condition that can be fulfilled for arbitrary $M$ only if $w_d=1/2$ and $w=1/4$. Simulations show that both the speed of sound and the shear viscosity are isotropic for this choice. Note, however, that this does not guarantee that all properties of the model are isotropic. This becomes apparent at high densities or high collision frequency, $1/\tau\gg 1$, where inhomogeneous states with cubic or rectangular order can be observed (see Fig. 4 and accompanying discussion). \section{Transport coefficients} The transport coefficients can be determined using the same Green-Kubo formalism as was used for the original SRD algorithm \cite{ihle_01, ihle_03a,ihle_04}. In particular, the kinematic shear viscosity is given by \begin{equation} \label{shear_v} \nu = \frac{\tau}{Nk_BT}\left.\sum_{n=0}^\infty\right.' \langle S_{xy}(0)S_{xy}(n\tau)\rangle, \end{equation} where \begin{equation} \label{BCOR1} S_{xy}(n\tau)=\sum_{j=1}^N\,\left( v_{jx}(n\tau)\Delta\xi_{jy}(n\tau) + \Delta v_{jx}(n\tau)[ \Delta\xi^s_{jy}(n\tau)-z^s_{jly}([n+1]\tau)/2]\right) \end{equation} is the off-diagonal element of the stress tensor ${\bf S}$. 
${\mbox{\boldmath $\xi$}}_j(t)$ and ${\mbox{\boldmath $\xi$}}^s_j(t)$ are the cell coordinates of particle $j$ in the fixed and {\it shifted} frames at time $t$, respectively, $\Delta{\mbox{\boldmath $\xi$}}_j(t)={\mbox{\boldmath $\xi$}}_j(t+\tau)-{\mbox{\boldmath $\xi$}}_j(t)$, $\Delta{\mbox{\boldmath $\xi$}}^s_j(t)={\mbox{\boldmath $\xi$}}_j(t+\tau)-{\mbox{\boldmath $\xi$}}_j^s(t+\tau)$, and $\Delta{\bf v}_j(t) = {\bf v}_j(t+\tau)-{\bf v}_j(t)$. ${\bf z}^s_{jl}$ indexes pairs of cells which participate in a collision event; the second subscript, $l$, is the index of the collision vectors $\mbox{\boldmath $\sigma$}_l$ listed in Fig. 1. For example, for collisions characterized by $\mbox{\boldmath $\sigma$}_1$, $z^s_{j1x}=1$ if $\xi^s_{jx}$ in (\ref{BCOR1}) is one of the two cells on the left of a supercell and $z^s_{j1x}=-1$ if $\xi^s_{jx}$ is on the right hand side of a supercell; all other components of ${\bf z}^s$ are zero. In general, the components of ${\bf z}^s_{jl}$ are either $0$, $1$, or $-1$. Using $\{{\bf z}^s_{jl}\}$, the collision invariants of the model can be written as \begin{equation} \label{cons} \sum_j \left( {\rm e}^{i{\bf k}\cdot{\mbox{\boldmath \tiny $\xi$}}^s_j(t+\tau)} + {\rm e}^{i{\bf k}\cdot({\mbox{\boldmath \tiny $\xi$}}^s_j(t+\tau) + {\bf z}^s_{jl}(t+\tau))} \right) [a_{\beta,j}(t+\tau) - a_{\beta,j}(t)]=0, \end{equation} where $a_{1,j}=1$ for the density, $\{a_{\beta,j}\}=\{v_{\beta-1,j}\}$ are components of the particle momentum, and $a_{d+2,j}=v_j^2/2$ is the kinetic energy of particle $j$ \cite{ihle_03a}. The analogous collision invariants for the standard SRD algorithm are given in Eq. (25) of \cite{ihle_03a}. The vectors ${\bf z}^s$ are constructed so that the sum of the two exponentials in (\ref{cons}) is the same for two particles if and only if they are in partner cells in a collision with index $l$ (see Fig. 1). The self-diffusion constant $D$ is given by a sum over the velocity-autocorrelation function (see, e.g. Eq. 
(102) in \cite{ihle_03a}) and can be evaluated analytically assuming molecular chaos. Due to the excluded volume interactions, density fluctuations are suppressed in the current algorithm; ignoring these fluctuations, one finds \begin{equation} \label{EXACT_DIFF} D=k_B T\,\tau\left( {1\over A}\,\sqrt{\pi \over k_B T}\;{M^{-3/2} \over 1+1/(8M) } -{1 \over 2} \right)\,, \end{equation} which is in good agreement with simulation data, see Fig. 2. \section{Equation of state} The collision rules conserve the kinetic energy, so that the internal energy of our system should be the same as that of an ideal gas. Thermodynamic consistency requires that the non-ideal contribution to the pressure is linear in $T$. As will be shown, this is possible if the coefficient $A$ in (\ref{NONID0}) is chosen small enough (see Fig. 3). We use here the mechanical definition of pressure---the average longitudinal momentum transfer across a fixed interface per unit time and unit surface area---to determine the equation of state. We consider only the momentum transfer due to collisions, since that coming from streaming constitutes the ideal part of the pressure. Take an interface that is parallel to the $y-$axis and consider the component $p_{xx}$ of the pressure tensor. Only collisions with label $l=1$, $3$, and $4$ of the collision vector $\mbox{\boldmath $\sigma$}_l$ in Fig. 1 contribute to the momentum transfer in this case. Consider the contribution to the momentum transfer across the cell boundary from collisions with $l=1$. For a fixed number of particles, $M_1$ and $M_2$, in the two cells, the thermal average of the momentum transfer, $ \Delta G_x $, across the dividing line is \begin{equation} \label{PRESS3} \langle \Delta G_x \rangle={w\over 2} \int_0^\infty\,p_G(\Delta u)\, p_A(M_1,M_2,\Delta u) \Delta G_x \,d(\Delta u)\,. 
\end{equation} The factor $1/2$ comes from the position average of the dividing line, since the collision occurs in the shifted cells, and the integral is restricted to positive $\Delta u$ because the acceptance rate is zero for $\Delta u<0$. $p_G(\Delta u)$ is the probability that $u_{1x}-u_{2x}$ for the micro-state of two cells is equal to $\Delta u$. $w=1/4$ is the probability of selecting this collision. \begin{figure} \twofigures[width=2.5in]{pressure_vs_kTtau.eps}{conf_tau0.001_M5_kT3.125e-5.eps} \caption{Non-ideal pressure, $P_n$, as a function of $k_BT/\tau$ averaged over $10^5$ time steps. Both $k_B T$ and $\tau$ ranged from $0.005$ to $4$. The line represents the theoretical expression, Eq. (\ref{PRESS_MAV}). For $\tau=0.005$ and $k_B T=1$, $P_n$ is five times larger than $P_{id}$. Parameters: $L=64a$, $M=5$, $A=1/60$.} \label{PRESS_FIG} \caption{Freezing snapshot after $10^6$ time steps. Parameters: $L_x=L_y=32a$, $M=5$, $k_BT=3.125\times10^{-5}$, $\tau=10^{-3}$, $A=1/60$.} \label{fig_freezing} \end{figure} Expanding the acceptance probability, Eq. (\ref{NONID0}), in $\Lambda\equiv A\;\Delta u\;M_1\, M_2$ leads to \\ $p_A(M_1,M_2,\Delta u)=\Theta(\Delta u) (\Lambda-\Lambda^3/3+...)$. The contributions to the pressure from all terms of this series can be calculated, but since the resulting contribution to the pressure from a term proportional to $\Lambda^n$ is of order $T^n$, we restrict ourselves to the first term. The resulting contribution to the pressure, $P(\mbox{\boldmath $\sigma$}_1,M_1,M_2)$, for fixed $M_1$ and $M_2$ is the average momentum transfer per unit area and unit time, so that using Eq. (\ref{PRESS3}), we have $P(\mbox{\boldmath $\sigma$}_1,M_1,M_2)=w\, A\, k_B T\, M_1 \,M_2/(2a\tau) + O(A^3T^2)$. A similar calculation can be performed for the contributions from the diagonal collisions, which occur with the probability $w_d$. 
Using $w=1/4$ and $w_d=1/2$ and averaging over the number of particles per cell (assuming that they are Poisson-distributed and that the particle number distributions in adjacent cells are not correlated), one finds the non-ideal part of the pressure, \begin{equation} \label{PRESS_MAV} P_n=P_{id}\,\left({1 \over 2 \sqrt{2}}+{1 \over 4} \right) {A\,M \over 2} {a \over \tau} + O(A^3T^2) \,, \end{equation} where $P_{id}=k_B T\,M/a^2$ is the ideal gas contribution to the pressure (in $d=2$). Note that the same result is obtained if, instead of averaging over $M_1$ and $M_2$, we simply set $M_1=M_2=M$, the average number of particles per cell. $P_n$ is quadratic in the particle density, $\rho=M/a^2$, as one would expect from a virial expansion. The prefactor $A$ must be chosen small enough that higher order terms in this expansion are negligible. We have found that prefactors $A$ leading to acceptance rates of about $20\%$ are sufficiently small to guarantee that the pressure is linear in $T$ (see Fig. 3). In order to measure $P_n$, we have used the fact that the average of the diagonal part of the microscopic stress tensor gives the virial expression for the pressure \begin{equation} \label{VIRIAL} P=P_{id}+P_n = \left\langle \sum_j \left\{ v_{jx}\Delta \xi_{jx}+\Delta v_{jx}\left[\Delta \xi^s_{jx}- z^s_{jlx}/2\right] \right\} \right\rangle . \end{equation} The first term, $\langle v_{x,j}\Delta \xi_{x,j} \rangle= \langle \tau v_{x,j}^2 \rangle$, gives $P_{id}$, as discussed in Ref. \cite{ihle_03a}. The average over the second term vanishes (see Ref. \cite{ihle_03a}), while the average of the third term is the non-ideal part of the pressure, $P_n$. Simulation results for $P_n$ obtained using (\ref{VIRIAL}) are in good agreement with the analytical expression, (\ref{PRESS_MAV}) (see Fig. \ref{PRESS_FIG}). 
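The claim that $P_n$ stays linear in $T$ for small $A$ can also be checked numerically without any expansion: for two cells of $M$ particles, $\Delta u$ is Gaussian with variance $2k_BT/M$, and the collisional average $\langle \tanh(A M^2 \Delta u)\,\Delta u\,\Theta(\Delta u)\rangle$ entering $P_n$ should double when $k_BT$ doubles. The following sketch does this by simple quadrature; the grid resolution and the parameter values are our own illustrative choices.

```python
import numpy as np

def momentum_average(kT, M=5, A=1/60, n=200001):
    """< tanh(A*M^2*du) * du * Theta(du) > for du ~ N(0, 2*kT/M).

    Up to geometric prefactors, this is the collisional momentum
    transfer entering P_n; at leading order in A it equals A*M*kT,
    i.e. it is linear in the temperature.
    """
    var = 2.0 * kT / M
    du = np.linspace(0.0, 10.0 * np.sqrt(var), n)
    gauss = np.exp(-du**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    f = np.tanh(A * M**2 * du) * du * gauss
    return f.sum() * (du[1] - du[0])    # simple Riemann sum
```

For $A=1/60$ the ratio of the averages at $k_BT=0.1$ and $k_BT=0.05$ is 2 to well within a percent, while for a deliberately large prefactor the tanh saturates and the average grows only like $\sqrt{T}$, illustrating why $A$ must be kept small.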
In addition, measurements of the density fluctuations, $\langle |\rho_k|^2 \rangle$, at small wave vectors ${\bf k}$, as well as results for the adiabatic speed of sound obtained from simulations of the dynamic structure factor, are both in good agreement with the predictions following from Eq. (\ref{PRESS_MAV}). These results provide strong evidence for the thermodynamic consistency of the model. \section{Caging and order/disorder transition} If the non-ideal part of the pressure is large compared to the ideal pressure, ordering effects can be expected. For small $A$, both contributions to the pressure are proportional to the temperature, so that just as in a real hard-sphere fluid, changing the temperature does not lead to an order/disorder transition. On the other hand, the two contributions to the pressure have different dependencies on the density and time step, $\tau$. In fact, $\tau$ can be interpreted as a parameter describing the efficiency of the collisions; lowering $\tau$ results in a higher collision frequency, and has a similar effect to making the spheres larger in a real hard-sphere system. We therefore expect caging and ordering effects if either $\rho$ is increased or $\tau$ is decreased. This is indeed the case. For $\tau< 0.0016$, $\rho=5$ and $k_BT= 1.0$, an ordered cubic state is observed. The cubic symmetry of the ordered state is clearly an artifact of the cubic cell structure, and it would be interesting to see if this could be removed by using a hexagonal cell structure or incorporating random rotations of the grid. One of the surprising features of this crystalline-like state is that $x-y$ symmetry can be broken. Furthermore, there is the possibility of having several metastable crystalline states corresponding to slightly different lattice constants and number of particles per ``cloud''. 
As expected, the lattice constants of these ordered states are slightly smaller than the super-cell spacing, $2a$, which sets the range of the multi-particle interaction. In this state, the diffusion coefficient becomes very small; particles are caged and can barely leave their location. To understand this behavior, note that without collisions, particle clouds will broaden due to streaming; this happens faster the higher the temperature. Due to the grid shift, particles at the perimeter of the clouds will more often undergo collisions with neighbor clouds. These collisions backscatter the particles, forcing them to fly back towards the center of their cloud. There is a correlation between a particle's distance from the cloud center and the rate at which it is backscattered, leading to stable cloud formation. A particle which is left alone between clouds will feel repulsion from all clouds and move around very quickly until it is absorbed into a cloud. \section{Conclusion} The model presented in this letter is the first extension of the SRD algorithm to model fluids with a non-ideal equation of state. It was shown that the model is thermodynamically consistent for the correct choice of acceptance probabilities and reproduces the correct isotropic hydrodynamic equations at large length scales. Expressions for the equation of state and the self-diffusion constant were derived and shown to be in good agreement with numerical results. Simulation results for the kinematic viscosity were presented, and it was shown that there is an ordered state for large densities and collision frequencies. A detailed analysis of the transport coefficients will be presented elsewhere. \section{Acknowledgement} Support from the National Science Foundation under Grant No. DMR-0513393 and ND EPSCoR through NSF grant EPS-0132289 is gratefully acknowledged. We thank A.J. Wagner for numerous discussions.
\section*{Highlights} \begin{itemize} \item Si contamination during high-temperature thin-film chalcogenide synthesis is studied. \item A thin TiN layer was used as a diffusion barrier and recombination layer. \item No evidence of electrically active defects in Si after CZTS processing. \item A first proof-of-concept monolithic CZTS-on-Si tandem cell was fabricated. \item No significant loss in minority carrier lifetime of the silicon bottom cell. \end{itemize} \end{document} \section{Introduction} The current global uptake of photovoltaic (PV)-based solar energy has been enabled by the remarkable developments in crystalline silicon (c-Si) solar cell technologies, both in terms of module efficiencies and costs, with market shares consistently around 90\% for decades – a figure which is expected to remain unchanged in the near future \cite{osti_1344202, VDMA2018, MarkusFischer2019}. However, as the Si cell efficiency approaches the Shockley-Queisser (SQ) single-junction limit \cite{doi:10.1063/1.1736034}, further cell improvements are now only incremental, and the focus is instead on systems cost reduction and raw material utilization \cite{VDMA2018, MarkusFischer2019}. Multi-junction solar cells can achieve higher efficiencies than the single-junction SQ limit, with AM 1.5 limits of around 45\% and 50.5\% for double (also called tandem) and triple-junction solar cells, respectively \cite{Green1982, Green2014}. However, to transition the global PV market from a single- to a multi-junction solar cell technology, the following conditions must be met: 1) The efficiency improvements should not sacrifice cost competitiveness, namely in terms of the Levelized Cost of Electricity (LCOE); 2) The raw materials used should be abundant, inexpensive and non-toxic; 3) Each individual junction, as well as the full device, must be stable and have a lifetime of decades \cite{Yu2018, Bae2019}. 
Various multi-junction cell configurations have been proposed and demonstrated experimentally, particularly with Si and III-V semiconductors, reaching efficiencies of 32.8\% and 37.9\% for tandem and triple-junction cells, respectively \cite{Yu2016, Essig2017, doi:10.1002/pip.3102, SharpCorporation2013}. In space applications, multi-junction III-V solar cells have been used almost exclusively since the late 1990s, but with costs not competitive with the single-junction c-Si technology for terrestrial applications \cite{ILES20011, 5950320}. Out of all the possible multi-junction configurations, a monolithically integrated two-terminal (MI-2T) tandem device is considered \textit{a priori} to be the most feasible for cost-competitive, large-scale applications, since it retains the module design simplicity of single-junction technologies and minimizes the total number of processing steps. Despite all of its potential advantages, MI-2T tandem devices are challenging to achieve in practice because every processing step has to be compatible and the properties of the preceding interface and layers should not be compromised \cite{C6ME00041J}. c-Si is an excellent partner in a tandem solar cell for the same reasons that gave it a dominant position in the PV market. Its band gap of 1.12 eV is near ideal for a MI-2T tandem – when used together with an absorber with a bandgap of 1.72 eV, a theoretical maximum efficiency of close to 43\% can be achieved \cite{Green2014, Yu2016}. Recently, a lot of interest has been raised after a series of MI-2T Perovskite/Si tandem devices achieved efficiencies above 25\%, with the current record set at 28\% \cite{Sahli2018, OxfordPV2018, NREL2019}, a value higher than that of the best Si solar cell. 
Thin film chalcogenides (TFCs) such as CdTe, CuIn\textsubscript{x}Ga\textsubscript{1-x}(S\textsubscript{y}Se\textsubscript{1-y})\textsubscript{2} (CIGSSe), Cu\textsubscript{2}ZnSn(S\textsubscript{x}Se\textsubscript{1-x})\textsubscript{4} (CZTSSe) and their respective solid solutions and cationic substitutions could be suitable alternatives to Perovskites due to their increasing single-junction solar cell efficiencies, competitive production costs and superior stability. Indeed, a 16.8\% Cd\textsubscript{1-x}Zn\textsubscript{x}Te/Si tandem cell has been demonstrated using low temperature molecular beam epitaxy (MBE) \cite{doi:10.1063/1.3582902}. However, in most cases, TFCs have the disadvantage that a high temperature step ($>$ 500 \degree C) is needed, contrary to Perovskites which can be processed at low temperatures ($<$ 200 \degree C) \cite{C6ME00041J, Sahli2018}. Recently, a promising monolithic Si/CGSe tandem cell with an efficiency of 10\% has been reported \cite{Jeong2017}, where the CGSe layer was produced by co-evaporation and high-temperature annealing. The authors report that the bottom Si cell J-V curve was not degraded during the top cell processing; however, no further details were provided regarding the characterization of the bottom Si cell, and it is unclear what the resilience of the bottom cell would be for different processing parameters or different deposition methods. Hence, to the best of our knowledge, the implications of high temperature processing on the feasibility of a MI-2T TFC/Si tandem device remain relatively unknown and have not yet been directly assessed experimentally. In this work, we discuss the challenges of producing TFC/Si MI-2T tandem devices, using the sulfide kesterite Cu\textsubscript{2}ZnSnS\textsubscript{4} (CZTS), an earth abundant and environmentally friendly representative of the TFC group. 
In particular, we assess the contamination and degradation of a Tunnel Oxide Passivated Contact (TOPCon) Si bottom cell during the CZTS processing steps. We test the introduction of a thin titanium nitride (TiN) diffusion barrier layer between the Si and CZTS structures and use the results to evaluate the process compatibility between CZTS and Si. We show that compatibility can be achieved, and report on a first proof of concept CZTS/Si tandem solar cell with an efficiency of 1.1\% and a \textit{V}\textsubscript{oc} of 900 mV, a value higher than that of each respective individual reference cell. Moreover, we suggest strategies for future device improvement. \subsection{The Top Cell: CZTS} The kesterite sulfide-selenide CZTSSe attracted interest as an all earth abundant alternative to CIGSSe consisting of non-toxic elements (in particular the sulfide CZTS), achieving solar cell efficiencies above 10\% using industrially upscalable methods such as sputtering \cite{Yan2018}. Sulfide CZTS, in particular, has some features which suggest that it could be a promising tandem partner for Si. Through different solid solutions and cationic substitutions, the bandgap of kesterites can be tuned – for instance, through Ge or Ag incorporation the bandgap of sulfide CZTS can be increased from the nominal 1.5 eV to about 2.1 eV, an ideal range for tandem applications \cite{OUESLATI2019315, GARCIALLAMAS2016147, UMEHARA2016713, doi:10.1002/pssc.201400343, doi:10.1021/acs.chemmater.8b00677, doi:10.1063/1.4863951}. Moreover, CZTS and Si are closely lattice-matched, with an a-axis lattice mismatch of less than $\pm$ 0.1\% \cite{doi:10.1002/9781119052814.ch2, doi:10.1002/9781118437865.ch3}. This means that heteroepitaxial growth of CZTS on Si could be possible, and this has indeed been proven experimentally \cite{OISHI20081449, SHIN20149, doi:10.1063/1.4922992}. 
While this allows in principle for growing CZTS/Si tandem devices epitaxially (free of grain boundaries), epitaxial growth of CZTS on Si with the necessary tunnel junction structures has not been demonstrated yet. So far, the TFC solar cells with the highest efficiencies, in particular in the case of CZTS, involved at least one high temperature step \cite{doi:10.1002/9781118437865.ch3} (with the notable exceptions of MBE \cite{doi:10.1063/1.3582902} and monograin technology \cite{MELLIKOV200965}). Herein, we argue that one of the biggest challenges towards a TFC/Si MI-2T tandem device could be cross-contamination of the bottom Si cell with metallic elements such as Cu or chalcogens like S during the high temperature step. \subsection{The Bottom Silicon Cell: Tunnel Oxide Passivating Contacts (TOPCon)} The tunnel oxide passivating contact (TOPCon) structure has played a key role in the recent silicon solar cell efficiency improvements \cite{Haase_2017, Feldmann2013, HAASE2018184, FELDMANN2014270, FELDMANN201446}. The structure consists of stacks of thin ($ \sim $ 1.2 -- 1.5 nm) SiO\textsubscript{2} layers (tunnelling oxide, TO) and highly doped (phosphorus or boron) polycrystalline silicon layers (PolySi) on both sides of a crystalline silicon (c-Si) wafer. This structure provides excellent surface passivation and carrier selectivity. Consequently, high implied \textit{V}\textsubscript{oc} of 750 mV and external \textit{V}\textsubscript{oc} of up to 739 mV have been achieved \cite{6960882}. In contrast to its a-Si:H heterojunction counterpart, the TOPCon structure alone is resilient to high temperature annealing up to 900 \degree C, which is well above the typical annealing temperatures used in the synthesis of chalcogenide semiconductors and in other front and backend processes. 
Moreover, the simple one-dimensional current transport and full coverage of contacts at both sides allows for very low contact resistivity and thereby low fill-factor (FF) losses \cite{7747536}. A major drawback of a front PolySi contact in a single-junction device is the parasitic absorption losses in the blue wavelength region within the PolySi layer. As a result, a short-circuit current density (\textit{J}\textsubscript{sc}) loss of 0.5 mA/cm\textsuperscript{2} is expected for every 10 nm of PolySi in a single junction cell \cite{FELDMANN2017265}. However, this loss is not a limitation in a tandem configuration, where the high-energy photons are absorbed in the top cell. Thus, the double-sided TOPCon structure may be an ideal candidate for a double-junction tandem solar cell. \subsection{The Need for a Diffusion Barrier Layer} When a nearly complete silicon solar cell is used as a substrate for the growth of a TFC, there is a risk of contamination from metallic and chalcogen elements that should be thoroughly assessed. In this contribution, we study the case of co-sputtered CZTS precursors from Cu, ZnS and SnS targets on c-Si model substrates. During co-sputtering, the impinging energetic ions and neutrals can directly cause sputter damage, or contaminate the Si bulk by implantation. After co-sputtering, CZTS is formed by high temperature reactive annealing in a sulfur atmosphere. Here, the elements Cu, Zn, Sn and S (the latter both from the precursors and from the atmosphere) may diffuse into the Si bulk. We note that this high temperature step is of particular interest, since it is nearly ubiquitous in high-quality TFC fabrication, even in single-step processes (for instance co-evaporation of CIGS). 
Copper contamination in silicon deserves special consideration as it is a common element of both the CIGS and the CZTS group of alloys and, most importantly, because it is one of the most common detrimental contaminants known in crystalline Si, as widely reported in the photovoltaic and integrated circuit industries \cite{osti_15000243,Istratov1998, Istratov01012002}. Copper has a high diffusivity in Si, and can diffuse through the entire thickness of a Si wafer at room temperature in a matter of hours, although the solid solubility is $<$ 10\textsuperscript{15} cm\textsuperscript{-3} at the relevant temperatures \cite{osti_15000243}. Cu exhibits a complex defect physics in Si, leading to point defects and complexes, decoration of extended defects, precipitation of copper silicides, out-diffusion to the surface and segregation phenomena. In particular, copper silicides have been shown to lead to mid-gap defect traps in Si and a high recombination activity \cite{Seibt2009}, detrimental in solar cells. Although studied to a lesser extent than copper, the other elements of CZTS could also be harmful contaminants for a bottom Si cell. Zinc can introduce near-midgap defect levels in Si as shown in pure diffusion studies \cite{SZE1968599, Masuhr_1999, PhysRev.105.379}. Tin was studied in particular as a dopant to improve the radiation resistance of c-Si devices, but was also found to form midgap states in Si \cite{Larsen2001, PhysRevB.62.4535}. Finally, sulfur was studied notably in ``black silicon'' processing, where it was found that its incorporation creates deep bandgap states, which increase the infrared light absorption in Si, making it appear more ``black'' \cite{SZE1968599, KOYAMA1978953, PhysRevB.70.205210}. Here, we suggest that one possible way to prevent bottom cell contamination is using a diffusion barrier layer at the bottom-cell/top-cell interface. 
In general, a barrier layer must have properties such as mechanical stability, good adhesion, high temperature stability and low diffusivity for the contaminating elements. For tandem solar cell applications, it must also be electrically conductive and transparent in the near infrared region. To the best of our knowledge, only one published work directly addresses this problem, suggesting the use of ZnS as a barrier layer for the growth of CZTS/Si tandem cells \cite{7749871}. In this work, we propose titanium nitride (TiN) as a barrier layer at the CZTS/Si interface, a novel concept for the monolithic integration of a top thin-film cell on a bottom Si cell. TiN has been extensively studied as a barrier layer for copper metallization in integrated circuits, although it is arguably not the most effective barrier known against Cu diffusion \cite{Istratov01012002, doi:10.1146/annurev.matsci.30.1.363, Lee2007}. TiN has been employed as a back contact modification and barrier against over-sulfurization (or over-selenization) in single-junction CZTSSe cells, and proved to be compatible with up to 9\% efficiency devices \cite{Oueslati_2014, SCHNABEL2017290, ENGLUND201791, doi:10.1063/1.4740276, doi:10.1021/cm4015223}. Due to its poor transparency, the TiN thickness must be limited to only a few nm. By contrast, in a MI-2T Perovskite/Si tandem solar cell, a Si-based tunnel junction or a simple interface recombination layer based on a transparent conductive oxide (TCO) can be used to achieve high performing devices \cite{C6ME00041J, Sahli2018}. This could also be a possibility if contamination-free growth of TFCs on Si can be proven. In this regard, it is worth noting that there are studies suggesting that some TCO substrates could be compatible with TFC growth conditions \cite{UMEHARA2016713, doi:10.1021/acssuschemeng.7b02797}. 
\section{Materials and Methods} A set of double side polished 100 mm diameter, 1 $\Omega$.cm, 350 $\mu$m thick, (100) n-type Cz-Si wafers were used. The fabrication process of the TOPCon structure is as follows. After the wafers were cleaned in RCA1 (H\textsubscript{2}O\textsubscript{2}:NH\textsubscript{4}OH:5H\textsubscript{2}O) and RCA2 (H\textsubscript{2}O\textsubscript{2}:HCl:5H\textsubscript{2}O) mixtures, $\sim$ 1.2 nm of SiO\textsubscript{2} (tunnel oxide or TO) was grown by chemical oxidation in a 65 \%\textsubscript{wt} HNO\textsubscript{3} solution at 95 \degree C. Subsequently, $\sim$ 40 nm PolySi layers were deposited using Low Pressure Chemical Vapor Deposition (LPCVD) at 620 \degree C, using SiH\textsubscript{4}, B\textsubscript{2}H\textsubscript{6}, or PH\textsubscript{3} as precursors for p+ or n+ PolySi layers, respectively. The samples were then annealed in N\textsubscript{2} at 850 \degree C for 20 min for further dopant diffusion and activation. All samples have a symmetrical passivation of TO/n+PolySi on both sides, except in two cases: for Deep Level Transient Spectroscopy (DLTS), this passivating stack was not used, and for the tandem solar cell fabrication, an asymmetrical passivation was used, with TO/n+PolySi on the front and TO/p+PolySi on the rear side. In the fabrication of tandem cells, a hydrogenation process was performed on the as-passivated bottom cell precursor wafer. For this purpose, a sacrificial $\sim$ 75 nm hydrogenated SiN (SiN:H) layer was deposited on both sides of the wafer using Plasma Enhanced Chemical Vapor Deposition (PECVD) at 300 \degree C. After a hydrogen drive-in process at 400 \degree C for 30 min in N\textsubscript{2} atmosphere, the SiN:H layers were stripped in a buffered HF solution. The benefits of this SiN hydrogenation process are similar to those achieved by annealing in forming gas \cite{Bae2017}. 
A few experiments used an alternative surface passivation with 40 nm Al\textsubscript{2}O\textsubscript{3}, deposited by Atomic Layer Deposition (ALD) using trimethylaluminum (TMA) and H\textsubscript{2}O as precursors. TiN barrier layers ($<$ 25 nm) were deposited in a Picosun Plasma-Enhanced ALD (PEALD) system using TiCl\textsubscript{4} and NH\textsubscript{3} precursors at 500 \degree C. To improve the optical transparency of TiN, the ALD chamber was not passivated for nitride depositions in the tandem cell fabrication. As a result, a slightly higher amount of oxygen is present in the TiN layer used in the tandem cell. Metallic 100 nm Cu layers were sputtered on the TOPCon structure and annealed at 550 \degree C in vacuum ($1\times10^{-6}$ mbar). CZTS precursors were co-sputtered from Cu, ZnS, and SnS targets, and annealed in a graphite box with a reactive N\textsubscript{2} atmosphere containing 50 mg of S pellets, at dwell temperatures of 525 -- 575 \degree C for 30 min, in order to form CZTS films with a thickness around 300 nm. This CZTS thickness was chosen based on optical simulations (not shown here) and photocurrent density (\textit{J}\textsubscript{sc}) results from single junction CZTS devices, in order to match the photocurrents of the two cells. Prior to lifetime, Secondary Ion Mass Spectrometry (SIMS), and DLTS measurements, the Cu, CZTS and TiN layers were removed (after the sulfurization/annealing step) in a mixture of H\textsubscript{2}O\textsubscript{2}:4H\textsubscript{2}SO\textsubscript{4} (piranha) and RCA1 solutions, followed by a dilute HF dip. For the Rutherford Backscattering Spectroscopy (RBS) measurements, only piranha was used. The effective minority carrier lifetime ($\tau$\textsubscript{eff}) of Si was measured by the microwave detected photoconductance decay method ($\mu$-PCD) in steady-state configuration at 1-sun illumination using an MDP lifetime scanner from Freiberg Instruments. 
The reported lifetime values were obtained from maps of the whole wafer area with 1 cm edge-exclusion margin. The i-\textit{V}\textsubscript{oc} values were calculated based on the method described in \cite{doi:10.1002/aenm.201900439}. The in-diffusion depth profiles were measured by SIMS and RBS on selected samples. The SIMS depth profiles were obtained from a Cameca IMS-7f microprobe. A 10 keV O\textsubscript{2}\textsuperscript{+} primary beam was mainly utilized and rastered over $150\times150$ $\mu$m\textsuperscript{2}, and the positive ions were collected from a circular area with a diameter of 33 $\mu$m. For sulfur, however, a 5 keV Cs\textsuperscript{+} primary beam was employed, and clusters of \textsuperscript{32}S\textsuperscript{133}Cs were detected to minimize matrix effects and avoid mass interference. The quantification of Cu depth profiles was obtained by measuring an implanted reference sample, ensuring an accuracy of $\pm$ 10 \%. The crater depths were measured by a Dektak 8 stylus profilometer, and a constant sputter erosion rate was assumed for the depth calculation. The RBS measurements were done using 2 MeV He ions and a silicon PIN diode detector at a scattering angle of 168$^{\circ}$. The collected RBS data were analyzed and fitted using RUMP \cite{doi:10.1063/1.3340459}. DLTS was used to characterize electrically active defects in the Si bulk. DLTS measurements were performed on circular Schottky diodes (1 mm diameter), where 50 nm thick Pd contacts were deposited by thermal evaporation. The backsides were coated with silver paste to form an ohmic contact. During the measurements the diodes were held at $-5$ V reverse bias and pulsed to 1 V, filling all majority traps within the depletion width of $\sim$ 1 $\mu$m. The samples were cooled to 35 K by a closed-cycle cryostat and six rate windows (with lengths 2$^i$ $\times$ 10 ms, $i$ = 1, $\ldots$, 6) were used to record the capacitance transients while heating to 300 K.
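The trap parameters reported later are extracted from such rate-window data via an Arrhenius plot of $\ln(e_n/T^2)$ versus $1/kT$. A minimal sketch of this extraction, using synthetic values rather than measured data (the assumed trap depth and capture cross-section below are placeholders):

```python
import math

# Thermal emission rate of electrons from a majority-carrier trap:
#   e_n(T) = sigma_n * v_th(T) * N_c(T) * exp(-E_a / kT)
# With v_th ~ T^0.5 and N_c ~ T^1.5, ln(e_n / T^2) vs 1/kT is a straight
# line of slope -E_a.  All values below are synthetic placeholders.
K_B = 8.617e-5                # Boltzmann constant, eV/K
E_A = 0.16                    # assumed trap depth below E_c, eV
SIGMA = 2e-22                 # assumed apparent capture cross-section, cm^2

def emission_rate(temp):
    v_th = 2.0e7 * math.sqrt(temp / 300.0)        # thermal velocity, cm/s
    n_c = 2.8e19 * (temp / 300.0) ** 1.5          # Si conduction-band DOS, cm^-3
    return SIGMA * v_th * n_c * math.exp(-E_A / (K_B * temp))

# Sample the model over part of the scanned temperature range ...
temps = [120.0 + 10.0 * i for i in range(10)]
xs = [1.0 / (K_B * t) for t in temps]
ys = [math.log(emission_rate(t) / t**2) for t in temps]

# ... and recover E_a as minus the least-squares slope of the Arrhenius plot.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
e_fit = -slope
print(f"recovered activation energy: {e_fit:.3f} eV")
```

In practice each rate window produces one DLTS peak temperature, and the fit is made to those peak positions rather than to the model directly.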
The transients were multiplied by a lock-in weighting function for improved signal extraction. Further details on the method and setup are given in \cite{Liu2017}. For the monolithic CZTS-Si tandem device, a 50 nm CdS layer, deposited by chemical bath deposition, was used as the buffer to form the p-n heterojunction of the top cell, followed by a 50 nm intrinsic i-ZnO and a 350 nm Al-doped ZnO (AZO) as the TCO layer. Both window layers, i-ZnO and AZO, were deposited using reactive sputtering. A 500 nm Ag layer was thermally evaporated as the back contact. No front metal contacts were used for simplicity, as the active tandem cell areas were only 3$\times$3 mm\textsuperscript{2}. The full tandem solar cell was post-annealed on a hot plate in air at 250 \degree C for 15 min, in order to improve the properties of the CZTS/CdS heterojunction (see Figure S13). The J-V characteristic curves of the solar cells were measured near Standard Test Conditions (STC: 1000 W/m\textsuperscript{2}, AM 1.5 and 25 \degree C). A Newport class ABA steady-state solar simulator was used. The irradiance was measured with a $2\times2$ cm\textsuperscript{2} Mono-Si reference cell from ReRa certified at STC by the Nijmegen PV measurement facility. The temperature was kept at 25 $\pm$ 3 \degree C as measured by a temperature probe on the contact plate. The acquisition was done with 2 ms between points, using a 4-wire measurement probe, sweeping from reverse to forward voltage. The external quantum efficiency (EQE) of the tandem cell was measured using a QEXL setup (PV Measurements) equipped with a grating monochromator, adjustable bias voltage, and a bias spectrum. Room temperature photoluminescence (PL) measurements were done on complete cells with an excitation wavelength of 785 nm using a modified Renishaw Raman spectrometer equipped with a Si CCD detector, in confocal mode.
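The sub-cell photocurrents quoted below follow from the EQE by weighting with the incident spectral photon flux, $J_{sc} = q \int \mathrm{EQE}(\lambda)\,\Phi(\lambda)\,\mathrm{d}\lambda$. A minimal sketch of this integration (the flat spectrum and EQE used here are synthetic stand-ins for illustration, not AM1.5G or measured data):

```python
# Short-circuit current density from EQE:
#   Jsc = q * integral( EQE(lambda) * phi(lambda) d lambda )
# where phi is the incident spectral photon flux.
Q = 1.602e-19                                  # elementary charge, C

def jsc_from_eqe(wavelengths_nm, eqe, photon_flux):
    """Trapezoidal integration of q * EQE(lambda) * phi(lambda)."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = eqe[i] * photon_flux[i]
        f1 = eqe[i + 1] * photon_flux[i + 1]
        total += 0.5 * (f0 + f1) * dl
    return Q * total                           # A/cm^2 if phi is in cm^-2 s^-1 nm^-1

wl = [400.0 + 10.0 * i for i in range(61)]     # 400-1000 nm grid
phi = [3.0e14] * len(wl)                       # flat photon flux (toy spectrum)
eqe = [0.8] * len(wl)                          # flat 80% EQE (toy)
print(f"Jsc ~ {jsc_from_eqe(wl, eqe, phi) * 1e3:.1f} mA/cm^2")
```

With tabulated AM1.5G photon flux in place of the toy spectrum, the same integration reproduces the sub-cell \textit{J}\textsubscript{sc} values reported from the EQE measurements.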
Scanning electron microscopy (SEM) images of the tandem cell structures were acquired using a Zeiss Merlin field emission electron microscope under a 5 kV acceleration voltage. \section{Results and Discussion} \subsection{Minority Carrier Lifetime Measurements on Si}\label{sec:31} The minority carrier lifetime of Si is used as a figure of merit throughout the paper to evaluate the bottom cell after CZTS and tandem cell processing. For this purpose, 10 symmetrically passivated wafers with an as-passivated mean lifetime of 2.65 $ \pm $ 0.52 ms were prepared. The uniform surface passivation quality across the wafer set ensures that the passivation and wafer qualities are not variables in the subsequent studies. More details on the passivation statistics are shown in the supplementary information (Figure S1). Three different sets of samples were prepared, as listed in Table \ref{tab:1} and illustrated in Figure \ref{fig:1}. All the samples have a 25 nm TiN layer on the backside, to eliminate any unwanted contamination from that side during the different processing steps. \begin{table*}[ht] \renewcommand{\arraystretch}{1} \begin{center} \caption{Overview of the different samples used for minority carrier lifetime measurements.
Note: all the samples have 25 nm TiN on the backside.} \begin{tabular}{llll} \toprule \textbf{Sample} & \textbf{TiN Thickness (nm)} & \textbf{Annealing atm./T \degree C} &\textbf{Purpose} \\ \midrule \textit{Cu Reference} &25 &Vacuum/550 &Compare metallic Cu to CZTS \\ \textit{Sulfur Reference} &0, 10 &Sulfur/525 &Isolate the effect of S \\ \textit{CZTS} &0, 10, 25 &Sulfur/525, 550, 575 &Integration of CZTS on Si \\ \bottomrule \end{tabular} \label{tab:1} \end{center} \end{table*} \begin{figure} \centering \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=\linewidth]{images/image1.pdf} \caption{} \label{fig:1a} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=\linewidth]{images/image2.pdf} \caption{} \label{fig:1b} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \centering \includegraphics[width=\linewidth]{images/image3.pdf} \caption{} \label{fig:1c} \end{subfigure} \caption{Cross-section scheme of the samples used for minority carrier lifetime measurements: a) \textit{Cu Reference}, b) \textit{Sulfur Reference}, and c) \textit{CZTS}.} \label{fig:1} \end{figure} In Figure \ref{fig:2}, the Si minority carrier lifetime of the \textit{Cu Reference} sample is shown as a function of the annealing time. In this case, the TiN barrier layer fails after a 15 min annealing at 550 \degree C, with a 73\% loss of lifetime. The lifetime is further degraded with increasing annealing time. These results indicate that, for this temperature range, the 25 nm TiN barrier layer provided insufficient protection of the sample against Cu diffusion. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image4.pdf} \caption{Minority carrier lifetime evolution of the \textit{Cu Reference} sample with annealing time at 550 \degree C in vacuum.} \label{fig:2} \end{figure} In Figure \ref{fig:3} (a) and (b), the Si carrier lifetime results for the \textit{Sulfur Reference} and \textit{CZTS} cases are shown.
Here, the lifetime was monitored after each major processing step, namely the TiN deposition and the CZTS annealing steps. From the ``As-Passivated" to the ``Before TiN" step, there was a waiting time on the order of a few weeks, causing a slight decrease in the lifetime due to aging. The final lifetimes of Figure \ref{fig:3} (a) are reduced to 45-50\% of the ``As-Passivated" value after annealing in a sulfur atmosphere, suggesting the role of S as a contaminating species. The 60 min point of the \textit{Cu Reference} carrier lifetime of Figure \ref{fig:2} is included in Figure \ref{fig:3} (a) for comparison, showing that the impact of the S atmosphere is less severe than that of metallic Cu. Further details and lifetime maps of the \textit{Cu Reference} and \textit{Sulfur Reference} carrier lifetime measurements are shown in the supplementary Figure S2 and Figure S3, respectively. In Figure \ref{fig:3} (b), the key observation is that the final lifetime values after \textit{CZTS} processing are significantly higher than that of the \textit{Cu Reference} case, despite the fact that metallic Cu is present in the co-sputtered CZTS precursors. One possible explanation for this milder contamination effect is that the formation of Cu\textsubscript{2-x}S phases (the binary phases in the CZTS phase diagram with the lowest melting point \cite{doi:10.1002/9781118437865.ch3}) competes directly with the diffusion of the available Cu into Si, thereby reducing the driving force for Cu in-diffusion. Moreover, the lifetime values after \textit{CZTS} processing are comparable to the \textit{Sulfur Reference} case, indicating that having CZTS in addition to a sulfur atmosphere does not lead to additional lifetime deterioration.
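The severity of the metallic Cu contamination is consistent with the exceptionally fast diffusion of interstitial Cu in Si. A rough order-of-magnitude estimate, assuming a commonly cited literature diffusivity (an assumption of this sketch, not a value measured in this work):

```python
import math

# Rough estimate of how far interstitial Cu can diffuse in Si during the
# 550 C vacuum anneal, assuming the commonly cited intrinsic diffusivity
#   D(T) ~ 3e-4 * exp(-0.18 eV / kT)  cm^2/s   (literature value, assumed)
K_B = 8.617e-5                           # Boltzmann constant, eV/K
T = 550.0 + 273.15                       # anneal temperature, K
D = 3e-4 * math.exp(-0.18 / (K_B * T))   # diffusivity, cm^2/s

t = 15 * 60.0                            # shortest anneal step (15 min), s
L = math.sqrt(D * t)                     # characteristic diffusion length, cm

wafer = 350e-4                           # wafer thickness, cm
print(f"D = {D:.2e} cm^2/s, sqrt(Dt) = {L * 1e4:.0f} um vs wafer {wafer * 1e4:.0f} um")
```

Even for the shortest anneal, the estimated diffusion length exceeds the full wafer thickness, which is why any Cu that escapes the barrier degrades the bulk lifetime so strongly.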
\begin{figure} \centering \begin{subfigure}[b]{0.6\textwidth} \centering \includegraphics[width=\linewidth]{images/image5.pdf} \caption{} \label{fig:3a} \end{subfigure} \\ \begin{subfigure}[b]{0.6\textwidth} \centering \includegraphics[width=\linewidth]{images/image6.pdf} \caption{} \label{fig:3b} \end{subfigure} \caption{Mean effective minority carrier lifetimes of Si for (a) the \textit{Cu Reference} and \textit{Sulfur Reference} samples, and (b) the \textit{CZTS}-processed samples, at each different processing step. The temperatures of the annealing series (in \degree C) are indicated above the corresponding bars. Note: All the samples have 25 nm TiN on the backside, so they all show some ``After TiN" degradation.} \label{fig:3} \end{figure} The influence of the annealing temperature was also studied in the \textit{CZTS} case with a series of different annealing temperatures at 525 \degree C, 550 \degree C and 575 \degree C, for the cases without TiN and 10 nm TiN, as shown in the ``After CZTS" step of Figure \ref{fig:3} (b). A single measurement with a TiN thickness of 25 nm is included as a reference for comparison in the subsequent studies. However, this thickness would be too high for use in a tandem cell (due to poor transparency). The annealing series shows that while the 10 nm TiN case seems to follow a trend with increasing temperature, this is not true for the case without TiN. This is likely due to spatial variations in the sample's lifetime, shown by the uncertainty bars, which have a magnitude comparable to the variations seen in the temperature series. Moreover, it can be seen that the 0 nm and 10 nm TiN series have comparable absolute carrier lifetimes after CZTS processing. To further understand this behavior, we plot in Figure \ref{fig:4} the same results but scaled to the respective ``After TiN" lifetimes. By doing this, it becomes clear that the lifetime deterioration during CZTS processing is more significant when no TiN is present.
This scaling procedure is also justified as it can be noted in both Figure \ref{fig:3} (a) and (b) that the final (post-process) lifetime values are affected by a significant and non-uniform loss in lifetime during the TiN deposition step. The reason for this loss may be a minor contamination originating from the ALD chamber and stainless steel carrier used during the deposition (e.g. iron contamination). We discuss this issue in greater detail and show additional experimental data in the SI (see Figure S8). However, further investigation will be required to fully clarify this effect. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image7.pdf} \caption{Relative change in minority carrier lifetime for the annealing series in the \textit{CZTS}-processed samples, when scaled to the respective ``After TiN" lifetimes. The temperatures (in \degree C) are displayed above the bars.} \label{fig:4} \end{figure} Despite the degradation throughout the processing, the final lifetimes are above 1 ms, which corresponds to an i-\textit{V}\textsubscript{oc} above 700 mV. This encouraging result indicates that the performance of the bottom silicon cell may not necessarily be compromised as a result of the CZTS synthesis. However, given the comparable absolute lifetime values regardless of the TiN thickness, it is not yet clear from the lifetime results alone whether a TiN barrier layer is beneficial. To further evaluate whether the observed degradation is related to a bulk contamination or TO/n+PolySi surface depassivation, a complementary experiment was conducted where a silicon wafer was passivated only at the end of the CZTS processing, after etching the CZTS and TiN layers. Any possible unforeseen effects caused by the TO/n+PolySi passivation are avoided by using this configuration. We refer to this as the ``end-passivated" sample.
Here, we repeated the Si/TiN(25 nm)/CZTS sample, except using a non-passivated bare silicon wafer (no TO/n+PolySi passivation) as the substrate. After the CZTS and TiN etching and cleaning, 40 nm ALD Al\textsubscript{2}O\textsubscript{3} was deposited on both sides for surface passivation. The results, plotted in Figure \ref{fig:5}, indicate a tolerable 14 mV decrease in i-\textit{V\textsubscript{oc}} ($\sim$ 30\% lifetime decrease) for the sample with CZTS processing compared to the clean reference sample. Even though this experiment does not directly clarify the effect of the PolySi layer on the diffusion of contaminants from CZTS processing, it shows that relatively high lifetimes can be achieved without using a PolySi layer. This suggests that there is some design flexibility in the bottom Si cell, and offers new perspectives for future tandem integration experiments. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image8.pdf} \caption{Comparison between the i-\textit{V}\textsubscript{oc} of a \textit{CZTS}-processed sample, annealed in a sulfur atmosphere at 525 \degree C for 30 min (in red), and a reference without CZTS processing (in grey). Both half-wafers are cleaved from the same substrate, and are end-passivated with 40 nm Al\textsubscript{2}O\textsubscript{3} on both sides after etching the TiN and CZTS layers.} \label{fig:5} \end{figure} \subsection{SIMS and RBS Analysis} To correlate the lifetime results with possible diffusion of contaminants into the Si bulk, SIMS and RBS measurements were performed on selected \textit{Cu Reference} and \textit{CZTS}-processed samples (after selective removal of the Cu, TiN and CZTS layers). The SIMS results are illustrated in Figure \ref{fig:6}. For the \textit{Cu Reference} samples, the corresponding quantitative Cu SIMS depth profiles are shown in Figure \ref{fig:6} (a).
A clear diffusion tail into the c-Si bulk is detected in all cases, with a Cu peak concentration of up to 10\textsuperscript{20} cm\textsuperscript{-3} occurring in the PolySi. Furthermore, an increase in Cu concentration is seen with increasing annealing time, which is in qualitative agreement with the lifetime results of Figure \ref{fig:2}. In Figure \ref{fig:6} (b), a quantitative Cu profile is presented for the \textit{CZTS}-processed samples annealed at 525 \degree C. The Cu profiles reveal that for the No TiN and 10 nm TiN samples, there is a diffusion tail extending at least 100 nm into the Si bulk, but for the 25 nm TiN case the Cu concentration drops sharply to below detection limits after the PolySi. In all three cases, the Cu concentration is 2 -- 3 orders of magnitude lower than in the \textit{Cu Reference} case, which helps to justify their significantly higher lifetimes. In the Si bulk (close to the surface), the Cu concentration is always lower than 10\textsuperscript{18} cm\textsuperscript{-3}, which in Si corresponds to 0.002 at\% (or 20 ppm). Depth profiles of other relevant elements during CZTS processing, namely Zn, Sn, S and Ti (from TiN), are shown in Figure \ref{fig:6} (c) to (f). Elements other than Cu appear to be at background levels or near the detection limits in the c-Si bulk.
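For reference, the conversion from the SIMS concentration scale to atomic fractions is simply a ratio to the atomic density of crystalline Si; a quick check of the 0.002 at\% figure:

```python
# Convert a SIMS concentration in cm^-3 to atomic percent and ppm,
# using the atomic density of crystalline Si (~5.0e22 cm^-3).
N_SI = 5.0e22        # Si atomic density, cm^-3
c_cu = 1.0e18        # upper bound on Cu concentration in the bulk, cm^-3

at_frac = c_cu / N_SI
at_percent = at_frac * 100.0
ppm = at_frac * 1.0e6
print(f"{c_cu:.0e} cm^-3 -> {at_percent:.3f} at% = {ppm:.0f} ppm")
```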
\begin{figure} \centering \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image9.pdf} \caption{} \label{fig:6a} \end{subfigure} \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image10.pdf} \caption{} \label{fig:6b} \end{subfigure} \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image11.pdf} \caption{} \label{fig:6c} \end{subfigure}\\ \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image12.pdf} \caption{} \label{fig:6d} \end{subfigure} \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image13.pdf} \caption{} \label{fig:6e} \end{subfigure} \begin{subfigure}[b]{.31\textwidth} \centering \includegraphics[width=\linewidth]{images/image14.pdf} \caption{} \label{fig:6f} \end{subfigure} \caption{SIMS depth profiles for \textit{Cu Reference} and \textit{CZTS}-processed samples. (a) The \textit{Cu Reference}, showing quantitative Cu depth profiles; (b) Quantitative Cu depth profile for the \textit{CZTS}-processed samples. The depth profile of the \textit{Cu Reference} sample is added for comparison; (c), (d), (e) and (f) Qualitative depth profiles of Zn, Sn, S, and Ti for the CZTS samples, respectively. The measurements are performed on the n+PolySi layer towards the c-Si bulk, as marked by the blue rectangle, after etching the top layers. The \textit{CZTS}-processed samples were annealed at 525 \degree C. The annealing time was 30 min unless otherwise specified.} \label{fig:6} \end{figure} To complement the SIMS analysis, RBS measurements were done on the \textit{CZTS}-processed samples with 0 nm and 10 nm TiN, annealed at 525 \degree C. The results are illustrated in Figure \ref{fig:7} (a) and (b), respectively. Since all the potential contaminant elements are heavier than Si, the RBS data is zoomed in at energies higher than the Si onset. 
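The separation of the heavy contaminants from the Si onset follows from the RBS kinematic factor, which increases with target mass. A sketch of the expected surface-edge energies using standard two-body kinematics (the atomic masses are approximate averages):

```python
import math

# RBS kinematic factor for a projectile of mass M1 backscattered from a
# target atom of mass M2 at laboratory angle theta:
#   K = ((sqrt(M2^2 - M1^2 sin^2 th) + M1 cos th) / (M1 + M2))^2
# Heavier target atoms give larger K, so with 2 MeV He their surface
# signals appear at energies above the Si onset.
def kinematic_factor(m1, m2, theta_deg):
    th = math.radians(theta_deg)
    num = math.sqrt(m2**2 - (m1 * math.sin(th))**2) + m1 * math.cos(th)
    return (num / (m1 + m2))**2

E0, M_HE, THETA = 2.0, 4.0, 168.0          # MeV, amu, degrees
for name, m2 in [("Si", 28.1), ("S", 32.1), ("Ti", 47.9), ("Cu", 63.5), ("Sn", 118.7)]:
    print(f"{name}: surface edge at {E0 * kinematic_factor(M_HE, m2, THETA):.2f} MeV")
```

This is why a zoom above the Si edge isolates the candidate contaminants in the spectra of Figure 7.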
None of the possible contaminants are detected in Si, except for some Ti at the surface, which was not fully removed during the piranha etching (confirmed with SEM, not shown here). This means that an upper limit for the concentration of these contaminants can be established, set by the sensitivity of the measurement itself. For the measurement conditions of Figure \ref{fig:7} (a) and (b), these upper limits are given in the figure insets. In the particular case of Cu, which was also quantitatively measured by SIMS, RBS shows that its concentration has to be below 0.01 at\%, consistent with the value of below 0.002 at\% obtained by SIMS. \begin{figure} \centering \begin{subfigure}[b]{0.6\textwidth} \centering \includegraphics[width=\linewidth]{images/image15.pdf} \caption{} \label{fig:7a} \end{subfigure} \\ \begin{subfigure}[b]{.6\textwidth} \centering \includegraphics[width=\linewidth]{images/image16.pdf} \caption{} \label{fig:7b} \end{subfigure} \caption{RBS spectra for the \textit{CZTS}-processed samples with (a) No TiN and (b) 10 nm TiN. The insets show the structures used and the detection limits for the contaminant elements. The samples were annealed at 525 \degree C.} \label{fig:7} \end{figure} \subsection{DLTS Analysis} To further assess the influence of these possible contaminants, DLTS measurements were made on \textit{CZTS}-processed samples annealed at 525 \degree C. Here, samples without the TO/n+PolySi passivation were prepared, as no Schottky contact (required in our DLTS setup) could be obtained between the metal electrode and the heavily-doped PolySi. An unprocessed bare ``Reference" wafer was also included to rule out any possible pre-existing defects. The results are plotted in Figure \ref{fig:8}.
It is shown that the samples with 10 nm TiN, 25 nm TiN and the Reference wafer do not show any DLTS signal, but the No TiN sample exhibits peaks related to electrically active defects, with two features peaking at $\sim$ 175 K and $\sim$ 275 K. The 175 K peak shows a broadening towards the lower temperature side, which may be related to several overlapping defect signatures or extended defects \cite{doi:10.1002/(SICI)1099-159X(199703/04)5:2<79::AID-PIP155>3.0.CO;2-J}. In the case of extended defects, an exponential decay in emission rate may not hold, which will influence the extracted activation energies and apparent capture cross-sections from an Arrhenius plot of the corresponding DLTS peak \cite{DOOLITTLE1986227}. This peak near 175 K might come from several defects associated with precipitates of Cu \cite{Haase_2017, doi:10.1063/1.344389}, but further measurements would be required to assign it unambiguously. The peak at 275 K could be used instead for making an Arrhenius plot. This peak has a broad shape due to its very low capture cross-section of $2\times10^{-22}$ cm\textsuperscript{2}, and its energy level was found to be 0.16 eV below the conduction band edge, as extracted from the Arrhenius plot. The level at $E_{c}-0.16$ eV has previously been reported in Cu-diffused Si and shown to originate from interstitial copper or a complex of interstitial copper by Istratov et al. \cite{PhysRevB.52.13726}. More details on the DLTS results, analysis and Arrhenius plot can be found in the supplementary information (Figure S4 and Figure S5). \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image17.pdf} \caption{DLTS results of \textit{CZTS}-processed samples annealed at 525 \degree C compared to a clean reference Si wafer.
Note: these DLTS samples do not have the TO/n+PolySi passivation stack.} \label{fig:8} \end{figure} Based on these findings, 10 nm of TiN seems to be sufficient to prevent the formation of electrically active defects in the Si bulk. This thickness was thus selected to prepare a full CZTS/Si tandem solar cell. \subsection{Fabrication of a Monolithic CZTS/Si Solar Cell}\label{sec:34} The effective minority carrier lifetime of the silicon bottom cell was monitored at different steps of the fabrication process, similar to Section \ref{sec:31}. However, the samples are now asymmetrically passivated (with TO/p+PolySi on the backside). An additional SiN hydrogenation step (to improve the passivation quality) was also included. The corresponding i-\textit{V}\textsubscript{oc} is shown in Figure \ref{fig:9} as a function of process steps. Figure \ref{fig:9} shows that the i-\textit{V}\textsubscript{oc} of the silicon bottom cell was slightly degraded after the TiN deposition step. As mentioned in Section \ref{sec:31}, we suggest that this degradation may be due to iron contamination (originating from the ALD chamber). However, given the additional hydrogenation step of the p-PolySi explained above, a partial loss in the hydrogen passivation could occur in this case (see supplementary Figure S8). The i-\textit{V}\textsubscript{oc}, however, does not degrade further during the full fabrication of the CZTS top cell. This demonstrates that 10 nm TiN was an effective diffusion barrier. The J-V curves, EQE and schematic illustration of the tandem device are shown in Figure \ref{fig:10} (a), (b) and (c), respectively. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image18.pdf} \caption{Changes in i-\textit{V}\textsubscript{oc} of the silicon bottom cell during the fabrication processes of the CZTS/Si tandem cell. 
The indicated fabrication steps are: 1) silicon surface passivation, 2) SiN:H hydrogenation of the TOPCon layers, 3) TiN deposition, and 4) full CZTS cell fabrication before depositing the Ag back contact.} \label{fig:9} \end{figure} As seen in Figure \ref{fig:10} (a), the tandem cell shows a \textit{V}\textsubscript{oc} of 900 mV, which is higher than that expected for each individual junction separately, under the same conditions (see supplementary Figure S6). The efficiency, however, is low (1.10\%) and the light J-V curve shows a clear ``rollover" effect, which is characterized by a distortion of the J-V curve, causing a very low fill-factor (FF). The magnitude of the \textit{J}\textsubscript{sc} also seems to be affected by the distortion, as the EQE measurement in Figure \ref{fig:10} (b) shows a \textit{J}\textsubscript{sc} of around 11 mA/cm\textsuperscript{2} for each individual cell. We attribute the low efficiency to a combination of the rollover effect and a poorer CZTS top cell compared to the single junction CZTS cell, as will be elaborated below. This rollover effect, with S-shaped J-V curves, has been reported previously for non-optimal tandem cells, associated with the reverse breakdown voltage regime of the top cell when the tandem is current-mismatched \cite{Torazawa01012016}. While this explanation is certainly plausible here, we note that other effects may cause rollover in single-junction solar cells, as was reviewed recently in \cite{Saive2019}. This occurs when there are one or more barriers to current extraction in the solar cell under illumination. Such a barrier can be due to Schottky barriers at non-ohmic contacts (namely the n+PolySi/TiN or TiN/CZTS interfaces) or to a non-ideal p-n junction, leading to a voltage-dependent current-blocking behavior.
In particular, this effect has been reported for non-ideal p-n junctions in single-junction CZTS cells \cite{Nakamura_2010}, and we have also seen it in our internal experiments for CZTS/CdS p-n junctions when the synthesis parameters were not ideal (see supplementary Figure S9). For the tandem cell, our TiN barrier layer was produced in an unpassivated ALD chamber, which improves the optical transparency of TiN by incorporating some oxygen, leading to a TiO\textsubscript{x}N\textsubscript{y} film. However, an excessive amount of oxygen can be detrimental, as it is known to increase the sheet resistance of TiN \cite{Maenpaa1980}. It has been reported that the presence of 10-15 \% oxygen in TiN leads to the formation of a Schottky diode with a barrier height of 0.55 eV on n-type Si (100) \cite{Ang1988}. In the case of a single Si cell, we found evidence that inserting a similar 10 nm TiO\textsubscript{x}N\textsubscript{y} layer between the n+PolySi and TCO layers can cause rollover behavior in single-junction Si cells (see supplementary Figure S10). Thus, the results of this work suggest that the ideal compromise between transparency and electrical properties in the TiN layer might not have been reached in this initial device, and this will be investigated in future work by tuning the TiO\textsubscript{x}N\textsubscript{y} composition. Furthermore, a rollover effect has been reported in CIGS at the Mo/CIGS interface on non-glass substrates, where there is no natural inclusion of Na (or other alkali elements) in the absorber layer. It was shown that this effect can be completely eliminated by providing a sufficient amount of Na \cite{doi:10.1063/1.120026}. This is particularly relevant in this work, as the growth of CZTS is also substrate-dependent, and no intentional Na was added in the fabrication of the tandem cell.
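The additive \textit{V}\textsubscript{oc} and the current-limiting role of the mismatched top cell can be illustrated with an idealized series connection of two single-diode sub-cells (all parameters below are illustrative and not fitted to the measured device):

```python
import math

# Two-terminal tandem: the sub-cells are in series and share the current
# density J, so the terminal voltage is V_top(J) + V_bot(J).  Ideal
# single-diode sub-cells without breakdown; illustrative parameters only.
V_T = 0.02569                                  # thermal voltage at 25 C, V

def v_cell(j, jsc, j0, n):
    """Sub-cell voltage (V) at current density j (mA/cm^2)."""
    return n * V_T * math.log((jsc - j) / j0 + 1.0)

top = dict(jsc=11.0, j0=3e-6, n=1.5)           # CZTS-like top cell (assumed)
bot = dict(jsc=16.0, j0=1e-7, n=1.0)           # Si-like bottom cell (assumed)

def v_tandem(j):
    return v_cell(j, **top) + v_cell(j, **bot)

voc_top = v_cell(0.0, **top)
voc_bot = v_cell(0.0, **bot)
voc_tandem = v_tandem(0.0)                     # open-circuit voltages add up

# The smaller sub-cell Jsc (11 mA/cm^2 here) limits the tandem current:
# the terminal voltage collapses as J approaches it.
print(f"Voc: top {voc_top * 1e3:.0f} mV + bottom {voc_bot * 1e3:.0f} mV "
      f"= tandem {voc_tandem * 1e3:.0f} mV; V(J=10.999) = {v_tandem(10.999) * 1e3:.0f} mV")
```

In this ideal picture the tandem \textit{V}\textsubscript{oc} is the sum of the sub-cell values; the measured rollover corresponds to deviations from it, e.g. reverse breakdown of the limiting cell or contact barriers.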
\begin{figure} \centering \begin{subfigure}{.36\textwidth} \centering \includegraphics[width=\linewidth]{images/image19.pdf} \caption{} \label{fig:10a} \end{subfigure} \begin{subfigure}{.37\textwidth} \centering \includegraphics[width=\linewidth]{images/image20.pdf} \caption{} \label{fig:10b} \end{subfigure} \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=\linewidth]{images/image21.pdf} \caption{} \label{fig:10c} \end{subfigure} \caption{(a) Tandem cell illuminated (solid) and dark (dashed) J-V curves. The insets show the region near V\textsubscript{oc} and the J-V parameters; (b) EQE of the two sub-cells and sum of the sub-cell contributions. The value in parenthesis accounts for a $\sim$ 0.5 mA/cm\textsuperscript{2} contribution from the high wavelength region outside the measurement range; (c) Tandem solar cell scheme} \label{fig:10} \end{figure} To explore these issues, we compare the results of our tandem device to a baseline single-junction CZTS cell where the CZTS thickness was reduced from the typical 1 $\mu$m to a value of around 275 nm, similar to the value used for the tandem. This ``thin CZTS cell" achieved an efficiency of 5.8\%, with a \textit{J}\textsubscript{sc} of 15.8 mA/cm\textsuperscript{2} and a \textit{V}\textsubscript{oc} of 585 mV (the J-V curve is shown in the supplementary Figure S11), which is fairly comparable to state of the art thin CZTS devices (with a record of 8.57\% for a 400 nm thick CZTS \cite{doi:10.1002/pip.2766}). However, when compared to the CZTS growth for the tandem cell, significant morphological differences between the two CZTS layers are noticeable. The morphology comparison is presented in Figure \ref{fig:11}. The SEM top views of CZTS grown on the Si cell and on Mo/SLG, shown in Figure \ref{fig:11} (a) and (b), respectively, reveal a clear difference in grain size. 
The SEM cross-section of the tandem cell, in Figure \ref{fig:11} (c), shows that the CZTS exhibits a double-layer structure, with a smaller grain size. In comparison, the CZTS grown under the same conditions on Mo-coated soda lime glass (SLG), shown in Figure \ref{fig:11} (d), has a single layer and larger grains. This indicates that the local conditions for CZTS growth are different in the two cases. In addition, CZTS photoluminescence (PL) measurements made on both the fully finished tandem and the thin CZTS cell confirm that the thin CZTS cell has a significantly higher PL intensity (see supplementary Figure S12). We suggest that one possible reason for these differences is Na diffusion from the glass, which is not available on a Si substrate. This possibility will be explored in future work. \begin{figure} \centering \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\linewidth]{images/image23.jpg} \caption{} \label{fig:11a} \end{subfigure} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\linewidth]{images/image22.jpg} \caption{} \label{fig:11b} \end{subfigure}\\ \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\linewidth]{images/image25.jpg} \caption{} \label{fig:11c} \end{subfigure} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\linewidth]{images/image24.jpg} \caption{} \label{fig:11d} \end{subfigure} \caption{SEM comparison of the tandem cell and a single junction thin CZTS cell on Mo/SLG. (a) Top view of the CZTS surface as used on the tandem cell, before CdS deposition; (b) Top view of the thin CZTS surface, before CdS deposition; (c) Cross-section view of the upper part of the tandem; (d) Cross-section view of the full thin CZTS cell.
The CZTS absorber layer is highlighted in yellow.} \label{fig:11} \end{figure} The results of this work show that there is a margin for monolithically integrating a CZTS top cell on a full Si bottom cell using high-temperature processing above 500 \degree C, without compromising the bottom cell. Given the constituent contaminant elements of CZTS (in particular Cu and S), we suggest that this study could be generalized to other thin-film chalcogenide materials (many of which do contain Cu), and thereby open up the possibility of exploring new emerging wide band gap semiconductors as top cell alternatives. \section{Conclusion} We have assessed the potential of monolithically-integrated two-terminal tandem cells based on thin-film chalcogenides on Si, using CZTS and double-sided TOPCon Si as a model system. We have investigated the use of a thin TiN barrier layer to protect the bottom Si cell from in-diffusion of metals and sulfur during the CZTS growth, and at the same time serve as an interface recombination layer between the top and bottom cells. It was revealed that the Cu contamination of the Si bulk induced by CZTS growth is significantly smaller than that from annealing of metallic Cu on Si. While traces of all CZTS elements (except for Sn) can be detected at the surface of c-Si after CZTS annealing, it was shown that the main contributor to the lifetime reduction in the bottom Si cell is Cu. Furthermore, it was shown that a TiN barrier layer as thin as 10 nm can effectively suppress the formation of Cu-related deep defects in Si. Based on these results, we presented a proof-of-concept monolithically integrated CZTS/Si tandem solar cell with an efficiency of 1.1\% and a \textit{V}\textsubscript{oc} of 900 mV, which shows an additive \textit{V}\textsubscript{oc} effect. The i-\textit{V}\textsubscript{oc} of the silicon bottom cell was retained during the full fabrication of the CZTS cell when a 10 nm TiN barrier was used.
It is suggested that the poor performance of the tandem cell is mainly due to limitations in the CZTS top cell, namely the difficulty of reproducing high-quality CZTS absorbers on non-glass substrates, where Na is not available. The possibility of non-ohmic blocking behavior at the TiN interfaces is also discussed. By showing that a full TOPCon Si solar cell can be processed at temperatures well above 500 \degree C in the presence of several critical contaminant elements – notably copper – without suffering from a severe degradation in lifetime and without forming deep defect levels, this work opens up the possibility of exploring other less-explored and emerging wide band gap compounds processed at high temperatures. This could allow for achieving high efficiency monolithically integrated tandem solar cells in the future. \section{Acknowledgements} A. Hajijafarassar (A.H.) and F. Martinho (F.M.) contributed equally to this work. This work was supported by a grant from the Innovation Fund Denmark (grant number 6154-00008A). F.M. would like to thank A. A. S. Lancia for the support on the J-V measurements. \section{Data Availability} All data are available from the corresponding authors upon reasonable request. \section*{References} \section{Results and Discussion (Supplementary)} The statistics for the set of 10 symmetrically n-passivated wafers are shown in Figure \ref{fig:s1}. The representative lifetime was taken as the mean value, and the uncertainty as the maximum deviation. The passivation quality is considered to be high, as the corresponding implied V\textsubscript{oc} of the silicon solar cell exceeds 700 mV. It is important to have a good quality passivation when conducting lifetime studies, because the effective minority carrier lifetime always contains contributions from both bulk recombination and surface recombination.
A high quality surface passivation minimizes the contribution of the surface recombination term, and allows one to draw conclusions about changes in the bulk during processing. Throughout this study, it was observed that samples with low quality passivation (or weak resistance to depassivation) would yield different lifetime results due to the depassivation of the surface, making it difficult to study specific changes in the bulk due to contamination. By ensuring a high quality passivation across the wafer set, we have more confidence that the lifetime results are directly comparable and give information about changes in the Si bulk. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image1S.pdf} \caption{Statistical lifetime distribution of the set of 10 symmetrically n-passivated wafers. A scheme of the passivated structure is shown as an inset.} \label{fig:s1} \end{figure} In the metallic \textit{Cu reference} lifetime test, to prove unequivocally that it is the metallic Cu that is responsible for the deterioration in lifetime (and not, for example, the high temperature annealing itself and its possible effects on the passivation), we conducted a similar metallic Cu diffusion test as in the main text, but patterned a large area near the center of the wafer, where no Cu was deposited. The results are shown in Figure \ref{fig:s2}. After annealing, the results clearly show that the Cu-free area exhibits a lifetime close to the original as-passivated values ($>$ 1.5 ms), whereas the areas exposed to Cu show a large degradation in lifetime. Moreover, by tracking the changes with annealing time, the effect of lateral Cu diffusion into the patterned area becomes evident, highlighting the high diffusivity of Cu in Si.
\begin{figure} \centering \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image2S.jpg} \caption{} \label{fig:2sa} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image3S.jpg} \caption{} \label{fig:2sb} \end{subfigure} \caption{Variation of the Cu diffusion reference of the main text with patterning included. (a) After just 30 minutes of annealing in N\textsubscript{2}, the edges of the pattern are already showing decreased lifetime, highlighting the effects of lateral diffusion of Cu. The region not covered with Cu, defined by the pattern, retains a high lifetime, similar to the as-passivated value, meaning that the surface does not depassivate during annealing; (b) After 60 minutes of annealing in N\textsubscript{2}, some Cu has already diffused into the patterned area, as indicated by the further degradation in lifetime.} \label{fig:s2} \end{figure} The \textit{Sulfur reference} test allowed us to decouple the effect of the S atmosphere during the CZTS annealing from the effects of the CZTS itself. In Figure \ref{fig:s3} a direct comparison of an as-passivated quarter and a quarter annealed in an S atmosphere is shown. Both quarters are from the same initial wafer. The result indicates that S is detrimental to the lifetime of Si, which explains the lifetime results shown in the main text. Interestingly, the final post-annealing lifetime is nearly the same for S annealing and for CZTS annealing. Considering the SIMS, RBS, and DLTS results from the main text, this suggests that the mechanism for lifetime degradation is different in these two cases: in the sulfur annealing it is sulfur contamination which is degrading the lifetime, whereas in CZTS annealing the Cu contamination seems to be more relevant. Still, in both cases the end lifetimes are significantly higher than in the metallic Cu case.
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image4S.jpg} \caption{Lifetime mapping showing the effects of sulfur annealing for the 25 nm TiN case at 525 \degree C, compared to a non-annealed quarter from the same wafer. It clearly shows the detrimental effect of annealing the Si wafer in a sulfur atmosphere.} \label{fig:s3} \end{figure} In the main text, a claim was made that the interpretation of the low temperature DLTS peak near 175 K was ambiguous due to the low temperature tail and broadening. To gain further insight into this, a 4-term Gaver–Stehfest algorithm (GS4) was used for signal extraction. The advantage of this algorithm is that it significantly increases the detection resolution, at the cost of a lower signal-to-noise ratio \cite{doi:10.1063/1.366269}. The results are shown in Figure \ref{fig:s4}. Using GS4, several individual peaks can now be resolved. This is an indication of either overlapping or extended defects in the Si bulk, and justifies why no activation energies were extracted for this low temperature feature. The extraction was only done for the higher temperature peak (near 270 K). \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image5S.pdf} \caption{Comparison of the main DLTS signal (lock-in) with the 4-term Gaver–Stehfest (GS4) signal. The comparison shows that the low temperature (below 200 K) feature is a contribution of several peaks, whereas the high temperature feature (around 270 K) is a single peak. Here, the rate window used was 640 ms (window 6).} \label{fig:s4} \end{figure} Having identified the high temperature feature as a single peak, a rate window variation was performed to extract the corresponding Arrhenius plot from the peak shift. These results are shown in Figure \ref{fig:s5} (a) and (b). Certain peaks shifted outside the measurable temperature range, meaning that only 3 peaks could be used to make the Arrhenius plot.
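As context for the GS4 weighting, the Gaver–Stehfest scheme is at heart a numerical inverse Laplace transform. The sketch below is our own minimal illustration of the underlying algorithm, not the DLTS implementation used here; all function names are ours. It computes the $N$-term Stehfest weights and inverts a known transform as a sanity check:

```python
from math import factorial, log, exp

def stehfest_weights(N):
    """Stehfest coefficients V_i for an N-term inversion (N even).
    For N = 4 these are the GS4 weights: -2, 26, -48, 24."""
    V = []
    for i in range(1, N + 1):
        s = 0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k)
                     * factorial(k - 1) * factorial(i - k)
                     * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def gaver_stehfest(F, t, N=4):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = log(2.0)
    return ln2 / t * sum(V * F((i + 1) * ln2 / t)
                         for i, V in enumerate(stehfest_weights(N)))

# Sanity check on a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0, N=4)
error = abs(approx - exp(-1.0))   # N = 4 is coarse, so the error is a few percent
```

Increasing $N$ sharpens the resolution but amplifies noise, consistent with the trade-off noted above.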
As such, the extracted parameters have a large uncertainty. Nevertheless, the values reasonably match well-known Cu defects in Si, as mentioned in the main text. \begin{figure} \centering \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image6S.pdf} \caption{} \label{fig:5sa} \end{subfigure} \begin{subfigure}[b]{.41\textwidth} \centering \includegraphics[width=\linewidth]{images/image7S.pdf} \caption{} \label{fig:5sb} \end{subfigure} \caption{(a) DLTS spectra with time window variation (with lengths 2$^i\times$10 ms, $i$ = 1, $\ldots$ , 6); (b) Arrhenius plot corresponding to the high temperature peak of (a). Here, e\textsubscript{n}\textsuperscript{max} is the electron emission rate, E\textsubscript{a} is the defect activation energy, and $\sigma$\textsubscript{na} is the capture cross-section. Only 3 peaks fall within the experimental temperature range.} \label{fig:s5} \end{figure} In the text, it was claimed that the tandem exhibited a \textit{V}\textsubscript{oc} of 900 mV, higher than that of each individual single junction solar cell. Figure \ref{fig:s6} shows the best J-V curves of both single junction solar cells, at the time the CZTS/Si tandem was fabricated. Although the values are currently under optimization, it can be seen that in both cases a \textit{V}\textsubscript{oc} around 600 mV is expected using the same parameters as used for the production of the tandem cell (when applicable). \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image8S.pdf} \caption{The best single junction J-V characteristics of our processing line at the time of fabrication of the CZTS/Si tandem cell. In the tandem cell, the V\textsubscript{oc} reached close to 900 mV, whereas each individual cell only shows around 600 mV.} \label{fig:s6} \end{figure} As mentioned in the main text, the transparency of the TiN layer can be improved by incorporating oxygen into the films, albeit at the cost of increased resistivity.
In this work, the oxygen content was controlled by the amount of residual oxygen available in the ALD reactor during the deposition. The transmission spectrum as well as the absorption coefficient ($\alpha$) of the 10 nm TiO\textsubscript{x}N\textsubscript{y} layer used in the tandem cell is compared with 10 nm TiN and TiO\textsubscript{2} layers in Figure \ref{fig:s12}. As can be seen in Figure \ref{fig:s12}, a high transmission of around 90\% in the near infrared region was achieved for the TiO\textsubscript{x}N\textsubscript{y} layer; however, the resistivity of the film increased by several orders of magnitude, from 0.5 - 1 m$\Omega$.cm for TiN to around 40 $\Omega$.cm for TiO\textsubscript{x}N\textsubscript{y}. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image12S_abs.pdf} \caption{Transmission spectra (solid lines, left y-axis) and absorption coefficient (dashed lines, right y-axis) of 10 nm TiN, 10 nm TiO\textsubscript{x}N\textsubscript{y}, and 10 nm TiO\textsubscript{2} films deposited on fused-silica (SiO\textsubscript{2}) substrates.} \label{fig:s12} \end{figure} By monitoring the minority carrier lifetime of the samples throughout different processing steps, we observed that the TiN deposition can have a negative effect on the lifetime of the samples. In section 3.1 of the main text, it was argued that for the samples in the lifetime series a mild Fe contamination could be occurring during the ALD step, and that in the tandem cell sample an additional partial loss of the hydrogen passivation could be happening as well, as the sample had an extra hydrogenation step using SiN:H. To further evaluate the TiN effect, special samples were prepared. In these samples, the front (n+PolySi) or rear (p+PolySi) surface of the device precursor wafers was coated by a 75 nm SiN:H layer. The SiN:H layer was then patterned by photo-lithography and wet-etching in buffered HF to expose the underlying n+polySi/p+polySi layer.
Photos of such samples are shown in Figure \ref{fig:s13} (a) and (b) as examples. It has been confirmed by lifetime maps (before and after patterning, not shown here) that the patterning process has no significant effect on the minority carrier lifetime of the wafers. Moreover, these lifetime mappings have been done with the non-patterned side facing upward (which had a blanket SiN:H layer) to exclude any possible optical effects of the pattern on the measurements. Subsequently, 10 nm TiO\textsubscript{x}N\textsubscript{y} was deposited on the patterned surface, and the minority carrier lifetime was mapped over the entire wafer. The obtained lifetime maps are shown in Figure \ref{fig:s13} (c) and (d). As can be noted from the lifetime maps, the exposed areas exhibit lower minority carrier lifetime in both cases; however, the p+PolySi sample shows more severe degradation than the n+PolySi counterpart. A possible explanation for this loss is a partial loss of hydrogen passivation (provided by the preceding hydrogenation process with SiN:H) in the exposed areas as a result of the high temperature ALD process. In the non-exposed areas, this loss is not expected, as the hydrogen is provided by the SiN:H layer, which contains a high amount of atomic hydrogen as an impurity. If that is the case, a post-TiN hydrogenation process, e.g., annealing the samples in a hydrogen-rich atmosphere such as forming gas or a remote hydrogen plasma, could help to recover the initial lifetime. Regarding the possibility of Fe contamination, it has been seen that placing the samples on a clean dummy wafer can mitigate the degradation (not shown here). A clean dummy wafer was in fact used for the tandem cell fabrication (in section 3.4 of the main text), but a degradation in lifetime was still seen after the TiN step, suggesting that the hydrogenation loss might also be playing a role.
Note, however, that in the case of a mild contamination during the TiN deposition, the same protective effect of SiN shown in Figure S8 would still occur, as it would block the diffusion of contaminants into Si. For this reason, we are still not able to single out the reason for the degradation occurring during the TiN step, and this requires additional investigation in the future. \begin{figure} \centering \begin{subfigure}[b]{.464\textwidth} \centering \includegraphics[width=\linewidth]{images/image_TiN_n+polySi.jpg} \caption{n+PolySi/TO} \label{fig:s13a} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image_TiN_p+polySi.jpg} \caption{p+PolySi/TO} \label{fig:s13b} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image_n+polySi_afterTiN.pdf} \caption{TiN on n+PolySi/TO} \label{fig:s13c} \end{subfigure} \begin{subfigure}[b]{.47\textwidth} \centering \includegraphics[width=\linewidth]{images/image_p+polySi_afterTiN.pdf} \caption{TiN on p+PolySi/TO} \label{fig:s13d} \end{subfigure} \caption{Effect of the TiN deposition on the passivation quality of n+polySi (left) and p+polySi (right) TOPCon structures. Patterned SiN pictures ((a) and (b)) and the corresponding lifetime maps ((c) and (d)) on n+polySi/TO and p+polySi/TO structures.} \label{fig:s13} \end{figure} We have seen throughout our processing that blocking behaviors can also occur in our single junction CZTS cells, causing a roll-over effect on the J-V curve. We compare this to the tandem cell in Figure \ref{fig:s7}. In the tandem cell, we speculate that this could be related to the lack of alkali elements in the CZTS layer or to non-ohmic interfaces at the TiN barrier layer. In the case of the single CZTS cell, the roll-over was caused by a non-ideal CZTS composition, and the effect was completely removed by tuning the CZTS composition. In the tandem cell, the same ideal CZTS composition was used.
The exact cause of the roll-over behavior in the tandem cell is still under investigation. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image9S.pdf} \caption{Comparison of a roll-over effect experimentally found during the tuning of the CZTS baseline process and the roll-over effect on the tandem cell. While this problem was resolved in our single junction CZTS baseline, the reasons for this behavior in the tandem cell are currently under investigation.} \label{fig:s7} \end{figure} Additionally, we have found evidence that part of the roll-over effect could be occurring on the Si bottom cell alone, due to the presence of TiO\textsubscript{x}N\textsubscript{y} between the n+polySi and the TCO layers. This is shown in the J-V curve of Figure \ref{fig:s8}. The roll-over is not as significant as in the tandem cell, suggesting that this is likely not the only contributor to the overall distortion of the tandem J-V curve. As mentioned in the main text, this effect can be controlled by tuning the amount of oxygen allowed in the TiN film. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image10S.pdf} \caption{Comparison of a single junction Si solar cell with and without a TiN layer on the top. The presence of a 10 nm TiN layer causes a roll-over effect, distorting the J-V curve.} \label{fig:s8} \end{figure} To evaluate the performance of the top cell and the current matching condition, a CZTS single junction solar cell was prepared, with a thickness similar to that of the tandem cell, around 275 nm. The J-V characteristic of this ``thin CZTS cell'' is plotted in Figure \ref{fig:s9}. Despite its reduced thickness, this solar cell reaches almost 16 mA/cm\textsuperscript{2} and has an efficiency of 5.8$\%$, comparable to the best CZTS cells produced at the time of the tandem fabrication.
However, the tandem results do not show this behavior, as can be seen by the severe blocking occurring in the first quadrant of the J-V curves. This suggests that the top CZTS cell in the tandem is not performing as well as the thin single junction CZTS cell. The reasons for this are currently under investigation. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image11S.pdf} \caption{J-V characteristic of the single junction thin CZTS solar cell. Despite a reduced thickness of only 275 nm, the current reaches close to 16 mA/cm\textsuperscript{2}.} \label{fig:s9} \end{figure} A further indication of the top CZTS cell limiting the tandem performance was obtained using room-temperature photoluminescence (PL) on complete devices. As Figure \ref{fig:s10} shows, the single thin CZTS cell has a PL yield 2 orders of magnitude higher than the CZTS on the tandem cell, despite having been produced under the same conditions (including all the buffer layers). This is naturally linked to the difference in grain size, as shown by the SEM images in the main text, but it has also been reported that Na and other alkali metals can lead to a reduction in non-radiative recombination and an increase in device performance \cite{doi:10.1063/1.4916635}. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{images/image12S.pdf} \caption{Photoluminescence comparison of CZTS grown on 10 nm TiN in the tandem device (blue) and on Mo in a 5.8$\%$ efficient thin CZTS cell (black). The wavelength of the excitation laser was 785 nm.} \label{fig:s10} \end{figure} It is mentioned in the experimental methods that the full tandem cell is annealed on the hotplate in air at 250 \degree C. This step is used to improve the properties of the CZTS/CdS heterojunction. The effects of this annealing step are shown in Figure \ref{fig:s11}.
\begin{figure} \centering \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[width=\linewidth]{images/image11S_a.pdf} \caption{} \label{fig:1a} \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[width=\linewidth]{images/image20.pdf} \caption{} \label{fig:1b} \end{subfigure} \caption{EQE improvement of the top cell upon hot plate annealing in air at 250 \degree C: (a) before hot plate annealing; (b) after hot plate annealing.} \label{fig:s11} \end{figure} \section*{References}
\section{Introduction} \label{sec:intro} A safety critical system in, for example, a nuclear power plant involves a complicated and advanced safety system to ensure that its operation is absolutely safe. The system consists of a large number and various types of sensors \cite{jamilAffandi,SciTopics}. All of them generate a huge amount of data on a real-time basis, which must be processed properly throughout the whole lifetime of the system. Many heuristic approaches based on artificial intelligence (AI), such as neural networks, are currently available and have been studied intensively \cite{Obreja,waveletRBNN,recurrentNN,pakDede}. However, the AI-based approaches have a fundamental problem due to their statistical algorithms, which could lead to disaster in real applications of a safety critical system. It is clear that in such critical systems, zero tolerance for faults is the most important principle. Therefore, putting safety as the priority, one should implement an exhaustive algorithm spanning all possibilities rather than using AI. This turns into an exhaustive search problem, which unfortunately suffers from inefficiency due to the requirement of huge computing power. Some technical approaches have been introduced to overcome this problem. Most of them deploy parallel algorithms \cite{Karnin1984,Hui2010} together with graphical or combinatorial representations to improve both resource usage and running time \cite{Matwin1985,haystack,Bhalch2010,Chang2011}. Previously, the application of exhaustive methods to search problems was not feasible for a large number of sensors. Fortunately, affordable parallel environments using graphic processing units (GPUs) are available nowadays. The use of GPUs is getting popular, especially after the introduction of the NVIDIA Compute Unified Device Architecture (CUDA) through a C-based API \cite{cuda}. This enables an easy way to take advantage of the high performance of GPUs for parallel computing.
Deploying GPU-based distributed computing would reduce the execution time that made such systems less responsive in the past, while also realizing lower power and space consumption than CPUs \cite{knapsackGPU}. This motivates us to reconsider the feasibility of exhaustive search for hybrid sensor networks. In this paper a new exhaustive-search-based model is proposed. The model is mainly intended to realize an exhaustive decision support system (DSS) consisting of a huge number of various sensors. However, this paper focuses only on introducing the model. The discussion of parallelization and a detailed analysis will be published elsewhere. The paper is organized as follows. First, in Sec. \ref{sec:model} the model is introduced, followed by the description of the mathematical formulation in Sec. \ref{sec:math}. In Sec. \ref{sec:alg} a simple algorithm to execute the model is given. Finally, the paper ends with a short summary and discussion. \section{The model} \label{sec:model} Before moving on to constructing the model and its mathematical representation, let us discuss the basic constraints and circumstances in the expected applications. Keep in mind that the model is developed under the following considerations: \begin{enumerate} \item \textbf{No failure decision is allowed.}\\ By definition, there is no room for even a small mistake generated by the DSS. This actually discourages the deployment of any AI-based method from first principles. \item \textbf{Fast enough 'real-time' processing}\\ Fast execution time of the whole process is crucial to increase safety. However, an execution time on the order of minutes is in practice more than enough. This moderate requirement encourages the implementation of exhaustive methods supported by GPU-powered computation.
\item \textbf{Huge number of hybrid sensors}\\ The system consists of a huge number (on the order of hundreds or thousands) of sensors with different characteristics, in particular types and scales \cite{domainSafetyAnalysis}. \item \textbf{Certain relationships across the sensors}\\ Each sensor is related to other sensors in certain ways and to some degree. The relationships among the sensors are hereafter called interactions. \item \textbf{Sensor network with multiple clusters}\\ The sensor network is divided into several clusters, which typically represent geographical locations with different degrees of interaction. This reflects the situation in, for instance, a nuclear power plant which is equipped with many sensors in each building. Consequently, the sensors in each cluster have stronger interactions among themselves than with those belonging to other clusters. So, the model should be able to describe the independent interactions among the sensors within a cluster, and also the interactions among different clusters. \item \textbf{Dynamic behavior}\\ The values of each sensor naturally change from time to time. However, the data acquisition is performed periodically, for instance every few minutes, in accordance with the second point above. In a nuclear reactor facility such changes could happen due to human errors, common system failures and even seismic activities \cite{binarydecission}. \end{enumerate} \begin{figure}[t!] \centering \includegraphics[width=85mm]{skemaEvaluasi.eps} \caption{The illustration of the 1-D relationship of $n$ sensors evaluated up to the $m^\mathrm{th}$ level, where $S$ denotes a sensor and $E$ is the evaluation result.} \label{fig:simpleEval} \end{figure} Having the above requirements in mind, one obviously arrives at the problem of unlimited decision trees. In order to reduce the tree significantly without raising the risk of failures, let us assume the nearest neighborhood approximation (NNA).
Under this approximation, only the interactions with the nearest sensors are taken into account. \begin{figure}[t!] \centering \includegraphics[width=85mm] {representasi.eps} \caption{The circular representation of the 1-D relationship of $n$ sensors evaluated for two schemes (blue and brown lines) up to the $m^\mathrm{th}$ level, where $S$ denotes a sensor and $E$ is the evaluation result.} \label{fig:representasi} \end{figure} In the present paper, let us consider the simplest case of a one dimensional (1-D) relationship. This means all sensors are put on a virtual line which allows, for each sensor, only interactions with its nearest right and left neighboring sensors. This actually reproduces the known tree analysis commonly implemented in the analysis of faults \cite{ftaAircraft,binarydecission}, element interactions \cite{Park_parallel,noisyGraph, socialNet} and even in system optimization \cite{airportLayout}. The algorithm can be well illustrated in a tree-like diagram using the evaluation scheme under the NNA as shown in Fig. \ref{fig:simpleEval}. In the figure, two adjacent sensors are first evaluated, and the result is subsequently evaluated with another adjacent sensor. The evaluation scheme depicted in Fig. \ref{fig:simpleEval} can be exhaustively changed according to the values acquired from the responsible sensors. It should be noted that the evaluation scheme is not necessarily binary; it could be something else, such as fuzzy logic. On the other hand, due to point 4, one should consider a modified tree analysis, in which both edges also interact with each other, forming a circular line of sensors. Moreover, each sensor on the circular line should be placed carefully, because, according to point 5 and the NNA, the relative location of sensors on the circular line represents their degree of relationship, or relevancy, between one another. The stronger the relationship between two sensors, the closer they should be placed to each other.
This type of circular model is depicted in Fig. \ref{fig:representasi}. There are two examples of evaluation results in the figure, the blue and brown ones corresponding to evaluations at different times. The innermost circles represent the chain of sensors, and the subsequent outer circles describe the evaluation results at certain levels. As required in point 5, the sensor network should also be divided into several clusters based on either their genuine characteristics or critical levels. Each cluster can be treated separately as an independent sensor network as in Fig. \ref{fig:representasi}. The model with several clusters is illustrated in Fig. \ref{fig:representasiCluster}, where each cluster is separated by the blue dashed lines. Two evaluation results are shown in the figure as before, the blue and brown ones corresponding to evaluations at different times. \begin{figure}[b!] \centering \includegraphics[width=85mm] {representasiCluster.eps} \caption{The circular representation of the 1-D relationship of $n$ sensors belonging to separate clusters, evaluated for two schemes (blue and brown lines) up to the $m^\mathrm{th}$ level, where $S$ denotes a sensor and $E$ is the evaluation result.} \label{fig:representasiCluster} \end{figure} Now, one is ready to formulate the model in a mathematical representation. \section{Mathematical representation} \label{sec:math} Based on the previous discussion, one should first model the interaction between neighboring sensors. This should be a function which determines the value of $E_{ij}$ representing the interaction result between two adjacent sensors at points $i$ and $j$, \begin{equation} E_{ij} = a_{ij} \, \left( \frac{x_i}{l_i + 1} + \frac{x_j}{l_j + 1} \right) \; . \label{eq:interaction} \end{equation} Here, $l_i = 1, 2, \cdots, m$ denotes the evaluation level of $x_i$ with $i = 1,2, \cdots, n$ and $m = n - 1$.
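To make Eq. (\ref{eq:interaction}) concrete, the short sketch below (our own illustration; the sensor positions and values are arbitrary) evaluates two adjacent first-level sensors and then one second-level step, using the couplings $a_{ij}$ and $a_{ijk}$ defined in Eqs. (\ref{eq:aij}) and (\ref{eq:aij2}) below:

```python
def coupling(i, j, n):
    """Eq. (2): a_ij = 1 - |i - j|/n, for 1-based sensor positions."""
    return 1.0 - abs(i - j) / n

def interact(x_i, l_i, x_j, l_j, a):
    """Eqs. (1) and (3): E = a * (x_i/(l_i + 1) + x_j/(l_j + 1))."""
    return a * (x_i / (l_i + 1) + x_j / (l_j + 1))

n = 4
x = {1: 0.8, 2: 0.6, 3: 0.4, 4: 0.2}   # normalized sensor values, all at level l = 1

# First level: sensors 1 and 2, with a_12 = 1 - 1/4 = 0.75
E12 = interact(x[1], 1, x[2], 1, coupling(1, 2, n))        # 0.525

# Second level: a_123 = (a_12 + a_13 + a_23)/3 = 2/3 (Eq. (4)),
# with x_12 = E12 now at level l_12 = 2 while sensor 3 is still at l = 1
a123 = (coupling(1, 2, n) + coupling(1, 3, n) + coupling(2, 3, n)) / 3
E123 = interact(E12, 2, x[3], 1, a123)                     # 0.25
```

Note how both the level factors $1/(l+1)$ and the distance-dependent couplings shrink the evaluation value as the tree deepens.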
Here, $x_i$ is the normalized value acquired by the sensor for $l_i = 1$, or the value of the previous evaluation result for $l_i > 1$. The coupling constant $a_{ij}$ reflects the strength of the interaction between the two adjacent sensors, $x_i$ and $x_j$. It is defined in such a way that the value of $E_{ij}$ becomes smaller for more distant sensors, that is, \begin{equation} a_{ij} = 1 - \frac{| i - j |}{n} \; , \label{eq:aij} \end{equation} for $l_i = 1$ in a system with $n$ sensors. Moreover, Eqs. (\ref{eq:interaction}) and (\ref{eq:aij}) can be extended for $l_i > 1$ as follows, \begin{equation} E_{(ij)k} = a_{ijk} \, \left( \frac{x_{ij}}{l_{ij} + 1} + \frac{x_k}{l_k + 1} \right) \; , \label{eq:interaction2} \end{equation} with, \begin{equation} a_{ijk} = \frac{a_{ij} + a_{ik} + a_{jk}}{3} \; , \label{eq:aij2} \end{equation} and $x_{ij} = E_{ij}$ respectively. The situation is well illustrated in Figs. \ref{fig:representasi} and \ref{fig:representasiCluster}. Further generalization of Eqs. (\ref{eq:interaction2}) and (\ref{eq:aij2}) up to a certain evaluation level can be performed in a straightforward way. \begin{table}[t!] \begin{center} \caption{The values of the weight parameter $a_{ij}$ for $n$ sensors.} \label{tab:aij} \begin{tabular}{c|cccccccc} $a_{ij}$ & 1 & 2 & 3 & & $\cdots$ & & $n - 1$ & $n$ \\ \\ \hline \\ 1 & 1 & $\frac{n-1}{n}$ & $\frac{n-2}{n}$ & & $\cdots$ & & $\frac{2}{n}$ & $\frac{1}{n}$\\ 2 & $\frac{n-1}{n}$ & 1 & $\frac{n-1}{n}$ & & $\cdots$ & & $\frac{3}{n}$ & $\frac{2}{n}$\\ 3 & $\frac{n-2}{n}$ & $\frac{n-1}{n}$ & 1 & & $\cdots$ & & $\frac{4}{n}$ & $\frac{3}{n}$ \\ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & & 1 & & $\vdots$ & $\vdots$\\ $n-1$ & $\frac{2}{n}$ & $\frac{3}{n}$ & $\frac{4}{n}$ & & $\cdots$ & & 1 & $\frac{n-1}{n}$\\ $n$ & $\frac{1}{n}$ & $\frac{2}{n}$ & $\frac{3}{n}$ & & $\cdots$ & & $\frac{n-1}{n}$ & 1 \\ \end{tabular} \end{center} \end{table} There is actually an important reason for choosing the definition in Eqs.
(\ref{eq:aij}) and (\ref{eq:aij2}). In order to fulfill the requirements in points 3 and 4 in Sec. \ref{sec:model}, it is plausible to normalize all scales to a uniform unit scale. In the present case all values are normalized to be in the range between 0 and 1. This normalization demands that all acquired values from the sensors be normalized accordingly through the relation, \begin{equation} x_i = f \, \left( x^\prime_i - x_\mathrm{min} \right) \; , \end{equation} where $x_i$ and $x^\prime_i$ are the normalized and originally acquired values of the $i^\mathrm{th}$ sensor for $l_i = 1$. The normalization factor $f$ is, \begin{equation} f = \frac{1}{\left| x_\mathrm{max} - x_\mathrm{min} \right|} \; , \end{equation} with $x_\mathrm{max/min}$ denoting the maximum or minimum value of each sensor. This kind of normalization enables us to treat all sensors in the same manner, regardless of their types and unit scales. From Eqs. (\ref{eq:aij}) and (\ref{eq:aij2}), it is obvious that the coupling $a$ uniquely characterizes the present model. It ensures that the evaluation value $E$ at the final $(n - 1)^\mathrm{th}$ level always remains bounded between 0 and 1. This is in contrast to any conventional tree analysis, which associates the largest evaluation value at the final level with the final solution. Note that, by definition, $1/n \le a_{ij} \le 1$, as shown in Tab. \ref{tab:aij}, which forms a symmetric matrix with unit diagonal elements. Furthermore, one should take a threshold value $E_\mathrm{th}$ as a standard value determining whether the evaluation value at a certain level is allowed to proceed further or not. For the sake of simplicity, this value is fixed and valid for all levels and sensors. It represents the critical value of safety. Following the above normalization, it is again constrained, \begin{equation} 0 < E_\mathrm{th} < 1 \; . \label{eq:eth} \end{equation} Only if the evaluation value exceeds this threshold, i.e.
$E > E_\mathrm{th}$, is the tree analyzed further. Otherwise the evaluation of that branch terminates. Depending on the initial value of each sensor at a given time, some evaluation values at the final level may survive. A surviving value triggers the warning alarm, indicating an anomaly detected by one or more sensors. Of course, the determination of an appropriate $E_\mathrm{th}$ requires preliminary experiments based on the available standards and regulations. The above procedure is carried out each time, following the periodic data acquisition by all sensors. Finally, all the tools have been established and we are ready to apply the above rules. \section{The algorithm} \label{sec:alg} In this section, let us provide a simple algorithm realizing the model discussed above. Because of the exhaustive method deployed in the model, all variables must be treated as a circular structure. The array of variables should have a traversing pointer indicating the first element as the starting point. The evaluation is then performed from the starting point and proceeds through the subsequent elements of the variable array in increasing order until it reaches the last one, that is, back to the first element. Each sensor has a chance to become both the root of a tree and a leaf. While forming the circular model, the tree within the model is evaluated recursively. A set of simple algorithms is presented here. It consists of two main parts: the pre-evaluation and the main evaluation. Each sensor is labeled with an integer ranging from $0$ to $n-1$ for $n$ sensors. The algorithm starts by positioning the sensors, shifting them one by one to generate all relevant combinations. Each time a tree of sensors is formed, it is evaluated according to the scheme depicted in Fig. \ref{fig:simpleEval}. Algorithms \ref{alg:ps}--\ref{alg:value} require the arrays of sensor indices and values. Each entry in the sensor index array points directly to the corresponding entry in the array of sensor values.
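The normalization and the coupling constant can be sketched in a few lines of Python. This is a hedged reading of Eq. (\ref{eq:aij}) and the normalization relation above; the function names and sample values are ours, not from the original presentation:

```python
# Illustrative helpers for the model's building blocks (names are ours).

def normalize(x_raw, x_min, x_max):
    """x_i = f (x'_i - x_min) with f = 1 / |x_max - x_min|, mapping an
    acquired sensor value into [0, 1]."""
    return (x_raw - x_min) / abs(x_max - x_min)

def coupling(i, j, n):
    """a_ij = 1 - |i - j| / n for a system of n sensors, so that
    1/n <= a_ij <= 1 with a_ii = 1 (cf. Tab. of a_ij values)."""
    return 1.0 - abs(i - j) / n
```

The resulting couplings reproduce the entries of Tab. \ref{tab:aij}: the matrix is symmetric with unit diagonal elements, and the off-diagonal entries decay linearly with the label distance $|i-j|$.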
Algorithm \ref{alg:ps} determines the positioning of sensors, while Algorithms \ref{alg:weight} and \ref{alg:value} evaluate their interactions. \begin{algorithm} \caption{Positioning sensors} \label{alg:ps} \begin{algorithmic}[1] \REQUIRE $root$ \COMMENT{index of the sensor being the root of a tree or a sub-tree} \REQUIRE $n$ \COMMENT{number of sensors involved} \REQUIRE $index$ \COMMENT{array of sensor indices} \IF{$n=2$ \OR $root=n-2$} \STATE evaluate their interaction \STATE exchange the sequence of the two or the last two sensors \STATE evaluate their interaction \ELSE \FOR {$i=0 \to n$} \STATE re-position sensors($index$,$root+1$) \STATE shift the sequence from the $root$ position \ENDFOR \ENDIF \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Calculating the interaction weight} \label{alg:weight} \begin{algorithmic}[1] \REQUIRE $root$ \COMMENT{index of the sensor being the root of a tree or a sub-tree} \REQUIRE $n$ \COMMENT{number of sensors involved} \REQUIRE $index$ \COMMENT{array of sensor indices} \REQUIRE $w=0$ \COMMENT{weight of the sensor interaction, initialized} \REQUIRE $b=0$ \COMMENT{counter for the number of sensor combinations currently involved in the interaction} \FOR{$i=1 \to root$} \FOR{$j=i+1 \to root$} \STATE $w=w+(1-\frac{abs(index[i]-index[j])}{n})$ \STATE $b=b+1$ \ENDFOR \ENDFOR \RETURN $\frac{w}{b}$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Evaluating the interaction} \label{alg:value} \begin{algorithmic}[1] \REQUIRE $x_{1},x_{2}$ \COMMENT{the two sensor values involved; for a deeper tree, $x_1$ may be an evaluation value from the previous depth level} \REQUIRE $l_{1},l_{2}$ \COMMENT{level of interaction} \REQUIRE $a$ \COMMENT{weight of interaction} \STATE $E=a((\frac{x_{1}}{l_{1}+1})+(\frac{x_{2}}{l_{2}+1}))$ \end{algorithmic} \end{algorithm} In positioning the sensors as determined by Algorithm \ref{alg:ps}, for the case of two sensors the algorithm only exchanges their sequence.
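The weight and evaluation steps above admit a direct Python rendering. This is an illustrative sketch of Algorithms \ref{alg:weight} and \ref{alg:value} (variable names are ours); here $m$ plays the role of $root$, i.e. the number of sensors currently involved:

```python
# Sketch of Algorithms 2 and 3 (illustrative rendering, names are ours).

def interaction_weight(index, m, n):
    """Average a_ij over all pairs among the first m entries of the
    sensor index array (Algorithm 2); n is the total number of sensors.
    This realizes Eq. (aij2) and its generalizations."""
    w, b = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            w += 1.0 - abs(index[i] - index[j]) / n  # a_ij of Eq. (aij)
            b += 1                                   # pair counter
    return w / b

def evaluate(x1, x2, l1, l2, a):
    """E = a (x1/(l1+1) + x2/(l2+1)) (Algorithm 3); x1 may itself be an
    evaluation value carried over from the previous depth level."""
    return a * (x1 / (l1 + 1) + x2 / (l2 + 1))
```

For three sensors, `interaction_weight` averages over the three dual-sensor pairs, in agreement with Eq. (\ref{eq:aij2}).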
After the evaluation of the first sequence, the algorithm exchanges the sequence and re-evaluates the new one. This procedure is specified in the second to fourth lines. For the case of more than two sensors, the algorithm recursively traverses the sequence until it reaches the condition where only two sensors are left. In this case, the algorithm runs in the same manner as if only two sensors were involved. Each time the algorithm traverses deeper, $root$ is increased, indicating the depth of the tree under evaluation. When a leaf of the tree is reached, the algorithm returns to the parent node, exchanges to the next root, and traverses deeper until it again reaches a leaf. After all paths to each leaf have been visited from one root, the sequence is exchanged so that another sensor becomes the new root, and so forth. Furthermore, the interaction weight is calculated using Eq. (\ref{eq:aij}) and given in Tab. \ref{tab:aij}. In Algorithm \ref{alg:ps}, $root$ determines the root of a new sub-tree, while in Algorithm \ref{alg:weight}, $root$ determines the number of sensors currently involved in the interaction, as illustrated in Fig. \ref{fig:simpleEval}. The variable $b$ in Algorithm \ref{alg:weight} counts the number of combinations of two sensors among the whole set, as defined in Eq. (\ref{eq:aij2}); that is, it equals the number of corresponding elements in Tab. \ref{tab:aij}. For example, the interaction of three sensors contains three dual-sensor interactions, while the interaction of four sensors contains six, and so on. Finally, Algorithm \ref{alg:value} calculates the evaluation value $E$. It requires the weight of each interaction from Algorithm \ref{alg:weight}, using Eq. (\ref{eq:aij}) and Tab. \ref{tab:aij}. The total number of evaluation values is equal to the number of sensors involved. \section{Summary} A new model based on the exhaustive search method for a hybrid sensor network has been proposed.
The model treats all sensors in the same manner by introducing a normalization procedure for all sensors and parameters. It is shown that the model can describe the whole evaluation process using few parameters, namely the coupling constant $a$ for each pair of sensors, determined uniformly, and the universal threshold value $E_\mathrm{th}$. In the present paper, the study is focused on the case of a sensor network with a 1-D relationship. A simple algorithm for such cases has also been given and briefly discussed. It is argued that the model could realize a feasible early warning system for safety-critical facilities involving various sensors, using the exhaustive method to prevent unnecessary failures caused by the statistical approximations inherent in, for example, AI-based methods. On the other hand, the method requires much less computing power, having a complexity of only $O(n!)$. In principle the method can be extended to incorporate more complicated relationships among the sensors by considering higher dimensional relationships. Nevertheless, studies on distributing the computational load to improve the processing speed should also be done carefully. All of these issues are in progress and will be published elsewhere. \section*{Acknowledgment} AAW thanks the Indonesian Ministry of Research and Technology for financial support, and appreciates the warm hospitality of the Group for Theoretical and Computational Physics at the Research Center for Physics LIPI during this work. The work of LTH is supported by Riset Kompetitif LIPI 2012 under Contract no. 11.04/SK/KPPI/II/2012. \bibliographystyle{IEEEtran}
\section{Introduction}\label{intro} Hannes Alfv\'en introduced the notion of magnetic flux conservation in ideal magnetohydrodynamics \cite{Alfven42}. This property of ``flux-freezing'' involves an implicit assumption, however, that plasma fluid velocities remain smooth in the limit of vanishing resistivity. This assumption need not be valid if the viscosity vanishes together with the resistivity, at a fixed or a decreasing magnetic Prandtl number. At the implied high Reynolds numbers, smooth laminar plasma flows will be unstable to the development of turbulence. We have argued previously that magnetic flux-conservation in its usual sense cannot hold at any small resistivities, no matter how tiny, in a turbulent plasma with a Kolmogorov-like energy spectrum \cite{EyinkAluie06}. Neither is flux-freezing completely broken but, instead, it becomes intrinsically stochastic due to the roughness of the advecting velocity field \cite{Eyink07,Eyink09}. Infinitely-many magnetic field lines are carried to each point---even in the limit of very high conductivity---which must be averaged to obtain the resultant magnetic field at that point. The previous ideas are theoretical predictions for turbulent flows of astrophysical and laboratory plasmas. However, they are exact properties of at least one soluble problem of turbulent advection, the Kazantsev-Kraichnan (KK) model of kinematic magnetic dynamo. See \cite{Kazantsev68,KraichnanNagarajan67, Kraichnan68} and many following papers \cite{RuzmaikinSokolov81, Novikovetal83,Vergassola96,RogachevskiiKleeorin97, Vincenzi02,BoldyrevCattaneo04,Boldyrevetal05,Celanietal06,ArponenHorvai07}. In that model the physical turbulent velocity field is replaced by a Gaussian random field that is white-noise in time and rough (H\"older continuous) in space. 
It is rigorously established for this model that Lagrangian particle trajectories are ``spontaneously stochastic'' in the limit of infinite Reynolds numbers, with an infinite ensemble of trajectories for a given advecting velocity field and a given initial particle position \cite{Bernardetal98,GawedzkiVergassola00,Chavesetal03,EvandenEijnden00, EvandenEijnden01,LeJanRaimond02,LeJanRaimond04,Kupiainen03}. This remarkable breakdown of Laplacian determinism is a consequence of the properties of two-particle Richardson diffusion \cite{Richardson26}, which are also expected to hold for real fluid turbulence. The stochastic nature of flux-freezing must play an essential role in the turbulent magnetic dynamo at high kinetic Reynolds numbers, either for fixed or for vanishing magnetic Prandtl numbers. The zero Prandtl-number fluctuation dynamo was studied in an earlier work of Eyink and Neto \cite{EyinkNeto10}, hereafter referred to as I, primarily for the KK model. This model undergoes a remarkable transition as the scaling exponent $\xi$ of the velocity structure function is decreased, with the dynamo effect disappearing for $\xi<1$ in space dimension three. This seems rather counterintuitive, because velocity-gradients and line-stretching rates in the model {\it increase} as $\xi$ is lowered, which should favor dynamo action. Paper I showed that the stochasticity of flux-freezing is fundamental to understanding this transition. Although magnetic line-stretching rates indeed increase for decreasing $\xi,$ the resultant magnetic field at a point is the average of infinitely-many field lines advected to that point from a large spatial volume. As shown in I, the {\it correlations} between independent line-vectors are small for $\xi<1,$ when the velocity field is too rough, and the net magnetic field decays despite the rapid growth in strength of individual field lines.
The contribution of flux-line stochasticity to turbulent dynamo action for $\xi>1$ was not fully explicated in I, however. Infinitely-many field lines are advected to each point from a large spatial volume, e.g. with diameter of order the velocity integral length $L$ in one turbulent turnover time $L/u_{rms}.$ But all of these field lines are unlikely to make an equal contribution to dynamo action. In particular, field vectors that arrive to the same point from very distant initial locations must be poorly correlated and give small (or negative) dynamo effect. It is not hard to guess that dynamo action in the limit of high-Reynolds number $Lu_{rms}/\nu \gg 1$ and low Prandtl number $\nu/\eta\ll 1$ must arise mainly from vectors with initial separation distances $r$ of order $\sim \ell_\eta,$ the resistive length. Indeed, this is the only available length-scale in that limit, on dimensional grounds. The exact contribution to fluctuation dynamo from vectors with initial separation $r$ remains unknown, however, even in the KK model. One of the main purposes of this paper is to provide a quantitative answer to this problem. Another important issue left unresolved in I was the outcome of a spatially-uniform initial magnetic field in the dynamo regime of the KK model for $\xi>1.$ Can such a magnetic field provide the seed for a small-scale fluctuation-dynamo? This question is very closely connected with the proper formulation of the concept of ``magnetic induction''. This process is usually defined as the creation of small-scale magnetic fluctuations by the turbulent ``shredding'' of a non-zero mean magnetic field and has been invoked to explain small-scale magnetic fluctuations in liquid metal experiments \cite{Odieretal98,Bourgoinetal02,Peffleyetal00,Spenceetal06,Nornbergetal06}, in related numerical simulations \cite{Baylissetal07,Schekochihinetal07}, and at the solar surface \cite{BrandenburgSubramanian05,SchuesslerVoegler08}.
It is frequently suggested that this process is distinct from fluctuation dynamo. For example, Schekochihin et al. wrote: ``Given a multiscale observed or simulated magnetic field, one does not generally have enough information (or understanding), to tell whether it has originated from the fluctuation dynamo, from the mean-field dynamo plus the turbulent induction or from some combination of the two.'' \cite{Schekochihinetal07} However, if a mean magnetic field can provide a seed for fluctuation dynamo, is there really any meaningful physical distinction between the two mechanisms? Or if magnetic induction is defined instead as a process that occurs only in the absence of fluctuation dynamo, as it sometimes is, then how can one regard the two processes as acting in ``combination''? Clearly there is some conceptual confusion in the literature about the proper definition of magnetic induction. We shall use our exact results in the KK model to discuss these issues and, also, to evaluate some of the proposed theories of the induced magnetic spectrum. The contents of this paper are as follows. In section \ref{back} we provide more detailed background to our work and a more formal statement of the problems to be studied. In section \ref{dynamo} we study the Lagrangian mechanism of the fluctuation dynamo at zero Prandtl number and the quantitative role of stochastic flux-freezing. The first subsection \ref{analysis} presents exact mathematical analysis and the second subsection \ref{numerical} gives numerical results. The next section \ref{induction} discusses the problem of magnetic induction using our analytical results for the KK model. We draw our final conclusions in section \ref{conclusions}. Appendices \ref{BCR} and \ref{numerics} present some technical material. \section{Background and Problem Statement}\label{back} We begin this section with a summary of the key results of Paper I. A ``dynamo order-parameter'' was introduced there with purely geometric significance.
Let ${\bf x}({\bf a},t)$ be a stochastic Lagrangian trajectory, satisfying \begin{equation} d{\bf x} = {\bf u}({\bf x},t)dt + \sqrt{2\eta}\,d{\bf W}(t),\,\,\,\,\,\,{\bf x}(t_0)={\bf a}. \label{x-def} \end{equation} We shall assume in this section that the advecting velocity field is incompressible, $\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf u}=0,$ as in I. ${\bf W}(t)$ in (\ref{x-def}) is a vector Brownian motion and $\eta$ is the resistivity (or, in fact, the magnetic diffusivity). Let $\hbox{\boldmath $\ell$}_k({\bf a},t)$ be a passive vector starting as a unit vector $\hat{{\bf e}}_k$ at space point ${\bf a}$ and time $t_0$ and subsequently transported along ${\bf x}({\bf a},t),$ stretched and rotated by the velocity-gradient. That is, \begin{equation} \hbox{\boldmath $\ell$}_k({\bf a},t) = \hat{{\bf e}}_k\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}_a{\bf x}({\bf a},t). \label{bell-def} \end{equation} Then I introduced the quantity \begin{equation} \mathcal{R}_{k\ell}({\bf r},t) =\langle\overline{\hbox{\boldmath $\ell$}_k({\bf a},t)\hbox{\boldmath $\cdot$}\hbox{\boldmath $\ell$}_\ell'({\bf a}',t) \delta^3({\bf x}({\bf a},t)-{\bf x}'({\bf a}',t))}\rangle, \label{R-def} \end{equation} with ${\bf r}={\bf a}-{\bf a}',$ where $\langle\cdot\rangle$ denotes average over the turbulent velocity realizations, $\overline{(\cdot)}$ denotes average over the random Brownian motions and $\prime$ indicates a second, independent realization of the latter. 
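The stochastic flow (\ref{x-def}) is straightforward to sample numerically. The following is a minimal Euler-Maruyama sketch (our construction, not from the original analysis); the steady cellular velocity field is an arbitrary smooth stand-in for the turbulent ensemble, and all names are illustrative:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the stochastic Lagrangian flow
#   dx = u(x,t) dt + sqrt(2*eta) dW,
# eq. (x-def).  The velocity field passed in is an illustrative stand-in.

def trajectory(u, a, t0, t1, eta, dt, rng):
    """One noise realization of x(a, t) with x(t0) = a."""
    x = np.asarray(a, dtype=float).copy()
    n_steps = int(round((t1 - t0) / dt))
    t = t0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + u(x, t) * dt + np.sqrt(2.0 * eta) * dW
        t += dt
    return x

# example: a steady cellular flow (an arbitrary smooth choice)
cellular = lambda x, t: np.array([np.sin(x[1]), np.sin(x[2]), np.sin(x[0])])
```

Averaging an observable over many independent realizations of ${\bf W}(t)$ at fixed ${\bf u}$ implements the Brownian average $\overline{(\cdot)}$ appearing in (\ref{R-def}), while a further average over velocity realizations implements $\langle\cdot\rangle$.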
The quantity (\ref{R-def}) measures not only the magnitude but also the angular correlation of independent line-vectors that start as unit vectors $\hbox{\boldmath $\ell$}_k({\bf a},t_0)={\bf e}_k,\, \hbox{\boldmath $\ell$}_\ell'({\bf a}',t_0)={\bf e}_\ell$ at points ${\bf a},{\bf a}'$ separated by displacement ${\bf r},$ which end at the {\it same} point at time $t.$ Now let ${\bf B}$ be a magnetic field kinematically transported by the random velocity field ${\bf u}$ and diffused by resistivity $\eta$: \begin{equation} \partial_t{\bf B}+({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf B}=({\bf B}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf u} +\eta\triangle{\bf B}, \,\,\,\,\,\,{\bf B}(t_0)={\bf B}_{(0)}. \label{B-eq1} \end{equation} As shown in I, the mean energy in this magnetic field can be expressed as \begin{equation} \langle B^2(t)\rangle = \int d^3r \, \mathcal{R}_{k\ell}({\bf r},t) \langle B^k_{(0)}({\bf r}) B^\ell_{(0)}(\hbox{\boldmath $0$})\rangle. \label{Ben-R-rel} \end{equation} It has been assumed in deriving (\ref{Ben-R-rel}) that initial conditions on the magnetic field are statistically independent of the velocity and that both quantities are statistically homogeneous (space translation-invariant). This formula makes clear the close relation of dynamo action to the line-correlation (\ref{R-def}), which grows exponentially rapidly precisely in the dynamo regime. This connection was verified in I by an exact calculation of the line-correlation (\ref{R-def}) in the soluble Kazantsev-Kraichnan (KK) model of turbulent dynamo \cite{Kazantsev68,KraichnanNagarajan67,Kraichnan68,Vergassola96, RogachevskiiKleeorin97,Vincenzi02,BoldyrevCattaneo04, Boldyrevetal05,Celanietal06,ArponenHorvai07}.
This model replaces the true turbulent velocity by a Gaussian random field which is white-noise in time: \begin{equation} \langle u^i({\bf x},t) u^j({\bf x}',t')\rangle=\kappa^{ij}({\bf r})\delta(t-t') \label{u-corr} \end{equation} with ${\bf r}={\bf x}-{\bf x}'.$ In this model, the magnetic correlation function \begin{equation} \mathcal{C}^{ij}({\bf r},t)=\langle B^i({\bf r},t) B^j(\hbox{\boldmath $0$},t)\rangle \label{C-def} \end{equation} evolves according to a closed, linear equation \begin{equation} \partial_t\mathcal{C}^{ij}= \mathcal{M}^{ij}_{k\ell}\mathcal{C}^{k\ell}, \label{C-eq} \end{equation} where $\mathcal{M}$ is a singular diffusion operator. If $\kappa^{ij}({\bf r})\sim r^\xi$ for small $r,$ with rugosity exponent $0<\xi<2,$ then the velocity realizations are rough in space and provide a qualitatively good model of turbulent advection. It was shown in I that $\mathcal{R}_{k\ell}({\bf r},t)$ in the KK model is the solution of the adjoint equation \begin{equation} \partial_t\mathcal{R}_{k\ell}= \mathcal{M}_{k\ell}^{ij\,*}\mathcal{R}_{ij} \label{R-eq} \end{equation} with initial condition $\mathcal{R}_{k\ell}({\bf r},t_0)=\delta_{k\ell} \delta^3({\bf r}).$ This fact was exploited to show that $\mathcal{R}_{k\ell}({\bf r},t)$ in the KK model at zero magnetic Prandtl number, as expected, grows exponentially in the dynamo regime ($\xi>1$) but has only power-law time-dependence in the non-dynamo regime ($\xi<1$). The study in I left open, however, several important questions. First, what range of $r=|{\bf r}|$ substantially contributes to the correlation $\mathcal{R}$ and to the space-integral in (\ref{Ben-R-rel})? It is known in the KK model that magnetic field lines arrive to a point from a spatial region with radius $L(t)\sim (t-t_0)^{1/(2-\xi)},$ in the root-mean-square sense.
This is also expected for kinematic dynamo in a real turbulent fluid, where $\xi\doteq 4/3$ and $L(t)\sim (t-t_0)^{3/2}$ corresponds to turbulent Richardson diffusion of the field lines. However, not all of the field lines in this large volume should be expected to contribute equally to dynamo action. Field vectors starting from points separated by distances $r$ much larger than the resistive length-scale may arrive at the same point with small angular correlations and thus contribute little to net magnetic field growth. It follows from the analysis of I for the KK model in its dynamo regime that, for times $t$ long compared to the resistive time-scale, \begin{equation} \mathcal{R}_{k\ell}({\bf r},t) \sim e^{-\lambda t} \mathcal{\widetilde{G}}_{k\ell}({\bf r}), \,\,\,\,\,\,\,\,\,\, (\lambda<0) \label{R-longT} \end{equation} where $-\lambda$ is the largest (positive) eigenvalue of $\mathcal{M}^*$ and $\mathcal{\widetilde{G}}$ is the corresponding (right) eigenmode. Thus, questions about the spatial structure of $\mathcal{R}$ reduce, in the exponential growth regime at sufficiently long times, to the corresponding questions about the right eigenmode $\mathcal{\widetilde{G}}$ of $\mathcal{M}^*$ (or, equivalently, left eigenmode of $\mathcal{M}$). It is one of the goals of the present work to calculate this eigenmode and thereby determine the degree of correlation of line-vectors arriving from various initial separations. A second important issue left unresolved in I was the response of a spatially uniform initial magnetic field to turbulent advection and stretching. 
In the case of a uniform initial field, formula (\ref{Ben-R-rel}) simplifies to \begin{equation} \langle B^2(t)\rangle = \mathcal{R}_{k\ell}(t)\langle B^k_{(0)}B^\ell_{(0)}\rangle, \label{Ben-R-hom} \end{equation} with \begin{equation} \mathcal{R}_{k\ell}(t)\equiv \int d^3r\,\mathcal{R}_{k\ell}({\bf r},t) = \langle\hbox{\boldmath $\ell$}_k(t)\hbox{\boldmath $\cdot$}\hbox{\boldmath $\ell$}_\ell'(t)\rangle_0. \label{Rint-def} \end{equation} The quantity (\ref{Rint-def}) measures the correlation of two independent line-vectors that arrive at the same point, regardless of their initial separation. The result (\ref{R-longT}) does not imply exponential growth of this integrated correlation, however, unless one can show that \begin{equation} \int d^3r \, \mathcal{\widetilde{G}}_{k\ell}({\bf r})\neq 0. \end{equation} Below we shall demonstrate this fact. This result is closely connected with another important physical problem, the creation of small-scale magnetic fluctuations by turbulent ``magnetic induction'' of a non-zero mean-field \cite{Odieretal98,Bourgoinetal02,Peffleyetal00, Spenceetal06,Nornbergetal06,Baylissetal07,Schekochihinetal07}. Indeed, the formula (\ref{Ben-R-hom}) applies directly to this situation, which corresponds to the special case $\langle B^k_{(0)} B^\ell_{(0)}\rangle =\langle B^k_{(0)}\rangle \langle B^\ell_{(0)}\rangle.$ This observation already makes clear that there is no essential physical distinction between ``fluctuation dynamo'' and ``magnetic induction'', which simply correspond to different choices of initial magnetic field (random vs. deterministic). Other definitions of ``magnetic induction'' are offered in the literature. If the fluctuation dynamo fails to operate for some reason (e.g. if the magnetic Reynolds number is too small), then the term ``magnetic induction'' is sometimes used for the process by which small-scale fluctuations are generated from the mean magnetic field. 
If a mean-field dynamo still operates and creates an exponentially growing large-scale magnetic field, then these ``parasitic'' small-scale fluctuations may also be exponentially increasing. This provides another mechanism to produce exponential growth of small-scale magnetic fields in the absence of a fluctuation dynamo. If instead there is no mean-field dynamo, then such growth of small-scale fluctuations by ``magnetic induction'' is sub-exponential. We shall discuss below the non-dynamo regime $(\xi<1)$ of the KK model as a simple example of such ``magnetic induction'' and use it to evaluate some physical theories that have been proposed for the phenomenon. We mention finally one additional motivation to study the left eigenmodes of $\mathcal{M}.$ As pointed out in I, these have a physical interpretation as correlators of the magnetic vector potential ${\bf A}.$ More precisely, the correlation function defined by \begin{equation} \mathcal{G}_{k\ell}({\bf r},t)=\langle A_k({\bf r},t) A_\ell(\hbox{\boldmath $0$},t)\rangle, \label{G-def} \end{equation} in the KK model satisfies \begin{equation} \partial_t\mathcal{G}_{k\ell}= \mathcal{M}_{k\ell}^{ij\,*}\mathcal{G}_{ij}, \label{G-eq} \end{equation} with diffusion operator adjoint to that in eq.(\ref{C-eq}). The underlying reason for this fact is the conservation of magnetic helicity $H=\int d^3r\, {\bf A}\hbox{\boldmath $\cdot$}{\bf B}$ for the ideal induction equation. If the latter is written as \begin{equation} \partial_t {\bf B} = \mathcal{L}{\bf B}, \label{B-Leq} \end{equation} for a linear operator $\mathcal{L}$ (depending on the velocity ${\bf u}$), then helicity conservation is equivalent to \begin{equation} \partial_t{\bf A} = -\mathcal{L}^*{\bf A}, \label{A-Leq} \end{equation} up to addition of a gradient term.
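To make the stated equivalence explicit, note that eqs. (\ref{B-Leq}) and (\ref{A-Leq}) together give \begin{eqnarray} \frac{dH}{dt} &=& \int d^3r\,\left[(\partial_t{\bf A})\hbox{\boldmath $\cdot$}{\bf B} + {\bf A}\hbox{\boldmath $\cdot$}(\partial_t{\bf B})\right] \nonumber \cr &=& \int d^3r\,\left[-(\mathcal{L}^*{\bf A})\hbox{\boldmath $\cdot$}{\bf B} + {\bf A}\hbox{\boldmath $\cdot$}(\mathcal{L}{\bf B})\right] = 0, \nonumber \end{eqnarray} since $\int d^3r\,(\mathcal{L}^*{\bf A})\hbox{\boldmath $\cdot$}{\bf B} = \int d^3r\,{\bf A}\hbox{\boldmath $\cdot$}(\mathcal{L}{\bf B})$ by definition of the adjoint, and a gradient term added to ${\bf A}$ contributes nothing for $\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf B}=0.$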
Different operators $\mathcal{L}$ are possible, which coincide in their action on solenoidal magnetic fields ($\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf B}=0$), but whose adjoints $\mathcal{L}^*$ correspond to different gauge choices for ${\bf A}.$ Equation (\ref{A-Leq}) directly implies (\ref{G-eq}) in the KK model. The addition of resistivity to eqs. (\ref{B-Leq}), (\ref{A-Leq}) does not change this result, since the additional Laplacian operator is self-adjoint. \section{Fluctuation Dynamo}\label{dynamo} We study in this section the Lagrangian mechanism of the fluctuation dynamo in the KK model for three space-dimensions (3D) and for homogeneous and isotropic statistics of both velocity and magnetic fields. We begin in the first subsection \ref{analysis} with exact mathematical analysis of the problem. Most of this discussion applies to a fairly general situation, allowing for compressibility and helicity of the velocity field. We then specialize to the case of an incompressible, non-helical advecting velocity, with a power-law space-correlation corresponding to an infinite-Reynolds-number inertial range for the velocity field. In the second subsection \ref{numerical} we present numerical results for this latter case and discuss their physical interpretation. The reader who is mostly interested in the final results, and not their detailed derivation, may skip directly to that section of the paper. \subsection{Mathematical Analysis}\label{analysis} We discuss two natural choices for the linear operator $\mathcal{L}$ in the ideal induction eq.(\ref{B-Leq}) and the corresponding adjoint operators $\mathcal{L}^*$ and gauge choices for the vector potential, successively in the following two subsections.
\subsubsection{Gauge I}\label{analytic-I} One choice to write the induction equation is as \begin{equation} \partial_t{\bf B}=\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}({\bf u}\hbox{\boldmath $\times$}{\bf B}-\eta\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}{\bf B}). \label{B-eq-I} \end{equation} Note that in the ideal equation with $\eta=0,$ $\mathcal{L}_{\bf u}^{(2)}{\bf B}= -\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}({\bf u}\hbox{\boldmath $\times$}{\bf B})$ is the Lie-derivative acting on ${\bf B}$ as a differential 2-form. See \cite{Larsson03}, eq.(2.3) for $\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf B}=0.$ Thus, this form of the induction equation is most directly connected with the conservation of magnetic flux through advected 2-dimensional surfaces. The corresponding adjoint equation for the vector potential is \begin{eqnarray} \partial_t{\bf A} &=& {\bf u}\hbox{\boldmath $\times$}(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}{\bf A})-\eta\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}{\bf A}) \cr &=&-({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf A} + (\hbox{\boldmath $\nabla$}{\bf A}){\bf u} \cr &&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +\eta[\triangle{\bf A}-\hbox{\boldmath $\nabla$}(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf A})]. \label{A-eq-I} \end{eqnarray} In the gauge-choice implied by this equation, pure-gauge fields ${\bf A}=\hbox{\boldmath $\nabla$}\lambda$ satisfy $\hbox{\boldmath $\nabla$}(\partial_t\lambda)=0$ and are, thus, time-independent up to possible spatially constant terms. Boldyrev, Cattaneo and Rosner in \cite{Boldyrevetal05} developed an elegant formulation of the KK model based on the eq.(\ref{B-eq-I}) for the case of homogeneous and isotropic statistics. 
We review their formulation---hereafter referred to as the BCR formalism---in Appendix \ref{BCR}, paying special attention to issues of gauge-invariance. Readers not previously familiar with the work of \cite{Boldyrevetal05} may wish to review that appendix before proceeding. One essential observation of BCR is that the evolution operator for magnetic correlations {\it factorizes} as ${\cal M}={\cal DJ}.$ More precisely, in the KK model starting from eq.(\ref{B-eq-I}) \begin{equation} \partial_t \mathcal{C}^{ij}= \mathcal{M}^{ij}_{pq}\mathcal{C}^{pq} =\mathcal{D}^{ij,k\ell}\mathcal{J}_{k\ell,pq}\mathcal{C}^{pq} \label{C-eq-long} \end{equation} where \begin{equation} \mathcal{ D}^{ij,k\ell}=\epsilon^{ikp}\epsilon^{j\ell q}\partial_p\partial_q \label{D-def} \end{equation} is the non-positive, self-adjoint differential operator which relates the magnetic correlation to the vector-potential correlation and \begin{equation} \mathcal{J}_{ij,k\ell}=\epsilon_{ikp}\epsilon_{j\ell q}T^{pq} \label{J-def} \end{equation} is the self-adjoint multiplication operator with $ T^{pq}({\bf r})= 2\eta\delta^{pq} + \kappa^{pq}(0)-\kappa^{pq}({\bf r}), $ where $\kappa^{pq}({\bf r})$ is the spatial velocity-correlation in eq.(\ref{u-corr}). Explicitly, \begin{eqnarray} \mathcal{M}^{ij}_{pq}\mathcal{C}^{pq} &=& \partial_r\partial_s\left(T^{rs}\mathcal{C}^{ij}\right) -\partial_p\partial_s\left(T^{is}\mathcal{C}^{pj}\right) \cr && \,\,\,\,\,-\partial_r\partial_q\left(T^{rj}\mathcal{C}^{iq}\right) +\partial_p\partial_q\left(T^{ij}\mathcal{C}^{pq}\right). 
\label{M-def-I} \end{eqnarray} One may further write equations in the KK model for the joint correlations of magnetic and vector-potentials $$ \Psi^{i\,\,\,\,}_{\,\,\,\,k}({\bf r},t) = \langle B^i({\bf r},t)A_k(\hbox{\boldmath $0$},t)\rangle, \,\,\Psi_{k\,\,\,\,}^{\,\,\,\,i}({\bf r},t) = \langle A_k({\bf r},t)B^i(\hbox{\boldmath $0$},t)\rangle$$ using the operator $\mathcal{R}^{i,k} = \epsilon^{i p k}\partial_p,$ in terms of which the operator $\mathcal{D}$ itself factorizes as $\mathcal{ D}^{ij,k\ell} =\mathcal{R}^{i,k}\mathcal{R}^{j,\ell}.$ Then \begin{eqnarray} \partial_t \Psi^{i\,\,\,\,}_{\,\,\,\,n} &=& -(\mathcal{R}^{i,m})^*\mathcal{J}_{mn,pq} \mathcal{R}^{q,\ell}\Psi^{p\,\,\,\,}_{\,\,\,\,\ell} \cr \partial_t \Psi_{m\,\,\,\,}^{\,\,\,\,j} &=& -(\mathcal{R}^{j,n})^*\mathcal{J}_{mn,pq} \mathcal{R}^{p,k}\Psi_{k\,\,\,\,}^{\,\,\,\,q} \label{Psi-eq} \end{eqnarray} Finally, the vector-potential correlation in (\ref{G-def}) obeys \begin{equation} \partial_t \mathcal{G}_{k\ell}= (\mathcal{M}^*)_{k\ell}^{pq}\mathcal{G}_{pq} =\mathcal{J}_{k\ell,rs}\mathcal{D}^{rs,pq}\mathcal{G}_{pq} \label{G-eq-I} \end{equation} with $\mathcal{M}^*=\mathcal{JD}.$ Explicitly, \begin{eqnarray} (\mathcal{M}^*)_{k\ell}^{pq}\mathcal{G}_{pq} &=& T^{rs}\partial_r\partial_s\mathcal{G}_{k\ell} -T^{ps}\partial_k\partial_s\mathcal{G}_{p\ell} \cr && \,\,\,\,\,-T^{rq}\partial_r\partial_\ell\mathcal{G}_{kq} +T^{pq}\partial_k\partial_\ell\mathcal{G}_{pq}. \label{Mstar-def-I} \end{eqnarray} BCR mainly considered the isotropic sector of the KK model, invariant under proper rotations but not necessarily space reflections. In that case, with $\hat{{\bf r}}={\bf r}/r,$ $$ \kappa^{ij}({\bf r})=\kappa_L(r) \hat{r}^i\hat{r}^j +\kappa_N(r)(\delta^{ij}-\hat{r}^i\hat{r}^j)+ \kappa_H(r) \epsilon^{ijk} \hat{r}_k, $$ where $\kappa_L,\,\,\kappa_N,$ and $\kappa_H=r g$ are conventional longitudinal, transverse and helical correlation functions, with similar representations of $\mathcal{C},\mathcal{G},$ etc. 
However, ref.\cite{Boldyrevetal05} introduced instead a special basis to expand tensor correlation functions, as $\mathcal{C}^{ij}=\sum_{a=1}^3 C_a \xi^{ij}_a$ with $$ \xi^{ij}_1 = {{1}\over{\sqrt{2}r}}(\delta^{ij}-\hat{r}^i\hat{r}^j),\,\,\,\, \xi^{ij}_2 = {{1}\over{r}}\hat{r}^i\hat{r}^j,\,\,\,\, \xi^{ij}_3 = {{1}\over{\sqrt{2}r}}\epsilon^{ijk}\hat{r}^k, $$ so that $$ C_1=\sqrt{2}rC_N,\,\,\,\,C_2=rC_L,\,\,\,\,C_3=\sqrt{2}r C_H. $$ The special feature of this basis is that the Hilbert-space inner-product defined by $$ \langle \mathcal{G,C}\rangle = \int d^3 r\,\, \mathcal{G}_{ij}({\bf r}) \mathcal{C}^{ij}({\bf r}) $$ simplifies in the isotropic sector to $$ \langle G,C\rangle =4\pi \int_0^\infty dr\,[G_1C_1+G_2C_2+G_3C_3]. $$ We hereafter use the notation $\mathcal{C,G}$ etc. in the BCR formalism for the isotropic sector to represent the column vectors $$ \mathcal{C}=\left(\begin{array}{c} C_1\cr C_2\cr C_3 \end{array}\right). $$ In this representation, the operators ${\cal J}$ and ${\cal D}$ take the simple forms $$ \mathcal{J}=\left(\begin{array}{ccc} b & a & 0\cr a & 0 & c \cr 0 & c & b \end{array}\right) ,\,\,\,\,\, \mathcal{D}=\left(\begin{array}{ccc} \partial_r^2 & -\partial_r{{\sqrt{2}}\over{r}} & 0\cr {{\sqrt{2}}\over{r}}\partial_r & -{{2}\over{r^2}} & 0 \cr 0 & 0 & {{1}\over{r^2}}\partial_r r^4\partial_r{{1}\over{r^2}} \end{array}\right), $$ with $$ a(r)=\sqrt{2}[2\eta+\kappa_N(0)-\kappa_N(r)]$$ $$ b(r)=2\eta+\kappa_L(0)-\kappa_L(r)$$ $$ c(r)=\sqrt{2}[g(0)-g(r)]r. $$ A crucial observation of BCR is that $\mathcal{D}$ factorizes as $\mathcal{D}=-\mathcal{RR}^*$ with $$ \mathcal{R}=\left(\begin{array}{ccc} 0 & \partial_r & 0\cr 0 & {{\sqrt{2}}\over{r}} & 0 \cr 0 & 0 & -{{1}\over{r^2}}\partial_r r^2 \end{array}\right), \,\,\,\,\,\, \mathcal{R}^*=\left(\begin{array}{ccc} 0 & 0 & 0\cr -\partial_r & {{\sqrt{2}}\over{r}} & 0 \cr 0 & 0 & r^2\partial_r{{1}\over{r^2}} \end{array}\right). 
$$ Another important fact is that ${\rm Ran}(\mathcal{R}),$ the range of the operator $\mathcal{R},$ consists of the solenoidal functions $\mathcal{C}^{ij}$ that satisfy $\partial_i\mathcal{C}^{ij}= \partial_j \mathcal{C}^{ij}=0.$ Thus, all solenoidal solutions of the equation $\partial_t \mathcal{C}=\mathcal{MC}=-\mathcal{RR}^*\mathcal{JC}$ can be obtained by solving \begin{equation} \partial_t W= - \mathcal{R}^*\mathcal{JR} W \label{W-eq} \end{equation} with a self-adjoint operator $\mathcal{S}=\mathcal{R}^*\mathcal{JR}$, and then setting $$ \mathcal{C}=\mathcal{R}W=\left(\begin{array}{c} \partial_rW_2 \cr {{\sqrt{2}}\over{r}}W_2 \cr -{{1}\over{r^2}}\partial_r( r^2W_3) \end{array}\right). $$ Comparison of eq.(\ref{W-eq}) in the isotropic sector with the general eq.(\ref{Psi-eq}) suggests that $W$ is a simple linear transformation of the magnetic-field, vector-potential correlation $\Psi.$ For this result, and for the derivation of all the preceding statements, see Appendix \ref{BCR}. We now discuss the solutions of the adjoint problem $\partial_t \mathcal{G}=\mathcal{M}^* \mathcal{G}=-\mathcal{JRR}^*\mathcal{G}.$ Its solutions are likewise related to solutions of (\ref{W-eq}) by the equation $$W=\mathcal{R}^*\mathcal{G}=\left(\begin{array}{c} 0\cr -\partial_rG_1 +{{\sqrt{2}}\over{r}}G_2\cr r^2\partial_r\left({{G_3}\over{r^2}}\right) \end{array}\right). $$ This relation is many-to-one. Since ${\rm Ker}(\mathcal{R}^*)= \left[{\rm Ran}(\mathcal{R})\right]^\perp,$ the kernel of $\mathcal{R}^*$ consists of the correlations of gradient type: \begin{equation} G_1(r) =\sqrt{2}\Lambda'(r),\,\,\,\,G_2(r) = r\Lambda''(r), \label{gradient} \end{equation} for a scalar correlation function $\Lambda.$ Thus, any solutions $\mathcal{G}$ of the adjoint equation that differ by such a gradient solution are mapped to the same solution $W$ of (\ref{W-eq}). 
This freedom just corresponds to gauge-invariance of $W.$ We next employ this formalism to discuss the right eigenfunctions $\mathcal{C}_\alpha$ and left eigenfunctions $\mathcal{G}_\alpha$ of $\mathcal{M},$ which satisfy $$ \mathcal{MC}_\alpha=-\lambda_\alpha\mathcal{C}_\alpha,\,\,\,\,\, \mathcal{M}^*\mathcal{G}_\alpha=-\lambda_\alpha\mathcal{G}_\alpha,$$ respectively. (Of course, for the continuous spectrum of the operators these are generalized eigenfunctions.) Let us suppose that one has obtained the eigenfunction $W_\alpha$ of the self-adjoint operator $\mathcal{S}=\mathcal{R}^*\mathcal{JR}$: \begin{equation} \mathcal{S}W_\alpha = \mathcal{R}^*\mathcal{JR}W_\alpha = \lambda_\alpha W_\alpha. \label{S-eig} \end{equation} Then it is not hard to see that \begin{equation} \mathcal{C}_\alpha =\mathcal{R}W_\alpha,\,\,\,\, \mathcal{G}_\alpha={{1}\over{\lambda_\alpha}}\mathcal{JR}W_\alpha. \label{eig-funs} \end{equation} The first result is obvious. To obtain the second, note that, with $\mathcal{G}_\alpha$ as defined above, $$ \mathcal{R}^* \mathcal{G}_\alpha={{1}\over{\lambda_\alpha}} \mathcal{S}W_\alpha=W_\alpha.$$ If this result is used to eliminate $W_\alpha$ in the definition of $\mathcal{G}_\alpha,$ then the statement follows. An interesting consequence is that right and left eigenfunctions are related very simply by $$ \mathcal{G}_\alpha={{1}\over{\lambda_\alpha}}\mathcal{J}\mathcal{C}_\alpha. $$ Note also that $\langle\mathcal{C}_\beta,\mathcal{G}_\alpha\rangle =\langle \mathcal{R}W_\beta, {{1}\over{\lambda_\alpha}}\mathcal{JR}W_\alpha\rangle ={{1}\over{\lambda_\alpha}}\langle W_\beta, \mathcal{S}W_\alpha\rangle=\langle W_\beta,W_\alpha\rangle=\delta_{\alpha,\beta}.$ Thus, the left and right eigenfunctions are biorthogonal. Our discussion so far has been fairly general, permitting a compressible velocity field with reflection-non-symmetric (helical) statistics. However, we now specialize. 
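The matrix identities above admit a mechanical check. The following sketch in Python with sympy (the operator implementations and names are ours, purely illustrative) verifies the factorization $\mathcal{D}=-\mathcal{RR}^*,$ the gradient-type kernel (\ref{gradient}) of $\mathcal{R}^*,$ and the solenoidality of correlations of the form $\mathcal{C}=\mathcal{R}W$ through the standard isotropic relation $C_N=C_L+{{r}\over{2}}\partial_rC_L$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f1, f2, f3 = [sp.Function(n)(r) for n in ('f1', 'f2', 'f3')]
F = sp.Matrix([f1, f2, f3])

def R_op(v):
    # the matrix operator R of the isotropic sector
    return sp.Matrix([sp.diff(v[1], r),
                      sp.sqrt(2) * v[1] / r,
                      -sp.diff(r**2 * v[2], r) / r**2])

def Rstar_op(v):
    # its adjoint R^*
    return sp.Matrix([0,
                      -sp.diff(v[0], r) + sp.sqrt(2) * v[1] / r,
                      r**2 * sp.diff(v[2] / r**2, r)])

def D_op(v):
    # the operator D of the isotropic sector
    return sp.Matrix([sp.diff(v[0], r, 2) - sp.diff(sp.sqrt(2) * v[1] / r, r),
                      sp.sqrt(2) * sp.diff(v[0], r) / r - 2 * v[1] / r**2,
                      sp.diff(r**4 * sp.diff(v[2] / r**2, r), r) / r**2])

# (i) the factorization D = -R R^*
print(sp.simplify(-R_op(Rstar_op(F)) - D_op(F)))        # zero vector

# (ii) gradient-type correlations (G1, G2) = (sqrt(2) L', r L'') lie in Ker(R^*)
L = sp.Function('Lambda')(r)
grad = sp.Matrix([sp.sqrt(2) * sp.diff(L, r), r * sp.diff(L, r, 2), 0])
print(sp.simplify(Rstar_op(grad)))                      # zero vector

# (iii) C = R W is solenoidal: C_N = C_L + (r/2) dC_L/dr
W2 = sp.Function('W2')(r)
C = R_op(sp.Matrix([0, W2, 0]))
C_L, C_N = C[1] / r, C[0] / (sp.sqrt(2) * r)
print(sp.simplify(C_N - C_L - r * sp.diff(C_L, r) / 2)) # 0
```

All three residuals vanish identically, for arbitrary functions of $r.$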
If space-reflection symmetric statistics are assumed, then $W_3=C_3=G_3=0.$ Furthermore, $c(r)=0$ in the operator ${\cal J}.$ Thus, the eigenvalue problem (\ref{S-eig}) reduces to a single equation for $W_2$: $$ -\partial_r(b(r)\partial_rW_2) + {{\sqrt{2}}\over{r^2}}[a(r)-ra'(r)]W_2=\lambda W_2. $$ This is a standard Sturm-Liouville eigenvalue problem. The relations (\ref{eig-funs}) yield \begin{equation} C_1=\partial_r W_2,\,\,\,\, C_2 = {{\sqrt{2}}\over{r}} W_2 \label{R-eigfun} \end{equation} (true even in the reflection non-symmetric case) and \begin{equation} G_1={{\sqrt{2}a(r)}\over{\lambda r}} W_2 + {{b(r)}\over{\lambda}}\partial_r W_2, \,\,\,\, G_2 = {{a(r)}\over{\lambda}}\partial_r W_2. \label{L-eigfun} \end{equation} The problem further simplifies if one assumes an incompressible flow, implying the relation $a=\sqrt{2}\left(b+{{1}\over{2}}rb'\right).$ In that case the Sturm-Liouville problem becomes \begin{equation} -\partial_r(p(r)\partial_rW_2) + q(r) W_2=\lambda W_2. \label{SL-eq} \end{equation} with $p(r)=b(r)$ and $$ q(r)= {{2b(r)-2rb'(r)-r^2b''(r)}\over{r^2}}= -\partial_r\left[{{1}\over{r^2}}\partial_r\left(r^2 b(r)\right)\right]. $$ {}From general Sturm-Liouville theory, the spectrum is bounded below and may consist of both point spectrum and continuous spectrum, depending upon the choice of the function $b(r).$ Dynamo effect corresponds to the lowest eigenvalue being negative, $\lambda<0,$ with a ground state, square-integrable wavefunction $\int_0^\infty dr\,W_2^2(r)<\infty.$ The ground-state eigenfunction $W_2$, if it exists at all, must be positive for all $r>0$ by the Sturm oscillation theorem. \subsubsection{Gauge II} \label{analytic-II} There is another natural choice for the linear operator $\mathcal{L}$ in the ideal induction eq.(\ref{B-Leq}). 
This choice corresponds to writing that equation as \begin{equation} \partial_t {\bf B} = -({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf B}+({\bf B}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf u} -(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf u}){\bf B} +\eta\triangle{\bf B}. \label{B-eq-II} \end{equation} If ${\bf B}$ is a smooth solution of eq. (\ref{B-eq-II}) for $\eta=0$ and if $\rho$ is the mass density solving the continuity equation, $\partial_t\rho+\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}(\rho{\bf u})=0,$ then, as is well-known, $\hbox{\boldmath $\ell$}={\bf B}/\rho$ is a ``frozen-in'' field. That is, $$ \frac{\partial}{\partial t}\hbox{\boldmath $\ell$}=-({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\hbox{\boldmath $\ell$}+(\hbox{\boldmath $\ell$}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf u}. $$ The righthand side of this equation is just $\mathcal{L}^{\bf u}_{(1)}\hbox{\boldmath $\ell$},$ the Lie-derivative operator acting upon vectors (rank-1 contravariant tensors) \cite{Larsson03}. These facts explain the interest of the eq. (\ref{B-eq-II}), since intuitive geometric notions of material line-vector motion can be exploited to understand magnetic dynamo effect. The results in I all depended upon using this form of the induction equation. The equation for the vector-potential adjoint to (\ref{B-eq-II}) is \begin{equation} \partial_t \widetilde{{\bf A}} =-({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\widetilde{{\bf A}} -(\hbox{\boldmath $\nabla$}{\bf u})\widetilde{{\bf A}} +\eta\triangle\widetilde{{\bf A}}. \label{A-eq-II} \end{equation} We use the tilde to distinguish the vector potential in this gauge from that resulting from (\ref{A-eq-I}). 
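For completeness, the frozen-in property of $\hbox{\boldmath $\ell$}={\bf B}/\rho$ quoted above follows in two lines from eq.(\ref{B-eq-II}) at $\eta=0$ together with the continuity equation:

```latex
\begin{eqnarray*}
\partial_t\left({{{\bf B}}\over{\rho}}\right)
&=& {{1}\over{\rho}}\,\partial_t{\bf B} - {{{\bf B}}\over{\rho^2}}\,\partial_t\rho \cr
&=& {{1}\over{\rho}}\left[-({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf B}
+({\bf B}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf u}
-(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf u}){\bf B}\right]
+{{{\bf B}}\over{\rho^2}}\,\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}(\rho{\bf u}) \cr
&=& -({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\left({{{\bf B}}\over{\rho}}\right)
+\left({{{\bf B}}\over{\rho}}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}\right){\bf u},
\end{eqnarray*}
```

since $\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}(\rho{\bf u})=\rho\,\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}{\bf u}+({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\rho$ and $({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})({\bf B}/\rho)=({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf B}/\rho-({\bf B}/\rho^2)({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\rho,$ so that the density-gradient and dilatation terms cancel in pairs.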
There is also a geometric significance to eq.(\ref{A-eq-II}), since $\mathcal{L}_{\bf u}^{(1)}\widetilde{{\bf A}}=({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$})\widetilde{{\bf A}}+(\hbox{\boldmath $\nabla$}{\bf u})\widetilde{{\bf A}}$ is the Lie-derivative acting on $\widetilde{{\bf A}}$ as a differential 1-form. This is related to the fact that ``frozen-in'' magnetic flux through surfaces (2-cells) can be written as the line-integral of the vector potential $\widetilde{{\bf A}}$ (or ${\bf A}$) around closed loops (1-cycles). In the gauge choice corresponding to (\ref{A-eq-II}) the pure-gauge fields $\widetilde{{\bf A}}=\hbox{\boldmath $\nabla$}\widetilde{\lambda}$ have zero Lagrangian time-derivative, $D_t\widetilde{\lambda} =0,$ up to possible spatial constants. Of course, the two gauge choices ${\bf A}$ and $\widetilde{{\bf A}}$ must be related by a suitable gauge transformation $$ \widetilde{{\bf A}}={\bf A} - \hbox{\boldmath $\nabla$}\lambda. $$ In fact, eq.(\ref{A-eq-II}) can be rewritten as $$ \partial_t \widetilde{{\bf A}} = {\bf u}\hbox{\boldmath $\times$}(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}\widetilde{{\bf A}}) -\eta\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}(\hbox{\boldmath $\nabla$}\hbox{\boldmath $\times$}\widetilde{{\bf A}}) -\hbox{\boldmath $\nabla$}\Phi, $$ with $$ \Phi={\bf u}\hbox{\boldmath $\cdot$}\widetilde{{\bf A}}-\eta\hbox{\boldmath $\nabla$}\hbox{\boldmath $\cdot$}\widetilde{{\bf A}}, $$ implying that $\lambda=\int^t dt'\,\,\Phi(t').$ In the KK model, the form of the induction equation (\ref{B-eq-II}) leads to the evolution equation for the magnetic correlation $\partial_t\mathcal{C}^{ij}= \mathcal{\widetilde{M}}^{ij}_{pq}\mathcal{C}^{pq},$ with \begin{eqnarray} \mathcal{\widetilde{M}}^{ij}_{pq}\mathcal{C}^{pq} &=& \partial_r\partial_s\left(T^{rs}\mathcal{C}^{ij}\right) -\partial_s\left(\partial_pT^{is}\,\mathcal{C}^{pj}\right)\cr & & \,\,-\partial_r\left(\partial_q T^{rj}\,\mathcal{C}^{iq}\right) + 
(\partial_p\partial_q T^{ij})\mathcal{C}^{pq}. \label{M-def-II} \end{eqnarray} Comparison with the definition of $\mathcal{M}$ in (\ref{M-def-I}) shows that $\mathcal{MC}=\mathcal{\widetilde{M}C}$ when acting on elements $\mathcal{C}$ of the subspace of solenoidal correlation functions. We note furthermore that this subspace is invariant under the evolution (\ref{C-eq}) (because the induction equation (\ref{B-eq-I}) preserves solenoidality of ${\bf B}$). The adjoint equation for the vector-potential correlation $\mathcal{\widetilde{G}}_{ij}({\bf r},t)=\langle\widetilde{A}_i({\bf r},t)\widetilde{A}_j(\hbox{\boldmath $0$},t)\rangle$ is \begin{equation} \partial_t\widetilde{\mathcal{G}}_{k\ell}= (\widetilde{\mathcal{M}}^*)_{k\ell}^{pq} \widetilde{\mathcal{G}}_{pq}, \label{G-eq-II} \end{equation} with \begin{eqnarray} (\mathcal{\widetilde{M}}^*)_{k\ell}^{pq}\widetilde{\mathcal{G}}_{pq} &=& T^{rs}\partial_r\partial_s\widetilde{\mathcal{G}}_{k\ell} + (\partial_k T^{ps})\partial_s\widetilde{\mathcal{G}}_{p\ell}\cr & & \,\,+(\partial_\ell T^{rq})\partial_r\widetilde{\mathcal{G}}_{kq} +(\partial_k\partial_\ell T^{pq})\widetilde{\mathcal{G}}_{pq}. \label{Mstar-def-II} \end{eqnarray} In the previous section we discussed how to determine the eigenvalues and eigenfunctions of the operator $\mathcal{M}.$ The right eigenfunctions $\mathcal{\widetilde{C}}_\alpha$ of $\mathcal{\widetilde{M}}$ and $\mathcal{C}_\alpha$ of $\mathcal{M}$ are the same in the solenoidal subspace, since $\mathcal{\widetilde{M}}=\mathcal{M}$ there. We shall obtain the left eigenfunctions of $\mathcal{\widetilde{M}}$ by solving for the differences with the corresponding left eigenfunctions of $\mathcal{M}$.
Defining $\delta \mathcal{G}_\alpha=\widetilde{\mathcal{G}}_\alpha-\mathcal{G}_\alpha, \,\,\,\,\delta\mathcal{M}^*\equiv \widetilde{\mathcal{M}}^*-\mathcal{M}^*,$ it follows from $ \mathcal{M}^*\mathcal{G}_\alpha=-\lambda_\alpha\mathcal{G}_\alpha, \widetilde{\mathcal{M}}^*\mathcal{\widetilde{G}}_\alpha= -\lambda_\alpha\mathcal{\widetilde{G}}_\alpha$ that \begin{equation} \left(\mathcal{\widetilde{M}}^*+\lambda_\alpha\right)\delta\mathcal{G}_\alpha =-\delta\mathcal{M}^*\mathcal{G}_\alpha. \label{delG-eq} \end{equation} A straightforward calculation using (\ref{Mstar-def-I}) and (\ref{Mstar-def-II}) gives \begin{eqnarray} (\delta\mathcal{M}^*)_{k\ell}^{pq}\mathcal{G}_{pq} &=& \partial_k\left[T^{pq}\left(\partial_q\mathcal{G}_{p\ell} -\partial_\ell\mathcal{G}_{pq}\right)\right]\cr & & \,\,\,\,\,\,\,\, +\partial_\ell\left[T^{pq}\left(\partial_p\mathcal{G}_{kq} -\partial_k\mathcal{G}_{pq}\right)\right]\cr & & \,\,\,\,\,\,\,\, +\partial_k\partial_\ell \left(T^{pq}\mathcal{G}_{pq}\right). \label{delM-def} \end{eqnarray} The solvability condition of eq.(\ref{delG-eq}) is $\langle\mathcal{\widetilde{C}}_\alpha,\delta\mathcal{M}^* \mathcal{G}_\alpha\rangle=0,$ for the right eigenfunction $\mathcal{\widetilde{C}}_\alpha$ of $\mathcal{\widetilde{M}}$ (which is also the right eigenfunction $\mathcal{C}_\alpha$ of $\mathcal{M}$) with eigenvalue $\lambda_\alpha.$ This is easily seen to be satisfied using the definition (\ref{delM-def}) of $\delta\mathcal{M}^*.$ Thus, eq.(\ref{delG-eq}) has a unique solution $\delta\mathcal{G}_\alpha$ in the subspace orthogonal to $\mathcal{\widetilde{C}}_\alpha.$ Defining $\mathcal{\widetilde{G}}_\alpha =\mathcal{G}_\alpha+\delta\mathcal{G}_\alpha,$ it follows that $$ \langle\mathcal{\widetilde{C}}_\alpha,\mathcal{\widetilde{G}}_\alpha\rangle= \langle\mathcal{\widetilde{C}}_\alpha,\mathcal{G}_\alpha\rangle =\langle\mathcal{C}_\alpha,\mathcal{G}_\alpha\rangle=1.
$$ Similar arguments using eq.(\ref{delG-eq}) show that $\delta\mathcal{M}^*\mathcal{G}_\alpha$ and $\delta\mathcal{G}_\alpha$ are orthogonal to every solenoidal eigenfunction $\mathcal{\widetilde{C}}_\beta=\mathcal{C}_\beta$ and, thus, to the entire subspace of solenoidal correlation functions. Hence the new set of eigenfunctions $\mathcal{\widetilde{C}}_\alpha, \mathcal{\widetilde{G}}_\alpha$ for all $\alpha$ forms another biorthogonal set. There are several simplifications in the isotropic sector of the model. As shown in Appendix \ref{BCR}, any function $\delta\mathcal{G}$ in the isotropic sector which is orthogonal to all solenoidal correlations must be pure-gauge: \begin{equation} \delta\mathcal{G}_{k\ell}=\partial_k\partial_\ell\Lambda. \label{pure-gauge} \end{equation} (For simplicity, we drop here the spectral index $\alpha$ which labels the eigenvalues and eigenfunctions.) Finding $\delta\mathcal{G}$ thus reduces to finding $\Lambda.$ Note furthermore that $\mathcal{M}^*\delta\mathcal{G}=0$ when $\delta\mathcal{G}$ is pure-gauge (ultimately because gauge functions $\lambda$ do not evolve in the gauge-choice of eq.(\ref{A-eq-I})). Thus, the equation for $\delta\mathcal{G}$ becomes $$ \left( \delta\mathcal{M}^*+\lambda\right)\delta\mathcal{G}= -\delta\mathcal{M}^*\mathcal{G} $$ and, substituting from (\ref{pure-gauge}), \begin{eqnarray*} \partial_k\partial_\ell \left(T^{pq}\partial_p\partial_q\Lambda\right) +\lambda\partial_k\partial_\ell\Lambda &=& -\partial_k\left[T^{pq}\left(\partial_q\mathcal{G}_{p\ell} -\partial_\ell\mathcal{G}_{pq}\right)\right]\cr & & -\partial_\ell\left[T^{pq}\left(\partial_p\mathcal{G}_{kq} -\partial_k\mathcal{G}_{pq}\right)\right]\cr & & -\partial_k\partial_\ell \left(T^{pq}\mathcal{G}_{pq}\right).
\end{eqnarray*} Finally, there must exist in the isotropic sector a scalar function $\Phi(r)$ such that \begin{equation} T^{pq}\left(\partial_p\mathcal{G}_{kq} -\partial_k\mathcal{G}_{pq}\right)=\partial_k\Phi, \label{Phi-def} \end{equation} with $\partial_k\Phi(r)=\hat{r}_k\Phi'(r),$ so that the previous equation becomes $$ T^{pq}\partial_p\partial_q\Lambda+\lambda\Lambda =-\left[2\Phi+T^{pq}\mathcal{G}_{pq}\right]. $$ Substituting the isotropic form $$ T^{pq}({\bf r}) = T_L(r)\hat{r}^p\hat{r}^q + T_N(r)\left(\delta^{pq}-\hat{r}^p\hat{r}^q\right) + T_H(r) \epsilon^{pqm}\hat{r}_m, $$ and the similar expression for $\mathcal{G}$ into (\ref{Phi-def}) gives, after some computation, $\Phi'(r) = 2 (T_N \Psi_H - T_H\Psi_N). $ The resulting equation to be solved for $\Lambda$ is \begin{equation} T_L\partial_r^2\Lambda + 2T_N\frac{1}{r}\partial_r\Lambda +\lambda \Lambda = -\left[2\Phi+T^{pq}\mathcal{G}_{pq}\right], \label{Lam-eq} \end{equation} with $$ \Phi(r) = -2 \int_r^\infty d\rho\,[T_N(\rho)\Psi_H(\rho)-T_H(\rho)\Psi_N(\rho)] $$ (so that $\Phi(+\infty)=0$) and $$ T^{pq}G_{pq}=T_LG_L+2T_NG_N+2T_HG_H. $$ Alternatively, the equation (\ref{Lam-eq}) may be written using BCR quantities, with $T_L=b,\,\,\,T_N=a/\sqrt{2}$ $$ \Phi' = \frac{aW_2-cW_3}{r}-\sqrt{2}c \Psi_L, $$ $$\Psi_L(r)=\sqrt{2}\int_0^r d\rho\,\frac{W_3(\rho)}{\rho^2},$$ and $$ TG\equiv T^{pq}G_{pq}= \frac{a G_1 + bG_2 + cG_3}{r}. $$ \subsection{Numerical Results and Physical Discussion} \label{numerical} As a particular case of physical interest we shall consider an incompressible, non-helical velocity field with $a=\sqrt{2}(b+\frac{1}{2}rb')$ and $c=0.$ To model an infinite-Reynolds-number inertial-range, we take, for $0<\xi<2,$ $$ b(r)=2\eta + 2D_1 r^\xi. $$ We non-dimensionalize the equations of the KK model using $\ell_\eta=(\eta/D_1)^{1/\xi}$ for space and $\tau_\eta=\ell_\eta^2/\eta$ for time. 
The Sturm-Liouville problem is then given by (\ref{SL-eq}), or \begin{eqnarray} &&\,\,\,\,\,\,\,\,\,\,\,\,-\partial_r(p(r)\partial_rW_2)+q(r)W_2 = \lambda W_2, \cr && p(r)=2(1+r^\xi), \,\,\,\,q(r)= {{4}\over{r^2}}-{{2(\xi+2)(\xi-1)}\over{r^{2-\xi}}}. \,\,\,\,\,\,\,\,\, \label{SL-eq-p} \end{eqnarray} Endpoints $r=0,\infty$ are singular, non-oscillatory and limit-point. The Friedrichs boundary conditions to select the principal solution are $W_2(0)=W_2(\infty)=0;$ see, e.g., \cite{Zettl05}. The Sturm-Liouville operator with $q(r)$ above is obviously non-negative for $\xi<1,$ so dynamo effect can exist only for $\xi>1.$ In the latter case, there is both point spectrum and, above some threshold value, $\lambda>\lambda_c,$ continuous spectrum. The lowest eigenvalue is negative, implying dynamo effect. To give the closest correspondence with real hydrodynamic turbulence, we take the Richardson value $\xi=4/3$ in our work here. See \cite{Vincenzi02} for a numerical study of general values $1<\xi<2.$ We obtain the solution of (\ref{SL-eq-p}) using the {\tt MATSLISE} software for ${\tt MATLAB}$ \cite{Ledouxetal05,Ledoux07}. Briefly, this package solves regular Sturm-Liouville eigenvalue problems by first converting them through the so-called Liouville transformation $W_2=\Psi/[p(r)]^{1/4}$ and $x=\int_0^r \frac{dr'}{\sqrt{p(r')}}$ into a corresponding Schr\"odinger equation for $\Psi(x).$ The latter is then solved by a high-order Constant Perturbation Method (CPM) algorithm, which generates its own numerical grid of points $r_i,$ $i=1,2,...,L.$ To treat our singular problem on an infinite interval, we truncate to a finite interval $[0,b]$ with the condition $W_2(b)=0$ and then take successively larger $b$ values, as discussed in \cite{Ledoux07}, Section 6.3.1. We have considered successively $b=200,...,700,800,$ with convergence of solutions for $r<500.$ The ground-state eigenvalue found by this method is $\lambda\doteq -0.1927,$ in good agreement with \cite{Vincenzi02}, Fig.
5, and the corresponding eigenfunction $W_2(r)$ is plotted in Fig.~1. The eigenfunction is normalized so that \begin{equation} \int_0^\infty dr\,W_2^2(r)=1. \label{W-norm} \end{equation} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig1.pdf}} \caption{Plot of ground-state eigenfunction $W_2(r)$ versus $r.$ All quantities in this and following figures have been non-dimensionalized with resistive units as discussed in the text.} \label{eigfun-fig} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig2.pdf}} \caption{Small-$r$ asymptotics of ground-state eigenfunction. In \textcolor{blue}{blue} is a log-log plot of $W_2(r)$ versus $r$ and in \textcolor{red}{red} the quadratic fit $(0.1288)r^2.$} \label{Wsmallr} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig3.pdf}} \caption{Large-$r$ asymptotics of ground-state eigenfunction. In \textcolor{blue}{blue} is a log-linear plot of $W_2(r)$ versus $r^{1/3}$ and in \textcolor{red}{red} a straight line with slope from (\ref{W2-large}) for $\xi=4/3$.} \label{Wlarger} \end{figure} Asymptotics at small and large $r$ are known for the ground-state eigenfunction. The small-$r$ behavior was obtained by \cite{BoldyrevCattaneo04}, who noted \begin{equation} W_2(r) = A r^2 + B r^{2+\xi} + o(r^{2+\xi}),\,\,\,\,\,r\ll 1. \label{W2-small} \end{equation} Indeed, direct substitution of this ansatz shows that the Sturm-Liouville eigenvalue equation (\ref{SL-eq-p}) is then satisfied up to terms $o(r^\xi),$ if and only if $B=-A.$ The large-$r$ behavior from a WKB asymptotic analysis is \cite{Vincenzi02,BoldyrevCattaneo04} \begin{equation} W_2(r) \sim \exp\left[-{{\sqrt{-2\lambda}\, r^{(2-\xi)/2}}\over{2-\xi}}\right],\,\,\,\,\,r\gg 1 \label{W2-large} \end{equation} up to a power-law prefactor. This corresponds to the dominant balance $2r^\xi W_2'' \doteq -\lambda W_2$ in the Sturm-Liouville eq.(\ref{SL-eq-p}) at large-$r$.
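The ground-state eigenvalue can also be recovered without {\tt MATSLISE}, from a naive second-order finite-difference discretization of (\ref{SL-eq-p}) on the truncated interval. The following sketch in Python with numpy/scipy (truncation radius and grid size are illustrative choices, not those used for the figures) reproduces $\lambda\approx -0.19$:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Second-order finite-difference discretization of
#   -(p W')' + q W = lambda W,  W(0) = W(Rmax) = 0,
# with p(r) = 2(1 + r^xi) and q(r) = 4/r^2 - 2(xi+2)(xi-1) r^(xi-2).
xi = 4.0 / 3.0
Rmax, N = 200.0, 20000             # truncation radius and grid size (illustrative)
h = Rmax / (N + 1)
r = h * np.arange(1, N + 1)        # interior grid points
q = 4.0 / r**2 - 2.0 * (xi + 2.0) * (xi - 1.0) * r**(xi - 2.0)
rh = h * (np.arange(N + 1) + 0.5)  # half-integer points r_{i +- 1/2}
ph = 2.0 * (1.0 + rh**xi)          # p evaluated at the half-points
main = (ph[:-1] + ph[1:]) / h**2 + q
off = -ph[1:-1] / h**2
A = diags([off, main, off], [-1, 0, 1], format='csc')
# Shift-invert about sigma = -0.3: the spectrum is bounded below by the
# ground state, so this returns the dynamo eigenvalue.
lam, vec = eigsh(A, k=1, sigma=-0.3)
print(lam[0])                      # close to -0.19: negative => dynamo growth
```

Because the spectrum of (\ref{SL-eq-p}) is bounded below by the ground state, shift-inversion about $\sigma=-0.3$ necessarily returns the dynamo eigenvalue rather than a box-quantized continuum state.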
Both the small-$r$ and large-$r$ asymptotic behaviors predicted analytically have been verified in our numerical solution. As shown in Fig.~2, the leading-order $r^2$ behavior in (\ref{W2-small}) is verified over about 24 orders of magnitude with the value $A\doteq 0.1288$ obtained by taking the small-$r$ limit of the numerical results for $W_2/r^2.$ Although we shall not show it here, we have furthermore verified the subleading $r^{10/3}$ term in (\ref{W2-small}) with the same value of $A,$ over about 16 orders of magnitude. We also verify the stretched-exponential decay (\ref{W2-large}) at large-$r$, as shown by the log-linear plot of $W_2(r)$ vs. $r^{1/3}$ in Fig.~3. The red line shows the slope predicted by (\ref{W2-large}) with $\lambda=-0.1927.$ Our numerical evaluation of $W_2(r)$ is in very good agreement with all known analytical results for the dynamo eigenfunction. Magnetic and vector-potential correlations in the dynamo growth mode are obtained from $W_2$ via (\ref{R-eigfun}), (\ref{L-eigfun}) and plotted in Figs.~4 and 5 using the traditional longitudinal and transverse functions. The normalizations are those which follow from (\ref{W-norm}). One obvious feature is the much longer range of the vector-potential correlations as compared with the magnetic-field correlations. 
\begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig4.pdf}} \caption{Plot of magnetic field correlation functions, longitudinal $C_L(r)$ in \textcolor{blue}{blue} and transverse $C_N(r)$ in \textcolor{green}{green} versus $r.$} \label{Bcorr-fig} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig5.pdf}} \caption{Plot of magnetic vector-potential correlation functions, longitudinal $G_L(r)$ in \textcolor{blue}{blue} and transverse $G_N(r)$ in \textcolor{green}{green} versus $r.$} \label{Acorr-fig} \end{figure} The small-$r$ behavior is obtained from (\ref{W2-small}) to be \begin{equation} C_L \sim \sqrt{2}A [1-r^{\xi}+o(r^{\xi})], \label{CL-small} \end{equation} \begin{equation} C_N \sim \sqrt{2}A[1-(1+\frac{\xi}{2})r^{\xi}+o(r^{\xi})], \label{CN-small} \end{equation} and \begin{equation} G_L \sim G_N \sim \frac{4\sqrt{2}A}{\lambda}[1+o(r^\xi)]. \label{GLN-small} \end{equation} Note that the contributions of order $r^{\xi}$ cancel in the latter two functions, signalling their greater smoothness. The large-$r$ behavior is found, with (\ref{W2-large}), to be \begin{equation} C_1 \sim -\sqrt{\frac{-\lambda}{2}} r^{-\xi/2}W_2,\,\,\,\,C_2 \sim {{\sqrt{2}}\over{r}} W_2 \label{C-large} \end{equation} \begin{equation} G_1 \sim \sqrt{\frac{2}{-\lambda}} r^{\xi/2} W_2, \,\,\,\, G_2 \sim \frac{2+\xi}{\sqrt{-\lambda}} r^{\xi/2} W_2. \label{G-large} \end{equation} The signs implied for $r\gg 1$ are $C_N<0,\,\,C_L>0$ and $G_N,\,\,G_L>0.$ Although it is difficult to see clearly in Fig.~4, $C_N(r)<0$ in our numerical solution for $r>6.85.$ The existence of negative tails in $C_N$ was noted some time ago \cite{RuzmaikinSokolov81,Novikovetal83} to be a consequence of the solenoidal character of the magnetic field (see also just below). The physical-space behaviors discussed above imply corresponding results for the magnetic energy spectrum of the dynamo mode.
At low wavenumbers \begin{equation} E(k)\sim A' k^4, \,\,\,\,\,\,\,\, k\ll k_\eta \label{energy-low} \end{equation} with $k_\eta=2\pi/\ell_\eta$ and $A'$ some constant numerically proportional to $A.$ To see this, note that the stretched-exponential decay (\ref{C-large}) implies that $E(k)$ is analytic at low-$k$ and can be expanded as a convergent power-series in $k^2.$ The leading term proportional to $k^2$ vanishes, however, since the integral $\int d^3r\, \mathcal{C}^{ij}({\bf r})$ vanishes. By isotropy, this is equivalent to the vanishing of the integral $\int_0^\infty dr\,r^2 C_T,$ where $C_T={{\sqrt{2}C_1+C_2}\over{r}}$ is the trace $C_{ii}.$ Using (\ref{R-eigfun}), $$ r^2C_T=\sqrt{2}\left(r\partial_rW_2+W_2\right)=\sqrt{2}\partial_r\left(rW_2\right), $$ so that $$ \int_0^\infty dr\,r^2 C_T(r)=\sqrt{2} \lim_{r\rightarrow \infty} rW_2(r)=0. $$ Note that this result requires the existence of negative tails in $C_T$ at large $r$ \cite{RuzmaikinSokolov81,Novikovetal83}. The $k^4$ spectrum in (\ref{energy-low}) was explained long ago by Kraichnan and Nagarajan \cite{KraichnanNagarajan67} as due to the dipole magnetic field that results from ``irregularly twisted and elongated current loops whose transverse dimension is $\sim k_m^{-1}$ [our $k_\eta^{-1}$].'' The spectrum at high wavenumbers is \cite{BoldyrevCattaneo04} \begin{equation} E(k)\sim C k^{-(1+\xi)}, \,\,\,\,\,\,\,\, k\gg k_\eta. \label{energy-high} \end{equation} This follows mathematically by Fourier transforming the singular terms $\propto r^\xi$ in (\ref{CL-small}),(\ref{CN-small}). The result (\ref{energy-high}) is the exact analogue for the Kazantsev model of the Golitsyn-Moffatt $k^{-11/3}$ spectrum \cite{Golitsyn60,Moffatt61}.
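Two steps in this argument are mechanical and can be confirmed symbolically: that $r^2C_T$ is an exact derivative for an arbitrary $W_2,$ and the small-$r$ expansions (\ref{CL-small}),(\ref{CN-small}). A sketch in Python with sympy (symbol names are ours):

```python
import sympy as sp

r, A = sp.symbols('r A', positive=True)
xi = sp.Rational(4, 3)

# (i) r^2 C_T is an exact derivative for arbitrary W2, so its
#     integral telescopes to the boundary term sqrt(2)*r*W2.
W2 = sp.Function('W2')(r)
C1, C2 = sp.diff(W2, r), sp.sqrt(2) * W2 / r
CT = (sp.sqrt(2) * C1 + C2) / r
print(sp.simplify(r**2 * CT - sp.sqrt(2) * sp.diff(r * W2, r)))  # 0

# (ii) the leading small-r ansatz W2 = A (r^2 - r^(2+xi))
#      reproduces the expansions of C_L and C_N.
W2s = A * (r**2 - r**(2 + xi))
CL = sp.sqrt(2) * W2s / r**2                # C_L = C_2 / r
CN = sp.diff(W2s, r) / (sp.sqrt(2) * r)     # C_N = C_1 / (sqrt(2) r)
print(sp.expand(CL))                        # sqrt(2)*A*(1 - r^xi)
print(sp.expand(CN))                        # sqrt(2)*A*(1 - (1 + xi/2)*r^xi)
```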
The dominant balance in the induction equation for length-scales $\ell\ll\ell_\eta$ is $$ \partial_t{\bf B} -\eta\triangle{\bf B} = {\bf B}_\eta\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}, $$ with ${\bf B}_\eta$ the magnetic field at scale $\ell_\eta.$ The time-derivative must be included because of the rapid change in time of the velocity field. Solving for the statistical steady-state of the above linear Langevin equation leads to $E(k)\sim \frac{\langle B^2\rangle}{\eta} E_u(k),$ where $E_u(k)$ is the energy spectrum of the velocity field, and not to $E(k)\sim \frac{\langle B^2\rangle}{ \eta^2k^2} E_u(k),$ as in the original argument of Golitsyn. Our discussion here exactly parallels that of Frisch and Wirth \cite{FrischWirth96} for the analogous passive scalar problem. Note finally that the energy spectrum in the Kazantsev model, both in its low-$k$ and high-$k$ behaviors, is thus close to that argued for kinematic dynamo in inertial-range hydrodynamic turbulence by Kraichnan and Nagarajan \cite{KraichnanNagarajan67}. \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig6.pdf}} \caption{Plot of gauge-change function $\Lambda(r)$ versus $r$.} \label{Lambda-fig} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig7.pdf}} \caption{Small-$r$ asymptotics of gauge-change function. In \textcolor{blue}{blue} is a log-log plot of $\Lambda(r)-\Lambda_0$ versus $r$ and in \textcolor{red}{red} the quadratic fit $(11.93)r^2.$} \label{Lam-small} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig8.pdf}} \caption{Large-$r$ asymptotics of gauge-change function. In \textcolor{blue}{blue} is a log-linear plot of $|\Lambda(r)|$ versus $r^{1/3}$ and in \textcolor{red}{red} a straight line with slope from (\ref{Lambda-large}) for $\xi=4/3$.} \label{Lam-large} \end{figure} We shall now turn to the evaluation of the gauge-II correlations.
For this purpose, we must determine the gauge-change function $\Lambda(r)$ which solves eq.(\ref{Lam-eq}): \begin{equation} r^{-2}(r^2b\Lambda')'+\lambda\Lambda=-F \label{Lam-eq2} \end{equation} with $F=2\Phi+TG.$ We have solved this equation numerically, by the procedure described in Appendix \ref{numerics}. The result is plotted in Fig.~6. At small $r$, the solution $\Lambda=\Lambda_{reg}+\Lambda_{sing}$ is the sum of a regular part \begin{equation} \Lambda_{reg}\sim \Lambda_0 +\Lambda_1r^2+o(r^2) \label{Lam-reg} \end{equation} and a singular part \begin{equation} \Lambda_{sing}\sim \Lambda_s r^{2+\xi}+o(r^{2+\xi}). \label{Lam-sing} \end{equation} This is verified by substituting into eq.(\ref{Lam-eq2}), using the similar expansion for $F$ $$ F\sim F_0 + F_1 r^\xi $$ which follows from $$ \Phi = \Phi_0+\int_0^r d\rho\, \frac{aW_2}{\rho} = \Phi_0 +O(r^2) $$ with $\Phi_0=-\int_0^\infty dr\, \frac{aW_2}{r}<0$ and $$ TG= \frac{8\sqrt{2}A}{\lambda}\left[3+(3+\xi)r^\xi+o(r^\xi)\right]. $$ Thus, $F_0=2\Phi_0+\frac{24\sqrt{2}A}{\lambda}<0$ and $F_1= \frac{8\sqrt{2}A}{\lambda}(3+\xi)<0.$ Then it is straightforward to obtain \begin{equation} \lambda\Lambda_0+12\Lambda_1=-F_0 \label{F0-eq} \end{equation} \begin{equation} 2\left[2\Lambda_1 +(2+\xi)\Lambda_s\right](3+\xi)= -F_1. \label{F1-eq} \end{equation} These results have been verified in our numerical solution, with specific values of the constants $$ \Lambda_0 = -1535.5, \,\,\Lambda_1 =11.9299, \,\, \Lambda_s =-6.0232. $$ See Appendix \ref{numerics}, and Fig.~7 for a comparison of $\Lambda(r)$ and $\Lambda_{reg}(r)$ at small $r.$ At large-$r$ we expect $\Lambda(r)\sim \exp\left[-{{\sqrt{-2\lambda}\, r^{(2-\xi)/2}}\over{2-\xi}}\right]$ up to power-law prefactors. Rewriting the eq.(\ref{Lam-eq2}) for $\Lambda$ as $$ b\Lambda'' + r^{-2}(r^2b)'\Lambda'+\lambda \Lambda= -F $$ one finds that the terms $b\Lambda'',\,\,\lambda\Lambda$ cancel to leading order. 
For $r\gg 1$, $$ TG \sim \frac{2(2+\xi)}{\sqrt{-\lambda}}r^{3\xi/2-1}W_2, $$ and $\Phi'=aW_2/r\sim \sqrt{2}(2+\xi)r^{\xi-1}W_2,$ so that $$ \Phi \sim -\frac{2(2+\xi)}{\sqrt{-\lambda}}r^{3\xi/2-1}W_2. $$ It follows that $$ F = 2\Phi+TG\sim \Phi. $$ The dominant balance of $r^{-2}(r^2b)'\Lambda'$ and $-F$ gives \begin{equation} \Lambda \sim \frac{\sqrt{2}}{\lambda} r^\xi W_2, \,\,\,\,\,\,\, r\gg 1. \label{Lambda-large} \end{equation} This asymptotic behavior is verified in our numerical solution. See Fig.~8 for a log-linear plot of $|\Lambda(r)|$ vs. $r^{1/3},$ with the red line having the slope predicted by (\ref{Lambda-large}),(\ref{W2-large}). {}From the function $\Lambda$ we obtain $\delta\mathcal{G}$ via eq.(\ref{pure-gauge}). We plot in Fig.~9 the longitudinal and transverse components $\delta G_L$ and $\delta G_N$ obtained from (\ref{gradient}). It is interesting that the algebraic signs of these functions are almost exactly opposite to those of $G_L,\,\,G_N$ in Fig.~5. Both $\delta G_L$ and $\delta G_N$ contain singular terms $\propto r^\xi$ at small $r$ because of the $\Lambda_s$ contribution and are stretched-exponentials \begin{equation} \delta G_1 \sim \sqrt{\frac{2}{-\lambda}} r^{\xi/2} W_2, \,\,\,\, \delta G_2 \sim -\frac{r}{\sqrt{2}}W_2 \label{delG-large} \end{equation} at large $r,$ so that $\delta G_N>0,\,\,\delta G_L<0$ for $r\gg 1.$ \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig9.pdf}} \caption{Plot of change in vector-potential correlation functions, longitudinal $\delta G_L(r)$ in \textcolor{blue}{blue} and transverse $\delta G_N(r)$ in \textcolor{green}{green}, versus $r.$} \label{deltaG-fig} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig10.pdf}} \caption{Plot of line-vector correlation functions, longitudinal $\widetilde{G}_L(r)$ in \textcolor{blue}{blue} and transverse $\widetilde{G}_N(r)$ in \textcolor{green}{green}, versus $r.$} \label{tildeG-fig} \end{figure} Finally, we obtain
$\mathcal{\widetilde{G}}=\mathcal{G}+\delta\mathcal{G}.$ Plotted in Fig.~10 are $\widetilde{G}_L$, $\widetilde{G}_N.$ These are the central results of this paper, because of their relation to $\mathcal{R}$ in eq.(\ref{R-longT}). We can thus interpret these functions as the correlations in the dynamo growth phase of vector line-elements carried to the same final point that were initially separated by distance $r,$ with directions longitudinal and transverse to the separation vector, respectively. We shall discuss their physical interpretation below; first we consider the asymptotic behaviors for small-$r$ and large-$r$. Combining results for $\mathcal{G}$ and $\delta\mathcal{G}$ at $r\ll 1$ gives \begin{equation} \widetilde{G}_L \sim \left( \frac{4\sqrt{2}A}{\lambda}+2\Lambda_1\right) +(2+\xi)(1+\xi)\Lambda_s r^{\xi} \label{G-small-II} \end{equation} and \begin{equation} \widetilde{G}_N \sim \left(\frac{4\sqrt{2}A}{\lambda}+2\Lambda_1\right) +(2+\xi)\Lambda_s r^{\xi}. \label{GN-small-II} \end{equation} It is interesting to note that $\mathcal{\widetilde{G}},$ unlike $\mathcal{G},$ contains terms $\propto r^\xi$ and is singular at small $r$. The reason for this is that the eq.(\ref{A-eq-II}) for $\widetilde{{\bf A}}$ contains the velocity-gradient $\hbox{\boldmath $\nabla$}{\bf u},$ whereas the eq.(\ref{A-eq-I}) for ${\bf A}$ contains only ${\bf u}$ itself. Finally, comparing the large-$r$ asymptotics in (\ref{G-large}) with that in (\ref{delG-large}) gives for $r\gg 1$ \begin{equation} \widetilde{G}_1 \sim 2\sqrt{\frac{2}{-\lambda}} r^{\xi/2} W_2, \,\,\,\, \widetilde{G}_2 \sim -\frac{r}{\sqrt{2}}W_2, \label{G-large-II} \end{equation} so that $\widetilde{G}_N>0,\,\,\widetilde{G}_L<0$ for $r\gg 1.$ The results in Fig.~10 quantify the significance of magnetic field-line stochasticity for the small-$Pr_m$ turbulent dynamo. 
The plotted correlations represent the contribution to magnetic energy at a given point produced by field vectors that arrive from points separated by distance $r$ initially. As one can see, the contribution is quite diffuse (in units of the resistive length $\ell_\eta$) with fat, stretched-exponential tails. Both correlations are positive for small separations $r <3\ell_\eta.$ Particularly interesting is the long negative tail for the longitudinal correlation $\widetilde{G}_L,$ which represents an ``anti-dynamo effect'' that suppresses the turbulent growth of magnetic field energy. \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig11.pdf}} \label{Transverse-fig} \caption{Contribution of transverse seed field vectors to a positive line-vector correlation. Two transverse field-vectors on parallel lines at the initial time (left), when brought to the same point by a fluid motion, are positively correlated at first contact of the lines (right).} \end{figure} \begin{figure} \centerline{\includegraphics[width=9cm,height=7cm]{fig12.pdf}} \label{Longitude-fig} \caption{Contribution of longitudinal seed field vectors to a negative line-vector correlation. Two longitudinal field-vectors on the same line---or two adjacent lines---at the initial time (left), when brought to the same point by a fluid motion, are negatively correlated at first contact of the lines (right).} \end{figure} It is easy to understand the positive signs of $\widetilde{G}_L,\,\,\widetilde{G}_N$ at small $r$ values. For $r\lesssim \ell_\eta$ the Brownian motion term dominates in the equation which follows from (\ref{x-def}) for the relative motion ${\bf r}(t)={\bf x}_2(t)-{\bf x}_1(t)$ of a pair of points. Since the Brownian motion corresponds to a simple translation of the lines, with no rotation, the line-vectors which start parallel (either longitudinal or transverse) at separation $r\lesssim \ell_\eta$ will tend to remain nearly parallel when they intersect.
Hence, they arrive at the final point positively correlated. The signs of the tails of $\widetilde{G}_L,\,\,\widetilde{G}_N$ for large $r$ are less easy to understand, because turbulent advection effects become important for $r\gtrsim\ell_\eta.$ We suggest here a heuristic explanation. In the left half of Fig.~11 we show a pair of parallel field-lines separated by ${\bf r}$ with their initially transverse line-vectors indicated in red. We then show on the right the result of a large-scale fluid motion which brings the two line-vectors together to the same point, stretched and rotated by the flow. (It is important to understand that the velocity at scales $r>\ell_\eta$ cannot actually bring the two lines to touch, but only within distance $\sim \ell_\eta$ of each other; it is then the action of two independent Brownian motions which completes their transport to the same point.) Around the time of first contact shown in the figure, the two line-vectors are positively correlated. Further rotation by the flow can eventually produce anti-alignment and negative correlation. The situation is exactly the opposite, however, for the initially longitudinal line-vectors shown in Fig.~12, which start on the same (or nearly the same) field-line. As these vectors are brought together by a similar fluid motion as before, the line-vectors are anti-aligned near the time of first contact. They may then become positively aligned as further rotation and stretching occur. These geometric considerations may help to explain why, for large enough $r,$ positive alignment dominates in $\widetilde{G}_N$ while negative alignment dominates in $\widetilde{G}_L.$ Transverse line-vectors seem to be brought together from large distances by mainly lateral advection of field-lines, while longitudinal vectors are brought together by twisting and looping of field-lines.
\section{Uniform Fields and Magnetic Induction} \label{induction} We now turn to the second question left open in I, the fate of spatially-uniform magnetic seed fields in the dynamo regime of the Kazantsev model. This issue is closely related to the problem of ``magnetic induction'', which is often defined simply as the generation of small-scale magnetic fluctuations by turbulent tangling of the lines of a non-vanishing mean-field. In the case of homogeneous statistics---the only case we consider in this paper---any non-zero average field must be spatially uniform. Thus, the response of a uniform field to homogeneous turbulence is intimately related to the problem of magnetic induction. In fact, we shall argue that there is no real distinction in this context between ``magnetic induction'' and ``fluctuation dynamo'' with a uniform seed field. Let us consider then the special case that the initial magnetic-field ${\bf B}_{(0)}$ is spatially-uniform (but still possibly random). Applying eq.(\ref{Ben-R-hom}) gives the energy in the fluctuation field ${\bf b}={\bf B}-\langle{\bf B}\rangle$ to be \begin{eqnarray} && \langle b^2(t)\rangle = \mathcal{R}_{k\ell}(t)\langle B^k_{(0)}B^\ell_{(0)}\rangle -\delta_{k\ell}\langle B^k_{(0)}\rangle\langle B^\ell_{(0)}\rangle \cr && =\mathcal{R}_{k\ell}(t)\langle b^k_{(0)}b^\ell_{(0)}\rangle + [\mathcal{R}_{k\ell}(t)-\delta_{k\ell}]\langle B^k_{(0)}\rangle\langle B^\ell_{(0)}\rangle. \,\,\,\,\,\,\,\, \label{ben-R-hom} \end{eqnarray} In the second line of the equation above, one may make a formal distinction between ``fluctuation dynamo'' represented by the first term and ``magnetic induction'' represented by the second term.
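The second line of the equation above is a purely algebraic rearrangement of the first, using $\langle B^k_{(0)}B^\ell_{(0)}\rangle = \langle b^k_{(0)}b^\ell_{(0)}\rangle + \langle B^k_{(0)}\rangle\langle B^\ell_{(0)}\rangle$. A quick numerical sanity check of this identity, with sample moments in place of ensemble averages and an arbitrary stand-in matrix for $\mathcal{R}_{k\ell}(t)$:

```python
# Check: R_{kl}<B^k B^l> - delta_{kl}<B^k><B^l>
#      = R_{kl}<b^k b^l> + (R_{kl} - delta_{kl})<B^k><B^l>,
# with sample moments and an arbitrary stand-in response matrix R.
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))                          # stand-in for R_{kl}(t)
B = rng.standard_normal((10000, 3)) + [1.0, -0.5, 2.0]   # random field + mean
m = B.mean(axis=0)                                       # <B>
fl = B - m                                               # fluctuation b = B - <B>

BB = B.T @ B / len(B)                                    # <B^k B^l>
bb = fl.T @ fl / len(fl)                                 # <b^k b^l>

lhs = np.sum(R * BB) - m @ m
rhs = np.sum(R * bb) + m @ (R - np.eye(3)) @ m
```

Since the rearrangement is exact, `lhs` and `rhs` agree to rounding error for any choice of `R` and any sample.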
However, their asymptotic growth rates at long times are the same, both given by $$ \mathcal{R}_{k\ell}(t)\sim e^{-\lambda t} \mathcal{\widetilde{G}}_{k\ell}, \,\,\,\,\,\,\,\,\,\,\,\, (\lambda<0) $$ if the integral $$ \mathcal{\widetilde{G}}_{k\ell}\equiv \int d^3r\,\,\mathcal{\widetilde{G}}_{k\ell}({\bf r}) $$ is non-vanishing. We show now that it is indeed non-zero. Since the above integral is gauge-invariant, one may just as well consider the integral of $\mathcal{G}_{k\ell}({\bf r}).$ Furthermore, if the turbulent velocity field (but not necessarily the magnetic field) is statistically isotropic, then $$ \mathcal{G}_{k\ell} = \int d^3r\,\,\mathcal{G}_{k\ell}({\bf r}) = \frac{1}{3}G_T \delta_{k\ell} $$ where $ G_T\equiv 4\pi \int_0^\infty dr\,r^2 G_T(r) $ and $G_T(r)=G_L(r)+2G_N(r)$ is the trace function. From (\ref{L-eigfun}) $$ G_T(r)={{\sqrt{2}G_1+G_2}\over{r}} = {{2aW_2+(a+\sqrt{2}b)r\partial_rW_2}\over{\lambda r^2}}.$$ Using the incompressibility condition $a=\sqrt{2}\left(b+{{1}\over{2}}rb'\right)$ and integrating by parts once in $r$ gives $$ \int_0^\infty dr \,r^2 G_T(r)=-{{1}\over \sqrt{2}\lambda} \int_0^\infty (r^2b''+4rb') W_2(r)\,dr. $$ On the other hand, multiplying the eigenvalue equation (\ref{SL-eq}) by $r^2$ and integrating by parts in $r$ twice gives $$ -\int_0^\infty (r^2b''+4rb') W_2(r)\,dr =\lambda \int_0^\infty r^2 W_2(r)\,dr. $$ We finally obtain that $$ G_T\equiv 4\pi\int_0^\infty dr \,r^2 G_T(r)=2\pi\sqrt{2} \int_0^\infty r^2 W_2(r)\,dr>0. $$ We used here the positivity of the ground-state (dynamo) eigenfunction $W_2$. Notice that we have not assumed any specific form of $b(r),$ as long as the dynamo mode exists. In fact, the entire spatial structure of small-scale fluctuations for ``magnetic induction'' is the same as that for ``fluctuation dynamo'' at long times. Both are determined by the same dynamo eigenmode.
To see this, we note that the magnetic correlation can be expanded, in general, into right eigenmodes of $\mathcal{M}:$ \begin{equation} \mathcal{C}^{ij}({\bf r},t)=\sum_\alpha c_{\alpha} e^{-\lambda_\alpha t} \mathcal{C}^{ij}_\alpha({\bf r}) \label{C-expand} \end{equation} The expansion coefficients are determined from the initial correlation $\mathcal{C}^{ij}_{(0)}({\bf r})$ by \begin{equation} c_{\alpha} \equiv \int d^3r\,\, \mathcal{G}_{\alpha; ij}({\bf r})\mathcal{C}^{ij}_{(0)}({\bf r}), \label{coeff} \end{equation} using biorthogonality of right and left eigenmodes $\mathcal{C}_\alpha$ and $\mathcal{G}_\alpha$. If $\mathcal{C}^{ij}_{(0)}({\bf r})=\langle B^i_{(0)}B^j_{(0)}\rangle$ is ${\bf r}$-independent, then for the ground state mode (say $\alpha=0$) $ c_0 = \mathcal{G}_{0; k\ell}\langle B^k_{(0)}B^\ell_{(0)}\rangle = \frac{1}{3}G_T \langle B_{(0)}^2\rangle>0.$ Thus, with $r$ fixed, $$ \mathcal{C}^{ij}({\bf r},t) \sim c_0 e^{-\lambda_0 t} \mathcal{C}^{ij}_0({\bf r}), \,\,\,\,\,\,\,\, t\rightarrow \infty. $$ The magnetic correlation $\mathcal{C}^{ij}({\bf r},t) $ is dominated by the leading dynamo eigenmode $\mathcal{C}^{ij}_0({\bf r})$ with subleading contributions from additional eigenmodes (point spectrum of $\mathcal{M}$) if any. There is no essential physical difference between the two cases of ``fluctuation dynamo'' (${\bf B}_{(0)}={\bf b}_{(0)}$) and ``magnetic induction'' (${\bf B}_{(0)}=\langle{\bf B}_{(0)}\rangle$). It is interesting to consider in (\ref{C-expand}) also the opposite limit of large $r$ with $t$ fixed. In that case, the result with initial condition $\mathcal{C}^{ij}_{(0)}({\bf r}) =\langle B^i_{(0)}B^j_{(0)}\rangle$ is \begin{equation} \mathcal{C}^{ij}({\bf r},t) \sim \langle B^i_{(0)}B^j_{(0)}\rangle, \,\,\,\,\,\,\,\,r\rightarrow\infty. 
\label{cluster} \end{equation} This is clearly true for the case $\mathcal{C}^{ij}_{(0)}({\bf r})=\langle B^i_{(0)}\rangle \langle B^j_{(0)}\rangle,$ because the above limit then corresponds to ``clustering'' of the magnetic correlation function. However, the expansion coefficients (\ref{coeff}) are linear in the initial correlation, so the above result must hold whenever $\mathcal{C}_{(0)}^{ij}({\bf r})$ is ${\bf r}$-independent. The limit (\ref{cluster}) comes entirely from the continuous spectrum of $\mathcal{M}$ in the expansion (\ref{C-expand}) (for which the sum over $\alpha$ is actually an integral), because the finite number of dynamo eigenmodes $\mathcal{C}_\alpha$ associated to the point spectrum of $\mathcal{M}$ all have the stretched-exponential decay in (\ref{W2-large}), with the corresponding value of $\lambda_\alpha<0$. Their contribution thus vanishes in the large-$r$ limit. We conclude that there is no essential distinction in the dynamo regime between ``magnetic induction'' for a uniform mean-field and ``fluctuation dynamo'' for a homogeneous (but random) seed field, since all of their physical behaviors and underlying mechanisms are exactly the same. Another possible definition of ``magnetic induction'' which is employed in the literature is the growth of small-scale fluctuation fields generated from a non-vanishing mean-field, but only in a parameter regime where the ``fluctuation dynamo'' does not exist. The term is used this way in discussions of liquid-metal laboratory experiments operated in a sub-threshold regime \cite{Peffleyetal00,Bourgoinetal02,Nornbergetal06}. Although such inductive growth of fluctuations is generally sub-exponential, it may be exponential if the average magnetic field is itself exponentially growing because of a mean-field dynamo effect.
Of course, with this definition, it makes no sense to regard ``magnetic induction'' and ``fluctuation dynamo'' as possible co-existing and competing mechanisms for small-scale magnetic field growth. We may consider as an example of this second type of ``magnetic induction'' the failed-dynamo regime of the KK model for $\xi<1,$ at zero Prandtl number and infinite magnetic Reynolds number. Unlike the liquid-metal experiments where the fluctuation dynamo fails because $Re_m<Re_m^{(c)},$ the failure here is due to the extreme roughness of the advecting velocity field (see I). This model problem is not a perfect analogue of the induction phenomena seen in the experiments. For example, we solve the KK model with velocity integral scale $L=\infty$ (and thus $Re_m=\infty$) so that the large scales of our problem are quite different from the non-universal, inhomogeneous and anisotropic conditions seen in experiments. Also, the average field is uniform in the KK model with homogeneous statistics. Since spatially constant fields must be time-independent, we cannot study in this setting induction of an exponentially growing mean-field. On the other hand, the cited experiments \cite{Peffleyetal00,Bourgoinetal02,Nornbergetal06} have studied the turbulent induction of a near-uniform, stationary external field. The analogy with those experiments is close enough that we can test some proposed theories of ``magnetic induction'' in our model situation.
We therefore consider in this light several results of I for the failed-dynamo regime of the KK model with $\xi<1.$ It was found there for the case of spatially-uniform and isotropic initial data $\mathcal{C}^{ij}_{(0)}({\bf r})=A\delta^{ij}$ that, at long times, \begin{equation} \langle B^2(t)\rangle \propto \ell_\eta^{\zeta_1}(D_1t)^{|\zeta_1|/\gamma}, \label{inducedE} \end{equation} with $$ \zeta_1=-\frac{3}{2}-\frac{\xi}{2}+\frac{3}{2} \sqrt{1-\frac{1}{3}\xi(\xi+2)} $$ which is negative and decreasing from 0 to $-2$ for $0<\xi<1.$ Thus, the energy in the magnetic fluctuations is growing, but only as a power-law in $t$. These same results hold, in fact, for general uniform initial data of the form $\mathcal{C}^{ij}_{(0)}({\bf r}) =\langle B^i_{(0)} B^j_{(0)}\rangle.$ This follows directly from eq.(\ref{Ben-R-hom}) when the random velocity (but not necessarily the magnetic field) is statistically isotropic and the line-correlation matrix $\mathcal{R}_{k\ell}(t)=\frac{1}{3}\mathcal{R}(t)\delta_{k\ell}.$ Paper I showed further that there are three spatial regimes of the magnetic correlation. These are the resistive range: \begin{equation} C_L(r,t) \simeq A'' \left(\frac{\ell_\eta}{L(t)}\right)^{\zeta_1}\left[ 1-2 \left(\frac{r}{\ell_\eta}\right)^\xi\right] \,\,\,\,r\ll \ell_\eta, \label{resist} \end{equation} the quasi-steady, inertial-convective range: \begin{equation} C_L(r,t) \simeq A' \left(\frac{r}{L(t)}\right)^{\zeta_1}\,\,\,\, \ell_\eta \ll r\ll L(t), \label{quasi} \end{equation} and the very large-scale range: \begin{equation} C_L(r,t) \simeq A \left[1- \frac{2\zeta_1(\zeta_1+3+\xi)}{\gamma} \left(\frac{L(t)}{r}\right)^{\gamma}\right]\,\,\,\, r\gg L(t). \label{veryL} \end{equation} We have here defined $L(t)\equiv (D_1t)^{1/\gamma}$ to be a characteristic large length-scale of the magnetic field.
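The endpoint values quoted for $\zeta_1$ are elementary to confirm; the following snippet (a convenience check, not part of the analysis of I) evaluates the formula above on a grid in $\xi$:

```python
# zeta_1(xi) = -3/2 - xi/2 + (3/2)*sqrt(1 - xi*(xi+2)/3);
# check the endpoints zeta_1(0) = 0, zeta_1(1) = -2 and the monotone decrease.
import numpy as np

def zeta1(xi):
    return -1.5 - 0.5 * xi + 1.5 * np.sqrt(1.0 - xi * (xi + 2.0) / 3.0)

xi_grid = np.linspace(0.0, 1.0, 101)
z = zeta1(xi_grid)
```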
Note that $A',A''$ are constants numerically proportional to $A.$ Result (\ref{resist}) follows from the formula for $\Gamma_{in}(\sigma)$ on p.26 of I, together with the series expansion of the hypergeometric function ${\,\!}_2F_1$ (eq.(75) in I). Equations (\ref{quasi}) and (\ref{veryL}) follow from I, eq.(67) for $\alpha=0$ with $\rho\ll 1$ and $\rho\gg 1,$ respectively. For the last case, the large-argument asymptotics of the Kummer function in \cite{Erdelyi53}, eq. 6.13.1(2), is employed to derive the $1/r^\gamma$ term. These three ranges all have simple physical descriptions, which are easiest to discuss based on the corresponding magnetic energy spectra obtained by Fourier transform. In the resistive range the result is \begin{equation} E(k,t) \propto \frac{\langle B^2(t)\rangle}{\eta} k^{-(1+\xi)} \,\,\,\,\,\,\,\,\,k\gg k_\eta. \label{resist-spectrum} \end{equation} (Note that $\eta=D_1\ell_\eta^\xi.$) This is a Golitsyn-like spectrum, with physics exactly like that discussed earlier for the resistive range of the dynamo growth modes. This range is essentially universal in the small-$Pr_m$ limit. The inertial-convective range has energy spectrum \begin{equation} E(k,t) \simeq A' [L(t)]^{|\zeta_1|}k^{|\zeta_1|-1} \,\,\,\,\,\,\,\,\,k_L(t)\ll k\ll k_\eta \label{IC-spectrum} \end{equation} with $k_L(t)=2\pi/L(t)\rightarrow 0$ as $t\rightarrow\infty.$ This growing range is responsible for the energy increase in (\ref{inducedE}). It is ``quasi-steady'' in the sense that all its statistical characteristics are identical to those in the forced steady-state (see \cite{Vergassola96} and I), with the uniform initial field replacing the role of the external force.
The physics involves a nontrivial competition of stretching ${\bf B}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}$ and nonlinear cascade ${\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf b}.$ If one assumes that ``induction'' of the initial, background field ${\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}$ dominates the stretching, then this is the same physics invoked by Ruzma\u{\i}kin and Shukurov \cite{RuzmaikinShukurov82} for induced fluctuations. Assuming a balance of the terms $({\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf b}\simeq {\bf b}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}\simeq {\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u},$ they proposed on dimensional grounds an $A k^{-1}$ spectrum, with $A\propto \langle B^2_{(0)}\rangle.$ However, such a spectrum only occurs in the KK model for $\xi=0.$ In that case, the cascade term ${\bf u}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf b}$ can be represented as an eddy-diffusivity $2D_1$ and a balance equation $$ \partial_t {\bf b} -2D_1\triangle{\bf b} ={\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u} $$ predicts the correct spectrum $\langle B_{(0)}^2\rangle k^{-1}.$ There is no distinction in this case between the inertial-convective range and the resistive range with Golitsyn spectrum. For $0<\xi<1,$ however, there is an ``anomalous dimension'' $|\zeta_1|$ which corrects the dimensional prediction. There is no longer any simple, heuristic argument for the magnetic spectral exponent in the quasi-steady range, which requires the computation of a non-perturbative zero-mode. The physics remains a nontrivial competition between the creation of magnetic energy by stretching and its destruction by turbulent cascade to the resistive range.
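For the $\xi=0$ case, the balance equation above acts mode-by-mode as a damped Langevin equation: the eddy-diffusivity supplies a damping $\sim 2D_1k^2$ on each Fourier mode while the induction term acts as white-noise forcing, so each mode's variance saturates instead of growing. A toy simulation of a single such mode (all parameters are illustrative stand-ins, not values from the model):

```python
# Toy per-mode picture of the xi=0 balance: damped mode driven by white noise,
#   db = -gamma*b*dt + sigma*dW,   gamma ~ (eddy diffusivity)*k^2,
# whose variance saturates at sigma^2/(2*gamma) rather than growing.
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma = 1.0, 1.0                 # illustrative stand-ins
dt, nstep, nsamp = 1e-3, 5000, 20000    # evolve an ensemble to t = 5 >> 1/gamma
mode = np.zeros(nsamp)
for _ in range(nstep):                  # Euler-Maruyama time stepping
    mode += -gamma * mode * dt + sigma * np.sqrt(dt) * rng.standard_normal(nsamp)
var = np.mean(mode**2)                  # expect ~ sigma^2/(2*gamma) = 0.5
```

The saturated variance plays the role of the steady per-mode spectrum; there is no secular growth once damping balances the inductive input.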
For the very low wavenumber range $k\ll k_L(t)$ one might expect the dimensional argument of Ruzma\u{\i}kin and Shukurov to work after all and to correctly yield an $Ak^{-1}$ spectrum. The results in (\ref{resist})-(\ref{veryL}) correspond to a member of the one-parameter family of self-similar decay solutions found in I, with parameter choice $\alpha=0.$ For the general member of this family, the low-wavenumber spectrum is of the form $A k^{\alpha-1}$ (``permanence of large eddies''; see I), so setting $\alpha=0$ should naively lead to the Ruzma\u{\i}kin-Shukurov prediction. But this is not the case. Fourier transformation of the initial uniform magnetic field instead yields a term in the energy spectrum $\propto A \delta(k),$ a delta-function at $k=0.$ The actual energy spectrum in the low-wavenumber range is found by Fourier transforming the second term in (\ref{veryL}) to be \begin{equation} E(k,t) \simeq A' [L(t)]^\gamma k^{1-\xi}, \,\,\,\,\,\,\,\,\, k\ll k_L(t). \label{VL-spectrum} \end{equation} The simple physics of this range is direct induction from the spatially-constant field via the balance $$ \partial_t {\bf b} = {\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}. $$ Indeed, solving this linear Langevin equation yields $E(k,t)\propto \langle B^2_{(0)}\rangle (D_1 t) k^2 E_u(k),$ reproducing the above spectrum. Note that $[L(t)]^\gamma=D_1t$ corresponds to diffusive spectral growth in this range, due to the white-noise-in-time character of the advecting velocity field. The total energy in this range remains constant in time, however, because shrinking of the range exactly compensates for increase of the energy spectrum. Once the energy $b_k^2\simeq k E(k,t)$ in an interval around wavenumber $k$ approaches the energy $\langle B_{(0)}^2\rangle$ in the initial uniform field, a nonlinear cascade begins and that wavenumber $k$ joins the inertial-convective range with spectrum (\ref{IC-spectrum}).
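In contrast with the damped balance of the quasi-steady range, the undamped balance $\partial_t {\bf b} = {\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}$ gives each mode a variance growing linearly in time, which is the diffusive spectral growth noted above. A toy check of this diffusive accumulation, with stand-in amplitudes:

```python
# Toy check of diffusive growth: an undamped mode driven by white-in-time
# noise accumulates variance linearly, <b^2> = sigma^2 * t.
import numpy as np

rng = np.random.default_rng(2)
dt, nstep, nsamp, sigma = 1e-3, 2000, 20000, 1.0
mode = np.zeros(nsamp)
for _ in range(nstep):                  # ensemble of independent realizations
    mode += sigma * np.sqrt(dt) * rng.standard_normal(nsamp)
t = nstep * dt                          # t = 2.0
var = np.mean(mode**2)                  # expect sigma^2 * t = 2.0
```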
The above argument resembles one which has been invoked to explain a $k^{-5/3}$ spectrum of magnetic energy reported at wavenumbers $k<k_\eta$ in some low-$Pr_m$ liquid-metal experiments \cite{Peffleyetal00,Nornbergetal06} and in a related numerical simulation \cite{Baylissetal07}. These works study the induction of an imposed magnetic field by a turbulent flow with a Kolmogorov energy spectrum. The argument made by those authors amounts to assuming a balance between induction of the external magnetic field ${\bf B}_{(0)}$ and convection by the large-scale velocity ${\bf u}_{(0)}$ (both mean and fluctuating components): $$ {\bf u}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf b} = {\bf B}_{(0)}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}{\bf u}. $$ This leads to the prediction that $\langle u_{(0)}^2\rangle E(k)\simeq \langle B_{(0)}^2\rangle E_u(k),$ so that $E_u(k)\simeq \varepsilon^{2/3}k^{-5/3}$ implies a similar magnetic energy spectrum $E(k)$. We find this argument unconvincing. In the first place, large-scale advection conserves the energy in magnetic fluctuations. It cannot therefore balance the input from induction of ${\bf B}_{(0)}.$ If nonlinear cascade can be ignored, then magnetic energy must be expected to grow, as in our result (\ref{VL-spectrum}) above, and not to saturate. Furthermore, if the large-scale sweeping were important, then it should appear in our balance argument for (\ref{VL-spectrum}), because the KK model contains such dynamical sweeping effects. However, we see that the correct spectrum at very low wavenumbers is obtained by ignoring such sweeping as irrelevant. This is consistent with the results of Frisch and Wirth \cite{FrischWirth96} for the diffusive range of a passive scalar, who emphasized the (nontrivial) fact that sweeping by the large scales plays no role in the balance for that high-wavenumber range.
Finally, we note that $k^{-5/3}$ is the (Obukhov-Corrsin) spectrum expected for a passive {\it scalar} in a Kolmogorov inertial-range and not for a passive magnetic field. The argument of \cite{Peffleyetal00, Nornbergetal06,Baylissetal07} thus omits the effects of the small-scale stretching interaction $({\bf b}\hbox{\boldmath $\cdot$}\hbox{\boldmath $\nabla$}){\bf u}$ and we know of no good justification for doing so in a steady-state, saturated regime. \section{Conclusions} \label{conclusions} The main purpose of this paper was to quantify the importance of stochasticity of flux-line freezing to the zero-Prandtl-number turbulent dynamo. It is a rigorous result for the Kazantsev-Kraichnan model that Lagrangian trajectories become intrinsically stochastic in the inertial range with velocity roughness exponent $\xi$, due to Richardson 2-particle turbulent diffusion. In this situation, infinitely-many magnetic lines in the initial seed field at time $t_0$ are brought to a point at time $t$ from a region of size $L(t)\sim (t-t_0)^{1/(2-\xi)}.$ Note that the size of the region sampled is independent of the resistivity. Not all of the magnetic field lines in this large region will, however, make an equal contribution to the net dynamo growth of magnetic energy. The contribution from lines initially separated by distance $r$ is quantified, in isotropic non-helical turbulence, by the line-correlations $R_L(r,t)$ and $R_N(r,t).$ At long times, these correlations are proportional to $e^{-\lambda t} \widetilde{G}_L(r)$ and $e^{-\lambda t}\widetilde{G}_N(r),$ respectively, where $\widetilde{G}_L(r)$ and $\widetilde{G}_N(r)$ are longitudinal and transverse components of the (left/adjoint) dynamo eigenfunction. The central results of this paper are these two functions, plotted in Fig.~10, which quantify the relative contribution to magnetic energy from lines initially separated by the distance $r,$ in resistive units.
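To see how slowly a stretched-exponential correlation concentrates this contribution, consider a toy cumulative-energy calculation with the integrand $r^2e^{-c\,r^{1/3}}$, where the $r^{1/3}$ argument corresponds to $\xi=4/3$, while the constant $c$ and the neglected algebraic prefactors are illustrative stand-ins (the actual integrand involves the numerical eigenfunctions):

```python
# Toy cumulative-energy profile for a stretched-exponential correlation tail:
# with integrand ~ r^2 * exp(-c * r^(1/3)), find the radius enclosing 90% of
# the integral. c = 1 is an illustrative stand-in; r is in resistive units.
import numpy as np

c = 1.0
r = np.linspace(0.0, 20000.0, 200001)
w = r**2 * np.exp(-c * r**(1.0 / 3.0))   # assumed integrand shape
cum = np.cumsum(w)
cum /= cum[-1]                           # normalized cumulative energy
r90 = r[np.searchsorted(cum, 0.9)]       # radius enclosing 90% of the energy
```

The qualitative point survives the stand-ins: with an $r^{1/3}$ argument in the exponential, the 90\% radius is the cube of an $O(10)$ number, i.e. hundreds to thousands of resistive lengths.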
The principal contribution to the dynamo growth comes from lines at separations of the order of the resistive length $\ell_\eta.$ However, the decay of the correlations, in units of the resistive length, is a slow, stretched exponential. This implies that lines which arrive from any arbitrary, fixed separation $r$---no matter how large---will contribute an amount of energy growing exponentially rapidly in time. Of course lines separated by $r$ much, much greater than $\ell_\eta$ will make a small relative contribution, but even lines separated by many $\ell_\eta$ make a substantial contribution. To underline this fact, we plot in Fig.~13 below the eigenfunctions $\widetilde{G}_L(r)$ and $\widetilde{G}_N(r)$ multiplied by $r^2,$ which, integrated over $r,$ give the mean magnetic energy for a uniform seed field. (In order to make this plot, we have extended our numerical results to $r>163\ell_\eta$ with the asymptotic formulas (\ref{G-large-II}).) As this figure should make clear, lines separated by many hundreds of resistive lengths are important to the dynamo growth. To be more quantitative, we find that lines initially separated by up to $968\ell_\eta$ must be considered in order to get 90\% of the total magnetic energy. Another feature dramatically illustrated in Fig.~13 is the strong {\it anti-dynamo} effect arising from line-vectors initially parallel to the separation vector ${\bf r}$, represented by the long negative tail in $r^2\widetilde{G}_L(r).$ We have proposed a heuristic explanation for this interesting effect in terms of twisting and looping of field lines (Figs.~11 and 12). \begin{figure}[h] \centerline{\includegraphics[width=9cm,height=7cm]{fig13.pdf}} \label{rsq_tildeG-fig} \caption{Contributions of line-correlations to the energy.
The integrands, $r^2\,\widetilde{G}_L(r)$ in \textcolor{blue}{blue} and $r^2\,\widetilde{G}_N(r)$ in \textcolor{green}{green}, for magnetic energy with a uniform seed field in eqs.(\ref{Ben-R-hom}),(\ref{Rint-def}).} \end{figure} As we shall see in a following paper \cite{Eyink11}, these principal conclusions apply not only to the Kazantsev-Kraichnan model with $Pr_m=0$ but also to the kinematic dynamo in real hydrodynamic turbulence with $Pr_m\sim 1.$ A second purpose of this paper was to discuss the meaningful distinction, if any, between ``fluctuation dynamo'' and ``magnetic induction.'' In the case of the KK model at $Pr_m=0$ and $Re_m=\infty,$ we have shown that a uniform mean field (or a uniform random field) may provide the seed field for small-scale fluctuation dynamo. The asymptotic exponential growth rate and the small-scale magnetic correlations are exactly the same as for any other random seed field whose correlation function is non-orthogonal to the leading dynamo mode. We therefore do not agree with authors \cite{BrandenburgSubramanian05,Schekochihinetal07,SchuesslerVoegler08} who distinguish between ``fluctuation dynamo'' and ``magnetic induction'' as two fundamentally different mechanisms. For example, Sch\"ussler and V\"ogler \cite{SchuesslerVoegler08} have proposed to explain small-scale magnetic fields in the quiet solar photosphere as a consequence of near-surface dynamo action. They mention magnetic induction as a possible alternative explanation, but dismiss it with the remark: `` `Shredding' of pre-existing magnetic flux (remnants of bipolar magnetic regions) cannot explain the large amount of observed horizontal flux since the turbulent cascade does not lead to an accumulation of energy (and generation of a spectral maximum) at small scales.
On the other hand, such a behavior is typical for turbulent dynamo action.'' We find in the KK model that, quite to the contrary, induction of a mean field produces the same small-scale fluctuations and energy spectra as does the fluctuation dynamo. We may indeed say that---at least in the kinematic regime of weak fields---``magnetic induction'' is nothing but ``fluctuation dynamo'' with a large-scale, deterministic (mean) seed field. Although Schekochihin et al. \cite{Schekochihinetal07} do make a distinction between two separate mechanisms, their final conclusion is not so different from ours. In their Fig.~8(a) they find, in a sub-threshold regime with $Re_m<Re_m^{(c)},$ that saturated energy spectra for induction from a uniform field and decaying spectra of the ``failed'' dynamo state {\it exactly} coincide, when normalized to the same total energy. They conclude that ``the same mechanism is responsible for setting the shape of the spectrum of the magnetic fluctuations induced by a mean field and of the decaying or growing such fluctuations in the absence of a mean field,'' just as we see in the KK model. We have also studied the KK model with velocity roughness exponent $\xi<1,$ where the $Pr_m=0$ fluctuation dynamo does not exist, and with a uniform magnetic seed field in order to get some insight into the physics of the induction mechanism in a failed-dynamo regime. This is very roughly the same situation that was studied in several liquid-sodium experiments with $Pr_m\lesssim 10^{-5}$ \cite{Odieretal98,Bourgoinetal02, Peffleyetal00,Spenceetal06,Nornbergetal06} and in related numerical simulations \cite{Baylissetal07,Schekochihinetal07} at somewhat larger Prandtl numbers. Some of those studies have reported observing a $k^{-5/3}$ spectrum of magnetic fluctuations in the velocity inertial-range \cite{Peffleyetal00,Nornbergetal06, Baylissetal07}, while others have reported a $k^{-1}$ spectrum \cite{Bourgoinetal02, Schekochihinetal07}.
The very least we can conclude from our analysis is that the explanations offered for those spectra based on inertial-range turbulence physics do not successfully explain the results in the KK model, even where those explanations should apparently apply. An exception is the Golitsyn-Moffatt argument for the spectrum at wavenumbers $k>k_\eta,$ which succeeds (with slight modification) in the KK model. At intermediate wavenumbers $k_L(t)<k<k_\eta,$ we find in the KK model a range with input of magnetic energy by induction of the mean field balanced by nonlinear stretching and cascade to the resistive scale. This is the same physics invoked in the theory of Ruzma\u{\i}kin and Shukurov \cite{RuzmaikinShukurov82}, who predicted a spectrum $\langle B^2_0\rangle k^{-1}$ on dimensional grounds. However, this prediction is only verified in the KK model for $\xi=0,$ where nonlinear cascade can be accurately modelled as an eddy-diffusivity. For the KK model with $0<\xi<1$ this prediction is modified by a large anomalous exponent $|\zeta_1|$ (with $|\zeta_1|=2$ for $\xi=1$) which must be calculated by a non-perturbative argument. At low wavenumbers $k<k_L(t)$ in the KK model we find a range with continuous growth of magnetic energy spectrum supplied by induction of the uniform field. Unlike the argument in \cite{Peffleyetal00, Nornbergetal06,Baylissetal07}, there is no balance with large-scale advection, although such effects exist in the model. Indeed, it is very unclear to us how large-scale advection, which conserves magnetic energy, can by itself balance the input from induction. We shall not attempt here to explain in detail the induced spectra observed in the experiments and simulations, except to note that they must involve large-scale physics outside inertial-range scales. This was, in fact, the point of view of Bourgoin et al. \cite{Bourgoinetal02} who explained their $k^{-1}$ spectrum by global fluctuations of the flow pattern. 
Large-scale effects are certainly necessary to obtain saturation of the energy. If the length-scale $L(t)$ in (\ref{IC-spectrum}) had a finite limit for large times, e.g., for turbulence confined to a box of size $L_B,$ then the system should reach a statistical steady-state, just as in the randomly-forced case \cite{Vergassola96}. Otherwise, magnetic energy would continue to grow without bound within a kinematic description. In such a steady-state, the only extended spectral ranges would be the nonlinear range (\ref{IC-spectrum}) and the Golitsyn range (\ref{resist-spectrum}). The latter is well-confirmed in experiments \cite{Odieretal98,Bourgoinetal02,Nornbergetal06} and simulations \cite{Baylissetal07,Schekochihinetal07}. If the $k^{-1}$ or $k^{-5/3}$ spectra observed at larger scales can be understood at all in terms of inertial-range turbulence, then an analogue of the nonlinear range (\ref{IC-spectrum}) seems the most likely explanation. If so, then the determination of the precise exponent is a hard theoretical problem, for there may be large anomalous scaling corrections to the dimensional Ruzma\u{\i}kin-Shukurov $k^{-1}$ spectrum. The precise spectral exponent of the induced magnetic fluctuations in the liquid-metal experiments is open to debate. Over the limited scaling ranges available it is quite difficult to distinguish between $-5/3$ and $-1$ exponents, and the empirical spectra can be fitted about equally well by either power law, or by others. It is not even clear that the concept of ``scaling exponent'' is entirely well-defined without the possibility of extending the putative scaling ranges.
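The difficulty can be quantified with a toy example. A minimal numerical sketch (synthetic data, not the experimental spectra; the intermediate exponent $-4/3$ is chosen purely for illustration): over a single decade, a pure $k^{-4/3}$ spectrum deviates from the best-fitting $k^{-1}$ and $k^{-5/3}$ power laws by at most $1/6$ of a decade, and by exactly the same amount for both, so either exponent describes it about equally well.

```python
import numpy as np

# Synthetic illustration (not the experimental data): over one decade,
# a k^(-4/3) spectrum is fit almost equally well by k^(-1) and k^(-5/3).
k = np.logspace(0.0, 1.0, 32)            # one decade of "scaling range"
logk = np.log10(k)
logE = -4.0 / 3.0 * logk                 # hypothetical intermediate slope

def max_log_misfit(p):
    """Least-squares amplitude for a fixed exponent p, then the
    worst-case deviation (in decades) of that power law."""
    c = np.mean(logE - p * logk)
    return np.max(np.abs(logE - (c + p * logk)))

d1 = max_log_misfit(-1.0)
d53 = max_log_misfit(-5.0 / 3.0)
print(d1, d53)  # both 1/6 of a decade: under a factor ~1.5 everywhere
```

A misfit of this size is typically comparable to the scatter of measured spectra over such a short range, which is why the two candidate exponents cannot be discriminated empirically.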
Our three spectral ranges (\ref{resist-spectrum}), (\ref{IC-spectrum}), and (\ref{VL-spectrum}) can be made arbitrarily long by adjusting parameters, but, in the experiments and simulations, only the Golitsyn spectral range can be lengthened by lowering $Pr_m.$ The low-wavenumber spectral ranges can be made longer only by increasing $Re_m,$ but, beyond a critical value $Re_{m}^{(c)},$ the dynamo effect sets in and the physical phenomena change. \newpage
\section*{SM1. Eigenstates and eigenvalues of $\tilde H_\text{int}$ for different combinations of interaction terms} Table \ref{tab:eigen} lists all the eigenvectors and eigenvalues of the local Hamiltonians with the different interaction terms discussed in the main text. We use four bits with values 0 or 1 to represent the Fock states $|(b\!\uparrow)(b\!\downarrow)(a\!\uparrow)(a\!\downarrow)\rangle$, where the first (last) two bits give the occupations in the bonding (antibonding) orbital with spin $\uparrow$ and $\downarrow$, respectively. The eigenvalues for the different models and the parameters considered in the main text are plotted in Fig.~\ref{fig:eval}. \begin{table*}[htp] \centering \caption{Eigenvectors and eigenvalues of the local Hamiltonian with different interaction terms.} \label{tab:eigen} \medskip \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{3}{*}{Index} & \multirow{3}{*}{Eigenvector} & \multicolumn{4}{c|}{Interaction Form, Eigenvalue}\tabularnewline \cline{3-6} & & $U_{c}=U^{\prime}=J_{S}=J_{P}=\frac{U}{2}$ & $U_{c}=J_{S}=J_{P}=\frac{U}{2}$ & $U_{c}=U^{\prime}=\frac{U}{2}$ & $U_{c}=\frac{U}{2}$ \tabularnewline & & $$ & $U^{\prime}=0$ & $J_{S}=J_{P}=0$ & $U^{\prime}=J_{S}=J_{P}=0$ \tabularnewline \hline \hline 1 & $|0\rangle$ & 0 & 0 & 0 & 0 \tabularnewline \hline 2,3 & $|1000\rangle,|0100\rangle$ & $-\mu$ & $-\mu$ & $-\mu$ & $-\mu$ \tabularnewline \hline 4,5 & $|0010\rangle,|0001\rangle$ & $-\mu$ & $-\mu$ & $-\mu$ & $-\mu$ \tabularnewline \hline 6,7 & $\frac{1}{\sqrt{2}}\left(|1100\rangle\pm|0011\rangle\right)$ & $U_{c}\pm J_{P}-2\mu$ & $U_{c}\pm J_{P}-2\mu$ & $U_{c}-2\mu$ & $U_{c}-2\mu$ \tabularnewline \hline 8,9 & $\frac{1}{\sqrt{2}}\left(|1001\rangle\mp|0110\rangle\right)$ & $U^{\prime}\pm J_{S}-2\mu$ & $\pm J_{S}-2\mu$ & $U^{\prime}-2\mu$ & $-2\mu$ \tabularnewline \hline 10,11 & $|0101\rangle,|1010\rangle$ & $-2\mu$ & $-2\mu$ & $-2\mu$ & $-2\mu$ \tabularnewline \hline 12,13 & $|1110\rangle,|1101\rangle$ &
$U_{c}+U^{\prime}-3\mu$ & $U_{c}-3\mu$ & $U_{c}+U^{\prime}-3\mu$ & $U_{c}-3\mu$ \tabularnewline \hline 14,15 & $|1011\rangle,|0111\rangle$ & $U_{c}+U^{\prime}-3\mu$ & $U_{c}-3\mu$ & $U_{c}+U^{\prime}-3\mu$ & $U_{c}-3\mu$ \tabularnewline \hline $16$ & $|1111\rangle$ & $2U_{c}+2U^{\prime}-4\mu$ & $2U_{c}-4\mu$ & $2U_{c}+2U^{\prime}-4\mu$ & $2U_{c}-4\mu$\tabularnewline \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[clip,width=6.8in,angle=0]{sm_eval.png} \caption{ Distribution of eigenvalues for all states listed in Tab.~\ref{tab:eigen}. The minimum value for each model is shifted to 0 for ease of comparison. (a) Result for all interaction terms (column $\#3$ in Tab.~\ref{tab:eigen}). (b) Result for the $U_c$, $J_S$ and $J_P$ terms (column $\#4$ in Tab.~\ref{tab:eigen}). (c) Result for the $U_c$ and $U^\prime$ terms (column $\#5$ in Tab.~\ref{tab:eigen}). (d) Result for the $U_c$ term only (column $\#6$ in Tab.~\ref{tab:eigen}). The parameters are $U=-4$, $\mu=-2$, $t_4=0$. } \label{fig:eval} \end{figure*} \section*{SM2. SC gap versus Mott gap} A gap opens in the spectrum of both a Mott insulator and a superconductor. As discussed in the main text, the spectra in Fig.~2(d) correspond to a SC state. In Fig.~\ref{fig:Akw_HubI_vs_DMFTSC}(a) we show the spectrum $A(\bf{k},\omega)$ in the normal phase, obtained with the Hubbard-I approximation, which employs an atomic-limit self-energy. In this case, the flat band is Mott insulating with a gap of size $U$, whereas the wide band shows only a tiny gap. However, the spectral function in the SC phase obtained with DMFT is distinctly different, as shown in Fig.~\ref{fig:Akw_HubI_vs_DMFTSC}(b) (reproduced from Fig.~2(d)). The gap of the flat band in the SC phase ($\sim0.4U$) is significantly smaller than the Mott gap $U$ of the atomic limit and equal to the gap of the wide band. Hence, we can identify the gap in Fig.~2(d) as a SC gap rather than a Mott gap.
In the DMFT spectra one can, however, also notice two dispersionless features at higher energies (with a splitting $\sim 0.8U$), as clearly seen in the local spectrum shown in Fig.~\ref{fig:Akw_HubI_vs_DMFTSC}(c), which we interpret as the Hubbard bands. \begin{figure*} \includegraphics[clip,width=6.8in,angle=0]{SM_HubI_akw_Um1_n2-01.png} \caption{ The momentum-resolved spectral function $A(\bf{k},\omega)$ for $W_\alpha=8$ and $W_\beta=0$. (a) $A(\bf{k},\omega)$ calculated using the Hubbard-I approximation in the normal phase. (b) $A(\bf{k},\omega)$ and (c) $A(\bf{k}=\Gamma,\omega)$ calculated by DMFT in the SC phase. } \label{fig:Akw_HubI_vs_DMFTSC} \end{figure*} In Fig.~\ref{fig:Aw_Aanow_T0p025_gapsize}, we plot both the normal spectral function $A(\omega)$ and the anomalous spectral function $A^\mathrm{ano}(\omega)$, obtained by using the auxiliary \cite{Reymbaut2015} maximum entropy \cite{Jarrell1996} method. We link the peak positions in $A(\omega)$ and $A^\mathrm{ano}(\omega)$ by vertical dashed lines to show that they match. These results demonstrate that the SC gap size of the $\beta$ band is the same as that of the $\alpha$ band, even though the order parameters are different ($\Delta_\beta>\Delta_\alpha$). \begin{figure*} \includegraphics[clip,width=6.8in,angle=0]{Aw_Aanow_T0p025_gapsize-01.png} \caption{ Normal spectral function $A(\omega)$ and anomalous spectral function $A^\mathrm{ano}(\omega)$ for the indicated band-width ratio (red line: bonding orbital $\alpha$, green line: anti-bonding orbital $\beta$). The vertical dashed lines mark the peak positions. Here, $T=0.025$. } \label{fig:Aw_Aanow_T0p025_gapsize} \end{figure*} \section*{SM3. SC order parameters and determination of $T_c$} Fig.~\ref{fig:Delta_Tx}(a) [(c)] shows the superconducting order parameter $\Delta_\alpha$ for the wide band and $\Delta_\beta$ for the narrow band as a function of $T$ at $W_\beta/W_\alpha=1$ [=0].
As one decreases $T$, both bands become superconducting simultaneously ($\Delta_\alpha$ and $\Delta_\beta$ become nonzero at the same $T$), which means that $T_c$ is the same for both bands. Since the transition from the normal phase to the SC phase is expected to be second order, we plot $\Delta^2$ as a function of $T$ in panels (b,d). By extrapolating $\Delta^2$ linearly in $T$, we determine $T_c$ from the intersection with the $T$-axis, as shown by the black dashed lines in panels (b,d). \begin{figure*} \includegraphics[clip,width=7.2in,angle=0]{Delta_Tx-01.png} \caption{ The superconducting order parameter $\Delta$ as a function of $T$. (a) $\Delta$ and (b) $\Delta^2$ for $W_\beta/W_\alpha=1$. (c) $\Delta$ and (d) $\Delta^2$ for $W_\beta/W_\alpha=0$. } \label{fig:Delta_Tx} \end{figure*} \section*{SM4. Spectra in the bilayer Hubbard model and single band Hubbard model} In Fig.~\ref{fig:spectra_2L_vs_1L}, we compare the normal-phase spectra of the bilayer Hubbard model and the single-band Hubbard model at small $W_\beta/W_\alpha$ (the band width in the single-band Hubbard model is $W\equiv W_\beta$; $W_\alpha=8$ is kept constant). At $W_\beta/W_\alpha=0.1$ (panel (a)), the bilayer Hubbard model is in a good metallic state with a sharp peak in $A_\beta(\omega)$ (green line), while the single-band model is already in the Mott insulating phase (blue line). At $W_\beta/W_\alpha=0.05$ (panel (b)), the bilayer Hubbard model becomes a bad metal, with the anti-bonding orbital pseudo-gapped. The single-band Hubbard model, with a clear gap and well-separated Hubbard bands, becomes more insulating. \begin{figure*} \includegraphics[clip,width=4.0in,angle=0]{normal_Aw_bilayer_singlelayer-01.png} \caption{ Spectral functions near the Fermi energy in the normal phase of the bilayer (BL.) Hubbard model (red line: bonding orbital $\alpha$, green line: anti-bonding orbital $\beta$) and the single (SL.) band Hubbard model (blue line) for the indicated band width ratio.
Here, $T=0.025$. } \label{fig:spectra_2L_vs_1L} \end{figure*} \section*{SM5. Normal and Anomalous Worm Sampling} Before presenting a detailed discussion of the `anomalous' worm sampling, we briefly review the conventional worm sampling method for the hybridization-expansion continuous-time quantum Monte Carlo algorithm (CT-HYB). This method was introduced by Gunacker {\it et al.} \cite{Gunacker2015,Gunacker2016} to measure the one- and two-particle normal Green's functions with high precision. It is particularly useful in the atomic limit, where the hybridization function vanishes. In this case, the standard measurement procedure for the normal Green's function, which is based on removing hybridization lines from diagrams in the configuration space $\mathcal{C}_{Z}$ of the partition function $Z$, cannot be applied. Worm sampling overcomes this limitation by extending the configuration space to $\mathcal{C}=\mathcal{C}_{Z}\oplus \mathcal{C}_{G^{(1)}}$, where $\mathcal{C}_{G^{(1)}}$ is the configuration space of the modified ``partition function'' $Z_{G^{(1)}}$, which is obtained by integrating over all degrees of freedom of the normal Green's function $G_{\alpha_{1}\alpha_{2}}(\tau_{1},\tau_{2})=-\langle\mathcal{T}_{\tau}c_{\alpha_{1}}(\tau_{1})c_{\alpha_{2}}^{\dagger}(\tau_{2})\rangle$, \begin{equation} Z_{G^{(1)}}=\sum_{\alpha_{1}\alpha_{2}}\int d\tilde{\tau}_{1}d\tilde{\tau}_{2}G_{\alpha_{1}\alpha_{2}}(\tilde{\tau}_{1},\tilde{\tau}_{2}). \end{equation} This allows us to define the extended partition function $W=Z+\eta_{G^{(1)}}Z_{G^{(1)}}$, where $\eta_{G^{(1)}}$ is a weighting factor. The difference between a configuration in $\mathcal{C}_{Z}$ and a configuration in $\mathcal{C}_{G^{(1)}}$ is that the latter has no hybridization lines attached to the operators $c_{\alpha_{1}}(\tau_{1})$ and $c_{\alpha_{2}}^{\dagger}(\tau_{2})$, which are called worm operators.
We will refer to the Monte Carlo sampling in the \emph{extended} configuration space as (normal) worm sampling. The insertion and removal of worm operators are the two basic updates necessary for an ergodic sampling. Worm replacement/shift updates can further be used to reduce the auto-correlation time. \begin{figure*} \includegraphics[clip,width=4.8in,angle=0]{worm_sampling-01.png} \caption{ Schematic illustration of the worm sampling in the extended configuration space, which includes the partition function space $\mathcal{C}_Z$ (red), the anomalous worm space $\mathcal{C}_{F^{(1)}}$ (blue) and the normal worm space $\mathcal{C}_{G^{(1)}}$ (green). The weights of the worm space configurations are rescaled by the corresponding weighting factors $\eta_{F^{(1)}}$ and $\eta_{G^{(1)}}$. } \label{fig:worm} \end{figure*} In the `anomalous worm sampling', we measure $F_{\alpha_{1}\alpha_{2}}(\tau_{1},\tau_{2})=-\langle\mathcal{T}_{\tau}c_{\alpha_{1}}(\tau_{1})c_{\alpha_{2}}(\tau_{2})\rangle$, the one-particle anomalous Green's function, using a worm algorithm. We extend the configuration space to include not only the normal worm space but also the anomalous worm space, \begin{equation} \mathcal{C}=\mathcal{C}_{Z}\oplus\mathcal{C}_{G^{(1)}}\oplus\mathcal{C}_{F^{(1)}}, \end{equation} where we also define a modified ``partition function'' $Z_{{F^{(1)}}}$ associated with the configuration space $\mathcal{C}_{F^{(1)}}$ by \begin{equation} Z_{{F^{(1)}}}=\sum_{\alpha_{1}\alpha_{2}}\int d\tilde{\tau}_{1}d\tilde{\tau}_{2}F_{\alpha_{1}\alpha_{2}}(\tilde{\tau}_{1},\tilde{\tau}_{2}). \end{equation} The partition function of the extended configuration space reads \begin{equation} W=Z+\eta_{G^{(1)}}Z_{{G^{(1)}}}+\eta_{F^{(1)}}Z_{{F^{(1)}}}.
\end{equation} As illustrated in Fig.~\ref{fig:worm}, we implement Monte Carlo updates between the partition function space $\mathcal{C}_Z$ and one of the two worm spaces by worm operator insertion/removal updates, although direct transitions between the two worm spaces would also be possible in principle. The updates within each subspace depend on its structure. Generally, simple pair insertion/removal updates are sufficient for ergodicity. In practice, however, one finds that worm replacement and shift updates \cite{Gunacker2015} help to reduce the auto-correlation time. Additional updates may be necessary if there is a symmetry breaking. For example, S$\acute{\text{e}}$mon {\it et al.} \cite{Semon2014} showed that it is necessary to perform four-operator updates in the $d$-wave superconducting state of a single-band repulsive Hubbard model. In the following, we will show that also in the present attractive-$U$ two-orbital Hubbard system, four-operator updates are necessary in all subspaces in the SC phase. Furthermore, additional updates are needed to sample $Z_{{F^{(1)}}}$ due to the imbalanced number of creation and annihilation operators for each spin flavor. \subsection*{CT-HYB in the Nambu formalism} Let us first recall the CT-HYB algorithm in a Nambu formulation appropriate for intra-orbital spin-singlet pairing. \subsubsection*{Hamiltonian} In DMFT, the correlated lattice model in the SC phase is mapped to an Anderson impurity model with a self-consistently determined SC bath \begin{equation} H_{\mathrm{AIM}}=H_{\mathrm{loc}}+H_{\mathrm{bath}}^{\mathrm{SC}}+H_{\mathrm{hyb}}.
\end{equation} We consider the generalized Kanamori local Hamiltonian \begin{align} H_{\mathrm{loc}} & =\sum_{j\sigma}E_{j}\hat{n}_{j\sigma}+U_{c}\sum_{j=1}^{M}\hat{n}_{j\uparrow}\hat{n}_{j\downarrow}+U^{\prime}\sum_{j\ne j^{\prime}}\hat{n}_{j\uparrow}\hat{n}_{j^{\prime}\downarrow}+U^{\prime\prime}\sum_{j>j^{\prime},\sigma}\hat{n}_{j\sigma}\hat{n}_{j^{\prime}\sigma}\nonumber\\ & -J_{S}\sum_{j\ne j^{\prime}}c_{j\uparrow}^{\dagger}c_{j\downarrow}c_{j^{\prime}\downarrow}^{\dagger}c_{j^{\prime}\uparrow}+J_{P}\sum_{j\ne j^{\prime}}c_{j\uparrow}^{\dagger}c_{j\downarrow}^{\dagger}c_{j^{\prime}\downarrow}c_{j^{\prime}\uparrow}, \end{align} where $j$ runs from 1 to the number of localized orbitals $M$ per site. For a $t_{2g}$ shell with spin rotational invariance, we have $U_{c}=U$, $U^{\prime}=U-2J$, $U^{\prime\prime}=U-3J$ and $J_{S}=J_{P}=J$. In the case of the bilayer Hubbard model in the bonding-antibonding basis, the parameters are $U_{c}=U^{\prime}=J_{S}=J_{P}=U/2$ and $U^{\prime\prime}=0$. The Hamiltonian of the SC bath reads \begin{equation} H_{\mathrm{bath}}^{\mathrm{SC}}=\sum_{k\alpha\sigma}(\epsilon_{k\alpha}-\mu)f_{k\alpha\sigma}^{\dagger}f_{k\alpha\sigma}+\sum_{k\alpha}\varDelta_{k\alpha}f_{k\alpha\uparrow}^{\dagger}f_{-k\alpha\downarrow}^{\dagger}+\sum_{k\alpha}\varDelta_{k\alpha}^{*}f_{-k\alpha\downarrow}f_{k\alpha\uparrow}, \end{equation} where $\epsilon_{k\alpha}$ is the energy spectrum of the conduction electrons with momentum $k$, band index $\alpha$ and spin index $\sigma=\uparrow,\downarrow$, and $\varDelta_{k\alpha}$ is the pairing amplitude.
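As a cross-check of the bilayer parameter set $U_{c}=U^{\prime}=J_{S}=J_{P}=U/2$, $U^{\prime\prime}=0$, the local Hamiltonian can be diagonalized exactly on the 16-dimensional Fock space (with $E_{j}=0$ and a chemical-potential term $-\mu\hat{N}$, as in Tab.~\ref{tab:eigen} of SM1). A minimal numpy sketch (a Jordan-Wigner construction written for this note, not part of any production code):

```python
import numpy as np
from functools import reduce

# Fermionic operators on 4 modes via Jordan-Wigner, mode order
# (b up, b down, a up, a down), matching the Fock states of Tab. SM1.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                  # Jordan-Wigner string factor
CR = np.array([[0.0, 0.0], [1.0, 0.0]])   # single-mode creation |0> -> |1>

def cdag(i, nmodes=4):
    return reduce(np.kron, [Z] * i + [CR] + [I2] * (nmodes - i - 1))

def c(i, nmodes=4):
    return cdag(i, nmodes).T

def n(i):
    return cdag(i) @ c(i)

# bilayer parameters in the bonding/antibonding basis
U, mu = -4.0, -2.0
Uc = Up = Js = Jp = U / 2.0
bu, bd, au, ad = 0, 1, 2, 3

H = -mu * sum(n(i) for i in range(4))               # chemical potential
H = H + Uc * (n(bu) @ n(bd) + n(au) @ n(ad))        # intra-orbital U_c
H = H + Up * (n(bu) @ n(ad) + n(au) @ n(bd))        # inter-orbital U'
sf = cdag(bu) @ c(bd) @ cdag(ad) @ c(au)            # spin flip J_S
H = H - Js * (sf + sf.T)
ph = cdag(bu) @ cdag(bd) @ c(ad) @ c(au)            # pair hopping J_P
H = H + Jp * (ph + ph.T)

evals = np.sort(np.linalg.eigvalsh(H))
print(np.round(evals, 10))  # four 0's, eight 2's, four 4's
```

For $U=-4$, $\mu=-2$ this reproduces the multiset of eigenvalues read off from the third column of Tab.~\ref{tab:eigen}: four states at $0$, eight at $2$, and four at $4$.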
The Nambu-spinors for the conduction electrons are defined as \begin{equation} \hat{\Psi}_{k}^{\dagger}=\left[\begin{array}{ccccc} f_{k\alpha\uparrow}^{\dagger}, & f_{-k\alpha\downarrow}, & f_{k\beta\uparrow}^{\dagger}, & f_{-k\beta\downarrow}, & \cdots\end{array}\right]\equiv\left[\begin{array}{ccc} \hat{\Psi}_{k\alpha}^{\dagger}, & \hat{\Psi}_{k\beta}^{\dagger}, & \cdots\end{array}\right], \end{equation} and $\hat{\Psi}_{k}=\left[\begin{array}{ccccc} f_{k\alpha\uparrow}, & f_{-k\alpha\downarrow}^{\dagger}, & f_{k\beta\uparrow}, & f_{-k\beta\downarrow}^{\dagger}, & \cdots\end{array}\right]^{T}$. $H_{\mathrm{bath}}^{\mathrm{SC}}$ can be expressed in the compact form \begin{align} H_{\mathrm{bath}}^{\mathrm{SC}} & =\sum_{k\alpha}\hat{\Psi}_{k\alpha}^{\dagger}\hat{E}_{k\alpha}\hat{\Psi}_{k\alpha}=\sum_{k\alpha}\left[\begin{array}{cc} f_{k\alpha\uparrow}^{\dagger}, & f_{-k\alpha\downarrow}\end{array}\right]\left[\begin{array}{cc} \epsilon_{k\alpha}-\mu & \varDelta_{k\alpha}\\ \varDelta_{k\alpha}^{*} & -\epsilon_{-k\alpha}+\mu \end{array}\right]\left[\begin{array}{c} f_{k\alpha\uparrow}\\ f_{-k\alpha\downarrow}^{\dagger} \end{array}\right]\nonumber\\ & =\sum_{k\alpha}\left[(\epsilon_{k\alpha}-\mu)f_{k\alpha\uparrow}^{\dagger}f_{k\alpha\uparrow}+(-\epsilon_{-k\alpha}+\mu)f_{-k\alpha\downarrow}f_{-k\alpha\downarrow}^{\dagger}+\varDelta_{k\alpha}f_{k\alpha\uparrow}^{\dagger}f_{-k\alpha\downarrow}^{\dagger}+\varDelta_{k\alpha}^{*}f_{-k\alpha\downarrow}f_{k\alpha\uparrow}\right] \end{align} with $\hat{E}_{k\alpha}=\left[\begin{array}{cc} \epsilon_{k\alpha}-\mu & \varDelta_{k\alpha}\\ \varDelta_{k\alpha}^{*} & -\epsilon_{-k\alpha}+\mu \end{array}\right]$.
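For a single Nambu block one can make the gapped nature of the SC bath explicit: assuming inversion-symmetric bands, $\epsilon_{-k\alpha}=\epsilon_{k\alpha}$, the eigenvalues of $\hat{E}_{k\alpha}$ follow from
\begin{equation*}
\det\big(\hat{E}_{k\alpha}-E\big)=0\quad\Longrightarrow\quad E=\pm\sqrt{(\epsilon_{k\alpha}-\mu)^{2}+|\varDelta_{k\alpha}|^{2}},
\end{equation*}
so each bath level is repelled from the Fermi energy by at least $|\varDelta_{k\alpha}|$.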
The hybridization term in the Nambu formalism becomes \begin{align} H_{\mathrm{hyb}} & =\sum_{k\alpha\sigma,j}\left[V_{k\alpha}^{j}f_{k\alpha,\sigma}^{\dagger}c_{j\sigma}+V_{k\alpha}^{j*}c_{j\sigma}^{\dagger}f_{k\alpha,\sigma}\right]\nonumber\\ & =\sum_{k\alpha,j}\left[V_{k\alpha}^{j}f_{k\alpha,\uparrow}^{\dagger}c_{j\uparrow}+V_{k\alpha}^{j}f_{k\alpha,\downarrow}^{\dagger}c_{j\downarrow}+V_{k\alpha}^{j*}c_{j\uparrow}^{\dagger}f_{k\alpha,\uparrow}+V_{k\alpha}^{j*}c_{j\downarrow}^{\dagger}f_{k\alpha,\downarrow}\right]\nonumber\\ & =\sum_{k\alpha,j}\left[V_{k\alpha}^{j}f_{k\alpha,\uparrow}^{\dagger}c_{j\uparrow}-V_{-k\alpha}^{j}c_{j\downarrow}f_{-k\alpha,\downarrow}^{\dagger}+V_{k\alpha}^{j*}c_{j\uparrow}^{\dagger}f_{k\alpha,\uparrow}-V_{-k\alpha}^{j*}f_{-k\alpha,\downarrow}c_{j\downarrow}^{\dagger}\right]\nonumber\\ & =\sum_{k\alpha,j}\left[V_{k\alpha}^{j}f_{k\alpha,\uparrow}^{\dagger}c_{j\uparrow}+V_{k\alpha}^{j*}c_{j\uparrow}^{\dagger}f_{k\alpha,\uparrow}-V_{-k\alpha}^{j}c_{j\downarrow}f_{-k\alpha,\downarrow}^{\dagger}-V_{-k\alpha}^{j*}f_{-k\alpha,\downarrow}c_{j\downarrow}^{\dagger}\right]\nonumber\\ & \equiv\hat{V}_{\uparrow}+\hat{V}_{\uparrow}^{\dagger}+\hat{V}_{\downarrow}+\hat{V}_{\downarrow}^{\dagger}. \label{eq:Hyb_V} \end{align} \subsubsection*{Partition function} In CT-HYB, we treat $H_{\mathrm{hyb}}$ as the perturbation and expand $Z=\mathrm{Tr}e^{-\beta H}$ in terms of $H_{\mathrm{hyb}}$ as \begin{align} Z & =\mathrm{Tr}\left[e^{-\beta(H_{\mathrm{loc}}+H_{\mathrm{bath}}^{\mathrm{SC}})}\mathcal{T}_{\tau}e^{-\int_{0}^{\beta}H_{\mathrm{hyb}}(\tau)d\tau}\right]\nonumber\\ & =\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{n!}\int_{0}^{\beta}d\tau_{1}\cdots\int_{0}^{\beta}d\tau_{n}\mathrm{Tr}\left[\mathcal{T}_{\tau}e^{-\beta(H_{\mathrm{loc}}+H_{\mathrm{bath}}^{\mathrm{SC}})}H_{\mathrm{hyb}}\left(\tau_{1}\right)\cdots H_{\mathrm{hyb}}\left(\tau_{n}\right)\right]. 
\end{align} The particle number as well as spin conservation on the local atom requires that the terms with non-zero contribution to $Z$ must contain an equal number of $\hat{V}_{\sigma}$ and $\hat{V}_{\sigma}^{\dagger}$ as \begin{equation} Z=\sum_{n=0}^{\infty}\int_{0}^{\beta}d\tau_{1}^{\uparrow}\cdots\int_{\tau_{n-1}^{\uparrow}}^{\beta}d\tau_{n}^{\uparrow}\int_{0}^{\beta}d\tau_{1}^{\uparrow\prime}\cdots\int_{\tau_{n-1}^{\uparrow\prime}}^{\beta}d\tau_{n}^{\uparrow\prime}\sum_{m=0}^{\infty}\int_{0}^{\beta}d\tau_{1}^{\downarrow}\cdots\int_{\tau_{m-1}^{\downarrow}}^{\beta}d\tau_{m}^{\downarrow}\int_{0}^{\beta}d\tau_{1}^{\downarrow\prime}\cdots\int_{\tau_{m-1}^{\downarrow\prime}}^{\beta}d\tau_{m}^{\downarrow\prime}w_{\mathrm{trace}} \end{equation} with \begin{equation} w_{\mathrm{trace}}=\mathrm{Tr}\left[\mathcal{T}_{\tau}e^{-\beta(H_{\mathrm{loc}}+H_{\mathrm{bath}}^{\mathrm{SC}})}\hat{V}_{\uparrow}(\tau_{n}^{\uparrow})\hat{V}_{\uparrow}^{\dagger}(\tau_{n}^{\uparrow\prime})\cdots\hat{V}_{\uparrow}(\tau_{1}^{\uparrow})\hat{V}_{\uparrow}^{\dagger}(\tau_{1}^{\uparrow\prime})\cdot\hat{V}_{\downarrow}(\tau_{m}^{\downarrow})\hat{V}_{\downarrow}^{\dagger}(\tau_{m}^{\downarrow\prime})\cdots\hat{V}_{\downarrow}(\tau_{1}^{\downarrow})\hat{V}_{\downarrow}^{\dagger}(\tau_{1}^{\downarrow\prime})\right]. 
\end{equation} After substituting $\hat{V}_{\sigma}^{(\dagger)}$ as given in Eq.~(\ref{eq:Hyb_V}) and separating the bath and impurity operators, we obtain \begin{align} w_{\mathrm{trace}}= & \sum_{k_{n}\alpha_{n},j_{n}}\sum_{k_{n}^{\prime}\alpha_{n}^{\prime},j_{n}^{\prime}}\cdots\sum_{k_{1}\alpha_{1},j_{1}}\sum_{k_{1}^{\prime}\alpha_{1}^{\prime},j_{1}^{\prime}}\sum_{\tilde{k}_{m}\tilde{\alpha}_{m},\tilde{j}_{m}}\sum_{\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime},\tilde{j}_{m}^{\prime}}\cdots\sum_{\tilde{k}_{1}\tilde{\alpha}_{1},\tilde{j}_{1}}\sum_{\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime},\tilde{j}_{1}^{\prime}}\nonumber\\ \times & V_{k_{n}\alpha_{n}}^{j_{n}}V_{k_{n}^{\prime}\alpha_{n}^{\prime}}^{j_{n}^{\prime}*}\cdots V_{k_{1}\alpha_{1}}^{j_{1}}V_{k_{1}^{\prime}\alpha_{1}^{\prime}}^{j_{1}^{\prime}*}V_{-\tilde{k}_{m}\tilde{\alpha}_{m}}^{\tilde{j}_{m}}V_{-\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime}}^{\tilde{j}_{m}^{\prime}*}\cdots V_{-\tilde{k}_{1}\tilde{\alpha}_{1}}^{\tilde{j}_{1}}V_{-\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime}}^{\tilde{j}_{1}^{\prime}*}\nonumber\\ \times & \mathrm{Tr}_{f}[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{k_{n}\alpha_{n},\uparrow}^{\dagger}(\tau_{n}^{\uparrow})f_{k_{n}^{\prime}\alpha_{n}^{\prime},\uparrow}(\tau_{n}^{\uparrow\prime})\cdots f_{k_{1}\alpha_{1},\uparrow}^{\dagger}(\tau_{1}^{\uparrow})f_{k_{1}^{\prime}\alpha_{1}^{\prime},\uparrow}(\tau_{1}^{\uparrow\prime})\nonumber\\ & \cdot f_{-\tilde{k}_{m}\tilde{\alpha}_{m},\downarrow}^{\dagger}(\tau_{m}^{\downarrow})f_{-\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime},\downarrow}(\tau_{m}^{\downarrow\prime})\cdots f_{-\tilde{k}_{1}\tilde{\alpha}_{1},\downarrow}^{\dagger}(\tau_{1}^{\downarrow})f_{-\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime},\downarrow}(\tau_{1}^{\downarrow\prime})]\nonumber\\ \times & \mathrm{Tr}_{c}[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}c_{j_{n}\uparrow}(\tau_{n}^{\uparrow})c_{j_{n}^{\prime}\uparrow}^{\dagger}(\tau_{n}^{\uparrow\prime})\cdots c_{j_{1}\uparrow}(\tau_{1}^{\uparrow})c_{j_{1}^{\prime}\uparrow}^{\dagger}(\tau_{1}^{\uparrow\prime})\nonumber\\ & \cdot c_{\tilde{j}_{m}\downarrow}(\tau_{m}^{\downarrow})c_{\tilde{j}_{m}^{\prime}\downarrow}^{\dagger}(\tau_{m}^{\downarrow\prime})\cdots c_{\tilde{j}_{1}\downarrow}(\tau_{1}^{\downarrow})c_{\tilde{j}_{1}^{\prime}\downarrow}^{\dagger}(\tau_{1}^{\downarrow\prime})]\nonumber\\ \equiv & Z_{\mathrm{bath}}^{\mathrm{SC}}\,w_{\mathrm{det}}\cdot w_{\mathrm{loc}} \end{align} with the local trace part, \begin{align} w_{\mathrm{loc}} = & \,\,\mathrm{Tr}_{c}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}c_{j_{n}\uparrow}(\tau_{n}^{\uparrow})c_{j_{n}^{\prime}\uparrow}^{\dagger}(\tau_{n}^{\uparrow\prime})\cdots c_{j_{1}\uparrow}(\tau_{1}^{\uparrow})c_{j_{1}^{\prime}\uparrow}^{\dagger}(\tau_{1}^{\uparrow\prime})\nonumber\\ & \times c_{\tilde{j}_{m}\downarrow}(\tau_{m}^{\downarrow})c_{\tilde{j}_{m}^{\prime}\downarrow}^{\dagger}(\tau_{m}^{\downarrow\prime})\cdots c_{\tilde{j}_{1}\downarrow}(\tau_{1}^{\downarrow})c_{\tilde{j}_{1}^{\prime}\downarrow}^{\dagger}(\tau_{1}^{\downarrow\prime})\Big], \end{align} and the determinant part obtained by applying Wick's theorem, \begin{align} w_{\mathrm{det}}=\det\Delta\equiv\frac{1}{Z_{\mathrm{bath}}^{\mathrm{SC}}} & \sum_{k_{n}\alpha_{n}}\sum_{k_{n}^{\prime}\alpha_{n}^{\prime}}\cdots\sum_{k_{1}\alpha_{1}}\sum_{k_{1}^{\prime}\alpha_{1}^{\prime}}\sum_{\tilde{k}_{m}\tilde{\alpha}_{m}}\sum_{\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime}}\cdots\sum_{\tilde{k}_{1}\tilde{\alpha}_{1}}\sum_{\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime}}\\ & V_{k_{n}\alpha_{n}}^{j_{n}}V_{k_{n}^{\prime}\alpha_{n}^{\prime}}^{j_{n}^{\prime}*}\cdots
V_{k_{1}\alpha_{1}}^{j_{1}}V_{k_{1}^{\prime}\alpha_{1}^{\prime}}^{j_{1}^{\prime}*}V_{-\tilde{k}_{m}\tilde{\alpha}_{m}}^{\tilde{j}_{m}}V_{-\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime}}^{\tilde{j}_{m}^{\prime}*}\cdots V_{-\tilde{k}_{1}\tilde{\alpha}_{1}}^{\tilde{j}_{1}}V_{-\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime}}^{\tilde{j}_{1}^{\prime}*}\nonumber\\ & \mathrm{Tr}_{f}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{k_{n}\alpha_{n},\uparrow}^{\dagger}(\tau_{n}^{\uparrow})f_{k_{n}^{\prime}\alpha_{n}^{\prime},\uparrow}(\tau_{n}^{\uparrow\prime})\cdots f_{k_{1}\alpha_{1},\uparrow}^{\dagger}(\tau_{1}^{\uparrow})f_{k_{1}^{\prime}\alpha_{1}^{\prime},\uparrow}(\tau_{1}^{\uparrow\prime})\nonumber\\ & \times f_{-\tilde{k}_{m}\tilde{\alpha}_{m},\downarrow}^{\dagger}(\tau_{m}^{\downarrow})f_{-\tilde{k}_{m}^{\prime}\tilde{\alpha}_{m}^{\prime},\downarrow}(\tau_{m}^{\downarrow\prime})\cdots f_{-\tilde{k}_{1}\tilde{\alpha}_{1},\downarrow}^{\dagger}(\tau_{1}^{\downarrow})f_{-\tilde{k}_{1}^{\prime}\tilde{\alpha}_{1}^{\prime},\downarrow}(\tau_{1}^{\downarrow\prime})\Big], \end{align} where $Z_{\mathrm{bath}}^{\mathrm{SC}}=\mathrm{Tr}_{f}\big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}\big]$. In general, $\Delta$ is not block diagonal and has non-zero elements between different orbital indices. In an appropriate basis, however, $\Delta$ can become block diagonal. In the following, we consider the situation where $\Delta$ is block diagonal in the orbital index, which is the case for the bilayer Hubbard model, \begin{equation} \det\Delta=\prod_{j=1}^{M}\det\Delta_{j}.
\end{equation} A given configuration in $Z$ contains $n_{j\sigma}^{\prime}$ local creation operators $\{c_{j\sigma}^{\dagger}(\tau_{1}^{(j,\sigma)\prime}),\cdots,c_{j\sigma}^{\dagger}(\tau_{n_{j\sigma}^{\prime}}^{(j,\sigma)\prime})\}$ and $n_{j\sigma}$ local annihilation operators $\{c_{j\sigma}(\tau_{1}^{(j,\sigma)}),\cdots,c_{j\sigma}(\tau_{n_{j\sigma}}^{(j,\sigma)})\}$ for orbital $j$ and spin $\sigma$. There are four types of hybridization matrix elements in $\Delta_{j}$. The normal elements for $j$ and $\uparrow$ read \begin{equation} \Delta_{\mathrm{nor}}^{(j\uparrow\uparrow)}(\tau_{l}^{(j\uparrow)\prime}-\tau_{m}^{(j\uparrow)})=\sum_{k_{l}^{\prime}\alpha_{l}^{\prime}}\sum_{k_{m}\alpha_{m}}V_{k_{l}^{\prime}\alpha_{l}^{\prime}}^{j*}V_{k_{m}\alpha_{m}}^{j}\mathrm{Tr}_{f}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{k_{m}\alpha_{m},\uparrow}^{\dagger}(\tau_{m}^{(j\uparrow)})f_{k_{l}^{\prime}\alpha_{l}^{\prime},\uparrow}(\tau_{l}^{(j\uparrow)\prime})\Big], \end{equation} while for $j$ and $\downarrow$ they are \begin{equation} -\Delta_{\mathrm{nor}}^{(j\downarrow\downarrow)}[-(\tau_{l}^{(j\downarrow)\prime}-\tau_{m}^{(j\downarrow)})]=\sum_{k_{l}^{\prime}\alpha_{l}^{\prime}}\sum_{k_{m}\alpha_{m}}V_{-k_{l}^{\prime}\alpha_{l}^{\prime}}^{j*}V_{-k_{m}\alpha_{m}}^{j}\mathrm{Tr}_{f}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{-k_{m}\alpha_{m},\downarrow}^{\dagger}(\tau_{m}^{(j\downarrow)})f_{-k_{l}^{\prime}\alpha_{l}^{\prime},\downarrow}(\tau_{l}^{(j\downarrow)\prime})\Big].
\end{equation} The anomalous element for $j\uparrow j\downarrow$ reads \begin{equation} \Delta_{\mathrm{ano}}^{(j\uparrow\downarrow)}(\tau_{l}^{(j\uparrow)\prime}-\tau_{m}^{(j\downarrow)\prime})=\sum_{k_{l}^{\prime}\alpha_{l}^{\prime}}\sum_{k_{m}^{\prime}\alpha_{m}^{\prime}}V_{k_{l}^{\prime}\alpha_{l}^{\prime}}^{j*}V_{-k_{m}^{\prime}\alpha_{m}^{\prime}}^{j*}\mathrm{Tr}_{f}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{-k_{m}^{\prime}\alpha_{m}^{\prime},\downarrow}(\tau_{m}^{(j\downarrow)\prime})f_{k_{l}^{\prime}\alpha_{l}^{\prime},\uparrow}(\tau_{l}^{(j\uparrow)\prime})\Big], \end{equation} and its counterpart for $j\downarrow j\uparrow$ is \begin{equation} \Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{l}^{(j\downarrow)}-\tau_{m}^{(j\uparrow)})=\sum_{k_{l}\alpha_{l}}\sum_{k_{m}\alpha_{m}}V_{-k_{l}\alpha_{l}}^{j}V_{k_{m}\alpha_{m}}^{j}\mathrm{Tr}_{f}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{bath}}^{\mathrm{SC}}}f_{k_{m}\alpha_{m},\uparrow}^{\dagger}(\tau_{m}^{(j\uparrow)})f_{-k_{l}\alpha_{l},\downarrow}^{\dagger}(\tau_{l}^{(j\downarrow)})\Big]. 
\end{equation} $\Delta_{j}$, a $(n_{j\uparrow}^{\prime}+n_{j\downarrow})\times(n_{j\uparrow}+n_{j\downarrow}^{\prime})$ matrix, can be expressed as \begin{equation} \Delta_{j}=\left[\begin{array}{cccccc} \Delta_{\mathrm{nor}}^{(j\uparrow\uparrow)}(\tau_{1}^{(j\uparrow)\prime}-\tau_{1}^{(j\uparrow)}) & \cdots & \Delta_{\mathrm{nor}}^{(j\uparrow\uparrow)}(\tau_{1}^{(j\uparrow)\prime}-\tau_{n_{j\uparrow}}^{(j\uparrow)}) & \Delta_{\mathrm{ano}}^{(j\uparrow\downarrow)}(\tau_{1}^{(j\uparrow)\prime}-\tau_{1}^{(j\downarrow)\prime}) & \cdots & \Delta_{\mathrm{ano}}^{(j\uparrow\downarrow)}(\tau_{1}^{(j\uparrow)\prime}-\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime})\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \Delta_{\mathrm{nor}}^{(j\uparrow\uparrow)}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime}-\tau_{1}^{(j\uparrow)}) & \cdots & \Delta_{\mathrm{nor}}^{(j\uparrow\uparrow)}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime}-\tau_{n_{j\uparrow}}^{(j\uparrow)}) & \Delta_{\mathrm{ano}}^{(j\uparrow\downarrow)}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime}-\tau_{1}^{(j\downarrow)\prime}) & \cdots & \Delta_{\mathrm{ano}}^{(j\uparrow\downarrow)}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime}-\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime})\\ \Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{1}^{(j\downarrow)}-\tau_{1}^{(j\uparrow)}) & \cdots & \Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{1}^{(j\downarrow)}-\tau_{n_{j\uparrow}}^{(j\uparrow)}) & -\Delta_{\mathrm{nor}}^{(j\downarrow\downarrow)}[-(\tau_{1}^{(j\downarrow)\prime}-\tau_{1}^{(j\downarrow)})] & \cdots & -\Delta_{\mathrm{nor}}^{(j\downarrow\downarrow)}[-(\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime}-\tau_{1}^{(j\downarrow)})]\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{n_{j\downarrow}}^{(j\downarrow)}-\tau_{1}^{(j\uparrow)}) & \cdots & \Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{n_{j\downarrow}}^{(j\downarrow)}-\tau_{n_{j\uparrow}}^{(j\uparrow)}) & -\Delta_{\mathrm{nor}}^{(j\downarrow\downarrow)}[-(\tau_{1}^{(j\downarrow)\prime}-\tau_{n_{j\downarrow}}^{(j\downarrow)})] & \cdots & -\Delta_{\mathrm{nor}}^{(j\downarrow\downarrow)}[-(\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime}-\tau_{n_{j\downarrow}}^{(j\downarrow)})] \end{array}\right]. \end{equation} $\Delta_{j}$ is a square matrix with $n_{j\uparrow}^{\prime}+n_{j\downarrow}=n_{j\uparrow}+n_{j\downarrow}^{\prime}$, but {\it not} necessarily $n_{j\sigma}=n_{j\sigma}^{\prime}$. This means that the number of creation operators for spin-orbital index $j\sigma$ can differ from the number of annihilation operators for the same index. In other words, if we write the hybridization matrix in a block form representing the normal and anomalous components, $\Delta_{j}=\left[\begin{array}{cc} A & B\\ C & D \end{array}\right]$, each submatrix can be a non-square matrix.
In the end, the partition function reads \begin{align} Z & =Z_{\mathrm{bath}}^{\mathrm{SC}}\left[\prod_{j=1}^{M}\prod_{\sigma=\uparrow,\downarrow}\sum_{n_{j\sigma},n_{j\sigma}^{\prime}=0}^{\infty}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)}\cdots\int_{\tau_{n_{j\sigma}-1}^{(j\sigma)}}^{\beta}d\tau_{n_{j\sigma}}^{(j\sigma)}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)\prime}\cdots\int_{\tau_{n_{j\sigma}^{\prime}-1}^{(j\sigma)\prime}}^{\beta}d\tau_{n_{j\sigma}^{\prime}}^{(j\sigma)\prime}\det\Delta_{j}\right]\nonumber\\ & \times\mathrm{Tr}_{c}\Big[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}\prod_{j=1}^{M}c_{j\uparrow}(\tau_{n_{j\uparrow}}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime})\cdots c_{j\uparrow}(\tau_{1}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{1}^{(j\uparrow)\prime})\times c_{j\downarrow}(\tau_{n_{j\downarrow}}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime})\cdots c_{j\downarrow}(\tau_{1}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{1}^{(j\downarrow)\prime})\Big]s_{c}, \end{align} where $s_{c}$ is the permutation sign arising from grouping the $\{c,c^{\dagger}\}$ operators by their orbital indices for the sake of clarity. \subsubsection*{Updates within $\mathcal{C}_Z$} The normal pair insertion/removal update is a simple and necessary update, which involves one creation operator $c_{j\sigma}^{\dagger}(\tau^{(j\sigma)\prime})$ and one annihilation operator $c_{j\sigma}(\tau^{(j\sigma)})$ with the same spin-orbital index $j\sigma$. This update changes the expansion order by $\pm1$, i.e., $n_{j\sigma}^{(\prime)}\rightarrow n_{j\sigma}^{(\prime)}\pm1$. If $\sigma=\uparrow$ ($\sigma=\downarrow$), the update modifies the block matrices $A$, $C$ and $B$ ($D$, $B$ and $C$) in $\Delta_{j}$. As pointed out by S\'emon \textit{et al.}~\cite{Semon2014}, four-operator (termed $4$-op) updates are also necessary for ergodicity in the case of superconducting states.
This can be seen by considering the simple configuration, \begin{equation} \beta|-c_{j^{\prime}\uparrow}^{\dagger}(\tau_{1}^{(j^{\prime}\uparrow)\prime})-c_{j^{\prime}\downarrow}^{\dagger}(\tau_{1}^{(j^{\prime}\downarrow)\prime})-c_{j\downarrow}(\tau_{1}^{(j\downarrow)})-c_{j\uparrow}(\tau_{1}^{(j\uparrow)})-|0, \end{equation} where the order of $\{\tau\}$ can be arbitrary and $j\ne j^\prime$. This configuration cannot be generated by two successive insertion updates, since the local trace after the first insertion is zero. The corresponding determinant is \begin{equation} w_{\mathrm{det}}=\Delta_{\mathrm{ano}}^{(j\downarrow\uparrow)}(\tau_{1}^{(j\downarrow)}-\tau_{1}^{(j\uparrow)})\times\Delta_{\mathrm{ano}}^{(j^{\prime}\uparrow\downarrow)}(\tau_{1}^{(j^{\prime}\uparrow)\prime}-\tau_{1}^{(j^{\prime}\downarrow)\prime}), \end{equation} and the local trace \begin{equation} w_{\mathrm{loc}}=\mathrm{Tr}_{c}[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}c_{j^{\prime}\uparrow}^{\dagger}(\tau_{1}^{(j^{\prime}\uparrow)\prime})c_{j^{\prime}\downarrow}^{\dagger}(\tau_{1}^{(j^{\prime}\downarrow)\prime})c_{j\downarrow}(\tau_{1}^{(j\downarrow)})c_{j\uparrow}(\tau_{1}^{(j\uparrow)})]\ne0 \end{equation} if $J_{P}\ne0$. Starting from an arbitrary configuration, the 4-op insertion/removal update changes the expansion order as $n_{j\sigma}\rightarrow n_{j\sigma}\pm1$ and $n_{j^{\prime}\sigma}^{\prime}\rightarrow n_{j^{\prime}\sigma}^{\prime}\pm1$. It changes the matrices $C$, $A$, $D$ in $\Delta_{j}$ and the matrices $B$, $D$, $A$ in $\Delta_{j^{\prime}}$. 
\subsection*{Green's function} The normal ($G$) and anomalous ($F$) single-particle Green's functions are obtained by inserting two corresponding operators in the local trace part of $Z$ as \begin{align} G_{j\uparrow\uparrow}(\tau^{(j\uparrow)},\tau^{(j\uparrow)\prime})= & -\langle\mathcal{T}_{\tau}c_{j\uparrow}(\tau^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau^{(j\uparrow)\prime})\rangle\nonumber\\ = & Z_{\mathrm{bath}}^{\mathrm{SC}}\left[\prod_{j=1}^{M}\prod_{\sigma=\uparrow,\downarrow}\sum_{n_{j\sigma},n_{j\sigma}^{\prime}=0}^{\infty}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)}\cdots\int_{\tau_{n_{j\sigma}-1}^{(j\sigma)}}^{\beta}d\tau_{n_{j\sigma}}^{(j\sigma)}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)\prime}\cdots\int_{\tau_{n_{j\sigma}^{\prime}-1}^{(j\sigma)\prime}}^{\beta}d\tau_{n_{j\sigma}^{\prime}}^{(j\sigma)\prime}\det\Delta_{j}\right]\nonumber\\ \times & \mathrm{Tr}_{c}[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}\prod_{j=1}^{M}c_{j\uparrow}(\tau_{n_{j\uparrow}}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime})\cdots c_{j\uparrow}(\tau_{1}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{1}^{(j\uparrow)\prime})\nonumber\\ \times & c_{j\downarrow}(\tau_{n_{j\downarrow}}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime})\cdots c_{j\downarrow}(\tau_{1}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{1}^{(j\downarrow)\prime})c_{j\uparrow}(\tau^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau^{(j\uparrow)\prime})]s_{c}, \end{align} and \begin{align} F_{j\uparrow\downarrow}(\tau^{(j\uparrow)},\tau^{(j\downarrow)})= & -\langle\mathcal{T}_{\tau}c_{j\uparrow}(\tau^{(j\uparrow)})c_{j\downarrow}(\tau^{(j\downarrow)})\rangle\nonumber\\ = & Z_{\mathrm{bath}}^{\mathrm{SC}}\left[\prod_{j=1}^{M}\prod_{\sigma=\uparrow,\downarrow}\sum_{n_{j\sigma},n_{j\sigma}^{\prime}=0}^{\infty}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)}\cdots\int_{\tau_{n_{j\sigma}-1}^{(j\sigma)}}^{\beta}d\tau_{n_{j\sigma}}^{(j\sigma)}\int_{0}^{\beta}d\tau_{1}^{(j\sigma)\prime}\cdots\int_{\tau_{n_{j\sigma}^{\prime}-1}^{(j\sigma)\prime}}^{\beta}d\tau_{n_{j\sigma}^{\prime}}^{(j\sigma)\prime}\det\Delta_{j}\right]\nonumber\\ \times & \mathrm{Tr}_{c}[\mathcal{T}_{\tau}e^{-\beta H_{\mathrm{loc}}}\prod_{j=1}^{M}c_{j\uparrow}(\tau_{n_{j\uparrow}}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{n_{j\uparrow}^{\prime}}^{(j\uparrow)\prime})\cdots c_{j\uparrow}(\tau_{1}^{(j\uparrow)})c_{j\uparrow}^{\dagger}(\tau_{1}^{(j\uparrow)\prime})\nonumber\\ \times & c_{j\downarrow}(\tau_{n_{j\downarrow}}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{n_{j\downarrow}^{\prime}}^{(j\downarrow)\prime})\cdots c_{j\downarrow}(\tau_{1}^{(j\downarrow)})c_{j\downarrow}^{\dagger}(\tau_{1}^{(j\downarrow)\prime})c_{j\uparrow}(\tau^{(j\uparrow)})c_{j\downarrow}(\tau^{(j\downarrow)})]s_{c}. \end{align} We only consider the paramagnetic phase, and therefore $G_{j\downarrow\downarrow}(\tau)$ and $F_{j\downarrow\uparrow}(\tau)$ can be obtained from $G_{j\uparrow\uparrow}(\tau)$ and $F_{j\uparrow\downarrow}(\tau)$ using time-reversal symmetry: $G_{j\downarrow\downarrow}(\tau)=G_{j\uparrow\uparrow}(\tau)$, $F_{j\downarrow\uparrow}(\tau)=F_{j\uparrow\downarrow}(\tau)^{*}$.
Since the partition function in the Nambu formalism has a similar structure to the partition function in the conventional CT-HYB for the normal phase, we can directly write down the conventional estimators to measure $G_{j\uparrow\uparrow}$ and $F_{j\uparrow\downarrow}$ as \begin{equation} G_{j\uparrow\uparrow}\left(\tau-\tau^{\prime}\right)=-\frac{1}{\beta}\left\langle \sum_{l,m=1}^{n_{j\uparrow}^{\prime}+n_{j\downarrow}}(\Delta_{j}^{-1})_{lm}\delta^{-}\left(\tau-\tau^{\prime},\tau_{m}-\tau_{l}\right)\delta_{j\uparrow,m}\delta_{j\uparrow,l}\right\rangle _{\mathrm{MC}}, \label{eq:G_conv} \end{equation} \begin{equation} F_{j\uparrow\downarrow}\left(\tau-\tau^{\prime}\right)=-\frac{1}{\beta}\left\langle \sum_{l,m=1}^{n_{j\uparrow}^{\prime}+n_{j\downarrow}}(\Delta_{j}^{-1})_{lm}\delta^{-}\left(\tau-\tau^{\prime},\tau_{m}-\tau_{l}\right)\delta_{j\uparrow,m}\delta_{j\downarrow,l}\right\rangle _{\mathrm{MC}}. \label{eq:F_conv} \end{equation} Equation (\ref{eq:G_conv}) [Eq.~(\ref{eq:F_conv})] is obtained by removing the normal (anomalous) hybridization lines attached to two operators ($c_{j\uparrow}$ and $c_{j\uparrow}^{\dagger}$ to measure $G$; $c_{j\uparrow}$ and $c_{j\downarrow}$ to measure $F$) in a given configuration of $Z$. The conventional estimator yields poor statistics when the average expansion order becomes small, $\langle n_{j\sigma}\rangle\rightarrow0$, and it cannot be applied at all if $\langle n_{j\sigma}\rangle=0$ (here $\langle n_{j\sigma}\rangle$ should not be confused with the average occupation number $\langle\hat{n}_{j\sigma}\rangle$). This happens when the hybridization function approaches the atomic limit, $\Delta^{(j\sigma)}\rightarrow0$, as occurs for example in the Falicov-Kimball model \cite{FKmodel}.
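The binning behind Eq.~(\ref{eq:G_conv}) can be sketched as follows, with $\delta^{-}$ implemented as the antiperiodic delta function (a sign flip whenever the time difference is wrapped into $[0,\beta)$). The matrix and operator times below are hypothetical stand-ins for a single Monte Carlo configuration.

```python
import numpy as np

# Sketch of the conventional binned measurement for one configuration:
# entries of M = Delta_j^{-1} are accumulated into imaginary-time bins,
# with a fermionic sign flip when tau_m - tau_l is wrapped into [0, beta).
beta, n_bins = 10.0, 40
rng = np.random.default_rng(1)

n = 4                                       # matrix dimension n_up' + n_dn
tau_row = rng.uniform(0, beta, size=n)      # times tau_l indexing the rows
tau_col = rng.uniform(0, beta, size=n)      # times tau_m indexing the columns
M = np.linalg.inv(rng.normal(size=(n, n)))  # M = Delta_j^{-1}

G_bins = np.zeros(n_bins)
for l in range(n):
    for m in range(n):
        dtau = tau_col[m] - tau_row[l]
        sign = 1.0
        if dtau < 0.0:                      # antiperiodicity: wrap and flip sign
            dtau += beta
            sign = -1.0
        G_bins[int(dtau / beta * n_bins)] -= sign * M[l, m] / beta
```

In a real code one would additionally keep only the entries with the desired spin-orbital indices [the factors $\delta_{j\uparrow,m}\delta_{j\uparrow,l}$ in Eq.~(\ref{eq:G_conv})] and average over many configurations.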
As shown in the main text, this situation also appears in the bilayer Hubbard model when the narrow band reaches the flat-band limit ($W_\beta\rightarrow0$), since the hybridization strength is proportional to the band width, $V_\beta\propto W_\beta\rightarrow0$. \subsection*{Normal worm sampling} In the normal worm sampling, one treats $c_{j\uparrow}(\tau^{(j\uparrow)})$ and $c_{j\uparrow}^{\dagger}(\tau^{(j\uparrow)\prime})$ as the worm operators, and includes the worm space $\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}}$. The modified partition function is \begin{align} Z_{{G_{j\uparrow\uparrow}^{(1)}}}=\int d\tau^{(j\uparrow)}d\tau^{(j\uparrow)\prime}G_{j\uparrow\uparrow}(\tau^{(j\uparrow)},\tau^{(j\uparrow)\prime}). \end{align} \subsubsection*{Updates within $\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}}$} The updates within $\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}}$ are analogous to those in $\mathcal{C}_{Z}$. The only difference is that no hybridization lines are attached to the worm operators. The necessary updates are normal pair insertion/removal and 4-op updates. Furthermore, we also implemented worm shift/replacement updates to reduce the auto-correlation time. \subsubsection*{Updates between $\mathcal{C}_{Z}$ and $\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}}$} In the worm insertion update, we start from a random configuration in $\mathcal{C}_{Z}$ and randomly choose two imaginary times for the worm operators $c_{j\uparrow}(\tau^{(j\uparrow)})$ and $c_{j\uparrow}^{\dagger}(\tau^{(j\uparrow)\prime})$. The worm removal update is the inverse process, which removes the worm operators.
The Metropolis acceptance rates for the worm insertion and removal updates are \begin{equation} p(\mathcal{C}_{Z}\rightarrow\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}})=\min\left[1,\eta_{G^{(1)}}\cdot\frac{w_{\mathrm{loc}}(\{\tau\}_{Z},\tau^{(j\uparrow)},\tau^{(j\uparrow)\prime})}{w_{\mathrm{loc}}(\{\tau\}_{Z})}\beta^{2}\right], \end{equation} and \begin{equation} p(\mathcal{C}_{G_{j\uparrow\uparrow}^{(1)}}\rightarrow\mathcal{C}_{Z})=\min\left[1,\frac{1}{\eta_{G^{(1)}}}\cdot\frac{w_{\mathrm{loc}}(\{\tau\}_{Z})}{w_{\mathrm{loc}}(\{\tau\}_{Z},\tau^{(j\uparrow)},\tau^{(j\uparrow)\prime})}\cdot\frac{1}{\beta^{2}}\right], \end{equation} respectively. \subsubsection*{Worm measurement} The measurement formula for the normal Green's function is \begin{equation} G_{{G^{(1)}}}^{(1)}(\tau)=\frac{1}{\eta_{G^{(1)}}}\frac{N_{G^{(1)}}}{N_{Z}}\frac{\left\langle \mathrm{sgn}(\mathcal{C}_{G^{(1)}})\cdot\delta\left(\tau,\tau_{i}-\tau_{j}\right)\right\rangle _{\mathrm{MC}}}{\langle\mathrm{sgn}(\mathcal{C}_{Z})\rangle_{\mathrm{MC}}}, \label{eq:G_worm} \end{equation} where $N_{G^{(1)}}$ ($N_{Z}$) is the number of Monte Carlo steps taken in $\mathcal{C}_{G^{(1)}}$ ($\mathcal{C}_{Z}$), $\mathrm{sgn}(\mathcal{C}_{G^{(1)}})$ is the sign of a given configuration of $\mathcal{C}_{G^{(1)}}$, and $\langle\mathrm{sgn}(\mathcal{C}_{Z})\rangle_{\mathrm{MC}}$ is the average sign of the configurations in the $\mathcal{C}_Z$ space. \subsection*{Anomalous worm sampling} In the anomalous worm sampling, one treats $c_{j\uparrow}(\tau^{(j\uparrow)})$ and $c_{j\downarrow}(\tau^{(j\downarrow)})$ as the worm operators, and considers the worm space $\mathcal{C}_{F_{j\uparrow\downarrow}^{(1)}}$.
The modified partition function is \begin{align} Z_{{F_{j\uparrow\downarrow}^{(1)}}}=\int d\tau^{(j\uparrow)}d\tau^{(j\downarrow)}F_{j\uparrow\downarrow}(\tau^{(j\uparrow)},\tau^{(j\downarrow)}). \end{align} \subsubsection*{Updates within $\mathcal{C}_{F_{j\uparrow\downarrow}^{(1)}}$} Due to the presence of two worm annihilation operators, a non-zero local trace $w_{\mathrm{loc}}$ in $F$ additionally requires two creation operators $c_{j^{\prime}\uparrow}^{\dagger}$ and $c_{j^{\prime}\downarrow}^{\dagger}$. Thus the first non-zero diagram in $F$ contains the two worm operators and two normal creation operators. Starting from this diagram, we can perform normal pair insertion updates, and normal pair removal updates for diagrams with more than these four operators. The 4-op update used in $Z$ is also necessary to reach ergodicity within the $F$ space. Such updates can result in a non-zero local trace and a non-zero determinant even though the final configuration cannot be reached by two successive normal pair insertion/removal updates. As in the normal worm sampling, worm shift/replacement updates help to reduce the auto-correlation time. \subsubsection*{Updates between $\mathcal{C}_{Z}$ and $\mathcal{C}_{F_{j\uparrow\downarrow}^{(1)}}$} The anomalous worm insertion update is proposed as follows. We start from a random configuration $\mathcal{C}_{Z}$ in the partition function space and randomly choose two imaginary times for the worm operators $c_{j\uparrow}(\tau^{(j\uparrow)})$ and $c_{j\downarrow}(\tau^{(j\downarrow)})$. Then we randomly assign an orbital index $j^{\prime}\in[1,2,\cdots,M]$ and randomly pick the imaginary times for the two normal operators $c_{j^{\prime}\uparrow}^{\dagger}(\tau^{(j^{\prime}\uparrow)\prime})$ and $c_{j^{\prime}\downarrow}^{\dagger}(\tau^{(j^{\prime}\downarrow)\prime})$.
We suppose the number of $c_{j^{\prime}\uparrow}^{\dagger}$ ($c_{j^{\prime}\downarrow}^{\dagger}$) operators in $\mathcal{C}_{Z}$ before the update is $n_{j^{\prime}\uparrow}^{\prime}$ ($n_{j^{\prime}\downarrow}^{\prime}$). The anomalous worm removal update is performed by randomly choosing an orbital index $j^{\prime}$ and then randomly selecting one of the existing $c_{j^{\prime}\uparrow}^{\dagger}$ and one of the existing $c_{j^{\prime}\downarrow}^{\dagger}$ operators. The Metropolis acceptance rates for the worm insertion and removal updates are \begin{equation} p(\mathcal{C}_{Z}\rightarrow\mathcal{C}_{F_{j\uparrow\downarrow}^{(1)}})=\min\left[1,\eta_{F^{(1)}}\cdot\frac{\det\Delta_{j^{\prime}}(\{\tau^{j^{\prime}}\}_{Z},\tau^{(j^{\prime}\uparrow)\prime},\tau^{(j^{\prime}\downarrow)\prime})}{\det\Delta_{j^{\prime}}(\{\tau^{j^{\prime}}\}_{Z})}\cdot\frac{w_{\mathrm{loc}}(\{\tau\}_{Z},\tau^{(j\uparrow)},\tau^{(j\downarrow)},\tau^{(j^{\prime}\uparrow)\prime},\tau^{(j^{\prime}\downarrow)\prime})}{w_{\mathrm{loc}}(\{\tau\}_{Z})}\cdot\frac{\beta^{4}}{(n_{j^{\prime}\uparrow}^{\prime}+1)(n_{j^{\prime}\downarrow}^{\prime}+1)}\right] \label{eq:ZtoF} \end{equation} and \begin{equation} p(\mathcal{C}_{F_{j\uparrow\downarrow}^{(1)}}\rightarrow\mathcal{C}_{Z})=\min\left[1,\frac{1}{\eta_{F^{(1)}}}\cdot\frac{\det\Delta_{j^\prime}(\{\tau^{j^\prime}\}_{Z})}{\det\Delta_{j^\prime}(\{\tau^{j^\prime}\}_{Z},\tau^{(j^{\prime}\uparrow)\prime},\tau^{(j^{\prime}\downarrow)\prime})}\cdot\frac{w_{\mathrm{loc}}(\{\tau\}_{Z})}{w_{\mathrm{loc}}(\{\tau\}_{Z},\tau^{(j\uparrow)},\tau^{(j\downarrow)},\tau^{(j^{\prime}\uparrow)\prime},\tau^{(j^{\prime}\downarrow)\prime})}\cdot\frac{n_{j^{\prime}\uparrow}^{\prime}n_{j^{\prime}\downarrow}^{\prime}}{\beta^{4}}\right], \label{eq:FtoZ} \end{equation} respectively. The determinant ratios in Eqs.~(\ref{eq:ZtoF}) and (\ref{eq:FtoZ}) appear because of the two normal operators which are inserted/removed together with the worm operators.
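Schematically, the accept/reject step for Eqs.~(\ref{eq:ZtoF}) and (\ref{eq:FtoZ}) multiplies the determinant ratio, the local-trace ratio, and the proposal factors. In the sketch below both ratios are passed in as hypothetical inputs; a real code would obtain them from a fast (bordered-determinant) update and the local-trace evaluation.

```python
import numpy as np

# Schematic Metropolis tests for the anomalous worm insertion/removal.
# det_ratio and trace_ratio are hypothetical inputs here.
def accept_anom_worm_insert(det_ratio, trace_ratio, eta_F, beta,
                            n_up_p, n_dn_p, rng):
    """C_Z -> C_{F^(1)}: two worm times and two creation-operator times
    are drawn uniformly; n_*_p are the operator counts before insertion."""
    proposal = beta**4 / ((n_up_p + 1) * (n_dn_p + 1))
    p = min(1.0, eta_F * abs(det_ratio * trace_ratio) * proposal)
    return rng.random() < p

def accept_anom_worm_remove(det_ratio, trace_ratio, eta_F, beta,
                            n_up_p, n_dn_p, rng):
    """C_{F^(1)} -> C_Z: inverse move with reciprocal proposal factors."""
    proposal = n_up_p * n_dn_p / beta**4
    p = min(1.0, (1.0 / eta_F) * abs(det_ratio * trace_ratio) * proposal)
    return rng.random() < p

rng = np.random.default_rng(3)
accepted = accept_anom_worm_insert(0.8, 1.2, eta_F=1.0, beta=5.0,
                                   n_up_p=2, n_dn_p=3, rng=rng)
```

For reciprocal weight ratios, the product of the two arguments of $\min[1,\cdot]$ equals one, as detailed balance requires.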
\subsubsection*{Worm measurement} The measurement formula for the anomalous Green's function is \begin{equation} G_{{F^{(1)}}}^{(1)}(\tau)=\frac{1}{\eta_{F^{(1)}}}\frac{N_{F^{(1)}}}{N_{Z}}\frac{\left\langle \mathrm{sgn}(\mathcal{C}_{F^{(1)}})\cdot\delta\left(\tau,\tau_{i}-\tau_{j}\right)\right\rangle _{\mathrm{MC}}}{\langle\mathrm{sgn}(\mathcal{C}_{Z})\rangle_{\mathrm{MC}}}, \label{eq:F_worm} \end{equation} where $N_{F^{(1)}}$ is the number of Monte Carlo steps taken in $\mathcal{C}_{F^{(1)}}$, and $\mathrm{sgn}(\mathcal{C}_{F^{(1)}})$ is the sign of a given configuration in $\mathcal{C}_{F^{(1)}}$. \subsection*{Tests of the worm sampling code} We first benchmark the worm sampling in a parameter region where it is not strictly necessary. In the bilayer Hubbard model, the hybridization function in the anti-bonding orbital $\beta$ is nonzero if $W_{\beta}/W_{\alpha}>0$. We compare in Fig.~\ref{fig:GF_compare} the normal and anomalous Green's functions measured by the conventional estimator (blue lines) according to Eqs.~(\ref{eq:G_conv},\ref{eq:F_conv}) and with worm sampling according to Eqs.~(\ref{eq:G_worm},\ref{eq:F_worm}). For $W_{\beta}/W_{\alpha}=1$, there is a strong hybridization in the $\beta$ orbital and the conventional estimator is more efficient than worm sampling, see panel (a). As one reduces the narrow band width to $W_\beta/W_\alpha=0.2$ [panel (b)], the hybridization strength in the $\beta$ band becomes much smaller. As a result, the conventional measurement of $G_{\beta\uparrow\uparrow}(\tau)$ and especially $F_{\beta\uparrow\downarrow}(\tau)$ becomes very noisy, which can be significantly remedied by applying the normal and anomalous worm sampling. As we approach the narrow-band limit for $W_\beta/W_\alpha=0.05$ [panel (c)], the noise in the conventional estimate of $F_{\beta\uparrow\downarrow}(\tau)$ becomes very severe.
When the anti-bonding band is flat [panel (d)], the conventional estimator (which measures zero when the expansion order $n_{\beta\sigma}=n_{\beta\sigma}^\prime=0$) can no longer be used due to the vanishing hybridization, while the worm sampling still allows us to measure the normal and anomalous Green's functions. \begin{figure*} \includegraphics[clip,width=5.5in,angle=0]{Gtau_worm_comparep-01.png} \caption{Comparison of the normal Green's function $G_{\beta\uparrow\uparrow}(\tau)$ (anomalous Green's function $F_{\beta\uparrow\downarrow}(\tau)$) obtained using the conventional estimator and normal (anomalous) worm sampling, for the indicated band width ratios, in the present system with $T=0.1$. In the limit $W_\beta=0$, the conventional measurement cannot be used. } \label{fig:GF_compare} \end{figure*}
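As a final illustration, the normalization in Eqs.~(\ref{eq:G_worm}) and (\ref{eq:F_worm}) amounts to rescaling a sign-weighted histogram accumulated in the worm space. A minimal sketch follows; all counters and the histogram are hypothetical stand-ins for quantities accumulated by a real sampler.

```python
import numpy as np

# Schematic normalization of a worm-space histogram, following the
# structure of Eqs. (G_worm)/(F_worm). sign_weighted_hist holds the
# per-bin sum of configuration signs over the N_worm worm measurements.
def normalize_worm_estimate(sign_weighted_hist, N_worm, N_Z,
                            eta, avg_sign_Z, beta):
    n_bins = len(sign_weighted_hist)
    bin_width = beta / n_bins
    mc_average = sign_weighted_hist / (N_worm * bin_width)  # <sgn * delta>_MC
    return (1.0 / eta) * (N_worm / N_Z) * mc_average / avg_sign_Z

# Example with uniform unit counts:
est = normalize_worm_estimate(np.ones(10), N_worm=100, N_Z=200,
                              eta=0.5, avg_sign_Z=1.0, beta=10.0)
```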
\section{Keywords:} photonic, reservoir computing, passive, coherent, distributed nonlinearity, Kerr, fiber-ring } \end{abstract} \section{Introduction} In this work, we discuss an efficient, i.e., high-speed and low-power, analogue photonic computing system based on the concept of reservoir computing (RC) \cite{maass2002,jaeger2004}. This framework allows us to exploit the transient dynamics of a nonlinear dynamical system for performing useful computations. In this neuromorphic computing scheme, a network of interconnected computational nodes (called neurons) is excited with input data. The ensemble of neurons is called the reservoir, and the interneural connections are fixed and can be chosen at random. For the coupling of the input data to the reservoir, an input mask is used: a set of input weights which determines how strongly each of the inputs couples to each of the neurons. The randomness in both the input mask and the internal reservoir connections ensures diversity in the neural responses. The reservoir output is constructed through a linear combination of the neural responses (possibly first processed by a readout function) with a set of readout weights. The strength of the reservoir computing scheme lies in the simplicity of its training method, where only the readout weights are tuned to force the reservoir output to match a desired target. In general, a reservoir exhibits internal feedback through loops in the neural interconnections. As a result, any reservoir has memory, which means it can retain input data for a finite amount of time, and it can compute linear and nonlinear functions of the retained information.
Within the field of reservoir computing two main approaches exist: in the network-based approach, networks of neurons are implemented by connecting multiple discrete nodes \cite{verstraeten2007}, and in the delay-based approach, networks of virtual neurons are created by subjecting a single node (often a nonlinear dynamical device) to delayed feedback \cite{appeltant2011}. In the latter, the neurons are called virtual because they correspond to the travelling signals found in consecutive timeslots in the continuous delay-line system. On account of this time-multiplexing of neurons, the input weights are translated into a temporal input mask, which is mixed with the input data before it is injected into the reservoir. Besides ensuring diversity in the neural responses, this input mask also keeps the virtual neurons in a transient dynamic regime, which is a necessary condition for good reservoir computing performance. Multiple opto-electronic reservoirs have been implemented, both delay-based \cite{paquot2012,larger2012,duport2016,larger2017} and network-based \cite{bueno2018}. Several all-optical reservoirs have been realized, both network-based systems \cite{vandoorne2011,vandoorne2014,katumba2018,bueno2018,harkhoe2018} and delay-based systems \cite{duport2012,brunner2013,vinckier2015}. An overview of recent advances is given in Ref. \cite{vandersande2017}. We observe that in the field of optical reservoir computing, some implementations operated in an incoherent regime, while others operated in a coherent regime. Coherent reservoirs have the advantage that they can exploit the complex character of the optical field and interference effects, and can use the natural quadratic nonlinearity of photodiodes. As a drawback, coherent bulk optical reservoirs typically need to be stabilized, but this is not a problem for on-chip implementations. Here we investigate the potential advantage of having a coherent reservoir with nonlinearity inside the reservoir.
We show that it can increase the performance of the reservoir on certain tasks, and we expect that future coherent optical reservoir computers will make use of such nonlinearities. State-of-the-art photonic implementations target simple reservoir architectures \cite{harkhoe2018}, which can easily be upscaled to increase the number of computational nodes or neurons, thereby enhancing the reservoir’s computational capacity. Even a linear photonic cavity can be a potent reservoir \cite{vinckier2015}, provided that some nonlinearity is present either in the mapping of input data to the reservoir, or in the readout of the reservoir’s response. Despite advances towards all-optical RC \cite{bienstman2018}, many state-of-the-art photonic reservoir computers inherently contain some nonlinearity as they are usually set up to process and produce electronic signals. This means that even if the reservoir is all-optical, the reservoir computer in its entirety is of an opto-electronic nature. Commonly used components like Mach-Zehnder modulators (MZMs) and photodetectors (PDs) provide means for transitioning back and forth between the electronic and optical domains, and they also, almost inevitably, introduce nonlinearities which boost the opto-electronic reservoir computer’s performance beyond the merits of the optical reservoir itself. When transitioning towards all-optical reservoir computers, such nonlinearities can no longer be relied on, and thus the required nonlinear transformation of information must originate elsewhere. One option is then to use multiple strategically placed nonlinear components in the reservoir, but this can be a costly strategy when upscaling the reservoir \cite{vandoorne2011}. In this paper, we study a delay-based reservoir computer, based on a passive coherent optical fiber-ring cavity following Ref. \cite{vinckier2015}, and exploit the inherent nonlinear response of the waveguiding material to build a state-of-the-art photonic reservoir.
This means that the nonlinearity of our photonic reservoir is not found in localized parts; rather, it is distributed over the reservoir’s entire extent. To correctly characterize the effects of such a distributed nonlinearity, we also consider in this study all other nonlinearities that may surround the reservoir. In terms of the reservoir’s input mapping, we examined the system responses when receiving optical inputs (a linear mapping), and when receiving electronic inputs coupled to the optical reservoir through a Mach-Zehnder modulator (a nonlinear mapping). For the reservoir’s readout layer, we examined both linear readouts (coherent detection) and nonlinear readouts through the quadratic nonlinearity of a photodiode measuring the power of the optical field. Taking these different options into account, we then constructed different scenarios in terms of the presence of nonlinearities in the input and/or output layer of these reservoir computers. In all these scenarios we numerically benchmarked the RC performance, thus quantifying the difference in performance between systems which do or do not have such a distributed nonlinearity inside the reservoir. In the next sections, we present our numerical results, which reveal a broad range of optical input power levels at which these RCs benefit from the self-phase modulation experienced by the signals due to the nonlinear Kerr effect induced by the waveguide material. We also show the results of our experimental measurements, which indicate how much this distributed nonlinearity boosts the reservoir's capacity to perform nonlinear computation. In the discussion section, we analyze the impact of these findings on the future of photonic reservoir computing. \section{Materials and Methods} \subsection{Setup} \label{section:setup} Our reservoir computing simulations and experiments are based on the set of dynamical systems discussed in this section.
The reservoir itself is implemented in the all-optical fiber-ring cavity shown in Fig. \ref{fig:setup_core}, using standard single-mode fiber. A polarization controller is used to ensure that the input field $E_{in}$ (originating from the green arrow) excites a polarization eigenmode of the fiber-ring cavity. A fiber coupler, characterized by its power transmission coefficient $T=50\%$, couples light in and out of the cavity. The fiber-ring is characterized by the roundtrip length $L=\SI{10}{\meter}$ (or roundtrip time $t_R$), the propagation loss $\alpha$ (taken here \SI{0.18}{\deci\bel\per\kilo\meter}), the fiber nonlinear coefficient $\gamma$ (which is set to $0$ to simulate a linear reservoir, and set to $\gamma_{Kerr}=\SI{2.6}{\milli\radian\per\meter\per\watt}$ to simulate a nonlinear reservoir), and the cavity detuning $\delta_0$, i.e., the difference between the roundtrip phase and the nearest resonance (multiple of $2\pi$). This low-finesse cavity is operated off-resonance, with a maximal input power of \SI{50}{\milli\watt} (\SI{17}{\dBm}). A network of time-multiplexed virtual neurons is encoded in the cavity field envelope. The output field $E_{out}$ is sent to the readout layer (through the orange arrow) where the neural responses are demultiplexed. \begin{figure}[h!] \begin{center} \includegraphics[width=8cm]{setup_core} \end{center} \caption{Schematic of the fiber-ring cavity of length $L$ used to implement an optical reservoir. The green (orange) arrow indicates a connection with an input (output) layer. A polarization controller maps the input polarization onto a polarization eigenmode of the cavity. A coupler with power transmission coefficient $T$ couples the input field $E_{in}^{(n)}(\tau)$ to the cavity field $E^{(n)}(z,\tau)$ and couples to the output field $E_{out}^{(n)}(\tau)$, where $n$ is the roundtrip index, $\tau$ is time (with $0<\tau<t_R$) and $z$ is the longitudinal position in the ring cavity.
}\label{fig:setup_core} \end{figure} The input field $E_{in}$ can originate from one of two different optoelectronic input schemes. Firstly, we consider a scenario where the input signal $u(n)$ (with discrete time $n$) is amplitude-encoded in an optical signal $E\sim u(n)$, as shown in Fig. \ref{fig:setup_inputs_outputs}(a). The reservoir's input mask $m(\tau)$ is mixed with the input signal by periodic modulation of the optical input signal using an MZM. This scheme was implemented in Ref. \cite{duport2016}, but the nonlinearity of the MZM was avoided through pre-compensation of the electronic input signal. Note that the discrete time $n$ corresponds to the roundtrip index. As delay-based reservoirs are typically set up to process one sample per roundtrip, $n$ also corresponds to the sample index. However, we have chosen to hold each input sample over multiple roundtrips, for reasons which are explained in the Results section (that is, $u(n)$ is constant over multiple values of $n$). Secondly, we consider a scenario where we use the MZM to modulate a CW optical pump following Ref. \cite{duport2012}, as shown in Fig. \ref{fig:setup_inputs_outputs}(b). Here the input signal is first mixed with the input mask and then used to drive the MZM. It is known that the MZM's nonlinear transfer function can affect the RC system's performance \cite{vinckier2015}, but the implications for a coherent nonlinear reservoir have not yet been investigated. Similarly, the output field $E_{out}$ can be processed by two different optoelectronic readout schemes. Firstly, we consider a coherent detection scheme as shown in Fig. \ref{fig:setup_inputs_outputs}(c). Mixing the reservoir's output field with a reference field $E_{LO}$ allows us to record the complex neural responses, time-multiplexed in the output field $E_{out}$. Secondly, we consider a readout scheme where a photodetector (PD) measures the optical power of the neural responses $|E_{out}|^2$, as shown in Fig.
\ref{fig:setup_inputs_outputs}(d). With high optical power levels and small neuron spacing (meaning fast modulation of the input signal), dynamical and nonlinear effects other than the Kerr nonlinearity may appear, such as photon-phonon interactions causing Brillouin and Raman scattering, and bandwidth limitations caused by the driving and readout equipment. In the present work we want to focus on the effects of the Kerr nonlinearity. Combined with the memory limitations of the oscilloscope, this leads us to limit our reservoir to $20$ neurons, with a maximal input power of \SI{100}{\milli\watt}. The current setup is not actively stabilized. We have found that the cavity detuning $\delta_0$ does not vary more than a few \si{\milli\radian} over the course of any single reservoir computing experiment, where a few thousand input samples are processed. A short header, added to the injected signal, allows us to recover the detuning $\delta_0$ post-experiment. We effectively measure the interference between a pulse which reflects off the cavity and a pulse which completes one roundtrip through the cavity. However, we find that the precise value of $\delta_0$ has no significant influence on the experimental reservoir computing results. \begin{figure}[h!] \begin{center} \includegraphics[width=8cm]{setup_inputs_outputs} \end{center} \caption{Schematics of input and output layers connecting to the reservoir shown in Fig. \ref{fig:setup_core}. In the linear input scheme \textbf{(a)} the Mach-Zehnder modulator (MZM) superimposes the reservoir's input mask $m(\tau)$ on the optical signal $E\sim u(n)$ carrying the input data. In the (possibly) nonlinear input scheme \textbf{(b)} the input data is mixed with the input mask and then drives the MZM to modulate a CW optical pump. In the linear output scheme \textbf{(c)} a reference field $E_{LO}$ is used to implement coherent detection, allowing a quadrature of the complex optical field to be measured.
Note that coherent detection requires two such readout arms with phase-shifted reference fields in order to measure the complex output field $E_{out}$. In the nonlinear output scheme \textbf{(d)} only a photodetector (PD) is used, thus only allowing the optical output power $|E_{out}|^2$ to be recorded.} \label{fig:setup_inputs_outputs} \end{figure} \subsection{Physical model} Here we discuss the mean-field model used to describe the temporal evolution of the electric field envelope $E^{(n)}(z,\tau)$ inside the cavity, where $n$ is the roundtrip index, $0<\tau<t_R$ is time (bounded by the cavity roundtrip time $t_R$), and $0<z<L$ is the longitudinal coordinate of the fiber-ring cavity with length $L$. The position $z=0$ corresponds to the position of the fiber coupler. The position $z=L$ corresponds to the same position, but after propagation through the entire fiber-ring. We will describe the evolution on a per-roundtrip basis (i.e., with varying roundtrip index $n$). With this notation, $E^{(n)}(z,\tau)$ represents the cavity field envelope measured at position $z$ at time $\tau$ during the $n$-th roundtrip. For each roundtrip we model propagation through the nonlinear cavity to obtain $E^{(n)}(z=L,\tau)$ from $E^{(n)}(z=0,\tau)$. We then express the cavity boundary conditions to obtain $E^{(n+1)}(0,\tau)$ from $E^{(n)}(L,\tau)$ and to obtain the field $E_{out}^{(n)}(\tau)$ at the output of the fiber-ring reservoir. For now, we will omit $\tau$. Firstly, to model propagation in the fiber-ring cavity we take into account propagation loss and the nonlinear Kerr effect. Since the nonlinear propagation model is independent of the roundtrip index $n$, this index is omitted in the following description. The nonlinear propagation equation is given by \begin{equation} \label{eq:nonlinearpropagation} \partial_z E = i\gamma |E|^2E-\alpha E.
\end{equation} Here, $\alpha$ is the propagation loss and $\gamma$ is the nonlinear coefficient, which is set to $\gamma=0$ to simulate a linear reservoir, and set to $\gamma=\gamma_{Kerr}$ to include the nonlinear Kerr effect caused by the fiber waveguide. We do not include dispersion effects at the current operating point of the system, since the neuron separation is much larger than the dispersion length, hence also $\tau$ can be omitted in the nonlinear propagation model. The evolution of the power $|E(z)|^2$ is readily obtained by solving the corresponding propagation equation \begin{equation} \label{eq:powerequation} \partial_z |E|^2 = E^* \partial_z E + E \partial_z E^* = -2\alpha |E|^2, \end{equation} \begin{equation} \label{eq:powersolution} |E(z)|^2 = |E(0)|^2 e^{-2\alpha z}. \end{equation} With $\phi_{_z}$ the nonlinear phase acquired during propagation over a distance $z$, we know that the solution of $E(z)$ will be of the form \begin{equation} \label{eq:fieldsolutionformal} E(z) = E(0) e^{i\phi_{_z}-\alpha z}. \end{equation} Since this nonlinear phase depends on the power evolution given by Eq. \eqref{eq:powersolution}, an expression for $\phi_{_z}$ is found to be \begin{equation} \label{eq:nonlinearphase} \phi_{_z} = \gamma \int_0^z |E(v)|^2\,\mathrm{d}v = \gamma |E(0)|^2 \int_0^z e^{-2\alpha v}\,\mathrm{d}v = \gamma |E(0)|^2 \frac{1-e^{-2\alpha z}}{2 \alpha}. \end{equation} At this point, we can introduce the effective propagation distance $z_{eff}$ as \begin{equation} \label{eq:effectivedistance} z_{eff} = \frac{1-e^{-2\alpha z}}{2 \alpha}. \end{equation} In general (since $\alpha\geq0$) we have $z_{eff}\leq z$. Substituting these results into Eq. \eqref{eq:fieldsolutionformal} yields the complete solution for propagation of the cavity field envelope \begin{equation} \label{eq:fieldsolutionfull} E(z) = E(0) \exp\left(i\gamma |E(0)|^2 z_{eff}-\alpha z\right).
\end{equation} Finally, we reinstate the roundtrip index $n$ and the time parameter $\tau$, which allows us to combine this nonlinear propagation model with the cavity boundary conditions. \begin{align} \label{eq:fullmodel} \left \{ \begin{array}{rl} E^{(n)}(L,\tau) &= E^{(n)}(0,\tau) \exp\left(i\gamma |E^{(n)}(0,\tau)|^2 L_{eff}-\alpha L\right) \\ E^{(n+1)}(0,\tau) &= \sqrt{T}E_{in}^{(n+1)}(\tau) + \sqrt{1-T}e^{i\delta_{_0}}E^{(n)}(L,\tau) \\ E_{out}^{(n+1)}(\tau) &= \sqrt{1-T}E_{in}^{(n+1)}(\tau) + \sqrt{T}e^{i\delta_{_0}}E^{(n)}(L,\tau) \end{array} \right. \end{align} In these equations, $T$ represents the power transmission coefficient of the cavity coupler, and $\delta_0$ represents the cavity detuning (i.e. the difference between the roundtrip phase and the closest cavity resonance). Further, the input field $E_{in}=E_{in}^{(n)}(\tau)$ changes with the roundtrip index $n$ as new data samples can be injected into the system, and is modulated in time using the input mask to create a network of virtual neurons. The output field $E_{out}=E_{out}^{(n)}(\tau)$ containing the neural responses is sent to a measurement stage. \subsection{Reservoir computing} The framework of reservoir computing makes it possible to exploit the transient nonlinear dynamics of a dynamical system to perform useful computation \cite{maass2002,jaeger2004}. For the purpose of reservoir computing, virtual neurons (dynamical variables, computational nodes) are time-multiplexed in $\tau$-space of the physical system described by Eq. \eqref{eq:fullmodel}, following the delay-based reservoir computing scheme originally outlined in Ref. \cite{appeltant2011}. As such, the input field $E_{in}^{(n)}(\tau)$ varies with $n$ as new input samples arrive, and varies with $\tau$ to implement the input mask, which excites the neurons into a transient dynamic regime. Subsequently, the neural responses are encoded in the output field $E_{out}^{(n)}(\tau)$ and need to be demultiplexed from $\tau$-space. As in Refs.
\cite{paquot2012,vinckier2015} the length $t_M$ of the input mask $m(\tau)$ is deliberately mismatched from the cavity roundtrip time $t_R$. Instead, we set $t_M = t_R N / (N+1)$, which provides interconnectivity between the $N$ virtual neurons in a ring topology. The input mask $m(\tau)$ is a piecewise constant function, with intervals of duration $\theta = t_M/N$. The signal $I^{(n)}(\tau)$ injected into the RC is constructed by multiplying the input series $u(n)$ with the input mask, $I^{(n)}(\tau) = u(n)m(\tau)$. When the input is coupled linearly to the reservoir, then $E_{in}^{(n)}(\tau) \sim I^{(n)}(\tau)$. This would be the case when $u(n)$ is an optical signal periodically modulated with the input mask signal $m(\tau)$. When an MZM with transfer function $f$ is used to convert the electronic signal $I^{(n)}(\tau)$ to the optical domain, then $E_{in}^{(n)}(\tau) \sim f(I^{(n)}(\tau))$, where $f$ can be nonlinear. Note that in Ref. \cite{vinckier2015} the sample duration $t_S$ is matched to the length of the input mask $t_M$, allowing the reservoir to process 1 input sample approximately every roundtrip, as $t_S=t_M\smallerrel{\lesssim} t_R$. However, for reasons explained in the Results section, we will study different sample durations by holding input samples over multiple durations of the input mask, $t_S=k\ t_M$ with integer $k$ as illustrated in Fig. \ref{fig:timing}. This inevitably slows the reservoir down, as it only processes 1 input sample approximately every $k$ roundtrips. But it also provides a practically straightforward means to accumulate more nonlinear processing of the data inside the reservoir, which can then be measured and quantified. \begin{figure}[h!] \begin{center} \includegraphics[width=\linewidth]{timing_time} \end{center} \caption{Schematic of input and output timing, with $t_S$ the sample duration, $t_M$ the input mask duration and $t_R$ the roundtrip time.
Input samples are injected during (integer) $k$ roundtrips (bars in alternating colors) and the neural responses are recorded at times $\{\tau_i\}$ (blue tick marks) during the last of those $k$ roundtrips.} \label{fig:timing} \end{figure} Since the virtual neurons are time-multiplexed in this delay-based reservoir computer, they need to be de-multiplexed from $E_{out}^{(n)}(\tau)$ in the readout layer by sampling this output field at a set of times $\{\tau_i\}$ (with $i$ the neuron index and $1\le i\le N$ when $N$ neurons are used) as shown in Fig. \ref{fig:timing}. The dynamical neural responses $x_i(n) = E_{out}^{(n)}(\tau_i)$ are recorded and used to train the reservoir to perform a specific task. That is, we optimize a set of readout weights $w_i$, which are used to combine the neural readouts into a single scalar reservoir output $y(n)$. In general, the reservoir output is constructed as \begin{equation}\label{eq:RCoutput} y(n) = \sum_{i=1}^N w_i g(x_i(n)) \end{equation} where the neural responses $x_i(n)$ are first parsed by an output function $g(x)$ taking into account the operation of the readout layer and readout noise $\nu$. In all simulations the fixed level of readout noise is matched to the experimental conditions. When the complex-valued reservoir states are directly recorded, then $g(x) = x+\nu$ and the readout weights $w_i$ are complex too, such that $y$ is real. If, however, a PD measures the power of the neural responses, then $g(x)=|x|^2+\nu$, which is real-valued, and the readout weights will be real-valued too. Tasks are defined by the real-valued target output $\hat{y}$. Optimization of the readout weights occurs over a training set of $T_{train}$ input and target samples, and is achieved through least squares regression. This procedure minimizes the mean squared error between the reservoir output $y$ and target output $\hat{y}$, averaged over all samples.
\begin{equation} \label{eq:regression} \{w_i\} = \argmin_{\{w_i\}} \langle \left( \hat{y} - \sum_{i=1}^N w_i g(x_i) \right)^2 \rangle_{_{T_{train}}}. \end{equation} These optimized readout weights are then validated on a test set of $T_{test}$ new input and target samples. A common figure of merit to quantify the reservoir's performance is the normalized mean square error (NMSE) defined as \begin{equation} NMSE(y,\hat{y}) = \frac{\langle \left( y-\hat{y} \right)^2 \rangle_{_{T_{test}}}}{\langle \hat{y} ^2 \rangle_{_{T_{test}}}}. \end{equation} \subsection{Balanced Mach-Zehnder modulator operation} \label{section:MZM} Here we briefly investigate the relevant nonlinearities which occur when mapping an electronic signal to an optical signal using an MZM. The operation of our balanced MZM can be described as \begin{equation} \label{eq:MZM} \frac{E_{in}}{E_0} = \cos\left(\frac{V}{V_{\pi}}\frac{\pi}{2}\right) \end{equation} where $E_0$ represents the incident CW pump field, $E_{in}$ is the transmitted field, which will be the input field to the optical reservoir, $V_{\pi}$ determines at which voltage the zero intensity point occurs (point of no transmission), and $V$ is the voltage of the applied electronic signal consisting of a bias contribution $V_{b}$ and a zero-mean signal $V_s$, i.e. $V = V_{b}+V_{s}$. For our numerical investigation, we will set the amplitude of the signal voltage to $|V_s| = V_{\pi}/2$. First, we investigate the zero intensity bias point, $V_b=V_{\pi}$. In this case, we can approximate Eq.
\eqref{eq:MZM} with the following Taylor expansion \begin{align} \frac{E_{in}}{E_0} &= f(V_s) + \mathcal{O}\left(V_s^5 \right) \\ f(V_s) &= -\frac{\pi}{2V_{\pi}}V_s +\frac{1}{6} \left( \frac{\pi}{2V_{\pi}}\right)^3 V_s^3 \label{eq:taylorexpansion1} \end{align} With $\left(E_{in}/E_0\right)_{max}$ representing the maximal value of $\frac{E_{in}}{E_0}$ with the given bias voltage $V_b$ and signal amplitude $|V_s|$, the relative error $r.e.$ of the Taylor expansion \eqref{eq:taylorexpansion1} \begin{equation} \label{eq:relativeerror} r.e. = \frac{|\frac{E_{in}}{E_0}-f(V_s) |}{\left(\frac{E_{in}}{E_0}\right)_{max}} \end{equation} is smaller than $1\%$. When the cubic term ($\sim V_s^3$) of the approximation $f(V_s)$ is omitted, this error increases to $11\%$. This means that at this operating point of the MZM, there is a significant nonlinearity which scales with the input signal cubed. Next, we investigate the linear intensity operating point, $V_b = V_{\pi}/2$. Although the MZM's transfer function at this operating point is the most linear in terms of the transmitted optical power, it is highly nonlinear in terms of the transmitted optical field. In this case, we replace Eq. \eqref{eq:taylorexpansion1} with \begin{equation} f(V_s) = \frac{1}{\sqrt{2}} \left(1-\frac{\pi}{2V_{\pi}}V_s - \frac{1}{2} \left( \frac{\pi}{2V_{\pi}}\right)^2 V_s^2 + \frac{1}{6} \left( \frac{\pi}{2V_{\pi}}\right)^3 V_s^3 + \frac{1}{24} \left( \frac{\pi}{2V_{\pi}}\right)^4 V_s^4 \right), \end{equation} as we need all polynomial terms up to order 4 to keep the relative error defined by Eq. \eqref{eq:relativeerror} below $1\%$. In this case, omitting terms of orders above 1 in the approximation $f(V_s)$ increases the relative error of the Taylor expansion to $26\%$. This means that at this operating point of the MZM there are multiple polynomial nonlinearities and that the total nonlinear signal distortion is stronger compared with the zero intensity bias point.
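The quoted worst-case relative errors can be checked numerically from the transfer function of Eq. \eqref{eq:MZM} itself. The following is a sketch (not part of the experimental code), with $V_\pi$ normalized to 1 and the signal swing $|V_s| = V_\pi/2$ as above:

```python
import numpy as np

# Sanity check of the quoted Taylor-expansion errors for the MZM field
# transfer E_in/E_0 = cos(V/V_pi * pi/2), with V = V_b + V_s.
V_pi = 1.0
V_s = np.linspace(-V_pi / 2, V_pi / 2, 2001)   # signal swing |V_s| <= V_pi/2
x = (np.pi / (2 * V_pi)) * V_s                 # normalized modulation argument

def rel_err(V_b, model):
    """Worst-case relative error of a polynomial model of E_in/E_0."""
    exact = np.cos((V_b + V_s) / V_pi * np.pi / 2)
    return np.max(np.abs(exact - model)) / np.max(np.abs(exact))

# Zero-intensity bias (V_b = V_pi): exact transfer is -sin(x).
e_cubic  = rel_err(V_pi, -x + x**3 / 6)   # ~0.3% (< 1%)
e_linear = rel_err(V_pi, -x)              # ~11% without the cubic term

# Linear-intensity bias (V_b = V_pi/2): exact transfer is (cos x - sin x)/sqrt(2).
e_quartic = rel_err(V_pi / 2, (1 - x - x**2 / 2 + x**3 / 6 + x**4 / 24) / np.sqrt(2))  # < 1%
e_lin     = rel_err(V_pi / 2, (1 - x) / np.sqrt(2))                                    # ~26%
print(e_cubic, e_linear, e_quartic, e_lin)
```

The same script also confirms that at the linear-intensity bias point the fourth-order expansion is required to stay below the $1\%$ error level.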
Furthermore, during our experiments we have decided to operate the MZM in a linear regime. This allows for the nonlinear effects inside the reservoir to be more readily measured. To this end, we tuned the MZM close to the zero intensity operating point, $V_b=V_{\pi}-\delta_V$ with $\delta_V \ll V_{\pi}$, and reduced the signal amplitude $|V_s|$. The small deviation $\delta_V$ is used to generate a bias in the optical field injected into the reservoir. \subsection{Memory capacities} \label{section:memorycapacities} To benchmark the performance of an RC, one can train it to perform one or several benchmark tasks. Alternatively, there exists a framework to quantify the system's total information processing capacity. This capacity is typically split into two main parts: the capacity of the system to retain past input samples is captured by the linear memory capacity \cite{jaeger2002}, and the capacity of the system to perform nonlinear computation is captured by the nonlinear memory capacity \cite{dambre2012}. It is known that the total memory capacity has an upper bound given by the number of dynamical variables in the system, which in our system is the number of neurons in the reservoir. It is also known that readout noise reduces this total memory capacity, and that there is a trade-off between linear and nonlinear memory capacity, depending on the operating regime of the dynamical system. In order to measure these capacities for our reservoir computer, a series of independent and identically distributed input samples $u(n)$ drawn uniformly from the interval $[-1,1]$ is injected into the reservoir, with discrete time $n$. The RC is subsequently trained to reconstruct a series of linear and nonlinear polynomial functions depending on past inputs $u(n-i)$, looking back $i$ steps in the past. Following Ref.
\cite{dambre2012} these functions are chosen to be Legendre polynomials $P_d(u)$ (of degree $d$), because they are orthogonal over the distribution of the input samples. As an example, we can train the reservoir to reproduce the target signal $\hat{y}(n)$, given by \begin{equation} \label{eq:memorytask} \hat{y}(n) = P_2(u(n-1))P_1(u(n-3)). \end{equation} The ability of the RC to reconstruct each of these functions is evaluated by comparing the reservoir's trained output $y$ with the target $\hat{y}$ for previously unseen input samples. This yields a memory capacity $C$, which lies between $0$ and $1$ \cite{dambre2012}, \begin{equation} \label{eq:memorycapacity} C = 1-\frac{\langle \left(\hat{y}-y\right)^2 \rangle}{\langle \hat{y}^2 \rangle}, \end{equation} where $\langle . \rangle$ denotes the average over all samples used for the evaluation of $C$. Due to the orthogonality of the polynomial functions over the distribution of the input samples, the capacities corresponding to different functions yield independent information and can thus be summed to quantify the total memory capacity, i.e. the total information processing capacity of the RC. The memory functions are typically grouped by their total degree, which is the sum of degrees over all constituent polynomial functions, e.g. Eq. \eqref{eq:memorytask} has total degree 3. Summing all memory capacities corresponding with functions of identical total degree yields the total memory capacity per degree. This makes it possible to quantify the contributions of individual degrees to the total memory capacity of the RC, which is the sum over all degrees. Since the memory capacities become small for large degrees, the total memory capacity remains bounded. Since the reservoirs are trained and their performance is evaluated on finite data sets, we run the risk of overestimating the memory capacities $C$, whose estimator Eq. \eqref{eq:memorycapacity} is plagued by a positive bias \cite{dambre2012}.
Therefore, a cutoff capacity $C_{co}$ is used ($C_{co}\approx 0.1$ for 1000 test samples) and capacities below this cutoff are neglected (i.e. they are assumed to be 0). Note that the trade-off between linear and nonlinear memory capacity is typically evaluated by comparing the total memory capacity of degree 1 (linear) with the total memory capacity of all higher degrees (nonlinear). However, special attention is due when a PD is present in the readout layer of our RC. If a reservoir can (only) linearly retain past inputs $u(n-i)$ ($i$ steps in the past) then any neural response $x(n)$ consists of a linear combination (with a bias term $b$ and fading coefficients $a_i$) of those past inputs \begin{equation} \label{eq:linearreservoirstate} x(n) = b + \sum_i a_iu(n-i) \end{equation} and subsequently the optical power $P_x$ measured by the PD is given by \begin{equation} \label{eq:PD} P_x(n) = x(n)\bar{x}(n) = |b|^2 + \sum_{i}2\,\text{Re}(b\bar{a}_i)u(n-i) + \sum_{i,j}\text{Re}(a_i\bar{a}_j)u(n-i)u(n-j) \end{equation} which consists of polynomial functions of past inputs of degree 1 and 2. Thus, in this case the total linear memory capacity of the RC is represented by the total memory capacity of degrees 1 and 2 combined. If the bias term $b$ is absent, only memory capacities of degree 2 will be present. On the other hand, if a PD is used in the output and memory capacities of degree higher than 2 are present, then this indicates that the reservoir itself is not linear, i.e. cannot be represented by a function of the form Eq. \eqref{eq:linearreservoirstate}. \section{Results} \subsection{Numerical RC performance: Santa Fe time series prediction} For the injection of input samples to the optical reservoir, we consider two strategies as discussed in Section \ref{section:setup} and in Figs. \ref{fig:setup_inputs_outputs}(a) and (b), referred to here as the linear and nonlinear input regimes respectively.
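As a minimal sketch of these two injection strategies (the mask values, neuron count, sample count and bias point below are illustrative choices, not the experimental settings):

```python
import numpy as np

# Sketch of the two input regimes: masked drive I^(n)(tau) = u(n) m(tau),
# either injected linearly or passed through the MZM field transfer.
rng = np.random.default_rng(0)
N, n_samples = 20, 4
m = rng.uniform(-1.0, 1.0, size=N)            # piecewise-constant mask m(tau)
u = rng.uniform(-1.0, 1.0, size=n_samples)    # input series u(n)
I = np.outer(u, m)                            # rows: samples n, columns: mask steps

# (a) Linear input regime: injected field proportional to the masked drive.
E_in_linear = I

# (b) Nonlinear input regime: masked drive applied to an MZM with field
# transfer cos(V/V_pi * pi/2), here biased at the linear-intensity point.
V_pi, V_b = 1.0, 0.5
E_in_mzm = np.cos((V_b + 0.5 * V_pi * I) / V_pi * np.pi / 2)
```

In case (a) the mapping from $u(n)$ to the injected field is exactly linear, whereas in case (b) the MZM's cosine transfer distorts the masked drive before it enters the cavity.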
The exact shape of the nonlinearity in the nonlinear regime depends, among other things, on the operating point (or bias voltage) of the MZM, as discussed in Section \ref{section:MZM}. We will demonstrate this by showing results around both the linear intensity operating point and the zero intensity operating point of the MZM. For the readout of the reservoir response, we also consider two cases as discussed in Section \ref{section:setup} and in Figs. \ref{fig:setup_inputs_outputs}(c) and (d), referred to here as the linear and nonlinear output regimes respectively. We have thus identified 4 different scenarios based on the absence or presence of nonlinearities in the input and output layer of the reservoir computer. For each of these cases, we numerically investigated the effect of the distributed Kerr nonlinearity of the fiber waveguide on RC performance. For this evaluation, we have used $100$ neurons to solve the Santa Fe time series prediction task \cite{weigend1993} and each input sample is injected during 6 roundtrips ($t_S=kt_M$ with $k=6$) for reasons which will become clear in Section \ref{section:experimentalresults}. Here, a pre-existing signal generated by a laser operating in a chaotic regime is injected into the reservoir. The target at each point in time is for the reservoir computer to predict the next sample. Performance is evaluated using the NMSE, where lower is better. Fig. \ref{fig:numericalresults} has 4 panels corresponding to these 4 scenarios. Each panel shows the NMSE as a function of the average optical power per neuron inside the cavity. Dashed blue lines correspond with simulation results of linear reservoirs (i.e. with the nonlinear coefficient $\gamma$ set to $0$), and full red lines correspond with simulation results of reservoirs with Kerr-nonlinear waveguides (i.e. $\gamma$ set to $\gamma_{Kerr}$). In Fig.
\ref{fig:numericalresults}(a) both the input and output layers of the reservoir are strictly linear (i.e. optical input and coherent detection). It is clear that the linear reservoir ($\gamma=0$) scores poorly, with the NMSE approaching $20\%$. For a wide range of optical power levels, the presence of the Kerr nonlinear effect ($\gamma=\gamma_{Kerr}$) induced by the fiber waveguide boosts the RC performance, with an optimal NMSE just below $1\%$. This can be readily understood, as it is well known that for this task some nonlinearity is required in order to obtain good RC performance. Note that the average neuron power $\langle P_x \rangle$ can be used to estimate the average nonlinear phase $\phi_{Kerr}$ the signals will acquire during the sample duration $t_S$, as $\phi_{Kerr}=\gamma_{Kerr}\langle P_x \rangle L t_S/t_M$. We observe that without the presence of phase noise in the cavity, the boost to the RC performance due to the Kerr effect starts at very small values of the estimated nonlinear phase, and breaks down when $\phi_{Kerr} \gtrsim 1$. Switching to Fig. \ref{fig:numericalresults}(b), we have now introduced the square nonlinearity by using a PD in the readout layer. Focusing on the results obtained with a linear reservoir, we see that the PD's nonlinearity alone decreases the NMSE from $20\%$ to approximately $5\%$ ($\gamma=0$). Although the PD's nonlinearity clearly boosts the RC performance on this task, its effect is rather restricted. The PD only generates squared terms, and linear terms if a bias is present (see Section \ref{section:memorycapacities}), depending on the MZM's operating point. Furthermore, this nonlinearity affects neither the neural responses nor the operation of the reservoir itself, as it only applies to the readout layer. It can thus be understood that the introduction of the Kerr nonlinearity inside the reservoir brings an additional significant drop in NMSE, to below $1\%$ ($\gamma=\gamma_{Kerr}$). In Fig.
\ref{fig:numericalresults}(c), the output layer is linear again, but now we have introduced the MZM in the input layer. The closed markers correspond with simulations where the MZM operates around the zero intensity operating point or the point of minimal transmission ($V_{bias}=V_{\pi}$). In terms of the optical field modulation, this is the most linear regime. It is thus no surprise that the performance of both linear and nonlinear reservoirs mimics that of Fig. \ref{fig:numericalresults}(a) where no nonlinearity was present in the input layer. The only difference is that the error of the linear reservoir drops from $20\%$ to about $13\%$ ($\gamma=0$, $V_{bias}=V_{\pi}$) because of the small residual nonlinearity at this operating point of the MZM. The round markers correspond with simulations where the MZM operates around the linear intensity operating point ($V_{bias}=V_{\pi}/2$). In terms of the optical field modulation, the mapping of input samples to the optical field injected into the reservoir is more strongly nonlinear at this operating point. This is why even the linear reservoir manages to achieve errors below $4\%$ ($\gamma=0$, $V_{bias}=V_{\pi}/2$). Again we see that the introduction of the nonlinear Kerr effect allows the NMSE to drop even further, to below $1\%$ ($\gamma=\gamma_{Kerr}$). In fact, this scenario is similar to the scenario with linear input mapping and nonlinear output mapping, Fig. \ref{fig:numericalresults}(b). Finally, in Fig. \ref{fig:numericalresults}(d), nonlinearities are present in both the input mapping and readout layer. With the MZM operating around the zero intensity operating point, there is only a weak nonlinearity in the input mapping and thus, as expected, both linear and nonlinear reservoirs show trends which are very similar to the scenario where the input mapping is linear, Fig. \ref{fig:numericalresults}(c).
With the MZM operating around the linear intensity operating point ($V_{bias}=V_{\pi}/2$), however, we observe a scenario in which the RC does not seem to benefit from the presence of the Kerr nonlinear effect. It seems that with significant nonlinearities present in both input and output layers of the RC, the distributed nonlinear effect inside the reservoir cannot further decrease the NMSE below the values attained by the linear reservoir, which are already below $1\%$ ($V_{bias}=V_{\pi}/2$). In all other cases, Figs. \ref{fig:numericalresults}(a,b,c), we find that the distributed nonlinearity inside the reservoir significantly boosts RC performance, and we find that its presence is critical when no other nonlinearities are available. \begin{figure}[h!] \begin{center} \includegraphics[width=13.5cm]{numericalresults_labels} \end{center} \caption{Numerical results of the fiber-ring reservoir computer on the Santa Fe time series prediction task. In all panels the prediction error (NMSE) is plotted versus the average neuron power $\langle P_x \rangle$. Panels \textbf{(a)} and \textbf{(b)} correspond with a linear input layer, while panels \textbf{(c)} and \textbf{(d)} correspond with a nonlinear input layer using the MZM's nonlinear transfer function. The nonlinear input regime shows results for 2 different operating points of the MZM with different strengths of nonlinear transformation. Panels \textbf{(a)} and \textbf{(c)} correspond with a linear output layer, while panels \textbf{(b)} and \textbf{(d)} correspond with a nonlinear output layer using the PD.}\label{fig:numericalresults} \end{figure} \subsection{Experimental verification: linear and nonlinear memory capacity} \label{section:experimentalresults} In this Section we compare experimental results with detailed numerical simulations. For the experimental verification of our work, we are currently limited to operating with 20 neurons, as explained in Section \ref{section:setup}.
Therefore, we have chosen not to perform the reservoir computing experiment on the Santa Fe task, which becomes hard for the reservoir with so few neurons. Instead we turn to a more academic task which allows us to quantify the reservoir's memory and nonlinear computational capacity in a more complete and task-independent way. We experimentally measure the linear and nonlinear memory capacities considered in Section \ref{section:memorycapacities}. Even with so few neurons, the evaluation of the memory capacities can yield meaningful results while taking up comparatively little processing time. For these experiments, the input layer to our fiber-ring reservoir contains a balanced MZM tuned to operate in a linear regime as outlined in Section \ref{section:MZM}. The output layer employs a PD to measure the neural responses. That is, we use the setups of Figs. \ref{fig:setup_inputs_outputs}(b) and (d) but with the MZM operated as in Eq. \eqref{eq:taylorexpansion1}. Following Ref. \cite{dambre2012}, we have driven the reservoir with a series of independent and identically distributed random samples and trained the RC to reproduce different linear and nonlinear polynomial functions of past input samples. The capacity of the reservoir to reconstruct these functions was then evaluated and results were grouped according to the function's polynomial degree. To keep the results surveyable, we will only show the total capacity per degree, by summing all capacities corresponding with functions of the same total polynomial degree. In Fig. \ref{fig:experimentalverification} we show the total memory capacity per degree, encoded in the height of vertically stacked and color-coded bars. The stacking makes it possible to visualize the contributions of individual degrees to the total overall memory capacity (summed over all degrees).
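As a toy illustration of how each individual capacity of Eq. \eqref{eq:memorycapacity} is obtained, the sketch below uses hand-built features standing in for the measured neural responses (delays, degrees and sample counts are illustrative; this is not the fiber-ring model). A target spanned by the features yields a capacity near 1, while an orthogonal target stays below the cutoff $C_{co}$:

```python
import numpy as np

# Toy capacity estimate: the feature matrix stands in for recorded neural
# responses and simply contains a constant, delayed inputs and their squares.
rng = np.random.default_rng(0)
T, T_train = 3000, 2000
u = rng.uniform(-1.0, 1.0, size=T)      # i.i.d. input samples u(n) on [-1, 1]

feats = [np.ones(T)]
for d in (1, 2, 3):                     # delayed copies u(n-d) and u(n-d)^2
    ud = np.roll(u, d)
    feats += [ud, ud**2]
X = np.stack(feats, axis=1)

def capacity(target):
    """C = 1 - <(y_hat - y)^2>/<y_hat^2>, readout trained by least squares."""
    w, *_ = np.linalg.lstsq(X[:T_train], target[:T_train], rcond=None)
    y, y_hat = X[T_train:] @ w, target[T_train:]
    return 1.0 - np.mean((y_hat - y) ** 2) / np.mean(y_hat ** 2)

P1 = lambda v: v                        # Legendre polynomial P1
P2 = lambda v: 0.5 * (3 * v**2 - 1)     # Legendre polynomial P2

c_deg1 = capacity(P1(np.roll(u, 1)))                      # spanned -> close to 1
c_deg3 = capacity(P2(np.roll(u, 1)) * P1(np.roll(u, 3)))  # orthogonal -> near 0
```

The degree-3 target of Eq. \eqref{eq:memorytask} requires a cross term these features do not contain, so its estimated capacity fluctuates around zero and would be discarded by the cutoff $C_{co}$.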
Capacities of degree higher than 4 are not considered, as they were found not to contribute significantly to the total memory capacity of the system. For results labeled \textit{bias off} the MZM operates at the zero-intensity point ($V_{bias}=V_{\pi}$), and moving towards the \textit{bias on} label, we tuned the MZM's bias voltage ($V_{bias}=V_{\pi}-\delta_V$, with $\delta_V \ll V_{\pi}$). This introduces a small bias component to the optical field injected into the reservoir, without compromising the linear operation of the MZM. The experiment was also repeated for different values of the sample duration $t_S$ with respect to the input mask periodicity $t_M$ (approximately equal to the cavity roundtrip $t_R$). We expect the sample duration to play a very important role, since it determines how much time a piece of information spends inside the cavity, and thus how much nonlinear phase can be acquired. The ratio $t_S/t_M$ is gradually increased from $t_S=2t_M$ in (first row) Figs. \ref{fig:experimentalverification}(a), (b) and (c), to $t_S=6t_M$ in (middle row) Figs. \ref{fig:experimentalverification}(d), (e) and (f), and finally to $t_S=10t_M$ in (bottom row) Figs. \ref{fig:experimentalverification}(g), (h) and (i). The experimental results in (left column) Figs. \ref{fig:experimentalverification}(a),(d) and (g) are compared with numerical results on a linear reservoir ($\gamma=0$) in (middle column) Figs. \ref{fig:experimentalverification}(b), (e) and (h), and a nonlinear reservoir ($\gamma=\gamma_{Kerr}$) in (right column) Figs. \ref{fig:experimentalverification}(c), (f) and (i). Firstly, in Fig. \ref{fig:experimentalverification}(a) we observe that without bias to the optical input field ($V_{bias}=V_{\pi}$) the total memory capacity originates almost completely from the polynomial functions of degree 2, which means (given the presence of the PD in the readout layer) that the optical system is almost completely linear.
Then, as an optical field bias is introduced, we find that the total linear memory capacity of the system is now shared between degrees 1 and 2. As expected on account of the quadratic nonlinearity due to the PD, Eq. \eqref{eq:PD}, the contribution of (odd) degree 1 grows with the increasing bias. Beyond these capacities of degree 1 and 2, we also observe a small contribution of capacities of degrees 3 and 4. We ascribe these contributions to the imperfect tuning of the MZM and thus a small residual nonlinearity in the input mapping. Note that the simulations take into account the quasi-linear input mapping of the MZM, but seemingly underestimate these residual nonlinearities, predicting them to be insignificant. The imperfection of the MZM tuning also leads to a small residual bias component to the injected optical field, resulting in a small non-zero capacity of degree 1. Numerical simulations of linear ($\gamma=0$) and nonlinear ($\gamma=\gamma_{Kerr}$) reservoirs in Figs. \ref{fig:experimentalverification}(b) and (c) respectively, show the same growth in the memory capacity of degree 1 at the expense of the memory capacity of degree 2 when the bias is changed. Note that both simulations seem to overestimate the minimal bias required to obtain a significant memory capacity of degree 1. At this sample duration ($t_S=2t_M$) neither simulation indicates any significant contributions of capacities with degrees beyond 2. When increasing the sample duration ($t_S=6t_M$ and $t_S=10t_M$), the experimental results in Figs. \ref{fig:experimentalverification}(d) and (g) show a steady increase in the contributions of capacities with degrees 3 and 4. This increase is attributed to the nonlinear Kerr effect, due to the larger accumulation of nonlinear phase during the time each sample is presented to the reservoir. At the same time we see a decrease in the capacities of degrees 1 and 2.
As explained before, due to the PD these capacities capture the reservoir's capacity to linearly retain past samples. This trade-off between linear memory capacity (here degrees 1 and 2) and nonlinear computational capacity (here degrees 3 and 4) is well documented \cite{dambre2012}. Because we use the sample duration ($t_S=kt_M\approx kt_R$) to control the cumulative nonlinear effect inside the reservoir, we inevitably increase the mismatch between the inherent timescale of the input data (i.e. the sample duration $t_S$) and the inherent timescale of the reservoir (i.e. the cavity roundtrip $t_R$), and alter the reservoir's internal topology. When each sample is presented longer, past samples have spent more time inside the lossy cavity by the time they are accessed through the reservoir's noisy readout. Thus, on the longer timescales ($t_S$) at which information is now processed, it is harder for the reservoir (operating at timescale $t_R$) to retain past information. These aspects explain why the overall total memory capacity (summed over all degrees) decreases with increased sample duration $t_S$. The numerical results on both the linear reservoir ($\gamma=0$) in Figs. \ref{fig:experimentalverification}(e) and (h) and the nonlinear reservoir ($\gamma=\gamma_{Kerr}$) in Figs. \ref{fig:experimentalverification}(f) and (i) correctly predict a drop in the total linear memory capacities (degrees 1 and 2). Due to the memory capacity cutoff explained in Section \ref{section:memorycapacities}, small capacities are harder to quantify accurately and systematic underestimation can occur. This explains why the small total memory capacities obtained experimentally are larger than the corresponding small capacities obtained numerically. The correspondence for large total memory capacities is better as they are largely unaffected by the cutoff.
But besides the drop in linear memory capacities, only the nonlinear reservoir model can explain the steady increase in nonlinear memory capacities (degrees 3 and 4) with longer sample durations. With increasing sample duration $t_S$ the simulated nonlinear reservoir shows the contribution of the total nonlinear memory capacity (degrees 3 and 4) to the total memory capacity (all degrees) growing from $0\%$ to $25.4\%$, while in the experiment this contribution starts at $6.4\%$ and grows up to $23.6\%$. This sizable increase in nonlinear computation capacity can be of considerable significance to the reservoir's performance on other tasks, as shown earlier. When comparing the experimental results with the nonlinear reservoir model for all given sample durations $t_S$, the main difference is that the capacities of degree 3 seem to appear sooner (i.e. for smaller sample durations) in the experiment. This can be explained by the residual bias component of the injected optical field: combined with the quadratic nature of the Kerr nonlinearity, such a bias makes it easier to produce polynomial functions of odd degree, thus explaining their earlier onset, as the reasoning previously applied to the quadratic nonlinearity of the PD in Eq. \eqref{eq:PD} can be generalized to memory capacities of higher degree. \begin{figure}[h!] \begin{center} \includegraphics[width=15cm]{experimentalresults} \end{center} \caption{Comparison between experimental results \textbf{(a,d,g)} and numerical models with linear ($\gamma=0$) \textbf{(b,e,h)} and nonlinear ($\gamma=\gamma_{Kerr}$) reservoirs \textbf{(c,f,i)}. The stacked vertical bars are color-coded to represent the total memory capacities (TMC) of degree 1 (blue), 2 (red), 3 (orange), and 4 (purple). As such, the total height represents the total overall memory capacity.
A control variable of the MZM, $\delta_V$, is varied to add a small bias component to the injected optical field, where \textit{bias off} corresponds to $\delta_V=0$ and \textit{bias on} corresponds to a small nonzero value $0<\delta_V\ll V_{\pi}$. The sample duration $t_S$ is varied from 2 times \textbf{(a,b,c)}, to 6 times \textbf{(d,e,f)} and finally to 10 times \textbf{(g,h,i)} the input mask period $t_M$ ($\approx$ cavity roundtrip time $t_R$).}\label{fig:experimentalverification} \end{figure} \section{Discussion} We have identified and investigated the role of nonlinear transformation of information inside a photonic computing system based on a passive coherent fiber-ring reservoir. Nonlinearities can occur at different places inside a reservoir computer: the input layer, the bulk and the readout layer. State-of-the-art opto-electronic RC systems often include one or several components which inevitably introduce nonlinearities into the computing system. On the reservoir's input side, we have compared a linear input regime with the use of an MZM, which has a nonlinear transfer function, to convert electronic data to an optical signal. On the reservoir's output side, we have compared a linear output regime with the use of a PD, which measures optical power levels that scale quadratically with the optical field strength of the neural responses. We numerically evaluated such systems using a benchmark test and found that nonlinear input and/or output components are needed to obtain good RC performance when the optical reservoir itself (i.e. the core of the RC system) is a strictly linear system. Internal to the reservoir, we investigated the impact of the distributed optical Kerr effect on RC performance. Our numerical benchmark test showed a large band of optical powers where the presence of this distributed nonlinear effect, caused by the waveguiding material of the reservoir, significantly decreased the RC's error figure.
Our numerical and experimental measurements of the linear and nonlinear memory capacity of this RC system showed that the accumulation of nonlinear phase due to the distributed nonlinear Kerr effect strongly improves the system's nonlinear computational capacity. We can thus conclude that for photonic reservoir computers with nonlinear input and/or output components, the presence of a distributed nonlinear effect inside the optical reservoir improves the RC performance. Furthermore, the distributed nonlinearity is essential for good performance in the regime where nonlinearities are absent from both the input and output layers. This may be the case in an all-optical reservoir computer (i.e. with optical input and output layers). We have shown that the effect of the distributed nonlinearity is strong enough to compensate for the lack of nonlinear transformation of information elsewhere in the system, and that it allows one to build a computationally strong photonic computing system. Finally, we expect a design approach including distributed nonlinear effects to improve the scalability of these types of computational devices. In general, when harder tasks are considered, larger reservoirs are required. One way to increase the size of a delay-based reservoir is to implement a longer delay line. This increase in length of the signal propagation path naturally increases the effect of distributed nonlinearities as considered in this work. Similarly, increasing the size of a network-based reservoir will also lead to more and/or longer signal paths, resulting in the increased accumulation of nonlinear effects, although waveguides with stronger nonlinear effects may have to be considered to compensate for the shorter connection lengths in on-chip implementations.
We believe that the natural increase in the strength of nonlinear effects, following the increase in size of the reservoir, may diminish the need to place discrete nonlinear components inside large networks used for strongly nonlinear tasks. As such, both the complexity and cost of such systems would be reduced. Since the waveguiding material itself is used to induce nonlinear effects, the waveguide properties (such as material and geometry) determine the optical field confinement and thus regulate the strength of nonlinear interactions. Consequently, it may be possible to create reservoirs where deliberate variations in the waveguide properties are used to tune the strength of the distributed nonlinear effect in different regions of the system. This would allow for a trade-off between the system's linear memory capacity and its nonlinear computational capacity, such that a large number of past input samples can be retained (in some parts of the system) and then nonlinearly processed to solve difficult tasks (in other parts of the system). These considerations indicate why distributed nonlinear effects may play a major role in future implementations of powerful photonic reservoir computers. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} The idea was first conceived by GVdS and finalized together with GV and SM. JP was responsible for the physical modelling, the numerical calculations and the experimental verification and wrote most of the manuscript. All coauthors contributed to the discussion of the results and writing of the manuscript.
\section*{Funding} We acknowledge financial support from the Research Foundation Flanders (FWO) under grants 11C9818N, G028618N and G029519N, the Fonds de la Recherche Scientifique (FRS-FNRS), the Hercules Foundation and the Research Council of the VUB. \section*{Data Availability Statement} The data used in this study for the Santa Fe prediction task \cite{weigend1993} is one of the data sets from the “Time Series Prediction Competition” sponsored by the Santa Fe Institute, initiated by Neil Gershenfeld and Andreas Weigend in the early 1990s; no licenses/restrictions apply. No further datasets were used or generated. \bibliographystyle{frontiersinHLTH&FPHY}
\section{Introduction} Even though superconductivity was discovered experimentally in 1911 by Kamerlingh Onnes \cite{Onnes}, a sufficiently predictive microscopic theory was not available until the work of Bardeen, Cooper and Schrieffer, published in 1957 \cite{BCSoriginal} (known today as BCS theory). One of the fundamental features of this construction is the realization of the ground state of the system as a grand canonical state of fermionic pairs. Following these steps, several other many-body systems have benefited from these insights and extended the result to other non-trivial Gaussian states. Another seminal landmark of many-body physics is Onsager's solution of the two-dimensional Ising model, published more than a decade earlier \cite{Onsager}. Despite its cumbersome original formulation, the current understanding of this model is closely related to BCS theory. More concretely, the ground state of the associated one-dimensional quantum spin chain can also be constructed from a condensate of fermionic pairs \cite{Henkel, Sachdev, Mussardo}. This is a remarkable result if we consider that both models have very different descriptions and applications. BCS theory has remained an important starting point for the analysis of more exotic phenomena. For instance, in the past few decades two-dimensional superconductors have become testbeds for novel topological features, some of them closely related to the fractional quantum Hall (FQH) effect. Read and Green \cite{ReadGreen, Miguel} established a connection between the weak pairing regime of the $p+ip$ superconductor and the topological phase defined by the Moore-Read Pfaffian state \cite{MooreRead91}. The robustness of these phases to local perturbations has turned them into strong candidate schemes for quantum computing \cite{NayakAnyons}.
One of the shared tools for the study of these strongly correlated quantum systems is conformal field theory (CFT) \cite{BPZ, Tsvelik, QuantumGroupsCFT, diFrancesco, Gogolin, Mussardo}. Due to their symmetry constraints, these theories have powerful algebraic structures that may allow for exact solutions. This has been exploited in the construction of trial wave functions for many-body systems, both on the lattice and in the continuum. This is done by computing correlators in the CFT and using them as variational wave functions. The most famous applications have been in FQH physics \cite{MooreRead91}; however, this approach has also been used to study 1D spin systems using infinite matrix product states (iMPS) \cite{iMPS, iMPS2, KL, FQH-MPS}. In this latter case, the entanglement structure has some features that cannot be easily obtained from finite matrices, such as logarithmic scaling of the entanglement entropy \cite{iMPS}. It has been argued that conformal blocks (CBs) of rational CFTs can be used to construct wave functions for lattice spin systems \cite{WFfromCB}. Even if there is no straightforward spin-like structure arising from the representation of internal symmetries (for instance, in the case of minimal CFTs \cite{BPZ}), the physical degrees of freedom can still be encoded in the different fusion channels of non-Abelian operators. This was illustrated using the Ising CFT, where the relevant CBs were obtained from chiral correlators of several spin operators $\sigma$, grouped in pairs to describe two-level systems. In this paper, we provide further characterization of the many-body lattice wave functions obtained from the Ising CFT. In particular, we show both analytic and numerical evidence that states describing $N$ spins obtained from the CBs of $2N$ $\sigma$ fields (dubbed $\ket{\psi_{ee}}$) can be understood as BCS wave functions. This article is organized as follows: Sect. II presents a short review of BCS states. Sect. III and Sect.
IV introduce the notion of vertex operators. We use this formalism to write $\ket{\psi_{ee}}$ and other related states as matrix product states with continuous ancillary degrees of freedom. In Sect. V, we develop a first-order operator product expansion (OPE) that allows us to write $\ket{\psi_{ee}}$ as an explicit (albeit approximate) BCS state. Sect. VI reviews the exact formulas for the Ising CBs. In Sect. VII, Sect. VIII and Sect. IX, we study in detail translationally invariant 1D states. We prove that in this case $\ket{\psi_{ee}}$ can be written as a BCS state in an exact manner and find suitable parent Hamiltonians. In particular, we prove that the ground-state of the finite critical Ising transverse field (ITF) spin chain corresponds to $\ket{\psi_{ee}}$ for a homogeneous configuration. In Sect. X, we study 1D excitations by means of wave functions obtained from CBs with different asymptotic boundary conditions. Sect. XI presents some general remarks about the problem of writing $\ket{\psi_{ee}}$ as a BCS state for an arbitrary coordinate configuration. Finally, in Sect. XII, we provide a brief study of the 2D states obtained from the OPE regime. We use both the entanglement spectrum \cite{LiHaldane} and the scaling of the entanglement entropy to relate these states to the weak pairing phase of the $p+ip$ superconductor. \section{BCS wave functions: a short review} Given a collection of (spinless) fermionic modes $\{c_n\}_{n=1}^N$ on a lattice, we can define the BCS many-body wave function \begin{equation} \ket{\psi_\text{BCS}} = \prod_{n<m}\left(u_{nm}+v_{nm}c_n^\dagger c_m^\dagger\right)\ket{0}_c, \end{equation} where $\ket{0}_c$ is the state annihilated by all the operators $c_n$, and $u_{nm}$, $v_{nm}$ are complex numbers that satisfy the normalization condition $|u_{nm}|^2+|v_{nm}|^2 = 1$. Furthermore, we impose $u_{nm}=u_{mn}$ and $v_{nm}=-v_{mn}$. 
This state can be written as \begin{equation} \ket{\psi_\text{BCS}} = C_N \exp\left(\sum_{n<m}g_{nm}c_n^\dagger c_m^\dagger \right)\ket{0}_c, \end{equation} where $g_{nm}=v_{nm}/u_{nm}$ is the pairing function (or more generally pairing matrix) and $C_N=\prod_{n<m} u_{nm}$ is a normalization constant. Note that $g_{nm}$ is a (generally complex) antisymmetric tensor $g_{nm}=-g_{mn}$. We can interpret this wave function as a grand canonical state of pairs created by the operator $P=\sum_{n<m}g_{nm}c_n^\dagger c_m^\dagger$. From the fermionic anticommutation relations, it can be shown that the wave function amplitude for $2M$ fermions occupying sites $r(1)<\cdots<r(2M)$ is given by \begin{equation} \Psi(r(1),\cdots,r(2M))= C_N \text{Pf}(\textbf{M}), \end{equation} where $\textbf{M}$ is the $2M\times 2M$ antisymmetric matrix \begin{equation} (\textbf{M})_{ij} = g_{r(i), r(j)}, \end{equation} and we make use of the Pfaffian \begin{equation} \text{Pf}(\textbf{Q}) = \frac{1}{2^{M}M!}\sum_{\sigma\in S_{2M}} \text{sgn}(\sigma) \prod_{j=1}^M (\textbf{Q})_{\sigma(2j-1),\sigma(2j)}. \label{Pfaffian} \end{equation} BCS wave functions are Gaussian states that arise naturally from mean-field solutions of Hamiltonians describing superconductivity \cite{Sachdev, ReadGreen}. In that context, both $u_{ij}$ and $v_{ij}$ can be written in terms of single-particle energies $\epsilon_k$ and the pairing interaction potential $V_{k,k'}$. Since the fermions are spinless, these states correspond to $p$-wave superconductivity, as the wave functions for the spatial degrees of freedom are antisymmetric. The aim of this paper is to provide an alternative route to the BCS state using CFT. More precisely, we shall consider states obtained from the chiral conformal blocks of the critical Ising model and relate them to known BCS states.
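The Pfaffian in Eq. \eqref{Pfaffian} can be evaluated directly from the permutation sum for small systems. The sketch below is an illustrative helper written for this text (not code from this work); it is factorially expensive and meant only to check amplitudes for a few fermions:

```python
import math
from itertools import permutations

import numpy as np

def perm_sign(p):
    """Sign of a permutation from its inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def pfaffian(Q):
    """Pfaffian of an antisymmetric 2M x 2M matrix via the permutation sum."""
    n = len(Q)
    M = n // 2
    total = 0.0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for j in range(M):
            term *= Q[p[2 * j], p[2 * j + 1]]
        total += term
    return total / (2**M * math.factorial(M))
```

For a $4\times 4$ antisymmetric matrix this reproduces the expansion $\text{Pf}(\textbf{Q}) = Q_{12}Q_{34}-Q_{13}Q_{24}+Q_{14}Q_{23}$ and satisfies $\text{Pf}(\textbf{Q})^2=\det(\textbf{Q})$; the BCS amplitude for $2M$ fermions then follows by feeding in the pairing submatrix $(\textbf{M})_{ij}=g_{r(i),r(j)}$.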
\section{Vertex operators in the chiral Ising CFT} The (chiral) Ising CFT is a minimal rational CFT (RCFT) that consists of three primary fields, $\mathds{1}$, $\chi$ (Majorana), and $\sigma$ (spin) with conformal weights $0$, $1/2$, and $1/16$, respectively. They have the (non-trivial) fusion rules \cite{diFrancesco} \begin{equation} \sigma\times\sigma = \mathds{1} + \chi, \qquad \chi\times\chi = \mathds{1}, \qquad \sigma\times\chi = \sigma. \end{equation} A conformal block (CB) in an RCFT is a chiral correlator that encodes an allowed fusion channel for a given set of primary fields. If we start with $N$ primaries $\{\phi_{j_n}\}$, a CB can be written as \cite{QuantumGroupsCFT, MooreSeiberg, MooreSeibergNotes} \begin{equation} \mathcal{F}_\mathbf{p}(z_1,\cdots,z_N) = \braket{\prod_{n=1}^N \phi_{j_n}(z_n)}_\mathbf{p}, \end{equation} where $\mathbf{p}$ labels the internal channels. The number of conformal blocks of this type depends on the possible allowed fusion channels of the $\phi_{j_n}$ fields. The exact formulas for the CBs obtained from the Ising primary fields have been calculated in Refs.~\cite{NayakWilczek,IsingCB}. We will be interested mainly in CBs containing only $2N$ spin field operators \begin{equation} \mathcal{F}^{(2N)}_\mathbf{p} (z_1,\cdots,z_{2N}) = \braket{\sigma(z_1)\cdots\sigma(z_{2N})}_\mathbf{p}. \label{sigmaCBs} \end{equation} Given the fusion rules, a pair of $\sigma$ fields can be seen as a single degree of freedom \cite{WFfromCB}. This allows us to write the fusion channels in terms of local binary variables. In order to see this explicitly, we group the fields in reference pairs $[\sigma(z_{2n-1}),\sigma(z_{2n})]$. When they are fused pairwise, the different channels can be labeled using the vector $\mathbf{m}=(m_1,\cdots,m_N)$, with $m_i=0$ ($1$) representing an identity operator $\mathds{1}$ (a fermion $\chi$).
In this representation, the conservation of fermion parity enforces the constraint $\sum_i m_i \equiv 0 \ (\text{mod } 2)$. The local pair-wise fusion produces bilocal chiral vertex operators \begin{equation} V_{ac}^b(z_{2n-1},z_{2n}) : \mathcal{V}_c \to \mathcal{V}_a, \qquad a,b,c = \mathds{1}, \chi \end{equation} where $b=m_n$ corresponds to the fusion channel of reference pair $[\sigma(z_{2n-1}),\sigma(z_{2n})]$, $\mathcal{V}_a$ are the Verma modules associated with the corresponding primary fields, and we require the conservation of fermionic parity at each vertex [Fig.~\ref{bilocalvertex}]. We can use these operators to express explicitly the inner structure of each CB. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{bilocalvertex} \caption{Graphical representation of the bilocal vertex operator $V^b_{ac}(z_1,z_2)$.} \label{bilocalvertex} \end{figure} \section{Many-body lattice states from Ising conformal blocks} Let us now consider the $2^N$-dimensional Hilbert space $\H$ obtained from $N$ spinless fermionic modes $\{c_n\}_{n=1}^N$ and define the $2\times 2$ operator matrix \begin{equation} A^{(n)}(z_{2n-1}, z_{2n}) = \begin{pmatrix} V_{\mathds{1} \mathds{1}}^\mathds{1} & c_n^\dagger V_{\mathds{1}\chi}^\chi \\ c_n^\dagger V_{\chi \mathds{1}}^\chi & V_{\chi \chi}^\mathds{1} \end{pmatrix}.
\label{Amatrix} \end{equation} This yields the map \begin{equation} A^{(n)}(z_{2n-1},z_{2n}) : \left( \begin{array}{c} \mathcal{V}_\mathds{1}\otimes \H_e \\ \mathcal{V}_\chi \otimes \H_o \end{array} \right) \to \left( \begin{array}{c} \mathcal{V}_\mathds{1}\otimes \H_e \\ \mathcal{V}_\chi \otimes \H_o \end{array} \right), \end{equation} where $\H_e$ ($\H_o$) is the Fock space with even (odd) number of fermions \begin{align} \H_e &= \left\{\ket{0}_c, c_{i_1}^\dagger c_{i_2}^\dagger\ket{0}_c,\cdots\right\}, \\ \H_o &= \left\{ c_{i_1}^\dagger\ket{0}_c, c_{i_1}^\dagger c_{i_2}^\dagger c_{i_3}^\dagger\ket{0}_c,\cdots\right\}, \nonumber \end{align} so that $\H=\H_e\oplus\H_o$. The product of $N$ matrices of type $A$ gives the $2\times 2$ operator matrix \begin{align} \Phi^{(N)} &= A^{(1)}(z_1,z_2)\cdots A^{(N)}(z_{2N-1},z_{2N})\nonumber\\ &= \begin{pmatrix} \Phi_{ee}^{(N)} & \Phi_{eo}^{(N)} \\ \Phi_{oe}^{(N)} & \Phi_{oo}^{(N)} \end{pmatrix}. \label{PsiMatrix} \end{align} Using this notation, we have that the operator $\Phi_{ee}^{(N)}$ acting on $\mathcal{V}_\mathds{1}\otimes\H_e$ defines the (unnormalized) state \begin{equation} \ket{\psi_{ee}} = \braket{ 0\left| \Phi_{ee}^{(N)} \right| 0} \ket{0}_c \in\H_e, \label{psiee} \end{equation} where $\bra{0}\cdots \ket{0}$ corresponds to the expectation value in the vacuum of the CFT. As noted in Ref.\cite{WFfromCB}, this construction is very similar to matrix product states (MPS) obtained from CFT \cite{iMPS, iMPS2}. In both cases, the ancillary degrees of freedom are described by a quantum field theory and the resulting many-body wave function describes a lattice system. As a matter of fact, note that $\ket{\psi_{ee}}$ corresponds to the many-body state defined in that paper written in fermionic variables \begin{equation} \Psi_\mathbf{m}^{(ee)} = \braket{\mathbf{m} | \psi_{ee}} = \mathcal{F}_\mathbf{m} (z_1,\cdots, z_{2N}), \label{CBamplitudes} \end{equation} where $\ket{\mathbf{m}} = \ket{m_1\cdots m_N}$.
The present formulation highlights both the inner (i.e., entanglement) structure of these states and their relation to the physical degrees of freedom. We can also construct other states by adding fermions to the asymptotic states (within the operator-state correspondence \cite{diFrancesco}), in particular \begin{align} \ket{\psi_{oo}} = \braket{ \chi\left| \Phi_{oo}^{(N)} \right| \chi} \ket{0}_c \in\H_e. \label{psioo} \end{align} As we will see in a later section, these wave functions are natural ans\"atze for low-energy excited eigenstates. \section{First-order picture: OPE Analysis} The construction we have discussed so far is quite general. In order to get a more intuitive picture of these states, we can consider a first-order approximation using the operator product expansion (OPE). This scheme will allow us to get a glimpse of the structure of state $\ket{\psi_{ee}}$ using simplified operators. The full expression of the OPE of two $\sigma$ fields is given by \cite{diFrancesco} \begin{align} \sigma(z_1)\sigma(z_2) = \frac{1}{z_{12}^{1/8}}\bigg(&\sum_{\alpha\in\mathcal{V}_\mathds{1}}z_{12}^{h_\alpha}C_{\sigma\sigma}^\alpha \alpha\left(\frac{z_1 + z_2}{2}\right) \label{fullSigmaOPE}\\ &+\sum_{\beta\in\mathcal{V}_\chi}z_{12}^{h_\beta}C_{\sigma\sigma}^\beta \beta\left(\frac{z_1 + z_2}{2}\right)\bigg),\nonumber \end{align} where $z_{12} = z_1 - z_2$, $\alpha$ $(\beta)$ are the fields with conformal weights $h_\alpha$ $(h_\beta)$ that generate the Verma module $\mathcal{V}_\mathds{1}$ $(\mathcal{V}_\chi)$ by acting on the vacuum, and $C_{\sigma\sigma}^\alpha, C_{\sigma\sigma}^\beta$ are constants fixed by 3-point functions. (Note that we are using a symmetrized version of the OPE, instead of pinning the resulting operators on $z_2$.)
If we only keep the lowest orders in the expansion, we get the familiar expression \begin{equation} \sigma(z_1)\sigma(z_2) \sim \frac{1}{z_{12}^{1/8}}\left(1+\left(\frac{z_{12}}{2}\right)^{1/2}\chi\left(\frac{z_1+z_2}{2}\right)\right), \label{OPEapprox} \end{equation} where we used the fact that $C_{\sigma\sigma}^\chi = 1/\sqrt{2}$. Assume now that we have $N$ pairs of $\sigma$ fields, parametrized by \begin{equation} z_{2n-1} = w_n - \frac{1}{2}\delta_n, \qquad z_{2n} = w_n +\frac{1}{2} \delta_n. \end{equation} Using this notation and the OPE, we can write the approximate expression for \eqref{Amatrix} \begin{equation} A^{(n)} \sim \frac{1}{\delta_n^{1/8}}\left[\mathds{1}_2 +\left( \sqrt{\frac{\delta_n}{2}}c_n^\dagger\chi(w_n)\right)\sigma^x \right], \label{AmatrixOPE} \end{equation} where $\sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is one of the Pauli matrices. Note that this approximation implies % \begin{equation} V_{\mathds{1} \mathds{1}}^\mathds{1} = V_{\chi \chi}^\mathds{1} \sim \frac{1}{\delta_n^{1/8}}\mathds{1}, \quad V_{\mathds{1}\chi}^\chi = V_{\chi \mathds{1}}^\chi \sim \frac{1}{\sqrt{2}}\delta_n^{3/8}\chi(w_n). \label{OPEvertex} \end{equation} % Given that $(c_n^\dagger)^2=0$, we have \begin{equation} A^{(n)} \propto \exp\left(\sqrt\frac{\delta_n}{2} c_n^\dagger \chi(w_n)\sigma^x\right), \end{equation} so that \eqref{PsiMatrix} becomes \begin{equation} \Phi^{(N)} \propto \exp\left[\sum_{n=1}^N\left(\sqrt\frac{\delta_n}{2} c_n^\dagger \chi(w_n)\right)\sigma^x \right]. 
\end{equation} Now, using the fact that the vacuum of the Ising CFT is a free Gaussian state for the Majorana fermions \cite{diFrancesco}, we employ the familiar identity \begin{equation} \braket{\exp(A)}_\text{Gaussian} = \exp\left(\frac{1}{2}\braket{A^2}_\text{Gaussian}\right) \end{equation} (assuming $\braket{A}_\text{Gaussian}=0$) to obtain \begin{align} \braket{0\left|\Phi_{ee}^{(N)}\right|0} &\propto \exp\left[\sum_{n<m}\frac{\sqrt{\delta_n\delta_m}}{2}\braket{\chi(w_n)\chi(w_m)}c_n^\dagger c_m^\dagger\right]\nonumber\\ &=\exp\left[\sum_{n<m}\frac{\sqrt{\delta_n\delta_m}}{2\left(w_n - w_m\right)}c_n^\dagger c_m^\dagger\right]. \end{align} We conclude then that $\ket{\psi_{ee}}$ is a BCS state defined by the (real-space) pairing function \begin{equation} g_{nm}^{(\text{OPE})} = \frac{\sqrt{\delta_n\delta_m}}{2(w_{n}-w_{m})}. \label{OPEpairingPlane} \end{equation} Note that this result holds for arbitrary complex coordinates and only depends on the validity of the OPE approximation. One may wonder if this expansion is really needed to guarantee the BCS structure of the lattice wave function. As we will show in a later section, numerical calculations suggest that this result extends beyond the OPE expansion, albeit with a different pairing function. We will also discuss some aspects regarding a full analytical proof of this fact. If $|z_n|=1$, it is also convenient to use the conformal transformation that maps the plane to the cylinder \begin{equation} z \mapsto \exp(i\theta).
\end{equation} In this setting, we parametrize the coordinates as \begin{equation} \theta_{2n-1} = \phi_n - \frac{1}{2}\epsilon_n, \qquad \theta_{2n} = \phi_n + \frac{1}{2}\epsilon_n, \label{cylcoord} \end{equation} so that the OPE can be written as \begin{align} \sigma(\theta_{2n-1})& \sigma(\theta_{2n}) \\ &\sim \left(\frac{1}{2\sin\left(\frac{\epsilon_n}{2}\right)}\right)^{1/8}\left(1+\sin^{1/2}\left(\frac{\epsilon_n}{2}\right)\chi(\phi_n)\right).\nonumber \end{align} Given that on the cylinder we have \begin{equation} \braket{\chi(\phi_1)\chi(\phi_2)}_{\text{cyl}} = \frac{1}{2\sin\left(\frac{\phi_1-\phi_2}{2}\right)}, \end{equation} a similar analysis yields a BCS state with the pairing function \begin{equation} g_{nm}^{(\text{OPE, cyl})} = \frac{\sqrt{\sin\left(\frac{\epsilon_n}{2}\right)\sin\left(\frac{\epsilon_m}{2}\right)}}{2\sin\left(\frac{\phi_n - \phi_m}{2}\right)}. \label{OPEpairingCylinder} \end{equation} This representation is particularly useful for lattice configurations which are periodic, such as cylinders. Note again that this analysis holds for arbitrary configurations, allowing for complex $\theta_n$. \section{Exact expressions for the Ising conformal blocks} Finding the exact form of the CBs for an arbitrary CFT is in general an arduous task. While it is known that they must satisfy a set of well-known differential equations \cite{diFrancesco}, it is far from obvious that they can be solved analytically for any given number of primary fields. In the case of the Ising CFT, one can find exact closed expressions by means of bosonization \cite{NayakWilczek, IsingCB}. We will make use of the multiperipheral basis to write the exact formulas for the CBs. This is a canonical representation that is valid for all types of CBs \cite{QuantumGroupsCFT,WFfromCB}. We will omit the $\sigma$'s in this notation and write $\mathbf{p}=(p_1,\cdots,p_{N-1})$, where $p_i=0$ ($1$) corresponds to an identity operator $\mathds{1}$ (a fermion $\chi$).
(Note that $\mathbf{p}$ can take $2^{N-1}$ different values, as expected.) We can easily relate the multiperipheral basis to the one obtained from the pair-wise fusion of operators (see Fig.~\ref{IsingCB}). The latter is the basis that we used previously in \eqref{CBamplitudes}. Note that, in order to preserve the number of fermions at each vertex, there is the restriction $m_k = p_{k-1} + p_k (\text{mod}\, 2)$. (We define fixed auxiliary values $p_0=p_N=0$.) \begin{figure}[h] \centering \includegraphics[width=0.95\linewidth]{IsingCB} \caption{A conformal block using only $\sigma$ field operators grouped in reference pairs $(\sigma(z_{2k-1}), \sigma(z_{2k}))$. The equivalence between the two representations is obtained from the relation $m_k = p_{k-1} + p_k (\text{mod}\, 2)$.} \label{IsingCB} \end{figure} Before stating the formulas for the CBs, we introduce some extra notation. First, we will need certain bipartitions of the $\sigma$ field coordinates that associate the points of each reference pair to different groups. We call these macrogroups $\ell_\mathbf{q}, \ell'_\mathbf{q}$ and they are generated from an integer $\mathbf{q}=0,\cdots, 2^{N-1}-1$ according to \cite{WFfromCB, IsingCB} \begin{equation} \ell_\mathbf{q}(k) = 2k -\frac{1}{2}(1+s_k), \qquad \ell'_\mathbf{q}(k) = 2k -\frac{1}{2}(1-s_k), \label{Spin macrogroup} \end{equation} where $q_k$ are the binary digits of $\mathbf{q} = (q_1, q_2, \dots, q_{N-1})$, \begin{equation} s_k = \prod_{i=1}^{k-1}\left(1-2q_i\right), \label{auxspin} \end{equation} and $s_1=1$ by definition. Using this notation, we can define \begin{equation} z_{\ell_\mathbf{q}}=\prod_{k<m}z_{\ell_\mathbf{q}(k),\ell_\mathbf{q}(m)}, \end{equation} where $z_{ab} = z_a - z_b$.
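The macrogroup construction of Eq. \eqref{Spin macrogroup} can be checked with a short sketch (an illustrative helper written for this text, not code from the references): every $\mathbf{q}$ selects one coordinate from each reference pair for $\ell_\mathbf{q}$, with the partners forming $\ell'_\mathbf{q}$, and the $2^{N-1}$ resulting bipartitions are all distinct.

```python
def macrogroups(q_bits):
    """Macrogroups l_q, l'_q from the binary digits (q_1, ..., q_{N-1}).

    Returns 1-based coordinate labels, one per reference pair.
    """
    N = len(q_bits) + 1
    s = [1]                                   # s_1 = 1 by definition
    for q in q_bits:                          # s_k = prod_{i<k} (1 - 2 q_i)
        s.append(s[-1] * (1 - 2 * q))
    ell = [2 * k - (1 + s[k - 1]) // 2 for k in range(1, N + 1)]
    ellp = [2 * k - (1 - s[k - 1]) // 2 for k in range(1, N + 1)]
    return ell, ellp
```

For instance, $\mathbf{q}=(0,0,0)$ gives the odd/even bipartition $\ell=(1,3,5,7)$, $\ell'=(2,4,6,8)$, while flipping $q_1$ swaps the choice in every pair after the first.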
We will also need the sign given by \begin{align} \epsilon_{\mathbf{p}\mathbf{q}} &\equiv (-1)^{\sum_k p_k q_k}= \prod_{k=1}^{N-1}\left(1-2p_k q_k\right)\\ &= \prod_{k=1}^{N-1}\left(1+p_k (s_{k}s_{k+1}-1)\right)\equiv \tilde\epsilon_{\mathbf{p}\mathbf{s}},\nonumber \end{align} where we used the binary expansions of both $\mathbf{p}$ and $\mathbf{q}$. The expression for the CB can be written as \cite{IsingCB} \begin{equation} \mathcal{F}^{(2N)}_\mathbf{p} = \frac{1}{2^{\frac{N-1}{2}}}\prod_{a<b}^{2N}z_{ab}^{-1/8}\left(\sum_{\mathbf{q} = 0}^{2^{N-1}-1}\epsilon_{\mathbf{p}\mathbf{q}}\sqrt{z_{\ell_\mathbf{q}}z_{\ell'_\mathbf{q}}}\right)^{1/2}. \label{CBsigmas} \end{equation} Note that the sum inside the square root is the only part that depends on $\mathbf{p}$. It is important to remark that we are assuming radial ordering \begin{equation} |z_1|\geq |z_2| \geq \cdots \geq |z_{2N}|. \end{equation} Moreover, if $|z_n| = |z_m|$ and $n<m$, we will assume that the angular parts in the polar decomposition are ordered with respect to the principal value of the logarithm. In other words, if $z_n = \exp(a_n + ib_n)$, whenever $a_n = a_m$, we will assume \begin{equation} -\pi < b_n < b_m \leq \pi \end{equation} if $n<m$. The ordering of the coordinates will be important because it ensures that we consistently choose the same branches of the (complex) square root. In order to see this, let us define \begin{equation} B_\mathbf{q} = \prod_{n<m}^N\left[\left(1 - \frac{z_{\ell_\mathbf{q}(m)}}{z_{\ell_\mathbf{q}(n)}}\right)\left(1 - \frac{z_{\ell'_\mathbf{q}(m)}}{z_{\ell'_\mathbf{q}(n)}}\right)\right]^{\frac{1}{2}}. \end{equation} Using this notation, we note that we can write the $\mathbf{p}$-dependent part of \eqref{CBsigmas} using only the main branch of the square root \begin{equation} \mathcal{F}_\mathbf{p}^{(2N)} \propto \left(\sum_{\mathbf{q} = 0}^{2^{N-1}-1}\epsilon_{\mathbf{p}\mathbf{q}} \frac{B_\mathbf{q}}{B_0}\right)^{1/2}.
\end{equation} This will be particularly important for 2D spin configurations. Note that we can obtain CBs for coordinates which are not radially ordered by analytic continuation of these expressions. This can be done by means of the Ising braid matrices \cite{QuantumGroupsCFT,WFfromCB, MooreSeibergNotes}. \section{1D wave functions} We focus now on a one-dimensional configuration. For this purpose, we choose the $2N$ coordinates to be given by $z_k=\exp(i\theta_k)$, where (see Fig.~\ref{conf_epsilon}) \begin{equation} \theta_k =\frac{2\pi}{2N}\left(k+(-1)^k\delta-N\right), \label{coord1d} \end{equation} and $\delta\in (-\frac{1}{2},\frac{1}{2})$ is a fixed parameter. Using this parametrization, we can rewrite the wave function amplitudes \eqref{CBamplitudes} as \cite{WFfromCB} \begin{equation} \Psi_\mathbf{p}^{(ee)}(\delta) = \frac{1}{\tilde N_0}\left(\sum_{\mathbf{q} = 0}^{2^{N-1}-1}\epsilon_{\mathbf{p}\mathbf{q}}\, A_\mathbf{q}(\delta)\right)^{1/2}, \label{varWF} \end{equation} where \begin{equation} A_\mathbf{q} = \prod_{n>m}^N\left[\sin\frac{\theta_{\ell_\mathbf{q}(n)}-\theta_{\ell_\mathbf{q}(m)}}{2}\sin\frac{\theta_{\ell'_\mathbf{q}(n)}-\theta_{\ell'_\mathbf{q}(m)}}{2}\right]^{\frac{1}{2}}, \label{AAA} \end{equation} and $\tilde N_0$ is a normalization constant \begin{equation} \tilde N_0^2 = \frac{N^{N/2}}{2^{(N-1)(N-2)/2}}. \end{equation} We can also write \eqref{AAA} in terms of the auxiliary spins \eqref{auxspin} (see the Appendix in Ref.\cite{WFfromCB}) \begin{equation} A(\{s_k\}) = \prod_{j>i}^N \sin\left[\frac{\pi}{N}\left(j-i+\frac{1+2\delta}{4}(s_j-s_i)\right)\right]. \label{AAAspins} \end{equation} Note that for all values of $\delta$, the resulting wave function describes a translationally invariant spin chain with periodic boundary conditions. This is a consequence of the fact that we are describing physical degrees of freedom on the lattice by means of pairs of $\sigma$ fields.
The centers of mass of the pairs are uniformly distributed on the circle, while their size is constant for fixed $\delta$ \begin{align} \theta_{2k}-\theta_{2k-1} &= \frac{2\pi}{N}\left(\frac{1}{2}+\delta\right)\equiv\frac{2\pi}{N}\epsilon, \\ \frac{\theta_{2k}+\theta_{2k-1}}{2} &= \frac{2\pi}{N}\left(k-\frac{2N+1}{4}\right).\nonumber \end{align} \begin{figure}[h] \centering \includegraphics[width=0.65\linewidth]{conf_epsilon} \caption{Coordinate configuration on the complex plane for a 1D system with 12 spins. Note that the reference pairs are uniformly distributed, so the wave function is translationally invariant for all values of $\epsilon$.} \label{conf_epsilon} \end{figure} In this representation, there is an exponentially large number of numerical operations that need to be performed. Luckily, we can obtain a determinant form for these particular configurations that simplifies the calculations. Once again, we make use of the pair-wise fusion basis $\mathbf{m}=(m_1,\cdots,m_N)$. It can be shown that the normalized wave function amplitudes can be written as (we will leave the details for Appendix A) \begin{equation} \Psi^{(ee)}_\mathbf{m}(\delta) = \left(\frac{\det(F_\mathbf{m}*V)}{\det(V)}\right)^{1/2}, \label{psieedet} \end{equation} where $F_\mathbf{m}*V$ is the element-wise matrix product (also known as the Hadamard product of matrices) \begin{equation} (F_\mathbf{m}*V)_{rt} = (F_\mathbf{m})_{rt}(V)_{rt}, \end{equation} obtained from matrices \begin{equation} (V)_{rt} = \exp\left(i\frac{2\pi}{N}r(t-1)\right) \end{equation} and \begin{equation} (F_\mathbf{m})_{rt} = \left\{ \begin{array}{l l} \cos\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1\right)\right], & \,\, m_r=0,\\ i\sin\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1\right)\right], & \,\, m_r=1. \end{array} \right. \end{equation} The determinant expression \eqref{psieedet} also allows us to write $\ket{\psi_{ee}(\delta)}$ as a BCS state.
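For small systems, the determinant form \eqref{psieedet} can be compared against the brute-force sum over auxiliary spins. The following sketch assumes (consistently with the cosine/sine split of $F_\mathbf{m}$) that the sign factor $\tilde\epsilon$ reduces to $\prod_r s_r^{m_r}$ in the pair-wise fusion basis, and compares ratios of amplitudes so that all normalization constants drop out:

```python
import itertools
import numpy as np

N, delta = 4, 0.2   # N physical spins (2N sigma fields), |delta| < 1/2

def A(s):
    """A({s_k}), the product of sines in the 1D amplitudes."""
    out = 1.0
    for j in range(1, N + 1):
        for i in range(1, j):
            out *= np.sin(np.pi / N *
                          (j - i + (1 + 2 * delta) / 4 * (s[j - 1] - s[i - 1])))
    return out

def brute(m):
    """Brute-force sum over auxiliary spins with sign prod_r s_r^{m_r}."""
    return sum(
        np.prod([s[j] ** m[j] for j in range(N)]) * A(s)
        for s in itertools.product([1, -1], repeat=N)
    )

r = np.arange(1, N + 1)[:, None]
t = np.arange(1, N + 1)[None, :]
V = np.exp(2j * np.pi / N * r * (t - 1))               # Vandermonde matrix
ang = np.pi / (4 * N) * (1 + 2 * delta) * (2 * t - N - 1)

def det_FV(m):
    """Determinant of the Hadamard product F_m * V."""
    F = np.where(np.array(m)[:, None] == 0, np.cos(ang), 1j * np.sin(ang))
    return np.linalg.det(F * V)

m0 = (0,) * N
for m in itertools.product([0, 1], repeat=N):
    if sum(m) % 2:
        continue                                        # even fusion sector
    assert np.allclose(brute(m) / brute(m0), det_FV(m) / det_FV(m0))
```

Taking ratios against the trivial fusion channel $\mathbf{m}=\mathbf{0}$ sidesteps the overall constants $C_N$ and $N_0$ of Appendix A.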
If we define the lattice momenta as \begin{equation} k = \frac{\pi}{N}(2m-N-1), \qquad m=1,\cdots,N \label{kdef} \end{equation} we can write the normalized state as \begin{equation} \ket{\psi_{ee}(\delta)} = C_N(\delta) \exp\left(\sum_{n<m}\tilde g_{nm}(\delta)c_n^\dagger c_m^\dagger \right)\ket{0}_c, \end{equation} where \begin{equation} C_N(\delta) = \prod_k \sqrt{\cos\left[\frac{(1+2\delta)}{4}k\right]}, \label{cN1D} \end{equation} and $\tilde g_{nm}=g_{n-m}$ with \begin{align} g_r = (-1)^r\frac{2}{N}\sum_{k>0} & \tan\left[\frac{(1+2\delta)}{4}k\right]\sin\left(kr\right). \label{g!D} \end{align} We will leave the details for Appendix B. Note that this result is exact and does not depend on any approximation. We also highlight that this is further evidence that $\ket{\psi_{ee}}$ as defined in \eqref{psiee} has a BCS structure beyond the OPE regime. \section{The ground state of the critical Ising spin chain from conformal blocks} In Ref.~\cite{WFfromCB}, it was argued from numerical evidence that $\ket{\psi_{ee}(\delta=0)}$ corresponds exactly to the ground state of the even sector (defined by $\braket{Q}=\braket{\prod_n\sigma^z_n}=1$) of the critical Ising transverse field (ITF) Hamiltonian with periodic boundary conditions \begin{equation} H = -\sum_{n=1}^N \sigma^x_n\sigma^x_{n+1} - \sum_{n=1}^N \sigma^z_n. \label{ITFh1} \end{equation} We will now present an analytical proof of this result. The exact solution of \eqref{ITFh1} is well-known \cite{Henkel, Sachdev, Mussardo}. The ground state can be obtained by mapping the spin variables to spinless fermions using a Jordan-Wigner (JW) transformation \begin{align} \sigma_n^z &= 1-2c_n^\dagger c_n, \\ \sigma_n^x &= \left[\prod_{m=1}^{n-1}(1-2c_m^\dagger c_m)\right](c_n^\dagger+c_n),\nonumber \end{align} to obtain \begin{align} H = -\sum_{n}(1-2c_n^\dagger c_n)- \sum_{n}(c_n^\dagger - c_n)(c_{n+1}^\dagger + c_{n+1}).
\end{align} This is a translationally invariant quadratic Hamiltonian that can be solved via a Fourier transform \begin{equation} c_n^\dagger = \frac{1}{\sqrt{N}}\sum_k e^{ikn}c_k^\dagger, \end{equation} where we take $k$ as in \eqref{kdef}, followed by a Bogoliubov transformation. The normalized ground state has a BCS structure \begin{align} \ket{gs} &= \prod_{k>0} \left[\cos\left(\frac{\theta_k}{2}\right)+i\sin\left(\frac{\theta_k}{2}\right)c_k^\dagger c_{-k}^\dagger\right]\ket{0}_c \\ &=\prod_{k>0} \cos\left(\frac{\theta_k}{2}\right)\exp\left[\sum_{k>0}i\tan\left(\frac{\theta_k}{2}\right)c_k^\dagger c_{-k}^\dagger\right]\ket{0}_c,\nonumber \end{align} where we define \begin{align} \cos\left(\frac{\theta_k}{2}\right) &=\sqrt{\frac{1+\sin\left|\frac{k}{2}\right|}{2}},\\ \sin\left(\frac{\theta_k}{2}\right) &= -\text{sgn}(k)\sqrt{\frac{1-\sin\left|\frac{k}{2}\right|}{2}}. \nonumber \end{align} This implies that the normalization constant is \begin{align} \prod_{k>0} \cos\left(\frac{\theta_k}{2}\right) &= \prod_{k>0}\sqrt{\frac{1+\sin\left(\frac{k}{2}\right)}{2}}\\ &=\prod_{m=1}^N \sqrt{\cos\left[\frac{\pi}{4N}\left(2m-N-1\right)\right]}. \nonumber \end{align} We can also compute the real-space pairing function by doing a Fourier transform of $g_k=i\tan\left(\frac{\theta_k}{2}\right)$ \begin{align} g_{r}&=-\frac{2}{N}\sum_{k>0}\frac{1-\sin\left(\frac{k}{2}\right)}{\cos\left(\frac{k}{2}\right)}\sin\left(kr\right)\\ &=(-1)^r\frac{2}{N}\sum_{k>0}\tan\left(\frac{k}{4}\right)\sin\left(kr\right),\quad r\in\mathbb{Z},\nonumber \end{align} where the second expression is obtained from the first one by replacing $k\mapsto \pi-k$. Now, coming back to the wave functions obtained from the Ising CBs, note that these expressions correspond to the normalization constant \eqref{cN1D} and the pairing function \eqref{g!D} obtained in the previous section when $\delta=0$, so that \begin{equation} \ket{gs}=\ket{\psi_{ee}(\delta=0)}.
\end{equation} This is a remarkable result given that the expression for the CBs \eqref{CBsigmas} was obtained from the infrared fixed point of the critical theory. It is non-trivial that it would agree with the ground state of a finite-size lattice system. \section{Parent Hamiltonians for 1D} We have checked numerically that for $\delta\neq 0$, we can find parent Hamiltonians that can also be mapped to a quadratic fermionic form. We consider the following family of Hamiltonian terms \begin{align} Z &= -\sum_{n}\sigma_n^z, \nonumber\\ X_r &= -\sum_n \sigma_{n}^x\sigma_{n+1}^z\cdots\sigma_{n+r-1}^z\sigma_{n+r}^x,\label{HamTerms1D} \\ Y_r &= -\sum_n \sigma_{n}^y\sigma_{n+1}^z\cdots\sigma_{n+r-1}^z\sigma_{n+r}^y,\nonumber \end{align} with $r=1,\cdots,N/2$. (Note that $X_1$ is the usual Ising term.) This particular choice for the family of Hamiltonian terms corresponds to those that will yield quadratic forms in fermionic variables (see Appendix C for an explicit fermionic formulation). Given that $\ket{\psi_{ee}(\delta)}$ is translationally invariant and describes a system with periodic boundary conditions, we impose the same constraints on the Hamiltonian terms. We know that the variational wave functions obtained from the CB of the Ising model behave nicely under a Kramers-Wannier (KW) duality transformation \cite{WFfromCB, topDefIsing}. In particular, we have that \begin{equation} \ket{\psi_{ee}(\delta)} \mapsto \ket{\psi_{ee}(-\delta)}. \end{equation} Something similar can be said about the Hamiltonian terms we are considering. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{coefham} \caption{Variational coefficients of the parent Hamiltonians of the form \eqref{varHams} for $\ket{\psi_{ee}(\delta)}$ with $N=20$ spins. They are normalized so that $a_1=1$.
} \label{coefham} \end{figure} The action of the KW transformation can be summarized in the map \begin{equation} \sigma_n^z \mapsto \sigma_n^x \sigma_{n+1}^x, \qquad \sigma_n^x \mapsto \sigma_1^z\cdots \sigma_n^z. \end{equation} (See Appendix C for a formulation of the KW transformation in terms of Majorana fermions.) From these relations, it is easy to compute the action of the KW transformation (for the even-parity sector of the Hilbert space, defined by $\braket{Q}=\braket{\prod_n\sigma^z_n}=1$) \begin{align} Z \mapsto X_1, \quad & X_1 \mapsto Z, \quad X_r \mapsto -Y_{r-1},\, (r=2,\cdots, N/2), \nonumber\\ & Y_r \mapsto -X_{r+1},\, (r=1,\cdots, N/2). \end{align} Note first that the KW dual of a Hamiltonian that can be written as a quadratic form in fermionic variables is once again of the same type. We can try to take advantage of the KW duality. For our variational fits, we used the Hamiltonian family \begin{align} &\tilde H_0 = \mathds{1}, \quad \tilde H_1 = X_1 + Z, \quad \tilde H_2 = X_1 - Z,\label{HamFamily1D} \\ &\tilde H_r = X_{r-1} + Y_{r-2},\, (r=3,\cdots, N/2+1).\nonumber \end{align} We need $\tilde H_0$ to be equal to the identity for the variational algorithm (see Appendix C). Notice also that $\tilde H_1$ corresponds to the critical ITF Hamiltonian \eqref{ITFh1}. Using these definitions, the whole family is closed under a KW transformation \begin{align} &\tilde H_0 \mapsto \tilde H_0, \quad \tilde H_1 \mapsto \tilde H_1, \\ &\tilde H_r \mapsto -\tilde H_r, \, (r=2,\cdots, N/2+1).\nonumber \end{align} We checked numerically that the wave function $\ket{\psi_{ee}(\delta)}$ obtained from the Ising CBs with $|\delta|<1/2$ is the ground state of a Hamiltonian of the form (see Fig.~\ref{coefham}) \begin{equation} H = -\sum_{r=1}^{N/2+1}a_r \tilde H_r. \label{varHams} \end{equation} The duality implies that $a_1(\delta) = a_1(-\delta)$, so we will set $a_1=1$. (Remember this is the coefficient associated with the critical ITF Hamiltonian term.)
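Since $\tilde H_1$ is the critical ITF Hamiltonian, the exact identification $\ket{gs}=\ket{\psi_{ee}(\delta=0)}$ of the previous section can also be verified by direct diagonalization in a small Fock space. The sketch below (with an assumed size $N=6$) uses the standard fact that the even spin-parity sector of \eqref{ITFh1} maps under JW to fermions with antiperiodic boundary conditions, $c_{N+1}=-c_1$, and builds the BCS state mode by mode from the Bogoliubov angles:

```python
import numpy as np
from functools import reduce

N = 6  # number of sites; the Fock space has dimension 2^N

# Jordan-Wigner matrices for the fermionic modes c_n
sm = np.array([[0., 1.], [0., 0.]])          # on-site annihilation
sz = np.diag([1., -1.])
I2 = np.eye(2)
c = [reduce(np.kron, [sz] * n + [sm] + [I2] * (N - n - 1)) for n in range(N)]
cd = [op.conj().T for op in c]
Id = np.eye(2 ** N)

# Critical ITF Hamiltonian in fermionic form; the even spin-parity
# sector corresponds to antiperiodic boundary conditions
H = sum(-(Id - 2 * cd[n] @ c[n]) for n in range(N))
for n in range(N):
    sgn = -1.0 if n == N - 1 else 1.0        # antiperiodic bond
    H = H - sgn * (cd[n] - c[n]) @ (cd[(n + 1) % N] + c[(n + 1) % N])

# BCS state built mode by mode from the Bogoliubov angles
vac = np.zeros(2 ** N, dtype=complex); vac[0] = 1.0
psi = vac
for m in range(N // 2 + 1, N + 1):
    k = np.pi / N * (2 * m - N - 1)          # the k > 0 momenta
    ckd = sum(np.exp(-1j * k * n) * cd[n] for n in range(N)) / np.sqrt(N)
    cmkd = sum(np.exp(1j * k * n) * cd[n] for n in range(N)) / np.sqrt(N)
    u = np.sqrt((1 + np.sin(k / 2)) / 2)     # cos(theta_k / 2)
    v = -np.sqrt((1 - np.sin(k / 2)) / 2)    # sin(theta_k / 2) for k > 0
    psi = (u * Id + 1j * v * (ckd @ cmkd)) @ psi

E, W = np.linalg.eigh(H)
assert abs(np.vdot(W[:, 0], psi)) > 1 - 1e-8   # psi is the exact ground state
```

The overlap with the numerically exact ground state is compatible with one to machine precision, as the analytical proof requires.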
Note also that \begin{equation} a_r(-\delta) = -a_r(\delta), \,(r=2,\cdots,N/2+1). \end{equation} In Fig.~\ref{coefham}, we plot the variational coefficients obtained for different values of $\delta$ and $N=20$ spins. We see that the Hamiltonian is dominated by $\tilde H_1$, namely, the ITF critical Hamiltonian \eqref{ITFh1}. The other significant contribution comes from $\tilde H_2$, in particular for $|\delta|\approx 1/2$, so that the ground state approximates the trivial Ising fixed points in these limits (see Ref.~\cite{WFfromCB}). For $|\delta|\approx 0$, all the Hamiltonian terms that change sign under a KW transformation are very small compared to $\tilde H_1$. Given that the whole Hamiltonian family $\{\tilde H_r\}$ respects the basic Ising symmetries, this implies that $\ket{\psi_{ee}(\delta)}$ approximates small massive perturbations away from criticality. If we drop the Hamiltonian terms for $r>2$, we see that in this vicinity the corresponding transverse field will be given by \begin{equation} h = \frac{1-a_2}{1+a_2} = 1-2a_2 + \mathcal{O}(a_2^2). \end{equation} This explains the relatively good agreement between $\ket{\psi_{ee}(\delta)}$ for $|\delta|\ll 1$ and the ground state of the ITF Hamiltonian close to the critical point \cite{WFfromCB}. \section{Excited states} We can extend the previous discussion to other states obtained from the operator matrix \eqref{PsiMatrix}. Let us first consider the state $\ket{\psi_{oo}}$, defined in \eqref{psioo}. In this case, the asymptotic states of the CFT are fermions \begin{equation} \Psi^{(oo)}_\mathbf{p} \propto \braket{\chi | \sigma(z_1)\cdots \sigma(z_{2N}) | \chi}_\mathbf{p}. \end{equation} Assuming radial ordering, we can obtain the amplitudes for this state by adding two fermions at $z=0,\infty$.
Starting from the exact expression \cite{IsingCB} and taking the appropriate limit, the corresponding amplitudes for the associated wave function are given by \begin{align} \Psi^{(oo)}_\mathbf{p} &= \frac{1}{\tilde N_2}\left(\sum_{\mathbf{q} = 0}^{2^{N-1}-1}\epsilon_{\mathbf{p}\mathbf{q}}A_\mathbf{q}\right)^{-1/2}\\ &\left[\sum_{\mathbf{q} = 0}^{2^{N-1}-1}\epsilon_{\mathbf{p}\mathbf{q}}A_\mathbf{q}\left( \sqrt{\prod_{k=1}^N\frac{z_{\ell_\mathbf{q}(k)}}{z_{\ell'_\mathbf{q}(k)}}} + \sqrt{\prod_{k=1}^N\frac{z_{\ell'_\mathbf{q}(k)}}{z_{\ell_\mathbf{q}(k)}}} \right)\right].\nonumber \end{align} If we use the homogeneous 1D configuration \eqref{coord1d}, we can rewrite these amplitudes as \begin{align} \Psi^{(oo)}_\mathbf{p} &= \frac{1}{N_2}\left(\sum_{\{s_k\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\,A(\{s_k\})\right)^{-1/2}\\ &\left(\sum_{\{s_k\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\,A(\{s_k\})\cos\left[\frac{\pi}{2N}(1+2\delta)\sum_{k=1}^N s_k\right]\right).\nonumber \end{align} They can also be written in terms of determinants. Define the matrix \begin{equation} (J_\mathbf{m}(q))_{r,t} = \exp\left(i\frac{2\pi}{N}r(t-1)\right)(F_\mathbf{m}(q))_{r,t}\, , \end{equation} where \begin{align} (F_\mathbf{m}(q))_{r,t} =\left\{ \begin{array}{l l} \cos\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1+q\right)\right], & \, m_r=0,\\ i\sin\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1+q\right)\right], & \,m_r=1. \end{array} \right. \end{align} We can easily show that \begin{equation} \Psi^{(oo)}_\mathbf{m} \propto \frac{1}{\left(\det J_\mathbf{m}(0)\right)^{1/2}}\left[\det J_\mathbf{m}(2)+\det J_\mathbf{m}(-2)\right].
\end{equation} Given that this wave function can be written in terms of real amplitudes (up to a possible overall phase that does not depend on $\mathbf{p}$), it is easy to compute the overlap with $\ket{\psi_{ee}}$ \begin{align} \braket{\psi_{ee}(\delta)|\psi_{oo}(\delta)}&=\sum_\mathbf{p} \Psi^{(ee)}_\mathbf{p} \Psi^{(oo)}_\mathbf{p} \\ &\propto \cos\left(\frac{\pi}{2}(1+2\delta)\right) = -\sin\left(\pi\delta\right).\nonumber \end{align} This implies that the two states will be orthogonal if $\delta=0$. (Recall that we are assuming $|\delta|<1/2$.) This is exactly the case for which $\ket{\psi_{ee}}$ describes the ground state of the critical ITF Hamiltonian \eqref{ITFh1}. Using a Lanczos algorithm for sizes up to $N=20$ spins, we have checked numerically that $\ket{\psi_{oo}(\delta=0)}$ corresponds within machine precision to the first excited state of the even-parity sector of the critical ITF Hamiltonian \eqref{ITFh1}. This is again a remarkable result given that the amplitudes are computed from CBs obtained at the infrared fixed point of the model in the thermodynamic limit. These results reflect the relation between the finite-size study of the Ising spin chain and the operator content of the Ising CFT \cite{Henkel, ConformallyInvariantLimit, CardyContent}. It is known that the spectrum of the even-parity sector of \eqref{ITFh1} with periodic boundary conditions corresponds to the Virasoro towers of both operators $\mathds{1}$ and $\epsilon$. These are the primary operators of the full Ising CFT that are even under the internal $\mathbb{Z}_2$ symmetry. Starting from these states, we can in principle construct the full spectrum of the Hamiltonian by acting with the corresponding representation of the Virasoro algebra. These operators can be obtained on the lattice from the local Hamiltonian density \cite{SKoriginal, SKnonintegrable}. It is tempting to extend this construction to the odd-parity sector of the ITF spin chain.
Finite-size scaling using periodic boundary conditions relates this sector to the Virasoro tower of $\sigma$ \cite{Henkel}. We tried the natural candidates obtained from (a) using a single fermion on the asymptotic states, both at $z=0,\infty$, so that the CFT degrees of freedom are traced out by $\bra{0}\cdots\ket{\chi}$ or $\bra{\chi}\cdots\ket{0}$; (b) using a pair of $\sigma$ fields on the asymptotic states $\bra{\sigma}\cdots\ket{\sigma}$. In both scenarios, the wave functions obtained using configuration \eqref{coord1d} for $\delta=0$ contained complex amplitudes that cannot be factored into an overall phase. This implies that these states cannot be used naively to describe ground states of real Hamiltonians. Moreover, using $\sigma$ for both asymptotic states can yield wave functions that are not translationally invariant even if the degrees of freedom are arranged uniformly on the circle. This suggests that there is a richer structure underlying the general framework that needs to be understood in further work. \section{BCS structure beyond OPE: general observations} We have seen that the OPE expansion of the CBs yields many-body wave functions with a BCS structure, and that this remains true in the exact case for translationally invariant 1D configurations. One may wonder if this result still holds true for the exact CBs using an arbitrary configuration (assuming, of course, radial ordering). In order to check this, let us consider $N=4$ spins described by $\ket{\psi_{ee}}$. This provides the smallest system size in which a BCS wave function is non-trivial and it allows us to understand the problem in more detail. First, consider a general BCS wave function for $N=4$.
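Before writing it out in full, note that the constraint tying the four-particle amplitude to the pair amplitudes is just Wick's theorem for a BCS state; it can be illustrated directly with random (hypothetical) pairing amplitudes on four fermionic modes:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(7)
N = 4

# Jordan-Wigner creation operators on a four-mode Fock space
sp = np.array([[0., 0.], [1., 0.]])          # on-site creation
sz = np.diag([1., -1.])
I2 = np.eye(2)
cd = [reduce(np.kron, [sz] * n + [sp] + [I2] * (N - n - 1)) for n in range(N)]

# random (hypothetical) pairing amplitudes g_{nm}
g = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# |psi> = exp(sum_{n<m} g_{nm} c_n^dag c_m^dag)|0>; the series truncates
S = sum(g[n, m] * cd[n] @ cd[m] for n in range(N) for m in range(n + 1, N))
vac = np.zeros(2 ** N, dtype=complex); vac[0] = 1.0
psi = vac + S @ vac + S @ (S @ vac) / 2

def amp(bits):
    """Amplitude of an occupation basis state such as (1, 1, 0, 0)."""
    return psi[int("".join(map(str, bits)), 2)]

lhs = amp((0, 0, 0, 0)) * amp((1, 1, 1, 1))
rhs = (amp((1, 1, 0, 0)) * amp((0, 0, 1, 1))
       - amp((1, 0, 1, 0)) * amp((0, 1, 0, 1))
       + amp((1, 0, 0, 1)) * amp((0, 1, 1, 0)))
assert np.allclose(lhs, rhs)   # g_1234 = g12 g34 - g13 g24 + g14 g23
```

The asserted identity is exactly the constraint written out next for the CB amplitudes.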
If we write it in full detail, we have \begin{equation} \ket{\psi_\text{BCS}} \propto \left(1 + \sum_{n<m}g_{nm}c_n^\dagger c_m^\dagger + g_{1234}c_1^\dagger c_2^\dagger c_3^\dagger c_4^\dagger\right)\ket{0}_c, \end{equation} where we define for convenience \begin{equation} g_{1234} = g_{12} g_{34} - g_{13} g_{24} + g_{14}g_{23} . \label{BCS4} \end{equation} Note that this definition relates explicitly to Wick's theorem for fermions. If $\ket{\psi_{ee}}$ does indeed describe a BCS state, we expect its amplitudes to fulfill this constraint. In order to check this, let us write the operator matrix \eqref{Amatrix} as (we omit the coordinates for simplicity) \begin{equation} A^{(n)} = \begin{pmatrix} V_{00} & c_n^\dagger V_{01} \\ c_n^\dagger V_{10} & V_{11} \end{pmatrix}. \end{equation} Using this notation, it is easy to see that condition \eqref{BCS4} will be fulfilled for $\ket{\psi_{ee}}$ if and only if (see Fig.~\ref{N4Vertex}) \begin{align} \braket{V_{00}V_{00}V_{00}V_{00}}&\braket{V_{01}V_{10}V_{01}V_{10}}=\nonumber\\ &\braket{V_{01}V_{10}V_{00}V_{00}}\braket{V_{00}V_{00}V_{01}V_{10}} \label{N4VertexCondition} \\ &- \braket{V_{01}V_{11}V_{10}V_{00}}\braket{V_{00}V_{01}V_{11}V_{10}}\nonumber \\ &+\braket{V_{01}V_{11}V_{11}V_{10}}\braket{V_{00}V_{01}V_{10}V_{00}}.\nonumber \end{align} Note first that this equation is trivially satisfied if all $V_{ij}$ are numbers. Also, if we use the OPE approximation \eqref{OPEvertex}, the equation reduces to Wick's theorem for free fermions. If we write this using the exact amplitudes in the pair-wise fusion basis, we get \begin{equation} \mathcal{F}_{0000}\mathcal{F}_{1111} = \mathcal{F}_{1100}\mathcal{F}_{0011} - \mathcal{F}_{1010}\mathcal{F}_{0101} + \mathcal{F}_{1001}\mathcal{F}_{0110}. \end{equation} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{WickN4} \caption{Graphical representation of equation \eqref{N4VertexCondition}.
} \label{N4Vertex} \end{figure} We have numerically checked this condition for (radially ordered) random configurations using the exact CBs, and they do indeed describe BCS wave functions. Unfortunately, we cannot provide a general proof even for such a small system size. One possible route is to expand the vertex operators using the full OPE expansion \eqref{fullSigmaOPE}. In that case, condition \eqref{N4VertexCondition} can be recast into a perturbative expression. Some subtleties regarding this approach are discussed in Appendix D. \section{2D wave functions} So far, we have used coordinate configurations for the $\sigma$ fields that are constrained to the unit circle on the complex plane. We now study 2D configurations, where the formalism for the construction of the wave function will be very similar. Unfortunately, we cannot use the same procedure we described in Appendix B to write the amplitudes using a determinant form such as \eqref{psieedet}. This limits the system sizes we can consider numerically. However, we can get around this impasse by considering the OPE approximation we already discussed. We will relate $\ket{\psi_{ee}}$ for a 2D configuration to the weak pairing phase of the effective mean-field Hamiltonian that describes $p+ip$ superconductivity \cite{ReadGreen} \begin{equation} H = \sum_{\mathbf{k}} \left[\xi_\mathbf{k} c_\mathbf{k}^\dagger c_\mathbf{k} + \frac{1}{2}\left(\Delta_\mathbf{k}^* c_{-\mathbf{k}}c_\mathbf{k} + h.c.\right) \right], \label{HpSC} \end{equation} where \begin{equation} \xi_\mathbf{k} =\frac{1}{2m}\mathbf{k}^2 - \mu, \qquad \Delta_\mathbf{k} = \hat\Delta(k_x - ik_y), \end{equation} $\mu$ is the chemical potential, and $\hat\Delta$ is a constant defining the gap function. The (normalized) ground state of this theory is obtained by the usual BCS methods and can be written as \begin{equation} \ket{gs} = \left.\prod_\mathbf{k} \right.'
\left(u_\mathbf{k} + v_\mathbf{k} c_\mathbf{k}^\dagger c_{-\mathbf{k}}^\dagger\right)\ket{0}, \end{equation} where the prime on the product indicates that each pair $(\mathbf{k},-\mathbf{k})$ appears only once, and $u_\mathbf{k},v_\mathbf{k}$ are the Bogoliubov functions obtained from the Bogoliubov-de Gennes (BdG) equations \begin{align} E_\mathbf{k} u_\mathbf{k} = \xi_\mathbf{k} u_\mathbf{k} - \Delta_\mathbf{k}^* v_\mathbf{k}, \quad E_\mathbf{k} v_\mathbf{k} = -\xi_\mathbf{k} v_\mathbf{k} - \Delta_\mathbf{k} u_\mathbf{k}. \end{align} This reduces to \begin{align} E_\mathbf{k}& = \sqrt{\xi_\mathbf{k}^2 + |\Delta_\mathbf{k}|^2},\\ |u_\mathbf{k}|^2 &= \frac{1}{2}\left(1+\frac{\xi_\mathbf{k}}{E_\mathbf{k}}\right), \quad |v_\mathbf{k}|^2 = \frac{1}{2}\left(1-\frac{\xi_\mathbf{k}}{E_\mathbf{k}}\right).\nonumber \end{align} The ground state can then be rewritten as \begin{equation} \ket{gs} =\left(\left.\prod_\mathbf{k}\right.' u_\mathbf{k}\right)\exp\left(\frac{1}{2}\sum_\mathbf{k} g_\mathbf{k} c_\mathbf{k}^\dagger c_{-\mathbf{k}}^\dagger\right)\ket{0}, \end{equation} where \begin{equation} g_\mathbf{k} = \frac{v_\mathbf{k}}{u_\mathbf{k}} = -\frac{E_\mathbf{k}-\xi_\mathbf{k}}{\Delta_\mathbf{k}^*}. \end{equation} (Note there is no restriction on $\mathbf{k}$ in the sum, except maybe for $\mathbf{k}=0$.) Using the fermionic statistics, the amplitudes of the ground state can be written as Pfaffians \eqref{Pfaffian} using the real-space pairing function \begin{equation} g(\mathbf{r}) = \frac{1}{N^2}\sum_\mathbf{k} e^{i\mathbf{k}\cdot\mathbf{r}}g_\mathbf{k}. \end{equation} If $\mu>0$, the system will be in the so-called weak pairing phase \cite{ReadGreen, Miguel}. For small momenta we have $\xi_\mathbf{k}<0$ and \begin{equation} g_\mathbf{k} \sim -\frac{2\mu}{\hat\Delta(k_x+ik_y)}.
\end{equation} The leading behavior of the real-space pairing function is given by (see Appendix E for details) \begin{equation} g(\mathbf{r})\sim -\frac{2a^2 \mu}{2\pi i \hat\Delta}\frac{1}{x+iy}, \label{gSC} \end{equation} where $a$ is the lattice spacing. Note that this analysis is done on a regular square lattice, assuming a very large system size. However, the leading singular term gives the qualitative infrared behavior that determines the phase of the system. \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{fig_2d} \caption{2D configuration corresponding to 48 $\sigma$ fields arranged on a cylinder with $N_x=6$ and $N_y=4$. We represent it on the plane according to the exponential map $z\mapsto \exp(i\theta)$. For clarity, we label the 24 physical spins, each one obtained from a pair of $\sigma$ fields. Using coordinates \eqref{phi2D}, spins with equal $y_n$ are located at the same radius.} \label{conf2D} \end{figure} The pairing function \eqref{gSC} is similar to the one obtained from $\ket{\psi_{ee}}$ using an OPE approximation \eqref{OPEpairingPlane}. This suggests that $\ket{\psi_{ee}}$ can be related to the weak pairing phase of \eqref{HpSC} as long as the OPE regime yields a good approximation of the CBs. The set of distances between the $\sigma$ fields can then be related to the chemical potential of the $p+ip$ superconductor. We expect then that $\ket{\psi_{ee}}$ can describe the topological weak pairing phase of \eqref{HpSC}, which has been associated with the Moore-Read Pfaffian state in the fractional quantum Hall effect \cite{ReadGreen, Miguel}. Based on the previous analysis, we now focus on 2D spin systems on finite cylinders. The lattice will contain $N_y$ spins along the longitudinal direction and $N_x$ spins along the periodic one. We follow the analysis presented in Sect. V for the OPE approximation on the cylinder.
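Before moving to the cylinder geometry, the planar pairing function can be illustrated on a finite lattice. The sketch below uses assumed values $\mu=0.5$, $\hat\Delta=1$, $m=1$ on an odd-sized periodic grid (so the momentum grid is symmetric), and checks the antisymmetry $g(-\mathbf{r})=-g(\mathbf{r})$ together with the slow decay away from the origin characteristic of the weak pairing phase:

```python
import numpy as np

# assumed parameters: mu > 0 (weak pairing), unit gap constant, m = 1
L, mu, Dhat = 31, 0.5, 1.0                   # odd L keeps the k-grid symmetric
k = 2 * np.pi * np.fft.fftfreq(L)
kx, ky = k[:, None], k[None, :]

xi = (kx ** 2 + ky ** 2) / 2 - mu
Delta = Dhat * (kx - 1j * ky)
E = np.sqrt(xi ** 2 + np.abs(Delta) ** 2)
with np.errstate(divide="ignore", invalid="ignore"):
    gk = -(E - xi) / np.conj(Delta)
gk[0, 0] = 0.0                               # the k = 0 mode is excluded

g = np.fft.ifft2(gk)                         # g(r) = (1/L^2) sum_k e^{i k.r} g_k

# antisymmetry g(-r) = -g(r) and slow decay away from the origin
flipped = np.roll(g[::-1, ::-1], (1, 1), axis=(0, 1))
assert np.allclose(flipped, -g)
assert abs(g[1, 0]) > abs(g[10, 0])
```

For small momenta this $g_\mathbf{k}$ reduces to the singular $-2\mu/[\hat\Delta(k_x+ik_y)]$ form above, so the decay seen in the sketch is the lattice shadow of the $1/(x+iy)$ behavior of \eqref{gSC}.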
We set $z_n = \exp(i\theta_n)$ using the cylinder coordinates \eqref{cylcoord}, where $\phi_n$ corresponds to the location of the $n$-th physical spin and $\epsilon_n$ to the size of its reference pair. We parametrize them as (see Fig.~\ref{conf2D}) \begin{equation} \phi_n = \frac{2\pi}{N_x} (x_n - i R y_n) \label{phi2D} \end{equation} where $n=1,\cdots, N_x N_y$ labels the spin sites, $x_n \in \{1,\cdots N_x\}$ and $y_n \in \{1,\cdots, N_y\}$ are positive integers that define the lattice on the cylinder, and $R$ is the anisotropy factor. (We will use a regular square lattice, so we set $R=1$.) We also define the same separation for all reference pairs \begin{equation} \epsilon_n = \frac{2\pi}{N_x}\epsilon. \end{equation} We can use complex values for $\epsilon$, but the radial ordering leads to subtleties when we extrapolate to the exact regime. We will focus then on real values, noting that the OPE regime corresponds to $0<\epsilon\ll 1$. Using this notation, the OPE pairing function becomes \begin{align} g_{nm} = \frac{\sin\left(\frac{\pi}{N_x}\epsilon\right)} {2\sin\left(\frac{\pi}{N_x}(x_n-x_m - i R (y_n - y_m))\right)}. \end{align} Note that, for large values of $N_x$, we can approximate this expression by a power law, so the leading singular term is similar to \eqref{gSC}. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{fig_density} \caption{Expected occupation per site for different values of $\epsilon$ for a cylinder with $N_x=N_y=20$. The layer $y_n$ corresponds to the longitudinal $y$-direction in the cylinder. Note that, the state being periodic along the $x$-direction, the expectation value does not depend on $x_n$.} \label{dens2D} \end{figure} In order to characterize these wave functions, we study the entanglement spectrum \cite{LiHaldane} and the entanglement entropy obtained from the reduced density matrix $\rho_\text{cyl}$ of half a cylinder (corresponding to all sites with $y_n=1,\cdots,\frac{N_y}{2}$).
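The power-law limit of the cylinder pairing function can be checked directly: for $N_x$ large and separations small compared to $N_x$, $g_{nm}$ approaches $\epsilon/[2(\Delta x - iR\,\Delta y)]$, the analogue of \eqref{gSC}. A sketch with assumed values of $N_x$ and $\epsilon$:

```python
import numpy as np

Nx, R, eps = 400, 1.0, 0.05     # assumed values; OPE regime 0 < eps << 1

def g_cyl(dx, dy):
    """OPE pairing function on the cylinder for a separation (dx, dy)."""
    return np.sin(np.pi / Nx * eps) / (2 * np.sin(np.pi / Nx * (dx - 1j * R * dy)))

def g_plane(dx, dy):
    """Leading planar power law, the analogue of the 1/(x + iy) behavior."""
    return eps / (2 * (dx - 1j * R * dy))

# at short distances the two expressions agree to better than one percent
for dx in range(-5, 6):
    for dy in range(-5, 6):
        if dx == 0 and dy == 0:
            continue
        assert abs(g_cyl(dx, dy) / g_plane(dx, dy) - 1) < 1e-2
```

The relative deviation is of order $(\pi|\Delta x - i\Delta y|/N_x)^2$, so the agreement improves rapidly with the circumference.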
Being a BCS state, we know that $\rho_\text{cyl}$ can be written as \cite{EntanglementSpinChains, Peschel} \begin{equation} \rho_\text{cyl} = \frac{1}{Z}\exp\left(\sum_m \lambda_m b^\dagger_m b_m \right), \end{equation} where $\{b_m\}$ are fermionic modes and $Z$ is the normalization constant. In Appendix F, we describe a general algorithm to obtain both the spectrum $\{\lambda_m\}$ and the fermionic modes from the pairing function $g_{nm}$. \begin{figure}[h] \centering \subfigure[]{ \includegraphics[width=0.85\linewidth]{fig_ek} \label{entspec2D} } \subfigure[]{ \includegraphics[width=0.85\linewidth]{fig_epsilon} \label{ententr2D} } \caption{(a) Entanglement (single-body) spectrum for different values of $\epsilon$ on a cylinder with $N_x=N_y=20$. (b) Scaling of the entanglement entropy as a function of the circumference of the cylinder for different values of $\epsilon$, with $N_y=80$ fixed.} \end{figure} We compute the expected occupation per site for different values of $\epsilon$ (see Fig.~\ref{dens2D}). We see that the boundaries do not affect the physics deep inside the bulk for large enough $N_y$. Given that in the limit $\epsilon\to 0$ the state corresponds to a trivial vacuum, the occupation is small in the OPE regime. The periodicity in the $x$-direction is preserved in $\rho_\text{cyl}$, so that we can associate a momentum $k$ to each mode. We can then write the single-body entanglement spectrum as a dispersion relation. In Fig.~\ref{entspec2D}, we see the single-body spectrum for different values of $\epsilon$. It corresponds to a chiral free fermion. For values of $\epsilon$ close to $0$, there is a gap in the dispersion that closes at around $\epsilon\sim 0.1$. This behavior is in agreement with the entanglement spectrum of $p+ip$ superconductors in the weak pairing phase \cite{chiralfermion}.
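Appendix F deals with the BCS case, where the anomalous correlations enter as well. For orientation, here is the simplest, particle-number-conserving version of the correlation-matrix (Peschel) recipe, applied to a critical hopping chain rather than to our states; it is a sketch of the general idea, not the algorithm of Appendix F:

```python
import numpy as np

def half_chain_entropy(L):
    """Entanglement entropy of half of a critical hopping chain
    (open boundaries, half filling) from the correlation matrix."""
    h = -(np.eye(L, k=1) + np.eye(L, k=-1))   # single-particle Hamiltonian
    e, v = np.linalg.eigh(h)
    occ = v[:, e < 0]                          # filled negative-energy modes
    C = occ @ occ.T                            # C_{nm} = <c_n^dag c_m>
    nu = np.linalg.eigvalsh(C[: L // 2, : L // 2])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))

# logarithmic growth with system size, as expected at criticality
assert half_chain_entropy(40) > half_chain_entropy(20) > 0
```

The eigenvalues of the restricted correlation matrix play the role of the single-body entanglement occupations; in the BCS case the $2\times 2$ Nambu structure must be diagonalized instead.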
From the entanglement spectrum, we computed the scaling of the entanglement entropy for different values of $\epsilon$ by changing the circumference of the cylinder (see Fig.~\ref{ententr2D}). In all cases, the scaling follows an area law $S(N_x)\sim c N_x$, with non-universal slopes. According to the scaling, there is no topological correction in the entanglement entropy. Once again, this is in agreement with the behavior of $p+ip$ superconductors \cite{chiralfermion, triplettopSC}. \section{Conclusions} We have presented a characterization of many-body states for lattice systems constructed from the CBs of the chiral Ising CFT. The basic feature of the construction is the use of pairs of $\sigma$ fields to describe single localized spins. Writing these CBs using local vertex operators enables us to relate this formalism to usual matrix product states. This rewriting makes explicit the relation between the ancillary CFT degrees of freedom and the lattice fermionic modes. We have provided evidence that states constructed from CBs using only $\sigma$ fields can be written as BCS states. A partial proof of this fact can be obtained whenever an OPE approximation is valid. In this case, an explicit BCS form can be obtained using the local vertex operator formalism. In the case of translationally invariant 1D configurations, we can go beyond the approximation and write a full non-perturbative proof. This also allows us to obtain a whole family of quasi-local parent Hamiltonians that can be written as quadratic fermionic forms. They are closely related to the critical ITF Hamiltonian \eqref{ITFh1}. In particular, we presented a proof that the ground state of the critical ITF Hamiltonian can be obtained exactly from this construction. The first excited state of the even-parity sector of this Hamiltonian can also be obtained using CBs with fermions in the asymptotic CFT states. The OPE approximation can be used to study large 2D spin configurations.
By placing the degrees of freedom on finite cylinders, we have related the states obtained from the CBs in the OPE regime to the weak pairing phase of the $p+ip$ superconductor. This has been done via the entanglement spectrum obtained from the reduced density matrix of half of the cylinder. Further work is needed to deepen the connection between CBs and the ground states of finite systems. In the case of the Ising CFT, this would mean a general proof that $\ket{\psi_{ee}}$ describes a BCS wave function regardless of the coordinate configuration. A deeper understanding of the formalism may produce other physically relevant states, such as the ground state of the 1D odd-parity sector of the ITF Hamiltonian, or vortices in 2D superconductors. In addition, generalizations to other rational CFTs, such as the Potts model or the $\mathbb{Z}_n$ model \cite{Tsvelik, QuantumGroupsCFT, diFrancesco}, are worth studying. Due to the algebraic constraints, we expect those constructions to be related to anyon chains \cite{goldenChain,AnyonTN} or parafermions \cite{para1,para2,para3}. \section*{Acknowledgments} We would like to thank H.-H. Tu for being part of this project at its early stages and for suggesting the algorithm for variational parent Hamiltonians. We would also like to thank A.E.B. Nielsen, J. Slingerland, and R. Sanchez for useful discussions. This work is supported by the Spanish Research Agency (Agencia Estatal de Investigaci\'on) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597, and funded by Grant No. FIS2015-69167-C2-1-P from the Spanish government, and QUITEMAD+ S2013/ICE-2801 from the Madrid regional government. SM is supported by the FPI-Severo Ochoa Ph.D. fellowship No. SVP-2013-067869.
\section*{Appendix A: Determinant form for the 1D wave function} We can rewrite the wave function amplitudes as \begin{equation} \Psi_\mathbf{p} = \frac{1}{\tilde N_0}\left(\sum_{\{s_1=1,\cdots\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\,A(\{s_k\})\right)^{1/2} \end{equation} where \begin{align} A(\{s_k\}) =\bigg(& \prod_{j>i}^N \sin\left[\frac{\pi}{N}\left(j-i+\frac{1+2\delta}{4}(s_j-s_i)\right)\right]\\ &\sin\left[\frac{\pi}{N}\left(j-i-\frac{1+2\delta}{4}(s_j-s_i)\right)\right]\bigg)^{1/2}.\nonumber \end{align} Using the fact that (see Appendix in \cite{WFfromCB}) \begin{align} \prod_{j>i}^N &\sin\left[\frac{\pi}{N}\left(j-i+\frac{\alpha}{4}(s_j-s_i)\right)\right] \\ &=\prod_{j>i}^N \sin\left[\frac{\pi}{N}\left(j-i-\frac{\alpha}{4}(s_j-s_i)\right)\right], \end{align} for arbitrary real $\alpha$, we can eliminate the square root in $A(\{s_k\})$ and lift the restriction on $s_1$ noting that \begin{align} \sum_{\{s_1=1,\cdots\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\, A(\{s_k\}) =\frac{1}{2}\sum_{\{s_1=\pm1,\cdots\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\, A(\{s_k\}). \end{align} Putting all the pieces together, we have \begin{equation} \Psi_\mathbf{p} = \frac{1}{N_0}\left(\sum_{\{s_k\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\,A(\{s_k\})\right)^{1/2}, \end{equation} where now \begin{equation} A(\{s_k\}) = \prod_{j>i}^N \sin\left[\frac{\pi}{N}\left(j-i+\frac{1+2\delta}{4}(s_j-s_i)\right)\right] \end{equation} and \begin{equation} N_0^2 = 2 \tilde N_0^2 = 2^{N}\left(\frac{N}{2^{N-1}}\right)^{N/2}. 
\end{equation} Note first that, using the identities \begin{equation} \sum_{\sigma\in S_N}\text{sgn}(\sigma)\prod_{n=1}^N \alpha_n^{\sigma(n)-1} = \prod_{n>m}^N (\alpha_n-\alpha_m) \end{equation} and \begin{equation} \sin\left(\frac{\theta_n - \theta_m}{2}\right) = \exp\left(-i\frac{\theta_n + \theta_m}{2}\right)\left(\frac{z_n - z_m}{2i}\right), \end{equation} we have \begin{align} A(\{s_k\})&=\prod_{j>i}^N\left[ \exp\left(-i\frac{\theta_j + \theta_i}{2}\right)\left(\frac{z_j - z_i}{2i}\right)\right]\nonumber\\ &=C_N\sum_{\sigma\in S_N}\text{sgn}(\sigma)\left(\prod_{j=1}^N a_{j,\sigma(j)}\right)\left(\prod_{j=1}^N b_{j,\sigma(j)}\right) \end{align} where \begin{equation} (V)_{r,t}=a_{r,t} = \exp\left(i\frac{2\pi}{N}r(t-1)\right) \label{Vandermonde} \end{equation} defines a Vandermonde matrix, \begin{equation} b_{r,t} = \exp\left(i\frac{\pi}{4N}s_r(1+2\delta)(2t-N-1)\right) \end{equation} contains all the dependence on $\{s_k\}$, and \begin{equation} C_N = (2i)^{-N(N-1)/2}e^{-i\frac{\pi}{2}(N^2-1)}. \end{equation} Coming back to $\Psi_\mathbf{p}$, we can now sum over the auxiliary spins $\{s_k\}$ \begin{align} \sum_{\{s_k\}}&\tilde\epsilon_{\mathbf{p}\mathbf{s}}\,A(\{s_k\}) \\ &= C_N\sum_{\sigma\in S_N}\text{sgn}(\sigma)\prod_{j=1}^N a_{j,\sigma(j)} \left(\sum_{\{s_k\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\prod_{j=1}^N b_{j,\sigma(j)}\right).\nonumber \end{align} We can perform the sum \begin{align} \sum_{\{s_k\}}\tilde\epsilon_{\mathbf{p}\mathbf{s}}\prod_{j=1}^N b_{j,\sigma(j)}=2^N\prod_{j=1}^N f_{j,\sigma(j)}, \end{align} where \begin{equation} f_{r,t} = \left\{ \begin{array}{l l} \cos\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1\right)\right], & \,\,m_r=0,\\ i\sin\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1\right)\right], & \,\, m_r=1, \end{array} \right. \end{equation} and we make use again of the pair-wise basis $\mathbf{m}=(m_1,\cdots,m_N)$. 
In other words, we have a cosine whenever the $r$-th reference pair fuses to an identity and sine when it fuses to a fermion. We can now clean everything up. Note first that \begin{align} N_0^2 &= 2^{N}\prod_{j>i}^N \sin\left(\frac{\pi}{2N}\left(j-i\right)\right)\nonumber\\ &= 2^N C_N\sum_{\sigma\in S_N}\text{sgn}(\sigma)\left(\prod_{j=1}^N a_{j,\sigma(j)}\right) \\ &= 2^N C_N\det(V), \nonumber \end{align} where $V$ is the Vandermonde matrix defined by $a_{r,t}$. If we define the matrix $F_\mathbf{m}$ by the elements $f_{r,t}$, we have \begin{equation} \Psi_\mathbf{m} = \left(\frac{\det(F_\mathbf{m}*V)}{\det(V)}\right)^{1/2}, \end{equation} where $F_\mathbf{m}*V$ is the Hadamard product \begin{equation} (F_\mathbf{m}*V)_{r,t} = a_{r,t}f_{r,t}\,. \end{equation} \section*{Appendix B: BCS state from determinant form for 1D wave functions} In order to show that the wave function amplitudes \eqref{psieedet} correspond to a BCS state, we need to write them as Pfaffians obtained from a given pairing function. We will accomplish this by using the multilinearity of the determinant. First, consider the matrix \begin{equation} (U)_{r,t} = \frac{1}{\sqrt N} \exp\left(i\frac{2\pi}{N}r\left(t-\frac{1}{2}\right)\right). \end{equation} It is easy to show that $U$ is a unitary matrix. Also, by multilinearity of the determinant, \begin{equation} \Psi_\mathbf{m}^2 = \frac{\det(F_\mathbf{m}*V)}{\det(V)} = \frac{\det(F_\mathbf{m}*U)}{\det(U)}. \end{equation} Now, note that we can write \begin{equation} \det(F_\mathbf{m}*U) = C_N(\delta)^2 \det(H_\mathbf{m}*U), \end{equation} where \begin{align} C_N(\delta)^2 &= \prod_{m=1}^N \cos\left[\frac{\pi}{4N}(1+2\delta)\left(2m-N-1\right)\right], \end{align} and $H_\mathbf{m}$ is defined by matrix elements \begin{equation} (H_\mathbf{m})_{r,t} = \left\{ \begin{array}{l l} 1, & \,\,m_r=0,\\ i\tan\left[\frac{\pi}{4N}(1+2\delta)\left(2t-N-1\right)\right], & \,\, m_r=1. \end{array} \right. \end{equation} We can further simplify this expression. 
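As a quick numerical aside (not part of the derivation), the two claims used here, the unitarity of $U$ and the invariance of the ratio $\det(F*V)/\det(V)$ under the replacement $V\to U$, are easy to check with a random matrix $F$, since $U$ differs from $V$ only by an overall row rescaling. A minimal sketch, assuming NumPy:

```python
import numpy as np

N = 4
row = np.arange(1, N + 1)[:, None]   # r = 1, ..., N
col = np.arange(1, N + 1)[None, :]   # t = 1, ..., N

# (V)_{r,t} = exp(i 2pi/N r(t-1)),  (U)_{r,t} = exp(i 2pi/N r(t-1/2)) / sqrt(N)
V = np.exp(1j * 2 * np.pi / N * row * (col - 1))
U = np.exp(1j * 2 * np.pi / N * row * (col - 0.5)) / np.sqrt(N)

# U is unitary, as claimed in the text
assert np.allclose(U @ U.conj().T, np.eye(N))

# U_{r,t} = V_{r,t} * exp(i pi r/N)/sqrt(N) is a pure row rescaling of V,
# so by multilinearity det(F*V)/det(V) = det(F*U)/det(U) for ANY matrix F
rng = np.random.default_rng(0)
F = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
ratio_V = np.linalg.det(F * V) / np.linalg.det(V)
ratio_U = np.linalg.det(F * U) / np.linalg.det(U)
assert np.allclose(ratio_V, ratio_U)
```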
If we define $M_\mathbf{m} = \left((H_\mathbf{m}*U)U^\dagger\right)$ and \begin{align} g_r &= (-1)^r\frac{i}{N}\sum_{m=1}^N \tan\left[\frac{\pi(1+2\delta)}{4N}\left(2m-N-1\right)\right]e^{i\frac{2\pi}{N}mr}\nonumber\\ & =(-1)^r\frac{2}{N}\sum_{k>0}\tan\left(\frac{k}{4}\right)\sin\left(kr\right), \label{BCSpairingfun} \end{align} (using momenta $k$ as defined in \eqref{kdef}), it is easy to see that \begin{equation} (M_\mathbf{m})_{r,t} = \left\{ \begin{array}{l l} \delta_{r,t}, & \,\,m_r=0,\\ g_{r-t}, & \,\, m_r=1. \end{array} \right. \end{equation} Note that $g_r$ is an anti-symmetric function. Taking into account that $\sum m_n=2R$ is an even number, assume that the $1$'s are located at positions $r(1)<\cdots<r(2R)$. In order to compute the determinant of $M_\mathbf{m}$, note that \begin{equation} \det(M_\mathbf{m}) = \det(\textbf{G}_\mathbf{m}) \end{equation} where \begin{equation} (\textbf{G}_\mathbf{m})_{ij} = g_{r(i)-r(j)}, \end{equation} is the $2R\times 2R$ anti-symmetric matrix obtained from $M_\mathbf{m}$ by keeping only the rows and columns corresponding to $r(1),\cdots,r(2R)$. Being anti-symmetric, note also that \begin{equation} \det(\textbf{G}_\mathbf{m}) = \text{Pf}^2 (\textbf{G}_\mathbf{m}). \end{equation} Summing up, we have \begin{align} \Psi_\mathbf{m}^2 & = C_N(\delta)^2\frac{\det(H_\mathbf{m}*U)}{\det(U)}\nonumber\\ & = C_N(\delta)^2\det\left(M_\mathbf{m}\right)\\ & = \left(C_N(\delta)\,\text{Pf} (\textbf{G}_\mathbf{m})\right)^2.\nonumber \end{align} Given that this result holds for all $\mathbf{m}$, we conclude that $\ket{\psi}=\sum_\mathbf{m} \Psi_\mathbf{m}\ket{\mathbf{m}}$ corresponds to a BCS state defined by pairing function \eqref{BCSpairingfun}. \section*{Appendix C: Finding the 1D parent Hamiltonians} Consider a family of Hamiltonian terms \begin{equation} H_\alpha = \sum_{i_1,\cdots,i_k}h^{(\alpha)}_{i_1,\cdots,i_k} \end{equation} which can be either local or non-local. For convenience, we set $H_0 = \mathds{1}$. 
Given a wavefunction $\ket{\Psi}$, we would like to find a linear superposition of these operators that will have $\ket{\Psi}$ as an eigenstate. In other words, we want to find coefficients $J_\alpha$ such that \begin{equation} \left(\sum_\alpha J_\alpha H_\alpha\right) \ket{\Psi} = 0. \label{superposition} \end{equation} In order to solve this, consider the matrix \begin{equation} (M)_{\alpha\beta} = \braket{\Psi\left|H_\alpha H_\beta\right|\Psi}. \end{equation} It is easy to see that condition \eqref{superposition} will be satisfied for a certain set of coefficients $\{J_\alpha\}$ if and only if $M$ has a non-trivial kernel. (Note that $M$ is positive semidefinite.) Going back to the Hamiltonian terms \eqref{HamTerms1D}, we can use the JW transformation, \begin{align} \sigma_n^z &= 1-2c_n^\dagger c_n, \nonumber\\ \sigma_n^x &= \prod_{m=1}^{n-1}(1-2c_m^\dagger c_m)(c_n^\dagger+c_n), \\ \sigma_n^y &= i \prod_{m=1}^{n-1}(1-2c_m^\dagger c_m)(c_n^\dagger-c_n),\nonumber \end{align} to obtain \begin{align} Z &\mapsto -\sum_{n}(1-2c_n^\dagger c_n), \nonumber\\ X_{r} &\mapsto - \sum_{n}(c_n^\dagger - c_n)(c_{n+r}^\dagger + c_{n+r}),\\ Y_{r} &\mapsto \sum_{n} (c_n^\dagger + c_n)(c_{n+r}^\dagger - c_{n+r}).\nonumber \end{align} We pick the convention for the JW transformation so that $\ket{0}_c = \ket{\uparrow}^{\otimes N}$. Note that, for the even parity sector $\braket{Q}=\braket{\prod_n \sigma_n^z}=1$, we have antiperiodic boundary conditions for the fermions $c_{N+m} = -c_{m}$. It is illuminating to write these operators in terms of Majorana fermions \begin{equation} a_{2n-1} = c_n + c_n^\dagger, \quad a_{2n} = \frac{c_n-c_n^\dagger}{i}. \end{equation} Antiperiodic boundary conditions on the fermions imply $a_{2N+r} = -a_r$. 
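As an aside, the JW dictionary above is easy to check with explicit matrices on a small chain. The sketch below (assuming NumPy, with sites labelled from $0$) builds $c_n$ from Pauli matrices with the convention that spin up is the empty mode, and verifies the canonical anticommutation relations together with $\sigma^z_n = 1-2c_n^\dagger c_n$:

```python
import numpy as np
from functools import reduce

# Pauli matrices; |up> is the empty fermionic mode in this convention
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def chain_op(ops):
    """Tensor product of a list of single-site operators."""
    return reduce(np.kron, ops)

N = 3
# JW fermions: c_n = (prod_{m<n} sigma^z_m) (sigma^x_n + i sigma^y_n)/2
c = []
for n in range(N):
    ops = [Z] * n + [(X + 1j * Y) / 2] + [I2] * (N - n - 1)
    c.append(chain_op(ops))

Id = np.eye(2 ** N)
for i in range(N):
    for j in range(N):
        # canonical anticommutation relations
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0)
        acomm = c[i] @ c[j].conj().T + c[j].conj().T @ c[i]
        assert np.allclose(acomm, Id if i == j else 0)
    # sigma^z_n = 1 - 2 c_n^dag c_n
    Zn = chain_op([Z if m == i else I2 for m in range(N)])
    assert np.allclose(Zn, Id - 2 * c[i].conj().T @ c[i])
```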
In these variables, we have \begin{align} Z &= i\sum_n a_{2n-1}a_{2n},\nonumber\\ X_r &= i\sum_n a_{2n}a_{2(n+r)-1},\\ Y_r &= - i\sum_n a_{2n-1}a_{2(n+r)}.\nonumber \end{align} Now, let us consider the variational Hamiltonian family defined in \eqref{HamFamily1D}. Using the Majorana variables, we obtain \begin{align} \tilde H_1 &= i\sum_n a_n a_{n+1},\\ \tilde H_r &= i \sum_n (-1)^n a_n a_{n+2(r-1)-1},\nonumber \end{align} with $r = 2,\cdots, N/2+1$. The action of the KW transformation on the Majorana fermions is simply \begin{equation} a_r \mapsto a_{r+1}, \end{equation} which acts as expected on the Hamiltonian family. Note that this action mimics the interpretation of the KW transformation as a consequence of the braiding of sigma fields as discussed in Ref.\cite{WFfromCB}. The Hamiltonian family we have used is very similar to the conserved quantities of the Ising model, seen as an integrable model \cite{SKoriginal}. Those can be obtained from \begin{equation} E_p = (-1)^p\frac{i}{2p}\sum_n a_n a_{n+p}. \end{equation} Note that $\tilde H_1 = -2E_1$ and that $[E_p,E_q]=0$. It has been shown that formal manipulations of $\{E_p\}$ can yield lattice representations of the Virasoro algebra \cite{SKoriginal, SKnonintegrable}. \section*{Appendix D: Towards a generalized Wick theorem} The OPE of two $\sigma$ fields is given by \eqref{fullSigmaOPE}. One can easily identify the fields $\alpha(z)$ as the ones appearing in the fusion channel of the CVOs $V_{00}$ and $V_{\chi \chi}$, while the fields $\beta(z)$ are the ones appearing in the CVOs $V_{0 \chi}$ and $V_{\chi0}$. 
This implies that \eqref{N4VertexCondition} holds provided the fields $\alpha(z_i)$ and $\beta(z_i)$ satisfy the relation \begin{align} \braket{ \alpha_1 \alpha_2 \alpha_3 \alpha_4 } \braket{ \beta_1 \beta_2 \beta_3 \beta_4 } = & \braket{ \beta_1 \beta_2 \alpha_3 \alpha_4 } \braket{ \alpha_1 \alpha_2 \beta_3 \beta_4 } \label{generalizedWick} \\ &- \braket{ \beta_1 \alpha_2 \beta_3 \alpha_4 } \braket{ \alpha_1 \beta_2 \alpha_3 \beta_4 }\nonumber \\ &+ \braket{ \beta_1 \alpha_2 \alpha_3 \beta_4 } \braket{ \alpha_1 \beta_2 \beta_3 \alpha_4 },\nonumber \end{align} where $\alpha_i = \alpha(z_i)$ and $\beta_i = \beta(z_i)$. This equation coincides with the standard Wick theorem if $\alpha(z)=\mathds{1}$ and $\beta(z) = \chi(z)$. Let us provide other examples. Suppose $\alpha_1 = T(z_1)$ with $T(z)$ the stress tensor \cite{diFrancesco}, $\alpha_2 = \alpha_3 = \alpha_4 = \mathds{1}$ and $\beta_i = \chi(z_i)$. Using this, \eqref{generalizedWick} becomes \begin{align} \braket{ T_1 } \braket{ \chi_1 \chi_2 \chi_3 \chi_4 } = & \braket{ \chi_1 \chi_2 } \braket{ T_1 \chi_3 \chi_4 } - \braket{ \chi_1 \chi_3 } \braket{ T_1 \chi_2 \chi_4 } \nonumber\\ &+ \braket{ \chi_1 \chi_4 } \braket{ T_1 \chi_2 \chi_3 }. \end{align} The left hand side of the equation vanishes because on the plane $\braket{ T(z) } = 0$. To find $\braket{ T \chi \chi }$, we use Ward identities \cite{BPZ} to conclude \begin{equation} \braket{ T(z_1) \chi({z_2}) \chi(z_3) } = \frac{ z_{23}}{ 2 z_{12}^2 z_{13}^2}. \end{equation} Plugging these equations into \eqref{generalizedWick} yields \begin{align} \frac{1}{ z_{12}} \frac{ z_{34}}{ 2 z_{13}^2 z_{14}^2} &- \frac{1}{ z_{13}} \frac{ z_{24}}{ 2 z_{12}^2 z_{14}^2} + \frac{1}{ z_{14}} \frac{ z_{23}}{ 2 z_{12}^2 z_{13}^2} = \\ &\frac{ z_{12} z_{34} - z_{13} z_{24} + z_{14} z_{23}}{ 2 (z_{12} z_{13} z_{14})^2} =0, \nonumber \end{align} so that the condition is satisfied. 
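This cancellation can also be checked numerically at random points, using only the correlators quoted above (a quick sanity check, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # z_1, ..., z_4
d = lambda i, j: z[i - 1] - z[j - 1]                       # z_ij, 1-based

chi2 = lambda i, j: 1 / d(i, j)                            # <chi_i chi_j>
# Ward-identity result  <T(z_1) chi_i chi_j> = z_ij / (2 z_1i^2 z_1j^2)
T2 = lambda i, j: d(i, j) / (2 * d(1, i) ** 2 * d(1, j) ** 2)

# the Wick-like combination must vanish, since <T(z)> = 0 on the plane
lhs = chi2(1, 2) * T2(3, 4) - chi2(1, 3) * T2(2, 4) + chi2(1, 4) * T2(2, 3)
assert abs(lhs) < 1e-8
```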
As a more elaborate example, choose $\alpha_i = \mathds{1}$, $\beta_1 = L_{-n} \chi(z_1)$ with $L_{-n}$ the mode operator of the stress tensor that belongs to the representation of the Virasoro algebra \cite{diFrancesco}, and $\beta_i = \chi(z_i) \; (i=2,3,4)$. Equation \eqref{generalizedWick} becomes \begin{align} \braket{ \left(L_{-n}\chi_1\right) \chi_2 \chi_3 \chi_4 } = & \braket{ \left(L_{-n}\chi_1\right) \chi_2 } \braket{ \chi_3 \chi_4 } \\ &- \braket{\left(L_{-n}\chi_1\right) \chi_3 } \braket{ \chi_2 \chi_4 } \nonumber\\ &+ \braket{ \left(L_{-n}\chi_1\right) \chi_4 } \braket{ \chi_2 \chi_3 },\nonumber \end{align} where \begin{equation} L_{-n} \chi_1(z_1) = \oint_{z_1} d \zeta \; (\zeta - z_1)^{ - n +1} \; T(\zeta) \, \chi(z_1), \quad n \geq 1. \end{equation} (We have suppressed the denominator $2 \pi i$ in the integral.) Equation \eqref{generalizedWick} can be written as \begin{equation} \Omega_n \equiv \oint_{z_1} d \zeta \; (\zeta - z_1)^{ - n +1} \; f(\zeta, \{ z_i \} ) = 0, \label{OmegaCondition} \end{equation} with \begin{align} f(\zeta, \{ z_i \} ) =& \braket{T(\zeta)\chi_1\chi_2\chi_3\chi_4}- \braket{T(\zeta)\chi_1\chi_2}\braket{\chi_3\chi_4}\\ &+ \braket{T(\zeta)\chi_1\chi_3}\braket{\chi_2\chi_4} - \braket{T(\zeta)\chi_1\chi_4}\braket{\chi_2\chi_3}.\nonumber \end{align} We now use the familiar identity for general fields $\phi_i$ with conformal weights $h_i$ \cite{diFrancesco} \begin{align} &\braket{T(\zeta) \prod_{i} \phi_i(z_i) } =\\ &\qquad \left[ \sum_{i} \left( \frac{ h_i}{(\zeta- z_i)^2 } + \frac{1}{ \zeta - z_i} \frac{ \partial}{ \partial z_i} \right) \right] \braket{ \prod_{i} \phi_i({z_i})}, \nonumber \end{align} to find \begin{equation} f(\zeta, \{ z_i \} ) = \left( \frac{ \zeta - z_1}{ (\zeta - z_2) (\zeta- z_3) (\zeta - z_4)} \right)^2 \frac{ z_{23} z_{24} z_{34}}{ 2 z_{12} z_{13} z_{14}}. 
\end{equation} Hence, equation \eqref{OmegaCondition} becomes \begin{align} \Omega_n &= \oint_{z_1} d \zeta \; (\zeta - z_1)^{ - n +1} \; \left( \frac{ \zeta - z_1}{ (\zeta - z_2) (\zeta- z_3) (\zeta - z_4)} \right)^2 \nonumber\\ &= \oint_{z_1} d \zeta \; \frac{ (\zeta - z_1)^{ -n + 3}}{ [ (\zeta - z_2) (\zeta- z_3) (\zeta - z_4)]^2} = 0. \quad (n\geq 1) \end{align} This equation holds for $n=1,2,3$, but for $n=4$ one has \begin{equation} \Omega_4 = \oint_{z_1} d \zeta \; \frac{ (\zeta - z_1)^{ -1}}{ [ (\zeta - z_2) (\zeta- z_3) (\zeta - z_4)]^2} = \frac{ 1}{ (z_{12} z_{13} z_{14})^2}. \end{equation} It seems that $\Omega_n \neq 0$ for $n \geq 4$. Hence in these cases \eqref{generalizedWick} does not hold. What is the explanation of this fact? The characters of the Verma modules $\mathcal{V}_\mathds{1}$ and $\mathcal{V}_\chi$ are given by \begin{align} \chi_0(q) &= {\rm Tr}_{\mathcal{V}_\mathds{1}} q^{ L_0 - c/24} \\ &=q^{ - \frac{1}{48}} \left( 1 + q^2 + q^3 + 2 q^4 + 2 q^5 + 3 q^6 + \dots \right), \nonumber\\ \chi_1(q) & = {\rm Tr}_{\mathcal{V}_\chi} q^{ L_0 - c/24}\\ &= q^{ - \frac{1}{48}- \frac{1}{2} } \left(1 + q + q^2 + q^3 + 2 q^4 + 2 q^5 + \dots \right). \nonumber \end{align} Notice that at level $n=4$ there are two states in the Majorana sector. As a matter of fact, the descendants we have considered above correspond to the derivatives of the field $\chi(z)$, \begin{equation} (L_{-n} \chi)(0) = \frac{n+1}{2} \chi_{- n - \frac{1}{2}}. \end{equation} The conclusion is that equation \eqref{N4VertexCondition} reduces to equation \eqref{generalizedWick} only if the fields $\alpha$ and $\beta$ that appear in the OPE \eqref{fullSigmaOPE} are unique at a given level. Otherwise one has to consider all the fields appearing at the same level. \section*{Appendix E: Fourier transform of 2D pairing function} If $\mu>0$, we have $\xi_\mathbf{k}>0$ for small momenta and \begin{equation} g_\mathbf{k} \sim -\frac{2\mu}{\hat\Delta(k_x+ik_y)}. 
\end{equation} Let us try to fix the constants in the Fourier transform, at least in an asymptotic way. Taking $L=aN$ to be the length of the system (so that the total number of sites is $N\times N$), we can define \begin{equation} k_x = \frac{2\pi(n-\frac{1}{2})}{aN}, \quad k_y = \frac{2\pi(m-\frac{1}{2})}{aN}, \end{equation} where $n,m=-N/2,-N/2+1\cdots,N/2$. Using this, we have \begin{align} \frac{1}{N^2}&\sum_{k_x,k_y} \frac{\exp\left[i\left(k_x x+k_y y\right)\right]}{k_x+ik_y} \\ &\to \frac{a^2}{(2\pi)^2}\int_{-\pi/a}^{\pi/a}dk_x\int_{-\pi/a}^{\pi/a}dk_y\frac{\exp\left[i\left(k_x x+k_y y\right)\right]}{k_x+ik_y}.\nonumber \end{align} Here we need to be careful. We will both take the limit $a\to 0$ and keep it explicitly in the prefactor. (This can be fixed by changing the normalization of the Fourier transform.) Note that for $y>0$ \begin{equation} \int_{-\infty}^\infty \frac{dk_y}{2\pi i} \frac{\exp\left[i\left(k_x x+k_y y\right)\right]}{k_y-ik_x} = \Theta(k_x) \exp\left[ik_x\left(x+i y\right)\right]. \end{equation} Using this, we have \begin{equation} g(\mathbf{r})\to -\frac{2a^2 \mu}{2\pi i \hat\Delta}\frac{1}{x+iy}. \end{equation} For a fixed number of fermions, this corresponds to the Moore-Read state for the FQHE. In this phase, the ground-state of the $p+ip$ superconductor is then a grand-canonical state of fermions with this pairing. \section*{Appendix F: Bogoliubov transformation from a BCS pairing matrix} Let us consider a fermionic system with on-site creation operators $c^\dagger_i$, $i\in \{1,\cdots, N\}$ and annihilation operators $c_i$. We will adopt the following notation: \begin{equation} \{C_l\}_{l=1}^{2N}=\{c_1,\cdots,c_N,c^\dagger_1,\cdots,c^\dagger_N\}. \end{equation} Thus, creation and annihilation operators are bundled together. Let us consider a different set of creation and annihilation operators, \begin{equation} \{B_l\}_{l=1}^{2N}=\{b_1,\cdots,b_N,b^\dagger_1,\cdots,b^\dagger_N\} \end{equation} with $B_l=\sum_p M_{lp} C_p$. 
The linear transformation will be a Bogoliubov transformation if the $b^\dagger$ and $b$ are bona fide creation and annihilation operators, with the expected anticommutation and adjoint relations. The first condition is that $M$ is unitary. If that is the case, the Bogoliubov matrix $M$ can be naturally split: \begin{equation} \begin{pmatrix}b \\ b^\dagger\end{pmatrix} = \begin{pmatrix}D & E \\ E^* & D^*\end{pmatrix} \begin{pmatrix}c \\ c^\dagger\end{pmatrix}, \end{equation} where $D$ and $E$ are $N\times N$ complex matrices, $D^*$ and $E^*$ are their complex conjugates (not Hermitian adjoints!) and they must fulfill \begin{align} DD^\dagger + EE^\dagger = \mathds{1}, \quad DE^T + ED^T = 0 \end{align} so that matrix $M$ will be unitary. Notice that $A^\dagger$ is the Hermitian adjoint, and $A^T$ is merely the transpose. In our case, the BCS state is defined via the pairing function $g_{ij}$, which is anti-symmetric, $g_{ij}=-g_{ji}$, \begin{equation} \ket{\Psi}=\exp\left( \sum_{ij} g_{ij} c^\dagger_i c^\dagger_j \right) \ket{0}_c \equiv \exp(P) \ket{0}_c \end{equation} where the last relation defines the pairing operator $P$. This state is the vacuum of a certain Bogoliubov set of operators, $\{B_l\}_{l=1}^{2N}=\{b^\dagger_1,\cdots,b^\dagger_N,b_1,\cdots,b_N\}$, which means that \begin{equation} b_k\ket{\Psi}=0, \qquad k\in \{1,\cdots,N\}. \end{equation} Let us impose that condition in order to find the Bogoliubov transformation $M$. By definition, \begin{equation} b_k = \sum_i D_{ki} c_i + E_{ki}c^\dagger_i, \end{equation} so our condition becomes \begin{equation} 0 = b_k \exp(P) \ket{0} = \left( \exp(P) b_k +[b_k,\exp(P)] \right)\ket{0}. \end{equation} Remember that $b_k$ can be expanded as a linear combination of $c_i$ and $c^\dagger_i$. Using \begin{equation} [c_i,f(\{c_j,c^\dagger_j\})]=\frac{\partial f}{\partial c^\dagger_i}, \end{equation} we find that \begin{equation} [c_i,\exp(P)]=\exp(P) \left(\sum_j g_{ij}c^\dagger_j \right). 
\end{equation} Of course, $c^\dagger_i$ commutes with $\exp(P)$. Thus, the annihilation condition becomes \begin{equation} \exp(P) \left\{ \sum_i D_{ki}c_i + \sum_{ij}D_{ki}g_{ij}c^\dagger_j+ \sum_i E_{ki}c^\dagger_i \right\} \ket{0} =0, \end{equation} which implies the following relation between $D$, $E$ and $g$: \begin{equation} \sum_i D_{ki} g_{ij} + E_{kj}=0. \end{equation} Thus, in order to find the Bogoliubov transformation given the pairing matrix $g$, we have to solve the following matrix equations: \begin{align} Dg+E &=0, \nonumber\\ D D^\dagger + E E^\dagger &=\mathds{1}, \\ DE^T + ED^T&= 0.\nonumber \end{align} From the first equation we get $E=-Dg$, which when inserted into the third equation yields $D(g^T+g)D^T=0$. But this relation is trivial due to the antisymmetry of $g$. Then, the only non-trivial equation becomes \begin{equation} D \left( \mathds{1} + g g^\dagger \right) D^\dagger = \mathds{1}. \end{equation} This equation can be easily solved in the eigenbasis of $\mathds{1} + g g^\dagger$, which is self-adjoint and positive-definite.
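The recipe above can be summarized in a short numerical sketch (assuming NumPy): diagonalize $\mathds{1}+gg^\dagger$, build $D$ from the inverse square roots of its eigenvalues, set $E=-Dg$, and check the unitarity conditions of the Bogoliubov matrix $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
g = A - A.T                          # antisymmetric pairing: g_ij = -g_ji

# solve D (1 + g g^dag) D^dag = 1 in the eigenbasis of 1 + g g^dag
w, Q = np.linalg.eigh(np.eye(N) + g @ g.conj().T)   # Hermitian, w >= 1
D = np.diag(1 / np.sqrt(w)) @ Q.conj().T
E = -D @ g

# unitarity conditions of M = [[D, E], [E*, D*]]
assert np.allclose(D @ D.conj().T + E @ E.conj().T, np.eye(N))
assert np.allclose(D @ E.T + E @ D.T, 0)
M = np.block([[D, E], [E.conj(), D.conj()]])
assert np.allclose(M @ M.conj().T, np.eye(2 * N))
```

The second condition holds automatically here, since $DE^T+ED^T=-D(g^T+g)D^T$ vanishes by the antisymmetry of $g$.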
\section{Introduction}\label{intro} Dimensionality reduction (DR) plays a crucial role in handling high-dimensional data. The supervised distance preserving projection (SDPP), proposed by Zhu et al. \cite{Ref_Zhu2013Supervised} for DR, minimizes the difference in local structure between the projected input covariates and their corresponding responses. Given sample points $\{\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots,\boldsymbol{x}_n\}\subset\mathcal{X}\subset\mathcal{R}^d$ and their corresponding responses $\{\boldsymbol{y}_1,\boldsymbol{y}_2, \cdots, \boldsymbol{y}_n\} \subset \mathcal{Y} \subset\mathcal{R}^m$, the SDPP takes the form \begin{equation}\label{eq_1} \begin{array}{ll} \min & J(\mathbf{P})=\frac{1}{n} \sum_{i=1}^{n} \sum_{\boldsymbol{x}_{j} \in \mathcal{N}\left(\boldsymbol{x}_{i}\right)}\left( \|\mathbf{P}^T\boldsymbol{x}_{i} - \mathbf{P}^T\boldsymbol{x}_{j}\|^2-\left\|\boldsymbol{y}_{i}-\boldsymbol{y}_{j}\right\|^{2}\right)^{2}. \end{array} \end{equation} Here, $\mathcal{N}\left(\boldsymbol{x}_{i}\right)$ is a neighborhood of $\boldsymbol{x}_{i}$, and $\mathbf{P} \in \mathcal{R}^{d\times r} \left(d \gg r\right)$ denotes the projection matrix. $\|\mathbf{P}^T\boldsymbol{x}_{i} - \mathbf{P}^T\boldsymbol{x}_{j}\|$ and $\left\|\boldsymbol{y}_{i}-\boldsymbol{y}_{j}\right\|$ are the pairwise distances among the projected input covariates and among the responses, respectively. Zhu et al. \cite{Ref_Zhu2013Supervised} reformulated the SDPP as a semidefinite quadratic linear programming (SQLP) problem when $d$ is small $(d\leq 100)$. Jahan \cite{Ref_jahan2018dimension} incorporated the total variance of the projected covariates into the objective function of the SDPP, and transformed the resulting optimization problem into the semidefinite least squares SDPP (SLS-SDPP). 
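For concreteness, a minimal sketch (in Python with NumPy, on hypothetical toy data) of evaluating the SDPP objective $J(\mathbf{P})$ in \eqref{eq_1} with $k$-nearest-neighbor neighborhoods; when the responses are generated by an exact linear projection, $J$ vanishes at that projection:

```python
import numpy as np

def sdpp_objective(P, X, Y, k=5):
    """J(P): squared mismatch between projected input distances and
    response distances, summed over k-NN neighborhoods, divided by n."""
    n = X.shape[0]
    Xp = X @ P                                          # projected covariates
    D = np.linalg.norm(X[:, None] - X[None], axis=2)    # pairwise input distances
    J = 0.0
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:             # k nearest neighbors of x_i
            dz = np.sum((Xp[i] - Xp[j]) ** 2)           # ||P^T x_i - P^T x_j||^2
            dy = np.sum((Y[i] - Y[j]) ** 2)             # ||y_i - y_j||^2
            J += (dz - dy) ** 2
    return J / n

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
P_true = rng.standard_normal((10, 2))
Y = X @ P_true                       # responses from an exact projection
assert np.isclose(sdpp_objective(P_true, X, Y), 0)       # perfect fit
assert sdpp_objective(rng.standard_normal((10, 2)), X, Y) > 0
```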
In fact, the hidden low rank constraint has been ignored in both of these converted SDP problems, which may lead to a suboptimal projection matrix and poor DR performance. In this manuscript, we show that the optimization problem of the SDPP \eqref{eq_1} can be converted equivalently into a rank constrained least squares semidefinite programming (RCLSSDP). The RCLSSDP also occurs in many other contexts, such as the nearest correlation matrix (NCM) \cite{Ref_gao2010structured,Ref_gao2010majorized,Ref_qi2014computing}, sensor network localization (SNL) \cite{Ref_singer2008remark} and classical multidimensional scaling (MDS) \cite{Ref_shang2004localization,Ref_1952Multidimensional}. As is known, rank constrained matrix optimization problems are nonconvex and computationally intractable \cite{Ref_buss1999computational}. Convex relaxation techniques have been applied to remove the nonconvexity of directly solving the rank constrained least squares problem (RCLS) \cite{Ref_candes2011tight,Ref_recht2010guaranteed}; one of the most popular convex relaxations is the nuclear norm minimization (NNM): \begin{equation}\label{eq_2} \begin{array}{ll} \min &\|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}+\lambda\|\mathbf{U}\|_*\\ \text { s.t. } & \mathbf{U} \in \mathcal{R}^{p_1\times p_2}, \end{array} \end{equation} where $\|\mathbf{U}\|_* = \Sigma_{i=1}^{\min{\left(p_1,p_2\right)}} \sigma_i(\mathbf{U})$ is the nuclear norm of $\mathbf{U}$ and $\lambda$ is a tuning parameter. Some efficient algorithms have been proposed to solve the NNM, such as proximal gradient descent \cite{Ref_toh2010accelerated} and proximal point methods \cite{Ref_jiang2014partial}. It has been shown that the solution of the NNM has desirable properties under proper assumptions. Another widely considered convex relaxation for rank constrained optimization is the max norm minimization (MNM) \cite{Ref_cai2014sparse,Ref_lee2010practical}, which uses the max norm as a regularizer. 
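As an illustration of the NNM approach (a toy sketch, not one of the accelerated solvers cited above), the proximal operator of $\lambda\|\cdot\|_*$ is singular value soft-thresholding, so a basic proximal gradient iteration for \eqref{eq_2} can be written as:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_*: soft-threshold singular values."""
    W, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (W * np.maximum(s - tau, 0)) @ Vt

def nnm_prox_grad(A_list, b, shape, lam, step, iters):
    """Proximal gradient for min_U ||A(U) - b||^2 + lam * ||U||_*,
    where A(U)_i = <A_i, U>."""
    U = np.zeros(shape)
    for _ in range(iters):
        resid = np.array([np.sum(Ai * U) for Ai in A_list]) - b
        grad = 2 * sum(ri * Ai for ri, Ai in zip(resid, A_list))
        U = svt(U - step * grad, step * lam)
    return U

rng = np.random.default_rng(0)
p1 = p2 = 8
U_true = rng.standard_normal((p1, 2)) @ rng.standard_normal((2, p2))  # rank 2
A_list = [rng.standard_normal((p1, p2)) for _ in range(60)]
b = np.array([np.sum(Ai * U_true) for Ai in A_list])

lam = 0.1
step = 1.0 / (2 * sum(np.sum(Ai ** 2) for Ai in A_list))  # crude 1/L bound
U = nnm_prox_grad(A_list, b, (p1, p2), lam, step, iters=300)

def objective(U):
    r = np.array([np.sum(Ai * U) for Ai in A_list]) - b
    return np.sum(r ** 2) + lam * np.linalg.svd(U, compute_uv=False).sum()

assert objective(U) < objective(np.zeros((p1, p2)))   # descent from U = 0
```

The conservative step size guarantees monotone descent; practical solvers use acceleration and line search instead.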
However, these convex relaxation techniques may expand the parameter space of the target problem, which can keep the solution of the relaxed problem far away from that of the target problem. Moreover, the penalty parameter needs to be carefully tuned to ensure that the optimal solution of the penalized problem satisfies the low rank constraint while the related fitting term remains small enough. In addition to convex relaxation methods, a class of nonconvex optimization algorithms for solving rank constrained least squares problems has also been widely investigated \cite{Ref_chen2015fast,Ref_jain2013low,Ref_luo2020recursive}; these algorithms directly enforce $\operatorname{rank}(\mathbf{U})=r$ on the iterates. In these algorithms, the low-rank matrix $\mathbf{U}$ is first factored as $\mathbf{R}\mathbf{L}^{\top}$ with two factor matrices $\mathbf{R}\in \mathcal{R}^{p_1\times r}$ and $\mathbf{L}\in \mathcal{R}^{p_2\times r}$, and then either gradient descent or exact minimization is run alternately on $\mathbf{R}$ and $\mathbf{L}$ \cite{Ref_candes2015phase,Ref_li2019rapid,Ref_zheng2015convergent}. Based on this framework, methods that perform sketching to speed up the computation via dimension reduction have been explored extensively in recent years \cite{Ref_luo2020recursive,Ref_mahoney2010randomized}. Since this class of methods computes and stores the iterates in factored form, it is more efficient than the NNM in both computation and storage, especially when the rank $r$ is very small compared with $p_1$ and $p_2$. \par Another popular nonconvex approach deals with the rank constraint directly: the constraint $\operatorname{rank}(\mathbf{U})\leq r$ is equivalently converted into the equality constraint $\|\mathbf{U}\|_*- \|\mathbf{U}\|_{(r)} = 0$, where $\|\mathbf{U}\|_*$ is the nuclear norm and $\|\mathbf{U}\|_{(r)}$ is the Ky-Fan $r$-norm. 
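The equivalence $\operatorname{rank}(\mathbf{U})\leq r \Leftrightarrow \|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)}=0$ is easy to check numerically for positive semidefinite matrices, where the gap is simply the sum of all but the $r$ largest eigenvalues (a quick sketch assuming NumPy):

```python
import numpy as np

def kyfan_gap(U, r):
    """<U, I> - ||U||_(r) for U in S_+: the sum of all but the r largest
    eigenvalues; it vanishes exactly when rank(U) <= r."""
    w = np.sort(np.linalg.eigvalsh(U))[::-1]   # eigenvalues, descending
    return w.sum() - w[:r].sum()

rng = np.random.default_rng(0)
d, r = 8, 2
B = rng.standard_normal((d, r))
U_lowrank = B @ B.T                  # PSD, rank r
C = rng.standard_normal((d, d))
U_full = C @ C.T                     # PSD, full rank (almost surely)

assert np.isclose(kyfan_gap(U_lowrank, r), 0)
assert kyfan_gap(U_full, r) > 1e-6
```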
An exact penalty approach is then used to move the equality constraint into the objective with a chosen penalty parameter. Since the nuclear norm $\|\mathbf{U}\|_*$ and the Ky-Fan $r$-norm $\|\mathbf{U}\|_{(r)}$ are both convex, the rank constrained optimization problem is converted into a difference-of-convex (DC) problem, which can be solved within the DC programming framework \cite{Ref_tao1997convex}. In particular, the rank constrained semidefinite programming (RCSDP) problem can be reformulated as a DC problem \cite{Ref_gao2010structured,Ref_gao2010majorized}, and the classical DC algorithm can be used to solve this DC formulation. In each iteration of the classical DC algorithm, the concave part of the objective function is replaced by its linearization at the current point, and the resulting convex optimization problem is solved efficiently by state-of-the-art solvers. In recent decades, many algorithms for solving DC models have been proposed, including the classical DC algorithm \cite{Ref_an2005dc,Ref_le1999Exact,Ref_le2012exact,Ref_tao1997convex}, the proximal DC algorithm \cite{Ref_souza2016global}, the DC algorithm with proximal bundle \cite{Ref_gaudioso2018minimizing,Ref_de2019proximal} and the proximal DC algorithm with extrapolation \cite{Ref_liu2019refined,Ref_wen2018proximal}. In \cite{Ref_jiang2021proximal}, a semi-proximal DC algorithm (SPDCA) was proposed to solve the rank constrained SDP, where the large scale subproblem was solved by an efficient majorized semismooth Newton-CG augmented Lagrangian method based on the software package SDPNAL+ \cite{Ref_sun2020sdpnal+}. This technique performs very well on many nonconvex problems, e.g., the quadratic assignment problem (QAP), standard quadratic programming and the minimum-cut graph tri-partitioning problem. However, these methods solve the subproblems exactly, which wastes a large amount of computation and reduces the overall efficiency. 
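The key step in each DC iteration is linearizing the concave part $-c\|\mathbf{U}\|_{(r)}$. For a positive semidefinite matrix, a subgradient of the Ky-Fan $r$-norm is the projector onto the leading $r$ eigenvectors (assuming an eigengap at position $r$); a small numerical check of the subgradient inequality, restricted to PSD matrices as in the problem at hand:

```python
import numpy as np

def kyfan(U, r):
    """Ky-Fan r-norm of a PSD matrix: sum of the r largest eigenvalues."""
    return np.sort(np.linalg.eigvalsh(U))[::-1][:r].sum()

def kyfan_subgrad(U, r):
    """Subgradient at PSD U: projector onto the leading-r eigenspace."""
    w, Q = np.linalg.eigh(U)
    Pr = Q[:, -r:]                 # eigenvectors of the r largest eigenvalues
    return Pr @ Pr.T

rng = np.random.default_rng(0)
d, r = 8, 3
S = rng.standard_normal((d, d))
U = S @ S.T
W = kyfan_subgrad(U, r)

for _ in range(20):
    T = rng.standard_normal((d, d))
    V = T @ T.T
    # convexity: ||V||_(r) >= ||U||_(r) + <W, V - U>
    assert kyfan(V, r) >= kyfan(U, r) + np.sum(W * (V - U)) - 1e-10
```

The inequality reduces to Fan's variational principle, $\langle \mathbf{W},\mathbf{V}\rangle \leq \|\mathbf{V}\|_{(r)}$ for any rank-$r$ orthogonal projector $\mathbf{W}$, which is what makes the linearized subproblem a valid convex majorization.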
\par As is well known, the computational cost of a DC algorithm is dominated by solving the convex subproblems. For large scale DC problems, solving the convex subproblems to high precision requires a huge amount of computation. In fact, it is time-consuming and unnecessary to solve the convex subproblems exactly at each iteration of a general DC algorithm, especially in the early iterations of the DCA. To overcome this issue, an inexact proximal DC algorithm (iPDCA) was proposed in \cite{Ref_souza2016global}, but the inexact strategy in that algorithm is difficult to implement in practical applications. In fact, it is a challenging task to design an inexact proximal DC algorithm that guarantees theoretical convergence and good numerical performance for solving the large scale DCLSSDP. As far as we know, there is no research in this direction. The main contributions of this paper can be divided into the following four parts. First of all, we propose a numerically efficient inexact proximal DC algorithm with a sieving strategy (s-iPDCA) for solving the RCLSSDP, and the sequence generated by the proposed algorithm globally converges to a stationary point of the corresponding DC problem. Secondly, for the subproblem of the s-iPDCA, we design a very effective accelerated block coordinate descent (ABCD) method to solve its strongly convex dual problem. Thirdly, a practical and numerically simple inexact strategy is used to solve the subproblem of the s-iPDCA efficiently. Finally, we compare our s-iPDCA with the classical PDCA and the PDCA with extrapolation (PDCAe) for solving the RCLSSDP in DR experiments on the COIL-20 database; the results demonstrate that the proposed s-iPDCA outperforms the other two algorithms. We also perform face recognition experiments on the standard ORL and YaleB face recognition databases; the results indicate that the new RCKSDPP model is very effective for reducing the dimension of face image data with complex distributions. 
Below are some common notations to be used in this paper. We use $\mathcal{S}^q$ to denote the linear subspace of all $q\times q$ real symmetric matrices and use $\mathcal{S}^q_+\backslash\mathcal{S}^q_-$ to denote the positive$\backslash$negative semidefinite matrix cone. We denote by $\|\cdot\|_F$ the Frobenius norm of matrices. $\|\cdot\|$ is used to represent the $l_2$ norm of vectors and matrices. Let $\boldsymbol{e}_i$ be the $i^{th}$ standard unit vector. Given an index set $\mathcal{L} \subset \left\{1, \cdots, q\right\}$, $|\mathcal{L}|$ denotes the size of $\mathcal{L}$. We denote the vector and square matrix of all ones by $\boldsymbol{1}_q$ and $\mathbf{E}_q$ respectively, and denote the identity matrix by $\mathbf{I}_q$. We use “$\operatorname{vec}(\cdot)$” to denote the vectorization of matrices. If $\boldsymbol{z} \in \mathcal{R}^p $, then Diag($\boldsymbol{z}$) is a $p \times p$ diagonal matrix with $\boldsymbol{z}$ on the main diagonal. We denote the Ky-Fan $k$-norm of a matrix $\mathbf{U}$ by $\|\mathbf{U}\|_{(k)} = \Sigma_{i=1}^k \sigma_i(\mathbf{U})$, where $\sigma_i(\mathbf{U})$ is the $i^{th}$ largest singular value of $\mathbf{U}$. $\langle\mathbf{U},\mathbf{A}\rangle = \Sigma_{i,j=1}^q\mathbf{U}_{i j}\mathbf{A}_{i j}$ denotes the inner product between square matrices $\mathbf{U}$ and $\mathbf{A}$. Let $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_q$ be the eigenvalues of $\mathbf{U}\in \mathcal{S}^q$ arranged in nonincreasing order. We denote by $\mu:=\left\{i|\lambda_i>0,i = 1,...,q\right\}$ and $\nu:=\left\{i|\lambda_i<0,i = 1,...,q\right\}$ the index sets of positive and negative eigenvalues, respectively. 
The spectral decomposition of $\mathbf{U}$ is given as $\mathbf{U} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\top}$ with \[\mathbf{\Lambda} = \left[\begin{array}{ccc} \mathbf{\Lambda_{\mu}} & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & \mathbf{\Lambda_{\nu}} \end{array}\right].\] The semidefinite positive$\backslash$negative matrix cone projection of $\mathbf{U}$ is represented as \[\operatorname{\Pi}_{\mathcal{S}_+^q}\left(\mathbf{U}\right)=\mathbf{Q}_{\mu}\mathbf{\Lambda}_{\mu}\mathbf{Q}_{\mu}^{\top}\backslash\operatorname{\Pi}_{\mathcal{S}_-^q}\left(\mathbf{U}\right)=\mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}.\] \section{Rank constrained supervised distance preserving projection} \label{sec:2} Let $\mathbf{U} = \mathbf{P}\mathbf{P}^T$, $\boldsymbol{\tau}_{i j} = \boldsymbol{x}_i - \boldsymbol{x}_j$ and $\iota_{i j} = \|\boldsymbol{y}_i - \boldsymbol{y}_j\|$; then the SDPP \eqref{eq_1} can be formulated as \begin{equation}\label{eq_3} \begin{array}{ll} \min_{\mathbf{U}\in \mathcal{S}_+^d} J(\mathbf{U})=\frac{1}{n} \sum_{i ,j=1}^n \mathbf{G}_{i j} \left(\left\langle\boldsymbol{\tau}_{i j} \boldsymbol{\tau}_{i j}^{\top},\mathbf{U}\right\rangle-\iota_{i j}^{2}\right)^{2}, \end{array} \end{equation} where $\mathbf{G}$ denotes a graph matrix, whose elements are given by \begin{equation}\nonumber \mathbf{G}_{i j} = \left\{\begin{array}{l} 1\quad \text{ if } \boldsymbol{x}_j\in\mathcal{N}(\boldsymbol{x}_i) \\ 0\quad \text{ otherwise } \end{array}\right. 
\end{equation} Let $p = \Sigma_{i,j=1}^n\mathbf{G}_{ i j }$ and rearrange the terms in \eqref{eq_3} with $\mathbf{G}_{i j}>0$ into one column; we then obtain a least squares semidefinite programming (LSSDP) problem \begin{equation}\label{eq_4} \begin{array}{l} \min_{\mathbf{U} \in \mathcal{S}^d_+} J(\mathbf{U}) =\frac{1}{n} \|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}, \end{array} \end{equation} in which $\boldsymbol{b} = \left[\iota_{(1)}^2,\iota_{(2)}^2,...,\iota_{(p)}^2\right]^{\top}$ and $\mathcal{A}:\mathcal{S}^d_+\rightarrow \mathcal{R}^p$ is a linear operator that can be explicitly represented as \begin{equation}\label{eq_5} \mathcal{A}\left(\mathbf{U}\right) = \left[\langle \mathbf{A}_1,\mathbf{U}\rangle,\langle \mathbf{A}_2,\mathbf{U}\rangle,...,\langle \mathbf{A}_p,\mathbf{U}\rangle\right]^{\top}, \mathbf{A}_i = \tau_{(i)}\tau_{(i)}^{\top}, i = 1,2,...,p. \end{equation} To distinguish the DR model expressed in \eqref{eq_4} from the original SDPP \eqref{eq_1}, we call it semidefinite SDPP (SSDPP) when we use it to reduce the dimension of data. In fact, the LSSDP in \eqref{eq_4} is the convex relaxation of the optimization problem given in \eqref{eq_1}, but its solution may not satisfy the low rank constraint, i.e., $\operatorname{rank}\left(\mathbf{U}\right)\leq r,r\ll d$. Therefore, the low rank constraint should be added to the LSSDP in \eqref{eq_4}, giving \begin{equation}\label{eq_6} \begin{array}{ll} \min_{\mathbf{U} \in \mathcal{S}^d_+} &J(\mathbf{U}) = \frac{1}{n}\|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}\\ \text { s.t. }&\operatorname{rank}\left(\mathbf{U}\right)\leq r. \end{array} \end{equation} We thus obtain a rank constrained least squares semidefinite programming (RCLSSDP) problem, which is equivalent to the optimization problem of the original SDPP \eqref{eq_1}. When applying this model to DR, we call it rank constrained SDPP (RCSDPP). As is well known, it is difficult to solve a rank constrained optimization problem.
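To make the construction of $\mathcal{A}$ and $\boldsymbol{b}$ concrete, the following Python sketch (illustrative only; the data, the variable names, and the all-pairs graph replacing the neighbourhood graph $\mathcal{N}(\boldsymbol{x}_i)$ are our assumptions) assembles the operator from toy data $\{\boldsymbol{x}_i, \boldsymbol{y}_i\}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.standard_normal((n, d))   # inputs x_i
y = rng.standard_normal(n)        # scalar responses y_i

# For brevity we take G_ij = 1 for all i != j (an all-pairs graph);
# the paper uses a neighbourhood graph N(x_i) instead.
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]

taus = [X[i] - X[j] for i, j in pairs]                # tau_ij = x_i - x_j
b = np.array([(y[i] - y[j]) ** 2 for i, j in pairs])  # iota_ij^2

def A_op(U):
    """Linear operator A(U) = [<A_l, U>]_l with A_l = tau_l tau_l^T,
    using <tau tau^T, U> = tau^T U tau."""
    return np.array([t @ U @ t for t in taus])

def J(U):
    """Least-squares objective of the LSSDP (4)."""
    r = A_op(U) - b
    return r @ r / n

print(J(np.eye(d)))   # objective value at a feasible PSD point
```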
By the observation that $\operatorname{rank}\left(\mathbf{U}\right)\leq r$ if and only if $\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)} = 0$, and that $\|\mathbf{U}\|_* = \langle \mathbf{U},\mathbf{I}\rangle$ for all $\mathbf{U}\in \mathcal{S}^d_+$, the RCLSSDP can be reformulated in the equivalent form \begin{equation}\label{eq_7} \begin{array}{ll} \min_{\mathbf{U} \in \mathcal{S}^d_+} &J(\mathbf{U}) = \frac{1}{n}\|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}\\ \text { s.t. }&\langle \mathbf{U},\mathbf{I}\rangle-\|\mathbf{U}\|_{(r)} = 0. \end{array} \end{equation} Even though problem \eqref{eq_6} is converted to the above form, the difficulty caused by the rank constraint is not eliminated. To address this problem, we employ an exact penalty approach. By penalizing the equality constraint in \eqref{eq_7} into the objective function, we obtain an LSSDP with a DC regularization term (DCLSSDP), given as \begin{equation}\label{eq_8} \min_{\mathbf{U} \in \mathcal{S}^d_+} J_c(\mathbf{U}) = \frac{1}{n}\|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}+ c\left(\langle \mathbf{U},\mathbf{I}\rangle-\|\mathbf{U}\|_{(r)}\right). \end{equation} For the penalty problem \eqref{eq_8}, we have the following conclusions. \begin{proposition}\label{prop_1} Let $\mathbf{U}_c^*$ be a global optimal solution to the penalized problem \eqref{eq_8}. If $\operatorname{rank}\left(\mathbf{U}_c^*\right)\leq r$, then $\mathbf{U}_c^*$ is an optimal solution of \eqref{eq_7}. \end{proposition} \begin{proof} For the details of the proof, one can refer to \cite[Proposition 3.1]{Ref_gao2010majorized}. \end{proof} \begin{proposition}\label{prop_2} Let $\mathbf{U}_c^*$ be a global optimal solution to the penalized problem \eqref{eq_8}, $\hat{\mathbf{U}}^*$ an optimal solution to the LSSDP \eqref{eq_4}, and $\mathbf{U}_r$ a feasible solution to problem \eqref{eq_7}. Given $\epsilon>0$, suppose $c$ is chosen such that $\left(J(\mathbf{U}_r)-J(\hat{\mathbf{U}}^*)\right)/c\leq \epsilon$.
Then \begin{equation}\label{eq_9} \langle \mathbf{U}_c^*,\mathbf{I}\rangle-\|\mathbf{U}_c^*\|_{(r)}<\epsilon \quad \text{and}\quad J(\mathbf{U}_c^*)\leq \bar{J}- c\left(\langle \mathbf{U}_c^*,\mathbf{I}\rangle-\|\mathbf{U}_c^*\|_{(r)}\right)\leq \bar{J}, \end{equation} where $\bar{J} = J(\mathbf{U}^*)$ and $\mathbf{U}^*$ is a global optimal solution of the RCLSSDP \eqref{eq_7}. \end{proposition} \begin{proof} For the details of the proof, one can refer to \cite[Proposition 3.2]{Ref_gao2010majorized}. \end{proof} From Proposition \ref{prop_2}, it is easy to see that an $\epsilon$-optimal solution to the RCLSSDP \eqref{eq_7} in the sense of \eqref{eq_9} is guaranteed by solving the penalized problem \eqref{eq_8} with a suitably chosen penalty parameter $c$. This provides the rationale for replacing the rank constraint in problem \eqref{eq_7} by the penalty function $c \left(\langle \mathbf{U},\mathbf{I}\rangle-\|\mathbf{U}\|_{(r)}\right)$. Obviously, the penalty parameter $c$ is essential for producing a low rank solution of the DCLSSDP \eqref{eq_8}. Thus, how to find an appropriate penalty parameter and efficiently solve the corresponding penalty problem is very important. Firstly, for choosing an appropriate penalty parameter, we have the following conclusion. \begin{proposition}\label{prop_3} Suppose there exists $\bar{c}>0$ such that some global optimal solution $\mathbf{U}^*_{\bar{c}}\in \mathcal{S}^d_+$ of \eqref{eq_8} with penalty parameter $\bar{c}$ satisfies $\|\mathbf{U}^*_{\bar{c}}\|_*-\|\mathbf{U}^*_{\bar{c}}\|_{(r)} = 0$. Then for any $c>\bar{c}$, $\mathbf{U}^*_{\bar{c}}$ is a global optimal solution of \eqref{eq_8} with penalty parameter $c$. \end{proposition} \begin{proof} We can rewrite \eqref{eq_8} as \begin{equation}\label{eq_10} \min_{\mathbf{U} \in \mathcal{S}^d_+} J(\mathbf{U}) + c(\| \mathbf{U}\|_*-\|\mathbf{U}\|_{(r)}).
\end{equation} From $\|\mathbf{U}^*_{\bar{c}}\|_*-\|\mathbf{U}^*_{\bar{c}}\|_{(r)} = 0$ and $\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)} \geq 0$, we have \[J(\mathbf{U}) + c(\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)})\geq J(\mathbf{U}) + c(\|\mathbf{U}^*_{\bar{c}}\|_*-\|\mathbf{U}^*_{\bar{c}}\|_{(r)}),\] where $c>0$. Adding $\bar{c}(\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)})$ to both sides of the above inequality, we get \[J(\mathbf{U}) + (c+\bar{c})(\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)})\geq J(\mathbf{U}) + c(\|\mathbf{U}^*_{\bar{c}}\|_*-\|\mathbf{U}^*_{\bar{c}}\|_{(r)})+\bar{c}(\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)}).\] By the optimality of $\mathbf{U}^*_{\bar{c}}$ for \eqref{eq_10} with penalty parameter $\bar{c}$, we deduce that \[J(\mathbf{U}) + (c+\bar{c})(\|\mathbf{U}\|_*-\|\mathbf{U}\|_{(r)})\geq J(\mathbf{U}^*_{\bar{c}}) + (c+\bar{c})(\|\mathbf{U}^*_{\bar{c}}\|_*-\|\mathbf{U}^*_{\bar{c}}\|_{(r)})\] holds, which implies that $\mathbf{U}^*_{\bar{c}}$ solves $\min_{\mathbf{U} \in \mathcal{S}^d_+} J(\mathbf{U}) + (c+\bar{c})(\| \mathbf{U}\|_*-\|\mathbf{U}\|_{(r)})$. Since $c>0$ is arbitrary, this completes the proof. \end{proof} \begin{remark} If there exists $\hat{c}>0$ such that some global optimal solution $\mathbf{U}^*_{\hat{c}}\in \mathcal{S}^d_+$ of \eqref{eq_8} with penalty parameter $\hat{c}$ satisfies $\operatorname{rank}(\mathbf{U}^*_{\hat{c}}) = \hat{r}\leq r$, then for any $c>\hat{c}$, $\operatorname{rank}(\mathbf{U}^*_{c})=\hat{r}$ holds, where $\mathbf{U}^*_{c}$ is a global optimal solution of \eqref{eq_8} with penalty parameter $c$. Thus, based on Proposition \ref{prop_3} and Proposition \ref{prop_1}, we use a strategy of gradually increasing the penalty parameter to obtain an appropriate penalty parameter. The details for adjusting the penalty parameter are given in Algorithm \ref{alg_penalty}.
\end{remark} \begin{algorithm}[htb] \caption{Framework of adjusting the penalty parameter for solving the RCLSSDP \eqref{eq_7}} \label{alg_penalty} \begin{algorithmic} \STATE \textbf{Step 0}. Choose $c_0>0$, give $\rho>1$, set $i = 0$. \STATE \textbf{Step 1}. Solve the problem \eqref{eq_8} with penalty parameter $c_i$ by s-iPDCA \ref{alg_s_iPDCA}, i.e., \[\mathbf{U}^*_{c_i}=\text{arg}\min J_{c_i}(\mathbf{U})+\delta_{\mathcal{S}_+^d}(\mathbf{U})\] \STATE \textbf{Step 2}. If $\operatorname{rank}(\mathbf{U}^*_{c_i})\leq r$ holds, stop and return $\mathbf{U}^*_{c_i}$; otherwise, set $c_{i+1} = \rho c_i$, $i= i+1$, and go to $\textbf{Step 1}$. \end{algorithmic} \end{algorithm} \section{Inexact proximal DC algorithm with sieving strategy for solving the DCLSSDP}\label{sec:3} \subsection{Algorithm framework for solving the DCLSSDP}\label{sec:3_1} Obviously, for a fixed penalty parameter $c$, the penalized problem \eqref{eq_8} can be formulated in the following standard DC form \begin{equation}\label{eq_11} \min_{\mathbf{U} \in \mathcal{S}^d_+} J_c(\mathbf{U}) = \underbrace{\frac{1}{n}\|\mathcal{A}\left(\mathbf{U}\right)- \boldsymbol{b}\|^{2}+c\langle \mathbf{U},\mathbf{I}\rangle}_{f_1(\mathbf{U})}-\underbrace{c\|\mathbf{U}\|_{(r)}}_{f_2(\mathbf{U})}. \end{equation} Let us first briefly review the classical proximal DC algorithm (PDCA) for solving the DC problem \eqref{eq_11}; the details are shown in Algorithm \ref{alg_PDCA}. \begin{algorithm}[htb] \caption{Proximal DC algorithm for solving the DCLSSDP \eqref{eq_11}} \label{alg_PDCA} \begin{algorithmic} \STATE \textbf{Step 0}. Given $c>0$, initialize $\mathbf{U}^{0} \in \mathcal{S}_+^d$, $\mathbf{W}^{0}\in \partial f_2(\mathbf{U}^0)$, tolerance error $\varepsilon\geq 0$, proximal parameter $\alpha>0$, $k = 0$. \STATE \textbf{Step 1}.
\begin{equation}\label{eq_12} \mathbf{U}^{k+1} = \text{arg}\min f_1\left(\mathbf{U}\right)- \langle \mathbf{U}, \mathbf{W}^{k}\rangle +\delta_{\mathcal{S}^d_+}\left(\mathbf{U}\right) + \frac{\alpha}{2}\|\mathbf{U}-\mathbf{U}^{k}\|_F^2, \end{equation} \STATE \textbf{Step 2}. If $\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F \leq \varepsilon$, stop and return $\mathbf{U}^{k+1}$. \STATE \textbf{Step 3}. Choose $\mathbf{W}^{k+1} \in\partial f_2(\mathbf{U}^{k+1})$, set $k\leftarrow k+1$, go to $\textbf{Step 1}$. \end{algorithmic} \end{algorithm} As is well known, it is computationally expensive and time-consuming to solve the subproblem \eqref{eq_12} exactly at each iteration of the PDCA \ref{alg_PDCA}. Therefore, we need to solve the strongly convex SDP \eqref{eq_12} inexactly. How to design an inexact proximal DC algorithm that can guarantee both theoretical convergence and good numerical performance for solving large scale DC problems remains an open question. Some researchers proposed to solve the subproblem inexactly, i.e., the inexact solution of \eqref{eq_12} satisfies the following KKT condition: \begin{equation} \label{eq_13} \mathbf{0} \in \nabla f_1\left(\mathbf{U}^{k+1}\right) -\mathbf{W}^{k}+\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{k+1}\right)+\alpha\left(\mathbf{U}^{k+1}-\mathbf{U}^{k}\right)-\mathbf{\Delta}^{k+1}, \end{equation} where $\mathbf{\Delta}^{k+1}$ is the inexact term. Equivalently, $\mathbf{U}^{k+1}$ is the optimal solution of the following problem: \begin{equation}\label{eq_14} \min f_1\left(\mathbf{U}\right)- \langle \mathbf{U}, \mathbf{W}^{k}\rangle +\delta_{\mathcal{S}^d_+}\left(\mathbf{U}\right) + \frac{\alpha}{2}\|\mathbf{U}-\mathbf{U}^{k}\|_F^2-\langle \mathbf{\Delta}^{k+1}, \mathbf{U}\rangle. \end{equation} If $\|\mathbf{\Delta}^{k+1}\|_F\leq \epsilon_{k+1}$, the trial point $\mathbf{U}^{k+1}$ is an $\epsilon_{k+1}$-inexact solution of the strongly convex subproblem \eqref{eq_12}.
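The role of the inexact term $\mathbf{\Delta}^{k+1}$ can be seen on a tiny unconstrained analogue of the subproblem. In the sketch below (our own toy stand-in: a simple quadratic replaces $f_1$ and the PSD constraint is dropped, so everything has a closed form), the gradient residual at an inexact trial point is exactly the linear perturbation that makes the trial point an exact minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
B = rng.standard_normal((d, d)); B = (B + B.T) / 2   # f1(U) = 0.5||U - B||_F^2
W = rng.standard_normal((d, d)); W = (W + W.T) / 2   # current subgradient W^k
U_prev = np.zeros((d, d))                            # proximal point U^k
alpha = 2.0

def grad(U):
    """Gradient of f1(U) - <U, W> + (alpha/2)||U - U_prev||_F^2."""
    return (U - B) - W + alpha * (U - U_prev)

U_exact = (B + W + alpha * U_prev) / (1 + alpha)  # exact subproblem minimizer
U_trial = U_exact + 1e-3 * np.eye(d)              # an inexact trial point

# Delta is the gradient residual at the trial point; the trial point then
# exactly minimizes the Delta-perturbed objective, whose minimizer is:
Delta = grad(U_trial)
U_pert = (B + W + Delta + alpha * U_prev) / (1 + alpha)
print(np.linalg.norm(U_pert - U_trial))   # 0 up to rounding
```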
In traditional inexact algorithms \cite{Ref_jiang2012An,Ref_sun2015An}, the sequence $\{\epsilon_{k+1}\}$ is assumed to be summable, i.e., the series $\Sigma_{k=0}^{\infty}\epsilon_{k+1}$ converges. Although this kind of inexact strategy is simple, it cannot guarantee the convergence of the corresponding DC algorithm. To ensure convergence, Wang et al. \cite{Ref_wang2019task} assumed that the condition $\|\mathbf{\Delta}^{k+1}\|_F < \frac{\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F$ holds. However, this condition may be unreachable in numerical experiments because the inexact term $\mathbf{\Delta}^{k+1}$ depends on $\mathbf{U}^{k+1}$ implicitly. Recently, Souza et al. \cite{Ref_souza2016global} proposed an inexact proximal DC algorithm (iPDCA); in their algorithm, the inexact solution $\mathbf{U}^{k+1}$ is assumed to satisfy \begin{equation}\nonumber f_1\left(\mathbf{U}^{k}\right)-f_1\left(\mathbf{U}^{k+1}\right)-\left\langle \mathbf{W}^{k}, \mathbf{U}^{k}-\mathbf{U}^{k+1}\right\rangle \geq \frac{(1-\sigma)\alpha}{2}\left\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\right\|_F^{2} \end{equation} and \begin{equation}\nonumber \left\|\mathbf{W}^{k+1}-\mathbf{W}^{k}\right\|_F \leq \theta\left\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\right\|_F \end{equation} with $\mathbf{W}^{k+1}\in\partial f_2(\mathbf{U}^{k+1})$ and $\|\mathbf{\Delta}^{k+1}\|_F<\frac{\sigma\alpha}{2}\left\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\right\|_F$. Obviously, these conditions are also difficult to implement in practical numerical experiments.\par Different from the above inexact strategies, apart from $\lim_{k\rightarrow\infty}\epsilon_{k+1}=0$, no further assumption is made on $\{\epsilon_{k+1}\}$ and $\{\mathbf{\Delta}_{k+1}\}$ in our algorithm. As a consequence, the sequence $\left\{J_c(\mathbf{U}^k)\right\}$ generated by the new iPDCA with $\{\epsilon_{k+1}\}$ and $\{\mathbf{\Delta}_{k+1}\}$ may not be decreasing.
To address this issue, we employ a sieving strategy in our proposed iPDCA (s-iPDCA). Specifically, we choose a stability center $\mathbf{U}^{k+1}$ at each iteration of s-iPDCA, then choose $\mathbf{W}^{k+1} \in\partial f_2(\mathbf{U}^{k+1})$, and $\mathbf{U}^{k+1}$ is set as the proximal point for the next iteration. Once a new trial point $\mathbf{V}^{k+1}$ satisfying the $\epsilon_{k+1}$-inexact condition is computed, the following rule is used to update $\mathbf{U}^{k+1}$: if \begin{equation}\label{eq_15} \|\mathbf{\Delta}^{k+1}\|_F < \frac{(1-\kappa)\alpha}{2}\|\mathbf{V}^{k+1}-\mathbf{U}^{k}\|_F, \end{equation} then we say a serious step is performed and set $\mathbf{U}^{k+1}:= \mathbf{V}^{k+1}$; otherwise, we say a null step is performed and set $\mathbf{U}^{k+1} := \mathbf{U}^{k}$, which means that the stability center remains unchanged. In \eqref{eq_15}, $\kappa\in \left(0, 1\right)$ is a tuning parameter that balances the efficiency of the s-iPDCA against the inexactness of the solution of \eqref{eq_14}. It should be noted that, as shown in s-iPDCA \ref{alg_s_iPDCA}, when the test \eqref{eq_15} does not hold, only the iteration counter $k$ and the inexact error bound $\epsilon_{k+1}$ are changed. \par \begin{algorithm}[htb] \caption{Inexact proximal DC algorithm with sieving strategy (s-iPDCA) for solving the DCLSSDP \eqref{eq_11}} \label{alg_s_iPDCA} \begin{algorithmic} \STATE \textbf{Step 0}. Given $c>0$, initialize $\mathbf{U}^{0} = \mathbf{V}^{0}\in \mathcal{S}_+^d$, $\mathbf{W}^{0}\in \partial f_2(\mathbf{U}^0)$, tolerance error $\varepsilon\geq 0$, a nonnegative monotonically decreasing sequence $\{\epsilon_{k+1}\}$, proximal parameter $\alpha>0$, $k = 0$. \STATE \textbf{Step 1}.
Inexactly solve \eqref{eq_12} so that $\|\mathbf{\Delta}^{k+1}\|_F\leq \epsilon_{k+1}$ holds, i.e., \begin{equation}\label{eq_16} \mathbf{V}^{k+1} = \text{arg}\min f_1\left(\mathbf{U}\right)- \langle \mathbf{U}, \mathbf{W}^{k}\rangle +\delta_{\mathcal{S}^d_+}\left(\mathbf{U}\right) + \frac{\alpha}{2}\|\mathbf{U}-\mathbf{U}^{k}\|_F^2-\langle \mathbf{\Delta}^{k+1}, \mathbf{U}\rangle, \end{equation} \STATE \textbf{Step 2}. If $\|\mathbf{V}^{k+1}-\mathbf{U}^{k}\|_F \leq \varepsilon$, stop and return $\mathbf{V}^{k+1}$. \STATE \textbf{Step 3}. If \eqref{eq_15} holds, set $\mathbf{U}^{k+1} := \mathbf{V}^{k+1}$ and choose $\mathbf{W}^{k+1} \in\partial f_2(\mathbf{U}^{k+1})$; otherwise, set $\mathbf{U}^{k+1} := \mathbf{U}^{k}$, $\mathbf{W}^{k+1} :=\mathbf{W}^{k}$. Set $k\leftarrow k+1$, go to $\textbf{Step 1}$. \end{algorithmic} \end{algorithm} In this paper, we choose $\mathbf{W}^{k+1} \in\partial f_2(\mathbf{U}^{k+1})$ as follows: let $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_d$ be the eigenvalues of $\mathbf{U}^{k+1}$ arranged in nonincreasing order; then $\mathbf{U}^{k+1}$ has the spectral decomposition $\mathbf{U}^{k+1} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\top}$, where $\mathbf{\Lambda}$ is the diagonal matrix whose $i^{th}$ diagonal entry is $\lambda_i$, and the $i^{th}$ column of $\mathbf{Q}$, denoted $\boldsymbol{q}_i$, is the eigenvector of $\mathbf{U}^{k+1}$ corresponding to the eigenvalue $\lambda_i$. We then set $\mathbf{W}^{k+1}=c\Sigma_{i = 1}^r \boldsymbol{q}_i\boldsymbol{q}_i^{\top}$.\par Next, we consider solving the subproblem \eqref{eq_16} via its dual problem.
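The subgradient choice $\mathbf{W}^{k+1}=c\sum_{i=1}^r \boldsymbol{q}_i\boldsymbol{q}_i^{\top}$ described above can be sketched numerically as follows (Python; the function name is ours, and the sanity check uses the identity $\langle \mathbf{W},\mathbf{U}\rangle = c\sum_{i=1}^r \lambda_i$ for a PSD matrix $\mathbf{U}$):

```python
import numpy as np

def kyfan_subgrad(U, r, c):
    """Return W = c * sum_{i=1}^r q_i q_i^T, where q_i are eigenvectors of
    the symmetric matrix U for its r largest eigenvalues."""
    lam, Q = np.linalg.eigh(U)   # eigh returns eigenvalues in ascending order
    Qr = Q[:, ::-1][:, :r]       # eigenvectors of the r largest eigenvalues
    return c * Qr @ Qr.T

# Sanity check: for PSD U, <W, U> = c * (sum of the r largest eigenvalues),
# i.e. c times the Ky-Fan r-norm of U.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
U = M @ M.T                      # a PSD matrix
c, r = 2.0, 2
W = kyfan_subgrad(U, r, c)
lam = np.sort(np.linalg.eigvalsh(U))[::-1]
print(np.trace(W @ U), c * lam[:r].sum())   # the two values agree
```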
If we ignore the inexact term $\mathbf{\Delta}^{k+1}$ of the subproblem \eqref{eq_16}, then its dual problem can be equivalently formulated as the following minimization problem: \begin{equation} \label{eq_17_1} \min \frac{n}{4} \|\boldsymbol{z}\|^{2} + \langle \boldsymbol{z},\boldsymbol{b}\rangle+\delta_{\mathcal{S}^d_+}\left(\mathbf{Y}\right)+\frac{1}{2\alpha}\|\mathcal{A}^*\left(\boldsymbol{z}\right)-\mathbf{Y}-\mathbf{\Phi}^{k}\|_F^2, \end{equation} where $\mathbf{\Phi}^{k} = \mathbf{W}^{k}-c\mathbf{I} + \alpha \mathbf{U}^{k}$. The KKT conditions for \eqref{eq_17_1} are given as \begin{align} \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{Y}+\frac{1}{\alpha}\left(\mathcal{A}^*\left(\boldsymbol{z}\right)-\mathbf{Y}-\mathbf{\Phi}^{k}\right)\right)&=\mathbf{Y}\label{eq_18}\\ \frac{n}{2}\boldsymbol{z} + \boldsymbol{b}+\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}\right)-\mathbf{Y}-\mathbf{\Phi}^{k}\right)&= \boldsymbol{0} \label{eq_19} \end{align} This problem belongs to a general class of unconstrained, multi-block convex optimization problems with a coupled objective function. Thus, it can be solved efficiently by a two-block accelerated block coordinate descent (ABCD) method. In order to solve the convex problem \eqref{eq_17_1} inexactly, we employ an efficient ABCD algorithm, shown in Algorithm \ref{alg_abcd}. \par \begin{algorithm}[htb] \caption{Accelerated block coordinate descent algorithm (ABCD)} \label{alg_abcd} \begin{algorithmic} \STATE \textbf{Step 0}. Initialize $\tilde{\mathbf{Y}}^{1}\in \mathcal{S}_+^d$, $\boldsymbol{z}^0$, inexact error bound $\zeta_k$ given in \eqref{eq_34_1}, acceleration factor $t_1 =1$, $j\leftarrow 1$. \STATE \textbf{Step 1}.
Update $\boldsymbol{z}$: \begin{equation}\label{eq_20} \boldsymbol{z}^{j}= \text {arg}\min\frac{n}{4} \|\boldsymbol{z}\|^{2} + \langle \boldsymbol{z},\boldsymbol{b}\rangle+\frac{1}{2\alpha}\|\mathcal{A}^*\left(\boldsymbol{z}\right)-\tilde{\mathbf{Y}}^j-\mathbf{\Phi}^{k}\|_F^2 \end{equation} \STATE \textbf{Step 2}. Update $\mathbf{Y}$: \begin{equation}\label{eq_21} \mathbf{Y}^{j} = \text {arg}\min\delta_{\mathcal{S}^d_+}\left(\mathbf{Y}\right)+\frac{1}{2\alpha}\|\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}-\mathbf{\Phi}^{k}\|_F^2 \end{equation} \STATE \textbf{Step 3}. If $\left\|\frac{n}{2}\boldsymbol{z}^j + \boldsymbol{b}+\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right)\right\| \leq \zeta_k$, stop.\par \STATE \textbf{Step 4}. Compute $t_{j+1} = \frac{1+\sqrt{1+4t_j^2}}{2}$, $\beta_j = \frac{t_j-1}{t_{j+1}}$, $\tilde{\mathbf{Y}}^{j+1}=\mathbf{Y}^j+\beta_j\left(\mathbf{Y}^j-\mathbf{Y}^{j-1}\right)$; \par set $j\leftarrow j+1$, go to $\textbf{Step 1}$. \end{algorithmic} \end{algorithm} In Algorithm \ref{alg_abcd}, the solution of the $\boldsymbol{z}$-subproblem \eqref{eq_20} is obtained by solving the following linear system: \begin{equation} \label{eq_22_1} \frac{n}{2}\boldsymbol{z} + \boldsymbol{b}+\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}\right)-\tilde{\mathbf{Y}}^j-\mathbf{\Phi}^{k}\right)= \boldsymbol{0}. \end{equation} By using the preconditioned conjugate gradient (PCG) method, we can solve the above linear system efficiently, especially when its scale is very large.
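A matrix-free sketch of this step is given below. It uses plain conjugate gradient in place of PCG, a small random $p\times d^2$ matrix standing in for $\mathcal{A}$, and a random symmetric matrix standing in for $\tilde{\mathbf{Y}}^j+\mathbf{\Phi}^{k}$; the system matrix $\frac{n}{2}\mathbf{I}+\frac{1}{\alpha}\mathbf{A}\mathbf{A}^{\top}$ is only ever applied to vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
p, d, n, alpha = 8, 5, 10, 1.0
A = rng.standard_normal((p, d * d))   # stand-in for the operator A
b = rng.standard_normal(p)
R = rng.standard_normal((d, d))
rhs_mat = (R + R.T).ravel()           # stand-in for vec(Y~^j + Phi^k)

def M_apply(z):
    """Apply z -> (n/2) z + (1/alpha) A A^T z, the SPD system matrix."""
    return (n / 2) * z + A @ (A.T @ z) / alpha

rhs = -b + A @ rhs_mat / alpha        # right-hand side of the linear system

def cg(apply_M, rhs, tol=1e-10, maxit=500):
    """Plain conjugate gradient for the SPD system M z = rhs."""
    z = np.zeros_like(rhs)
    r = rhs - apply_M(z)
    q = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Mq = apply_M(q)
        a = rs / (q @ Mq)
        z += a * q
        r -= a * Mq
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        q = r + (rs_new / rs) * q
        rs = rs_new
    return z

z = cg(M_apply, rhs)
print(np.linalg.norm(M_apply(z) - rhs))   # small residual
```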
For the $\mathbf{Y}$-subproblem, fortunately, there is a closed-form solution, given as \begin{equation}\label{eq_23_1} \begin{array}{ll} \mathbf{Y}^{j} &= \text{arg}\min\delta_{\mathcal{S}^d_+}\left(\mathbf{Y}\right)+\frac{1}{2\alpha}\|\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}-\mathbf{\Phi}^{k}\|_F^2\\ &= \operatorname{prox}_{\mathcal{S}_+^d}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}\right)= \mathbf{\Pi}_{\mathcal{S}_+^d}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}\right). \end{array} \end{equation} Then, from the relation between the primal and dual variables, we can obtain a feasible solution $\mathbf{U}^{(j)}=\frac{\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)}{\alpha}$ of \eqref{eq_16} at each iteration of ABCD \ref{alg_abcd}. As is well known, the residual of the KKT equations is a good choice for the termination condition of Algorithm \ref{alg_abcd}. From \eqref{eq_23_1}, we have \begin{equation}\label{eq_24_1} \begin{array}{lll} &\mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{Y}^j+\frac{1}{\alpha}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right)\right)\\ &= \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{\Pi}_{\mathcal{S}_+^d}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}\right)+\frac{1}{\alpha}\mathbf{\Pi}_{\mathcal{S}_-^d}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}\right)\right) = \mathbf{Y}^j. \end{array} \end{equation} Thus, \begin{equation}\label{eq_25_1} \mathbf{Y}^j = \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{Y}^j+\frac{1}{\alpha}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right)\right) \end{equation} holds at each iteration of Algorithm \ref{alg_abcd}. This means that the KKT condition \eqref{eq_18} holds exactly at each iteration of ABCD \ref{alg_abcd}.
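This observation is easy to verify numerically: with $\mathbf{Y} = \mathbf{\Pi}_{\mathcal{S}^d_+}(\mathbf{X})$, the matrix $\mathbf{Y}+\frac{1}{\alpha}(\mathbf{X}-\mathbf{Y})$ has the same eigenvectors as $\mathbf{X}$, with the positive eigenvalues unchanged and the negative ones scaled by $1/\alpha$, so its PSD projection returns $\mathbf{Y}$. A small Python check (with a random symmetric matrix standing in for $\mathcal{A}^*(\boldsymbol{z}^j)-\mathbf{\Phi}^{k}$):

```python
import numpy as np

def proj_psd(X):
    """Projection onto the PSD cone via spectral decomposition."""
    lam, Q = np.linalg.eigh(X)
    return (Q * np.maximum(lam, 0)) @ Q.T   # scale columns of Q by max(lam, 0)

rng = np.random.default_rng(4)
d, alpha = 6, 2.0
M = rng.standard_normal((d, d))
X = (M + M.T) / 2            # stand-in for A*(z^j) - Phi^k
Y = proj_psd(X)              # Y^j as in the closed-form solution

lhs = proj_psd(Y + (X - Y) / alpha)   # left-hand side of the KKT equation (18)
print(np.linalg.norm(lhs - Y))        # ~0: (18) holds automatically
```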
Based on this observation, we no longer need to check the equation \eqref{eq_18} involving the positive semidefinite cone projection, which saves substantial computation when the matrix dimension $d$ is large ($d>500$). Let \[\gamma^j :=\frac{n}{2}\boldsymbol{z}^j + \boldsymbol{b}+\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right);\] then Algorithm \ref{alg_abcd} stops if $\|\gamma^j\|\leq\zeta_k$ holds. \subsection{Algorithm details and low rank structure utilization}\label{sec:3_2} Although the linear operator $\mathcal{A}$ can be expressed as a matrix-vector product, i.e., $\mathcal{A}(\mathbf{U}) = \mathbf{A}\operatorname{vec}(\mathbf{U})$, it is impractical to store the matrix $\mathbf{A}$ of size $p\times d^2$ when the size of problem \eqref{eq_11} is large. Thus, we need to form the linear operator $\mathcal{A}$ and its conjugate $\mathcal{A}^*$ at each iteration of the ABCD \ref{alg_abcd} and the s-iPDCA \ref{alg_s_iPDCA}. Therefore, the major computation of the s-iPDCA \ref{alg_s_iPDCA} lies in the positive semidefinite cone projection for computing $\mathbf{Y}^j$ in the ABCD \ref{alg_abcd} and in the formation of the operator $\mathcal{A}$ and its conjugate $\mathcal{A}^*$.\par We further reduce the computation and storage of the ABCD \ref{alg_abcd} and the s-iPDCA \ref{alg_s_iPDCA} for solving the DCLSSDP \eqref{eq_11} by exploiting the low rank structure of the solution $\mathbf{U}^k$. Let $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_d$ be the eigenvalues of $\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}$ arranged in nonincreasing order. Denote $\mu:=\left\{i\,|\,\lambda_i>0,i = 1,...,d\right\}$ and $\nu:=\left\{i\,|\,\lambda_i<0,i = 1,...,d\right\}$.
Then $\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}$ has the following spectral decomposition \begin{equation}\nonumber \mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\top},\quad \mathbf{\Lambda} = \left[\begin{array}{ccc} \mathbf{\Lambda_{\mu}} & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & \mathbf{\Lambda_{\nu}} \end{array}\right] \end{equation} Here, $\mathbf{\Lambda}$ is the diagonal matrix whose $i^{th}$ diagonal entry is $\lambda_i$, and the $i^{th}$ column of $\mathbf{Q}$ is the eigenvector of $\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}$ corresponding to the eigenvalue $\lambda_i$. Thus, we have \begin{equation}\nonumber \mathbf{Y}^j = \operatorname{\Pi}_{\mathcal{S}_+^d}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}\right)=\mathbf{Q}_{\mu}\mathbf{\Lambda}_{\mu}\mathbf{Q}_{\mu}^{\top} \end{equation} From the relation between the primal and dual variables, i.e., $\mathbf{U}^{(j)}=\frac{\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)}{\alpha}$, we have \begin{equation}\nonumber \mathbf{U}^{(j)} = -\frac{1}{\alpha}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}-\mathbf{Y}^j\right)=-\frac{1}{\alpha}\mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}. \end{equation} From the formulations of $\mathbf{Y}^j$ and $\mathbf{U}^{(j)}$, we know \[\operatorname{rank}(\mathbf{Y}^j)+\operatorname{rank}(\mathbf{U}^{(j)})= |\mu|+|\nu|\leq d.\] Then, if $|\mu|< \frac{d}{2}$ holds, we update $\mathbf{Y}^j$ by $\mathbf{Y}^j = \mathbf{Q}_{\mu}\mathbf{\Lambda}_{\mu}\mathbf{Q}_{\mu}^{\top}$, and otherwise by $\mathbf{Y}^j = \mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}- \mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}$.
In fact, only the second case, where $|\nu|< \frac{d}{2}$, occurs in our Algorithm \ref{alg_abcd}, so we store $\mathbf{Q}_{\nu}\sqrt{-\mathbf{\Lambda}_{\nu}}$ instead of $\mathbf{Q}_{\mu}\sqrt{\mathbf{\Lambda}_{\mu}}$ or $\mathbf{Q}_{\mu}\mathbf{\Lambda}_{\mu}\mathbf{Q}_{\mu}^{\top}$ to reduce the storage cost. Let $\mathbf{V} := \mathbf{Q}_{\nu}\sqrt{-\mathbf{\Lambda}_{\nu}}\in\mathcal{R}^{d\times |\nu|}$. Then we can formulate the operator $\mathcal{A}\left(\mathbf{V}\mathbf{V}^{\top}\right)$ as \begin{equation}\nonumber \mathcal{A}\left(\mathbf{V}\mathbf{V}^{\top}\right) = \left[\langle \mathbf{A}_1,\mathbf{V}\mathbf{V}^{\top}\rangle,\langle \mathbf{A}_2, \mathbf{V}\mathbf{V}^{\top}\rangle,...,\langle \mathbf{A}_p, \mathbf{V}\mathbf{V}^{\top}\rangle\right]^{\top}, \end{equation} where $\langle \mathbf{A}_i,\mathbf{V}\mathbf{V}^{\top}\rangle$ can be computed as \begin{equation}\nonumber \begin{array}{ll} \langle \mathbf{A}_i,\mathbf{V}\mathbf{V}^{\top}\rangle = \langle \tau_{(i)}\tau_{(i)}^{\top},\mathbf{V}\mathbf{V}^{\top}\rangle =\tau_{(i)}^{\top}\mathbf{V}\mathbf{V}^{\top}\tau_{(i)}=\left(\mathbf{V}^{\top}\tau_{(i)}\right)^{\top}\left(\mathbf{V}^{\top}\tau_{(i)}\right). \end{array} \end{equation} Then, the amount of computation for evaluating the operator $\mathcal{A}$ on a rank-$|\nu|$ matrix is reduced from $O\left(p(d^2+d)\right)$ to $O\left((|\nu| d+|\nu|)p\right)$. This strategy significantly reduces the computation when $|\nu|\ll d$.
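The factored evaluation of $\mathcal{A}$ on a low-rank matrix can be sketched as follows (Python; random stand-ins for the vectors $\tau_{(i)}$ and the factor $\mathbf{V}$), comparing the $O(|\nu|d)$-per-entry factored form against the naive evaluation that builds $\mathbf{V}\mathbf{V}^{\top}$ explicitly:

```python
import numpy as np

rng = np.random.default_rng(5)
d, p, k = 50, 40, 3
taus = rng.standard_normal((p, d))   # the vectors tau_(i), i = 1..p
V = rng.standard_normal((d, k))      # low-rank factor, k = |nu| << d

# Factored form: <A_i, V V^T> = ||V^T tau_i||^2, O(k d) per entry
AVVt_fast = np.sum((taus @ V) ** 2, axis=1)

# Naive form: build V V^T explicitly, O(d^2) per entry
VVt = V @ V.T
AVVt_slow = np.array([t @ VVt @ t for t in taus])

print(np.max(np.abs(AVVt_fast - AVVt_slow)))   # agree up to rounding
```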
\par Based on the above analysis, we form the linear operator $\mathcal{A}(\mathbf{Y}^j)$ as \begin{equation}\nonumber \begin{array}{ll} \mathcal{A}\left(\mathbf{Y}^j\right) &= \mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{\Phi}^{k}- \mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}\right)\\ &=\mathcal{A}\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathcal{A}\left(\mathbf{\Phi}^{k}\right)- \mathcal{A}\left(\mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}\right) \end{array} \end{equation} Here, the term $\mathcal{A}\mathcal{A}^*\left(\boldsymbol{z}^j\right)$ can be computed by a matrix-vector product once the matrix $\mathbf{A}\mathbf{A}^{\top}\in \mathcal{S}_+^p$ is stored. For the term $\mathcal{A}\left(\mathbf{\Phi}^{k}\right)$, by the definition of $\mathbf{\Phi}^{k}$, we compute it as \[\mathcal{A}\left(\mathbf{\Phi}^{k}\right)= \mathcal{A}\left(\mathbf{W}^{k}\right)-\mathcal{A}\left(c\mathbf{I}\right) + \alpha \mathcal{A}\left(\mathbf{U}^{k}\right).\] We note that $\mathbf{W}^{k}$ and $\mathbf{U}^{k}$ are also low rank matrices, and these matrices can easily be reformulated in factored form. In addition, the term $\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right)$ in the KKT equation \eqref{eq_19} can be reformulated using \[\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right) = \mathcal{A}\left(\mathbf{Q}_{\nu}\mathbf{\Lambda}_{\nu}\mathbf{Q}_{\nu}^{\top}\right),\] so the KKT equation \eqref{eq_19} can be checked by using the tricks mentioned above.\par \subsection{A reliable inexact strategy} \label{sec:3_3} To derive the inexact error bound $\zeta_k$ for the ABCD algorithm \ref{alg_abcd}, we need to find the relation between the inexact term $\mathbf{\Delta}^{k+1}$ in the primal problem \eqref{eq_16} and the inexact term $\gamma^j$ in the minimization problem \eqref{eq_17_1}.
Firstly, we show how to obtain the inexact term $\mathbf{\Delta}^{k+1}$ and the corresponding inexact solution $\mathbf{V}^{k+1}$ from the KKT condition of \eqref{eq_16}. The inexact solution $\mathbf{V}^{k+1}$ at the $k^{th}$ iteration of s-iPDCA satisfies the inexact optimality condition \eqref{eq_13}, expressed as \begin{equation} \label{eq_26_1} 0 \in \frac{2}{n} \mathcal{A}^*\left(\mathcal{A}\left(\mathbf{V}^{k+1}\right) - \boldsymbol{b}\right) -\mathbf{\Phi}^{k}+\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{V}^{k+1}\right)+\alpha\mathbf{V}^{k+1}-\mathbf{\Delta}^{k+1}, \end{equation} in which $\|\mathbf{\Delta}^{k+1}\|_F\leq\epsilon_{k+1}$. It is well known that \eqref{eq_26_1} is equivalent to \begin{equation} \label{eq_27_1} \mathbf{V}^{k+1} = \operatorname{prox}_{\delta_{\mathcal{S}^d_+}}\left(\left(1-\alpha\right)\mathbf{V}^{k+1}-\frac{2}{n} \mathcal{A}^*\left(\mathcal{A}\left(\mathbf{V}^{k+1}\right) - \boldsymbol{b}\right) +\mathbf{\Phi}^{k}+\mathbf{\Delta}^{k+1}\right). \end{equation} Here, $\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}$ denotes the proximal mapping of $\delta_{\mathcal{S}^d_+}$, which is just the projection onto the positive semidefinite matrix cone, so that \begin{equation}\label{eq_28_1} \mathbf{V}^{k+1} = \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\left(1-\alpha\right)\mathbf{V}^{k+1}-\frac{2}{n} \mathcal{A}^*\left(\mathcal{A}\left(\mathbf{V}^{k+1}\right) - \boldsymbol{b}\right) +\mathbf{\Phi}^{k}+\mathbf{\Delta}^{k+1}\right) \end{equation} To obtain the above inexact solution $\mathbf{V}^{k+1}$ and inexact term $\mathbf{\Delta}^{k+1}$, we set \begin{equation}\label{eq_29_1} \tilde{\mathbf{U}}^{(j)} = \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\left(1-\alpha\right)\mathbf{U}^{(j)}-\frac{2}{n} \mathcal{A}^*\left(\mathcal{A}\left(\mathbf{U}^{(j)}\right) - \boldsymbol{b}\right) +\mathbf{\Phi}^{k}\right), \end{equation} where $\mathbf{U}^{(j)}=\frac{\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)}{\alpha}$ is a feasible
solution of \eqref{eq_16}. Then we know that $\mathbf{V}^{k+1} = \tilde{\mathbf{U}}^{(j)}$ is exactly the inexact solution when \[\left\|(\frac{2}{n} \mathcal{A}^*\mathcal{A}+(\alpha-1)\mathcal{I})(\tilde{\mathbf{U}}^{(j)}-\mathbf{U}^{(j)})\right\|\leq \epsilon_{k+1}\] holds, and \[\mathbf{\Delta}^{k+1} = \left(\frac{2}{n} \mathcal{A}^*\mathcal{A}+(\alpha-1)\mathcal{I}\right)\left(\tilde{\mathbf{U}}^{(j)}-\mathbf{U}^{(j)}\right)\] is the inexact term corresponding to $\mathbf{V}^{k+1}$.\par Secondly, we show the relation between $\mathbf{\Delta}^{k+1}$ and $\gamma^j$. As defined above, the solution $\left(\boldsymbol{z}^j,\mathbf{Y}^j\right)$ of ABCD \ref{alg_abcd} at the $j^{th}$ iteration satisfies the following equation: \begin{equation} \label{eq_30_1} \gamma^j=\frac{n}{2}\boldsymbol{z}^j + \boldsymbol{b}+\frac{1}{\alpha}\mathcal{A}\left(\mathcal{A}^*\left(\boldsymbol{z}^j\right)-\mathbf{Y}^j-\mathbf{\Phi}^{k}\right). \end{equation} From $\mathbf{U}^{(j)}=\frac{\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)}{\alpha}$, we have \begin{equation}\label{eq_31_1} -\frac{2}{n} \mathcal{A}^*\left(\mathcal{A}\left(\mathbf{U}^{(j)}\right) -\boldsymbol{b}\right)= \mathcal{A}^*\left(\frac{2}{n}\gamma^j-\boldsymbol{z}^j\right). \end{equation} Furthermore, the projection $\tilde{\mathbf{U}}^{(j)}$ can be reformulated as \begin{equation}\nonumber \begin{array}{ll} \tilde{\mathbf{U}}^{(j)} &= \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{U}^{(j)}+\mathcal{A}^*\left(\frac{2}{n}\gamma^j-\boldsymbol{z}^j\right) +\mathbf{\Phi}^{k}-\left(\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)\right)\right)\\ &= \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{U}^{(j)}+\frac{2}{n}\mathcal{A}^*\left(\gamma^j\right) -\mathbf{Y}^j\right). \end{array} \end{equation} Here, the first equality follows from \eqref{eq_31_1} and the relation $\alpha\mathbf{U}^{(j)}=\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)$.
Clearly, \[\tilde{\mathbf{U}}^{(j)}= \mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{U}^{(j)} -\mathbf{Y}^j\right) = \mathbf{U}^{(j)}\] holds when $\gamma^j = \boldsymbol{0}$, which is consistent with the intuition that solving the dual problem exactly leads to an exact solution of the primal problem $(\mathbf{\Delta}^{(j)} = \mathbf{0})$. When $\gamma^j \neq \boldsymbol{0}$, the inexact term $\mathbf{\Delta}^{(j)}$ can be written as \begin{equation}\label{eq_32_1} \begin{array}{lll} \mathbf{\Delta}^{(j)} &= \left(\frac{2}{n} \mathcal{A}^*\mathcal{A}+(\alpha-1)\mathcal{I}\right)\left(\tilde{\mathbf{U}}^{(j)}-\mathbf{U}^{(j)}\right)\\ &= \left(\frac{2}{n} \mathcal{A}^*\mathcal{A}+(\alpha-1)\mathcal{I}\right)\left(\mathbf{\Pi}_{\mathcal{S}^d_+}\left(\mathbf{U}^{(j)}+\frac{2}{n}\mathcal{A}^*\left(\gamma^j\right) -\mathbf{Y}^j\right)-\mathbf{U}^{(j)}\right). \end{array} \end{equation} From the non-expansiveness of the proximal projection operator, we have \begin{equation}\label{eq_33_1} \begin{array}{lll} &\|\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}(\mathbf{U}^{(j)}+\frac{2}{n}\mathcal{A}^*(\gamma^j) -\mathbf{Y}^j)-\mathbf{U}^{(j)}\|_F\\ &=\|\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}(\mathbf{U}^{(j)}+\frac{2}{n}\mathcal{A}^*(\gamma^j) -\mathbf{Y}^j)-\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}(\mathbf{U}^{(j)})\|_F\\ &=\|\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}(\mathbf{U}^{(j)}+\frac{2}{n}\mathcal{A}^*(\gamma^j) -\mathbf{Y}^j)-\operatorname{prox}_{\delta_{\mathcal{S}^d_+}}(\mathbf{U}^{(j)}-\mathbf{Y}^j)\|_F\leq \frac{2}{n}\|\mathcal{A}^*(\gamma^j)\|_F. \end{array} \end{equation} Here, the second equality follows from \eqref{eq_23_1} and the relation $\mathbf{U}^{(j)}=\frac{\mathbf{Y}^j+\mathbf{\Phi}^{k}-\mathcal{A}^*\left(\boldsymbol{z}^j\right)}{\alpha}$.
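The non-expansiveness invoked in \eqref{eq_33_1} is the standard 1-Lipschitz property of the projection onto a convex set. A small numerical sanity check (random symmetric matrices and an eigenvalue-clipping projection, both hypothetical choices for illustration) is sketched below:

```python
import numpy as np

def proj_psd(M):
    """PSD cone projection via eigenvalue clipping."""
    w, Q = np.linalg.eigh((M + M.T) / 2.0)
    return (Q * np.maximum(w, 0.0)) @ Q.T

rng = np.random.default_rng(1)
ok = True
for _ in range(200):
    X = rng.standard_normal((4, 4)); X = X + X.T
    Y = rng.standard_normal((4, 4)); Y = Y + Y.T
    # 1-Lipschitz property of the projection onto a convex set:
    # ||Pi(X) - Pi(Y)||_F <= ||X - Y||_F
    lhs = np.linalg.norm(proj_psd(X) - proj_psd(Y), "fro")
    rhs = np.linalg.norm(X - Y, "fro")
    ok = ok and (lhs <= rhs + 1e-10)
print(ok)  # True
```

The inequality holds for every pair of test matrices, as guaranteed by the convexity of $\mathcal{S}^d_+$.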
Then we have \begin{equation}\nonumber \begin{array}{lll} \|\mathbf{\Delta}^{(j)}\|_F &\leq\frac{2}{n}\|\frac{2}{n} \mathbf{A}^{\top}\mathbf{A}+(\alpha-1)\mathbf{I}\|_F\|\mathcal{A}^*(\gamma^j)\|_F\\ &\leq \frac{2}{n}\|\mathbf{A}\|_F\|\frac{2}{n} \mathbf{A}^{\top}\mathbf{A}+(\alpha-1)\mathbf{I}\|_F\|\gamma^j\|. \end{array} \end{equation} Hence $\mathbf{V}^{k+1} = \tilde{\mathbf{U}}^{(j)}$ is a solution of \eqref{eq_16} satisfying the inexactness condition $\|\mathbf{\Delta}^{(j)}\|_F< \epsilon_{k+1}$ whenever $\|\gamma^j\|< \zeta_k$ holds, where $\zeta_k$ is defined as \begin{equation}\label{eq_34_1} \zeta_k := \frac{n}{2}\|\mathbf{A}\|_F^{-1}\|\frac{2}{n} \mathbf{A}^{\top}\mathbf{A}+(\alpha-1)\mathbf{I}\|_F^{-1}\epsilon_{k+1} . \end{equation} It should be noted that computing the norm $\|\frac{2}{n} \mathbf{A}^{\top}\mathbf{A}+(\alpha-1)\mathbf{I}\|_F$ directly is expensive when $d^2\gg p$. Thus, we compute this norm as follows: \[\|\frac{2}{n} \mathbf{A}^{\top}\mathbf{A}+(\alpha-1)\mathbf{I}\|_F = \sqrt{\|\frac{2}{n} \mathbf{A}\mathbf{A}^{\top}\|_F^2+\|(1-\alpha)\mathbf{I}\|_F^2 -\langle (1-\alpha)\mathbf{A}\mathbf{A}^{\top}, \frac{4}{n}\mathbf{I}\rangle},\] which reduces the computational cost from $O(d^4)$ to $O(p^2)$. \par Next, we study the convergence of the proposed s-iPDCA for solving the DCLSSDP \eqref{eq_11}. \section{Global convergence analysis for s-iPDCA}\label{sec:4} In this section, we present the convergence analysis of the proposed s-iPDCA for solving \eqref{eq_11}.
A feasible point $\mathbf{U}\in \mathcal{S}_+^d$ is said to be a stationary point of the DC problem \eqref{eq_11} if \begin{equation}\label{eq_35_1} \left(\nabla f_1\left(\mathbf{U}\right)+\mathcal{N}_{\mathcal{S}_+^d}\left(\mathbf{U}\right)\right)\bigcap \partial f_2\left(\mathbf{U}\right)\neq\emptyset, \end{equation} where $\mathcal{N}_{\mathcal{S}_+^d}\left(\mathbf{U}\right)$ is the normal cone of the convex set $\mathcal{S}_+^d$ at $\mathbf{U}$, which equals $\partial\delta_{\mathcal{S}_+^d}\left(\mathbf{U}\right)$ by the convexity of $\mathcal{S}_+^d$ \cite{Ref_rockafellar1970convex}. The following results on the convergence of the proposed s-iPDCA (Algorithm \ref{alg_s_iPDCA}) for the DCLSSDP \eqref{eq_11} follow from the basic convergence theorem of DCA \cite{Ref_tao1997convex}. \par To ensure that the objective function in \eqref{eq_11} is coercive, we assume that $\mathcal{A}$ satisfies the Restricted Isometry Property (RIP) condition \cite{Ref_candes2008restricted}, which is one of the most standard assumptions in the low-rank matrix recovery literature \cite{Ref_bhojanapalli2016global,Ref_cai2014sparse,Ref_luo2020recursive}. \begin{definition}[Restricted Isometry Property (RIP)]\label{def_1} Let $\mathcal{A}:\mathcal{S}_+^d\rightarrow \mathcal{R}^p$ be a linear map. For every integer $r$ with $1\leq r\leq d$, define the $r$-restricted isometry constant to be the smallest number $R_r$ such that \[\left(1-R_{r}\right)\|\mathbf{U}\|_F^{2} \leq \|\mathcal{A}(\mathbf{U})\|^{2} \leq\left(1+R_{r}\right)\|\mathbf{U}\|_F^{2}\] holds for all $\mathbf{U}$ of rank at most $r$. The map $\mathcal{A}$ is said to satisfy the $r$-restricted isometry property ($r$-RIP) if $0 \leq R_{r} <  1$. \end{definition} \begin{proposition}\label{prop_4} Assume that $\mathcal{A}$ satisfies the $r$-restricted isometry property ($r$-RIP) and let the sequence of stability centers $\{\mathbf{U}^{k}\}$ be generated by s-iPDCA for solving \eqref{eq_11}. Then the following statements hold.\\ $(1)$.
The function $J_c$ in \eqref{eq_11} is lower bounded and coercive.\\ $(2)$. The sequence $\left\{J_c(\mathbf{U}^{k})\right\}$ is nonincreasing.\\ $(3)$. The sequence $\left\{\mathbf{U}^{k}\right\}$ is bounded. \end{proposition} \begin{proof} We first prove (1). As shown in \eqref{eq_11}, $J_c$ is the sum of two non-negative functions, $\frac{1}{n}\|\mathcal{A}(\mathbf{U})- \boldsymbol{b}\|^2$ and $c\left(\langle \mathbf{U},\mathbf{I}\rangle-\|\mathbf{U}\|_{(r)}\right)$, so $J_c$ is lower bounded. Combining this with the $r$-RIP of $\mathcal{A}$, we have \begin{equation}\nonumber \begin{array}{lll} J_c& = \frac{1}{n}\|\mathcal{A}(\mathbf{U})- \boldsymbol{b}\|^2 + c\left(\langle \mathbf{U},\mathbf{I}\rangle-\|\mathbf{U}\|_{(r)}\right)\geq \frac{1}{n}\|\mathcal{A}(\mathbf{U})- \boldsymbol{b}\|^2\\ &\geq \frac{1}{n}\|\mathcal{A}(\mathbf{U})\|^2-\frac{2}{n}\| \mathcal{A}(\mathbf{U})\|\|\boldsymbol{b}\|+\frac{1}{n}\|\boldsymbol{b}\|^2\\ &\geq \frac{1}{n}(1-R_r)\|\mathbf{U}\|_F^2-\frac{2}{n}\sqrt{1+R_r}\|\mathbf{U}\|_F\|\boldsymbol{b}\|+\frac{1}{n}\|\boldsymbol{b}\|^2. \end{array} \end{equation} Hence $J_c$ tends to infinity as $\|\mathbf{U}\|_F$ tends to infinity, which implies that $J_c$ is coercive. \par For statement (2). If $\mathbf{U}^{k+1}$ is generated in a null step, namely $\mathbf{U}^{k+1} = \mathbf{U}^{k}$, then $J_c\left(\mathbf{U}^{k+1}\right)\leq J_c\left(\mathbf{U}^{k}\right)$ holds immediately. Otherwise, if $\mathbf{U}^{k+1} = \mathbf{V}^{k+1}$, then from the optimality of $\mathbf{U}^{k+1}$ for solving \eqref{eq_16} and the feasibility of $\mathbf{U}^{k}$, we have \begin{equation}\label{eq_36} \begin{array}{ll} &f_1(\mathbf{U}^{k+1})-\langle \mathbf{U}^{k+1},\mathbf{W}^{k}\rangle+\frac{\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2-\langle\mathbf{U}^{k+1},\mathbf{\Delta}^{k+1}\rangle\\ &\leq f_1(\mathbf{U}^{k})-\langle\mathbf{U}^{k},\mathbf{W}^{k}\rangle -\langle\mathbf{U}^{k},\mathbf{\Delta}^{k+1}\rangle.
\end{array} \end{equation} Thanks to the convexity of $f_2\left(\mathbf{U}\right)$, we have \[f_2\left(\mathbf{U}^{k+1}\right)\geq f_2\left(\mathbf{U}^{k}\right) + \langle\mathbf{U}^{k+1}-\mathbf{U}^{k},\mathbf{W}^{k}\rangle.\] Combining this with \eqref{eq_36}, we get \begin{equation}\label{eq_37} \begin{array}{ll} &\frac{\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2-\langle\mathbf{U}^{k+1}-\mathbf{U}^{k},\mathbf{\Delta}^{k+1}\rangle\\ &\leq \left[f_1(\mathbf{U}^{k})-f_2(\mathbf{U}^{k})\right]-\left[f_1(\mathbf{U}^{k+1})-f_2(\mathbf{U}^{k+1})\right]. \end{array} \end{equation} Since $\mathbf{U}^{k+1}= \mathbf{V}^{k+1}$ is generated in a serious step of s-iPDCA, the test \eqref{eq_15} holds: \begin{equation}\label{eq_38} \|\mathbf{\Delta}^{k+1}\|_F\leq\frac{(1-\kappa)\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F. \end{equation} Then we have \begin{equation}\label{eq_39} \begin{array}{ll} &\frac{\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2-\langle\mathbf{U}^{k+1}-\mathbf{U}^{k},\mathbf{\Delta}^{k+1}\rangle\\ &\geq \frac{\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2-\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F\|\mathbf{\Delta}^{k+1}\|_F\geq \frac{\kappa\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2. \end{array} \end{equation} Applying this to \eqref{eq_37}, we get \begin{equation}\label{eq_40} \begin{array}{lll} \frac{\kappa\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2\leq\left[f_1\left(\mathbf{U}^{k}\right) - f_2\left(\mathbf{U}^{k}\right)\right] -\left[f_1\left(\mathbf{U}^{k+1}\right) - f_2\left(\mathbf{U}^{k+1}\right)\right]. \end{array} \end{equation} Hence the sequence $\left\{f_1(\mathbf{U}^{k}) - f_2(\mathbf{U}^{k})\right\}$ is nonincreasing. \par For statement (3). Since $J_c(\mathbf{U}^0)<\infty$, statements (1) and (2) imply that $\left\{J_c(\mathbf{U}^k)\right\}$ is bounded.
Suppose, to the contrary, that $\left\{\mathbf{U}^{k}\right\}$ is unbounded; then there exists a subset $\mathcal{K}^{\prime}\subset\mathcal{K}= \left\{0,1,2,\cdots\right\}$ such that $\|\mathbf{U}^k\|_F\rightarrow\infty$ as $k\rightarrow\infty$ with $k\in \mathcal{K}^{\prime}$. By the coercivity of $J_c$, we then have $J_c(\mathbf{U}^k)\rightarrow\infty$ along $\mathcal{K}^{\prime}$, which contradicts the boundedness of $\left\{J_c(\mathbf{U}^k)\right\}$. Thus, the sequence $\left\{\mathbf{U}^{k}\right\}$ is bounded. This completes the proof. \end{proof} Next, setting the tolerance $\varepsilon = 0$, we divide our convergence analysis into two parts: first, we consider the case in which only finitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11}; second, we suppose that infinitely many serious steps are performed. \subsection{Finite serious steps in s-iPDCA}\label{sec:4_1} In this subsection, we provide the convergence analysis for the situation in which only finitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11}. \begin{proposition}\label{prop_5} Set the tolerance $\varepsilon = 0$ and suppose that only finitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11}. The following statements hold:\\ $(1)$. If the algorithm s-iPDCA terminates in finitely many steps, namely, there exists $\bar{k}>0$ such that $\mathbf{V}^{\bar{k}+1}= \mathbf{U}^{\bar{k}}$, then the stability center $\mathbf{U}^{\bar{k}}$ is an $\epsilon_1$-stationary point of \eqref{eq_11}.\\ $(2)$. If after the $\hat{k}$-th iteration of s-iPDCA (Algorithm \ref{alg_s_iPDCA}) only null steps are performed, namely, $\mathbf{W}^{k+1} = \mathbf{W}^{\hat{k}+1}$ and $ \mathbf{U}^{k+1} = \mathbf{U}^{\hat{k}+1}$ for all $k>\hat{k}$, then \begin{equation} \label{eq_41} \lim_{k\rightarrow \infty}\mathbf{V}^{k+1} = \mathbf{U}^{\hat{k}+1}, \end{equation} and the stability center $\mathbf{U}^{\hat{k}+1}$ generated in the last serious step is a stationary point of \eqref{eq_11}.
\end{proposition} \begin{proof} For statement (1). From the optimality of $\mathbf{V}^{\bar{k}+1}$ for solving the strongly convex subproblem \eqref{eq_16}, we have that $\mathbf{U}^{\bar{k}} =\mathbf{V}^{\bar{k}+1}$ solves \[\min f_1\left(\mathbf{U}\right)- \langle \mathbf{U}, \mathbf{W}^{\bar{k}}\rangle +\delta_{\mathcal{S}^d_+}\left(\mathbf{U}\right) + \frac{\alpha}{2}\|\mathbf{U}-\mathbf{U}^{\bar{k}}\|_F^2-\langle \mathbf{\Delta}^{\bar{k}+1}, \mathbf{U}\rangle.\] Thus, the optimality condition \[0 \in \nabla f_1\left(\mathbf{U}^{\bar{k}}\right) -\mathbf{W}^{\bar{k}}+\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{\bar{k}}\right)-\mathbf{\Delta}^{\bar{k}+1}\] holds. Then, there exists $\xi\in \partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{\bar{k}}\right)$ such that \[ \nabla f_1\left(\mathbf{U}^{\bar{k}}\right) - \mathbf{W}^{\bar{k}}+\xi -\mathbf{\Delta}^{\bar{k}+1} = \mathbf{0}.\] Thus, $\|\nabla f_1\left(\mathbf{U}^{\bar{k}}\right) - \mathbf{W}^{\bar{k}}+\xi \|_F= \|\mathbf{\Delta}^{\bar{k}+1}\|_F\leq \epsilon_{\bar{k}+1}\leq\epsilon_{1}$. So we have \[\mathbf{0} \in \partial_{\epsilon_1}\left[f_1\left(\mathbf{U}^{\bar{k}}\right) - f_2\left(\mathbf{U}^{\bar{k}}\right)+\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{\bar{k}}\right)\right],\] which means that $\mathbf{0}$ belongs to the $\epsilon_1$-inexact subdifferential of $f_1\left(\mathbf{U}\right)- f_2\left(\mathbf{U}\right)+\delta_{\mathcal{S}^d_+}\left(\mathbf{U}\right)$ at the point $\mathbf{U}^{\bar{k}}$. Hence $\mathbf{U}^{\bar{k}}$ is an $\epsilon_1$-stationary point of the DC problem \eqref{eq_11}. For statement (2). We first show that $\lim_{k\rightarrow \infty}\mathbf{V}^{k+1} = \mathbf{U}^{\hat{k}}$. Since the test \eqref{eq_15} fails for all $k>\hat{k}$, we have that \begin{equation}\label{eq_42} \frac{\left(1-\kappa\right)\alpha}{2}\|\mathbf{V}^{k+1}-\mathbf{U}^{\hat{k}}\|_F\leq \|\mathbf{\Delta}^{k+1}\|_F\leq \epsilon_{k+1} \end{equation} holds for all $k>\hat{k}$.
Since the sequence $\left\{\epsilon_{k+1}\right\}$ decreases monotonically to zero, taking limits on both sides of inequality \eqref{eq_42} yields \begin{equation}\label{eq_43} \lim_{k\rightarrow \infty}\frac{\left(1-\kappa\right)\alpha}{2}\|\mathbf{V}^{k+1}-\mathbf{U}^{\hat{k}}\|_F = 0. \end{equation} Hence $\lim_{k\rightarrow \infty}\mathbf{V}^{k+1} = \mathbf{U}^{\hat{k}}$. Next, we show that $\mathbf{U}^{\hat{k}}$ is a stationary point of \eqref{eq_11}. By the optimality of $\mathbf{V}^{k+1}$ for solving \eqref{eq_14}, we know that for all $k > \hat{k}$, \[0 \in \nabla f_1\left(\mathbf{V}^{k+1}\right) - \mathbf{W}^{\hat{k}}+\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{V}^{k+1}\right)+\alpha\left(\mathbf{V}^{k+1} -\mathbf{U}^{\hat{k}}\right)-\mathbf{\Delta}^{k+1}.\] Then, there exists $\zeta^{k+1}\in\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{V}^{k+1}\right)$ such that \begin{equation}\label{eq_44} \nabla f_1\left(\mathbf{V}^{k+1}\right) - \mathbf{W}^{\hat{k}}+\zeta^{k+1}+\alpha\left(\mathbf{V}^{k+1} -\mathbf{U}^{\hat{k}}\right)-\mathbf{\Delta}^{k+1} = 0. \end{equation} So we have \begin{equation}\label{eq_45} \begin{array}{ll} \|\nabla f_1(\mathbf{V}^{k+1}) - \mathbf{W}^{\hat{k}}+\zeta^{k+1}\|_F&\leq \alpha\|\mathbf{V}^{k+1} -\mathbf{U}^{\hat{k}}\|_F+\|\mathbf{\Delta}^{k+1}\|_F\\ &\leq\alpha\|\mathbf{V}^{k+1} -\mathbf{U}^{\hat{k}}\|_F+\epsilon_{k+1}. \end{array} \end{equation} By Proposition \ref{prop_4}, $\left\{\mathbf{U}^{k}\right\}$ is bounded, and the boundedness of $\left\{\mathbf{V}^{k+1}\right\}$ follows from \[\lim_{k\rightarrow\infty}\mathbf{V}^{k+1} = \mathbf{U}^{\hat{k}}.\] Then the boundedness of $\left\{\zeta^{k}\right\}$ follows from \eqref{eq_44}, since all the remaining terms in \eqref{eq_44} are bounded.
Thus, there exists a subset $\mathcal{K}^{\prime}\subset\mathcal{K} = \left\{0,1,2,\cdot\cdot\cdot\right\}$ such that $\lim_{k\in\mathcal{K}^{\prime}}\zeta^{k+1} = \hat{\zeta}\in \partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{\hat{k}}\right) $ and $\lim_{k\in\mathcal{K}^{\prime}}\epsilon_k = 0$. Then \begin{equation}\label{eq_46} \lim_{k\in\mathcal{K}^{\prime}}\|\nabla f_1\left(\mathbf{V}^{k+1}\right) - \mathbf{W}^{\hat{k}}+\zeta^{k+1}\|_F=\|\nabla f_1\left(\mathbf{U}^{\hat{k}}\right) - \mathbf{W}^{\hat{k}}+\hat{\zeta}\|_F=0. \end{equation} So we have \begin{equation} \label{eq_47} 0 \in \nabla f_1\left(\mathbf{U}^{\hat{k}}\right) -\partial f_2\left(\mathbf{U}^{\hat{k}}\right) +\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{\hat{k}}\right), \end{equation} which implies that $\mathbf{U}^{\hat{k}}$ is a stationary point of problem \eqref{eq_11}. This completes the proof. \end{proof} \subsection{Infinite serious steps in s-iPDCA}\label{sec:4_2} In this subsection, we consider the case in which infinitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11} when the tolerance $\varepsilon$ is set to $0$. \begin{theorem}[Global subsequential convergence of s-iPDCA]\label{thm:1} Set the tolerance $\varepsilon = 0 $. Let $\{\mathbf{U}^{k}\}$ be the stability center sequence generated by s-iPDCA for solving \eqref{eq_11}. Then the following statements hold:\\ $(1)$. $\lim_{k\rightarrow \infty}\|\mathbf{U}^{k+N}-\mathbf{U}^{k}\|_F =0$ for every finite $N\in \mathcal{N}^+$.\\ $(2)$. Any accumulation point $\bar{\mathbf{U}}$ of $\left\{\mathbf{U}^{k}\right\}$ is a stationary point of \eqref{eq_11}. \end{theorem} \begin{proof} For statement (1). From Proposition \ref{prop_4}, the sequence $\left\{f_1(\mathbf{U}^{k}) - f_2(\mathbf{U}^{k})\right\}$ is nonincreasing and bounded below, so $\liminf_{k\rightarrow \infty}\left[f_1\left(\mathbf{U}^{k+1}\right) -f_2\left(\mathbf{U}^{k+1}\right)\right]$ is finite.
Thus, by summing both sides of \eqref{eq_40} from $k = 0$ to $\infty$, we obtain \begin{equation}\label{eq_48} \begin{array}{lll} &\mathbf{\Sigma}_{k= 0}^{\infty}\frac{\kappa\alpha}{2}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F^2\\ &\leq\left[f_1(\mathbf{U}^{0}) - f_2(\mathbf{U}^{0})\right] -\liminf_{k\rightarrow \infty}\left[f_1(\mathbf{U}^{k+1}) - f_2(\mathbf{U}^{k+1})\right]<\infty. \end{array} \end{equation} Hence $\lim_{k\rightarrow \infty}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F = 0$, and by the triangle inequality, $\lim_{k\rightarrow \infty}\|\mathbf{U}^{k+N}-\mathbf{U}^{k}\|_F = 0$ for every finite $N\in \mathcal{N}^+$.\par For statement (2). Since infinitely many serious steps are performed, for every $k>0$ there exists a finite $N\in \mathcal{N}^+$ such that $\mathbf{U}^{k+N}$ is the stability center generated at the next serious step of s-iPDCA (Algorithm \ref{alg_s_iPDCA}), so the following optimality condition for \eqref{eq_16} holds: \begin{equation} \label{eq_49} 0 \in \nabla f_1\left(\mathbf{U}^{k+N}\right) - \mathbf{W}^{k}+\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{k+N}\right)+\alpha\left(\mathbf{U}^{k+N} -\mathbf{U}^{k}\right)-\mathbf{\Delta}^{k+N}. \end{equation} Then, there exists $\zeta^{k+N}\in\partial\delta_{\mathcal{S}^d_+}\left(\mathbf{U}^{k+N}\right)$ such that \[\nabla f_1\left(\mathbf{U}^{k+N}\right) - \mathbf{W}^{k}+\zeta^{k+N}+\alpha\left(\mathbf{U}^{k+N} -\mathbf{U}^{k}\right)-\mathbf{\Delta}^{k+N}=0.\] So we have \begin{equation}\label{eq_50} \left\|\nabla f_1(\mathbf{U}^{k+N}) - \mathbf{W}^{k}+\zeta^{k+N}\right\|_F\leq \alpha\left\|\mathbf{U}^{k+N} -\mathbf{U}^{k}\right\|_F+\epsilon_{k+N}. \end{equation} By the boundedness of $\left\{\mathbf{U}^{k}\right\}$, there exists a subset $\mathcal{K}^{\prime} \subset \mathcal{K}=\{0,1,2 \ldots\}$ such that $\left\{\mathbf{U}^{k}\right\}_{\mathcal{K}^{\prime}}$ converges to an accumulation point $\bar{\mathbf{U}}$.
Combining the boundedness of $\left\{\mathbf{U}^{k}\right\}_{\mathcal{K}^{\prime}}$ with the continuity and convexity of $f_2$ and $\delta_{\mathcal{S}^d_+}$, we deduce that the subsequences $\left\{\mathbf{W}^{k}\right\}_{\mathcal{K}^{\prime}}$ and $\left\{\zeta^{k+N}\right\}_{\mathcal{K}^{\prime}}$ are bounded. By the fact that the nonnegative sequence $\left\{\epsilon_{k+N}\right\}$ monotonically decreases to zero, we may assume without loss of generality that there exists a subset $\mathcal{K}^{\prime \prime} \subset \mathcal{K}^{\prime}$ such that $\lim_{k\in \mathcal{K}^{\prime \prime}}\mathbf{W}^{k} = \bar{\mathbf{W}}\in \partial f_2(\bar{\mathbf{U}})$, $\lim_{k\in \mathcal{K}^{\prime \prime}}\zeta^{k+N} = \bar{\zeta}\in \partial\delta_{\mathcal{S}^d_+}\left(\bar{\mathbf{U}}\right)$ and $\lim_{k\in \mathcal{K}^{\prime \prime}}\epsilon_{k+N} = 0$. Taking limits on both sides of the inequality in \eqref{eq_50} with $k\in \mathcal{K}^{\prime \prime}$, we have \begin{equation}\label{eq_51} \begin{array}{ll} \left\|\nabla f_1\left(\bar{\mathbf{U}}\right) - \bar{\mathbf{W}}+\bar{\zeta}\right\|_F&=\lim_{k\in\mathcal{K}^{\prime\prime}}\left\|\nabla f_1(\mathbf{U}^{k+N}) - \mathbf{W}^{k}+\zeta^{k+N}\right\|_F\\ & \leq \lim_{k\in\mathcal{K}^{\prime\prime}} \alpha\left\|\mathbf{U}^{k+N} -\mathbf{U}^{k}\right\|_F+\lim_{k\in \mathcal{K}^{\prime \prime}}\epsilon_{k+N}= 0, \end{array} \end{equation} which implies that $\left\|\nabla f_1\left(\bar{\mathbf{U}}\right) - \bar{\mathbf{W}}+\bar{\zeta}\right\|_F = 0$. Therefore, any accumulation point $\bar{\mathbf{U}}$ of $\left\{\mathbf{U}^{k}\right\}$ satisfies the following optimality condition: \begin{equation} \label{eq_52} 0 \in \nabla f_1\left(\bar{\mathbf{U}}\right) -\partial f_2\left(\bar{\mathbf{U}}\right) +\partial\delta_{\mathcal{S}^d_+}\left(\bar{\mathbf{U}}\right). \end{equation} This implies that any accumulation point of $\left\{\mathbf{U}^{k}\right\}$ is a stationary point of \eqref{eq_11}.
This completes the proof. \end{proof} In order to show that the sequence $\{\mathbf{U}^k\}$ actually converges to a stationary point of \eqref{eq_11} when infinitely many serious steps are performed in Algorithm \ref{alg_s_iPDCA}, we recall the following definition of the Kurdyka-Łojasiewicz (KL) property for lower semicontinuous functions \cite{Ref_attouch2009convergence,Ref_bolte2016majorization,Ref_bolte2014proximal}. Let $a > 0$ and let $\mathbf{\Xi}_a$ be the class of functions $\varphi:[0,a)\rightarrow \mathcal{R}^+$ that satisfy the following conditions:\\ $(1)$. $\varphi(0) = 0$;\\ $(2)$. $\varphi$ is positive, concave and continuous;\\ $(3)$. $\varphi$ is continuously differentiable on $\left[0,a\right)$ with $\varphi^{\prime}(x)>0$ for all $x\in \left[0,a\right)$. \begin{definition}[KL property]\label{def_3} A proper lower semicontinuous function $h:\mathcal{R}^q \rightarrow (-\infty, \infty]$ is said to have the KL property at $\bar{\boldsymbol{u}} \in \operatorname{dom} h$ if there exist $a > 0$, a neighborhood $\mathcal{U}$ of $\bar{\boldsymbol{u}}$ and a concave function $\varphi\in\mathbf{\Xi}_a$ such that \[\varphi^{\prime}\left(h\left(\boldsymbol{u}\right)-h\left(\bar{\boldsymbol{u}}\right)\right)\operatorname{dist}\left(0,\partial h\left(\boldsymbol{u}\right)\right)\geq 1,\quad\forall \boldsymbol{u}\in\mathcal{U} \quad\text{with} \quad h\left(\bar{\boldsymbol{u}}\right)\leq h\left(\boldsymbol{u}\right) \leq h\left(\bar{\boldsymbol{u}}\right)+a,\] where $\operatorname{dist}\left(\boldsymbol{0},\partial h\left(\boldsymbol{u}\right)\right)$ is the distance from the point $\boldsymbol{0}$ to the nonempty closed set $\partial h\left(\boldsymbol{u}\right)$. The function $h$ is said to be a KL function if it has the KL property at each point of $\operatorname{dom}h$. \end{definition} \begin{lemma}[Uniformized KL property]\label{lemma_1} Suppose that $h$ is a proper closed function and let $\mathbf{\Gamma}$ be a compact set.
If $h$ is constant on $\mathbf{\Gamma}$ and satisfies the KL property at each point of $\mathbf{\Gamma}$, then there exist $\epsilon>0$, $a>0$ and $\varphi\in \mathbf{\Xi}_a$ such that \begin{equation}\label{eq_53} \varphi^{\prime}\left(h\left(\boldsymbol{u}\right)-h\left(\bar{\boldsymbol{u}}\right)\right)\operatorname{dist}\left(0,\partial h\left(\boldsymbol{u}\right)\right)\geq 1 \end{equation} for any $\bar{\boldsymbol{u}}\in\mathbf{\Gamma}$ and $\boldsymbol{u}\in\mathcal{U}$ with \[\mathcal{U} = \left\{\boldsymbol{u}\,|\,\operatorname{dist}\left(\boldsymbol{u},\mathbf{\Gamma}\right)<\epsilon \quad \text{and} \quad h\left(\bar{\boldsymbol{u}}\right)\leq h\left(\boldsymbol{u}\right) \leq h\left(\bar{\boldsymbol{u}}\right)+a \right\}.\] \end{lemma} Semialgebraic functions are among the most frequently used functions with the KL property, and this class of functions has been used in many previous studies\cite{Ref_bolte2007lojasiewicz,Ref_bolte2007clarke,Ref_jiang2021proximal}. As is well known, a real symmetric matrix is positive semidefinite if and only if all its principal minors are non-negative, which means that $\mathcal{S}_+^d$ can be written as the intersection of finitely many polynomial inequalities; for instance, $\mathcal{S}^2_+=\left\{\left(\begin{smallmatrix} a & b\\ b & c \end{smallmatrix}\right): a\geq 0,\ c\geq 0,\ ac-b^2\geq 0\right\}$. Thus, $\mathcal{S}_+^d$ is a semialgebraic set because the semialgebraic property is stable under boolean operations. Moreover, from the Tarski-Seidenberg theorem \cite[Theorem 8.6.6]{Ref_1993Algorithmic}, we know that the semidefinite programming representable sets are all semialgebraic\cite{Ref_ben2001lectures,Ref_ioffe2009invitation}.\par Now, we are ready to present the global convergence of the sequence $\left\{\mathbf{U}^k\right\}$ generated by s-iPDCA.
Similar to the method proposed by Liu et al.\cite{Ref_liu2019refined}, we make use of the following auxiliary function: \begin{equation} \label{eq_54} E\left(\mathbf{U},\mathbf{W},\mathbf{V}, \mathbf{T}\right) = f_1\left(\mathbf{U}\right)-\langle \mathbf{U},\mathbf{W}\rangle + f^*_2\left(\mathbf{W}\right) +\frac{\alpha}{2}\|\mathbf{U}-\mathbf{V}\|^2_F-\langle \mathbf{T}, \mathbf{U}-\mathbf{V}\rangle. \end{equation} Here, $f^*_2$ is the convex conjugate of $f_2$, given by \begin{equation} \label{eq_55} f^*_2\left(\mathbf{W}\right) = \sup_{\mathbf{U}\in\mathcal{S}_+^d} \left\{\langle \mathbf{W},\mathbf{U}\rangle-f_2\left(\mathbf{U}\right)\right\}. \end{equation} Then $f_1\left(\mathbf{U}\right)-f_2\left(\mathbf{U}\right)\leq f_1\left(\mathbf{U}\right) -\langle \mathbf{U},\mathbf{W}\rangle + f^*_2\left(\mathbf{W}\right)$ holds. Since $f_2\left(\mathbf{U}\right) = c\|\mathbf{U}\|_{(r)}$ is a proper closed convex function, $f^*_2\left(\mathbf{W}\right)$ is also a proper closed convex function and Young's inequality holds: \begin{equation} \label{eq_56} f^*_2\left(\mathbf{W}\right) + f_2\left(\mathbf{U}\right)\geq \langle \mathbf{U},\mathbf{W}\rangle, \end{equation} with equality if and only if $\mathbf{W}\in \partial f_2\left(\mathbf{U}\right)$. Moreover, for any $\mathbf{U}$ and $\mathbf{W}$, $\mathbf{W}\in \partial f_2\left(\mathbf{U}\right)$ if and only if $\mathbf{U}\in \partial f^*_2\left(\mathbf{W}\right)$. Obviously, $E$ is a semialgebraic function, and hence a KL function.\par Under the assumption that infinitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11}, only finitely many stability centers are generated in null steps between consecutive serious steps.
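Since the convexity of $f_2(\mathbf{U}) = c\|\mathbf{U}\|_{(r)}$ and its subgradients drive these inequalities, a small numerical sanity check may be helpful. The sketch below (the dimension, the rank parameter and the penalty weight $c$ are hypothetical) builds a subgradient of the Ky Fan $r$-norm from the top-$r$ eigenvectors and verifies the subgradient inequality $f_2(\mathbf{V})\geq f_2(\mathbf{U})+\langle \mathbf{W},\mathbf{V}-\mathbf{U}\rangle$:

```python
import numpy as np

c, d, r = 0.5, 6, 2   # hypothetical penalty weight and sizes

def kyfan(U, r):
    """Ky Fan r-norm of a symmetric matrix: sum of its r largest eigenvalues."""
    return np.linalg.eigvalsh(U)[-r:].sum()   # eigvalsh returns ascending order

def kyfan_subgrad(U, r, c):
    """A subgradient of c*||.||_(r) at U: c times the orthogonal
    projector onto the span of the top-r eigenvectors of U."""
    _, Q = np.linalg.eigh(U)
    Vr = Q[:, -r:]
    return c * (Vr @ Vr.T)

rng = np.random.default_rng(2)
A = rng.standard_normal((d, d)); U = A + A.T
B = rng.standard_normal((d, d)); V = B + B.T
W = kyfan_subgrad(U, r, c)

# Subgradient inequality f2(V) >= f2(U) + <W, V - U>,
# a consequence of Fan's maximum principle.
lhs = c * kyfan(V, r)
rhs = c * kyfan(U, r) + np.sum(W * (V - U))
print(lhs >= rhs - 1e-8)  # True
```

The inequality holds because the sum of the $r$ largest eigenvalues equals the maximum of $\langle \mathbf{P},\mathbf{V}\rangle$ over rank-$r$ orthogonal projectors $\mathbf{P}$, attained at the top-$r$ eigenprojector.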
The sequence $\left\{\mathbf{U}^k\right\}$ can be displayed as \begin{equation}\nonumber \left\{\cdots,\underbrace{\mathbf{U}^{k-M}}_{\mathbf{U}^{k_{l}}},\underbrace{\mathbf{U}^{k-M+1},\cdots,\mathbf{U}^{k}}_{\text{M null steps}},\underbrace{\mathbf{U}^{k+1}}_{\mathbf{U}^{k_{l+1}}},\underbrace{\mathbf{U}^{k+2},\cdots,\mathbf{U}^{k+N+1}}_{{\text{N null steps}}},\underbrace{\mathbf{U}^{k+N+2}}_{\mathbf{U}^{k_{l+2}}},\cdots\right\}, \end{equation} where $\mathbf{U}^{k_l}$ denotes the stability center generated in the $l$-th serious step. Since the subsequence \[\left\{\mathbf{U}^{k-M+1},\cdots, \mathbf{U}^{k},\mathbf{U}^{k+2},\cdots,\mathbf{U}^{k+N+1}\right\}\] collects the stability centers generated in null steps between the $l$-th and the $(l+2)$-th serious steps, we have $\mathbf{U}^{k_{l}}=\mathbf{U}^{k-M} = \mathbf{U}^{k-M+1}=\cdots=\mathbf{U}^{k}$ and $\mathbf{U}^{k_{l+1}} = \mathbf{U}^{k+1} = \mathbf{U}^{k+2}=\cdots=\mathbf{U}^{k+N+1}$, which shows that the stability centers generated in null steps are finite repetitions of those generated in serious steps. By removing the $\mathbf{U}^k$ generated in null steps from $\left\{\mathbf{U}^k\right\}$, we obtain a subsequence $\left\{\mathbf{U}^{k_l}\right\}$. In addition, the subsequence $\left\{\mathbf{W}^{k_{l}}\right\}$ collects the chosen subgradients of $f_2$ at $\mathbf{U}^{k_{l}}$, and the subsequence $\left\{\mathbf{\Delta}^{k_{l}}\right\}$ collects the inexact terms associated with $\mathbf{U}^{k_{l}}$, so that $\|\mathbf{\Delta}^{k_{l}}\|_F\leq \frac{(1-\kappa)\alpha}{2}\|\mathbf{U}^{k_{l}}-\mathbf{U}^{k_{l-1}}\|_F$ holds. \begin{proposition}\label{prop_6} Let $E$ be defined in \eqref{eq_54}. Suppose that infinitely many serious steps are performed in s-iPDCA for solving \eqref{eq_11}. Let $\left\{\mathbf{U}^{k_l}\right\}$, $\left\{\mathbf{\Delta}^{k_{l}}\right\}$ and $\left\{\mathbf{W}^{k_l}\right\}$ be the subsequences generated in serious steps of s-iPDCA for solving \eqref{eq_11}.
Then the following statements hold:\\ $(1)$ For any $l$, \begin{equation} \label{eq_57} J_c\left(\mathbf{U}^{k_{l+1}}\right) \leq E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right). \end{equation} $(2)$ For any $l$, \begin{equation} \label{eq_58} E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)-E\left(\mathbf{U}^{k_l},\mathbf{W}^{k_{l-1}},\mathbf{U}^{k_{l-1}},\mathbf{\Delta}^{k_{l}}\right)\leq -\frac{\kappa\alpha}{2}\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\|^2_F. \end{equation} $(3)$ The set of accumulation points of the sequence $\left\{\left(\mathbf{U}^{k_{l+1}}, \mathbf{W}^{k_l}, \mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\right\}$, denoted by $\mathbf{\Gamma}$, is a nonempty compact set. \\ $(4)$ The limit $\Upsilon = \lim_{l\rightarrow\infty}E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)$ exists and $E \equiv \Upsilon$ on $\mathbf{\Gamma}$. \\ $(5)$ There exists $\rho > 0$ such that for any $l \geq 1$, we have \begin{equation} \label{eq_59} \operatorname{dist}\left(0,\partial E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\right)\leq \rho\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F. \end{equation} \end{proposition} \begin{proof} We first prove (1). Since $\mathbf{U}^{k_{l+1}} = \mathbf{V}^{k+1}$ is the stability center generated in a serious step, the test \eqref{eq_15} holds: \begin{equation}\label{60} \|\mathbf{\Delta}^{k_{l+1}}\|_F\leq\frac{\left(1-\kappa\right)\alpha}{2}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F, \end{equation} and hence \begin{equation}\label{eq_61} \frac{\kappa\alpha}{2}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F^2\leq \frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F^2-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle.
\end{equation} Thus, we have \begin{equation}\nonumber \begin{array}{lll} &&E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\\ &=& f_1\left(\mathbf{U}^{k_{l+1}}\right)-\langle \mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l}\rangle + f^*_2\left(\mathbf{W}^{k_l}\right)+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ &=& f_1\left(\mathbf{U}^{k_{l+1}}\right)-\langle \mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l},\mathbf{W}^{k_l}\rangle - f_2\left(\mathbf{U}^{k_l}\right)+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ &\geq& f_1\left(\mathbf{U}^{k_{l+1}}\right)- f_2\left(\mathbf{U}^{k_{l+1}}\right)+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ &\geq& f_1\left(\mathbf{U}^{k_{l+1}}\right)- f_2\left(\mathbf{U}^{k_{l+1}}\right)+\frac{\kappa\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F\\ &\geq& f_1\left(\mathbf{U}^{k_{l+1}}\right)- f_2\left(\mathbf{U}^{k_{l+1}}\right) = J_c\left(\mathbf{U}^{k_{l+1}}\right). \end{array} \end{equation} In here, the second euqality follows from the convexity of $f_2$ and the fact that $\mathbf{W}^{k_l}\in\partial f_2\left(\mathbf{U}^{k_l}\right)$, the first ineuqality follows from the convexity of $f_2$.\par For state (2). 
Since $\mathbf{U}^{k_{l+1}} =\mathbf{V}^{k+1}\in\mathcal{S}_+^d$ is the optimal solution of the strongly convex problem \eqref{eq_16}, the following inequality follows from the feasibility of $\mathbf{U}^{k_l}$: \begin{equation}\label{eq_62} \begin{array}{lll} & f_1(\mathbf{U}^{k_{l+1}})-\langle \mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l}\rangle+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F^2-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}\rangle\\ &\leq f_1(\mathbf{U}^{k_l})-\langle \mathbf{U}^{k_l},\mathbf{W}^{k_l}\rangle -\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_l}\rangle. \end{array} \end{equation} Then we know that \begin{equation} \nonumber \begin{array}{lll} & &E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\\ &=& f_1\left(\mathbf{U}^{k_{l+1}}\right)-\langle \mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l}\rangle + f^*_2\left(\mathbf{W}^{k_l}\right)+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F -\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ &\leq& f_1(\mathbf{U}^{k_l})+\langle \mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l},\mathbf{W}^{k_l}\rangle-\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F^2 +\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ &&-\langle \mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l}\rangle + f^*_2\left(\mathbf{W}^{k_l}\right)+\frac{\alpha}{2}\|\mathbf{U}^{k_{l+1}} -\mathbf{U}^{k_l}\|^2_F-\langle\mathbf{\Delta}^{k_{l+1}},\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\rangle\\ & =& f_1(\mathbf{U}^{k_l})-\langle \mathbf{U}^{k_l},\mathbf{W}^{k_l}\rangle + f^*_2\left(\mathbf{W}^{k_l}\right). \end{array} \end{equation} Similar to inequality \eqref{eq_61}, the following inequality \begin{equation}\label{eq_63} \frac{\kappa\alpha}{2}\|\mathbf{U}^{k_l}-\mathbf{U}^{k_{l-1}}\|_F^2\leq \frac{\alpha}{2}\|\mathbf{U}^{k_l}-\mathbf{U}^{k_{l-1}}\|_F^2-\langle\mathbf{\Delta}^{k_{l}},\mathbf{U}^{k_l}-\mathbf{U}^{k_{l-1}}\rangle
\end{equation} holds true. Consequently, we have \begin{equation}\nonumber \begin{array}{lll} &&E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\\ & \leq & f_1(\mathbf{U}^{k_l})-\langle \mathbf{U}^{k_l},\mathbf{W}^{k_l}\rangle + f^*_2\left(\mathbf{W}^{k_l}\right)= f_1(\mathbf{U}^{k_l}) - f_2\left(\mathbf{U}^{k_l}\right)\\ & \leq& f_1(\mathbf{U}^{k_l})-\langle \mathbf{U}^{k_l},\mathbf{W}^{k_{l-1}}\rangle + f^*_2\left(\mathbf{W}^{k_{l-1}}\right)\\ & =& E\left(\mathbf{U}^{k_l},\mathbf{W}^{k_{l-1}},\mathbf{U}^{k_{l-1}},\mathbf{\Delta}^{k_{l}}\right)- \frac{\alpha}{2}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|^2_F+ \langle\mathbf{\Delta}^{k_{l}},\mathbf{U}^{k_l}-\mathbf{U}^{k_{l-1}}\rangle\\ &\leq& E\left(\mathbf{U}^{k_l},\mathbf{W}^{k_{l-1}},\mathbf{U}^{k_{l-1}},\mathbf{\Delta}^{k_{l}}\right)- \frac{\kappa\alpha}{2}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|^2_F. \end{array} \end{equation} Here, the first equality follows from the convexity of $f_2$ and the fact that $\mathbf{W}^{k_l}\in\partial f_2\left(\mathbf{U}^{k_l}\right)$. The second inequality follows from the convexity of $f_2$ and Young's inequality applied to $f_2$. The last inequality comes from \eqref{eq_63}.\par Due to space limitations, we omit the proofs of statements (3)-(5) and refer the interested reader to \cite{Ref_wen2018proximal,Ref_liu2019refined}. This completes the proof. \end{proof} \begin{theorem}\label{thm:2} Set the tolerance error $\varepsilon = 0$. Let $\{\mathbf{U}^{k_l}\}$ be the stability center sequence generated in the serious steps of s-iPDCA for solving \eqref{eq_11}. Then $\{\mathbf{U}^{k_l}\}$ converges to a stationary point of \eqref{eq_11}. Moreover, $\sum_{l=0}^{\infty}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_{l}}\|_F<\infty$.
\end{theorem} \begin{proof} From Proposition \ref{prop_6}, we know that $\left\{E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right) \right\}$ is nonincreasing and its limit $\Upsilon$ exists. We first show that $E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right) > \Upsilon$, $\forall l>0$. To this end, suppose that there exists $L>0$ such that $E\left(\mathbf{U}^{k_{L+1}}, \mathbf{W}^{k_L}, \mathbf{U}^{k_L},\mathbf{\Delta}^{k_{L+1}}\right) = \Upsilon$; then $E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right) = \Upsilon$ holds true for all $l>L$. From \eqref{eq_58}, we have $\mathbf{U}^{k_l} =\mathbf{U}^{k_L}$, $\forall l \geq L$. This shows that only finitely many serious steps are performed by s-iPDCA, which contradicts the assumption of infinitely many serious steps.\par Next, by Theorem \ref{thm:1}, it suffices to show that $\{\mathbf{U}^{k_l}\}$ converges and that $\sum_{l=0}^{\infty}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F<\infty$. Since $E$ satisfies the KL property at each point in the compact set $\mathbf{\Gamma}\subset\operatorname{dom} \partial E$ and $E\equiv\Upsilon$ on $\mathbf{\Gamma}$, by Lemma \ref{lemma_1}, there exist $\epsilon>0$ and a continuous concave function $\varphi\in\Xi_a$ with $a > 0$ such that \begin{equation}\nonumber \varphi^{\prime}\left(E\left(\mathbf{U},\mathbf{W},\mathbf{V}, \mathbf{T}\right)-\Upsilon\right) \cdot \operatorname{dist}\left(\mathbf{0}, \partial E\left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right)\right) \geq 1.
\end{equation} $\forall \left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right)\in \mathbf{\Theta}$, with \begin{equation}\nonumber \begin{array}{ll} \mathbf{\Theta}=&\left\{\left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right): \operatorname{dist}(\left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right), \mathbf{\Gamma})<\epsilon\right\} \\ &\cap\left\{\left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right): \Upsilon<E\left(\mathbf{U},\mathbf{W},\mathbf{V},\mathbf{T}\right)<\Upsilon+a\right\}. \end{array} \end{equation} Since $\mathbf{\Gamma}$ is the set of accumulation points of the sequence $\left\{\left(\mathbf{U}^{k_{l+1}}, \mathbf{W}^{k_l}, \mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)\right\}$, we know that \begin{equation}\nonumber \lim_{l\rightarrow\infty}\operatorname{dist} \left(\left(\mathbf{U}^{k_{l+1}}, \mathbf{W}^{k_l}, \mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right),\mathbf{\Gamma}\right) = 0. \end{equation} Thus, there exists $\bar{L}>0$ such that \[\operatorname{dist} \left(\left(\mathbf{U}^{k_{l+1}}, \mathbf{W}^{k_l}, \mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right),\mathbf{\Gamma}\right)<\epsilon, \quad \forall l>\bar{L}-2.\] From Proposition \ref{prop_6}, we know that the sequence $\left\{E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right) \right\}$ converges to $\Upsilon$; then there exists $\bar{\bar{L}}>0$ such that \[\Upsilon<E\left(\mathbf{U}^{k_{l+1}},\mathbf{W}^{k_l},\mathbf{U}^{k_l},\mathbf{\Delta}^{k_{l+1}}\right)<\Upsilon+a, \quad \forall l>\bar{\bar{L}}-2.\] Let $\tilde{L} = \max\left\{\bar{L},\bar{\bar{L}}\right\}$ and \[E^{k_{l-1}} = E\left(\mathbf{U}^{k_{l-1}},\mathbf{W}^{k_{l-2}},\mathbf{U}^{k_{l-2}},\mathbf{\Delta}^{k_{l-1}}\right).\] Therefore, $\forall l>\tilde{L}$, we have that $\left(\mathbf{U}^{k_{l-1}}, \mathbf{W}^{k_{l-2}}, \mathbf{U}^{k_{l-2}},\mathbf{\Delta}^{k_{l-1}} \right)\in\mathbf{\Theta}$ and \begin{equation}\label{eq_64} \varphi^{\prime}\left(E^{k_{l-1}}-\Upsilon\right) \cdot
\operatorname{dist}\left(\mathbf{0}, \partial E^{k_{l-1}} \right)\geq 1 \end{equation} hold true. Using the concavity of $\varphi$, we know that \begin{equation}\label{eq_65} \begin{array}{ll} &\left[\varphi\left(E^{k_{l-1}}-\Upsilon\right) -\varphi\left(E^{k_{l+1}}-\Upsilon\right)\right] \cdot \operatorname{dist}(\mathbf{0}, \partial E^{k_{l-1}}) \\ &\geq\varphi^{\prime}\left(E^{k_{l-1}}-\Upsilon\right)\cdot \operatorname{dist}\left(\mathbf{0}, \partial E^{k_{l-1}}\right)\cdot \left(E^{k_{l-1}}-E^{k_{l+1}}\right) \geq E^{k_{l-1}}-E^{k_{l+1}} \end{array} \end{equation} holds true, $\forall l>\tilde{L}$. Here, the last inequality follows from \eqref{eq_64}. Let $\mathbf{\pi}^{k_{l-1}} = \varphi\left(E^{k_{l-1}}-\Upsilon\right)$ and $\mathbf{\pi}^{k_{l+1}} = \varphi\left(E^{k_{l+1}}-\Upsilon\right)$. Combining the results in \eqref{eq_58}, \eqref{eq_59} and \eqref{eq_65}, we have \begin{equation}\label{eq_66} \begin{array}{ll} &\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|^2_F+\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|^2_F\leq \frac{2\rho}{\kappa\alpha}\left(\mathbf{\pi}^{k_{l-1}} -\mathbf{\pi}^{k_{l+1}}\right)\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F. \end{array} \end{equation} Applying the arithmetic mean--geometric mean inequality, we obtain \begin{equation}\nonumber \begin{array}{ll} \left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F&\leq \sqrt{\frac{\rho}{\kappa\alpha}\left(\mathbf{\pi}^{k_{l-1}} -\mathbf{\pi}^{k_{l+1}}\right)-\frac{1}{2}\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F}\cdot\sqrt{2\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F}\\ &\leq \frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{l-1}} -\mathbf{\pi}^{k_{l+1}}\right)-\frac{1}{4}\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F+ \left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F.
\end{array} \end{equation} Then, we have \begin{equation}\label{eq_67} \frac{1}{4}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F\leq \frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{l-1}} -\mathbf{\pi}^{k_{l+1}}\right)+ \frac{3}{4}\left(\left\|\mathbf{U}^{k_{l-1}} -\mathbf{U}^{k_{l-2}}\right\|_F-\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F\right). \end{equation} Summing both sides of \eqref{eq_67} from $l=\tilde{L}$ to $\infty$, we have \begin{equation}\label{eq_68} \begin{array}{lll} \frac{1}{4}\sum_{l=\tilde{L}}^{\infty}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F&\leq& \frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{\tilde{L}-1}} +\mathbf{\pi}^{k_{\tilde{L}}}\right)- \lim_{l\rightarrow\infty}\frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{l}} +\mathbf{\pi}^{k_{l+1}}\right)\\ &&+ \frac{3}{4}\left(\left\|\mathbf{U}^{k_{\tilde{L}-1}} -\mathbf{U}^{k_{\tilde{L}-2}}\right\|_F-\lim_{l\rightarrow\infty}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F\right). \end{array} \end{equation} By applying the facts that $\lim_{l\rightarrow\infty}\frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{l}} +\mathbf{\pi}^{k_{l+1}}\right)=0$ and $\lim_{l\rightarrow\infty}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F=0$, we obtain \begin{equation}\label{eq_69} \begin{array}{lll} \frac{1}{4}\sum_{l=\tilde{L}}^{\infty}\left\|\mathbf{U}^{k_l} -\mathbf{U}^{k_{l-1}}\right\|_F\leq \frac{\rho}{2\kappa\alpha}\left(\mathbf{\pi}^{k_{\tilde{L}-1}} +\mathbf{\pi}^{k_{\tilde{L}}}\right)+ \frac{3}{4}\left\|\mathbf{U}^{k_{\tilde{L}-1}} -\mathbf{U}^{k_{\tilde{L}-2}}\right\|_F<\infty. \end{array} \end{equation} Thus the sequence $\left\{\mathbf{U}^{k_l}\right\}$ converges and $\sum_{l=0}^{\infty}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F<\infty$. Combining this with the results of Theorem \ref{thm:1}, we know that the sequence $\left\{\mathbf{U}^{k_l}\right\}$ generated by s-iPDCA converges to a stationary point of \eqref{eq_11}.
This completes the proof. \end{proof} \begin{theorem}\label{thm:3} Set the tolerance error $\varepsilon = 0$. Let $\{\mathbf{U}^{k}\}$ be the stability center sequence generated by s-iPDCA for \eqref{eq_11}. Then $\{\mathbf{U}^{k}\}$ converges to a stationary point of \eqref{eq_11}. Moreover, $\sum_{k=0}^{\infty}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F<\infty$. \end{theorem} \begin{proof} From the assumption of infinitely many serious steps in this subsection, we know that $\{\mathbf{U}^{k_{l}}\}$ is just the subsequence of $\{\mathbf{U}^{k}\}$ obtained by removing finitely many repeated points. From Theorem \ref{thm:2}, we know that the sequence $\left\{\mathbf{U}^{k}\right\}$ also converges to a stationary point of the DC problem \eqref{eq_11}. Moreover, we have \[\sum_{k=0}^{\infty}\|\mathbf{U}^{k+1}-\mathbf{U}^{k}\|_F = \sum_{l=0}^{\infty}\|\mathbf{U}^{k_{l+1}}-\mathbf{U}^{k_l}\|_F<\infty.\] This completes the proof. \end{proof} \section{Numerical experiments}\label{sec:5} In this section, we perform numerical experiments to show the efficiency of our s-iPDCA for solving the RCLSSDP \eqref{eq_7} and to demonstrate the effectiveness of the RCKSDPP in face recognition. All experiments are performed in Matlab 2020a on a 64-bit PC with an Intel(R) Xeon(R) CPU E5-2609 v2 (2.50 GHz, 2 processors) and 56 GB of RAM.\par Before starting the experiments, we first scale the model in \eqref{eq_6} as follows: let $\tilde{\mathcal{A}}= \frac{\mathcal{A}}{\|\mathbf{A}\|_F}$ and $\tilde{\boldsymbol{b}}= \frac{\boldsymbol{b}}{\|\boldsymbol{b}\|}$; the scaled RCLSSDP is then given by \begin{equation}\label{eq_70} \begin{array}{ll} \min_{\tilde{\mathbf{U}}\in \mathcal{S}^d_+} &J(\tilde{\mathbf{U}}) = \|\tilde{\mathcal{A}}\left(\tilde{\mathbf{U}}\right)- \tilde{\boldsymbol{b}}\|^{2}\\ \text { s.t. }&\langle \tilde{\mathbf{U}},\mathbf{I}\rangle-\|\tilde{\mathbf{U}}\|_{(r)} = 0.
\end{array} \end{equation} Here, $\tilde{\mathbf{U}} = \frac{\|\mathbf{A}\|_F\mathbf{U}}{\|\boldsymbol{b}\|}$. We then use the s-iPDCA or the classical PDCA to solve the scaled DC problem derived from the above scaled RCLSSDP. Once the optimal solution and optimal value of the scaled problem have been computed, those of the original problem are obtained by rescaling. For convenience, we omit the '$\sim$' in the remainder of this paper. Our numerical experiments are divided into two parts. In the first part, we compare the performance of s-iPDCA with the classical PDCA and the PDCA with extrapolation (PDCAe) for solving the RCLSSDP \eqref{eq_7}, where the RCLSSDP comes from dimension reduction for the COIL-20 database. In the second part, we apply the RCKSDPP to dimension reduction for face recognition on the ORL database and the YaleB database, and the results are compared with those of KSSDPP and KPCA.\par The dimension of the images in some databases is much larger than the number of data points, e.g., the ORL database contains 400 face images with $d = 112\times 92$, the cropped YaleB database contains 2414 face images with $d = 168\times 192$ and the cropped COIL-20 database contains 1440 images with $d= 128\times128$. In these situations, we use the kernel trick to reduce the size of the PSD matrix in \eqref{eq_11} from $d \times d$ to $n\times n$. The kernel extension of SDPP (KSDPP) first maps the data from the original input space to a higher (possibly infinite) dimensional feature space $\phi: \mathcal{X}\rightarrow \mathcal{H}$. Then we can rewrite the projection matrix as $\tilde{\mathbf{P}} = \mathbf{\Psi}\mathbf{P}$ with $\mathbf{\Psi} = \left[\phi(\boldsymbol{x}_1),...,\phi(\boldsymbol{x}_n)\right]$. To obtain the KSDPP, we only need to replace the projection matrix $\mathbf{P}$ and the input covariate $\boldsymbol{x}$ in \eqref{eq_1} by $\tilde{\mathbf{P}}$ and $\phi(\boldsymbol{x})$, respectively.
Denote the element of the kernel matrix $\mathbf{K}$ as $\mathbf{K}_{i j}=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle$; then we obtain the KSDPP \begin{equation}\label{eq_71} \begin{array}{ll} \min & J(\mathbf{P})=\frac{1}{n} \sum_{i=1}^{n} \sum_{\mathbf{K}_{j} \in \mathcal{N}\left(\mathbf{K}_{i}\right)}\left( \|\mathbf{P}^T\mathbf{K}_{i} - \mathbf{P}^T\mathbf{K}_{j}\|^2-\left\|\boldsymbol{y}_{i}-\boldsymbol{y}_{j}\right\|^{2}\right)^{2}. \end{array} \end{equation} Similar to the SDPP, the KSDPP can also be equivalently transformed into the rank constrained KSDPP (RCKSDPP). Moreover, we call the kernel extension of SSDPP the KSSDPP. The Gaussian kernel function $\exp\left(\frac{-\|\boldsymbol{x}-\boldsymbol{y}\|^2}{\varsigma t^2}\right)$ is used throughout this paper, where $t$ and $\varsigma>0$ are the kernel parameters. The parameter $t$ is set by Silverman's rule of thumb, i.e., $t = 1.06n^{-0.2}\sqrt{\frac{\Sigma_{i=1}^n\|\boldsymbol{x}_i-\bar{\boldsymbol{x}}\|^2}{n-1}}$. The parameter $\varsigma$ is set as $\varsigma = 2$ in the standard Gaussian kernel function, but in this paper we adjust $\varsigma$ to achieve better numerical performance. \par In this paper, we use a low precision solution of the convex problem \eqref{eq_4} as the initial solution of our s-iPDCA. When a sufficiently small penalty parameter is chosen, problem \eqref{eq_8} can be solved easily with the initial solution from the convex problem \eqref{eq_4}. Based on this observation, we start our s-iPDCA with a very small penalty parameter.\par \subsection{Comparing the performance of s-iPDCA with the classical PDCA and the PDCAe}\label{sec:5_1} In this subsection, we compare the performance of our s-iPDCA with the classical PDCA and the PDCAe for solving the RCLSSDP \eqref{eq_7}. The RCLSSDP in these experiments comes from performing DR on the COIL-20 database.
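For illustration, the kernel construction described above (the Gaussian kernel matrix with the bandwidth $t$ from Silverman's rule of thumb) can be sketched as follows in Python/NumPy. The paper's experiments use Matlab; this translation and the function names are ours.

```python
import numpy as np

def silverman_bandwidth(X):
    # Silverman's rule of thumb: t = 1.06 * n^{-0.2} * sqrt(sum_i ||x_i - xbar||^2 / (n - 1)),
    # where the rows of X are the data points x_i.
    n = X.shape[0]
    s = np.sqrt(np.sum((X - X.mean(axis=0)) ** 2) / (n - 1))
    return 1.06 * n ** (-0.2) * s

def gaussian_kernel_matrix(X, t, varsigma=2.0):
    # K_ij = exp(-||x_i - x_j||^2 / (varsigma * t^2)); varsigma = 2 is the standard choice,
    # the paper tunes it per experiment.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.maximum(d2, 0.0, out=d2)  # guard against tiny negative round-off
    return np.exp(-d2 / (varsigma * t ** 2))
```

The resulting matrix is symmetric positive semidefinite with unit diagonal, which is what the kernelized problem \eqref{eq_71} operates on.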
The projection dimension is set as $r = 5$ and the kernel parameter is set as $\varsigma = 2$. The neighborhood size of k-nearest neighbor (k-nn) for the RCKSDPP is set as $k = \operatorname{round}(\log(n))$. The proximal parameters of the s-iPDCA and PDCA are both set as $\alpha = 5\times10^{-6}$. In order to obtain the same quality of suboptimal objective function value $J$, we set the termination criteria of these three algorithms as $\eta = \frac{|J(\mathbf{U}^{k+1})-J(\mathbf{U}^{k})|}{1+|J(\mathbf{U}^{k})|}\leq 1\times 10^{-7}$ for the PDCAe, $\eta = \frac{|J(\mathbf{U}^{k+1})-J(\mathbf{U}^{k})|}{1+|J(\mathbf{U}^{k})|}\leq 7\times 10^{-8}$ for the PDCA and $\eta = \frac{|J(\mathbf{V}^{k+1})-J(\mathbf{U}^{k})|}{1+|J(\mathbf{U}^{k})|}\leq 7\times 10^{-8}$ for the s-iPDCA, respectively. Then the total solving time (t/s), the number of outer iterations (Iter), the relative variation of the objective function value ($\eta$) and the optimal value of the DC problem \eqref{eq_11} ($J$) are compared; the results are listed in Table \ref{tab:1}.\par \begin{table} \caption{Performance of PDCAe, PDCA and s-iPDCA} \label{tab:1} \begin{tabular}{cccccccccccccccc} \hline\\[2pt] \centering{n}&\multicolumn{4}{c}{ PDCAe}&\multicolumn{4}{c}{PDCA}&\multicolumn{4}{c}{s-iPDCA}\\[2pt] \cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-13}\\[2pt] &$J$&$\eta$&Iter&t/s&$J$&$\eta$&Iter&t/s&$J$&$\eta$&Iter&t/s\\[5pt] \hline \\[2pt] 100&3.89e-3&3.64e-8&391&2.51&4.06e-4&6.78e-8&153&5.68&2.90e-4&1.03e-9&137&1.74\\[10pt] 200&7.94e-4&9.94e-8&468&10.09&2.51e-4&6.95e-8&204&15.84&2.42e-4&6.94e-8&200&6.22\\[10pt] 400&6.91e-3&9.95e-8&807&75.69&3.80e-3&6.98e-8&332&75.37&3.79e-3&6.96e-8&338&48.70\\[10pt] 600&1.30e-2&9.95e-8&939&188.28&8.87e-3&6.99e-8&375&205.58&8.87e-3&6.97e-8&378&100.18\\[10pt] 800&2.16e-2&9.96e-8&980&427.77&1.67e-2&6.99e-8&397&447.39&1.67e-2&6.98e-8&399&216.21\\[10pt] 1000&2.60e-2&9.96e-8&961&713.44&2.60e-2&6.99e-8&503&739.89&2.58e-2&6.99e-8&506&435.15\\[10pt]
1200&4.48e-2&9.98e-8&986&1199.66&4.11e-2&6.99e-8&506&1204.47&4.08e-2&6.99e-8&510&666.69\\[10pt] 1440&4.83e-2&9.99e-8&1083&2128.23&4.45e-2&7.00e-8&530&1484.42&4.45e-2&6.99e-8&530&1057.91\\[5pt] \hline \end{tabular} \end{table} When we use the PDCAe to solve the DC problem \eqref{eq_11}, the following convex subproblem is considered: \begin{equation}\label{eq_72} \min_{\mathbf{U} \in \mathcal{S}^n_+} \langle \frac{2}{n}\mathcal{A}^*(\mathcal{A}(\tilde{\mathbf{U}}^k)- \boldsymbol{b}), \mathbf{U}\rangle+c\langle \mathbf{U},\mathbf{I}\rangle- \langle \mathbf{U}, \mathbf{W}^{k}\rangle+\frac{L}{2}\|\mathbf{U}-\tilde{\mathbf{U}}^k\|_F^2, \end{equation} where $\tilde{\mathbf{U}}^k$ is the extrapolated point, given as $\tilde{\mathbf{U}}^k = \mathbf{U}^k + \beta_k(\mathbf{U}^k- \mathbf{U}^{k-1})$, and the extrapolation parameters are set as $t_{k+1} = \frac{1+\sqrt{1+4t_k^2}}{2}$, $\beta_k = \frac{t_k-1}{t_{k+1}}$ with $t_0=1$. Here $L$ is a positive number no less than the Lipschitz constant of the gradient of $J(\mathbf{U})=\frac{1}{n}\|\mathcal{A}(\mathbf{U})-\boldsymbol{b}\|^2$, set as $L \geq \frac{2}{n}\|\mathbf{A}^{\top}\mathbf{A}\|_2$. Problem \eqref{eq_72} has the closed form solution \[\mathbf{U}^{k+1} = \frac{1}{L}\mathbf{\Pi}_{\mathcal{S}^n_+}(L\tilde{\mathbf{U}}^{k}+\mathbf{W}^{k} -c\mathbf{I}-\frac{2}{n}\mathcal{A}^*(\mathcal{A}(\tilde{\mathbf{U}}^k)- \boldsymbol{b})).\] The algorithmic details of PDCAe can be found in \cite{Ref_liu2019refined,Ref_wen2018proximal}. In addition, the subproblem of PDCA is solved by the ABCD method (Algorithm \ref{alg_abcd}) with termination error $\zeta = 1\times10^{-9}$. When the s-iPDCA is used to solve the RCLSSDP \eqref{eq_11}, we set the initial inexactness bound $\epsilon_1 = 1\times 10^{-4}$.
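Two ingredients of the PDCAe update above can be sketched in isolation: the extrapolation parameter recursion $t_{k+1} = \frac{1+\sqrt{1+4t_k^2}}{2}$, $\beta_k = \frac{t_k-1}{t_{k+1}}$, and the projection $\mathbf{\Pi}_{\mathcal{S}^n_+}$ onto the PSD cone used in the closed form solution. The following Python/NumPy sketch (the experiments themselves use Matlab; the helper names are ours) illustrates both:

```python
import numpy as np

def next_extrapolation(t_k):
    # t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2,  beta_k = (t_k - 1) / t_{k+1};
    # with t_0 = 1 the first extrapolation weight beta_0 is 0.
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2)) / 2.0
    beta = (t_k - 1.0) / t_next
    return t_next, beta

def project_psd(M):
    # Projection onto the PSD cone S^n_+: symmetrize, then clip
    # negative eigenvalues of the eigendecomposition to zero.
    M = (M + M.T) / 2.0
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T
```

With these two pieces, one PDCAe step forms the extrapolated point, assembles the matrix inside $\mathbf{\Pi}_{\mathcal{S}^n_+}(\cdot)$, projects it, and divides by $L$.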
Then, the computation time (t/s), number of iterations (Iter) and optimal value ($J$) of these methods are compared with those of our s-iPDCA.\par We perform dimension reduction experiments on the COIL-20 database, a collection of 1440 gray-scale images of 20 objects, where each object has 72 different images. The images of each object, taken at uniformly distributed rotation angles $\left[0^{\circ}, 5^{\circ}, \cdots,355^{\circ}\right]$, have been cropped to size $128\times 128$. In order to compare the efficiency of our s-iPDCA with the PDCA and PDCAe, we perform DR on the same image groups, e.g., the first 10 images of each of the 20 objects are chosen when $n = 200$, the first 20 images when $n = 400$, etc. \par The results are listed in Table \ref{tab:1}, which shows that our s-iPDCA outperforms the PDCA and the PDCAe for solving the RCLSSDP \eqref{eq_7} in both computation time and optimal value. Since only about 2-3 ABCD steps are performed at each s-iPDCA iteration, the total computation time of s-iPDCA is much less than that of PDCA, even though s-iPDCA may take more outer iterations. As shown in Table \ref{tab:1}, the optimal value of s-iPDCA is smaller than that of the other two algorithms. Moreover, although the subproblem of PDCAe has a closed form solution, PDCAe takes more computation time and iterations than s-iPDCA to solve the RCLSSDP in all situations. The reason is that the proximal parameter $L$ of the PDCAe is chosen as the Lipschitz constant of the gradient of $J$ (or larger) to ensure the convergence of the PDCAe, which limits the convergence speed of the PDCAe. \subsection{Dimension reduction for face recognition}\label{sec:5_2} In this subsection, some numerical experiments on face recognition are performed to show the advantage of our model in practical applications. We randomly divide each face image database into a training set and a testing set, and then proceed with the following steps for face recognition.
Firstly, a projection matrix is obtained by solving the RCLSSDP \eqref{eq_6}. Then, we reduce the dimension of the face images in both the training set and the testing set by applying this projection matrix. Finally, we use the nearest neighbor method as the classifier to identify which individual a projected covariate belongs to. \par Two well-known face databases, ORL and the Extended Yale Face Database B (YaleB) \cite{Ref_georghiades2001few}, are used in our experiments. The ORL database contains 400 images from 40 individuals, where each individual has 10 different images. The size of each image is $92\times 112$. For each individual, the faces in the images are mildly rotated, scaled or tilted. We extract the subset of the YaleB database containing 2,414 frontal pose images of 38 individuals under different illuminations. We crop all the images from the YaleB database to $168\times 192$ pixels.\par For each face database mentioned above, the image set is partitioned into different training and testing sets. We use $G_p/T_q$ to indicate that $p$ face images per individual are randomly selected for training and $q$ face images from the remaining are used for testing. Next, we show the advantage of the RCKSDPP by comparing its performance with KSSDPP and KPCA. The KSSDPP is solved by an SDP solver, the boundary point method \cite{Ref_povh2006boundary}.\par According to our tests, the highest recognition accuracy is obtained when the dimension of the ORL image data is reduced to around 40. Hence we set $r = 40$ when using the RCKSDPP to reduce the dimension of the image data in the face recognition task on the ORL.
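The experimental protocol above (the $G_p/T_q$ per-individual split followed by nearest neighbor classification in the reduced space) can be sketched as follows in Python/NumPy; this is an illustrative translation of the Matlab protocol, and the function names are ours:

```python
import numpy as np

def gp_tq_split(labels, p, rng):
    # G_p/T_q protocol: for each individual, randomly pick p images for
    # training; all remaining images of that individual go to testing.
    train, test = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        train.extend(idx[:p])
        test.extend(idx[p:])
    return np.array(train), np.array(test)

def nearest_neighbor_predict(Z_train, y_train, Z_test):
    # 1-NN classification in the projected (dimension-reduced) space:
    # each test point gets the label of its closest training point.
    d2 = ((Z_test[:, None, :] - Z_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[np.argmin(d2, axis=1)]
```

Recognition accuracy is then the fraction of test labels predicted correctly, averaged over repeated random splits as in Tables \ref{tab:2} and \ref{tab:3}.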
In the DR experiment on the ORL, the neighborhood size of k-nn for the RCKSDPP is set as $k = \operatorname{round}(\log(n))$, the kernel parameter is set as $\varsigma =25$, and the termination condition of the s-iPDCA for solving the RCKSDPP is set as $\frac{\|\mathbf{V}^{k+1}-\mathbf{U}^{k}\|_F}{1+\|\mathbf{U}^{k}\|_F}\leq 1 \times 10^{-4}$, while the termination precision of the boundary point method for solving the KSSDPP is set as $1 \times 10^{-5}$. The average recognition accuracy and the standard deviation over 50 runs of KSSDPP, KPCA and RCKSDPP are shown in Table \ref{tab:2}. Table \ref{tab:2} shows that the RCKSDPP outperforms the KSSDPP and KPCA on the ORL database in all situations. Moreover, the larger the training set, the higher the recognition accuracy obtained by each model.\par \begin{table} \caption{Dimension reduction results: recognition accuracy $\left(\text{Re}\pm\text{std}\%\right)$, average optimal value $(J)$ and average solving time $(t/s)$ on the ORL database} \label{tab:2} \begin{tabular}{cccccccccccccc} \hline\\[2pt] Partitions&\multicolumn{4}{c}{ KSSDPP}&\multicolumn{4}{c}{RCKSDPP}&\multicolumn{2}{c}{KPCA}\\[2pt] \cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-11}\\[2pt] &$\text{Re}\pm\text{std}$&$J$&Iter&t/s&$\text{Re}\pm\text{std}$&$J$&Iter&t/s&$\text{Re}\pm\text{std}$&t/s\\[5pt] \hline\\[2pt] $G_2/ T_8$ & $80.15\pm3.26$&6.53e-2&1523&19.47& $83.22\pm2.57$&2.22e-4&654&8.11&$79.75\pm2.41$ &0.95\\[10pt] $G_3/ T_7$ & $88.11\pm2.56$&9.88e-2&1242&21.67& $90.90\pm 2.16$&1.27e-3&682&17.04&$84.67\pm12.08$ &0.98 \\[10pt] $G_4/ T_6$ & $92.32\pm1.80$&1.33e-1&1366&38.33 & $94.47\pm 1.70$&2.25e-3&747&22.31&$88.59\pm12.58$ &1.00 \\[10pt] $G_5/T_5$ & $95.20\pm0.76$&1.245e-1&1840&72.43 & $96.46 \pm 1.56$&4.6e-3&716&43.42 &$93.39\pm2.16$ &1.01\\[10pt] $G_6/T_4$ & $96.37\pm1.49$&1.47e-1&1751&92.93 & $97.15 \pm 1.66$&5.93e-3&684&50.55&$95.01\pm1.63$ &1.03 \\[10pt] $G_7/ T_3$ &
$97.00\pm0.75$&2.14e-1&1710&158.48 & $98.25 \pm 1.19$&1.15e-2&634&67.38&$96.53\pm1.64$ &1.04 \\[10pt] $G_8/T_2$ & $98.02\pm1.62$&2.11e-1&1276&160.97 & $98.70 \pm 1.58$&1.41e-2&641 &72.55 &$97.15\pm2.02$ &1.05\\[5pt] \noalign{\smallskip}\hline \end{tabular} \end{table} Compared to the ORL database, the YaleB database has varying illuminations, which makes the recognition task on the YaleB database more difficult. In the DR experiment on the YaleB, we set $r = 45$, set the neighborhood size of k-nn for the RCKSDPP as $k = 3$, and in each run of RCKSDPP, KSSDPP and KPCA, 20 individuals are randomly selected for face recognition. A suitable kernel parameter is chosen for each of these three models, e.g., $\varsigma =7$ for the KSSDPP and RCKSDPP, while $\varsigma =2000$ for the KPCA. The termination condition of the s-iPDCA for solving the RCKSDPP is set as $\frac{\|\mathbf{V}^{k+1}-\mathbf{U}^{k}\|_F}{1+\|\mathbf{U}^{k}\|_F}\leq 7 \times 10^{-5}$ and the termination precision of the boundary point method for solving the KSSDPP is set as $1 \times 10^{-5}$. The comparison results on this database are shown in Table \ref{tab:3}: the unsupervised method, KPCA, has much lower recognition accuracy than the RCKSDPP and KSSDPP on the YaleB face database.
Moreover, the RCKSDPP outperforms the KSSDPP, which means that the RCKSDPP is more powerful than the KSSDPP in reducing the dimension of complex data.\par \begin{table} \caption{Dimension reduction results: recognition accuracy $\left(\text{Re}\pm\text{std}\%\right)$, average optimal value $(J)$ and average solving time $(t/s)$ on the YaleB database} \label{tab:3} \begin{tabular}{ccccccccccc} \hline\\[2pt] Partitions&\multicolumn{4}{c}{KSSDPP}&\multicolumn{4}{c}{RCKSDPP}&\multicolumn{2}{c}{KPCA}\\[2pt] \cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-11} \\[2pt] &$\text{Re}\pm\text{std}$&$J$&Iter&t/s&$\text{Re}\pm\text{std}$&$J$&Iter&t/s&$\text{Re}\pm\text{std}$&t/s\\[5pt] \hline\\[2pt] $G_{10}/ T_{50}$& $68.24\pm 1.67$&7.13e-2&1918&42.05& $76.28 \pm 3.17$ &3.03e-3&718&40.70&$55.43 \pm 2.57$ &1.52\\[10pt] $G_{20}/T_{40}$& $79.98\pm 2.53$&1.30e-1&2093&169.73 & $89.03\pm 1.53$&1.64e-2&575&86.21 &$67.13 \pm 2.63$ &1.81\\[10pt] $G_{30}/T_{30}$& $85.70\pm 1.76$&1.52e-1&2212&379.76 & $93.67 \pm 1.14$&2.59e-2&531&150.58&$71.35 \pm 2.65$ &2.15 \\[10pt] $G_{40}/ T_{20}$& $87.10\pm 1.76$&1.50e-1&2389&702.58 & $94.00 \pm 1.77$&3.28e-2&459&193.90&$74.33 \pm 3.00$ &2.46 \\[10pt] $G_{50}/T_{10}$&$87.50\pm 2.89$ &1.91e-1&2476&1494.51 & $95.05 \pm 1.45$&4.43e-2&426&298.95&$74.57 \pm 3.74$ &2.80\\[5pt] \noalign{\smallskip}\hline \end{tabular} \end{table} From Table \ref{tab:2} and Table \ref{tab:3}, we know that the RCKSDPP always outperforms the KSSDPP in both computation time and recognition accuracy. These results demonstrate that the RCKSDPP is effective in reducing the dimension of image data. Moreover, the larger the training set, the higher the recognition accuracy obtained by KSSDPP, RCKSDPP and KPCA. \section{Conclusion}\label{sec:6} In this paper, the supervised distance preserving projections (SDPP) for dimension reduction has been considered, which is reformulated into the rank constrained least squares semidefinite programming (RCLSSDP).
To address the difficulty brought by the rank constraint, we introduce a DC regularization strategy to transform the RCLSSDP into an LSSDP with DC regularization. Under the framework of the DC approach, we propose an efficient algorithm, the inexact proximal DC algorithm with sieving strategy (s-iPDCA), for solving the DCLSSDP. We then show that the sequence generated by s-iPDCA converges globally to a stationary point of the corresponding DC problem. In addition, based on the dual of the strongly convex subproblem of s-iPDCA, an efficient accelerated block coordinate descent (ABCD) method is designed. Moreover, an efficient inexact strategy is employed to solve the subproblem of s-iPDCA, and the low rank structure of the solution is exploited to reduce the storage and computation costs. Finally, we compare our s-iPDCA with the classical PDCA and the PDCA with extrapolation (PDCAe) for solving the RCLSSDP by performing DR on the COIL-20 database; the results show that the s-iPDCA is more efficient for solving the RCLSSDP. We also apply the rank constrained kernel SDPP (RCKSDPP) to perform DR for face recognition on the ORL and YaleB databases; the results demonstrate that the RCKSDPP outperforms the kernel SSDPP (KSSDPP) and the kernel principal component analysis (KPCA) in recognition accuracy. \bibliographystyle{spmpsci_unsrt}
\section{Introduction} The optomechanical experiment proposed by {\it Marshall et al.} \cite{Penrose} presents one of the first systems where one attempts to produce and detect coherent quantum superpositions of spatially separated macroscopic states, and to test various decoherence and wave function collapse models \cite{Diosi84,Diosi1,Penrose98,Bassi1,Diosi,decoherence}. The basic idea here is close in spirit to Schr\"{o}dinger's original discussion \cite{Bose1,Bose2}: a microscopic quantum system (photon), for which the superposition principle is undoubtedly valid, is coupled with a macroscopic object (mirror), in order to transfer interference effects from the former to the latter, creating a macroscopic superposition state. For this goal, one employs a Michelson interferometer with a tiny moveable mirror in one arm. In this way, since the photon displaces through its radiation pressure the tiny mirror, the initial superposition of the photon being in either arms causes the system to evolve into a superposition of states corresponding to two distinct locations of the mirror. Nevertheless, before being able to detect macroscopic superpositions, a serious obstacle has to be overcome: decoherence induced by the mirror itself on the photon. The photon, indeed, cannot be dealt with as an isolated system, because it interacts with the mirror and, hence, decoherence can occur destroying any photon coherent superposition \cite{Zurek,MSR}. In this case, no interference effects can be transferred to the mirror. Mirror induced decoherence has been examined in Ref.\,\cite{Penrose}, considering both, the mirror and the photon, as quantum objects. However, the size of the former ($\approx1\mu$m) far exceeds the scales which are typical of the explored quantum regime. Therefore, a classical description of the mirror should be investigated as an alternative and differences or similarities with a quantum one need to be confronted with the planned experiments. 
The purpose of our paper is to study the decoherence process with the mirror treated as a classical, rather than a quantum subsystem, while the photon obviously retains its quantum nature. Thus, we have to deal with a model comprising a quantum and a classical sector, which coexist and interact. Such a situation needs a particular theoretical framework for a consistent description, namely a quantum-classical hybrid theory. There has been much interest in hybrid theories, both for practical and theoretical reasons. From a theoretical point of view, hybrid theories have originally been devised to provide a different approach to the quantum measurement problem \cite{Sherry}. Furthermore, a quantum-classical hybrid theory may be employed to describe consistently the interaction between quantum matter and classical spacetime \cite{Bou}. See also, for example, the related studies in Refs.\,\cite{CaroSalcedo99,DiosiGisinStrunz,PeresTerno,HallReginatto05,ZhangWu06,Hall08,Manko12}. Even if one is not inclined to modify certain ingredients of quantum theory, there is also clearly practical interest in various forms of hybrid dynamics, in particular in nuclear, atomic, or molecular physics. The Born-Oppenheimer approximation, for example, is based on a separation of interacting slow and fast degrees of freedom of a compound object. The former are treated as approximately classical, the latter as of quantum mechanical nature. Moreover, mean field theory, based on the expansion of quantum mechanical variables into a classical part plus quantum fluctuations, leads to another approximation scheme and another form of hybrid dynamics. This has been reviewed more generally for macroscopic quantum phenomena in Ref.\,\cite{Gnac}. In all these cases hybrid dynamics is considered as an approximate description of an intrinsically quantum mechanical object. 
Such considerations are and will become increasingly important for the precise manipulation of quantum mechanical objects by means that are, for all practical purposes, classical, especially in the mesoscopic regime. In particular, we recall the hybrid theory elaborated in \cite{Elze1,Elze2,Elze3,Elze4}, which overcomes the known impediments found in earlier work. For a closely related approach, see also Ref.\,\cite{Buric}. Herein, the classical sector is described by standard analytical classical mechanics, while the description of the quantum sector is based on Heslot's representation \cite{Heslot}, {\it cf.} Ref.\,\cite{Strocchi}, which allows one to express quantum mechanics in a Hamiltonian framework. As in classical physics, it is then possible to represent states of the quantum sector in terms of pairs of real time-dependent functions, rather than vectors; these functions play the role of canonical variables. Furthermore, the observables are no longer given by self-adjoint operators, but are represented by real quadratic functions of these canonical variables. In this way, an entire hybrid system can be studied in one uniform scheme. In Section\,$\ref{HH}$, we will employ this hybrid theory to specify a Hamiltonian, which constitutes the starting point for the analysis of the dynamics of the {\it Marshall et al.} optomechanical system. -- In Section\,$\ref{EM}$, we will derive and solve the corresponding equations of motion. -- The solutions of these equations will be employed to calculate the off-diagonal matrix elements of the reduced density matrix for the photon and allow the evaluation of the decoherence induced by the classical mirror. The analysis of this decoherence process and the discussion of the results obtained will be presented in Section\,$\ref{MIDec}$. -- Finally, in Section\,$\ref{PIP}$, we will relate the off-diagonal matrix elements to quantities that can be determined experimentally.
-- In the concluding section, implications of our results shall be discussed. \section{The quantum-classical hybrid Hamiltonian}\label{HH} The Hamiltonian for the optomechanical interferometer system proposed by {\it Marshall et al.} \cite{Penrose} consists of three parts which have their correspondents in our hybrid description: terms related to the photon (the quantum sector), terms associated with the mirror (the classical sector), and the hybrid coupling of both sectors. Since the photon is treated quantum mechanically, it is represented by a Hamilton operator $\hat{H}_{QM}$: \begin{equation}\label{HQM} \hat{H}_{QM}=\hbar\omega (\hat{c}^{\dagger}_{A}\hat{c}_{A} +\hat{c}^{\dagger}_{B}\hat{c}_{B}) \;\;, \end{equation} where $\hat{c}^{\dagger}_{A}$ and $\hat{c}_{A}$, respectively, are creation and annihilation operators for a photon in arm A, and correspondingly for arm B. -- Instead, the mirror is considered here as a classical subsystem. While it was described as a quantum harmonic oscillator in Ref.\,\cite{Penrose}, it is now represented by a classical one with Hamiltonian $H_{CL}$: \begin{equation}\label{HCL} H_{CL}=\frac{p^{2}}{2M}+\frac{M\Omega^{2}}{2}x^{2} \;\;, \end{equation} in which $x$ and $p$ denote position and momentum of the mirror, respectively. -- The hybrid coupling $\hat{I}$, {\it i.e.} the interaction between photon and mirror, incorporates both a quantum operator and a classical variable, for photon and mirror, respectively. Being essentially related to the radiation pressure of the photon, as shown in detail in Refs.\,\cite{Man1,Man2,Pace,Law1,Law2}, we have: \begin{equation}\label{HIT} \hat{I}=-\hbar gx\hat{c}^{\dagger}_{A}\hat{c}_{A} \;\;,\end{equation} where $g:=\omega /L$; the minus sign reflects the fact that the cavity resonance frequency decreases when the mirror displacement $x$ increases the effective cavity length.
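For orientation, the magnitudes involved are easily sketched numerically. In the following minimal example, the photon wavelength, cavity length, and target coupling strength are our own illustrative assumptions (not values from Ref.\,\cite{Penrose}); $\kappa^2:=\hbar g^2/(2M\Omega^3)$ is the dimensionless coupling introduced in Section\,$\ref{MIDec}$:

```python
import numpy as np

hbar = 1.054571817e-34            # J s

# Purely illustrative numbers (our own choice, not from the proposal):
lam   = 630e-9                    # photon wavelength [m]
L     = 0.05                      # cavity length [m]
Omega = 2*np.pi*500.0             # mirror angular frequency [rad/s]

omega = 2*np.pi*2.99792458e8/lam  # photon angular frequency [rad/s]
g     = omega/L                   # hybrid coupling constant g = omega/L

# Mirror mass for which kappa^2 = hbar g^2/(2 M Omega^3) equals one
M = hbar*g**2/(2*Omega**3)
print(f"g = {g:.3e} rad/(s m),  M(kappa=1) = {M:.2e} kg")
```

With these assumed numbers the required mirror mass comes out in the picogram range, i.e.\ consistent with a micron-sized mirror.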
Following Refs.\,\cite{Elze1,Elze2,Elze3,Elze4}, we obtain the full hybrid Hamiltonian $H$ as: \begin{equation}\label{FHH} H=H_{CL}+\bra{\psi}(\hat{H}_{QM}+\hat I)\ket{\psi} \;\;, \end{equation} in which $\ket{\psi}$ denotes a generic photon state, always satisfying $\braket{\psi\pton{t}}{\psi\pton{t}}=1$\,. In order to evaluate the expectation values in Eq.\,(\ref{FHH}), we consider the orthonormal basis in the Hilbert space of the photon formed by the two vectors $\ket{1,0}$ and $\ket{0,1}$, representing the states in which the photon is either in arm A or B of the interferometer; since we consider only one-photon states, this basis is also complete. Accordingly, we expand the state $\ket{\psi}$ as follows: \begin{equation}\label{Expansion} \ket{\psi}=\frac{1}{\sqrt{2\hbar}}\pton{X_{A}+iP_{A}}\ket{1,0} +\frac{1}{\sqrt{2\hbar}}\pton{X_{B}+iP_{B}}\ket{0,1} \;\;, \end{equation} where $X_{A}$, $X_{B}$, $P_{A}$, and $P_{B}$ are real time-dependent functions, which play the role of canonical variables \cite{Elze1,Elze2,Elze3,Elze4}. This gives: \begin{eqnarray} H&=&\frac{p^2}{2M}+\frac{M\Omega^2}{2}x^2 \nonumber \\ [1ex] \label{FinHH} &\;&+\frac{\omega}{2}\sum_{i=A,B}\pton{X^2_{i}+P^2_{i}} -\frac{g}{2}x\pton{X^2_{A}+P^2_{A}} \;\;. \end{eqnarray} This is the full hybrid Hamiltonian for the {\it Marshall et al.} optomechanical system: a real function of the position and momentum of the mirror and of the canonical variables pertaining to the photon, resembling a completely classical formalism. Nevertheless, the quantum nature of the photon is described correctly according to the representation of Eq.\,(\ref{Expansion}). \subsection{Comments} The scenario \cite{Penrose} on which our work is based, resulting in the hybrid Hamiltonian (\ref{FinHH}), may be related to real experiments. We comment on two important aspects.
Assuming that the {\it single photon within} the interferometer typically will be produced by a source {\it outside} of the cavity under consideration, the interferometer should be treated as an open system. This has been pointed out in Ref.\,\cite{Chen1} and studied in a fully quantum mechanical approach (invoking the rotating wave approximation when coupling the photon cavity mode inside to the continuum modes outside). Depending on the bandwidth of the initial photon wave function as compared to the cavity linewidth, the photon may obtain simultaneously nonzero amplitudes to be inside and outside of the cavity. -- This will not affect the hybrid coupling (\ref{HIT}), on which we have presently focussed attention. However, it introduces an additional time dependence into the dynamics. Our description of the photon sector can naturally be adapted to this, as before \cite{Chen1}, while the coupling to the considered classical mirror remains the same. Anticipating the results obtained in the following, we expect similar effects as observed in Ref.\,\cite{Chen1}, which remains to be confirmed by a detailed study. Furthermore, one wonders whether our present derivation must be generalized, in order to allow for the simultaneous presence of {\it several photons} inside the cavity, which might be experimentally of interest. Let us consider for illustration the interferometer as a closed system and assume a two-photon state inside. Each photon can be present in arm A or B. Any resulting two-photon state can be embedded in a four-dimensional Hilbert space (e.g.\ spanned by Bell states) and be expanded analogously to Eq.\,(\ref{Expansion}) above. (This has recently been applied in a somewhat similar setting of two q-bits interacting with a classical oscillator \cite{Lorenzo}.) Correspondingly, there will be a proliferation of terms in the resulting hybrid Hamiltonian.
Nevertheless, the bilinearity of the hybrid Hamiltonian in the canonical coordinates, as in Eq.\,(\ref{FinHH}), will not be affected. The character of the resulting coupled equations of motion, cf.\ Section\,$\ref{EM}$, will not change, only their number will. However, for high intensity of the cavity field, genuinely nonlinear multi-photon interactions are to be expected, rendering the hybrid coupling constant $g$ in Eq.\,(\ref{HIT}) effectively time and photon state dependent. This is clearly an interesting regime to be studied, concerning the comparison between fully quantum mechanical and hybrid descriptions in particular. This leads us to mention another hybrid system currently attracting much interest \cite{Carlip,BassiEtAl}, namely a massive body (considered quantum mechanical) interacting with the gravitational field (considered classical and nonrelativistic), thus interchanging the roles of quantum mechanical and classical degrees of freedom as compared to the situation presently studied. It has recently been shown that this hybrid can be represented by the nonlinear Schr\"odinger-Newton equation, in the limit that the extended body is composed of many ``mass-concentrating'' constituents (atoms or similar) \cite{Chen2}. It remains to be further examined to what extent such effectively {\it nonlinear} descriptions of quantum-classical hybrids are consistent in the sense of criteria discussed earlier \cite{CaroSalcedo99,DiosiGisinStrunz,PeresTerno,Hall08,Elze1,Elze2,Elze3,Elze4,Buric}. \section{The hybrid equations of motion}\label{EM} Having specified the hybrid Hamiltonian for the {\it Marshall et al.} optomechanical interferometer system, we are now ready to derive the corresponding equations of motion. Following the procedure defined by the appropriate Poisson bracket structure for the algebra of observables and the Hamiltonian, in particular, we will obtain the equations of motion and their solutions here, {\it cf.} Refs.\,\cite{Elze1, Elze2, Elze3,Elze4}.
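Before going through the derivation by hand, note that the bracket computations are mechanical and can be cross-checked symbolically. A minimal sketch (our own illustration, assuming the sympy library; the radiation-pressure coupling is taken to enter the Hamiltonian as $-(g/2)\,x\pton{X_A^2+P_A^2}$):

```python
import sympy as sp

w, g, M, Om = sp.symbols('omega g M Omega', positive=True)
XA, PA, XB, PB, x, p = sp.symbols('X_A P_A X_B P_B x p', real=True)

# Hybrid Hamiltonian with radiation-pressure coupling -(g/2) x (X_A^2 + P_A^2)
H = p**2/(2*M) + M*Om**2*x**2/2 \
    + w/2*(XA**2 + PA**2 + XB**2 + PB**2) - g/2*x*(XA**2 + PA**2)

# Hamilton's equations generated by the total Poisson bracket {., .}_X
eqs = {
    'dX_A/dt': sp.diff(H, PA), 'dP_A/dt': -sp.diff(H, XA),
    'dX_B/dt': sp.diff(H, PB), 'dP_B/dt': -sp.diff(H, XB),
    'dx/dt':   sp.diff(H, p),  'dp/dt':   -sp.diff(H, x),
}
for name, rhs in eqs.items():
    print(name, '=', sp.factor(rhs))
```

The printed right-hand sides reproduce the coupled-oscillator structure derived in the following subsection.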
\subsection{Derivation of the equations} We begin with the photon in arm A. Its equations of motion are formally given by: \begin{equation}\label{AEQ1} \der{X_{A}}{t}=\{X_{A},H\}_{X}\quad,\quad\der{P_{A}}{t}=\{P_{A},H\}_{X} \;\;, \end{equation} with $\{f,g\}_{X}:=\{f,g\}_{CL}+\{f,g\}_{QM}$. Here the ``quantum mechanical'' Poisson bracket is given by $\{f,g\}_{QM}:=\sum_{i=A,B}\pton{\der{f}{X_{i}}\der{g}{P_{i}}-\der{g}{X_{i}}\der{f}{P_{i}}}$, while the classical Poisson bracket $\{f,g\}_{CL}$ is the usual one, following the rules of the hybrid theory constructed in Ref.\,\cite{Elze1}. Inserting Eq.\,(\ref{FinHH}) into Eq.\,(\ref{AEQ1}), we obtain: \begin{equation}\label{AEQ2} \der{X_{A}}{t}=\omega P_{A}-gxP_{A}\;\; ,\;\;\; \der{P_{A}}{t}=-\omega X_{A}+gxX_{A} \;\;. \end{equation} In the same manner, we obtain the equations of motion for the photon in arm B: \begin{equation}\label{BEQ} \der{X_{B}}{t}=\omega P_{B}\;\; ,\;\;\; \der{P_{B}}{t}=-\omega X_{B} \;\;. \end{equation} Finally, for the mirror, we similarly obtain: \begin{equation}\label{MEQ} \der{x}{t}=\frac{p}{M}\;\; ,\;\;\; \der{p}{t}=-M\Omega^2x+\frac{g}{2}\pton{X_{A}^2+P_{A}^2} \;\;. \end{equation} Remarkably, the equations of motion for our quantum-classical hybrid system have a completely classical appearance. This will be employed in the following derivation of their solutions. \subsection{Solution of the hybrid equations} The equations obtained in the previous subsection describe a set of coupled harmonic oscillators, which can be solved analytically. In particular, while the equations for the photon in arm B are completely decoupled, those associated with the mirror and the photon in arm A are coupled. This is as expected, since the photon in arm B does not interact with anything, while, when it is in arm A, it inevitably interacts with the mirror. The coupling term in Eq.\,(\ref{MEQ}), $\propto\pton{X_{A}^2+P_{A}^2}$, at first sight complicates the solution of the equations.
However, this term is a constant of motion and, therefore, can be replaced by its initial value. We note that: \begin{equation}\label{Const} \frac{1}{2}\der{\pton{X^2_{A}+P^2_{A}}}{t}=X_{A}\der{X_{A}}{t}+P_{A}\der{P_{A}}{t}=0 \;\;, \end{equation} using Eq.\,(\ref{AEQ2}). In order to determine the value of this constant of motion, we consider its physical meaning: it simply describes the probability to find the photon in arm A, expressed in the representation of Eq.\,(\ref{Expansion}). We assume a fifty-fifty beam splitter in the interferometer \cite{Penrose}. Therefore, this probability is equal to $1/2$: \begin{eqnarray} \frac{1}{2}&=&\bra{\psi}\hat{c}^{\dagger}_{A}\hat{c}_{A}\ket{\psi} =\frac{1}{2\hbar}\pton{X^2_{A}+P^2_{A}}\bra{1,0}\hat{c}^{\dagger}_{A}\hat{c}_{A}\ket{1,0} \nonumber \\ [1ex] \label{norma} &=&\frac{1}{2\hbar}\pton{X^2_{A}+P^2_{A}}\braket{1,0}{1,0}= \frac{1}{2\hbar}\pton{X^2_{A}+P^2_{A}} , \end{eqnarray} since $\braket{1,0}{1,0}=1$. Thus, we have: $X^2_{A}+P^2_{A}=\hbar$\,. Similarly, we find: $X^2_{B}+P^2_{B}=\hbar$\,. Making use of this in Eq.\,(\ref{MEQ}), the equations for the mirror become: \begin{equation}\label{SM2} \der{x}{t}=\frac{p}{M}\;\; ,\;\;\; \der{p}{t}=-M\Omega^2x+\frac{\hbar g}{2} \;\;, \end{equation} which can easily be solved to give: \begin{eqnarray}\label{xpmirr} x\pton{t}&=&A\sin\pton{\Omega t+\phi}+\frac{\hbar g}{2M\Omega^2}\;\; , \\ [1ex] \label{pxmirr} p\pton{t}&=&AM\Omega \cos\pton{\Omega t+\phi} \;\;, \end{eqnarray} where $A$ and $\phi$, respectively, represent amplitude and phase of the oscillation of the mirror, which are determined by the initial conditions.
-- With: \begin{equation}\label{xp0} x\pton{0}=A\sin\pton{\phi}+\frac{\hbar g}{2M\Omega^2} \;\; ,\;\;\; p\pton{0}=AM\Omega \cos\pton{\phi} \;\;, \end{equation} and abbreviating $x_0\equiv x\pton{0}$, $p_0\equiv p\pton{0}$, we find: \begin{eqnarray}\label{Am} A\pton{x_{0},p_{0}}&=& \frac{p_{0}}{M\Omega\cos \big [\phi (x_0,p_0)\big ]} \;\;, \\ [1ex] \label{Phim} \phi\pton{x_{0},p_{0}}&=& \arctan \big [(M\Omega x_{0}-\frac{\hbar g}{2\Omega})/p_0\big ] \;\;. \end{eqnarray} Knowing the solutions of the equations for the mirror, we can solve those for the photon in arm A, Eqs.\,(\ref{AEQ2}). It is straightforward to obtain: \begin{eqnarray}\label{XAsol} \frac{X_{A}\pton{t}}{\sqrt\hbar}=\cos\big (\Omega_-t +\frac{A g}{\Omega}\big [\cos\pton{\Omega t+\phi}-\cos\pton{\phi}\big ]\big ) ,\; \\ [1ex] \label{PAsol} \frac{P_{A}\pton{t}}{\sqrt{\hbar}}=-\sin\big (\Omega_- t +\frac{Ag}{\Omega}\big [\cos\pton{\Omega t+\phi}-\cos\pton{\phi}\big ]\big ) ,\; \end{eqnarray} conveniently introducing $\Omega_-$ and $k$: $\Omega_-:=\omega-k^{2}\Omega =\omega -\hbar g^2/(2M\Omega^2)$\,, {\it i.e.} $k^2:=\hbar g^2/(2M\Omega^3)$\,. Finally, regarding the photon in arm B, we find: \begin{equation}\label{FinalB} X_{B}\pton{t}=\sqrt{\hbar}\cos\pton{\omega t}\quad,\quad P_{B}\pton{t}=-\sqrt{\hbar}\sin\pton{\omega t} \;\;. \end{equation} Here we recalled the normalization of the photon state for arm B, which follows similarly as in Eq.\,(\ref{norma}), in order to fix the amplitudes. The phase, instead, is determined by choosing: $X_{B}\pton{0}=\sqrt{\hbar}$ and $P_{B}\pton{0}=0$\,. \section{The mirror induced decoherence}\label{MIDec} In this section, we report quantitative results concerning the decoherence induced by the classical mirror on the quantum photon. The relevant information is contained in the off-diagonal elements of the reduced density matrix for the photon.
Presently, these matrix elements are given by: \begin{equation}\label{odme1} \rho_{AB}=\braket{1,0}{\psi}\braket{\psi}{0,1}=\rho_{BA}^{\; *} \;\;, \end{equation} {\it i.e.} the off-diagonal element of $\ket{\psi}\bra{\psi}$ in the basis introduced before. Inserting Eq.\,(\ref{Expansion}) here, we obtain: \begin{equation}\label{odme2} \rho_{AB}\pton{t}=\frac{1}{2\hbar}\pton{X_{A}+iP_{A}}\pton{X_{B}-iP_{B}} \;\;, \end{equation} and recalling Eqs.\,(\ref{XAsol}), (\ref{PAsol}), and (\ref{FinalB}), this becomes: \begin{eqnarray} &\;&\rho_{AB}\pton{t;x_{0},p_{0}}=\frac{1}{2}\exp\big\{i\omega t\big\} \nonumber \\ [1ex] \label{finalrho} &\cdot&\exp\big\{-i\big [\Omega_- t +\frac{Ag}{\Omega}[\cos\pton{\Omega t +\phi}-\cos\pton{\phi}]\big ]\big\} ,\;\; \end{eqnarray} where $\Omega_-=\omega-k^{2}\Omega$, $k^2:=\hbar g^2/(2M\Omega^3)$, and the dependence on $x_{0}$ and $p_{0}$ is through $A$ and $\phi$. Taking the modulus of the matrix element $\rho_{AB}$, we obtain the mirror induced decoherence as a function of time, {\it i.e.} the visibility of interference of the photon. Since the result of Eq.\,(\ref{finalrho}) is a pure phase, its modulus is a constant: \begin{equation} |\rho_{AB}|=\frac{1}{2} \;\;. \end{equation} Thus, for pointlike initial conditions in the classical phase space of the mirror, no mirror induced decoherence occurs. In this case, the photon remains in its initial pure state, the coherent superposition of being in either of the arms of the Michelson interferometer, and the corresponding interference effects are preserved. We emphasize that this result concerns the situation where the initial conditions of the classical mirror are perfectly known and are represented by a point in phase space. It shows a clear and expected difference between the present hybrid and the purely quantum approach \cite{Penrose}. The latter results in mirror induced decoherence, even before thermal averaging. However, suppose that there is some loss of information, with the initial position and momentum of the mirror known only imprecisely.
Instead of sharp initial conditions, we may have a probability distribution over phase space. -- For instance, we consider a Boltzmann distribution: \begin{equation}\label{MB} f\pton{x_{0},p_{0}}:=\frac{\beta\Omega}{2\pi} \exp\big\{-\beta\pton{\frac{p^{\; 2}_{0}}{2M}+\frac{M\Omega^2x^{\; 2}_{0}}{2}}\big\} \;\;, \end{equation} depending on the inverse temperature, $\beta :=1/k_BT$. In this case, the physically relevant matrix element is given by the thermal average of the result of Eq.\,(\ref{finalrho}): \begin{equation}\label{rhof} <\rho_{AB}>_{f}=\int\rho_{AB}\pton{t;x_{0},p_{0}}f\pton{x_{0},p_{0}}\mbox{d}x_{0}\mbox{d}p_{0} \;\;. \end{equation} In order to calculate this integral, we rewrite it more conveniently, using the abbreviations $\kappa^2:=\hbar g^2/(2M\Omega^3)$ (previously introduced in Ref.\,\cite{Penrose}; note $\kappa\equiv k$), $\;\theta\pton{t}:=\Omega t-\sin\pton{\Omega t}$\,, and: \begin{eqnarray}\label{intg} h_1\pton{x_{0},t}&:=&-\frac{\beta M\Omega^2}{2}x^{\; 2}_{0} +i\frac{g}{\Omega}\sin\pton{\Omega t}x_{0} \;\;, \\ [1ex] \label{inth} h_2\pton{p_{0},t}&:=&-\frac{\beta}{2M}p^{\; 2}_{0} -i\frac{g}{M\Omega^2}\big [\cos\pton{\Omega t}-1\big ]p_{0} .\;\;\; \end{eqnarray} Thus, we obtain from Eq.\,(\ref{rhof}): \begin{eqnarray} &\phantom .&<\rho_{AB}\pton{t}>_{f}\;=\; \frac{\beta\Omega}{4\pi}\exp\{ i\kappa^2\theta\pton{t} \} \nonumber \\ [1ex] \label{rhof1} &\phantom .&\;\;\cdot \int^{\infty}_{-\infty} \exp\{ h_1\pton{x_{0},t}\} \mbox{d}x_{0} \int^{\infty}_{-\infty} \exp\{ h_2\pton{p_{0},t}\} \mbox{d}p_{0} \nonumber \\ [1ex] \label{rhof2} &=&\frac{1}{2}\exp\big\{ i\kappa^2 [\Omega t-\sin\pton{\Omega t}] -z_{CL}^2[1-\cos\pton{\Omega t}]\big\} ,\;\;\;\;\;\; \end{eqnarray} with $z_{CL}^2:=g^2/(\beta M\Omega^4)=2\kappa^{2}k_BT/(\hbar\Omega )$.
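The Gaussian integrations above can be cross-checked by direct Monte Carlo sampling of the distribution of Eq.\,(\ref{MB}); since the deterministic phase drops out of the modulus, it suffices to average the oscillatory factor involving $x_0$ and $p_0$. A sketch in reduced units ($M=\Omega=g=\beta=1$ are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Reduced units (illustrative): M = Omega = g = beta = 1
M, Omega, g, beta = 1.0, 1.0, 1.0, 1.0
z2 = g**2/(beta*M*Omega**4)                      # z_CL^2

n = 400_000
x0 = rng.normal(0.0, np.sqrt(1.0/(beta*M*Omega**2)), n)   # Boltzmann marginals
p0 = rng.normal(0.0, np.sqrt(M/beta), n)

results = []
for tau in (0.5, 1.5, np.pi):                    # tau = Omega * t
    # x0-, p0-dependent phase of rho_AB; the global phase cancels in |.|
    phase = (g/Omega)*np.sin(tau)*x0 - (g/(M*Omega**2))*(np.cos(tau) - 1.0)*p0
    mc = 0.5*np.abs(np.mean(np.exp(1j*phase)))   # Monte Carlo visibility
    exact = 0.5*np.exp(-z2*(1.0 - np.cos(tau)))  # closed-form result
    results.append((tau, mc, exact))
    print(f"tau={tau:.3f}  MC={mc:.4f}  exact={exact:.4f}")
```

For a few times $\tau=\Omega t$, the sampled visibility agrees with the closed form to within the Monte Carlo error.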
In order to evaluate the mirror induced decoherence, we calculate the modulus of this averaged matrix element (commonly referred to as {\it ``visibility''}): \begin{equation}\label{mod} |<\rho_{AB}\pton{t}>_{f}|=\frac{1}{2}\exp\big\{-z_{CL}^2[1-\cos\pton{\Omega t}]\big\} \;\;. \end{equation} This shows the decoherence induced by the classical mirror on the quantum photon, taking into account that the mirror initial conditions are thermally distributed. Fig.\,\ref{mid} shows the temporal behaviour of the visibility of interference. As in the purely quantum case, the visibility has a maximum at the initial instant and then decreases, due to the mirror induced decoherence. However, after half a period of the mirror oscillation the coherence of the photon revives, and the visibility returns to its maximum value exactly at $t=2\pi /\Omega$. \begin{figure}[!htpb] \begin{center} \includegraphics[width=7.3cm, height=7.5cm, keepaspectratio]{DT-3-4A.pdf} \includegraphics[width=7.3cm, height=7.5cm, keepaspectratio]{DT-3-4B.pdf} \caption{\label{mid} $|<\rho_{AB}\pton{t}>_{f}|$ as a function of $\tau :=\Omega t$, for $\kappa =1$, $\Omega=2\pi\cdot 500$Hz. The dashed line (purple) is for $T=10^{-3}$K, the full line (blue) for $T=10^{-4}$K.} \end{center} \end{figure} It is important to realize that the result of Eq.\,(\ref{mod}), incorporating the thermal average over classical initial conditions, has {\it exactly} the same form as the earlier one obtained in Ref.\,\cite{Penrose} for a quantum mechanical mirror as part of the interferometer. In particular, we find the {\it same time dependence}.
The only difference is that our parameter $z_{CL}$, defined after Eq.\,(\ref{rhof2}), has to be replaced by the corresponding parameter $z_{QM}$ given by: \begin{eqnarray} z_{QM}^2&=&2\kappa^2\Big (\bar n(\hbar\Omega /k_BT)+1/2\Big ) \nonumber \\ [1ex] \label{zrel} &=& z_{CL}^2(\hbar\Omega /k_BT)\Big (\bar n(\hbar\Omega /k_BT)+1/2\Big ) \;\;, \end{eqnarray} with the Bose-Einstein distribution $\bar n(x):=(\exp (x)-1)^{-1}$. Here we incorporated the appropriate finite temperature correction indicated (but not explicitly given) in Ref.\,\cite{Penrose} for the quantum mechanical mirror. Thus, we find that in the {\it high-temperature limit}, with $\hbar\Omega /k_BT\ll 1$, both parameters coincide to leading order, \begin{equation}\label{hT} z_{QM}^2=z_{CL}^2\Big (1+\frac{1}{12}[\hbar\Omega /k_BT]^2 +\mbox{O}([\hbar\Omega /k_BT]^4)\Big ) \;\;, \end{equation} and, consequently, the visibilities given by the right-hand side of Eq.\,(\ref{mod}), with either $z_{CL}$ or $z_{QM}$ inserted, become equal as well, for all times! More generally, considering the ratio $\eta$ of the result of Eq.\,(\ref{mod}) divided by the quantum mechanical result from \cite{Penrose}, $\eta :=|<\rho_{AB}\pton{t}>_{f}|/|<\rho_{AB}\pton{t}>_{QM}|$, we find numerically that -- for experimentally relevant temperatures $10^{-6}\mbox{K}<T<10^{-3}\mbox{K}$ and mirror frequency $\Omega=2\pi\cdot 500$Hz -- the deviation of both results can be correspondingly bounded by $10^{-2}>\eta -1>10^{-6}$, indeed a surprising result. Furthermore, due to the identical time dependence, $\propto 1-\cos (\Omega t)$, in the exponent on the right-hand side of Eq.\,(\ref{mod}) and the corresponding quantum mechanical result, the deviation of both visibilities goes to zero {\it always} when $\Omega t$ approaches $2\pi$ times an integer, which is the experimentally interesting region close to maximal visibility, cf.\ Fig.\,1. Since the visibility of Eq.\,(\ref{mod}) shows, for sufficiently short times ($\tau :=\Omega t\ll 1$, cf.
Fig.\,\ref{mid}), a Gaussian decay, we may define the characteristic {\it decoherence time} $t_{CL}$ by: \begin{equation}\label{dectime} z_{CL}^2[1-\cos (\Omega t)]\approx z_{CL}^2(\Omega t)^2/2 =:(t/t_{CL})^2/2 \;\;, \end{equation} and, correspondingly, for the case of a quantum mechanical mirror. This gives us the relevant decoherence times $t_{CL}=(z_{CL}\Omega )^{-1}$ and $t_{QM}=(z_{QM}\Omega )^{-1}$. Thus, we obtain the following relation: \begin{eqnarray} t_{CL}&=&t_{QM}\cdot z_{QM}/z_{CL} \nonumber \\ [1ex] \label{decTrel} &=&t_{QM} (\hbar\Omega /k_BT)^{1/2}\Big (\bar n(\hbar\Omega /k_BT)+1/2\Big )^{1/2} ,\;\;\; \end{eqnarray} using Eq.\,(\ref{zrel}). In analogy to Eq.\,(\ref{hT}), we conclude here that the decoherence times coincide in the high-temperature limit. -- For experimentally relevant parameters \cite{Penrose}, i.e., frequencies around $\Omega =2\pi\cdot 500$Hz, while maintaining $\kappa\approx 1$, and temperatures in the interval $10^{-6}\mbox{K}\stackrel{<}{\sim}T\stackrel{<}{\sim}10^{-3}\mbox{K}$, such that $2.4\cdot 10^{-2}\stackrel{>}{\sim}\hbar\Omega /k_BT\stackrel{>}{\sim}2.4\cdot 10^{-5}$, we have that the discriminating factor $z_{QM}/z_{CL}$ in Eq.\,(\ref{decTrel}) deviates from 1 by less than $10^{-4}$. Therefore, the decoherence times $t_{CL}$ and $t_{QM}$ are the same to such accuracy that they will be difficult to distinguish experimentally, at present. For all practical purposes, the similarity of the features of mirror induced decoherence is a robust result: as we have demonstrated, it holds whether mirror plus photon are described by quantum-classical hybrid theory (classical mirror) or as a fully quantum mechanical system \cite{Penrose}. \section{The probability to detect a photon}\label{PIP} The mirror induced decoherence has been evaluated in terms of an off-diagonal matrix element of the reduced density operator for the photon.
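The smallness of the discriminating factor quoted above is reproduced in a few lines (using CODATA values of the constants and $\Omega=2\pi\cdot 500\,$Hz as in the text):

```python
import numpy as np

hbar = 1.054571817e-34        # J s
kB   = 1.380649e-23           # J/K
Omega = 2*np.pi*500.0         # mirror frequency, as in the text

def z_ratio(T):
    """z_QM/z_CL = sqrt(x*(nbar(x) + 1/2)), with x = hbar*Omega/(kB*T)."""
    x = hbar*Omega/(kB*T)
    nbar = 1.0/np.expm1(x)    # Bose-Einstein occupation number
    return np.sqrt(x*(nbar + 0.5))

for T in (1e-6, 1e-5, 1e-4, 1e-3):
    print(f"T = {T:.0e} K :  z_QM/z_CL - 1 = {z_ratio(T) - 1.0:.2e}")
```

Over the whole quoted temperature interval, the deviation of the ratio from unity indeed stays below $10^{-4}$.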
In this section, we relate this quantity to the experimentally accessible probabilities of finding the photon in one of the two detectors situated in the interferometer. These are given by: \begin{equation}\label{P} P_{i}\pton{t}=\mbox{Tr}\big (<\hat{\rho}\pton{t}>_{f}\hat{P}_{i})\;\;,\;\;\; i=1,2 \;\;, \end{equation} with the averaged density matrix given by: \begin{eqnarray} \nonumber <\hat{\rho}\pton{t}>_{f}\;=\hskip 6cm \\ [1ex] \nonumber \left(\begin{array}{cc} {\textstyle \frac{1}{2}} &{\textstyle \frac{1}{2}} \exp\big\{ ik^2[\Omega t-\sin\pton{\Omega t}] -z^2[1-\cos\pton{\Omega t}]\big \} \\ \mbox{c.c.}& {\textstyle \frac{1}{2}} \end{array}\right) \\ ,\;\; \label{rhomatrix} \end{eqnarray} and where $\hat P_{1}$ and $\hat P_{2}$ are projectors related to the two interferometer arms where the detectors are located \cite{Penrose}; ``c.c.'' denotes the complex conjugate of the upper off-diagonal matrix element. In the basis of $<\hat{\rho}\pton{t}>_{f}$ chosen here, the projectors are represented by: \begin{equation}\label{P12} \hat{P}_{1}= \frac{1}{2}\left (\begin{array}{cc} 1&1 \\ 1&1 \end{array}\right ) \;\;,\;\;\; \hat{P}_{2}= \frac{1}{2}\left (\begin{array}{cc} 1&-1 \\ -1&1 \end{array}\right ) \;\;. \end{equation} Inserting Eqs.\,$\pton{\ref{rhomatrix}}$ and $\pton{\ref{P12}}$ into Eq.\,$\pton{\ref{P}}$, we obtain: \begin{eqnarray} P_{1,2}\pton{t}&=&\frac{1}{2}\big [1\pm 2\mbox{Re}\pton{<\rho_{AB}\pton{t}>_{f}}\big ] \nonumber \\ [1ex] &=&\frac{1}{2}\big \{ 1\pm\cos\pqua{k^2\pton{\Omega t-\sin\pton{\Omega t}}} \nonumber \\ \label{prho} &\;&\cdot\exp\pqua{-z^2\pton{1-\cos\pton{\Omega t}}}\big \} \;\;. \end{eqnarray} This presents an important relation, because it connects $<\hat{\rho}\pton{t}>_{f}$, the central quantity to learn about mirror induced decoherence, with the probability of observing a photon in one of the two detectors. In this way, in principle, we could learn about the former from experimental measurements of the latter.
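Equation\,(\ref{prho}) is elementary to evaluate; the following minimal sketch reproduces the fringe pattern (the values $k^2=z^2=1$ are illustrative):

```python
import numpy as np

def detection_probs(tau, k2=1.0, z2=1.0):
    """P_1(tau), P_2(tau); tau = Omega*t, k2 = k^2, z2 = z^2 (illustrative)."""
    fringe = np.cos(k2*(tau - np.sin(tau))) * np.exp(-z2*(1.0 - np.cos(tau)))
    return 0.5*(1.0 + fringe), 0.5*(1.0 - fringe)

for tau in np.linspace(0.0, 2*np.pi, 5):
    p1, p2 = detection_probs(tau)
    print(f"tau = {tau:.2f} :  P1 = {p1:.3f},  P2 = {p2:.3f}")
```

Probability is conserved, $P_1+P_2=1$, and for integer $k^2$ the probabilities return to their initial values at $\tau=2\pi$, reflecting the revival of visibility.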
This result is independent of whether we consider the mirror as a classical or a quantum object, {\it i.e.} independent of whether we apply quantum theory to the whole interferometer set-up or the quantum-classical hybrid theory presently studied. \begin{figure}[!htpb] \begin{center} \includegraphics[width=7.3cm, height=7.5cm, keepaspectratio]{P1P2-3-4A.pdf} \includegraphics[width=7.3cm, height=7.5cm, keepaspectratio]{P1P2-3-4B.pdf} \caption{The probabilities $P_{1,2}$ ($k=1$) as a function of $\tau :=\Omega t$, for several values of the temperature. The upper full line (black) and lower full line (red) represent, respectively, $P_{1}$ and $P_{2}$ for $T=10^{-4}$K; the dashed line (purple) and short-dashed line (blue) show, respectively, $P_{1}$ and $P_{2}$ for $T=10^{-3}$K.} \end{center} \end{figure} \section{Conclusions and perspectives} In this paper we have studied the optomechanical interferometer experiment of {\it Marshall et al.} in the hybrid quantum-classical theory of Refs.\,\cite{Elze1, Elze2, Elze3,Elze4,Buric}. Here, the mirror is considered as a perfectly classical rather than a quantum mechanical object and the quantum nature of the photon is preserved in a formally consistent framework. In Section\,$\ref{HH}$, we presented the hybrid Hamiltonian for the whole system, composed of the classical mirror and the quantum photon. The Hamiltonian encodes all dynamical information regarding the system and has been employed to derive the corresponding equations of motion. In Section\,$\ref{EM}$, we solved the equations analytically. In Section\,$\ref{MIDec}$, the solutions of the equations of motion have been used to obtain the off-diagonal elements of the reduced density matrix for the photon, which forms the starting point for our quantitative evaluation of the decoherence process induced by the classical mirror on the quantum photon. 
As in the fully quantum approach, this decoherence destroys interference effects and is detrimental to the formation of spatially separated coherent superposition states of the mesoscopic mirror. We emphasize that, according to the hybrid interaction scheme, the photon and the classical mirror presently do not become entangled. Thus, the mirror is at each moment of time in a classical pure state, unless thermal (or some other) fluctuations are explicitly introduced. Such classical fluctuations play a role analogous to that of the quantum fluctuations in the mirror state induced by entanglement if both parts are treated as quantum systems. More precisely, we have to distinguish two different cases. -- First, if the classical initial conditions of the mirror, namely its initial position and momentum, are exactly known, then no decoherence is observed: the photon remains in its initial pure state, the coherent superposition of being in either arm of the interferometer, and related interference effects are sustained. This clearly differs from the original quantum approach \cite{Penrose}, where one finds mirror induced decoherence even without thermal averaging over initial coherent oscillator states of the mirror. This result leads us to conjecture that the absence of decoherence, when the initial conditions of the classical subsystem are completely fixed, is a general feature of a composite quantum-classical hybrid. A proof has to await future studies of these phenomena. Secondly, however, we have also examined the more realistic situation where some information about the classical initial conditions is lost and only a phase space probability distribution, instead, can be assumed or be experimentally prepared. In particular, we have considered a thermal Boltzmann distribution specifying the mirror initial conditions. This leads to correspondingly averaged matrix elements of the reduced density matrix of the photon.
Analyzing these, we find the surprising result that the mirror induced decoherence according to the hybrid theory essentially equals the one found in a fully quantum mechanical treatment \cite{Penrose}. This is nicely reflected in the corresponding decoherence timescales that we defined and discussed in Section\,$\ref{MIDec}$, in particular for the experimentally relevant range of temperatures. We pointed out the stability of this equality with respect to variations of the physical parameters of the system (temperature $T$, mirror frequency $\Omega$, photon frequency $\omega$, cavity length $L$, and mirror mass $M$). We have found that the near-equality in the behaviour of the interferometer, whether treated as a quantum-classical hybrid or fully quantum mechanical system, is stable against such variations within the experimentally accessible regime. This extends to the experimentally measurable probability of finding a photon in one of the two detectors of the original interferometer arrangement \cite{Penrose}. In Section\,$\ref{PIP}$, we have related the off-diagonal matrix elements of the photon reduced density operator, from which mirror induced decoherence has always been calculated, to the probabilities to detect the photon in one of the two detectors. An interesting study, which can also be performed on the basis of our formalism and results, will be to consider (thermally averaged) squeezed initial states for a quantum mirror and correspondingly deformed (Boltzmann like) initial phase space distributions for a classical mirror. Will quantum theory and quantum-classical hybrid theory remain essentially indistinguishable also in this case, concerning mirror induced decoherence? In any case, our discussion may have implications for the interpretation of planned experiments, see, for example, Refs.\,\cite{BassiEtAl,YinEtAl,PaternostroEtAl} and further references therein.
In fact, the optomechanical system of {\it Marshall et al.} has been proposed, first of all, to test various spontaneous wave function collapse models \cite{Diosi84,Diosi1,Penrose98,Bassi1,Diosi}. As indicated by our results, however, the system might not be suitable to discern a quantum from a classical mirror, given the accessible experimental parameters. In this case, the observation of ``anomalous decoherence'' ({\it i.e.}, when common sources of environmentally induced decoherence can be controlled) cannot unambiguously be attributed to a rapid collapse mechanism; the mirror may have been classical from the start and yet produce a similar decoherence signal. We conclude that applications of quantum-classical hybrid theory to describe presently considered experiments at the quantum-classical border, in particular when ``macroscopic'' components play a role, deserve further study -- last but not least, since it is still thoroughly unknown whether, where, and what kind of border to expect. \section*{Acknowledgements} It is a pleasure to thank L.~Di\'osi and F.~Giraldi for discussions on various occasions. H.-T.E. wishes to thank L.~Di\'osi also for kind hospitality and support during the Wigner-111 Symposium at Budapest, where part of this work was completed. A.L. and L.F. gratefully acknowledge support through PhD programs of their institutions; A.L. has been supported by an ERC AdG OSYRIS fellowship.
\section{Introduction} \label{Introduction} A probability density $f$ on the real line is called log-concave if it may be written as $$ f(x) \ = \ \exp \phi(x) $$ for some concave function $\phi : \mathbb{R} \to [-\infty,\infty)$. The class of all log-concave densities provides an interesting nonparametric model consisting of unimodal densities and containing many standard parametric families; see D\"umbgen and Rufibach (2009) for a more thorough overview. This paper treats algorithmic aspects of maximum likelihood estimation for this particular class. In Section~\ref{Complete Data} we derive a general finite-dimensional optimization problem which is closely related to computing the maximum likelihood estimator of a log-concave probability density $f$ based on independent, identically distributed observations. Section~\ref{Active Set} is devoted to the latter optimization problem. First we give a general description of the active set algorithm, a useful tool from optimization theory (cf.\ Fletcher, 1987) with many potential applications in statistical computing. A key property of such algorithms is that, in principle, they terminate after finitely many steps. Then we adapt this approach to our particular estimation problem, which yields an alternative to the iterative algorithms developed by Rufibach (2006, 2007) and Pal, Woodroofe and Meyer (2006). The resulting active set algorithm is similar in spirit to the vertex direction and support reduction algorithms described by Groeneboom, Jongbloed and Wellner (2008), who consider the special setting of mixture models. In Section~\ref{Censored Data} we briefly consider the problem of estimating a probability distribution $P$ on $(0,\infty]$ based on censored or binned data. Censoring occurs quite frequently in biomedical applications, e.g.\ $X$ being the time point when a person develops a certain disease or dies from a certain cause.
Another field of application is quality control, where $X$ is the failure time of a certain object. A good reference for event time analysis is the monograph of Klein and Moeschberger (1997). Binning is typical in socioeconomic surveys, e.g.\ when persons or households are asked which of several given intervals their yearly income $X$ falls into. We discuss maximum likelihood estimation of $P$ under the assumption that it is absolutely continuous on $(0,\infty)$ with log-concave probability density $f$. The resulting estimator is an alternative to those of D\"{u}mbgen et al.\ (2006). The latter authors restricted themselves to interval-censored data and considered the weaker constraints of $f$ being non-increasing or unimodal. Introducing the stronger but still natural constraint of log-concavity allows us to treat arbitrarily censored data, similarly to Turnbull (1976). In Section~\ref{EM} we indicate an expectation-maximization (EM) algorithm for the estimation of $P$, using the aforementioned active set algorithm as a building block. This approach is similar to Turnbull (1976) and Braun et al.\ (2005); the latter authors considered self-consistent kernel density estimators. For more information and references on EM and related algorithms in general we refer to Lange et al.\ (2000). A detailed description of our method for censored or binned data will be given elsewhere. Section~\ref{Proofs} contains most proofs and various auxiliary results. \section{The general log-likelihood function for complete data} \label{Complete Data} \paragraph{Independent, identically distributed observations.} Let $X_1, X_2, \ldots, X_n$ be independent random variables with log-concave probability density $f = \exp\phi$ on $\mathbb{R}$. Then the normalized log-likelihood function is given by $$ \ell(\phi) \ := \ n_{}^{-1} \sum_{i=1}^n \phi(X_i) . $$ It may happen that due to rounding errors one observes $\widetilde{X}_i$ in place of $X_i$.
In that case, let $x_1 < x_2 < \cdots < x_m$ be the different elements of $\{\widetilde{X}_1, \widetilde{X}_2,\ldots,\widetilde{X}_n\}$ and define $p_i := n^{-1} \# \{j : \widetilde{X}_j = x_i\}$. Then an appropriate surrogate for the normalized log-likelihood is \begin{equation} \ell(\phi) \ := \ \sum_{i=1}^m p_i \phi(x_i) . \label{eq: log-lik} \end{equation} \paragraph{The general log-likelihood function.} In what follows we consider the functional \eqref{eq: log-lik} for arbitrary given points $x_1 < x_2 < \cdots < x_m$ and probability weights $p_1, p_2, \ldots, p_m > 0$, i.e.\ $\sum_{i=1}^m p_i = 1$. Suppose that we want to maximize $\ell(\phi)$ over all functions $\phi$ within a certain family $\mathcal{F}$ of measurable functions from $\mathbb{R}$ into $[-\infty,\infty)$ satisfying the constraint $\int \exp \phi(x) \, dx = 1$. If $\mathcal{F}$ is closed under addition of constants, i.e.\ $\phi + c \in \mathcal{F}$ for arbitrary $\phi \in \mathcal{F}$ and $c \in \mathbb{R}$, then one can easily show that maximizing $\ell(\phi)$ over all $\phi \in \mathcal{F}$ with $\int \exp \phi(x) \, dx = 1$ is equivalent to maximizing $$ L(\phi) \ := \ \sum_{i=1}^m p_i \phi(x_i) - \int \exp \phi(x) \, dx $$ over the whole family $\mathcal{F}$; see also Silverman~(1982, Theorem~3.1). \paragraph{Restricting the set of candidate functions.} The preceding considerations apply in particular to the family $\mathcal{F}$ of all concave functions. Now let $\mathcal{G}$ be the set of all continuous functions $\psi : [x_1,x_m] \to \mathbb{R}$ which are linear on each interval $[x_k, x_{k+1}]$, $1 \le k < m$, and we define $\psi := - \infty$ on $\mathbb{R} \setminus [x_1,x_m]$. Moreover, let $\mathcal{G}_{\mathrm{conc}}$ be the set of all concave functions within $\mathcal{G}$. For any $\phi \in \mathcal{F}$ with $L(\phi) > - \infty$ let $\psi$ be the unique function in $\mathcal{G}_{\mathrm{conc}}$ such that $\psi = \phi$ on $\{x_1,x_2,\ldots,x_m\}$. 
Then it follows from concavity of $\phi$ that $\psi \le \phi$ pointwise, and $L(\psi) \ge L(\phi)$. Equality holds if, and only if, $\psi = \phi$. Thus maximizing $L$ over the class $\mathcal{F}$ is equivalent to its maximization over $\mathcal{G}_{\mathrm{conc}}$. \paragraph{Properties of $L(\cdot)$.} For explicit calculations it is useful to rewrite $L(\psi)$ as follows: Any function $\psi \in \mathcal{G}$ may be identified with the vector $\boldsymbol{\psi} := (\psi(x_i))_{i=1}^m \in \mathbb{R}^m$. Likewise, any vector $\boldsymbol{\psi} \in \mathbb{R}^m$ defines a function $\psi \in \mathcal{G}$ via $$ \psi(x) \ := \ \Bigl( 1 - \frac{x - x_k}{\delta_k} \Bigr) \, \psi_k^{} + \frac{x - x_k}{\delta_k} \, \psi_{k+1}^{} \quad\mbox{for } x \in [x_k,x_{k+1}], 1 \le k < m , $$ where $\delta_k := x_{k+1} - x_k$. Then one may write $$ L(\psi) \ = \ L(\boldsymbol{\psi}) := \sum_{i=1}^m p_i \psi_i - \sum_{k=1}^{m-1} \delta_k J(\psi_k, \psi_{k+1}) $$ with $$ J(r,s) \ := \ \int_0^1 \exp \bigl( (1 - t) r + t s \bigr) \, dt $$ for arbitrary $r,s \in \mathbb{R}$. The latter function $J : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is infinitely often differentiable and strictly convex. Hence $L(\cdot)$ is an infinitely often differentiable and strictly concave functional on $\mathbb{R}^m$. In addition it is coercive in the sense that \begin{equation} L(\boldsymbol{\psi}) \ \to \ -\infty \quad\mbox{as } \|\boldsymbol{\psi}\| \to \infty . \label{eq: coercivity of L} \end{equation} This entails that both \begin{eqnarray} \widetilde{\psi} & := & \mathop{\rm argmax}_{\psi \in \mathcal{G}} L(\psi) \label{eq: definition of psicheck} \quad\mbox{and} \\ \widehat{\psi} & := & \mathop{\rm argmax}_{\psi \in \mathcal{G}_{\mathrm{conc}}} L(\psi) \label{eq: definition of psihat} \end{eqnarray} are well defined and unique. Let us discuss some further properties of $L(\cdot)$ and its unrestricted maximizer $\widetilde{\psi}$. To maximize $L(\cdot)$ we need its Taylor expansion of second order. 
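Before turning to derivatives, note that $J$ has the elementary closed form $J(r,s) = (e^s - e^r)/(s - r)$ for $r \ne s$, with $J(r,r) = e^r$, so $L(\boldsymbol{\psi})$ is cheap to evaluate. The following minimal numerical sketch is our own illustration (assuming NumPy; the function names are not from the text):

```python
import numpy as np

def J(r, s):
    """J(r, s) = integral of exp((1 - t) r + t s) over t in [0, 1]."""
    d = s - r
    if abs(d) < 1e-6:
        # series around the midpoint avoids cancellation for r close to s
        return np.exp(0.5 * (r + s)) * (1.0 + d * d / 24.0)
    return (np.exp(s) - np.exp(r)) / d

def L(psi, x, p):
    """L(psi) = sum_i p_i psi_i - sum_k delta_k J(psi_k, psi_{k+1})."""
    psi, x, p = (np.asarray(a, float) for a in (psi, x, p))
    delta = np.diff(x)              # delta_k = x_{k+1} - x_k
    return float(p @ psi - sum(
        d * J(a, b) for d, a, b in zip(delta, psi[:-1], psi[1:])))
```

For instance, with $x = (0,1)$, $p = (1/2,1/2)$ and $\psi \equiv 0$ one obtains $L(\boldsymbol{\psi}) = 0 - J(0,0) = -1$.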
In fact, for functions $\psi, v \in \mathcal{G}$, \begin{eqnarray} \frac{d}{dt} \Big\vert_{t=0} L(\psi + tv) & = & \sum_{i=1}^m p_i v(x_i) - \int v(x) \exp \psi(x) \, dx , \label{eq: 1st dir deriv L} \\ \frac{d^2}{dt^2} \Big\vert_{t=0} L(\psi + tv) & = & - \int v(x)^2 \exp \psi(x) \, dx . \label{eq: 2nd dir deriv L} \end{eqnarray} Note that the latter expression yields an alternative proof of $L$'s strict concavity. Explicit formulae for the gradient and hessian matrix of $L$ as a functional on $\mathbb{R}^m$ are given in Section~\ref{Proofs}, and with these tools one can easily compute $\widetilde{\psi}$ very precisely via Newton type algorithms. We end this section with a characterization and interesting properties of the maximizer $\widetilde{\psi}$. In what follows let $$ J_{ab}(r,s) \ := \ \frac{\partial^{a+b}}{\partial r^a \partial s^b} J(r,s) \ = \ \int_0^1 (1 - t)^a t^b \exp((1 - t)r + t s) \, dt . $$ for nonnegative integers $a$ and $b$. \begin{Theorem} \label{thm: 1dim functional} Let $\psi \in \mathcal{G}$ with corresponding density $f(x) := \exp \psi(x)$ and distribution function $F(r) := \int_{x_1}^r f(x) \, dx$ on $[x_1, x_m]$. The function $\psi$ maximizes $L$ if, and only if, its distribution function $F$ satisfies $$ F(x_m) = 1 \quad\mbox{and}\quad \delta_k^{-1} \int_{x_k}^{x_{k+1}} F(x) \, dx \ = \ \sum_{i=1}^k p_i \quad\mbox{for } 1 \le k < m . $$ In that case, $$ \int_{x_1}^{x_m} x f(x) \, dx \ = \ \sum_{i=1}^m p_i x_i $$ and $$ \int_{x_1}^{x_m} x^2 f(x) \, dx \ = \ \sum_{i=1}^m p_i^{} x_i^2 - \sum_{k=1}^{m-1} \delta_k^3 J_{11}(\psi_k, \psi_{k+1}) . 
$$ \end{Theorem} \paragraph{Some auxiliary formulae.} For $\psi \in \mathcal{G}$ with density $f(x) := \exp \psi(x)$ and distribution function $F(r) := \int_{x_1}^r f(x) \, dx$ on $[x_1, x_m]$, one can easily derive explicit expressions for $F$ and the first two moments of $f$ in terms of $J(\cdot,\cdot)$ and its partial derivatives: For $1 \le k < m$, $$ F(x_{k+1}) \ = \ \sum_{i=1}^k \delta_i J(\psi_i, \psi_{i+1}) $$ and $$ \delta_k^{-1} \int_{x_k}^{x_{k+1}} F(x) \, dx \ = \ F(x_k) + \delta_k J_{10}(\psi_k,\psi_{k+1}) \ \in \ \bigl( F(x_k), F(x_{k+1}) \bigr) . $$ Moreover, for any $a \in \mathbb{R}$, \begin{eqnarray*} \int_{x_1}^{x_m} (x-a) f(x) \, dx & = & \sum_{k=1}^{m-1} \delta_k \bigl( (x_k - a) J_{10}(\psi_k,\psi_{k+1}) + (x_{k+1} - a) J_{01}(\psi_k,\psi_{k+1}) \bigr) , \\ \int_{x_1}^{x_m} (x - a)^2 f(x) \, dx & = & \sum_{k=1}^{m-1} \delta_k \bigl( (x_k - a)^2 J_{10}(\psi_k,\psi_{k+1}) + (x_{k+1} - a)^2 J_{01}(\psi_k,\psi_{k+1}) \bigr) \\ && - \ \sum_{k=1}^{m-1} \delta_k^3 J_{11}(\psi_k,\psi_{k+1}) . \end{eqnarray*} \section{An active set algorithm} \label{Active Set} \subsection{The general principle} We consider an arbitrary continuous and concave function $L : \mathbb{R}^m \to [-\infty,\infty)$ which is coercive in the sense of \eqref{eq: coercivity of L} and continuously differentiable on the set $\mathrm{dom}(L) := \{\boldsymbol{\psi} \in \mathbb{R}^m : L(\boldsymbol{\psi}) > - \infty\}$. Our goal is to maximize $L$ on the closed convex set $$ \mathcal{K} \ := \ \left\{ \boldsymbol{\psi} \in \mathbb{R}^m : \boldsymbol{v}_i^\top \boldsymbol{\psi} \le c_i \ \text{for} \ i = 1,\ldots,q \right\} , $$ where $\boldsymbol{v}_1,\ldots,\boldsymbol{v}_q$ are nonzero vectors in $\mathbb{R}^m$ and $c_1, \ldots, c_q$ real numbers such that $\mathcal{K} \cap \mathrm{dom}(L) \ne \emptyset$. 
These assumptions entail that the set $$ \mathcal{K}_* \ := \ \mathop{\rm argmax}_{\boldsymbol{\psi} \in \mathcal{K}} \, L(\boldsymbol{\psi}) $$ is a nonvoid and compact subset of $\mathrm{dom}(L)$. For simplicity we shall assume that \begin{equation} \boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_q \ \text{are linearly independent} , \label{ass: linear independence} \end{equation} but see also the possible extensions indicated at the end of this section. An essential tacit assumption is that for any index set $A \subseteq \{1,\ldots,q\}$ and the corresponding affine subspace $$ \mathcal{V}(A) \ := \ \left\{ \boldsymbol{\psi} \in \mathbb{R}^m : \boldsymbol{v}_a^\top \boldsymbol{\psi} = c_a \mbox{ for all } a \in A \right\} $$ of $\mathbb{R}^m$, we have an algorithm computing a point $$ \widetilde{\boldsymbol{\psi}}(A) \ \in \ \mathcal{V}_*(A) \ := \ \mathop{\rm argmax}_{\boldsymbol{\psi} \in \mathcal{V}(A)} \, L(\boldsymbol{\psi}) , $$ provided that $\mathcal{V}(A) \cap \mathrm{dom}(L) \ne \emptyset$. Now the idea is to vary $A$ suitably until, after finitely many steps, $\widetilde{\boldsymbol{\psi}}(A)$ belongs to $\mathcal{K}_*$. In what follows we attribute to any vector $\boldsymbol{\psi} \in \mathbb{R}^m$ the index set $$ A(\boldsymbol{\psi}) \ := \ \Bigl\{ i \in \{1,\ldots,q\} \ : \ \boldsymbol{v}_i^\top \boldsymbol{\psi} \ge c_i \Bigr\} . $$ For $\boldsymbol{\psi} \in \mathcal{K}$ the set $A(\boldsymbol{\psi})$ identifies the ``active constraints'' for $\boldsymbol{\psi}$. The following theorem provides useful characterizations of $\mathcal{K}_*$ and $\mathcal{V}_*(A)$. \begin{Theorem} \label{thm: KKstar and VVA} Let $\boldsymbol{b}_1,\ldots,\boldsymbol{b}_m$ be a basis of $\mathbb{R}^m$ such that $$ \boldsymbol{v}_i^\top \boldsymbol{b}_j^{} \ \begin{cases} < \ 0 & \text{if} \ i = j \le q , \\ = \ 0 & \text{else} . 
\end{cases} $$ \noindent \textbf{(a)} A vector $\boldsymbol{\psi} \in \mathcal{K} \cap \mathrm{dom}(L)$ belongs to $\mathcal{K}_*$ if, and only if, \begin{equation} \boldsymbol{b}_i^\top \nabla L(\boldsymbol{\psi}) \ \begin{cases} = \ 0 & \text{for all} \ i \in \{1,\ldots,m\} \setminus A(\boldsymbol{\psi}) , \\ \le \ 0 & \text{for all} \ i \in A(\boldsymbol{\psi}) . \end{cases} \label{eq: KKstar} \end{equation} \noindent \textbf{(b)} For any given set $A \subseteq \{1,\ldots,q\}$, a vector $\boldsymbol{\psi} \in \mathcal{V}(A) \cap \mathrm{dom}(L)$ belongs to $\mathcal{V}_*(A)$ if, and only if, \begin{equation} \boldsymbol{b}_i^\top \nabla L(\boldsymbol{\psi}) \ = \ 0 \quad\mbox{for all } \ i \in \{1,\ldots,m\} \setminus A . \label{eq: VVA} \end{equation} \end{Theorem} The characterizations in this theorem entail that any vector $\boldsymbol{\psi} \in \mathcal{K}_*$ belongs to $\mathcal{V}_*(A(\boldsymbol{\psi}))$. The active set algorithm performs one of the following two procedures alternately: \paragraph{Basic procedure 1: Replacing a feasible point with a ``conditionally'' optimal one.} Let $\boldsymbol{\psi}$ be an arbitrary vector in $\mathcal{K} \cap \mathrm{dom}(L)$. Our goal is to find a vector $\boldsymbol{\psi}_{\rm new}$ such that \begin{equation} L(\boldsymbol{\psi}_{\rm new}) \ \ge \ L(\boldsymbol{\psi}) \quad\mbox{and}\quad \boldsymbol{\psi}_{\rm new} \ \in \ \mathcal{K} \cap \mathcal{V}_*(A(\boldsymbol{\psi}_{\rm new})) . \label{eq: local optimality} \end{equation} To this end, set $A := A(\boldsymbol{\psi})$ and define the candidate vector $\boldsymbol{\psi}_{\rm cand} := \widetilde{\boldsymbol{\psi}}(A)$. By construction, $L(\boldsymbol{\psi}_{\rm cand}) \ge L(\boldsymbol{\psi})$. If $L(\boldsymbol{\psi}_{\rm cand}) = L(\boldsymbol{\psi})$, we set $\boldsymbol{\psi}_{\rm new} := \boldsymbol{\psi}$. 
If $L(\boldsymbol{\psi}_{\rm cand}) > L(\boldsymbol{\psi})$ and $\boldsymbol{\psi}_{\rm cand} \in \mathcal{K}$, we set $\boldsymbol{\psi}_{\rm new} := \boldsymbol{\psi}_{\rm cand}$. Here \eqref{eq: local optimality} is satisfied, because $A(\boldsymbol{\psi}_{\rm new}) \supseteq A(\boldsymbol{\psi})$, so that $\mathcal{V}(A(\boldsymbol{\psi}_{\rm new})) \subseteq \mathcal{V}(A)$. Finally, if $L(\boldsymbol{\psi}_{\rm cand}) > L(\boldsymbol{\psi})$ but $\boldsymbol{\psi}_{\rm cand} \not\in \mathcal{K}$, let \begin{eqnarray} t = t(\boldsymbol{\psi}, \boldsymbol{\psi}_{\rm cand}) & := & \max \bigl\{ t \in (0,1) : (1 - t) \boldsymbol{\psi} + t \boldsymbol{\psi}_{\rm cand} \in \mathcal{K} \bigr\} \label{eq:def of t} \\ & = & \min \Bigl\{ \frac{c_i - \boldsymbol{v}_i^\top \boldsymbol{\psi}} {\boldsymbol{v}_i^\top \boldsymbol{\psi}_{\rm cand} - \boldsymbol{v}_i^\top \boldsymbol{\psi}} : 1 \le i \le q, \boldsymbol{v}_i^\top \boldsymbol{\psi}_{\rm cand} > c_i \Bigr\} . \nonumber \end{eqnarray} Then we replace $\boldsymbol{\psi}$ with $(1 - t)\boldsymbol{\psi} + t \boldsymbol{\psi}_{\rm cand}$. Note that $L(\boldsymbol{\psi})$ does not decrease in this step, due to concavity of $L$. Moreover, the set $A(\boldsymbol{\psi})$ increases strictly. Hence, repeating the preceding manipulations at most $q$ times yields finally a vector $\boldsymbol{\psi}_{\rm new}$ satisfying \eqref{eq: local optimality}, because $\mathcal{V}(\{1,\ldots,q\})$ is clearly a subset of $\mathcal{K}$. With the new vector $\boldsymbol{\psi}_{\rm new}$ we perform the second basic procedure. \paragraph{Basic procedure 2: Altering the set of active constraints.} Let $\boldsymbol{\psi} \in \mathcal{K} \cap \mathrm{dom}(L) \cap \mathcal{V}_*(A)$ with $A = A(\boldsymbol{\psi})$. It follows from Theorem~\ref{thm: KKstar and VVA} that $\boldsymbol{\psi}$ belongs to $\mathcal{K}_*$ if, and only if, $$ \boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) \ \le \ 0 \quad\mbox{for all } a \in A . 
$$ Now suppose that the latter condition is violated, and let $a_o = a_o(\boldsymbol{\psi})$ be an index in $A$ such that $\boldsymbol{b}_{a_o}^\top \nabla L(\boldsymbol{\psi})$ is maximal. Then $\boldsymbol{\psi} + t \boldsymbol{b}_{a_o} \in \mathcal{K}$ and $A(\boldsymbol{\psi} + t \boldsymbol{b}_{a_o}) = A \setminus \{a_o\}$ for arbitrary $t > 0$, while $L(\boldsymbol{\psi} + t \boldsymbol{b}_{a_o}) > L(\boldsymbol{\psi})$ for sufficiently small $t > 0$. Thus we consider the vector $\boldsymbol{\psi}_{\rm cand} := \widetilde{\boldsymbol{\psi}}(A \setminus \{a_o\})$, which satisfies necessarily the inequality $L(\boldsymbol{\psi}_{\rm cand}) > L(\boldsymbol{\psi})$. It may fail to be in $\mathcal{K}$, but it satisfies the inequality $$ \boldsymbol{v}_{a_o}^\top \boldsymbol{\psi}_{\rm cand} \ > \ c_{a_o} . $$ For $\boldsymbol{\psi}_{\rm cand} - \boldsymbol{\psi}$ may be written as $\lambda_{a_o} \boldsymbol{b}_{a_o} + \sum_{i \not\in A} \lambda_i \boldsymbol{b}_i$ with real coefficients $\lambda_1,\ldots,\lambda_m$, and $$ 0 \ < \ (\boldsymbol{\psi}_{\rm cand} - \boldsymbol{\psi})^\top \nabla L (\boldsymbol{\psi}) \ = \ \lambda_{a_o} \boldsymbol{b}_{a_o}^\top \nabla L(\boldsymbol{\psi}) $$ according to \eqref{eq: VVA}. Hence $0 < \lambda_{a_o} = \boldsymbol{v}_{a_o}^\top(\boldsymbol{\psi}_{\rm cand} - \boldsymbol{\psi}) = \boldsymbol{v}_{a_o}^\top\boldsymbol{\psi}_{\rm cand} - c_{a_o}$. If $\boldsymbol{\psi}_{\rm cand} \in \mathcal{K}$, we repeat this procedure with $\boldsymbol{\psi}_{\rm cand}$ in place of $\boldsymbol{\psi}$. Otherwise, we replace $\boldsymbol{\psi}$ with $(1 - t) \boldsymbol{\psi} + t \boldsymbol{\psi}_{\rm cand}$, where $t = t(\boldsymbol{\psi}, \boldsymbol{\psi}_{\rm cand}) > 0$ is defined in \eqref{eq:def of t}, which results in a strictly larger value of $L(\boldsymbol{\psi})$. Then we perform the first basic procedure. 
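The step length $t(\boldsymbol{\psi}, \boldsymbol{\psi}_{\rm cand})$ of \eqref{eq:def of t}, used in both basic procedures, is a one-line computation; a small sketch (our own illustration, assuming NumPy, with the constraint vectors $\boldsymbol{v}_i$ stored as the rows of `V`):

```python
import numpy as np

def step_length(psi, cand, V, c):
    """Largest t in (0, 1] such that (1 - t) psi + t cand stays in
    K = {psi : V psi <= c}; returns 1.0 if cand itself is feasible."""
    slack = c - V @ psi          # nonnegative, since psi lies in K
    move = V @ (cand - psi)      # movement toward each constraint boundary
    violated = V @ cand > c      # constraints overshot by the candidate
    if not np.any(violated):
        return 1.0
    return float(np.min(slack[violated] / move[violated]))
```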
\paragraph{The complete algorithm and its validity.} Often one knows a vector $\boldsymbol{\psi}_o \in \mathcal{K} \cap \mathrm{dom}(L)$ in advance. Then the active set algorithm can be started with the first basic procedure and proceeds as indicated in Table~\ref{t: Active Set 1}. In other applications it is sometimes obvious that $\mathcal{V}(\{1,\ldots,q\})$, which is clearly a subset of $\mathcal{K}$, contains a point in $\mathrm{dom}(L)$. In that case the input vector $\boldsymbol{\psi}_o$ is superfluous, and the first twelve lines in Table~\ref{t: Active Set 1} may be simplified as indicated in Table~\ref{t: Active Set 2}. The latter approach with starting point $\boldsymbol{\psi}_o = \widetilde{\boldsymbol{\psi}}(\{1,\ldots,q\})$ may be numerically unstable, presumably when this starting point is very far from the optimum. In the special settings of concave least squares regression or log-concave density estimation, a third variant turned out to be very reliable: We start with $A = \emptyset$ and $\boldsymbol{\psi}_o = \widetilde{\boldsymbol{\psi}}(A)$. As long as $\boldsymbol{\psi}_o \not\in \mathcal{K}$, we replace $A$ with the larger set $A(\boldsymbol{\psi}_o)$ and recompute $\boldsymbol{\psi}_o = \widetilde{\boldsymbol{\psi}}(A)$; see Table~\ref{t: Active Set 3}. In Table~\ref{t: Active Set 1}, the lines marked with (*) and (**) correspond to the end of the first basic procedure. At this stage, $\boldsymbol{\psi}$ is a vector in $\mathcal{K} \cap \mathrm{dom}(L) \cap \mathcal{V}_*(A(\boldsymbol{\psi}))$. Moreover, whenever the point (**) is reached, the value $L(\boldsymbol{\psi})$ is strictly larger than previously and equal to the maximum of $L$ over the set $\mathcal{V}(A)$. Since there are only finitely many different sets $A \subseteq \{1,\ldots,q\}$, the algorithm terminates after finitely many steps, and the resulting $\boldsymbol{\psi}$ belongs to $\mathcal{K}$ by virtue of Theorem~\ref{thm: KKstar and VVA}. 
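For illustration, here is a self-contained toy instance of the scheme (our own sketch, not the paper's solver): the concave quadratic $L(\boldsymbol{\psi}) = -\frac12 \|\boldsymbol{\psi} - \boldsymbol{y}\|^2$ is maximized over $\mathcal{K} = \{\boldsymbol{\psi} : V \boldsymbol{\psi} \le \boldsymbol{c}\}$. The subproblem $\widetilde{\boldsymbol{\psi}}(A)$ is solved via its KKT system, and the optimality test uses the Lagrange multipliers, which for linearly independent constraints is equivalent to the criterion $\boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) \le 0$ of Theorem~\ref{thm: KKstar and VVA}:

```python
import numpy as np

def solve_sub(W, V, c, y):
    """psi_tilde(W): maximize -0.5 ||psi - y||^2 s.t. v_a^T psi = c_a, a in W.
    Stationarity gives psi + Vw^T lam = y; solve the KKT system for (psi, lam)."""
    m = len(y)
    idx = sorted(W)
    if not idx:
        return y.copy(), {}
    Vw, cw, k = V[idx], c[idx], len(idx)
    K = np.block([[np.eye(m), Vw.T], [Vw, np.zeros((k, k))]])
    sol = np.linalg.solve(K, np.concatenate([y, cw]))
    return sol[:m], dict(zip(idx, sol[m:]))

def active_set_qp(V, c, y, psi0, tol=1e-10):
    """Active set maximization of -0.5 ||psi - y||^2 over {psi : V psi <= c}."""
    V, c, y = (np.asarray(a, float) for a in (V, c, y))
    psi = np.asarray(psi0, float).copy()
    q = len(c)
    W = {i for i in range(q) if V[i] @ psi >= c[i] - tol}    # active constraints
    for _ in range(100 * (q + 1)):                           # termination safeguard
        cand, lam = solve_sub(W, V, c, y)
        blocked = [i for i in range(q) if V[i] @ cand > c[i] + tol]
        if blocked:
            # basic procedure 1: move toward cand until a constraint blocks
            t = min((c[i] - V[i] @ psi) / (V[i] @ (cand - psi)) for i in blocked)
            psi = (1.0 - t) * psi + t * cand
            W = {i for i in range(q) if V[i] @ psi >= c[i] - tol}
        else:
            # basic procedure 2: drop a constraint with a wrong-sign multiplier
            psi = cand
            neg = [a for a in W if lam.get(a, 0.0) < -tol]
            if not neg:
                return psi                                   # psi lies in K_*
            W.remove(min(neg, key=lambda a: lam[a]))
    return psi
```

Projecting $\boldsymbol{y} = (2,2)$ onto the half-plane $\psi_1 + \psi_2 \le 1$ from the feasible start $(0,0)$ activates the constraint after one step and returns $(0.5, 0.5)$.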
When implementing these algorithms one has to be aware of numerical inaccuracies and errors, in particular, if the algorithm $\widetilde{\boldsymbol{\psi}}(\cdot)$ yields only approximations of vectors in $\mathcal{V}_*(\cdot)$. In our specific applications we avoided endless loops by replacing the conditions ``$\boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) < 0$'' and ``$\boldsymbol{v}_i^\top \boldsymbol{\psi} > c_i$'' with ``$\boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) < - \epsilon$'' and ``$\boldsymbol{v}_i^\top \boldsymbol{\psi} > c_i + \epsilon$'', respectively, for some small constant $\epsilon > 0$. \begin{table}[h] \centerline{\bf\begin{tabular}{|l|} \hline \ruck{0} Algorithm $\boldsymbol{\psi} \leftarrow \mbox{ActiveSet1}(L,\widetilde{\boldsymbol{\psi}}^{\strut}(\cdot),\boldsymbol{\psi}_o)$\\ \ruck{0} $\boldsymbol{\psi} \leftarrow \boldsymbol{\psi}_o$\\ \ruck{0} $A \leftarrow A(\boldsymbol{\psi})$\\ \ruck{0} $\boldsymbol{\psi}_{\rm cand} \leftarrow \widetilde{\boldsymbol{\psi}}(A)$\\ \ruck{0} while $\boldsymbol{\psi}_{\rm cand} \not\in \mathcal{K}$ do\\ \ruck{1} $\boldsymbol{\psi} \leftarrow (1 - t(\boldsymbol{\psi},\boldsymbol{\psi}_{\rm cand})) \boldsymbol{\psi} + t(\boldsymbol{\psi},\boldsymbol{\psi}_{\rm cand}) \boldsymbol{\psi}_{\rm cand}$\\ \ruck{1} $A \leftarrow A(\boldsymbol{\psi})$\\ \ruck{1} $\boldsymbol{\psi}_{\rm cand} \leftarrow \widetilde{\boldsymbol{\psi}}(A)$\\ \ruck{0} end while\\ \ruck{0} $\boldsymbol{\psi} \leftarrow \boldsymbol{\psi}_{\rm cand}$\\ \ruck{0} $A \leftarrow A(\boldsymbol{\psi})$ \quad (*)\\ \ruck{0} while $\max_{a \in A} \boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) > 0$ do\\ \ruck{1} $a \leftarrow \min \left( \mathop{\rm argmax}_{a \in A} \boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) \right)$\\ \ruck{1} $A \leftarrow A \setminus \{a\}$\\ \ruck{1} $\boldsymbol{\psi}_{\rm cand} \leftarrow \widetilde{\boldsymbol{\psi}}(A)$\\ \ruck{1} while $\boldsymbol{\psi}_{\rm cand} \not\in \mathcal{K}$ do\\ \ruck{2} 
$\boldsymbol{\psi} \leftarrow (1 - t(\boldsymbol{\psi},\boldsymbol{\psi}_{\rm cand})) \boldsymbol{\psi} + t(\boldsymbol{\psi},\boldsymbol{\psi}_{\rm cand}) \boldsymbol{\psi}_{\rm cand}$\\ \ruck{2} $A \leftarrow A(\boldsymbol{\psi})$\\ \ruck{2} $\boldsymbol{\psi}_{\rm cand} \leftarrow \widetilde{\boldsymbol{\psi}}(A)$\\ \ruck{1} end while\\ \ruck{1} $\boldsymbol{\psi} \leftarrow \boldsymbol{\psi}_{\rm cand}$\\ \ruck{1} $A \leftarrow A(\boldsymbol{\psi})$ \quad (**)\\ \ruck{0} end while.\\\hline \end{tabular}} \caption{Pseudo-code of an active set algorithm.} \label{t: Active Set 1} \end{table} \begin{table}[h] \centerline{\bf\begin{tabular}{|l|} \hline \ruck{0} Algorithm $\boldsymbol{\psi} \leftarrow \mbox{ActiveSet2}(L,\widetilde{\boldsymbol{\psi}}^{\strut}(\cdot))$\\ \ruck{0} $\boldsymbol{\psi} \leftarrow \widetilde{\boldsymbol{\psi}}(\{1,\ldots,q\})$\\ \ruck{0} $A \leftarrow \{1,\ldots,q\}$\\ \ruck{0} while $\max_{a \in A} \boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) > 0$ do\\ \ruck{1} \ldots\\ \ruck{0} end while.\\\hline \end{tabular}} \caption{Pseudo-code of first modified active set algorithm.} \label{t: Active Set 2} \end{table} \begin{table}[h] \centerline{\bf\begin{tabular}{|l|} \hline \ruck{0} Algorithm $\boldsymbol{\psi} \leftarrow \mbox{ActiveSet3}(L,\widetilde{\boldsymbol{\psi}}^{\strut}(\cdot))$\\ \ruck{0} $\boldsymbol{\psi} \leftarrow \widetilde{\boldsymbol{\psi}}(\emptyset)$\\ \ruck{0} while $\boldsymbol{\psi} \not\in \mathcal{K}$ do\\ \ruck{1} $A \leftarrow A(\boldsymbol{\psi})$\\ \ruck{1} $\boldsymbol{\psi} \leftarrow \widetilde{\boldsymbol{\psi}}(A)$\\ \ruck{0} end while\\ \ruck{0} $A \leftarrow A(\boldsymbol{\psi})$\\ \ruck{0} while $\max_{a \in A} \boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) > 0$ do\\ \ruck{1} \ldots\\ \ruck{0} end while.\\\hline \end{tabular}} \caption{Pseudo-code of second modified active set algorithm.} \label{t: Active Set 3} \end{table} \paragraph{Possible extension I.} The assumption of linearly independent 
vectors $\boldsymbol{v}_1, \ldots, \boldsymbol{v}_q$ has been made for convenience and could be relaxed of course. In particular, one can extend the previous considerations easily to the situation where $\mathcal{K}$ consists of all vectors $\boldsymbol{\psi} \in \mathbb{R}^m$ such that $$ c_{i,1} \ \le \ \boldsymbol{v}_i^\top \boldsymbol{\psi} \ \le \ c_{i,2} $$ for $1 \le i \le q$ with numbers $- \infty \le c_{i,1} < c_{i,2} < \infty$. \paragraph{Possible extension II.} Again we drop assumption~\eqref{ass: linear independence} but assume that $c_1 = \cdots = c_q = 0$, so that $\mathcal{K}$ is a closed convex cone. Suppose further that we know a finite set $\mathcal{E}$ of generators of $\mathcal{K}$, i.e.\ every vector $\boldsymbol{\psi} \in \mathcal{K}$ may be written as $$ \boldsymbol{\psi} \ = \ \sum_{\boldsymbol{e} \in \mathcal{E}} \lambda_{\boldsymbol{e}} \boldsymbol{e} $$ with numbers $\lambda_{\boldsymbol{e}} \ge 0$. In that case, a point $\boldsymbol{\psi} \in \mathcal{K} \cap \mathrm{dom}(L)$ belongs to $\mathcal{K}_*$ if, and only if, \begin{equation} \nabla L(\boldsymbol{\psi})^\top \boldsymbol{\psi} \ = \ 0 \quad\text{and}\quad \max_{\boldsymbol{e} \in \mathcal{E}} \, \nabla L(\boldsymbol{\psi})^\top \boldsymbol{e} \ \le \ 0 . \label{eq: KKstar2} \end{equation} Now we can modify our basic procedure~2 as follows: Let $\boldsymbol{\psi} \in \mathcal{K} \cap \mathrm{dom}(L) \cap \mathcal{V}(A)$ with $A := A(\boldsymbol{\psi})$. If \eqref{eq: KKstar2} is violated, let $\boldsymbol{e}(\boldsymbol{\psi}) \in \mathcal{E}$ such that $\nabla L(\boldsymbol{\psi})^\top \boldsymbol{e}(\boldsymbol{\psi}) > 0$. Further let $s(\boldsymbol{\psi}), t(\boldsymbol{\psi}) > 0$ such that $\boldsymbol{\psi}_{\rm new} := s(\boldsymbol{\psi}) \boldsymbol{\psi} + t(\boldsymbol{\psi}) \boldsymbol{e}(\boldsymbol{\psi}) \in \mathcal{K}$ satisfies $L(\boldsymbol{\psi}_{\rm new}) > L(\boldsymbol{\psi})$. 
Then we replace $\boldsymbol{\psi}$ with $\boldsymbol{\psi}_{\rm new}$ and perform the first basic procedure. \subsection{The special case of fitting log-concave densities} Going back to our original problem, note that $\psi \in \mathcal{G}$ lies within $\mathcal{G}_{\mathrm{conc}}$ if, and only if, the corresponding vector $\boldsymbol{\psi}$ satisfies \begin{equation} \frac{\psi_{j+1} - \psi_j}{\delta_j} - \frac{\psi_j - \psi_{j-1}}{\delta_{j-1}} = \boldsymbol{v}_j^\top \boldsymbol{\psi} \ \le \ 0 \quad\mbox{for } j = 2,\ldots,m-1 , \label{eq: concavity of psi} \end{equation} where $\boldsymbol{v}_j = (v_{i,j})_{i=1}^m$ has exactly three nonzero components: $$ v_{j-1,j} \ := \ 1/\delta_{j-1} , \quad v_{j,j} \ := \ - (\delta_{j-1} + \delta_j)/(\delta_{j-1} \delta_j) , \quad v_{j+1,j} \ := \ 1/\delta_j . $$ Note that we changed the notation slightly by numbering the $m - 2$ constraint vectors from $2$ to $m-1$. This is convenient, because then $\boldsymbol{v}_j^\top \boldsymbol{\psi} \ne 0$ is equivalent to the corresponding function $\psi \in \mathcal{G}$ changing slope at $x_j$. Suitable basis vectors $\boldsymbol{b}_i$ are given, for instance, by $\boldsymbol{b}_1 := (1)_{i=1}^m$, $\boldsymbol{b}_m := (x_i)_{i=1}^m$ and $$ \boldsymbol{b}_j \ = \ \bigl( \min(x_i - x_j, 0) \bigr)_{i=1}^m, \quad 2 \le j < m . $$ For this particular problem it is convenient to rephrase the active set method in terms of {\sl inactive} constraints, i.e.\ true {\sl knots} of functions in $\mathcal{G}$. Throughout let $I = \{i(1),\ldots,i(k)\}$ be a subset of $\{1,2,\ldots,m\}$ with $k \ge 2$ elements $1 = i(1) < \cdots < i(k) = m$, and let $\mathcal{G}(I)$ be the set of all functions $\psi \in \mathcal{G}$ which are linear on all intervals $[x_{i(s)}, x_{i(s+1)}]$, $1 \le s < k$. This set corresponds to $\mathcal{V}(A)$ with $A := \{1,\ldots,m\} \setminus I$. 
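The constraint vectors $\boldsymbol{v}_j$ and basis vectors $\boldsymbol{b}_j$ of this subsection can be generated mechanically; the sketch below (our own illustration, assuming NumPy) makes the defining property of the basis easy to verify: for these particular choices, $\boldsymbol{v}_j^\top \boldsymbol{b}_j = -1$ while $\boldsymbol{v}_i^\top \boldsymbol{b}_j = 0$ for $i \ne j$.

```python
import numpy as np

def concavity_constraints(x):
    """Build V (rows v_j, j = 2,...,m-1 in the paper's 1-based numbering) and
    the basis B (rows b_1,...,b_m) for knots x_1 < ... < x_m."""
    x = np.asarray(x, float)
    m, d = len(x), np.diff(x)
    V = np.zeros((m, m))                      # rows 0 and m-1 stay unused
    for j in range(1, m - 1):
        V[j, j - 1] = 1.0 / d[j - 1]
        V[j, j] = -(d[j - 1] + d[j]) / (d[j - 1] * d[j])
        V[j, j + 1] = 1.0 / d[j]
    B = np.empty((m, m))
    B[0] = 1.0                                # b_1: the constant function 1
    B[m - 1] = x                              # b_m: the identity, psi(x) = x
    for j in range(1, m - 1):
        B[j] = np.minimum(x - x[j], 0.0)      # kink of slope change -1 at x_j
    return V, B
```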
A function $\psi \in \mathcal{G}(I)$ is uniquely determined by the vector $\bigl( \psi(x_{i(s)}) \bigr)_{s=1}^k$, and one may write $$ L(\psi) \ = \ \sum_{s=1}^k p_s(I) \psi(x_{i(s)}) - \sum_{s=1}^{k-1} (x_{i(s+1)} - x_{i(s)}) J \bigl( \psi(x_{i(s)}), \psi(x_{i(s+1)}) \bigr) $$ with suitable probability weights $p_1(I), \ldots, p_k(I) > 0$. Precisely, writing $$ \psi(x) \ = \ \frac{x_{i(s+1)} - x}{x_{i(s+1)} - x_{i(s)}} \, \psi(x_{i(s)}) + \frac{x - x_{i(s)}}{x_{i(s+1)} - x_{i(s)}} \, \psi(x_{i(s+1)}) $$ for $1 \le s < k$ and $x_{i(s)} \le x \le x_{i(s+1)}$ yields the explicit formulae \begin{eqnarray*} p_1(I) & = & \sum_{i=1}^{i(2)-1} \frac{x_{i(2)} - x_i}{x_{i(2)} - x_1} \, p_i^{} , \\ p_s(I) & = & \sum_{i=i(s-1)+1}^{i(s+1)-1} \min \Bigl( \frac{x_i - x_{i(s-1)}}{x_{i(s)} - x_{i(s-1)}}, \frac{x_{i(s+1)} - x_i}{x_{i(s+1)} - x_{i(s)}} \Bigr) \, p_i^{} \quad\mbox{for } 2 \le s < k ,\\ p_k(I) & = & \sum_{i=i(k-1)+1}^m \frac{x_i - x_{i(k-1)}}{x_m - x_{i(k-1)}} \, p_i^{} . \end{eqnarray*} Consequently, the computations of $\widetilde{\psi}$ and $\widetilde{\psi}^{(I)} := \mathop{\rm argmax}_{\psi \in \mathcal{G}(I)} L(\psi)$ are optimization problems of the same type. Since the vectors $\boldsymbol{b}_2,\ldots,\boldsymbol{b}_m$ correspond to the functions $\Delta_2, \ldots, \Delta_m$ in $\mathcal{G}$ with \begin{equation} \Delta_j(x) \ := \ \min(x - x_j, 0) , \label{eq: def Del_j} \end{equation} checking the inequality $\boldsymbol{b}_a^\top \nabla L(\boldsymbol{\psi}) \le 0$ for $a \in A$ amounts to checking whether the directional derivative \begin{equation} H_j(\psi) \ := \ \sum_{i=1}^m p_i \Delta_j(x_i) - \int_{x_1}^{x_m} \Delta_j(x) \exp \psi(x) \, dx \label{eq: def H_j} \end{equation} is nonpositive for all $j \in \{1,\ldots,m\} \setminus I$. If $\psi = \widetilde{\psi}^{(I)}$ and $j \not\in I$, the inequality $H_j(\psi) > 0$ means that $L(\psi)$ could be strictly increased by allowing an additional knot at $x_j$.
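The reduction of the weights to a given knot set $I$ amounts to redistributing each $p_i$ onto its two neighbouring knots with hat-function weights; a compact sketch of this bookkeeping (our own illustration, 0-based indexing, assuming NumPy):

```python
import numpy as np

def reduced_weights(x, p, I):
    """Weights p_s(I) for the knot set I (0-based indices; must contain the
    first and last index). Each p_i is split between the two neighbouring
    knots proportionally to the hat functions, matching the formulae above."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    knots = sorted(I)
    xk = x[knots]
    pI = np.zeros(len(knots))
    for xi, pi in zip(x, p):
        s = np.searchsorted(xk, xi, side="right") - 1
        if s == len(knots) - 1:          # xi is the last knot itself
            pI[-1] += pi
            continue
        w = (xk[s + 1] - xi) / (xk[s + 1] - xk[s])
        pI[s] += w * pi                  # left knot share
        pI[s + 1] += (1.0 - w) * pi      # right knot share
    return pI
```

For example, with $x = (0,1,2,3)$, uniform weights and only the endpoints as knots, this returns $(1/2, 1/2)$; the weights $p_s(I)$ always sum to $1$.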
\begin{Example} \label{ex: Example 1} Figure~\ref{fig: Example_F} shows the empirical distribution function of $n = 25$ simulated random variables from a Gumbel distribution, while the smooth distribution function is the estimator $\widehat{F}(r) := \int_{-\infty}^r \exp \widehat{\psi}(x) \, dx$. Figure~\ref{fig: Example_phi} illustrates the computation of the log-density $\widehat{\psi}$ itself. Each picture shows the current function $\psi$ together with the new candidate function $\psi_{\rm cand}$. We followed the algorithm in Table~\ref{t: Active Set 2}, so the first (upper left) picture shows the starting point, a linear function $\psi$ on $[x_1, x_{25}]$, together with $\psi_{\rm cand}$ having an additional knot in $(x_1,x_{25})$. Since $\psi_{\rm cand}$ is concave, it becomes the new function $\psi$ shown in the second (upper right) plot. In the third (lower left) plot one sees the situation where adding another knot resulted in a non-concave function $\psi_{\rm cand}$. So the current function $\psi$ was replaced with a convex combination of $\psi$ and $\psi_{\rm cand}$. The latter new function $\psi$ and the almost identical final fit $\widehat{\psi}$ are depicted in the fourth (lower right) plot. 
\begin{figure}[h] \centerline{\includegraphics[width=10cm]{Example_F}} \caption{Estimated distribution functions for $n = 25$ data points.} \label{fig: Example_F} \end{figure} \begin{figure}[h] \includegraphics[width=7.4cm]{Example_phi1} \hfill \includegraphics[width=7.4cm]{Example_phi2} \includegraphics[width=7.4cm]{Example_phi3} \hfill \includegraphics[width=7.4cm]{Example_phi4} \caption{Estimating the log-density for $n = 25$ data points.} \label{fig: Example_phi} \end{figure} \end{Example} \section{Censored or binned data} \label{Censored Data} In the current and the next section we consider independent random variables $X_1$, $X_2$, \ldots, $X_n$ with unknown distribution $P$ on $(0,\infty]$ having sub-probability density $f = \exp\phi$ on $(0,\infty)$, where $\phi$ is concave and upper semicontinuous. In many applications the observations $X_i$ are not completely available. For instance, let the $X_i$ be event times for $n$ individuals in a biomedical study, where $X_i = \infty$ means that the event in question does not happen at all. If the study ends at time $c_i > 0$ from the $i$-th unit's viewpoint, while $X_i > c_i$, then we have a ``right-censored'' observation and know only that $X_i$ is contained in the interval $\widetilde{X}_i = (c_i, \infty]$. In other settings one has purely ``interval-censored'' data: For the $i$-th observation one knows only which of the given intervals $(0,t_{i,1}], (t_{i,1}, t_{i,2}], \ldots, (t_{i,m(i)},\infty]$ contains $X_i$, where $0 < t_{i,1} < \cdots < t_{i,m(i)} < \infty$. If these candidate intervals are the same for all observations, one speaks of binned data. A related situation is that of rounded observations, e.g.\ when we observe $\lceil X_i \rceil$ rather than $X_i$. In all these settings we observe independent random intervals $\widetilde{X}_1$, $\widetilde{X}_2$, \ldots, $\widetilde{X}_n$.
More precisely, we assume that either $\widetilde{X}_i = (L_i, R_i] \ni X_i$ with $0 \le L_i < R_i \le \infty$, or $\widetilde{X}_i$ consists only of the one point $L_i := R_i := X_i \in (0,\infty)$. The normalized log-likelihood for this model reads \begin{eqnarray} \label{eq: log-likelihood censored} \bar{\ell}(\phi) & := & n_{}^{-1} \sum_{i=1}^n \biggl( 1\{L_i = R_i\} \phi(X_i) \\ && \qquad\qquad\qquad + \ 1\{L_i < R_i\} \log \Bigl( \int_{L_i}^{R_i} \exp \phi(x) \, dx + 1\{R_i = \infty\} p_\infty \Bigr) \biggr) , \nonumber \end{eqnarray} where $$ p_\infty \ := \ 1 - \int_0^\infty \exp\phi(x) \, dx \ \in \ [0,1] . $$ \section{An EM algorithm} \label{EM} Maximizing the log-likelihood function $\bar{\ell}(\phi)$ for censored data is a non-trivial task and will be treated in detail elsewhere. Here we only indicate how this can be achieved in principle, assuming for simplicity that $P(\{\infty\}) = 0$, i.e.\ $\int_0^\infty \exp\phi(x) \, dx = 1$ and $p_\infty = 0$. In this case, the log-likelihood simplifies to $$ \bar{\ell}(\phi) \ = \ n_{}^{-1} \sum_{i=1}^n \biggl( 1\{L_i = R_i\} \phi(X_i) + 1\{L_i < R_i\} \log \Bigl( \int_{L_i}^{R_i} \exp \phi(x) \, dx \Bigr) \biggr) . $$ Again one may get rid of the constraint $\int_0^\infty \exp\phi(x) \, dx = 1$ by considering \begin{equation} \label{eq: log-likelihood censored simple} \bar{L}(\phi) \ := \ \bar{\ell}(\phi) - \int_0^\infty \exp\phi(x) \, dx \end{equation} for arbitrary concave and upper semicontinuous functions $\phi : (0,\infty) \to [-\infty,\infty)$. A major problem is that $\bar\ell(\phi)$ is not linear but convex in $\phi$. 
Namely, for $v : (0,\infty) \to \mathbb{R}$ and $0 \le L < R \le \infty$, \begin{equation} \label{eq: dir deriv} \frac{d^a}{dt^a} \Big\vert_{t=0}^{} \log \Bigl( \int_L^R \exp(\phi(x) + t v(x)) \, dx \Bigr) \ = \ \begin{cases} \mathop{\rm I\!E}\nolimits_\phi \bigl( v(X) \,\big|\, X \in (L,R] \bigr) & \text{if} \ a = 1 , \\ \mathrm{Var}_\phi \bigl( v(X) \,\big|\, X \in (L,R] \bigr) & \text{if} \ a = 2 . \end{cases} \end{equation} Thus we propose to maximize $\bar{\ell}(\phi)$ iteratively as follows: Starting from a function $\phi$ with $\bar{L}(\phi) > - \infty$, we replace the target function $\bar{L}(\phi_{\rm new})$ with $$ \widetilde{L}(\phi_{\rm new} \,|\, \phi) \ := \ \frac{d}{dt} \Big\vert_{t=0}^{} \bar{\ell} \bigl( \phi + t (\phi_{\rm new} - \phi) \bigr) - \int_0^\infty \exp\phi_{\rm new}(x) \, dx . $$ By means of (\ref{eq: dir deriv}), this may be written as \begin{equation} \label{eq: surrogate} \widetilde{L}(\phi_{\rm new} \,|\, \phi) \ = \ \mathrm{const}(\phi) + \int \phi_{\rm new}(x) \, P(dx \,|\, \phi) - \int_0^\infty \exp\phi_{\rm new}(x) \, dx , \end{equation} where $$ P(\cdot \,|\, \phi) \ := \ n_{}^{-1} \sum_{i=1}^n \biggl( 1\{L_i = R_i\} \delta_{X_i}^{} + 1\{L_i < R_i\} \mathcal{L}_\phi \bigl( X \,\big|\, X \in (L_i,R_i] \bigr) \biggr) , $$ a probability measure depending on the data and on $\phi$. In other words, for any Borel subset $B$ of $(0,\infty)$, $$ P(B \,|\, \phi) \ := \ n_{}^{-1} \sum_{i=1}^n \biggl( 1\{L_i = R_i \in B\} + 1\{L_i < R_i\} \frac{\int_{B \cap (L_i,R_i)} \exp\phi(x) \, dx}{\int_{(L_i,R_i)} \exp\phi(x) \, dx} \biggr) .
$$ Note also that $\widetilde{L}(\phi_{\rm new} \,|\, \phi)$ equals the conditional expectation of the complete-data log-likelihood $L(\phi_{\rm new})$, given the available data and assuming the current $\phi$ to be the true log-density: $$ \widetilde{L}(\phi_{\rm new} \,|\, \phi) \ = \ \mathop{\rm I\!E}\nolimits_\phi \bigl( L(\phi_{\rm new}) \,\big|\, X_i \in \widetilde{X}_i \ \text{for} \ 1 \le i \le n \bigr) , $$ where the $\widetilde{X}_i$ are treated temporarily as fixed. After approximating the probability measure $P(\cdot \,|\, \phi)$ by a discrete distribution with finite support, one can maximize $\widetilde{L}(\phi_{\rm new} \,|\, \phi)$ over all concave functions $\phi_{\rm new}$ with the active-set algorithm presented in Section~\ref{Active Set}. Then we replace $\phi$ with $\phi_{\rm new}$ and repeat this procedure until the change of $\phi$ becomes negligible. \section{Auxiliary results and proofs} \label{Proofs} \paragraph{Explicit formulae for $J$ and some of its partial derivatives.} Recall the auxiliary function $J(r,s) := \int_0^1 \exp((1 - t)r + ts) \, dt$. One may write explicitly $$ J(r,s) = J(s,r) \ = \ \begin{cases} \bigl( \exp(r) - \exp(s) \bigr) \big/ (r - s) & \mbox{if } r \ne s , \\ \exp(r) & \mbox{if } r = s , \end{cases} $$ or utilize the fact that $J(r,s) = \exp(r) J(0, s-r)$ with $J(0,0) = 1$ and $$ J(0, y) \ = \ (\exp(y) - 1) / y \ = \ \sum_{k=0}^\infty \frac{y^k}{(k+1)!} . $$ To compute the partial derivatives $J_{ab}(r,s)$ of $J(r,s)$, one may utilize the facts that $J_{ab}(r,s) = J_{ba}(s,r) = \exp(r) J_{ab}(0, s-r)$. Moreover, elementary calculations reveal that \begin{eqnarray*} J_{10}(0,y) & = & \bigl( \exp(y) - 1 - y \bigr) \big/ y^2 \ = \ \sum_{k=0}^\infty \frac{y^k}{(k+2)!} , \\ J_{20}(0,y) & = & 2 \bigl( \exp(y) - 1 - y - y^2/2 \bigr) \big/ y^3 \ = \ \sum_{k=0}^\infty \frac{2 y^k}{(k+3)!} , \\ J_{11}(0,y) & = & \bigl( y (\exp(y) + 1) - 2 (\exp(y) - 1) \bigr) \big/ y^3 \ = \ \sum_{k=0}^\infty \frac{(k+1) y^k}{(k+3)!} .
\end{eqnarray*} The Taylor series may be deduced as follows: \begin{eqnarray*} J_{ab}(0,y) & = & \int_0^1 (1 - t)^a t^b e^{ty} \, dt \\ & = & \sum_{k=0}^\infty \frac{y^k}{k!} \int_0^1 (1 - t)^a t^{b+k} \, dt \\ & = & \sum_{k=0}^\infty \frac{y^k}{k!} \frac{a! (b+k)!}{(k + a + b + 1)!} \\ & = & \sum_{k=0}^\infty \frac{a! (b+k)! \, y^k}{k! (k + a + b + 1)!} , \end{eqnarray*} according to the general formula $\int_0^1 (1 - t)^k t^\ell \, dt = k! \ell! / (k+\ell + 1)!$ for integers $k,\ell \ge 0$. Numerical experiments revealed that a fourth degree Taylor approximation for $J_{ab}(0,y)$ is advisable and works very well if $$ |y| \ \le \ \begin{cases} 0.005 & (a = b = 0) , \\ 0.01 & (a + b = 1) , \\ 0.02 & (a + b = 2) . \end{cases} $$ \paragraph{Explicit formulae for the gradient and hessian matrix of $L$.} At $\boldsymbol{\psi} \in \mathbb{R}^m$ these are given by \begin{eqnarray*} \frac{\partial}{\partial \psi_k} L(\boldsymbol{\psi}) & = & p_k^{} - \begin{cases} \delta_1 J_{10}(\psi_1, \psi_2) & \mbox{if } k = 1 , \\ \delta_{k-1} J_{01}(\psi_{k-1},\psi_k) + \delta_k J_{10}(\psi_k,\psi_{k+1}) & \mbox{if } 2 \le k < m , \\ \delta_{m-1} J_{01}(\psi_{m-1},\psi_m) & \mbox{if } k = m , \end{cases} \\ - \, \frac{\partial^2}{\partial \psi_j \partial \psi_k} L(\boldsymbol{\psi}) & = & \begin{cases} \delta_1 J_{20}(\psi_1, \psi_2) & \mbox{if } j = k = 1 , \\ \delta_{k-1} J_{02}(\psi_{k-1},\psi_k) + \delta_k J_{20}(\psi_k,\psi_{k+1}) & \mbox{if } 2 \le j = k < m , \\ \delta_{m-1} J_{02}(\psi_{m-1},\psi_m) & \mbox{if } j = k = m , \\ \delta_j J_{11}(\psi_j, \psi_k) & \mbox{if } 1 \le j = k-1 < m , \\ 0 & \mbox{if } |j - k| > 1 . \end{cases} \end{eqnarray*} \paragraph{Proof of \eqref{eq: coercivity of L}.} In what follows let $\min(\boldsymbol{v})$ and $\max(\boldsymbol{v})$ denote the minimum and maximum, respectively, of all components of a vector $\boldsymbol{v}$. Moreover let $R(\boldsymbol{v}) := \max(\boldsymbol{v}) - \min(\boldsymbol{v})$. 
Then with $\boldsymbol{p} := (p_j)_{j=1}^m$ and $\boldsymbol{\delta} = (\delta_k)_{k=1}^{m-1}$, note first that \begin{eqnarray*} L(\boldsymbol{\psi}) & \le & \max(\boldsymbol{\psi}) - (x_m - x_1) \exp(\min(\boldsymbol{\psi})) \\ & = & R(\boldsymbol{\psi}) + \min(\boldsymbol{\psi}) - (x_m - x_1) \exp(\min(\boldsymbol{\psi})) \\ & \to & - \infty \quad\mbox{as } \|\boldsymbol{\psi}\| \to \infty \mbox{ while } R(\boldsymbol{\psi}) \le r_o \end{eqnarray*} for any fixed $r_o < \infty$. Secondly, let $\widetilde{\psi}_j := \psi_j - \min(\boldsymbol{\psi})$. Then $\min(\widetilde{\boldsymbol{\psi}}) = 0$, $\max(\widetilde{\boldsymbol{\psi}}) = R(\boldsymbol{\psi})$, whence \begin{eqnarray*} L(\boldsymbol{\psi}) & = & \sum_{i=1}^m p_i \widetilde{\psi}_i + \min(\boldsymbol{\psi}) - \exp(\min(\boldsymbol{\psi})) \int_{x_1}^{x_m} \exp(\widetilde{\psi}(x)) \, dx \\ & \le & \left( 1 - \min(\boldsymbol{p}) \right) R(\boldsymbol{\psi}) + \sup_{s \in \mathbb{R}} \Bigl( s - \exp(s) \int_{x_1}^{x_m} \exp(\widetilde{\psi}(x)) \, dx \Bigr) \\ & = & \left( 1 - \min(\boldsymbol{p}) \right) R(\boldsymbol{\psi}) - \log \int_{x_1}^{x_m} \exp(\widetilde{\psi}(x)) \, dx - 1 \\ & = & \left( 1 - \min(\boldsymbol{p}) \right) R(\boldsymbol{\psi}) - \log \Bigl( \sum_{k=1}^{m-1} \delta_k J(\widetilde{\psi}_k, \widetilde{\psi}_{k+1}) \Bigr) - 1 \\ & \le & \left( 1 - \min(\boldsymbol{p}) \right) R(\boldsymbol{\psi}) - \log \Bigl( \min( \boldsymbol{\delta} ) J(0, R(\boldsymbol{\psi})) \Bigr) - 1 \\ & = & \left( 1 - \min(\boldsymbol{p}) \right) R(\boldsymbol{\psi}) - \log J(0, R(\boldsymbol{\psi})) - \log(e \min(\boldsymbol{\delta})) , \end{eqnarray*} where we used the fact that $\max_{s \in \mathbb{R}} (s - \exp(s) A) = - \log A - 1$ for any $A > 0$. 
Moreover, for $r > 0$, $$ - \log J(0, r) \ = \ \log \Bigl( \frac{r}{e^r - 1} \Bigr) \ = \ - r + \log \Bigl( \frac{r}{1 - e^{-r}} \Bigr) \ \le \ - r + \log(1 + r) , $$ whence $$ L(\boldsymbol{\psi}) \ \le \ - \min(\boldsymbol{p}) R(\boldsymbol{\psi}) + \log(1 + R(\boldsymbol{\psi})) - \log(e\min(\boldsymbol{\delta})) \ \to \ - \infty \quad\mbox{as } R(\boldsymbol{\psi}) \to \infty . \eqno{\Box} $$ \paragraph{Proof of Theorem~\ref{thm: 1dim functional}.} It follows from strict concavity of $L$ and \eqref{eq: 1st dir deriv L} that the function $\psi$ equals $\check{\psi}$ if, and only if, \begin{equation} \sum_{i=1}^m p_i v(x_i) \ = \ \int_{x_1}^{x_m} v(x) f(x) \, dx \label{eq: gradient condition} \end{equation} for any function $v \in \mathcal{G}$. Note that any vector $\boldsymbol{v} \in \mathbb{R}^m$ is a linear combination of the vectors $\boldsymbol{v}^{(1)}$, $\boldsymbol{v}^{(2)}$, \ldots, $\boldsymbol{v}^{(m)}$, where $$ \boldsymbol{v}_{}^{(k)} \ = \ \left( 1\{i \le k\} \right)_{i=1}^m . $$ With the corresponding functions $v^{(k)} \in \mathcal{G}$ we conclude that $\psi$ maximizes $L$ if, and only if, \begin{equation} \sum_{i=1}^k p_i \ = \ \int_{x_1}^{x_m} v_{}^{(k)}(x) f(x) \, dx \label{eq: special gradient condition} \end{equation} for $1 \le k \le m$. Now the vector $\boldsymbol{v}^{(m)}$ corresponds to the constant function $v^{(m)} := 1$, so that \eqref{eq: special gradient condition} with $k = m$ is equivalent to $F(x_m) = 1$. 
In case of $1 \le k < m$, $$ v_{}^{(k)}(x) \ := \ \begin{cases} 1 & \mbox{if } x \le x_k , \\ (x_{k+1} - x)/\delta_k & \mbox{if } x_k \le x \le x_{k+1} , \\ 0 & \mbox{if } x \ge x_{k+1} , \end{cases} $$ and it follows from Fubini's theorem that \begin{eqnarray*} \int_{x_1}^{x_m} v_{}^{(k)}(x) f(x) \, dx & = & \int_{x_1}^{x_m} \int_0^1 1\{u \le v_{}^{(k)}(x)\} \, du \, f(x) \, dx \\ & = & \int_0^1 \int_{x_1}^{x_m} 1\{x \le x_{k+1} - u \delta_k\} f(x) \, dx \, du \\ & = & \int_0^1 F(x_{k+1} - u \delta_k) \, du \\ & = & \delta_k^{-1} \int_{x_k}^{x_{k+1}} F(r) \, dr . \end{eqnarray*} These considerations yield the characterization of the maximizer of $L$. As for the first and second moments, equation~\eqref{eq: gradient condition} with $v(x) := x$ yields the assertion that $\sum_{i=1}^m p_i x_i$ equals $\int_{x_1}^{x_m} x f(x) \, dx$. Finally, let $\boldsymbol{v} := (x_i^2)_{i=1}^m$ and $v \in \mathcal{G}$ the corresponding piecewise linear function. Then \begin{eqnarray*} \sum_{i=1}^m p_i^{} x_i^2 - \int_{x_1}^{x_m} x^2 f(x) \, dx & = & \int_{x_1}^{x_m} (v(x) - x^2) f(x) \, dx \\ & = & \sum_{k=1}^{m-1} \int_{x_k}^{x_{k+1}} (x - x_k)(x_{k+1} - x) f(x) \, dx \\ & = & \sum_{k=1}^{m-1} \delta_k^3 J_{11}(\psi_k, \psi_{k+1}) . \end{eqnarray*}\\[-9ex] \strut \hfill $\Box$ \paragraph{Proof of Theorem~\ref{thm: KKstar and VVA}.} It is well known from convex analysis that $\boldsymbol{\psi} \in \mathcal{K} \cap \mathrm{dom}(L)$ belongs to $\mathcal{K}_*$ if, and only if, $\boldsymbol{v}^\top \nabla L(\boldsymbol{\psi}) \le 0$ for any vector $\boldsymbol{v} \in \mathbb{R}^m$ such that $\boldsymbol{\psi} + t \boldsymbol{v} \in \mathcal{K}$ for some $t > 0$. By the special form of $\mathcal{K}$, the latter condition on $\boldsymbol{v}$ is equivalent to $\boldsymbol{v}_a^\top \boldsymbol{v} \ge 0$ for all $a \in A(\boldsymbol{\psi})$. In other words, $\boldsymbol{v} = \sum_{i=1}^m \lambda_i \boldsymbol{b}_i$ with $\lambda_a \ge 0$ for all $a \in A(\boldsymbol{\psi})$.
Thus $\boldsymbol{\psi} \in \mathcal{K}$ belongs to $\mathcal{K}_*$ if, and only if, it satisfies \eqref{eq: KKstar}. Similarly, a vector $\boldsymbol{\psi} \in \mathcal{V}(A) \cap \mathrm{dom}(L)$ belongs to $\mathcal{V}_*(A)$ if, and only if, $\boldsymbol{v}^\top \nabla L(\boldsymbol{\psi}) = 0$ for any vector $\boldsymbol{v}$ in the linear space $$ \bigl\{ \boldsymbol{v} \in \mathbb{R}^m : \boldsymbol{v}_a^\top \boldsymbol{v} = 0 \mbox{ for all } a \in A \bigr\} \ = \ \mathrm{span} \bigl\{ \boldsymbol{b}_i : i \in \{1,\ldots,m\} \setminus A \bigr\} . $$ But this requirement is obviously equivalent to \eqref{eq: VVA}. \hfill $\Box$ \paragraph{Acknowledgements.} This work was partially supported by the Swiss National Science Foundation. We are grateful to Charles Geyer for drawing our attention to active set methods and to Geurt Jongbloed for stimulating discussions about shape-constrained estimation. \paragraph{Software.} The methods of Rufibach~(2006, 2007) as well as the active set method from Section~\ref{Active Set} are implemented in the R package {\tt logcondens} by Rufibach and D\"{u}mbgen (2009), available from CRAN. Corresponding Matlab code is available from the first author's homepage on {\tt www.stat.unibe.ch}.
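As a numerical footnote to the formulae for $J$ in Section~\ref{Proofs}, the following Python sketch (ours, independent of the {\tt logcondens} package) evaluates $J(0,y)$ and $J_{10}(0,y)$ with a fourth-degree Taylor approximation below thresholds of the kind quoted there, avoiding cancellation in the closed-form expressions for small $|y|$:

```python
import math

def J00(y, tol=0.005):
    """J(0, y) = (exp(y) - 1)/y, with a Taylor fallback near y = 0.

    The closed form suffers from cancellation for small |y|; below the
    threshold we use the series sum_k y^k/(k+1)! truncated at degree 4.
    """
    if abs(y) <= tol:
        return sum(y**k / math.factorial(k + 1) for k in range(5))
    return (math.exp(y) - 1.0) / y

def J10(y, tol=0.01):
    """J_10(0, y) = (exp(y) - 1 - y)/y^2, series sum_k y^k/(k+2)!."""
    if abs(y) <= tol:
        return sum(y**k / math.factorial(k + 2) for k in range(5))
    return (math.exp(y) - 1.0 - y) / (y * y)

def J(r, s):
    """General J(r, s) via the identity J(r, s) = exp(r) * J(0, s - r)."""
    return math.exp(r) * J00(s - r)
```

Within the quoted thresholds the truncation error of the degree-four series is far below machine precision, so the two branches agree to full accuracy at the switch-over point.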
\section{Resonances} We study an example of mass parameter $\mu =0.0041$ and eccentricity $e' = 0.02$ of the primary in the framework of the ERTBP. Following P\'{a}ez \& Efthymiopoulos 2014 (hereafter, P\&E14), we describe Trojan orbits in terms of modified Delaunay variables given by \begin{displaymath} x = \sqrt{a} -1, \quad y = \sqrt{a} \left( \sqrt{1-e^2} -1 \right), \quad \Delta u = \lambda - \frac{\pi}{3} - u_0, \quad \omega~~, \end{displaymath} where $a$, $e$, $\lambda$ and $\omega$ are the semi-major axis, eccentricity, mean longitude, and argument of the perihelion of the Trojan body, and $u_0$ is such that $\Delta u = 0$ for the $1$:$1$ short period orbit at $L_4$. In this problem, the secondary resonances (see P\&E14) are of the form $m_f\omega_f+m_s\omega_s+ m_g\omega_g=0$, involving the fast frequency $\omega_f$, the synodic frequency $\omega_s$ and the secular frequency $\omega_g$ of the Trojan body. Resonances are denoted below as [$m_f$:$m_s$:$m_g$]. The most important resonances, called the 'main' secondary resonances, correspond to the condition $\omega_f - n \omega_s = 0$ ([$1$:$-n$:$0$]). For $\mu=0.0041$, this corresponds to [$1$:$-6$:$0$]. \begin{figure} \vspace*{-0.5 cm} \hspace*{-0.5cm} \includegraphics[width=0.4\textwidth,angle=270]{Figure4.eps} \includegraphics[width=0.40\textwidth,angle=270]{scalecolor2.eps} \vspace*{-5.1cm} \hspace*{10.5cm} \includegraphics[width=0.25\textwidth,angle=270]{histoesc.eps} \vspace*{0.4 cm} \caption{Left: FLI map for initial conditions described in the text where various secondary resonances are distinguished in the space of proper elements ($\Delta u$,$e_p$). Middle: Color distribution of escaping times for the same initial conditions (color scale indicated). Right: distribution of the escaping times of the orbits.} \label{fig1} \end{figure} \vspace*{-0.5cm} \section{Diffusion and stability} Numerical experiments show that, for $e' > 0$, at least two different mechanisms of diffusion are present.
Along non-overlapping resonances, a slow (and practically undetectable) Arnold-like diffusion (Arnold, 1964) takes place. On the other hand, for initial conditions along partly overlapping resonances, due to the phenomenon of pulsating separatrices (P\&E14), we observe a faster 'modulational' diffusion (Chirikov et al., 1985) leading to relatively fast escapes. In order to distinguish which parts of the resonant web give rise to each behavior, we integrate 3600 initial conditions with $0.33 \leq \Delta u \leq 0.93$ and $0 \leq e_p \leq 0.06$, where $\Delta u$ (libration angle) and $e_p$ (proper eccentricity) are proper elements (see Efthymiopoulos and P\'{a}ez, this volume). We visualize the resonance web by color maps of the Fast Lyapunov Indicator FLI (Froeschl\'{e} et al., 2000) of the orbits. The resonances are identified by Frequency Analysis (Laskar, 1990). We integrate all orbits up to five different integration times, ranging from $10^3$ to $10^7$ periods of the primaries. After each integration, the initial conditions are categorized as {\bf \emph{Regular}} (if $\Psi (t) < \log_{10} (\frac{N}{10})$, where $\Psi$ denotes the FLI value and $N$ is the total number of integration periods), {\bf \emph{Escaping}} (if the orbit undergoes a sudden jump in the numerical energy error greater than $10^{-3}$) or {\bf \emph{Transition}} (neither Regular nor Escaping). {\small \begin{center} \vspace*{0.1cm} \begin{tabular}{| c c c c |} \hline N.\ of periods & Regular orbits & Transition orbits & Escaping orbits \\ \hline $10^3$ & 1220 ($33.8 \%$) & 2027 ($56.3 \%$) & 353 ($09.9\%$) \\ $10^4$ & 1263 ($35.0 \%$) & 1388 ($38.5 \%$) & 946 ($26.5\%$) \\ $10^5$ & 1296 ($36.0 \%$) & 966 ($26.8 \%$) & 1338 ($37.2\%$) \\ $10^6$ & 1299 ($36.1 \%$) & 699 ($19.4 \%$) & 1602 ($44.5\%$) \\ $10^7$ & 1309 ($36.3 \%$) & 603 ($16.8 \%$) & 1688 ($46.9\%$) \\ \hline \end{tabular} \vspace*{0.1cm} \end{center} \vspace{-0.1cm} } After $10^7$ periods, $46.9\%$ of the orbits have escaped.
However, a significant portion ($16.8\%$) still remains trapped, despite having a high FLI value. Figure \ref{fig1} summarizes the results. The histogram in the right panel shows two distinct timescales. The first peak ($10^3$ periods) corresponds to fast escapes, and the second ($10^5$ periods) to slow escapes. When we compare the FLI map (left) with the color distribution of the escaping times (middle), we find that the majority of fast escaping orbits lie within the chaotic sea surrounding the secondary resonances. The thin chaotic layers delimiting the resonances provide both slowly escaping orbits and \emph{transition} orbits (a \emph{sticky} set of initial conditions that do not escape after $10^7$ periods). For escaping orbits, beyond $t \sim 10^5$ periods, the distribution of the escape times is given by $P(t_{esc})\propto t_{esc}^{-\alpha},\,\alpha\approx 0.8$, while the sticky orbits exhibit features of 'stable chaos' (Milani \& Nobili, 1992), since their Lyapunov times are much shorter than $10^7$ periods. \vspace{0.5cm} \noindent \large{{\bf Acknowledgements:}} R.I.P. was supported by the Astronet-II Training Network (PITN-GA-2011-289240). C.E. was supported by the Research Committee of the Academy of Athens (Grant 200/815).
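The three-way orbit classification used above is easy to state programmatically. The following Python sketch is our own illustration with hypothetical inputs; we assume the escape test takes precedence over the FLI test:

```python
import math

def classify_orbit(fli, n_periods, energy_jump):
    """Classify an orbit after integration over n_periods primary periods.

    Escaping:   sudden jump in the numerical energy error above 1e-3
    Regular:    FLI value Psi < log10(n_periods / 10)
    Transition: neither Regular nor Escaping
    """
    if energy_jump > 1e-3:
        return "Escaping"
    if fli < math.log10(n_periods / 10):
        return "Regular"
    return "Transition"
```

For $N = 10^7$ periods the Regular threshold is $\log_{10}(10^6) = 6$, so a non-escaping orbit with FLI value above 6 is counted as a Transition (sticky) orbit.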
\section{Introduction} \label{sec:1} There is a wealth of empirical information regarding photon fusion reactions to two mesons both at threshold and above~\cite{TPC,MARK,DESY,KEK,DORIS,SAN,KKpm1,TASSO,CELLO,pieta}. At low energy these reactions provide stringent constraints on our understanding of broken chiral symmetry and the way mesons and photons interact. At higher energy they reveal a variety of resonance structure that reflects on the importance of final state correlations and unitarity in strong interaction physics. Some of these reactions have been analyzed using chiral perturbation theory~\cite{dono93}, dispersion relations~\cite{DISPERSION} and also effective models~\cite{EFFECTIVE}. One-loop chiral perturbation theory~\cite{ONE} does well in the charged channels, but yields results that are at odds with the data in the chargeless sector, suggesting that important correlations are at work in the final states. Some of these shortcomings have been removed by a recent two-loop calculation~\cite{TWO} and the help of a few parameters that are fixed by resonance saturation. The results are overall in agreement with an early dispersion analysis for $\gamma\gamma\rightarrow \pi^0\pi^0$~\cite{DISPERSION}. Effective models using aspects of chiral symmetry and s-channel unitarisation have revealed the importance of final state interactions in most of these reactions~\cite{EFFECTIVE,OSET}. Recently, a global and unified understanding of broken chiral symmetry was reached in the form of a master formula for the extended S-matrix \cite{YAZA}. A number of reaction processes involving two light quarks were worked out and shown to be interdependent beyond threshold. The approach embodies the essentials of broken chiral symmetry, unitarity and crossing symmetry to all orders in the external momenta. By power counting it agrees with standard chiral perturbation theory in the threshold region.
It is flexible enough to be used in conjunction with dispersion analysis or resonance saturation techniques to allow for a simple understanding of resonance effects and final state interactions beyond threshold. In this paper, we would like to give a global understanding of most of the fusion reaction processes using the master formula approach to broken chiral symmetry including the effects of strangeness. The present work confirms and extends the original analysis in the two flavour case \cite{CZ95}. In section~\ref{sec:2}, we introduce our conventions for the fusion reaction processes, and discuss the essentials of the T-matrix amplitudes. In section~\ref{sec:3}, we give the main result for the fusion reaction processes as expected from the master formula approach to QCD with three flavors. The importance of s-channel scalar correlations is immediately unravelled. In section~\ref{sec:4}, we analyze the general result in chiral power counting and compare to one-loop chiral perturbation theory with strangeness. In section~\ref{sec:5}, we analyze the master formula result beyond threshold by using resonance saturation methods. In section~\ref{sec:6}, we discuss briefly the meson polarizabilities in our case. In section~\ref{sec:7}, a detailed numerical analysis of our results is made and compared to presently available data. We predict a small cross section for $\gamma\gamma\rightarrow \eta\eta$. Our conclusions are in section~\ref{sec:8}. Some calculational details are given in three Appendices. \section{Generalities} \label{sec:2} \subsection{Conventions} We will consider generically the reactions $\gamma^c(q_1) \gamma^d(q_2)\rightarrow \pi^a(k_1)\pi^b(k_2)$ with $a,b=1\sim 8$ and $c,d=3,8$ for the light mesonic octet. The photon polarizations are chosen in the gauge $\epsilon_\mu(q_i) q_j^\mu =0$ with $i,j=1,2$. 
Throughout, the Mandelstam variables are given by \begin{eqnarray} s &=& (q_1+q_2)^2 = 2q_1\cdot q_2 \nonumber\\ t &=& (q_1-k_1)^2 = k_1^2 - 2 q_1\cdot k_1 \nonumber\\ u &=& (q_1-k_2)^2 = k_2^2 - 2 q_1\cdot k_2 \ . \end{eqnarray} and both the photons and the mesons are on-shell, $q_i^2=0$ and $k_i^2 = m_{a}^2$. Our convention for the electromagnetic current is standard \begin{eqnarray} {\rm\bf J}_\mu^{em} = \bar q\gamma_\mu \left( \frac 1 2 \lambda_3 +\frac{1}{2 \sqrt 3}\lambda_8 \right) q = {\rm\bf V}_\mu^3 +\frac{1}{\sqrt 3} {\rm\bf V}_\mu^8 \ , \end{eqnarray} so that the photon isospin indices are only 3 and 8. This will be assumed throughout. \subsection{Helicity Amplitudes} The T-matrix for the fusion process $\gamma(q_1)\gamma(q_2)\rightarrow \pi^a(k_1)\pi^b(k_2)$, will be defined as~\cite{TWO} \begin{eqnarray} {}_{\rm out}\langle \pi^a(k_1)\pi^b(k_2) |\gamma(q_1)\gamma(q_2)\rangle_{\rm in} =i (2\pi)^4\delta^4 (P_f-P_i) {\cal T}^{ab} \end{eqnarray} with \begin{eqnarray} {\cal T} = e^2 \epsilon_1^\mu \epsilon_2^\nu V_{\mu\nu}^{ab} \ . \end{eqnarray} The photons are transverse, that is $\epsilon_i\cdot q_j=0$, hence \begin{eqnarray} V_{\mu\nu} = A(s,t,u) T_{1\mu\nu} + B(s,t,u) T_{2\mu\nu} \label{tmat} \end{eqnarray} with the invariant tensors \begin{eqnarray} T_{1\mu\nu} = \frac 12 s g_{\mu\nu}- q_{1\mu}q_{2\nu} \qquad\qquad T_{2\mu\nu} = 2 s (k_1-k_2)_\mu (k_1-k_2)_\nu - \nu^2 g_{\mu\nu} \end{eqnarray} and $\nu=(t-u)$. As a result, the T-matrix reads \begin{eqnarray} {\cal T} &=& e^2 \left({A(s,t,u)} \,s/2 - \nu^2 B(s,t,u)\right) \epsilon_1\cdot\epsilon_2 - e^2 8 s B(s,t,u) \epsilon_1\cdot k_1 \epsilon_2\cdot k_2 \nonumber\\ &=& -2 e^2 \epsilon_1\cdot\epsilon_2 ({\rm\bf 1} - {\cal X}) -e^2 (\epsilon_1\cdot k_1) (\epsilon_2\cdot k_2) 8 s ( B_0 +{\cal Y}) \end{eqnarray} with ${\rm\bf 1}$ and $B_0$ defined as \begin{eqnarray} {\rm\bf 1} &=& \left\{ \begin{array}{cl} 1 & {\rm for}\; \pi^\pm, K^\pm \\ 0 & {\rm for}\; \pi^0,K^0,\bar K^0,\eta \end{array} \right. 
\nonumber\\ B_0 &=& {\rm\bf 1} \frac{1}{2s}\left(\frac{1}{t-m_\pi^2}+\frac{1}{u-m_\pi^2}\right) \ . \end{eqnarray} The corresponding helicity amplitudes are~\cite{TWO} \begin{eqnarray} H_{++}^{ab} &=& A^{ab} + 2( (m_a+ m_b)^2 -s) B^{ab} \nonumber\\ H_{+-}^{ab} &=& \frac{8 (m_a^2 m_b^2-tu)}{s} B^{ab} \ . \end{eqnarray} \subsection{Polarizabilities} The differential cross section for unpolarized photons to two mesons in the center-of-mass system is \begin{eqnarray} \frac{d\sigma^{\gamma\gamma\rightarrow \pi^a\pi^b}}{d\Omega} &=& f_{ab} \frac{\alpha^2 s}{32}\beta^{ab}(s) \left(|H_{++}|^2+|H_{+-}|^2\right) \nonumber\\ &=& f_{ab} \frac{\alpha^2 }{4s}\beta^{ab}(s) \left( \left| {\cal B}+\frac{m_\pi}{2\alpha} s\alpha^{ab}_\pi(s)\right|^2 +\left| {\cal B}^\prime+\frac{m_\pi}{2\alpha} s\alpha^{ab}_\pi(s)\right|^2 \right) \ , \end{eqnarray} with the degeneracy factor \begin{eqnarray} f_{ab} = \left\{ \begin{array}{cl} 1/2 & {\rm for}\;\; \pi^0\pi^0, \eta\eta \\ 1 & {\rm for}\;\; {\rm other \;\; processes} \end{array}\right. \ . \end{eqnarray} The expressions for ${\cal B}$, ${\cal B}^\prime$ and the polarizabilities $\alpha_\pi^{ab}$ are \begin{eqnarray} {\cal B} &=& {\rm\bf 1} \left(-1 +\frac{2s m_\pi^2}{(t-m_\pi^2)(u-m_\pi^2)} \right) \nonumber\\ {\cal B}^\prime &=& {\rm\bf 1} +4(m_\pi^4-tu) {\cal Y} \nonumber\\ \frac{m_\pi}{2\alpha} s\alpha_\pi^\pm(s) &=& -{\cal X}-\frac{s(4m_\pi^2-s)+4 (m_\pi^4-tu)}{2}{\cal Y}\ . \end{eqnarray} The center of mass velocity for outgoing particles $\beta^{ab}(s)$ will be defined as \begin{eqnarray} \beta^{ab}(s) = \sqrt{\left(1-\frac{(m_a+m_b)^2}{s}\right) \left(1-\frac{(m_a-m_b)^2}{s}\right)} \ . \end{eqnarray} \section{Master Formulae Result} \label{sec:3} The master formula approach to two flavours developed by two of us~\cite{YAZA} can be readily extended to three flavours~\cite{LYZ98}. 
In short, the extended S-matrix with strangeness included obeys a new and linear master equation, which is amenable to on-shell chiral reduction formulas. The fusion reaction processes can be assessed as discussed in~\cite{YAZA,CZ95} for two flavours. The three flavour result is \begin{eqnarray} {\cal T}_1 &= & i \epsilon_1\cdot\epsilon_2 \frac{E_a}{E_b} (f^{bci} f^{ida} +f^{bdi} f^{ica} ) \nonumber\\ && + i 4 \epsilon_1\cdot k_2 \epsilon_2\cdot k_1 \frac{E_a}{E_b} \left\{ \frac{f^{bci}f^{ida}}{u-m_i^2} + \frac{f^{bdi}f^{ica}}{t-m_i^2} \right\} \label{eq:T1} \\ {\cal T}_2 &=& i \epsilon_1\cdot\epsilon_2 \frac{1}{E_a E_b} f^{bdi}f^{aci} \left\{\frac 23 K \frac{M_a}{m_a^2} - E_i^2 \right\} \nonumber\\ && + \epsilon_1^\mu\epsilon_2^\nu k_2^\beta k_1^\alpha \frac{1}{E_a E_b} \int d^4z\int d^4y\int d^4x \,e^{ik_2\cdot x-i q_1\cdot y-i q_2\cdot z} \langle \,T^*\,{\rm\bf V}_\nu^d(z){\rm\bf V}_\mu^c(y){{\rm\bf j}_A}_\beta^b(x){{\rm\bf j}_A}_\alpha^a(0)\rangle \nonumber\\ && + i\epsilon_1^\mu\epsilon_2^\nu \frac 23\frac KC \delta^{ab} \frac{M_a}{E_a^2} \int d^4z\int d^4y\, e^{-iq_1\cdot y -i q_2\cdot z} \langle\,T^*\, {\rm\bf V}_\nu^d(z){\rm\bf V}_\mu^c(y)\sigma^0(0)\rangle \nonumber\\ && -i \epsilon_1^\mu\epsilon_2^\nu d^{abh} \frac{M_b}{E_a E_b} \frac{E_h m_h^2}{M_h} \int d^4z\int d^4y\, e^{-iq_1\cdot y -i q_2\cdot z} \langle\,T^*\, {\rm\bf V}_\nu^d(z){\rm\bf V}_\mu^c(y)\sigma^h(0)\rangle \ , \label{eq:T2} \end{eqnarray} where ${\cal T}_1$ summarizes the Born contributions to the charged mesons, and ${\cal T}_2$ the rest after two chiral reductions of the external meson states. Equation~(\ref{eq:T2}) constitutes our basic identity. It shows that the fusion reaction is related to the vacuum correlators ${\bf V}{\bf V}{\bf j}{\bf j}$ and ${\bf V}{\bf V}\sigma$ modulo Born terms. Quantum numbers and G-parity imply that the scalars dominate the final state interactions in the s-channel. This point will become clearer in the resonance saturation analysis.
What is remarkable in (\ref{eq:T2}) is that the final state scalar correlations are driven by the symmetry breaking effects in QCD. In (\ref{eq:T2}) the isovector current ${\bf V}$ and the one-pion reduced iso-axial current ${\bf j}_A$ are given by \begin{eqnarray} {\rm\bf V}^a_\mu = \bar q \gamma_\mu \frac{\lambda^a}{2} q\ , \;\;\;\; {{\rm\bf j}_A^a}_\mu = \bar q \gamma_\mu \frac{\lambda^a}{2} \gamma_5 q\ + \left(\frac{M}{m_p^2}\right)^{ab} \partial_\mu (\bar q i\gamma_5\lambda^b q ) \ . \end{eqnarray} The meson weak-decay constants and masses are \begin{eqnarray} E_{1\sim 8} & \equiv & (f_\pi,f_\pi,f_\pi,f_K,f_K,f_K,f_K,f_\eta) \nonumber\\ m_{1\sim 8} &\equiv & (m_\pi,m_\pi,m_\pi,m_K,m_K,m_K,m_K,m_\eta) \end{eqnarray} with $f_\pi=93$ MeV, $f_K=115$ MeV and $f_\eta=123$ MeV. The current mass matrix is chosen as \begin{eqnarray} M_{1\sim 8} \equiv \left(\hat m,\hat m,\hat m, \frac{\hat m+m_s}{2}, \frac{\hat m+m_s}{2}, \frac{\hat m+m_s}{2}, \frac{\hat m+m_s}{2}, \frac{\hat m+2 m_s}{3} \right) \end{eqnarray} with $\hat m=9$ MeV and $m_s=175$ MeV for some running scale. Since the $M$'s appear in RGE invariant combinations, the effect of the running scale is small in the range of energies probed by the fusion reaction processes we will be considering. The scalar densities are \begin{eqnarray} \sigma^0 &=& \frac CK \bar q q +C \nonumber\\ \sigma^h &=& -\frac{M_h}{E_h m_h^2} \bar q\lambda^h q \ . \label{eq:sigma} \end{eqnarray} with two (arbitrary) constants. For two flavours, $C \rightarrow -f_\pi$ and $2 K/3 \rightarrow f_\pi^2 m_\pi^2/\hat m$. The contact term in ${\cal T}_2$ vanishes in the two-flavour case. It does not in the three-flavour case and is to be reabsorbed in the pertinent counterterm generated by the three- and four-point functions in (\ref{eq:T2}). The Born terms involve only charged mesons.
Their explicit form is \begin{eqnarray} {\cal T}_{\gamma\gamma\rightarrow\pi^+\pi^-} &=& -i 2 e^2 \epsilon_1\cdot\epsilon_2 -i 4 e^2 \epsilon_1\cdot k_1 \epsilon_2\cdot k_2 \left(\frac{1}{t-m_\pi^2}+\frac{1}{u-m_\pi^2}\right) \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow K^+ K^-} &=& -i 2 e^2 \epsilon_1\cdot\epsilon_2 -i 4 e^2 \epsilon_1\cdot k_1 \epsilon_2\cdot k_2 \left(\frac{1}{t-m_K^2}+\frac{1}{u-m_K^2}\right) \ . \end{eqnarray} These tree results are consistent with all chiral models with minimal coupling. To go beyond, we need to assess the effects of the three- and four-point functions in (\ref{eq:T2}). We will do this in two ways: at threshold by using power counting, and beyond threshold by using resonance saturation methods. \section{One Loop Result} \label{sec:4} The identity (\ref{eq:T2}) is a consequence of broken chiral symmetry in QCD, and any chiral approach that is consistent with QCD ought to satisfy it. In this section we show how this identity can be analysed near threshold using power counting in $1/E$. A simple comparison with the nonlinear sigma model shows that this is analogous to the loop expansion if $\phi={\bf V}, {\bf j}_A, \sigma$ are counted of order ${\cal O}(1)$. Also $f_K^2-f_{\pi}^2$ and $f_{\eta}^2-f_{\pi}^2$ are ${\cal O}(1)$ rather than ${\cal O}(E)$ because of G-parity. Some details regarding the one-loop analysis are given in Appendix B.
The results for the various transition amplitudes are \begin{eqnarray} {\cal T}_{\gamma\gamma\rightarrow\pi^+\pi^-} &=& -i 2 e^2 k_1\cdot k_2 \frac{1}{f_\pi^2} \left(\tilde{\cal I}^\pi +\frac 12 \tilde{\cal I}^K\right) -i2 e^2 \frac{m_\pi^2}{f_\pi^2} \tilde{\cal I}^\pi -i e^2\frac{m_K^2}{f_\pi^2}\frac{2\hat m}{\hat m+m_s} \tilde{\cal I}^K \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow\pi^0\pi^0} &=& -i 2 e^2 k_1\cdot k_2 \frac{1}{f_\pi^2} \left(2\tilde{\cal I}^\pi +\frac 12 \tilde{\cal I}^K\right) -i2 e^2 \frac{m_\pi^2}{f_\pi^2} \tilde{\cal I}^\pi -i e^2\frac{m_K^2}{f_\pi^2}\frac{2\hat m}{\hat m+m_s} \tilde{\cal I}^K \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow K^+ K^-} &=& -i 2 e^2 k_1\cdot k_2 \frac{1}{f_K^2} \left(\frac 12\tilde{\cal I}^\pi + \tilde{\cal I}^K\right) -i e^2 \frac{m_\pi^2}{f_K^2} \frac{\hat m+m_s}{2\hat m}\tilde{\cal I}^\pi -i \frac 32 e^2\frac{m_K^2}{f_K^2} \tilde{\cal I}^K \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow K^0 \bar K^0} &=& -i 2 e^2 k_1\cdot k_2 \frac{1}{f_K^2} \left(\frac 12\tilde{\cal I}^\pi + \frac 12 \tilde{\cal I}^K\right) -i e^2 \frac{m_\pi^2}{f_K^2} \frac{\hat m+m_s}{2\hat m}\tilde{\cal I}^\pi -i \frac 32 e^2\frac{m_K^2}{f_K^2} \tilde{\cal I}^K \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow \eta \eta} &=& -i 3 e^2 k_1\cdot k_2 \frac{1}{f_\eta^2} \tilde{\cal I}^K -i \frac 23 e^2 \frac{m_\pi^2}{f_\eta^2} \frac{\hat m+ 2 m_s}{3\hat m}\tilde{\cal I}^\pi -i \frac 53 e^2\frac{m_K^2}{f_\eta^2} \frac{2(\hat m+2 m_s)}{3(\hat m+m_s)} \tilde{\cal I}^K \nonumber\\ {\cal T}_{\gamma\gamma\rightarrow \pi^0 \eta} &=& -i \sqrt 3 e^2 k_1\cdot k_2 \frac{1}{f_\pi f_\eta} \tilde{\cal I}^K \label{eq:one} \end{eqnarray} with $k_1\cdot k_2 =\frac 12 (s-m_1^2-m_2^2)$. 
The one-loop finite contributions are \begin{eqnarray} \tilde{\cal I}^i &\equiv & - H(s-4 m_i^2) \epsilon_1\cdot\epsilon_2 \frac{1}{16\pi^2} \left\{ 1+\frac{m_i^2}{s} \left(\ln\left[\frac{\sqrt s -\sqrt{s-4 m_i^2}}{\sqrt s +\sqrt{s-4 m_i^2}} \right] +i\pi\right)^2\right\} \nonumber\\ && - H(4 m_i^2-s) \epsilon_1\cdot\epsilon_2 \frac{1}{16\pi^2} \left\{ 1-\frac{4 m_i^2}{s} \arctan^2\sqrt{\frac{s}{4 m_i^2-s}} \right\} \ , \label{eq:tI} \end{eqnarray} where $H(x)$ is the Heaviside function. Following~\cite{YAZA} we used the LHZ subtraction procedure. The ensuing counterterm (one) is fixed by electric charge conservation. The results are independent of the parameters $C$ and $K$ introduced in (\ref{eq:sigma}). They are also in agreement with one-loop chiral perturbation theory (ChPT)~\cite{TWO} modulo counterterms. In ChPT all possible counterterms commensurate with symmetry and power counting are retained; in our case only those that show up in the loop expansion (minimal) are kept. Which counterterms are relevant can only be determined by comparison with (threshold) experiments. Below, we will show that both procedures yield almost identical results. \section{Resonance Saturation Result} \label{sec:5} To be able to address the fusion reaction processes beyond threshold we need to take into account the final state interactions in (\ref{eq:T2}). One way to do this is to use dispersion analysis for the three- and four-point functions with minimal weight-insertions. This is equivalent to a tree-level resonance saturation of the three- and four-point functions as shown in Fig.~\ref{fig:diag} with all possible crossings. Note that contact interactions are covered by the present description in the limit where the masses of the $\sigma$, $V$ and $A$ are taken to be very large. \begin{figure}[th] \centerline{\epsfig{file=fusion-diag.eps,height=4cm}} \vskip 5mm \caption{Diagram for $\langle{\rm\bf V}\V\sigma\rangle$ (a) and $\langle{\rm\bf V}\V{\rm\bf j}_A{\rm\bf j}_A\rangle$ (b,c).
} \label{fig:diag} \end{figure} {}From quantum numbers and parity, the vector current ${\rm\bf V}^a_\mu$ will be saturated by the light vector mesons ($v_\mu^a = \rho, \omega, \phi$), and the one-pion reduced axial-vector current ${{\rm\bf j}_A}^a_\mu$ by the light axial-vector mesons ($a_\mu^a = A_1, K_1$) \cite{LYZ98}. Typically, \begin{eqnarray} \langle 0 | {\rm\bf V}_\mu^a (x) |v^b_\nu(p)\rangle &\sim& g_{\mu\nu} \delta^{ab}\epsilon_\mu^V f_{v_a} m_{v_a} e^{-ip\cdot x} \nonumber\\ \langle 0 | {{\rm\bf j}_A}_\mu^a (x) |a^b_\nu(p)\rangle &\sim& g_{\mu\nu} \delta^{ab}\epsilon_\mu^A f_{a_a} m_{a_a} e^{-ip\cdot x}\ . \end{eqnarray} Since the photon carries indices $c,d=3,8$ \begin{eqnarray} v^3 &=& \rho^0 \nonumber\\ v^8 &=& \sqrt{\frac 13}\omega^0 -\sqrt{\frac 23}\phi\ , \end{eqnarray} only the chargeless vector mesons contribute to the fusion process. The occurrence of the structure constant $f^{abc}$ in the reduction of the fusion reaction, together with the photon indices $3,8$, restricts the flavor indices carried by the axial-vector mesons. Hence, only $ a^{1,2} = A_1$ and $ a^{4\sim 7} = K_1$ will be needed in our case. Similarly for ${\rm\bf V}\V{\rm\bf j}_A{\rm\bf j}_A$ with \begin{eqnarray} \langle 0 | {\rm\bf V}^d_\mu {\rm\bf V}^c_\nu {{\rm\bf j}_A}^b_\delta {{\rm\bf j}_A}^a_\gamma |0\rangle = \epsilon_\mu^V\epsilon_\nu^V \epsilon_\delta^A\epsilon_\gamma^A f_{a_a} f_{a_b} f_{v_c} f_{v_d} m_{a_a} m_{a_b} m_{v_c} m_{v_d} \langle 0| v^d_\mu v^c_\nu a^b_\delta a^a_\gamma |0\rangle\ . \end{eqnarray} Finally, the scalar field $\hat \sigma$ can be saturated by scalar mesons giving ${\rm\bf V}\V\sigma$ as \begin{eqnarray} \langle {\rm\bf V}^d_\mu {\rm\bf V}^c_\nu \hat \sigma\rangle = \epsilon_\mu^V\epsilon_\nu^V f_{v_c} f_{v_d} m_{v_c} m_{v_d} \langle v^d_\mu v^c_\nu \sigma \rangle\ . \end{eqnarray} All the mesons will have masses and widths fixed at their PDG (Particle Data Group) values. With the above in mind the various contributions from Fig.~\ref{fig:diag} can be readily constructed.
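The restriction of the photon to the $a=3,8$ flavor directions can be made explicit from the quark charge matrix; a minimal numerical check (our own illustration, not part of the original text) verifies the decomposition $Q = \lambda_3/2 + \lambda_8/(2\sqrt 3)$:

```python
import math

# Diagonal entries of the Gell-Mann matrices lambda_3 and lambda_8
lam3 = [1.0, -1.0, 0.0]
lam8 = [1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0), -2.0 / math.sqrt(3.0)]

# Quark charge matrix Q = diag(2/3, -1/3, -1/3) for (u, d, s)
Q = [2.0 / 3.0, -1.0 / 3.0, -1.0 / 3.0]

# The photon couples through Q = lambda_3/2 + lambda_8/(2 sqrt(3)),
# i.e. only through the a = 3 and a = 8 flavor components.
for q, l3, l8 in zip(Q, lam3, lam8):
    assert abs(q - (l3 / 2.0 + l8 / (2.0 * math.sqrt(3.0)))) < 1e-12
```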
In Appendix C we show how they could also be retrieved using a linear sigma-model. The contribution to ${\rm\bf V}\V\sigma$ is \begin{eqnarray} {\cal T}_{vv\sigma}^{ab} &=& 4 i e^2 \left( c_0 \delta^{ab}\delta^{cd} \delta^{h0} + c_h d^{abh} d^{cdh} \right) \epsilon_1\cdot\epsilon_2 \frac{\Lambda M_b }{E_a E_b} \frac{f_{v_c} f_{v_d}}{m_{v_c} m_{v_d}} \frac{1}{s-m_{\sigma_h}^2} \label{eq:r1} \ . \end{eqnarray} The insertions of powers of $1/E$ and the scale $\Lambda$ ($= 1$ GeV) are to make the arbitrary parameters $c_h$ (two) dimensionless. They will be fixed by threshold constraints. The scalar contribution to ${\rm\bf V}\V{\rm\bf j}_A{\rm\bf j}_A$ is \begin{eqnarray} {\cal T}_{vvaa,\sigma_0}^{ab} &=& i 16 e^2 \left( g_1 \delta^{cd}\delta^{ab}\delta^{h0} + g_2 d^{cdh} d^{abh}\right) \epsilon_1\cdot\epsilon_2 \frac{\Lambda^2}{E_a E_b} \frac{ f_{v_c} f_{v_d}}{m_{v_c} m_{v_d}} \frac{1}{s-m_{\sigma_h}^2} \nonumber\\ && \times k_1\cdot k_2\left(1-\frac{m_a^2}{m_{a_a}^2}-\frac{m_b^2}{m_{a_b}^2} +\frac{m_a^2}{m_{a_a}^2}\frac{m_b^2}{m_{a_b}^2}\right) \frac{f_{a_a}m_{a_a}}{m_a^2-m_{a_a}^2} \frac{f_{a_b}m_{a_b}}{m_b^2-m_{a_b}^2} \ . \label{eq:r2} \end{eqnarray} The dimensionless parameters $g_h$ (two) are again arbitrary. The intermediate vector contribution from Fig.~\ref{fig:diag}-(b) vanishes because of the antisymmetry of the structure constant $f^{abc}$. Finally, the contribution from Fig.~\ref{fig:diag}-(c) is \begin{eqnarray} {\cal T}_{vvaa,a}^{ab} &=& -i 16 e^2 g_3 f^{caf} f^{dbf} \frac{1}{E_a E_b} \left(\frac{f_{v_c} f_{v_d}}{m_{v_c} m_{v_d}}\right) \left(\frac{1}{t-m_{a_f}^2}\right) \left(\frac{ f_{a_a} m_{a_a}}{m_a^2-m_{a_a}^2}\right) \left(\frac{ f_{a_b} m_{a_b}}{m_b^2-m_{a_b}^2}\right) \nonumber\\ && \times \left[ (\epsilon_1\cdot k_1) (\epsilon_2\cdot k_2) \left(1-\frac{t}{m_{a_f}^2}\right) \left( -t + m_a^2 +m_b^2 -\frac{(k_1\cdot q_1)^2}{m_{a_a}^2} -\frac{(k_2\cdot q_2)^2}{m_{a_b}^2} \right) \right. \nonumber\\ && \left.
+\left(\epsilon_1\cdot\epsilon_2 +\frac{ (\epsilon_1\cdot k_1) (\epsilon_2\cdot k_2)}{m_{a_f}^2}\right) \left( m_a^2-\frac{(k_1\cdot q_1)^2}{m_{a_a}^2}\right) \left( m_b^2-\frac{(k_2\cdot q_2)^2}{m_{a_b}^2}\right) \right] \nonumber\\ && + (t,a,k_1\leftrightarrow u,b,k_2) \label{eq:r3} \end{eqnarray} with one additional dimensionless parameter $g_3$. In the vector and axial channels all the resonances quoted above are introduced with their masses and decay widths in the form of Breit-Wigner resonances fixed at their PDG values. In the scalar channels we will use three resonances for $\sigma^0$: $f_0 (500)$, $f_0 (980)$ and $f_2 (1270)$. As our chief goal is to test the master formula result with resonance saturation, we will keep our description simple by substituting \begin{eqnarray} \frac{1}{s-m_{\sigma_0}^2} \rightarrow \sum_{m_f} \frac{f_f}{s-m_{f}^2 +i G (s,m_f) m_{f}} \end{eqnarray} with $f_{f_0(500)}=f_{f_2(1270)}=1$ and $f_{f_0(980)}=0.05$, and the decay widths \begin{eqnarray} G(s,m_f) = H(s-4 m_\pi^2) G_0 \left(\frac{1- 4 m_\pi^2/s}{1-4 m_\pi^2/m_f^2}\right)^n \ , \end{eqnarray} with $n=1/2$ and $3/2$ for scalar and vector mesons, respectively. A more detailed parametrization of the partial widths and so on will not be attempted here, again for simplicity. We have found that the contribution of $f_0(980)$ is suppressed (hence the order of magnitude change in the weight) in agreement with previous investigations~\cite{OSET}. In the numerical analysis to follow, we have checked that our results are not greatly sensitive to the resonance parametrizations provided that PDG masses and widths are enforced. In the isotriplet-scalar channel $\sigma^3$ we have: $a_0(980)$ and $a_2(1320)$, giving \begin{eqnarray} \frac{1}{s-m_{\sigma_3}^2} \rightarrow \frac{0.6}{s-m_{a_0}^2 +i G_{a_0} (s) m_{a_0}} -\frac{1}{s-m_{a_2}^2 +i G_{a_2} (s) m_{a_2}} \ .
\label{isosign} \end{eqnarray} The same functional form for $G$ is used, but with a different cut-off corresponding to the lowest mass yields in the various decay channels. The relative sign in (\ref{isosign}) reflects the attractive character of $a_2(1320)$ in comparison to $a_0(980)$ in the isotriplet channel. We will not consider the effects of $\sigma^8$ as it involves higher octet-scalar resonances. \section{Polarizabilities} \label{sec:6} Before discussing in detail how our analysis of the fusion reactions compares to the present data, we will first address the issue of the meson polarizabilities as inferred from our one-loop analysis. For the charged pions~\cite{dono93}, \begin{eqnarray} \bar\alpha_E^{\pi^\pm} &=& (6.8\pm1.4\pm1.2)\times 10^{-4}\; {\rm fm}^3 \nonumber\\ \bar\alpha_E^{\pi^\pm} &=& (20\pm 12)\times 10^{-4}\; {\rm fm}^3 \nonumber\\ \bar\alpha_E^{\pi^\pm} &=& (2.2 \pm 1.6)\times 10^{-4}\; {\rm fm}^3 \ , \end{eqnarray} and for the neutral pions~\cite{dono93} \begin{eqnarray} |\bar\alpha_E^{\pi^0}| &=& (0.69\pm 0.07\pm 0.04)\times 10^{-4}\; {\rm fm}^3 \nonumber\\ |\bar\alpha_E^{\pi^0}| &=& (0.8\pm 2.0)\times 10^{-4}\; {\rm fm}^3 \ . \end{eqnarray} The spread in the quoted values shows that the data are not accurate enough to be conclusive. This notwithstanding, our one-loop result for the charged pions is \begin{eqnarray} \alpha_L^{\pi^\pm} \approx 4.2\times 10^{-4}\; {\rm fm}^3 \ , \end{eqnarray} which is twice the value obtained using standard chiral perturbation theory \cite{dono93}. The difference stems from the additional (finite) counterterms in ChPT, which are purposely absent (minimal) in our analysis. This point was discussed in great detail in \cite{YAZA}. For the neutral pions we have \begin{eqnarray} \alpha_L^{\pi^0} \approx 6.3\times 10^{-4}\; {\rm fm}^3 \ .
\end{eqnarray} For the rest of the octet, we have \begin{eqnarray} \alpha_L^{K^\pm} &\approx & - 2.7 \times 10^{-5}\, {\rm fm}^3 \nonumber\\ \alpha_L^{K^0 \bar K^0} &\approx & + 2.8 \times 10^{-5}\, {\rm fm}^3 \nonumber\\ \alpha_L^{\eta} &\approx & - 4.4 \times 10^{-6}\, {\rm fm}^3 . \end{eqnarray} In the resonance saturation approach, the polarizabilities follow essentially from the ${\bf V V}{\bf j}_A{\bf j}_A$ contributions in (\ref{eq:r3}). These contributions are constrained to be small at high energy, resulting in naturally small polarizabilities. A global fit yields pion polarizabilities that are similar for the charged and chargeless fusion reactions. \section{Numerical Results} \label{sec:7} Most of the calculations to be discussed in this section are carried out with the PDG parameters for the quoted resonances. The dimensionless couplings involved in the resonance saturation approach are chosen so as to give a global fit that is consistent with the threshold constraints (mainly one-loop). Specifically, we will use \begin{eqnarray} c_0 &=& -98208 \nonumber \\ c_3 &=& 5 c_0 \nonumber \\ g_1 &=& 0.9744 \nonumber\\ g_2 &=& -13.64 \nonumber\\ g_3 &=& -1.5 \ , \end{eqnarray} with $c_8=0$ since we are ignoring the effects from $\sigma^8$. Some of the results for the total cross sections to be quoted will involve a parameter $Z$ defined as \begin{eqnarray} \sigma_Z = 2\int_0^Z d\cos\theta\, \frac{d\sigma}{d\cos\theta} \end{eqnarray} where $\theta$ is the relative angle between one of the two incoming photons and one of the outgoing mesons. \subsection{Pions} In Fig.~\ref{fig:spipm} we show the total cross section for fusion to charged pions up to $Z=0.6$. The data are from~\cite{TPC,MARK,DESY,KEK}. The overall agreement with the data is good. Our analysis appears to favor the SLAC-PEP-MARK-II as well as the KEK-TE-001 data. The peak at $f_2 (1270)$ is clearly visible, while the $f_0 (980)$ is weaker.
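The scalar-isoscalar structure behind these features comes from the smeared propagator introduced above. A minimal numerical sketch of that Breit-Wigner substitution follows (Python; the masses and widths are illustrative PDG-like values, and the uniform use of $n=1/2$ is our own simplification):

```python
import math

M_PI = 0.1396  # pion mass in GeV (illustrative)

def width(s, m, g0, n=0.5):
    """Energy-dependent width G(s, m_f); vanishes below the two-pion threshold."""
    if s <= 4.0 * M_PI**2:
        return 0.0
    ratio = (1.0 - 4.0 * M_PI**2 / s) / (1.0 - 4.0 * M_PI**2 / m**2)
    return g0 * ratio**n

# (mass in GeV, nominal width in GeV, weight f_f) -- illustrative values
RESONANCES = [(0.50, 0.50, 1.0),      # f_0(500)
              (0.98, 0.05, 0.05),     # f_0(980), suppressed weight
              (1.2755, 0.1867, 1.0)]  # f_2(1270)

def smeared_propagator(s):
    """Sum of Breit-Wigner terms replacing 1/(s - m_sigma0^2)."""
    return sum(f / (s - m**2 + 1j * width(s, m, g0) * m)
               for m, g0, f in RESONANCES)

# below the two-pion threshold the width vanishes
assert width(0.05, 0.98, 0.05) == 0.0
# |propagator| is enhanced on the f_2(1270) peak relative to nearby points
s_peak = 1.2755**2
assert abs(smeared_propagator(s_peak)) > abs(smeared_propagator(s_peak - 0.3))
assert abs(smeared_propagator(s_peak)) > abs(smeared_propagator(s_peak + 0.6))
```

The function and constant names are ours; the functional form of $G(s,m_f)$ and the weights $f_f$ are taken from the text above.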
The Born contribution overwhelms the $f_0 (500)$ contribution in this channel, which is hardly visible in our results as well as in the data. In the inset, we show an enlargement of the threshold region and a comparison with our Born contribution, the one-loop analysis, and the resonance saturation approach. Overall, our approximations are consistent. \begin{figure} \begin{center} \epsfig{file=cpipm.eps,width=4.5in} \epsfig{file=cpipm_2p.eps,width=4.5in} \end{center} \caption{ Total cross section for $\gamma\gamma\rightarrow\pi^+\pi^-$ (Z = 0.6). Thick (thin) lines correspond to the resonance (loop) contribution. The dashed line in the lower panel corresponds to the Born term. The data are collected from Refs.~\protect\cite{TPC,MARK,DESY,KEK}. } \label{fig:spipm} \end{figure} In Fig.~\ref{fig:spizz} we present our results for the fusion reaction into chargeless pions with $Z=0.8$. The data are from~\cite{DORIS,SAN}. Again the $f_2 (1270)$ is clearly visible, while the $f_0 (980)$ is barely so. The broad effects from the $f_0 (500)$ are also visible in comparison to the data. The resonance saturation result is in overall agreement with both sets of data. In the inset, we show an enlargement of the threshold region and a comparison to our one-loop result as well as to one-loop and two-loop ChPT. Clearly our one-loop result and the one-loop ChPT are in good agreement although our construction is minimal (fewer counterterms). Most of the parameters (six) in the two-loop results from ChPT are fit using ideas similar to the resonance saturation approach we have adopted. \begin{figure} \begin{center} \epsfig{file=cpizz.eps,width=4.5in} \epsfig{file=cpizz_2p.eps,width=4.5in} \end{center} \caption{ Total cross section for $\gamma\gamma\rightarrow\pi^0\pi^0$ (Z = 0.8). Thick (thin) lines correspond to the resonance (loop) contribution. The dashed lines in the lower panel are the 1- and 2-loop ChPT results \protect\cite{TWO}. The data are taken from Refs.~\protect\cite{DORIS,SAN}.
} \label{fig:spizz} \end{figure} \subsection{Kaons} In Fig.~\ref{fig:sKpm} we present our results for the fusion reaction into charged kaons. For $Z=0.6$ our analysis shows a threshold enhancement at about $980$ MeV, followed by another enhancement at the $a_2(1320)$. The enhancement shown in the SLAC-PEP-TPC data is consistent with the $a_2(1320)$, although the error bars are large. Our results for the cross section are higher than the data in the energy range $\sqrt{s}=1.6-2.4$ GeV. For $Z=1.0$ we compare the resonance saturation results with the Born amplitude and the one-loop approximation. Again we see the same features as those encountered at $Z=0.6$. The DESY-DORIS-ARGUS data agree with our analysis around the $a_2(1320)$, but not at threshold nor above $\sqrt{s}=1.6$ GeV. The threshold enhancement due to the Born terms in our analysis is only partly decreased by the repulsive character of the scalar-isotriplet $a_0(980)$. A similar behaviour was also noted by Oller and Oset~\cite{OSET} using a coupled channel analysis. In Fig.~\ref{fig:K0K0} we show our results for chargeless kaons. Our results favor the data from DESY-PETRA-CELLO~\cite{CELLO} over the early data from DESY-PETRA-TASSO~\cite{TASSO}, although the data have large error bars. The effect from the $a_0(980)$ is weaker than that from the $a_2(1320)$. In this case the Born contribution vanishes. \begin{figure} \begin{center} \epsfig{file=cKKpm_1.eps,width=4.5in} \epsfig{file=cKKpm_2p.eps,width=4.5in} \end{center} \caption{Total cross section for $\gamma\gamma\rightarrow K^+K^-$ with Z=0.6 and Z=1.0. Thick (thin) lines correspond to the resonance (loop) contribution. The Born terms are plotted as a dashed line. The data are taken from Refs.~\protect\cite{TPC,KKpm1}. } \label{fig:sKpm} \end{figure} \begin{figure} \begin{center} \epsfig{file=cKKzz.eps,width=4.5in} \end{center} \caption{Total cross section for $\gamma\gamma\rightarrow K^0\bar K^0$ (Z=1.0).
The data are taken from Refs.~\protect\cite{TASSO,CELLO}. } \label{fig:K0K0} \end{figure} \subsection{Etas} In Fig.~\ref{fig:spieta} we show our results for the fusion into $\pi^0\eta$ for $Z=0.9$. The peaks are the scalar-isotriplets $a_0(980)$ and $a_2(1320)$. There is fair agreement with the DESY-DORIS-CRYSTAL-BALL~\cite{pieta} data. The relative strength of the two resonances follows simply from the relative sign in (\ref{isosign}), reflecting the attraction or repulsion in these two channels. In Fig.~\ref{fig:setae} we show our predictions for the fusion reaction to two etas. The cross section is tiny in comparison to the other fusion reactions (about four orders of magnitude down). The reason is the near cancellation between the $f_2 (1270)$ contribution in ${\bf V}{\bf V}\sigma$ and ${\bf V}{\bf V}{\bf j}_A{\bf j}_A$ ($c_0$ and $g_1$ have opposite signs). Since the resonance is smeared differently in the two contributions, the exact cancellation takes place in the range $1.25-1.5$ GeV. \begin{figure} \begin{center} \epsfig{file=cpieta.eps,width=4.5in} \end{center} \caption{Total cross section for $\gamma\gamma\rightarrow \pi^0\eta$ (Z=0.9). The data are taken from Refs.~\protect\cite{pieta}. } \label{fig:spieta} \end{figure} \begin{figure} \begin{center} \epsfig{file=cetae.eps,width=4.5in} \end{center} \caption{Total cross section for $\gamma\gamma\rightarrow \eta\eta$.} \label{fig:setae} \end{figure} \section{Conclusions} \label{sec:8} We have analyzed the two-photon fusion reaction to two mesons using the master formulae approach to QCD with three flavors. The master formula for the fusion reaction amplitude encodes all the information about chiral symmetry and its breaking in QCD. We have analyzed this result in power counting and shown that it is overall in agreement with results from three-flavor ChPT in the threshold region. We have derived specific results for the real part of the polarizabilities of all the octet mesons.
To analyze the reactions beyond threshold, we have implemented a simple dispersion analysis on the pertinent three- and four-point functions in the form of tree-level resonance saturation. The analysis enforces broken chiral symmetry, unitarity and crossing symmetry in a straightforward way. The pertinent resonance parameters (masses and widths) are fixed at their PDG values. Their couplings result in five parameters which we use to globally fit all available data through $\sqrt{s}= 2$ GeV, and we predict a very small cross section for $\gamma\gamma\rightarrow \eta\eta$. The master formula for the fusion reaction processes implies from first principles scalar-isoscalar and scalar-isotriplet correlations in the s-channel, and axial-vector correlations in the t-channel. The latter enforce the correct polarizabilities, while the former account for most of the resonances seen in the experiments. In particular, the scalar-isoscalar $f_0 (500)$, $f_0(980)$ and $f_2(1270)$ are predominant in the fusion reactions involving pions, while the scalar-isotriplet $a_0(980)$ and $a_2(1320)$ are important in the fusion reactions involving kaons, and also etas and pions. The $a_0(980)$ is found to considerably decrease the threshold enhancement caused by the Born term in the fusion to charged kaons, in agreement with present experiments. The present results are important in the assessment of the electromagnetic emission rates from a hadronic gas in relativistic heavy-ion collisions \cite{LYZ98X}. \section*{Acknowledgement} IZ would like to thank Jose Oller and Eulogio Oset for discussions. This work was supported by the U.S. Department of Energy under Grant No. DE--FG02--88ER40388.
\section{Copyright} All papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form or, in the case of technical reports, by a valid signed permission to distribute form. There are no exceptions to this requirement. You must send us the original version of this form. However, to meet the deadline, you may fax (1-650-321-4457) or scan and e-mail the form ([email protected]) to AAAI by the submission deadline, and then mail the original via postal mail to the AAAI office. \textbf{If you fail to send in a signed copyright or permission form, we will be unable to publish your paper. There are no exceptions to this policy.} You will find PDF versions of the AAAI copyright and permission to distribute forms in the author kit. \section{Formatting Requirements in Brief} We need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. The design and appearance of the paper is governed by the aaai style file. You must not make any changes to this file, nor use any commands, packages, style files, or macros that alter that design, including, but not limited to spacing, floats, margins, fonts, font size, and appearance. AAAI imposes requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. All papers submitted to AAAI for publication must comply with the following: \begin{itemize} \item Your .tex file must compile in PDF\LaTeX{} --- \textbf{ no .ps or .eps figure files.} \item All fonts must be embedded in the PDF file --- \textbf{ this includes your figures.} \item Modifications to the style file, whether directly or via commands in your document may not be made, most especially when made in an effort to avoid extra page charges or make your paper fit in a specific number of pages. \item No type 3 fonts may be used (even in illustrations).
\item You may not alter the spacing above and below captions, figures, headings, and subheadings. \item You may not alter the font sizes of text elements, footnotes, heading elements, captions, or title information (for references and tables and mathematics, please see the limited exceptions provided herein). \item You may not alter the line spacing of text. \item Your title must follow Title Case capitalization rules (not sentence case). \item Your .tex file must include completed metadata to pass through to the PDF (see PDFINFO below). \item \LaTeX{} documents must use the Times or Nimbus font package (do not use Computer Modern for the text of your paper). \item No \LaTeX{} 209 documents may be used or submitted. \item Your source must not require use of fonts for non-Roman alphabets within the text itself. If your paper includes symbols in other languages (such as, but not limited to Arabic, Chinese, Hebrew, Japanese, Russian and other Cyrillic languages), you must restrict their use to figures. \item Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or 300 dpi bitmap or removed from the document (even if they are in a graphics file embedded in the document). \item Two-column format in AAAI style is required for all papers. \item The paper size for final submission must be US letter without exception. \item The source file must exactly match the PDF. \item The document margins must be as specified in the formatting instructions. \item The number of pages and the file size must be as specified for your event. \item No document may be password protected. \item Neither the PDFs nor the source may contain any embedded links or bookmarks. \item Your source and PDF must not have any page numbers, footers, or headers. \item Your PDF must be compatible with Acrobat 5 or higher. \item Your \LaTeX{} source file (excluding references) must consist of a \textbf{single} file (use of the ``input'' command is not allowed).
\item Your graphics must be sized appropriately outside of \LaTeX{} (do not use the ``clip'' command). \end{itemize} If you do not follow the above requirements, it is likely that we will be unable to publish your paper. \section{What Files to Submit} You must submit the following items to ensure that your paper is published: \begin{itemize} \item A fully-compliant PDF file. \item Your \LaTeX{} source file submitted as a \textbf{single} .tex file (do not use the ``input'' command to include sections of your paper --- every section must be in the single source file). The only exception is the reference list, which you should include separately. Your source must compile on our system, which includes the standard \LaTeX{} support files. \item Only the graphics files used in compiling your paper. \item The \LaTeX{}-generated files (e.g. .aux and .bib file, etc.) for your compiled source. \item If you have used an old installation of \LaTeX{}, you should include the algorithm style files. If in doubt, include them. \end{itemize} Your \LaTeX{} source will be reviewed and recompiled on our system (if it does not compile, you may incur late fees). \textbf{Do not submit your source in multiple text files.} Your single \LaTeX{} source file must include all your text, your bibliography (formatted using aaai.bst), and any custom macros. Accompanying this source file, you must also supply any nonstandard (or older) referenced style files and all your referenced graphics files. Your files should work without any supporting files (other than the program itself) on any computer with a standard \LaTeX{} distribution. Place your PDF and source files in a single tar, zipped, gzipped, stuffed, or compressed archive. Name your source file with your last (family) name.
\textbf{Do not send files that are not actually used in the paper.} We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, standard style files, and so forth. \textbf{Obsolete style files.} The commands for some common packages (such as some used for algorithms), may have changed. Please be certain that you are not compiling your paper using old or obsolete style files. \section{Using \LaTeX{} to Format Your Paper} The latest version of the AAAI style file is available on AAAI's website. Download this file and place it in the \TeX\ search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete author kit so that you will have the latest instruction set and style file. \subsection{Document Preamble} In the \LaTeX{} source for your paper, you \textbf{must} place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. The helvet package will cause Helvetica to be used for sans serif. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation). Leave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2. If (and only if) your author title information will not fit within the specified height allowed, put \textbackslash setlength \textbackslash titlebox{2.5in} in your preamble. 
Increase the height until the height error disappears from your log. You may not use the \textbackslash setlength command elsewhere in your paper, and it may not be used to reduce the height of the author-title box. \subsubsection{The Following Must Appear in Your Preamble} \begin{quote} \begin{scriptsize}\begin{verbatim} \documentclass[letterpaper]{article} \usepackage{aaai} \usepackage{times} \usepackage{helvet} \usepackage{courier} \usepackage{url} \usepackage{graphicx} \frenchspacing \setlength{pdfpagewidth}{8.5in} \setlength{pdfpageheight}{11in}\\ \pdfinfo{ /Title (Input Your Paper Title Here) /Author (John Doe, Jane Doe) /Keywords (Input your keywords in this optional area) } \title{Title}\\ \author\{Author 1 \ and Author 2\\ Address line\\ Address line\\ \ And\\ Author 3\\ Address line\\ Address line }\\ \end{verbatim}\end{scriptsize} \end{quote} \subsection{Preparing Your Paper} After the preamble above, you should prepare your paper as follows: \begin{quote} \begin{scriptsize}\begin{verbatim} \begin{document} \maketitle \begin{abstract} \end{abstract}\end{verbatim}\end{scriptsize} \end{quote} \subsubsection{The Following Must Conclude Your Document} \begin{quote} \begin{scriptsize}\begin{verbatim} \section{Copyright} All papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form or, in the case of technical reports, by a valid signed permission to distribute form. There are no exceptions to this requirement. You must send us the original version of this form. However, to meet the deadline, you may fax (1-650-321-4457) or scan and e-mail the form ([email protected]) to AAAI by the submission deadline, and then mail the original via postal mail to the AAAI office. \textbf{If you fail to send in a signed copyright or permission form, we will be unable to publish your paper. There are no exceptions to this policy.} You will find PDF versions of the AAAI copyright and permission to distribute forms in the author kit. 
\section{Formatting Requirements in Brief} We need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. The design and appearance of the paper is governed by the aaai style file. You must not make any changes to this file, nor use any commands, packages, style files, or macros that alter that design, including, but not limited to spacing, floats, margins, fonts, font size, and appearance. AAAI imposes requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. All papers submitted to AAAI for publication must comply with the following: \begin{itemize} \item Your .tex file must compile in PDF\LaTeX{} --- \textbf{ no .ps or .eps figure files.} \item All fonts must be embedded in the PDF file --- \textbf{ this includes your figures.} \item Modifications to the style file, whether directly or via commands in your document may not be made, most especially when made in an effor to avoid extra page charges or make your paper fit in a specific number of pages. \item No type 3 fonts may be used (even in illustrations). \item You may not alter the spacing above and below captions, figures, headings, and subheadings. \item You may not alter the font sizes of text elements, footnotes, heading elements, captions, or title information (for references and tables and mathematics, please see the the limited exceptions provided herein). \item You may not alter the line spacing of text. \item Your title must follow Title Case capitalization rules (not sentence case). \item Your .tex file include completed metadata to pass-through to the PDF (see PDFINFO below) \item \LaTeX{} documents must use the Times or Nimbus font package (do not use Computer Modern for the text of your paper). \item No \LaTeX{} 209 documents may be used or submitted. 
\item Your source must not require use of fonts for non-Roman alphabets within the text itself. If your paper includes symbols in other languages (such as, but not limited to Arabic, Chinese, Hebrew, Japanese, Russian and other Cyrillic languages), you must restrict their use to figures. \item Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or 300 dpi bitmap or removed from the document (even if they are in a graphics file embedded in the document). \item Two-column format in AAAI style is required for all papers. \item The paper size for final submission must be US letter without exception. \item The source file must exactly match the PDF. \item The document margins must be as specified in the formatting instructions. \item The number of pages and the file size must be as specified for your event. \item No document may be password protected. \item Neither the PDFs nor the source may contain any embedded links or bookmarks. \item Your source and PDF must not have any page numbers, footers, or headers. \item Your PDF must be compatible with Acrobat 5 or higher. \item Your \LaTeX{} source file (excluding references) must consist of a \textbf{single} file (use of the ``input" command is not allowed. \item Your graphics must be sized appropriately outside of \LaTeX{} (do not use the ``clip" command) . \end{itemize} If you do not follow the above requirements, it is likely that we will be unable to publish your paper. \section{What Files to Submit} You must submit the following items to ensure that your paper is published: \begin{itemize} \item A fully-compliant PDF file. \item Your \LaTeX{} source file submitted as a \textbf{single} .tex file (do not use the ``input" command to include sections of your paper --- every section must be in the single source file). The only exception is the reference list, which you should include separately. 
Your source must compile on our system, which includes the standard \LaTeX{} support files.
\item Only the graphics files used in compiling your paper.
\item The \LaTeX{}-generated files (e.g., the .aux and .bbl files) for your compiled source.
\item If you have used an old installation of \LaTeX{}, you should include the algorithm style files referenced in your paper. If in doubt, include them.
\end{itemize}
Your \LaTeX{} source will be reviewed and recompiled on our system (if it does not compile, you may incur late fees). \textbf{Do not submit your source in multiple text files.} Your single \LaTeX{} source file must include all your text, your bibliography (formatted using aaai.bst), and any custom macros. Accompanying this source file, you must also supply any nonstandard (or older) referenced style files and all your referenced graphics files. Your files should work without any supporting files (other than the program itself) on any computer with a standard \LaTeX{} distribution. Place your PDF and source files in a single tar, zipped, gzipped, stuffed, or compressed archive. Name your source file with your last (family) name. \textbf{Do not send files that are not actually used in the paper.} We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, standard style files, and so forth. \textbf{Obsolete style files.} The commands for some common packages (such as some used for algorithms) may have changed. Please be certain that you are not compiling your paper using old or obsolete style files. \section{Using \LaTeX{} to Format Your Paper} The latest version of the AAAI style file is available on AAAI's website. Download this file and place it in the \TeX\ search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete author kit so that you will have the latest instruction set and style file.
\subsection{Document Preamble} In the \LaTeX{} source for your paper, you \textbf{must} place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. The helvet package will cause Helvetica to be used for sans serif. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation). Leave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2. If (and only if) your author-title information will not fit within the specified height allowed, put \textbackslash setlength\textbackslash titlebox\{2.5in\} in your preamble. Increase the height until the height error disappears from your log. You may not use the \textbackslash setlength command elsewhere in your paper, and it may not be used to reduce the height of the author-title box.
\subsubsection{The Following Must Appear in Your Preamble}
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\documentclass[letterpaper]{article}
\usepackage{aaai}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage{url}
\usepackage{graphicx}
\frenchspacing
\setlength{\pdfpagewidth}{8.5in}
\setlength{\pdfpageheight}{11in}
\pdfinfo{
/Title (Input Your Paper Title Here)
/Author (John Doe, Jane Doe)
/Keywords (Input your keywords in this optional area)
}
\title{Title}
\author{Author 1 \and Author 2\\
Address line\\
Address line\\
\And\\
Author 3\\
Address line\\
Address line}
\end{verbatim}\end{scriptsize}
\end{quote}
\subsection{Preparing Your Paper} After the preamble above, you should prepare your paper as follows:
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\begin{document}
\maketitle
\begin{abstract}
\end{abstract}
\end{verbatim}\end{scriptsize}
\end{quote}
\subsubsection{The Following Must Conclude Your Document}
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\end{document}
\end{verbatim}\end{scriptsize}
\end{quote}
\section{Introduction} \label{section:introduction} \input{texfiles/ontology_introduction}
\section{Motivation} \label{section:problem_statement} \input{texfiles/ontology_problem_statement}
\section{Best Practices} \label{section:best_practices} \input{texfiles/ontology_best_practices}
\section{Alternative Topic Sets} \label{section:ontology_alternatives} \input{texfiles/ontology_alternatives}
\section{Klout Topics Characteristics} \label{section:ontology_characteristics} \input{texfiles/ontology_characteristics}
\section{Klout Topics Coverage Versus Alternatives} \label{section:ontology_results} \input{texfiles/ontology_results}
\section{Case Studies} \label{section:ontology_sample_case_study} \input{texfiles/ontology_sample_case_study}
\section{Conclusion and Future Work} \label{section:conclusion} \input{texfiles/ontology_conclusion}
\vspace{-0.03in}
\bibliographystyle{ACM-Reference-Format}
\subsection{Overview of Development}
\subsection{Bootstrapping the Ontology} The
first version of the ontology consisted of 140k nodes from the following sources:
\begin{enumerate}
\item Keywords extracted from processed tweets
\item (Which academic topic sets? need reference)
\end{enumerate}
A team of employees was tasked with whittling down the set of topics by removing those that were:
\begin{itemize}
\item too specific to be applied to a significant number of profiles or URLs (``Offices of Dentists (Industry)'', ``Australian Desert Raisin (Ingredient)'')
\item too general or ambiguous to be meaningful (``Minister-President'', ``Comments'', ``Short List'')
\item out of date (``Seattle Supersonics (Basketball)'')
\item containing profanity or adult content (``Tits \& Clits Comix (Comic Books)'')
\end{itemize}
This approach resulted in a set of approximately 10,000 v1 topics. Each topic node contained a unique numerical identifier, a human-readable string identifier, an English-language display\_name, and a ``type'' field indicating where it lay within the ontology's tree structure (see below). The same team was then tasked with verifying parent-child relationships inferred from Freebase's data structure. A stand-alone tool was created for this task, which required (time estimate). v1 topics were organized into a three-level tree:
\begin{itemize}
\item Supertopics: 15 top-level domains, e.g., ``Business'' and ``Entertainment''
\item Subtopics: approximately 1,000 more specific categories, such as ``Accounting'' and ``Music''. Each subtopic is limited to a single parent at the supertopic level.
\item Entities: approximately 9,000 named ``entities'' or more specific topics, such as ``TurboTax'' or ``Lady Gaga''. Each entity may have multiple parents, but at the subtopic level only. (todo: clarify definition of entities here versus in Papyrus)
\end{itemize}
A dedicated ontology specialist was brought on to normalize topic names and handle ongoing curation and maintenance.
This included normalizing names from the formal, parenthesis-heavy academic labels; adding missing topics too new or simply overlooked in Freebase; and deleting duplicate concepts. v1 of the ontology had several drawbacks. First, the three-level limit caused a pile-up at the bottommost (``entity'') level, where topics that should have had a parent-child relationship were forced into a sibling one instead; for example, v1 contained both \textbf{Sports and Recreation > Baseball > Major League Baseball} and \textbf{Sports and Recreation > Baseball > Boston Red Sox}, where the preferred path would be \textbf{Sports and Recreation > Baseball > Major League Baseball > Boston Red Sox}. Similarly, the restriction on the allowed level of a parent topic meant that not all possible paths could be represented; the v1 ontology could support either \textbf{Hobbies > Antiques} or \textbf{Lifestyle > Home Decorating > Antiques}, but not both. An even more pressing problem from a business perspective was that the v1 ontology could only support a single display\_name, and therefore could not be internationalized without supporting multiple parallel versions. \subsection{Ontology Development} In 2014, we began developing an improved version of the Klout ontology, incorporating the following changes. (More about the characteristics of ontology v2 can be found in Section 5, below.) First, v2 removed restrictions on number and level of parent topics, moving from a topic tree to a directed graph. We retained the original set of top-level topics, but now allow multiple parents at any level so long as the resulting path is not recursive. Next, we added internationalization as an additional dimension of metadata attached to the topic node; now every node includes multiple display\_names in supported languages. From both a technical and curation standpoint, we found this approach preferable to maintaining parallel ontologies by language or region.
While some topics are certainly rooted in a given region or language and will be less frequently found elsewhere (``Toronto Film Festival'', for example), the parent topics will be unchanged, so there is no need to maintain a separate graph. Furthermore, adding internationalization to the existing ontology gave us a ``cold start'' for those languages, from which we can work to incrementally improve topic coverage for non-US-English concepts and domains. Each display\_name also comes with a flag to indicate whether the given topic should be shown or hidden for the given language, allowing some pruning of less-regionally-relevant topics on the front end. All internationalization within the ontology is handled at the language level, not the country or other region. For business reasons, we also may enforce the hiding of particular topics within particular regions (for example, ``LGBT'' and similar topics within Saudi Arabia). However, because those are business decisions that can vary widely by circumstance and application, we chose not to encode those within the ontology itself. Finally, v2 adds a pointer to the primary Freebase entity for each topic. This provides a richer data set for machine learning or text-processing applications. \subsection{Lessons Learned and Best Practices} While much has been written about formal ontology construction, especially for semantic web applications, less has been said about the practical realities of lightweight ontology construction for an enterprise or commercial user-facing application. The following are some of the most important lessons learned during the construction, implementation and maintenance of the Klout topic ontology. \textbf{Expectation of Curation and Incremental Change}. An ontology is a living knowledge artifact; it must be clear throughout the organization that building one requires an ongoing investment in the form of time, tools, and documentation.
Even when choosing to adopt an externally created ontology, it's crucial to consider when and how to propagate updates, whether they come from the external ontology's original creators, or from inside one's own organization. \textbf{Clear Chain of Ownership and Documentation}. Because an ontology is a living artifact, it should be clear what teams or persons have the authority to make changes. Those owners should document the principles used to make changes, addressing the ontology's scope, structure, and voice. A transparent historical record of changes to the ontology is also recommended. \textbf{Limiting Assumptions}. Encoding too many assumptions into your ontology can be dangerous, especially when it spans multiple domains. As we saw above, v1 of the Klout ontology was unsatisfactory in part because it assumed both that the top-level categories were mutually exclusive and that there was no need for a path longer than three nodes. Fewer rules to enforce can, paradoxically, lead to a cleaner ontology, as well as saving development and curation time. \textbf{Inclusivity and Tone}. Especially if your ontology will be used to visibly classify persons, it is a matter of business importance to be as inclusive as possible. Your ontology will reflect your organization's values in the eyes of many users. Strive for both caution and consistency when dealing with ideologically controversial concepts, adult concepts, and so forth. \textbf{Storage Formats and Tools for Curation}. There are a range of approaches to storing and describing ontologies, from OWL and Protege, to XTM topic maps, to simple tables in SQL and similar databases. (TODO: Add references) Because ours is a lightweight ontology with a single relationship type, and to make it easier to integrate our ontology with the rest of the data processing pipeline, we chose the straightforward table method, with some temporary front-end tools spun up when needed.
This tradeoff has often required the rules of the ontology to be enforced by curators rather than within the tool, emphasizing the importance of clear documentation and quality assurance (see below). \subsubsection{Metrics and Quality Assurance} Like any feature, an ontology in active use should be monitored for quality issues. However, it can be difficult to quantify an ontology's issues apart from those of its application. When maintaining the Klout ontology, we consider the following: \textbf{Coverage, a.k.a. Missing Topics}. Misapplied topics can indicate a gap in the ontology, where a less appropriate or too general topic is being used for lack of a better alternative. Topics that are missing because they are new concepts can often be identified through current news and tools like Google Trends. \textbf{Scope, a.k.a. Unneeded Topics}. Topics that are never applied in the application are often unneeded. Some of these will be topics that were once relevant but are now obsolete and can be safely ``aged out'' of the ontology. This is especially true of constantly evolving areas like consumer electronics, movies and television. \textbf{Missing and Incorrect Edges}. Detecting missing or incorrect edges is one of the most difficult areas of ontology improvement. Because the Klout ontology does encode references to Freebase, we can compare our edges to Freebase and to some extent to Wikipedia. \textbf{Application Metrics and User Feedback}. Application-level metrics are undeniably useful, although some investigation is required to determine when the cause lies in the ontology and when it lies in some other part of the application. There should be a clear path for user feedback about topic assignments. \textbf{Validation Against Other Ontologies}. Validating against other available ontologies is time-consuming, since it requires aligning the ontologies to be compared, but can be revealing. See Section 6.
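One class of edge errors can be caught mechanically: the v2 rule that parents may be added at any level ``so long as the resulting path is not recursive'' is equivalent to requiring that the topic graph stay acyclic. The following is a minimal Python sketch of such a check, with hypothetical function and topic names; the paper does not describe its actual tooling.

```python
# Hypothetical sketch: reject a new parent -> child edge if the
# parent is already reachable from the child, which would make the
# topic graph cyclic ("recursive" in the paper's terms).
from collections import defaultdict

def would_create_cycle(edges, parent, child):
    """Return True if adding the edge parent -> child creates a cycle."""
    children = defaultdict(set)
    for src, dst in edges:
        children[src].add(dst)
    # Depth-first search from `child`, looking for `parent`.
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(children[node])
    return False

edges = [("Sports", "Baseball"), ("Baseball", "MLB"),
         ("MLB", "Boston Red Sox")]
would_create_cycle(edges, "Boston Red Sox", "Sports")   # True: rejected
would_create_cycle(edges, "Entertainment", "Baseball")  # False: a second parent is fine
```

A curation tool could run a check like this before committing each new edge, allowing multiple parents while guaranteeing that no recursive paths enter the graph.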
\subsection{Methodology} KT was initially bootstrapped from a set of 140K Freebase topic nodes, selected by matching popular keywords from social media text. That set was curated down to 20K candidates by removing nodes that were not sufficiently popular on social media (``Australian Desert Raisin'') or not sufficiently intuitive for application use (``Comments''). Relationships within those 20K nodes were then inferred from Freebase and verified by curators. With ongoing curation, KT reached its current size of 8K nodes and 13K edges, with newly relevant topics added and obsolete ones removed. \subsection{Scope} As users on social networks interact with a variety of topics and domains, KT aims to include any concept that: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{is specific enough} to meaningfully describe a user's interest or the primary topic of a piece of text (e.g., ``Life'' or ``Growing'' are too broad, but ``Biology'' or ``Child Development'' should be included). \item \textbf{is general enough} to be shared by a community of interest on social media, with content that can be detected and recommended. A topic that is theoretically possible, but lacks users or content (e.g., ``Scandinavian Death Metal Played on Kazoos'') will not be included. \item \textbf{is not redundant} with topics already present (``Child Development Milestones'' does not need to be a separate topic from ``Child Development''). \item \textbf{is not illegal or offensive} to a general audience. \end{enumerate} \subsection{Components and Definitions} \subsubsection{Topic Nodes} Each topic node in KT contains: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{topic\_id}: a unique numerical identifier for the topic. \item \textbf{slug}: a human-readable English label, usable in a URL. \item \textbf{metadata}: includes both the Wikidata entity ID and Freebase machine\_id of the closest corresponding entity.
This facilitates NLP tasks such as entity disambiguation and linking \cite{Bhargava:lithiumnlp}. \end{enumerate} \subsubsection{Topic Edges} KT is structured as a directed graph with all branches connected to one or more of the 16 root categories in Table \ref{table:1}. Each edge indicates a parent-child relationship where a child may be a subclass of the parent (e.g., ``Philosophy'' > ``Ethics''), or a notable entity within the topic context (``Philosophy'' > ``Friedrich Nietzsche''). A child may have multiple parents in different branches of the graph, and parents may be at different distances from the root. Each edge contains: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{edge\_id}: a unique numerical identifier for the edge. \item \textbf{source\_topic\_id}: the topic\_id of the parent topic. \item \textbf{destination\_topic\_id}: the topic\_id of the child topic. \end{enumerate} \subsubsection{Topic Display Names} A topic node is associated with the following fields to define how it should be displayed: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{topic\_id}: the unique identifier for the topic. \item \textbf{language}: the language of the given display\_name. \item \textbf{display\_type}: a flag to indicate whether the topic should be displayed in the given language. \item \textbf{display\_name}: the most correct and human-readable form of the topic name, in the given language. \end{enumerate} In order for KT to be used in consumer-facing applications, its editorial voice strives to be \textbf{transparent} (avoiding neologisms and domain-specific jargon), \textbf{neutral or positive in tone} (avoiding ideological connotations), and \textbf{inclusive} (avoiding derogatory speech and using self-identified terms for groups of people). \subsection{Internationalization} A core assumption of KT is that the concept encoded in a node is \textit{not} language- or region-specific, but can be shared across languages. 
Rather than maintaining separate topic sets for each language, therefore, a single multi-lingual instance can be used. This approach allows for direct comparison of the interests of users in different languages. For flexibility in the application, however, we include a flag to indicate whether a topic should be displayed in a given language. Currently, the following languages are supported: Arabic (KSA), English (USA), French, German, and Spanish (EU). \subsection{Overview of Development} \subsection{Bootstrapping the Ontology} The first version of the ontology consisted of 140k nodes from the following sources: \begin{enumerate} \item Keywords extracted from processed tweets \item (Which academic topic sets? need reference) \end{enumerate} A team of employees was tasked with whittling down the set of topics by removing those that were: \begin{itemize} \item too specific to be applied to a significant number of profiles or URLs (``Offices of Dentists (Industry)'', ``Australian Desert Raisin (Ingredient)'') \item too general or ambiguous to be meaningful (``Minister-President'', ``Comments'', ``Short List'') \item out of date (``Seattle Supersonics (Basketball)'') \item containing profanity or adult content (``Tits \& Clits Comix (Comic Books)'') \end{itemize} This approach resulted in a set of ~10,000 v1 topics. Each topic node contained a unique numerical identifier, a human-readable string identifier, an English-language display\_name, and a ``type'' field indicating where it lay within the ontology's tree structure (see below). The same team was then tasked with verifying parent-child relationships inferred from Freebase's data structure. A stand-alone tool was created for this task, which required (time estimate). v1 topics were organized into a three-level tree: \begin{itemize} \item Supertopics: 15 top-level domains, e.g. ``Business'' and ``Entertainment'' \item Subtopics: \~1000 more specific categories, such as ``Accounting'' and ``Music''. 
Each subtopic is limited to a single parent at the supertopic level. \item Entities: \~9000 named ``entities'' or more specific topics, such as ``TurboTax'' or ``Lady Gaga''. Each entity may have multiple parents, but at the subtopic level only. (todo: clarify definition of entities here versus in Papyrus) \end{itemize} A dedicated ontology specialist was brought on to normalize topic names and handle ongoing curation and maintenance. This included normalizing names from the formal, parenthesis-heavy academic labels; adding missing topics too new or simply overlooked in Freebase; and deleting duplicate concepts. v1 of the ontology had several drawbacks. First, the three-level limit caused a pile-up at the bottommost (``entity'') level, where topics that should have had a parent-child relationship were forced into a sibling one instead; for example, v1 contained both \textbf{Sports and Recreation > Baseball > Major League Baseball} and \textbf{Sports and Recreation > Baseball > Boston Red Sox}, where the preferred path would be \textbf{Sports and Recreation > Baseball > Major League Baseball > Boston Red Sox}. Similarly, the restriction on the allowed level of a parent topic meant that not all possible paths could be represented; the v1 ontology could support either \textbf{Hobbies > Antiques} or \textbf{Lifestyle > Home Decorating > Antiques}, but not both. An even more pressing problem from a business perspective was that the v1 ontology could only support a single display\_name, and therefore could not be internationalized without supporting multiple parallel versions. \subsection{Ontology Development} Beginning in 2014, we began developing an improved version of the Klout ontology, incorporating the following changes. (More about the characteristics of ontology v2 can be found in section 5, below.) First, v2 removed restrictions on number and level of parent topics, moving from a topic tree to a directed graph. 
We retained the original set of top-level topics, but now allow multiple parents at any level so long as the resulting path is not recursive. Next, we added internationalization as an additional dimension of metadata attached to the topic node; now every node includes multiple display\_names in supported languages. From both a technical and curation standpoint, we found this approach preferable to maintaining parallel ontologies by language or region. While some topics are certainly rooted in a given region or language and will be less frequently found elsewhere (''Toronto Film Festival'', for example), the parent topics will be unchanged, so there is no need to maintain a separate graph. Furthermore, adding internationalization to the existing ontology gave us a ``cold start'' for those languages, from which we can work to incrementally improve topic coverage for non-US-English concepts and domains. Each display\_name also comes with a flag to indicate whether the given topic should be shown or hidden for the given language, allowing some pruning of less-regionally-relevant topics on the front end. All internationalization within the ontology is handled at the language level, not the country or other region. For business reasons, we also may enforce the hiding of particular topics within particular regions (for example, ``LGBT'' and similar topics within Saudi Arabia). However, because those are business decisions that can vary widely by circumstance and application, we chose not to encode those within the ontology itself. Finally, v2 adds a pointer to the primary Freebase entity for each topic. This provides a richer data set for machine learning or text-processing applications. 
\subsection{Lessons Learned and Best Practices} While much has been written about formal ontology construction, especially for semantic web applications, less has been said about the practical realities of lightweight ontology construction for an enterprise or commercial user-facing application. The following are some of the most important lessons learned during the construction, implementation and maintenance of the Klout topic ontology. \textbf{Expectation of Curation and Incremental Change}. An ontology is a living knowledge artifact; it must be clear throughout the organization that building one requires an ongoing investment in the form of time, tools, and documentation. Even when choosing to adopt an externally created ontology, it's crucial to consider when and how to propogate updates, whether they come from the external ontology's original creators, or from inside one's own organization. \textbf{Clear Chain of Ownership and Documentation}. Because an ontology is a living artifact, it should be clear what teams or persons have the authority to make changes. Those owners should document the principles used to make changes, addressing the ontology's scope, structure, and voice. A transparent historical record of changes to the ontology is also recommended. \textbf{Limiting Assumptions}. Encoding too many assumptions into your ontology can be dangerous, especially when it spans multiple domains. As we saw above, v1 of the Klout ontology was unsatisfactory in part because it assumed both that the top-level categories were mutually exclusive and that there was no need for a path longer than three nodes. Fewer rules to enforce can, paradoxically, lead to a cleaner ontology, as well as saving development and curation time. \textbf{Inclusivity and Tone}. Especially if your ontology will be used to visibly classify persons, it is a matter of business importance to be as inclusive as possible. 
Your ontology will reflect your organization's values in the eyes of many users. Strive for both caution and consistency when dealing with ideologically controversial concepts, adult concepts, and so forth. \textbf{Storage Formats and Tools for Curation}. There are a range of approaches to storing and describing ontologies, from OWL and Protege, to XTM topic maps, to simple tables in SQL and similar databases. (TODO: Add references) Because ours is a lightweight ontology with a single relationship type, and to make it easier to integrate our ontology with the rest of the data processing pipeline, we chose the straightforward table method, with some temporary front-end tools spun up when needed. This tradeoff has often required the rules of the ontology to be enforced by curators rather than within the tool, emphasizing the importance of clear documentation and quality assurance (see below). \subsubsection{Metrics and Quality Assurance}. Like any feature, an ontology in active use should be monitored for quality issues. However, it can be difficult to quantify an ontology's issues apart from those of its application. When maintaining the Klout ontology, we consider the following: \textbf{Coverage, a.k.a. Missing Topics}. Misapplied topics application can indicate a gap in the ontology, where a less appropriate or too general topic is being used for lack of a better alternative. Topics that are missing because they are new concepts can often be identified through current news and tools like Google Trends. \textbf{Scope, a.k.a. Unneeded Topics}. Topics not being applied in the application are often topics that are unneeded. Some of these will be topics that were once relevant but are now obsolete and can be safely ``aged out'' of the ontology. This is especially true of constantly evolving areas like consumer electronics, movies and television. \textbf{Missing and Incorrect Edges}. 
Detecting missing or incorrect edges is one of the most difficult areas of ontology improvement. Because the Klout ontology does encode references to Freebase, we can compare our edges to Freebase and to some extent to Wikipedia. \textbf{Application Metrics and User Feedback}. Application-level metrics are undeniably useful, although some investigation is required to determine when the cause lies in the ontology and when it lies in some other part of the application. There should be a clear path for user feedback about topic assignments. \textbf{Validation Against Other Ontologies}. Validating against other available ontologies is time-consuming, since it requires aligning the ontologies to be compared, but can be revealing. See Section 6. \section{Introduction} \label{section:introduction} \input{texfiles/ontology_introduction} \section{Motivation} \label{section:problem_statement} \input{texfiles/ontology_problem_statement} \section{Best Practices} \label{section:best_practices} \input{texfiles/ontology_best_practices} \section{Alternative Topic Sets} \label{section:ontology_alternatives} \input{texfiles/ontology_alternatives} \section{Klout Topics Characteristics} \label{section:ontology_characteristics} \input{texfiles/ontology_characteristics} \section{Klout Topics Coverage Versus Alternatives} \label{section:ontology_results} \input{texfiles/ontology_results} \section{Case Studies} \label{section:ontology_sample_case_study} \input{texfiles/ontology_sample_case_study} \section{Conclusion and Future Work} \label{section:conclusion} \input{texfiles/ontology_conclusion} \vspace{-0.03in} \bibliographystyle{ACM-Reference-Format} \subsection{Overview of Development} \subsection{Bootstrapping the Ontology} The first version of the ontology consisted of 140k nodes from the following sources: \begin{enumerate} \item Keywords extracted from processed tweets \item (Which academic topic sets? 
need reference) \end{enumerate} A team of employees was tasked with whittling down the set of topics by removing those that were: \begin{itemize} \item too specific to be applied to a significant number of profiles or URLs (``Offices of Dentists (Industry)'', ``Australian Desert Raisin (Ingredient)'') \item too general or ambiguous to be meaningful (``Minister-President'', ``Comments'', ``Short List'') \item out of date (``Seattle Supersonics (Basketball)'') \item containing profanity or adult content (``Tits \& Clits Comix (Comic Books)'') \end{itemize} This approach resulted in a set of ~10,000 v1 topics. Each topic node contained a unique numerical identifier, a human-readable string identifier, an English-language display\_name, and a ``type'' field indicating where it lay within the ontology's tree structure (see below). The same team was then tasked with verifying parent-child relationships inferred from Freebase's data structure. A stand-alone tool was created for this task, which required (time estimate). v1 topics were organized into a three-level tree: \begin{itemize} \item Supertopics: 15 top-level domains, e.g. ``Business'' and ``Entertainment'' \item Subtopics: \~1000 more specific categories, such as ``Accounting'' and ``Music''. Each subtopic is limited to a single parent at the supertopic level. \item Entities: \~9000 named ``entities'' or more specific topics, such as ``TurboTax'' or ``Lady Gaga''. Each entity may have multiple parents, but at the subtopic level only. (todo: clarify definition of entities here versus in Papyrus) \end{itemize} A dedicated ontology specialist was brought on to normalize topic names and handle ongoing curation and maintenance. This included normalizing names from the formal, parenthesis-heavy academic labels; adding missing topics too new or simply overlooked in Freebase; and deleting duplicate concepts. v1 of the ontology had several drawbacks. 
First, the three-level limit caused a pile-up at the bottommost (``entity'') level, where topics that should have had a parent-child relationship were forced into a sibling one instead; for example, v1 contained both \textbf{Sports and Recreation > Baseball > Major League Baseball} and \textbf{Sports and Recreation > Baseball > Boston Red Sox}, where the preferred path would be \textbf{Sports and Recreation > Baseball > Major League Baseball > Boston Red Sox}. Similarly, the restriction on the allowed level of a parent topic meant that not all possible paths could be represented; the v1 ontology could support either \textbf{Hobbies > Antiques} or \textbf{Lifestyle > Home Decorating > Antiques}, but not both. An even more pressing problem from a business perspective was that the v1 ontology could only support a single display\_name, and therefore could not be internationalized without supporting multiple parallel versions. \subsection{Ontology Development} In 2014, we began developing an improved version of the Klout ontology, incorporating the following changes. (More about the characteristics of ontology v2 can be found in Section 5, below.) First, v2 removed restrictions on the number and level of parent topics, moving from a topic tree to a directed graph. We retained the original set of top-level topics, but now allow multiple parents at any level so long as the resulting path is not recursive. Next, we added internationalization as an additional dimension of metadata attached to the topic node; now every node includes multiple display\_names in supported languages. From both a technical and curation standpoint, we found this approach preferable to maintaining parallel ontologies by language or region. While some topics are certainly rooted in a given region or language and will be less frequently found elsewhere (``Toronto Film Festival'', for example), the parent topics will be unchanged, so there is no need to maintain a separate graph.
Furthermore, adding internationalization to the existing ontology gave us a ``cold start'' for those languages, from which we can work to incrementally improve topic coverage for non-US-English concepts and domains. Each display\_name also comes with a flag to indicate whether the given topic should be shown or hidden for the given language, allowing some pruning of less-regionally-relevant topics on the front end. All internationalization within the ontology is handled at the language level, not the country or other region. For business reasons, we also may enforce the hiding of particular topics within particular regions (for example, ``LGBT'' and similar topics within Saudi Arabia). However, because those are business decisions that can vary widely by circumstance and application, we chose not to encode those within the ontology itself. Finally, v2 adds a pointer to the primary Freebase entity for each topic. This provides a richer data set for machine learning or text-processing applications. \subsection{Lessons Learned and Best Practices} While much has been written about formal ontology construction, especially for semantic web applications, less has been said about the practical realities of lightweight ontology construction for an enterprise or commercial user-facing application. The following are some of the most important lessons learned during the construction, implementation and maintenance of the Klout topic ontology. \textbf{Expectation of Curation and Incremental Change}. An ontology is a living knowledge artifact; it must be clear throughout the organization that building one requires an ongoing investment in the form of time, tools, and documentation. Even when choosing to adopt an externally created ontology, it's crucial to consider when and how to propagate updates, whether they come from the external ontology's original creators, or from inside one's own organization. \textbf{Clear Chain of Ownership and Documentation}.
Because an ontology is a living artifact, it should be clear what teams or persons have the authority to make changes. Those owners should document the principles used to make changes, addressing the ontology's scope, structure, and voice. A transparent historical record of changes to the ontology is also recommended. \textbf{Limiting Assumptions}. Encoding too many assumptions into your ontology can be dangerous, especially when it spans multiple domains. As we saw above, v1 of the Klout ontology was unsatisfactory in part because it assumed both that the top-level categories were mutually exclusive and that there was no need for a path longer than three nodes. Fewer rules to enforce can, paradoxically, lead to a cleaner ontology, as well as saving development and curation time. \textbf{Inclusivity and Tone}. Especially if your ontology will be used to visibly classify persons, it is a matter of business importance to be as inclusive as possible. Your ontology will reflect your organization's values in the eyes of many users. Strive for both caution and consistency when dealing with ideologically controversial concepts, adult concepts, and so forth. \textbf{Storage Formats and Tools for Curation}. There is a range of approaches to storing and describing ontologies, from OWL and Protege, to XTM topic maps, to simple tables in SQL and similar databases. (TODO: Add references) Because ours is a lightweight ontology with a single relationship type, and to make it easier to integrate our ontology with the rest of the data processing pipeline, we chose the straightforward table method, with some temporary front-end tools spun up when needed. This tradeoff has often required the rules of the ontology to be enforced by curators rather than within the tool, emphasizing the importance of clear documentation and quality assurance (see below). \subsubsection{Metrics and Quality Assurance} Like any feature, an ontology in active use should be monitored for quality issues.
However, it can be difficult to quantify an ontology's issues apart from those of its application. When maintaining the Klout ontology, we consider the following: \textbf{Coverage, a.k.a. Missing Topics}. Misapplied topics can indicate a gap in the ontology, where a less appropriate or too general topic is being used for lack of a better alternative. Topics that are missing because they are new concepts can often be identified through current news and tools like Google Trends. \textbf{Scope, a.k.a. Unneeded Topics}. Topics that are never applied by the application are often unneeded. Some of these will be topics that were once relevant but are now obsolete and can be safely ``aged out'' of the ontology. This is especially true of constantly evolving areas like consumer electronics, movies and television. \textbf{Missing and Incorrect Edges}. Detecting missing or incorrect edges is one of the most difficult areas of ontology improvement. Because the Klout ontology does encode references to Freebase, we can compare our edges to Freebase and to some extent to Wikipedia. \textbf{Application Metrics and User Feedback}. Application-level metrics are undeniably useful, although some investigation is required to determine when the cause lies in the ontology and when it lies in some other part of the application. There should be a clear path for user feedback about topic assignments. \textbf{Validation Against Other Ontologies}. Validating against other available ontologies is time-consuming, since it requires aligning the ontologies to be compared, but can be revealing. See Section 6. \subsection{Methodology} KT was initially bootstrapped from a set of 140K Freebase topic nodes, selected by matching popular keywords from social media text. That set was curated down to 20K candidates by removing nodes that were not sufficiently popular on social media (``Australian Desert Raisin'') or not sufficiently intuitive for application use (``Comments'').
Relationships within those 20K nodes were then inferred from Freebase and verified by curators. With ongoing curation, KT reached its current size of 8K nodes and 13K edges, with newly relevant topics added and obsolete ones removed. \subsection{Scope} As users on social networks interact with a variety of topics and domains, KT aims to include any concept that: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{is specific enough} to meaningfully describe a user's interest or the primary topic of a piece of text (e.g., ``Life'' or ``Growing'' are too broad, but ``Biology'' or ``Child Development'' should be included). \item \textbf{is general enough} to be shared by a community of interest on social media, with content that can be detected and recommended. A topic that is theoretically possible, but lacks users or content (e.g. ``Scandinavian Death Metal Played on Kazoos'') will not be included. \item \textbf{is not redundant} with topics already present (``Child Development Milestones'' does not need to be a separate topic from ``Child Development''). \item \textbf{is not illegal or offensive} to a general audience. \end{enumerate} \subsection{Components and Definitions} \subsubsection{Topic Nodes} Each topic node in KT contains: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{topic\_id}: a unique numerical identifier for the topic. \item \textbf{slug}: a human-readable English label, usable in a URL. \item \textbf{metadata}: includes both the Wikidata entity ID and Freebase machine\_id of the closest corresponding entity. This facilitates NLP tasks such as entity disambiguation and linking \cite{Bhargava:lithiumnlp}. \end{enumerate} \subsubsection{Topic Edges} KT is structured as a directed graph with all branches connected to one or more of the 16 root categories in Table \ref{table:1}.
Each edge indicates a parent-child relationship where a child may be a subclass of the parent (e.g., ``Philosophy'' > ``Ethics''), or a notable entity within the topic context (``Philosophy'' > ``Friedrich Nietzsche''). A child may have multiple parents in different branches of the graph, and parents may be at different distances from the root. Each edge contains: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{edge\_id}: a unique numerical identifier for the edge. \item \textbf{source\_topic\_id}: the topic\_id of the parent topic. \item \textbf{destination\_topic\_id}: the topic\_id of the child topic. \end{enumerate} \subsubsection{Topic Display Names} A topic node is associated with the following fields to define how it should be displayed: \begin{enumerate}[nolistsep,noitemsep] \item \textbf{topic\_id}: the unique identifier for the topic. \item \textbf{language}: the language of the given display\_name. \item \textbf{display\_type}: a flag to indicate whether the topic should be displayed in the given language. \item \textbf{display\_name}: the most correct and human-readable form of the topic name, in the given language. \end{enumerate} In order for KT to be used in consumer-facing applications, its editorial voice strives to be \textbf{transparent} (avoiding neologisms and domain-specific jargon), \textbf{neutral or positive in tone} (avoiding ideological connotations), and \textbf{inclusive} (avoiding derogatory speech and using self-identified terms for groups of people). \subsection{Internationalization} A core assumption of KT is that the concept encoded in a node is \textit{not} language- or region-specific, but can be shared across languages. Rather than maintaining separate topic sets for each language, therefore, a single multi-lingual instance can be used. This approach allows for direct comparison of the interests of users in different languages. 
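The node, edge, and display-name records defined in this section can be sketched as plain data structures. The field names follow the text; the example values are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TopicNode:
    topic_id: int                  # unique numerical identifier
    slug: str                      # human-readable English label, URL-safe
    metadata: dict = field(default_factory=dict)  # Wikidata / Freebase ids

@dataclass
class TopicEdge:
    edge_id: int
    source_topic_id: int           # parent topic
    destination_topic_id: int      # child topic

@dataclass
class DisplayName:
    topic_id: int
    language: str
    display_type: bool             # show/hide flag for this language
    display_name: str

# A child may be a subclass of its parent ("Philosophy" > "Ethics"),
# and a single node carries one display name per supported language.
philosophy = TopicNode(1, "philosophy")
ethics = TopicNode(2, "ethics")
edges = [TopicEdge(10, source_topic_id=1, destination_topic_id=2)]
names = [DisplayName(2, "fr", True, "Éthique")]
```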
For flexibility in the application, however, we include a flag to indicate whether a topic should be displayed in a given language. Currently, the following languages are supported: Arabic (KSA), English (USA), French, German, and Spanish (EU).
\section{INTRODUCTION} The application of machine learning and deep learning has had a significant impact on the healthcare industry, improving diagnosis outcomes and changing the way care is provided to patients~\cite{cheng2016risk}. The main challenge that machine learning is asked to solve is to discover relevant structural patterns in clinical data, which are usually concealed and difficult to detect manually. An important fraction of electronic health records consists of clinical measurements collected from patients over time, which are represented as multivariate time series (MTS)~\cite{che2016recurrent}. Several efforts have been devoted to learning informative and compact representations of MTS~\cite{che2017time}, not only to improve the quality of the analysis, but also to manage the large amounts of data necessary to train deep learning models~\cite{miotto2016deep}. Furthermore, MTS are characterized by complex relationships across variables and time that must be accounted for in the analysis. However, most methods are designed to treat vectorial data and cannot be trivially extended to capture such relationships. The autoencoder (AE) is a type of neural network originally conceived as a non-linear dimensionality reduction algorithm~\cite{Hinton504}, which has been further exploited to learn data representations in deep architectures~\cite{bengio2009learning}. AEs have been adopted to map time series data into \textit{codes}, which are real-valued vectors lying in a lower-dimensional space~\cite{langkvist2014review}. Clinical measurements are often recorded at irregular frequencies that change across patients, variables, and time. Hence, after discretizing time, the resulting MTS end up containing missing values~\cite{mikalsen2016learning}. Missing values follow patterns that reflect the medical conditions of the patients or the decisions of the doctors and are therefore important to include in the analysis.
Since AEs cannot process data containing missing values, these are usually replaced through imputation techniques which, however, cannot capture the missingness patterns, as they only fill in the blanks while trying to introduce as little bias as possible. On the other hand, a recently proposed method, called Time series Cluster Kernel (TCK)~\cite{mikalsen2017time}, computes an unsupervised kernel similarity between MTS with missing data. TCK leverages the configurations of missingness patterns to improve the evaluation of the similarity. In this work, we propose a completely unsupervised approach for learning compressed representations of MTS in the presence of missing data. Towards that end, we utilize the \textit{deep kernelized autoencoder} (dkAE)~\cite{kampffmeyer2017deep}, a recently proposed architecture that embeds the properties of a given prior kernel in the code representation of an AE through kernel alignment. By introducing TCK as the prior kernel, we extend the dkAE framework to time series. Moreover, due to TCK's properties, the relationships among the learned codes account for the presence of missing data, yielding a more discriminative representation of the data. We apply our method to classify MTS of blood samples, relative to patients with surgical site infections contracted after surgery, characterized by a high percentage of missing data. We compare the classification results obtained on the representations learned by a standard AE with those of a dkAE implementing the alignment to TCK. Results indicate that the learned codes not only provide a compact vectorial representation, but that the same classifier achieves better results when it operates in our code space rather than in the input space. \section{METHODS} \subsection{Time series Cluster Kernel} The \emph{Time series Cluster Kernel} \cite{mikalsen2017time} exploits the missingness patterns in MTS to compute their similarities, rather than relying on imputation methods that may introduce strong biases.
TCK implements an ensemble learning approach wherein robustness to hyperparameters is ensured by joining the clustering results of many Gaussian mixture models (GMMs) to form the final kernel. Hence, no critical hyperparameters must be tuned by the user. To deal with missing data, the GMMs are extended using informative prior distributions \cite{Marlin:2012:UPD:2110363.2110408}. The TCK matrix is built by fitting GMMs to the set of time series for a range of numbers of mixture components, to provide partitions with different resolutions that capture both local and global structures in the data. To enhance diversity in the ensemble, each partition is evaluated on a random subset of attributes and segments, using random initializations and randomly chosen hyperparameters. This also provides robustness in the hyperparameter selection. TCK is then built by summing (for each partition) the inner products between pairs of posterior distributions corresponding to different MTS. \subsection{Autoencoder} AEs simultaneously learn two functions. The first one, the \textit{encoder}, provides a mapping from an input domain, $\mathcal{X}$, to a code domain, $\mathcal{C}$, i.e., the hidden representation. The second function, the \textit{decoder}, maps from $\mathcal{C}$ back to $\mathcal{X}$. In AEs with a single hidden layer, the encoding and decoding functions are $\mathbf{c} = \phi(\mathbf{W}_E\mathbf{x} + \mathbf{b}_E)$ and $\mathbf{\tilde{x}} = \psi(\mathbf{W}_D\mathbf{c} + \mathbf{b}_D)$, where $\mathbf{x}$, $\mathbf{c}$, and $\mathbf{\tilde{x}}$ denote, respectively, a sample from the input space, its hidden representation (the \textit{code}), and its reconstruction. While $\phi(\cdot)$ is usually implemented as a sigmoid, when the inputs are real-valued vectors the squashing nonlinearity in $\psi(\cdot)$ can be replaced by a linear activation.
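As a concrete illustration, the single-hidden-layer encoder and decoder above can be sketched with NumPy; the dimensions and random weights are toy values, not those used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_code = 8, 3                 # toy input and code dimensions

W_E, b_E = rng.normal(size=(d_code, d_in)), np.zeros(d_code)
W_D, b_D = rng.normal(size=(d_in, d_code)), np.zeros(d_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # c = phi(W_E x + b_E), with phi a sigmoid
    return sigmoid(W_E @ x + b_E)

def decode(c):
    # x_tilde = psi(W_D c + b_D); psi is linear for real-valued inputs
    return W_D @ c + b_D

x = rng.normal(size=d_in)
x_tilde = decode(encode(x))
assert x_tilde.shape == x.shape
```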
Finally, $\mathbf{W}_{E}$ and $\mathbf{W}_{D}$ are the weights and $\mathbf{b}_{E}$ and $\mathbf{b}_{D}$ the biases of the encoder and decoder, respectively. To minimize the discrepancy between the input and its reconstruction, model parameters are learned by minimizing a reconstruction loss \begin{equation} \label{eq:distortion} L_r(\mathbf{x}, \mathbf{\tilde{x}}) = \mathbb{E}\left\{\lVert \mathbf{x} - \mathbf{\tilde{x}} \rVert^{2} \right\} \; . \end{equation} By stacking more hidden layers, an AE can learn more complex representations, transforming the inputs through multiple nonlinear transformations. In its native formulation, an AE processes vectorial data and, therefore, an MTS is flattened into a one-dimensional vector when fed to the AE. Since an AE processes inputs of the same length, missing values are filled in with numeric values. \subsection{Deep Kernelized Autoencoder} A dkAE is trained by minimizing the loss function \begin{equation} \label{eq:cost} L = (1-\lambda) L_r(\mathbf{x}, \mathbf{\tilde{x}}) + \lambda L_c(\mathbf{C}, \mathbf{K}), \end{equation} where $L_r(\cdot, \cdot)$ is the reconstruction loss in Eq.~\ref{eq:distortion} and $\lambda$ is a hyperparameter that balances the contribution of the two cost terms. If $\lambda=0$, $L$ reduces to the traditional AE loss in Eq.~\ref{eq:distortion}. $L_c(\cdot, \cdot)$ is the \textit{code loss} that enforces similarity between two matrices: $\mathbf{K} \in \mathbb{R}^{N \times N}$, the kernel matrix given as prior, and $\mathbf{C} \in \mathbb{R}^{N \times N}$, the inner product matrix of the codes associated with the input data. A depiction of the training procedure is reported in Fig. \ref{fig:kAE_arch}. \begin{SCfigure}[1][t!] \includegraphics[width=0.45\columnwidth, keepaspectratio,trim={0.5cm 0.1cm 0cm 0cm},clip]{kAE_arch} \caption{Schematic illustration of the dkAE architecture. The total loss function $L$ depends on two terms.
First, $L_r(\cdot,\cdot)$, which computes the reconstruction error between the true input $\mathbf{x}_i$ and the output of the dkAE, $\tilde{\mathbf{x}}_i$. The second term, $L_c(\cdot, \cdot)$, is the distance measure between the matrices $\mathbf{C}$ (computed as inner products of the codes $\{ \mathbf{c}_i \}_{i=1}^{N}$) and the target prior kernel matrix $\mathbf{K}$.} \label{fig:kAE_arch} \end{SCfigure} $L_c(\cdot, \cdot)$ can be implemented as the normalized Frobenius distance between $\mathbf{C}$ and $\mathbf{K}$. Each matrix element $C_{ij}$ in $\mathbf{C}$ is given by $C_{ij}=\phi(\mathbf{x}_i) \cdot \phi(\mathbf{x}_j)$ and the code loss reads \begin{equation} \label{eq:regularization} L_c(\mathbf{C}, \mathbf{K}) = \Bigg{\lVert} \frac{\mathbf{C}}{\|\mathbf{C}\|_F} - \frac{\mathbf{K}}{\|\mathbf{K}\|_F} \Bigg{\rVert}_{F}. \end{equation} By minimizing the normalized Frobenius distance from TCK, we indirectly include in the codes the information TCK captures about the missingness patterns, and we improve the quality of the learned codes in the presence of missing data. The dkAE model is trained using mini-batches. Therefore, a training matrix $\mathbf{C}_m$ is generated from the codes associated with the elements in the $m$th mini-batch, and the distance $L_c$ is computed on the submatrix of $\mathbf{K}$ corresponding to the entries in mini-batch $m$. \section{EXPERIMENTS} We analyze blood measurements collected from patients undergoing gastrointestinal surgery at the University Hospital of North Norway in the years 2004--2012. Each patient in the dataset is represented by an MTS of blood samples extracted within $20$ days after surgery. The MTS contain measurements of $10$ variables: alanine aminotransferase, albumin, alkaline phosphatase, creatinine, CRP, hemoglobin, leukocytes, potassium, sodium, and thrombocytes. We focus on a cohort of two classes of patients: those with and without surgical site infections.
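To make the objective of Eq.~\ref{eq:cost} with the code loss of Eq.~\ref{eq:regularization} concrete, it can be sketched with NumPy; the codes and the prior kernel below are random stand-ins for the encoder output and the TCK matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_code = 5, 8, 3

X = rng.normal(size=(N, d_in))                 # inputs (flattened MTS)
X_tilde = X + 0.1 * rng.normal(size=X.shape)   # stand-in reconstructions
codes = rng.uniform(size=(N, d_code))          # stand-in encoder codes
K = np.eye(N)                                  # stand-in for the TCK prior

# L_r: mean squared norm of the reconstruction error (Eq. 1)
L_r = np.mean(np.sum((X - X_tilde) ** 2, axis=1))

# L_c: normalized Frobenius distance between code inner products and K
C = codes @ codes.T
L_c = np.linalg.norm(C / np.linalg.norm(C) - K / np.linalg.norm(K))

lam = 0.5                                      # balances the two terms
L = (1 - lam) * L_r + lam * L_c
```

Note that `np.linalg.norm` on a matrix defaults to the Frobenius norm, so both normalized matrices have unit norm before the difference is taken.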
Dataset labels are assigned according to the International Classification of Diseases and the NOMESCO Classification of Surgical Procedures, relative to patients with severe postoperative complications. Missing data in the MTS correspond to measurements that were not collected for a given patient on one day of the observation period. Patients with fewer than two measurements are excluded from the cohort. We ended up with $883$ MTS, of which $232$ are patients with infections. The first $80\%$ of the dataset is used as the training set and the rest as the test set. The dataset, the code implementing all the methods described in this paper, and a detailed description of the experimental setup are publicly available\footnote{\url{https://github.com/FilippoMB/TCK_AE}}. \subsection{Results} To evaluate the effect of the alignment with the TCK kernel, we compare the classification results obtained on the codes learned by a standard AE and by a dkAE. Missing values are filled with three different imputation techniques: zero imputation (AE-z and dkAE-z), mean imputation (AE-m and dkAE-m), and last-value-carried-forward imputation (AE-l and dkAE-l). The codes are classified by a $k$-NN classifier with $k=3$ and Euclidean distance. We also consider the results yielded in the input space by a $k$-NN with TCK similarity (TCK-i). In Tab. \ref{tab:res} we report the mean and standard deviation of the F1 score and the area under the ROC curve (AUC) on the test set over 10 independent runs. For AE and dkAE we also report the mean squared error (MSE) between the encoder input and the decoder output. A low MSE of the reconstruction not only indicates that a good representation of the input has been learned, but also implies an accurate back-mapping from the code to the input space. The best classification performance is obtained by dkAE with zero imputation, while the worst is obtained by the standard AE with mean imputation.
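The evaluation protocol just described, a $k$-NN with $k=3$ and Euclidean distance on an 80/20 split, can be sketched on synthetic stand-in codes (the real codes come from the trained encoder, and accuracy here stands in for the F1/AUC scores reported in Tab.~\ref{tab:res}):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the learned codes: two noisy, separable classes.
codes = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
labels = np.array([0] * 40 + [1] * 40)
perm = rng.permutation(len(codes))
codes, labels = codes[perm], labels[perm]

# First 80% of the data for training, the rest for testing.
split = int(0.8 * len(codes))
X_tr, y_tr = codes[:split], labels[:split]
X_te, y_te = codes[split:], labels[split:]

def knn_predict(x, k=3):
    """k-NN with Euclidean distance and majority vote."""
    dist = np.linalg.norm(X_tr - x, axis=1)
    nearest = y_tr[np.argsort(dist)[:k]]
    return int(np.round(nearest.mean()))

pred = np.array([knn_predict(x) for x in X_te])
accuracy = (pred == y_te).mean()
```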
For each imputation method, codes learned by dkAE are classified more accurately, and the reconstruction error does not increase even if the codes are aligned with the prior kernel. This demonstrates the importance of embedding into the codes the similarity information yielded by TCK, which captures missingness patterns. Indeed, those patterns are ignored if one relies solely on imputation, whose purpose is to fill missing entries while introducing as little bias as possible. It is interesting to note that the classification in the input space based on TCK similarity is slightly less accurate than the classification in the code space of dkAE. Therefore, dkAE not only yields codes of reduced dimensionality that can be handled more easily and processed faster, but these codes are also discriminated more easily than the inputs themselves by a simple classifier. \bgroup \def\arraystretch{1.1} \setlength\tabcolsep{.2em} \begin{SCtable}[1][!ht] \footnotesize \centering \caption{Reconstruction MSE and classification results of the codes learned by AE and dkAE. We also report the classification results in the input space using TCK as similarity. In AE and dkAE we apply three different imputations: zero imputation (z), mean imputation (m) and last value carried forward (l).
Best results are highlighted in bold.} \label{tab:res} \begin{tabular}{l|ccc} \cmidrule[1.5pt]{1-4} \textbf{Method} & \textbf{MSE} & \textbf{F1} & \textbf{AUC} \\ \cmidrule[.5pt]{1-4} \texttt{AE-z} & 0.103$\pm$0.002 & 0.654$\pm$0.028 & 0.751$\pm$0.018 \\ \texttt{dkAE-z} & 0.096$\pm$0.001 & \textbf{0.748}$\pm$0.017 & \textbf{0.813}$\pm$0.011 \\ \texttt{AE-m} & 0.094$\pm$0.003 & 0.569$\pm$0.035 & 0.703$\pm$0.020 \\ \texttt{dkAE-m} & \textbf{0.091}$\pm$0.001 & 0.690$\pm$0.029 & 0.773$\pm$0.018 \\ \texttt{AE-l} & 0.136$\pm$0.002 & 0.662$\pm$0.010 & 0.764$\pm$0.006 \\ \texttt{dkAE-l} & 0.128$\pm$0.000 & 0.678$\pm$0.026 & 0.763$\pm$0.016 \\ \texttt{TCK-i} & -- & 0.698$\pm$0.021 & 0.776$\pm$0.012 \\ \cmidrule[1.5pt]{1-4} \end{tabular} \end{SCtable} \egroup In Fig. \ref{fig:PCA} we visualize the first two PCA components of the test set, both in the input and in the code spaces. We compute a linear PCA on the codes and on the TCK kernel matrix (this corresponds to computing kernel PCA in the input space using TCK as the kernel). Coloring depends on the ground-truth label, and we observe that the two classes are better separated in the code space of dkAE. Interestingly, in dkAE we notice the same structure yielded by kPCA in the input space with TCK as kernel. This demonstrates how the kernel alignment procedure successfully embeds in the codes the properties of TCK, without compromising the precision of the decoder reconstruction. We underline that by using an AE rather than kPCA we avoid performing a costly eigendecomposition, and we also learn the inverse mapping from the code to the input space, provided by the decoder. \captionsetup{format=side2,font=footnotesize,labelfont=bf,labelsep=period} \begin{figure*}[htp!]
\centering \subfigure { \includegraphics[width=3.5cm, height=2.8cm,trim={2.5cm 1cm 2cm 1.5cm},clip]{TCK_PCA.pdf} } ~ \subfigure { \includegraphics[width=3.5cm, height=2.8cm,trim={2cm 1cm 2cm 1.5cm},clip]{AE-m_PCA.pdf} } ~ \subfigure { \includegraphics[width=3.5cm, height=2.8cm,trim={2.5cm 1cm 2cm 1.5cm},clip]{dkAE-z_PCA.pdf} } \vspace{-4mm} \caption{Projection of the test set on the first two PCA components using (i) kPCA on the input space, (ii) PCA on the AE code space, and (iii) PCA on the dkAE code space. Yellow dots and red triangles represent infected and non-infected patients, respectively.} \label{fig:PCA} \end{figure*} \section{CONCLUSIONS} In this paper, we proposed a novel approach for learning compressed vectorial representations of MTS with missing values, which are common in clinical records. This is achieved by combining a deep kernelized autoencoder with TCK, a similarity measure for MTS that accounts for missingness patterns. We tackled the classification of blood samples from patients with postoperative infections, where data are MTS with a high percentage of missing data. Our results showed that by aligning the codes of the AE to the TCK kernel matrix, we embed into the representation important information about the missingness patterns in the data and improve the classification outcome. \begin{footnotesize} \bibliographystyle{unsrt}
\section{Introduction} Luminous infrared galaxies (LIRGs) with $\rm L_{IR} [8 \hbox{--} 1000\mu m] > 10^{11.5} \; L_\odot$, including ultra-luminous infrared galaxies (ULIRGs: $\rm L_{IR} > 10^{12}\; L_\odot$) are mostly advanced mergers \citep{Sanders1988b, Sanders1996, Scoville2000, Veilleux2002}. They harbor extreme starbursts (star formation rate (SFR) $\rm \, \lower2truept\hbox{${> \atop\hbox{\raise4truept\hbox{$\sim$}}}$}\, 50\; M_\odot\; yr^{-1}$) and sometimes strong active galactic nuclei (AGN), and are among the most luminous objects in the local Universe \citep{Sanders1988b, Genzel1998, Surace1998, Veilleux1999, Scoville2000, Veilleux2009}. Observations and theoretical simulations have shown that mergers can transform spirals to ellipticals \citep{Toomre1977, Schweizer1982, Barnes1990, Genzel2001, Veilleux2002, Dasyra2006}. Gas outflows ubiquitously found in (U)LIRGs \citep{Armus1990, Heckman2000, Walter2002, Rupke2005, Sakamoto2009, Fischer2010, Feruglio2010, Sturm2011, Aalto2012b, Veilleux2013, Cicone2014} may play an important role in quenching the star formation that leads to the formation of red sequence galaxies \citep{Bell2007, Faber2007, Hopkins2008a, Hopkins2013b}. \begin{deluxetable*}{cccccccc} \tabletypesize{\normalsize} \setlength{\tabcolsep}{0.03in} \tablecaption{ALMA Observations \label{tbl:obs}} \tablehead{ {SB} & Date & {Time (UTC)} & {Config} & {$\rm N_{ant}$} & {$\rm l_{max}$} & {$\rm t_{int}$} & {$\rm T_{sys}$}\\ & (yyyy/mm/dd) & & & & (m) & {(min)} & {(K)}\\ {(1)} & {(2)} & {(3)} & {(4)} & {(5)} & {(6)} & {(7)} & {(8)} } \startdata X49990a\_X505 & 2012/08/13 & 11:31:46 -- 12:52:33 & E\&C & 23 & 402&24.7 &537 \\ X4b58a4\_X1ee & 2012/08/28 & 08:58:50 -- 10:23:37 & E\&C & 27 & 402&24.7 &756 \enddata \tablecomments{Column (1) -- schedule-block number; (2) \& (3) -- observation date and time; (4) -- configuration; (5) -- number of antennae; (6) -- maximum baseline length; (7) -- on-target integration time; (8) -- median $\rm T_{sys}$. 
} \end{deluxetable*} Extensive surveys of CO rotation lines in low J transitions such as CO~(1-0) at 2.6 mm and CO~(2-1) at 1.3 mm have found very large concentrations of molecular gas (up to a few times $\rm 10^{10}\; M_\odot$) in the central kpc of (U)LIRGs \citep{Solomon1988, Scoville1989, Sanders1991, Solomon1997, Downes1998, Bryant1999, Gao2001a, Evans2002}. This gas, funneled into the nuclear region by the gravitational torque during a merger \citep{Barnes1996, Hopkins2009a}, provides fuel for the nuclear starburst and/or AGN. However, due to the heavy dust extinction affecting UV/optical/NIR observations and the lack of high angular resolution FIR/sub-mm/mm observations, it is still not very clear how the different constituents (i.e., gas, dust, stars, and black holes) in (U)LIRG nuclei interact with each other. Some studies \citep{Scoville1997, Downes1998, Bryant1999, Gao2001b} suggest that much of the low J CO luminosity may be due to the emission of diffuse gas not closely related to the active star formation regions. Indeed, single dish and interferometric mm and submm observations have found that the intensities and spatial distributions of star formation in (U)LIRGs correlate significantly more strongly with those of higher J CO lines (with upper level J $\geq 3$), which probe warmer and denser gas than low J lines \citep{Yao2003, Iono2004, WangJ2004, Wilson2008, Iono2009, Sakamoto2008, Tsai2012, Sakamoto2013, Xu2014}. This is consistent with results of observations of other dense molecular gas indicators such as HCN lines \citep{Solomon1992, Gao2004a, Narayanan2008, Gracia-Carpio2008, Garcia-Burillo2012}. The multi-J CO observations of \citet{Papadopoulos2012a} indicate that for many (U)LIRGs the global CO spectral line energy distribution (SLED) is dominated by a very warm ($\rm T \sim 100 K$) and dense ($\rm n \geq 10^4\; cm^{-3}$) gas phase.
\citet{Lu2014} found a strong and linear correlation between the mid-J (with upper level J between 5 and 10) luminosity and the $\rm L_{IR}$ in a Herschel SPIRE Fourier Transform Spectrometer (FTS) survey of a large (U)LIRG sample. In order to study the warm dense gas in nuclear regions of (U)LIRGs, we observed the CO~(6-5) line emission (rest-frame frequency = 691.473 GHz) and associated dust continuum emission in two nearby examples, NGC~34 and NGC~1614, using the Band 9 receivers of the Atacama Large Millimeter Array (ALMA; \citealt{Wootten2009}). Both NGC~34 and NGC~1614 were chosen for these early ALMA observations, among the complete sample of 202 LIRGs of the Great Observatories All-sky LIRG Survey (GOALS; \citealt{Armus2009}), because of their close proximity ($\rm D < 85\; Mpc$) and bright CO~(6-5) line fluxes ($\rm f_{CO~(6-5)} \, \lower2truept\hbox{${> \atop\hbox{\raise4truept\hbox{$\sim$}}}$}\, 1000\; Jy\; km\; s^{-1}$) observed in the Herschel SPIRE FTS survey of GOALS galaxies (angular resolution: $\sim 30\arcsec$; \citealt{vanderWerf2010, Lu2014}). This enables high signal-to-noise-ratio ALMA observations of warm gas structures with linear resolutions of $\rm \, \lower2truept\hbox{${< \atop\hbox{\raise4truept\hbox{$\sim$}}}$}\, 100\; pc$ for the given angular resolutions of $\rm \sim 0\farcs25$. Further, both LIRGs have declination angles close to the latitude of the ALMA site, therefore the Band 9 observations are affected by minimal atmospheric absorption when being carried out near transit. In this paper, we present ALMA Cycle-0 observations of the CO~(6-5) line emission and the 435$\mu m$ dust continuum emission in the central kpc of NGC~1614 (also known as Mrk~617 and Arp~186). This LIRG has an infrared luminosity of $\rm L_{IR} = 10^{11.65}\; L_\odot$ \citep{Armus2009} at a distance of 67.8 Mpc ($\rm 1\arcsec = 329\; pc$). 
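The quoted linear scale and beam sizes follow from the small-angle approximation; as a quick numerical cross-check (our own arithmetic, not part of the original analysis):

```python
import math

def pc_per_arcsec(distance_mpc):
    """Linear size subtended by 1 arcsec at a given distance
    (small-angle approximation)."""
    return distance_mpc * 1e6 * math.radians(1.0 / 3600.0)

scale = pc_per_arcsec(67.8)           # ~329 pc per arcsec for NGC 1614
beam_pc = 0.25 * scale                # a ~0.25" beam resolves < 100 pc
major, minor = 0.26 * scale, 0.20 * scale  # 0.26" x 0.20" -> ~86 pc x 66 pc
```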
Most of the current star formation activity is in a circum-nuclear starburst ring \citep{Neff1990, Alonso-Herrero2001, Diaz-Santos2008, Olsson2010}, presumably triggered by a minor merger with a mass ratio of $\gtrsim 4:1$ \citep{Neff1990, Vaisanen2012}. The nucleus itself may harbor a much weaker and older starburst \citep{Alonso-Herrero2001} and a Compton-thick AGN (\citealt{Risaliti2000}, but see \citealt{Olsson2010, Vaisanen2012}). The observations and data reductions are described in Section 2; the results are presented in Section 3; Section 4 and Section 5 are devoted to a discussion and the summary, respectively. All velocities in this paper are in the radio LSR convention. Throughout this paper, we adopt the $\rm \Lambda$-cosmology with $\rm \Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$, and $\rm H_0 = 70\; km\; s^{-1}\; Mpc^{-1}$. \begin{figure*}[!htb] \plottwo{NGC1614_line_int_new.eps}{NGC1614_cont_new.eps} \plottwo{NGC1614_line_mom1_new.eps}{NGC1614_line_mom2_7kms_new.eps} \caption{ {\it Upper-left:} Image and contours of the integrated CO~(6-5) line emission. The contour levels are [1, 2, 4, 8]$\rm \times 2.7\; Jy\; km\; s^{-1}\; beam^{-1}$. {\it Upper-right:} Image of the continuum overlaid by contours of the integrated CO~(6-5) line emission. {\it Lower-left:} The first moment map overlaid by contours of the integrated CO~(6-5) line emission. {\it Lower-right:} The second moment map overlaid by contours of the integrated CO~(6-5) line emission. The white (black) ellipse at the bottom left of each panel shows the synthesized beam size (FWHM = $\rm 0\farcs26\times 0\farcs20$, $\rm P.A.=280^\circ$). All figures have the same size of $\rm 4\arcsec \times 4\arcsec$, and $\rm 1\arcsec = 329\; pc$. The cross in the center of each map marks the position of the radio nucleus in the 5~GHz (MERLIN) map \citep{Olsson2010}. 
} \label{fig:images} \end{figure*} \begin{figure*}[!htb] \plotone{NGC1614_channel_20kms_nocont_new.eps} \caption{CO~(6-5) line emission contours of the channel maps (the velocity channel width = $\rm 20.4\; km\; s^{-1}$), and overlaid on the integrated emission map. The contour levels are $\rm 21\; mJy\; beam^{-1} \times$ [1, 2, 4, 8, 16]. All maps have the same size of $\rm 4\arcsec\times 4\arcsec$. In each panel, the central velocity of the channel is given. (The system velocity of NGC~1614 is $\rm 4723\; km\; s^{-1}$.)} \label{fig:channel} \end{figure*} \section{Observations} We observed the central region of NGC~1614 in CO~(6-5) line emission and 435~$\mu m$ dust continuum emission using the Band 9 receivers of ALMA in the time division mode (TDM; velocity resolution: 6.8 km~sec$^{-1}$). The four basebands (i.e. ``Spectral Windows'', hereafter SPWs) were centered at the sky frequencies of 680.539, 682.308, 676.826 and 678.764 GHz, respectively, each with a bandwidth of 2 GHz. Observations were carried out in the extended \& compact (E\&C) configuration using up to 27 antennae (Table~\ref{tbl:obs}). The total on-target integration time was 50.4 minutes. During the observations, phase and gain variations were monitored using QSO 0423-013. Observations of the minor planet Ceres were made for the flux calibration. The error in the flux calibration was estimated to be 17\%. The final data reduction was done using CASA 4.1.0. Images were cleaned using Briggs weighting. Both phase and amplitude self-calibrations have been carried out. The primary beam is $\sim 8\arcsec$. However, emission features larger than $\sim 3\arcsec$ are poorly sampled because of limited uv-coverage for short baselines. Two data sets were generated from the observations. 
In the first data set, the CO~(6-5) line data cube was generated using data in SPW-0 (sky-freq = 680.539$\pm 1$ GHz), which encompasses the CO~(6-5) line emission at the systemic velocity ($\rm 4723\; km\; s^{-1}$) with an effective bandpass of $\rm \sim 800\; km\; s^{-1}$. The continuum was estimated using data in the other three SPWs. In the second data set, the CO~(6-5) line data cube was generated using three SPWs: SPW-0, SPW-1 and SPW-3. The bandwidth of this CO~(6-5) line data cube is $\rm \sim 2400\; km\; s^{-1}$. For the second data set, the continuum was estimated using data in SPW-2 (sky-freq = 676.826$\pm 1$ GHz). The first data set has better continuum estimation and subtraction, and was therefore used for most of the analysis; the second data set was used to search for evidence of molecular outflows or nuclear $\rm H^{13}CN$~(8-7) line emission. The continuum subtraction was carried out using the CASA task {\it UVcontsub}. For the first data set, the channel maps of the CO~(6-5) line emission have 1-$\sigma$ rms noise of $\rm 8.0\; mJy\; beam^{-1}$ per $\rm 6.8\; km\; s^{-1}$, and for the continuum it is $\rm 0.6\; mJy\; beam^{-1}$. The CO~(6-5) line emission map, integrated over the LSR velocity range between $\rm v = 4447.4$ and $\rm 4894.5\; km\; s^{-1}$ ($\rm \delta v = 447.1\; km\; s^{-1}$), has 1-$\sigma$ rms noise of $\rm 0.90\; Jy\; beam^{-1} \; km\; s^{-1}$. The synthesized beams of these maps are nearly identical, having FWHMs of $\rm 0\farcs26\times 0\farcs20$, corresponding to physical scales of $\rm 86\; pc \times 66\; pc$, and a P.A. of $280^\circ$. The absolute pointing accuracy of these ALMA observations is on the order of $0\farcs1$. \begin{figure}[!htb] \epsscale{1.1} \plotone{N1614_histo_spectra_multi4.eps} \caption{ {\bf Panel a:} Spectrum of the CO~(6-5) line emission in the velocity domain, measured in the channel maps with an aperture of $\rm radius=1\farcs5$. The dashed lines mark the 1-$\sigma$ noise boundaries.
{\bf Panel b:} Zoom-in of the bottom part of {\it Panel a}. Again the dashed lines mark the 1-$\sigma$ noise boundaries. The arrow marks the expected location of the $\rm H^{13}CN$~(8-7) line at the systemic velocity of $\rm v = 4723\; km\; s^{-1}$. {\bf Panel c:} Spectrum at the peak position ($\rm ra=04:34:00.006$, $\rm dec=-08:34:44.47$) of the integrated CO~(6-5) line emission map. In order to show it more clearly, the flux is scaled up arbitrarily. {\bf Panel d:} Zoom-in of the bottom part of {\it Panel c}. } \label{fig:spectra} \end{figure} \section{Results} \subsection{The CO~(6-5) Line Emission}\label{sect:CO} In Figure~\ref{fig:images} we present images of the integrated CO~(6-5) line emission, the continuum at 435~$\mu m$, the first moment map, and the second moment map. All images are overlaid by the same contours of the integrated CO~(6-5) line emission at levels of [1, 2, 4, 8]~$\rm \times 2.7\; Jy\; beam^{-1} \; km\; s^{-1}$. The line and the continuum emissions correlate closely with each other, both showing a ring configuration without a detectable nucleus. The ring has a diameter of $\sim 2\arcsec$ ($\rm \sim 650\; pc$). In both the line and the continuum maps, the ring looks clumpy and can be decomposed into several knots. The first moment map shows a clear velocity gradient along the south-north direction, consistent with an inclined rotating ring. According to \citet{Olsson2010}, the inner disk of NGC~1614 has an inclination angle of $\rm 51^\circ$. In the second moment map, the velocity dispersion in most regions in the ring is rather constant at the level of $\rm \delta v \sim 40\; km\; s^{-1}$, though in some inter-knot regions it can be as low as $\rm \delta v \sim 20\; km\; s^{-1}$. From the first moment map, the velocity gradient due to the rotation can be estimated to be $\rm dV/dr \sim 0.3 \; km\; s^{-1}\; pc^{-1}$, corresponding to a line widening of $\sim 20\; km\; s^{-1}$ within individual beams (linear size: $\sim 80\; pc$). 
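As a back-of-the-envelope check of this beam-smearing estimate (our own arithmetic, with the values quoted above):

```python
dv_dr = 0.3        # km/s/pc, rotation gradient from the first moment map
beam_size = 80.0   # pc, approximate linear size of the synthesized beam
# Velocity spread introduced across one beam by pure rotation:
widening = dv_dr * beam_size   # ~24 km/s, of order the quoted ~20 km/s
```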
This is consistent with the lowest velocity dispersion seen in the second moment map. The channel maps ($\rm \delta v = 20.4\; km\; s^{-1}$) are shown in Figure~\ref{fig:channel}, overlaid on the image of the integrated line emission. They provide more details about the rotating ring. First of all, given the relatively narrow local velocity dispersions (see the second moment map in Figure~\ref{fig:images}), the channel maps dissect the ring spatially. It appears that the spatial widths of ring segments in individual channel maps are generally broader than in the integrated emission map. This is because, by co-adding all channel maps, the integrated emission map is affected more severely by the (negative) sidelobes of different segments of the ring. This is a significant effect because some ring segments are separated by $\rm \sim 3\arcsec$, the angular scale limit of our interferometer observations. Indeed, the total flux of the CO~(6-5) line emission, $\rm f_{CO~(6-5)} = 898\; (\pm 153) \; Jy\; km\; s^{-1}$ (the error being dominated by the calibration uncertainty), which is derived from the sum of the aperture photometry of individual channels (centered on the emission features for each given channel), is 31\% higher than that measured on the integrated CO~(6-5) line emission map. Comparison with the Herschel measurement of the integrated CO~(6-5) line emission of NGC~1614 ($\rm 1423\pm 126\; Jy\; km\; s^{-1}$ within a beam of $\sim 30\arcsec$; \citealt{vanderWerf2010, Lu2014}) yields an interferometer-to-single-dish flux ratio of $0.63\pm 0.12$. This suggests that most warm dense gas in NGC~1614 is concentrated in the circum-nuclear ring. Figure~\ref{fig:spectra} shows plots of the velocity distributions of the central region ($\rm radius= 1\farcs5 \simeq 500\; pc$) and of the peak position ($\rm RA=04{^h}34{^m}00{\fs}006$, $\rm Dec=-08{\degr}34{\arcmin}44{\farcs}47$) of the integrated CO~(6-5) line emission map.
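The interferometer-to-single-dish ratio quoted above follows from standard error propagation on the two flux measurements; a short sketch of the arithmetic (ours, not the authors' pipeline):

```python
import math

def ratio_with_error(a, sigma_a, b, sigma_b):
    """Ratio a/b with its uncertainty from standard error propagation
    (relative errors added in quadrature)."""
    r = a / b
    return r, r * math.hypot(sigma_a / a, sigma_b / b)

# ALMA vs. Herschel CO(6-5) fluxes in Jy km/s:
r, dr = ratio_with_error(898.0, 153.0, 1423.0, 126.0)  # ~0.63 +/- 0.12
```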
In order to reduce the noise, we used relatively broad bins of $\rm \delta v = 34\; km\; s^{-1}$. The velocity distribution has a FWHM of $\rm 272\; km\; s^{-1}$. Its shape is rather irregular and spiky, reflecting the clumpiness of the rotating ring and the narrow velocity dispersions of individual clumps (Figure~\ref{fig:images}). No evidence for any outflow/inflow with $\rm |\delta v| < 1200\; km\; s^{-1}$, nor any detection of the $\rm H^{13}CN$~(8-7) line (rest-frame frequency = 690.552 GHz), can be found in the spectrum. In the velocity distribution of the peak position, which is in the north-western quadrant of the ring (Figure~\ref{fig:images}), we also found no evidence of outflow/inflow or of the $\rm H^{13}CN$~(8-7) line emission. \begin{figure}[!htb] {\hspace{-0.8truecm}\epsfig{figure=dust_mass_2bb.ps, angle=90, scale=0.4}} \caption{SED fitting of the dust emission in NGC~1614 by a 2-graybody model.} \label{fig:dust_sed} \end{figure} \begin{figure}[!htb] \epsscale{1.15} \plottwo{co65_21cont_new.eps}{co65_21ratio_phots_new.eps} \caption{ {\bf Left:} Comparison between the integrated CO~(6-5) line emission map (resolution: $0\farcs26\times 0\farcs20$) and integrated CO~(2-1) line emission contours (resolution: $0\farcs50\times 0\farcs44$; \citealt{Konig2013}). {\bf Right:} Contours of integrated CO~(2-1) line emission overlaid on the image of the ratio between integrated CO~(6-5) line emission and integrated CO~(2-1) line emission, with both being smoothed to a common beam (the convolution of the two original beams). Signals in both maps are in the same units of $\rm K\; km\; s^{-1}$. } \label{fig:compCO21} \end{figure} \begin{figure*}[!htb] \epsscale{1.1} \plotone{CO_radio_comp2_new1.eps} \caption{Comparison between contours of the integrated CO~(6-5) line emission and images of the total 8.4 GHz radio continuum (left), the nonthermal radio component (middle), and the thermal radio component (right).
Positions of CO~(6-5) knots listed in Table~\ref{tbl:knots} are marked by corresponding numbers in the left panel. } \label{fig:sfr} \end{figure*} \subsection{The 435~$\mu m$ Continuum Emission}\label{sec:dust} The flux density of the 435~$\mu m$ continuum measured by ALMA is $\rm f_{435\mu m}=269\pm 46\; mJy$. The continuum correlates spatially with the CO~(6-5) emission in the central kpc of NGC~1614 (Figure~\ref{fig:images}). This suggests that dust heating and gas heating in the warm dense gas cores are strongly coupled, a conclusion also reached by \citet{Lu2014} in a Herschel FTS study of the CO SLED of LIRGs. NGC~1614 was observed by Herschel-SPIRE \citep{Griffin2010} both in the photometry mode (Chu et al., in preparation) and in the FTS mode \citep{vanderWerf2010, Lu2014}, with beams of $\rm \sim 30\arcsec$. Because the error of the continuum measured in the FTS mode is large ($\rm \sim 1\; Jy$), we estimated the total flux of the 435~$\mu m$ continuum of NGC~1614 using SPIRE photometer fluxes $\rm f_{350\mu m, SPIRE}=1916\pm 134\; mJy$ and $\rm f_{500\mu m, SPIRE}=487\pm 34\; mJy$. Assuming a power-law spectrum for the dust continuum (i.e. $\rm \log f_\nu$ depending on $\rm \log \nu$ linearly), we carried out linear interpolation in the logarithmic domain of the flux and of the frequency between 350 and 500~$\mu m$, and found $\rm f_{435\mu m, SPIRE}=831\pm 58\; mJy$. The ratio between $\rm f_{435\mu m}$ and $\rm f_{435\mu m, SPIRE}$ then yields an interferometer-to-single-dish flux ratio of $\rm 0.32\pm 0.06$. This is a factor of $\sim 2$ lower than the interferometer-to-single-dish flux ratio of the line emission, indicating that the distribution of dust is substantially more extended than that of the warm dense gas. The total dust mass in NGC~1614 can be estimated using the mid- and far-IR fluxes in the Spitzer/MIPS 24~$\mu m$ band and in the Herschel 70, 100, 160, 250, 350 and 500~$\mu m$ bands. The Herschel data are taken from Chu et al. 
(in preparation). A least-squares fit to the IR SED by a 2-graybody model, with the emissivity spectral index $\rm \beta = 2$ for both components, yields a total dust mass of $\rm M_{dust, total} = 10^{7.60\pm 0.07}\; M_\odot$ with a cold dust temperature of $\rm T_C = 35\pm 2 K$ (Figure~\ref{fig:dust_sed}). Fits by 2-graybody models with $\rm \beta$ as a free parameter or by the model of \citet{DL07} yield very similar results. If dust in the central region has the same $\rm T_C$, then $\rm M_{dust, cent} = f_{435\mu m, ALMA}/ f_{435\mu m, SPIRE} \times M_{dust, total}$. Taking into account the uncertainties due to the assumption about the cold dust temperature ($\sim 50\%$), the dust mass in the central region observed by ALMA is $\rm M_{dust, cent} = 10^{7.11\pm 0.20}\; M_\odot$. NGC~1614 has a metallicity of $\rm 12+log(O/H) = 8.65\pm 0.10$ \citep{Armus1989, Vacca1992, Engelbracht2008, Modjaz2011}. According to \citet{Remy-Ruyer2014}, for galaxies with $\rm 12+log(O/H)> 8.5$, the gas-to-dust ratio is 100 with a 1-$\sigma$ uncertainty of $\sim 0.2$~dex. Therefore, assuming $\rm M_{gas}/M_{dust} = 10^{2.0\pm 0.2}$, the gas mass in the central region of NGC~1614 is $\rm M_{gas, cent} = 10^{9.11\pm 0.30}\; M_\odot$. This is consistent with the molecular gas mass (which should dominate the total gas mass) found in the same region ($\rm M_{gas, cent} = 10^{9.30}\; M_\odot$, with a conversion factor of $\rm X_{CO} = 3\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$; \citealt{Konig2013}). It should be pointed out that both the ALMA continuum observations and the SMA observations of CO~(2-1) by \citet{Konig2013} detected mostly dust and gas emission in dense gas structures and missed a significant fraction of the diffuse emission; therefore, the dust and gas masses derived from these observations are lower limits. There has been a debate on whether there is an AGN in NGC~1614.
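Before turning to the question of an AGN, the flux interpolation and mass scalings above can be reproduced numerically (our own arithmetic, using only the quoted values):

```python
import math

def powerlaw_interp(lam, lam1, f1, lam2, f2):
    """Interpolate a flux density at wavelength `lam`, assuming a power
    law (linear in log flux vs. log wavelength) through two points."""
    slope = math.log10(f2 / f1) / math.log10(lam2 / lam1)
    return f1 * (lam / lam1) ** slope

# SPIRE fluxes at 350 and 500 um (mJy) bracket the ALMA 435 um band:
f435_spire = powerlaw_interp(435.0, 350.0, 1916.0, 500.0, 487.0)  # ~831 mJy
flux_ratio = 269.0 / f435_spire                                   # ~0.32

# Scale the total dust mass by the recovered-flux fraction, then apply
# a gas-to-dust ratio of 100 (i.e., +2.0 dex):
log_Mdust_cent = 7.60 + math.log10(flux_ratio)  # ~7.11 (log Msun)
log_Mgas_cent = log_Mdust_cent + 2.0            # ~9.11 (log Msun)
```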
\citet{Risaliti2000} argued, based on the detection of a hard X-ray source and its spectrum, that in the center of NGC~1614 there is a hidden AGN obscured by Compton-thick gas ($\rm N_{H} > 1.5\times 10^{24}\; cm^{-2}$; \citealt{Comastri2004}). However, such high column density gas in the nucleus, which should not be affected by the missing flux issue, is not detected in either the CO~(6-5) map or the dust continuum map. Using the scaling factor between the gas mass and the continuum flux derived above, the non-detection of the continuum in the nucleus ($\rm \sigma = 0.6\; mJy\; beam^{-1}$) sets a 3-$\sigma$ upper-limit for the gas surface density of $\rm N_{H} = 10^{23.1\pm 0.3}\; cm^{-2}$. Since an AGN cannot hide from the dust that absorbs its UV/optical/NIR radiation and re-emits it in the FIR, our results rule out this possibility with relatively high confidence. Indeed, a Compton-thick torus with a radius of $\rm r=20\; pc$ \citep{Garcia-Burillo2014}, which fills 23\% of the ALMA beam, should be detectable in the continuum with a signal-to-noise ratio of $\rm s/\sigma\geq 7$. The $\rm s/\sigma$ could be even higher since the $\rm T_d$ in a torus is likely much warmer than the assumed dust temperature of $\rm T_d= 35\; K$. The high resolution MIR L-band observations of \citet{Vaisanen2012} also argue against a Compton-thick AGN in NGC~1614. As pointed out by \citet{Olsson2010} and \citet{Herrero-Illana2014}, the X-ray source detected by \citet{Risaliti2000} could be explained by low-mass X-ray binaries. \section{Discussion} \subsection{Comparison with Previous CO Observations}\label{sect:co_comp} There is a rich literature on molecular line observations in the submm and mm bands for NGC~1614 \citep{Young1986, Solomon1988, Scoville1989, Sanders1991, Casoli1991, Gao2004b, Albrecht2007, Wilson2008, Olsson2010, Costagliola2011, Konig2013, Imanishi2013}.
The single dish CO~(1-0) observations of \citet{Sanders1991} found a total molecular gas mass of $\rm 10^{10.12}\; M_\odot$, assuming a standard conversion factor of $\rm X_{CO} = 3\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$ and a distance of $\rm D=67.8\; Mpc$. This is consistent with the result of \citet{Casoli1991}, but significantly larger than those obtained in earlier and less sensitive observations \citep{Young1986, Solomon1988}. The OVRO observations of \citet{Scoville1989}, with a beam of $4\arcsec \times 6\arcsec$, attributed $\rm 30\%$ of the total CO~(1-0) emission to a nuclear region of $\rm radius = 1\; kpc$. The more recent and higher resolution ($2\farcs75 \times 2\farcs40$) observations of \citet{Olsson2010} resolved the central CO~(1-0) line emission into an arc-like feature $\rm \sim 3\; kpc$ in length and $\rm \sim 1.3\; kpc$ in width, but did not resolve the ring. The SMA map of CO~(3-2) (beam $=2\farcs6\times 2\farcs1$; \citealt{Wilson2008}) and the ALMA maps of HCN/HCO$^+$/HNC~(4-3) (beam $=1\farcs5\times 1\farcs3$; \citealt{Imanishi2013}) also did not resolve the ring. Before our ALMA observations, the best angular resolution for any CO rotation line was obtained by \citet{Konig2013} in their SMA observations of the CO~(2-1) line, with a beam of $\rm 0\farcs50\times 0\farcs44$. In Figure~\ref{fig:compCO21} we compare their CO~(2-1) map with our CO~(6-5) map. In the ring the two maps have good correspondence, though the CO~(6-5) emission looks clumpier, most likely due to the better angular resolution. \citet{Konig2013} noticed a strong asymmetry in the CO~(2-1) distribution between the eastern and western sides of the ring, and interpreted it as a consequence of the feeding of the ring by the dust lane on the north-west of the ring. In the CO~(6-5) map, we still see this asymmetry, albeit less prominent than in CO~(2-1).
When smoothed to a common beam, the ratio between the two emissions is rather constant in most regions of the ring, with a median brightness temperature ratio of 0.72 (Figure~\ref{fig:compCO21}). The east quadrant of the ring has the highest brightness temperature ratio ($\sim 1$). This could be due, at least partially, to a slight mismatch between the two maps, given the steep gradient in both maps in this region. On the other hand, this seems to be consistent with the stronger east-west asymmetry seen in the CO~(2-1) map than in the CO~(6-5) map. \citet{Konig2013} argued that the reason for the asymmetry could be the feeding of the ring by a dust lane on the northwest of the ring. In this scenario, the east quadrant has a higher CO~(6-5)/CO~(2-1) ratio than the west quadrant because it contains less diffuse gas (freshly fed by the dust lane). The nucleus is not detected in either map. The CO~(6-5) 3-$\sigma$ upper-limit of 742 $\rm M_\odot\; pc^{-2}$ ($\rm N_{H} = 10^{22.86}\; cm^{-2}$) for the surface density of the warm dense gas, derived by assuming the same relation between $\rm \Sigma_{Gas}$ and CO~(6-5) surface brightness in the ring region (Eq~\ref{eq:gas}), is consistent with the upper-limit set by the 435~$\mu m$ continuum. If the conversion factor advocated by \citet{Downes1998} for (U)LIRGs is used, which is a factor of $\sim 6$ lower than the standard value adopted in Eq~\ref{eq:gas}, the result is a significantly lower value for the upper limit of $\rm \Sigma_{Gas}$ in the nucleus. In the region north of the ring, where significant CO~(2-1) emission is found, little CO~(6-5) emission is detected and the brightness temperature ratio is $<0.1$. \begin{deluxetable*}{cccccccc}[ht] \tabletypesize{\normalsize} \setlength{\tabcolsep}{0.05in} \tablecaption{CO~(6-5) Knots in Circum-Nuclear Starburst Ring \label{tbl:knots}} \tablehead{ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline ID & R.A. & Dec.
& $\rm S_{CO~(6-5)}$ & $\rm S_{435\mu m}$ & $\rm \log (\Sigma_{Gas})$& $\rm \log (\Sigma_{SFR})$ & Notes \\ \hline & (J2000) & (J2000) & ($\rm Jy\; km\; s^{-1}\; beam^{-1}$) & ($\rm mJy\; beam^{-1})$ & ($\rm M_\odot\; pc^{-2})$ & ($\rm M_\odot\; yr^{-1}\; kpc^{-2}$) & } \startdata 1 & 04:34:00.006 & -08:34:44.49 & 34.0$\pm 6.1$ & 19.6$\pm 3.5$ & 3.94 & 2.49 & 4 \\ 2 & 04:33:59.991 & -08:34:44.86 & 27.2$\pm 4.9$ & 13.8$\pm 2.5$ & 3.87 & 2.40 & 6 \\ 3 & 04:33:59.981 & -08:34:45.10 & 25.5$\pm 4.6$ & 12.1$\pm 2.2$ & 3.91 & 2.33 & 6 \\ 4 & 04:33:59.998 & -08:34:45.52 & 25.5$\pm 4.6$ & 11.7$\pm 2.1$ & 3.88 & 2.41 & 7 \\ 5 & 04:34:00.015 & -08:34:45.95 & 25.7$\pm 4.6$ & 7.7$\pm 1.4$ & 3.87 & 2.14 & 8 \\ 6 & 04:34:00.069 & -08:34:45.29 & 22.8$\pm 4.1$ & 10.3$\pm 1.9$ & 3.81 & 2.36 & 10 \\ 7 & 04:34:00.077 & -08:34:45.04 & 23.6$\pm 4.2$ & 8.2$\pm 1.5$ & 3.84 & 2.00 & 10 \enddata \tablecomments{ Column (4) -- CO~(6-5) peak flux; (5) -- continuum peak flux; (6) -- peak molecular gas surface density, after smoothing to the beam of the 8.4 GHz observations (0\farcs41$\times$0\farcs26); (7) -- peak SFR density, derived using the flux of the nonthermal radio component at 8.4 GHz; (8) -- corresponding GMA in \citet{Konig2013}. } \end{deluxetable*} \subsection{Relation Between Warm Dense Gas and Star Formation} Most star formation in NGC~1614 is occurring in the circum-nuclear starburst ring. \citet{Soifer2001} found that 72\% of the 12$\mu m$ flux of NGC~1614 measured by IRAS is contained within the 2{\arcsec} beam of their Keck observations. The comparison between the high resolution 8.4 GHz map of the central kpc region \citep{Herrero-Illana2014} and a lower resolution map at the same frequency by \citet{Schmitt2006} indicates that $\sim 67\%$ of the total SFR is contributed by the starburst ring and the nucleus.
The star formation in the ring has been studied by \citet{Alonso-Herrero2001} using an HST Pa~$\alpha$ map, \citet{Diaz-Santos2008} using a Gemini 8$\mu m$ map, \citet{Vaisanen2012} using a 3.3$\mu m$ polycyclic aromatic hydrocarbon (PAH) map obtained with UIST (an imager-spectrometer for integral field spectroscopy) at UKIRT, and by \citet{Olsson2010} and \citet{Herrero-Illana2014} using VLA and MERLIN radio continuum maps. While the NIR and MIR observations may still be affected by the dust obscuration associated with the dense gas \citep{Imanishi2013}, the radio continuum is an SFR indicator insensitive to dust obscuration. In the left panel of Figure~\ref{fig:sfr}, we compare the CO~(6-5) contours with the radio continuum at 8.4 GHz (beam = $\rm 0\farcs41\times 0\farcs26$, \citealt{Herrero-Illana2014}). The CO~(6-5) knots in the ring (Table~\ref{tbl:knots}) are marked in the image. While there is a radio nucleus, the CO map has a hole at the ring's center. \citet{Herrero-Illana2014} argued that the radio nucleus is not an AGN, which is consistent with our conclusion that there is no (Compton-thick) AGN in NGC~1614 (Section~\ref{sec:dust}). In the other two panels of Figure~\ref{fig:sfr}, the CO~(6-5) emission is compared to the thermal and nonthermal radio emission components, respectively. Following \citet{Herrero-Illana2014}, the thermal radio emission is estimated using a high-resolution, extinction-corrected Pa-${\alpha}$ map obtained from a set of HST NIR narrow- and broad-band images. The method involves the comparison of a Pa-${\alpha}$ equivalent width map and an NIR color (F160W/F222M) image with stellar population synthesis models (Starburst99; \citealt{Leitherer1999}) to derive a spatially-resolved dust obscuration map with which to correct the original Pa-${\alpha}$ image (see also \citealt{Diaz-Santos2008}). The details of this procedure can be found in the Appendix.
As a final step, the nonthermal component is derived by subtracting the thermal component from the total radio emission. The thermal fraction of the radio continuum at 8.4 GHz is found to be 51\% in the ring region ($\rm 100 < r < 350$~pc). In Figure~\ref{fig:comp_co65_10_radio}, the radial profiles of these emissions are compared. The peak of the radial distribution of the thermal radio is shifted (by $\sim 70$~pc) toward smaller radii compared to that of the CO~(6-5) radial distribution. The peak of the distribution of the total radio emission also has a small offset compared to that of the CO~(6-5), while the radial profile of nonthermal radio emission is similar to that of the CO~(6-5) in the ring region. The radial profile of CO~(2-1) \citep{Konig2013} is also shown in the same plot. If we compare the CO~(2-1) profile with the profile of CO~(6-5), which has been smoothed to the same resolution as the CO~(2-1) map, we see that the former is significantly more extended than the latter. Beyond $\rm r \sim 400\; pc$, the cold and diffuse molecular gas probed by CO~(2-1) emission is devoid of any significant star formation (as revealed by the profiles of the radio emissions). \begin{figure}[!htb] \epsscale{1.25} \plotone{plot_profiles_new2.ps} \caption{Comparison between normalized radial profiles of the CO~(6-5), total radio continuum at 8.4 GHz, nonthermal radio component, thermal radio component, and CO~(2-1). The arrows at $\rm r=0.1$~kpc show the 3$\sigma$ upper-limits of CO~(6-5) and CO~(2-1) in the central hole. } \label{fig:comp_co65_10_radio} \end{figure} \begin{figure}[!htb] \epsscale{1.2} \plotone{KS_plot_sub2_totalR3_avg1.ps} \caption{ Plot of $\rm log \Sigma_{SFR}$ versus $\rm log \Sigma_{Gas}$.
For individual cells in the NGC~1614 ring: $\rm log \Sigma_{SFR,th}$ (red diamonds), $\rm log \Sigma_{SFR,nth}$ (blue crosses), and $\rm log \Sigma_{SFR,total}$ (black dots) are estimated using the thermal, nonthermal, and total radio maps, respectively; and $\rm log \Sigma_{Gas}$ is estimated from the CO~(6-5) map that is smoothed and regridded to match the radio maps. For the NGC~1614 nucleus (open squares): the 3-$\sigma$ upper-limit for $\rm \Sigma_{Gas}$ was derived using the CO~(6-5) map assuming the same relation as for the ring region (Eq~\ref{eq:gas}). The average for the NGC~1614 ring (black solid square): data taken from Table~\ref{tbl:comparison}. Nuclear starburst in NGC~34 (black solid circle): data taken from Table~\ref{tbl:comparison}. The shaded area (in green color) represents the data for local starbursts in the sample of \citet{Kennicutt1998b}. } \label{fig:ksplot} \end{figure} In Figure~\ref{fig:ksplot} we plot the SFR surface density ($\rm \Sigma_{SFR}$) vs. the gas surface density ($\rm \Sigma_{Gas}$) (i.e. the Kennicutt-Schmidt law) for the nuclear starburst and individual cells ($3\times 3$ pixels, pixel = 0\farcs{089}) in the ring, using the thermal and nonthermal maps to derive $\rm \Sigma_{SFR}$ and the CO~(6-5) map (smoothed and regridded to match the radio maps) to obtain $\rm \Sigma_{Gas}$. The SFR can be estimated from the nonthermal and thermal radio luminosities using two formulae given in \citet{Murphy2012}, respectively: \begin{equation} \rm \left( {SFR^{nth}_\nu\over M_\odot\; yr^{-1}} \right) = 6.64\times 10^{-29}\left( {\nu\over GHz}\right)^{\alpha^{nth}}\left( {L^{nth}_\nu\over erg\; s^{-1}Hz^{-1}}\right). \label{eq:nth} \end{equation} and \begin{eqnarray} \rm \left( {SFR^{th}_\nu\over M_\odot\; yr^{-1}} \right) & = & \rm 4.6\times 10^{-28}\left({T_e\over 10^4 \; K}\right)^{-0.45} \times \nonumber \\ & &\rm \left({\nu\over GHz}\right)^{0.1} \left({L^{th}_\nu\over erg\; s^{-1}\; Hz^{-1}}\right).
\end{eqnarray} where $\rm T_e = 10^4\; K$, $\nu = 8.4\; GHz$, and $\rm \alpha^{nth}=1.2$ \citep{Herrero-Illana2014}. For individual cells in the ring, we also plotted in Figure~\ref{fig:ksplot} the $\rm \Sigma_{SFR}$ vs. $\rm \Sigma_{Gas}$ relation with the $\rm \Sigma_{SFR}$ estimated from the total radio emission, assuming a constant nonthermal fraction ($\rm f_{nth}=0.5$) and the SFR vs. $\rm L^{nth}$ relation in Eq~\ref{eq:nth}. The gas surface density was estimated from the CO~(6-5) surface brightness as follows. According to \citet{Konig2013}, the total $\rm H_2$ mass in the ring is $\rm M_{H_2} = 10^{8.97}\; M_\odot$ (for D=67.8~Mpc), estimated using the CO~(1-0) map of \citet{Olsson2010} and assuming a conversion factor of $\rm 3\times 10^{20}\; cm^{-2}\; (K\; km\; s^{-1})^{-1}$. Dividing this by the integrated CO~(2-1) flux of the ring, $\rm S_{CO(2-1)} = 65.4\pm 6.9\; Jy\; km\; s^{-1}$ \citep{Konig2013}, and assuming a brightness temperature ratio of 0.72 between CO~(6-5) and CO~(2-1) (Figure~\ref{fig:compCO21}), we have: \begin{equation} \rm \left( {\Sigma_{Gas}\over M_\odot\; pc^{-2}} \right) = 20.3\times \left( {f_{CO(6-5)}\over Jy\; arcsec^{-2} \; km\; s^{-1}} \right). \label{eq:gas} \end{equation} In the ring region, only cells that are detected in both radio and CO~(6-5) maps above a 3-$\sigma$ threshold are plotted (therefore the random errors are $\rm <0.12$ dex for these data points). In the nuclear region ($\rm r < 100$~pc), the 3-$\sigma$ upper-limit for $\rm \Sigma_{Gas}$ was derived using the CO~(6-5) map assuming the same relation as for the ring region (Eq~\ref{eq:gas}). For individual cells in the ring, the $\rm \Sigma_{SFR}$ vs. $\rm \Sigma_{Gas}$ relation is systematically above that for local starbursts \citep{Kennicutt1998b}, indicating a higher star formation efficiency (SFE).
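The two SFR calibrations and the gas conversion above map directly onto three one-line functions. A sketch, with the constants taken from Eq~\ref{eq:nth}, the thermal calibration, and Eq~\ref{eq:gas}; the example luminosity is hypothetical:

```python
def sfr_nonthermal(l_nth, nu_ghz=8.4, alpha_nth=1.2):
    """Nonthermal radio SFR calibration (Murphy et al. 2012); L in erg/s/Hz."""
    return 6.64e-29 * nu_ghz**alpha_nth * l_nth

def sfr_thermal(l_th, nu_ghz=8.4, t_e=1.0e4):
    """Thermal radio SFR calibration (Murphy et al. 2012); L in erg/s/Hz,
    electron temperature t_e in K."""
    return 4.6e-28 * (t_e / 1.0e4) ** -0.45 * nu_ghz**0.1 * l_th

def sigma_gas(f_co65):
    """Gas surface density (Msun/pc^2) from the CO(6-5) surface brightness
    in Jy km/s arcsec^-2, following Eq. (gas)."""
    return 20.3 * f_co65

# Hypothetical cell: a thermal luminosity of 1e28 erg/s/Hz at 8.4 GHz
sfr_th_example = sfr_thermal(1.0e28)
```

Per-cell maps of $\rm \Sigma_{SFR}$ and $\rm \Sigma_{Gas}$ then follow by applying these functions pixel-by-pixel to the matched radio and CO~(6-5) images.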
This is because, by relating the SFR to the warm dense gas probed by the high resolution ALMA observations of CO~(6-5), much of the cold diffuse gas probed by low-J CO (more extended than the warm dense gas) is excluded from $\rm \Sigma_{Gas}$ in our results. It is worth noting that we used a standard CO conversion factor ($\rm X_{CO} = 3\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$) for the NGC~1614 data. In the literature, arguments for high SFE in (U)LIRGs are very often based on results obtained using a CO conversion factor $\sim 5$ times lower than the standard value \citep[e.g.,][]{Daddi2010, Genzel2010}. It appears that, on the linear scale of 100 pc, the tight correlation previously found between $\rm \Sigma_{SFR}$ and $\Sigma_{Gas}$ \citep{Kennicutt1998b, Genzel2010, Leroy2013, Yao2003, Iono2004, Wilson2008} breaks down in the central kpc of NGC~1614. In particular, the non-detections of the nucleus in both CO~(6-5) and CO~(2-1) maps set a lower limit on the $\rm \Sigma_{SFR}$-to-$\Sigma_{Gas}$ ratio about an order of magnitude above the nominal value, corresponding to a very short gas exhaustion time scale of $\rm M_{gas}/SFR < 10$ Myr. The low extinction \citep{Alonso-Herrero2001, Kotilainen2001, Diaz-Santos2008} and low PAH emission \citep{Vaisanen2012} also indicate a deficit of ISM in the nuclear region. The star formation time scale associated with the thermal radio is $\sim 10$ Myr and that with the nonthermal radio is $\sim 100$ Myr. \citet{Alonso-Herrero2001} argued that, based on detections of deep CO stellar absorption, NGC~1614 harbors a nuclear starburst older than 10 Myr, which could have blown away the ambient ISM \citep{Vaisanen2012}. If the time scale for the feedback effects, including both the gas consumption by star formation and mass loss by superwinds, is significantly shorter than 10 Myr (the dynamical time scale of the nuclear region is only $\rm \tau \sim 1$ Myr), then the deviation of the nucleus from the $\rm \Sigma_{SFR}$ vs.
$\Sigma_{Gas}$ relation may indeed be due to the feedback of the old nuclear starburst. This is consistent with the results of \citet{Garcia-Burillo2012}, who found that NGC~1614 has the highest value of the SFE (estimated from the FIR/HCN ratio) among a sample of normal star-forming galaxies and mergers, and argued that this could be due to the exhaustion of the dense molecular gas by starburst activity. In the starburst ring, the correlation between $\rm \Sigma_{SFR,th}$ and $\Sigma_{Gas}$ is rather weak (Spearman's rank correlation coefficient $\rm \rho =0.37$ with the significance of its deviation from zero $\rm p = 0.20$). Also, the correlation between $\rm \Sigma_{SFR,total}$ (estimated using total radio emission) and $\Sigma_{Gas}$ has a large scatter, and is only marginally significant ($\rm \rho =0.64$ and $\rm p = 0.0023$). This is consistent with the systematic offset between the radial profiles of the total radio and CO~(6-5) in Figure~\ref{fig:comp_co65_10_radio}. While the weak correlation between $\rm \Sigma_{SFR,th}$ and $\Sigma_{Gas}$ could be mainly due to the uncertainties associated with the obscuration correction of the $\rm Pa\; {\alpha}$ (the thermal radio is estimated using the obscuration-corrected $\rm Pa\; {\alpha}$), this cannot explain the lack of correlation between $\rm \Sigma_{SFR,total}$ and $\Sigma_{Gas}$. We have already seen a breakdown of the $\rm \Sigma_{SFR}$-to-$\Sigma_{Gas}$ correlation in the nucleus, and interpreted it as a consequence of starburst feedback. The same scenario can be applied to the individual cells in the ring. Given the high resolutions of the radio and CO~(6-5) maps, these cells correspond to star formation regions of linear scales of $\sim 100$~pc. On such fine scales, the $\rm \Sigma_{SFR}$-to-$\Sigma_{Gas}$ relation could be sensitive to the local star-formation history. Indeed, \citet{Alonso-Herrero2001} suggested that in NGC~1614 the starburst propagates like a ``wild fire'' from the nucleus outward.
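The Spearman coefficients quoted for the ring cells need no fitting machinery; they are simply the Pearson correlation of rank-transformed values. A sketch of the computation (the cell values below are hypothetical stand-ins for the per-cell log surface densities, and ties are assumed absent):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tied values assumed):
    rank-transform both arrays, then take the Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical log surface densities for a handful of ring cells:
log_sigma_gas = np.array([3.2, 3.4, 3.5, 3.7, 3.8, 3.9])
log_sigma_sfr = np.array([1.8, 2.1, 2.0, 2.3, 2.2, 2.6])
rho = spearman_rho(log_sigma_gas, log_sigma_sfr)
```

For untied data this reproduces the textbook form $\rho = 1 - 6\sum d_i^2 / [n(n^2-1)]$, where $d_i$ is the rank difference of the $i$-th cell.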
\citet{Vaisanen2012} proposed that even the ring is stratified in terms of the star formation age. In a CO~(1-0) survey of M33, \citet{Onodera2010} found a breakdown of the Kennicutt-Schmidt law on the linear scale of $\rm \sim 80~pc$, and attributed it to the various evolutionary stages of giant molecular clouds (GMCs) and to the drift of young clusters from their parent GMCs. These interpretations are applicable to our results, although our ALMA observations probe an even tighter correlation between CO~(6-5) and SFR, in a LIRG associated with a starburst merger. A stronger correlation is found between $\rm \Sigma_{SFR,nth}$ and $\rm \Sigma_{Gas}$ in the starburst ring ($\rm \rho =0.81$ and $\rm p = 1.6\times 10^{-5}$). This is puzzling because, given the longer star formation time scale associated with the nonthermal radio, this relation should be more sensitive to the star formation history than the $\rm \Sigma_{SFR,th}$ and $\rm \Sigma_{Gas}$ relation. It is likely that the $\rm \Sigma_{SFR,nth}$ and $\rm \Sigma_{Gas}$ correlation is driven by factors other than the Kennicutt-Schmidt law. One possibility is that it is due to the correlation between the magnetic field strength and the gas density \citep{Fiebig1989, Helou1993, Niklas1997}. Observationally, this correlation extends from the smallest \citep{Fiebig1989} to the largest cosmic scales \citep{Vallee1990, Vallee1995}, and has the form $\rm B \propto n^{k}$ for $\rm n > 10^2\; cm^{-3}$, where B is the magnetic field strength, n the gas density, and $\rm k = 0.5\pm 0.1$ \citep{Fiebig1989}. Since the emissivity of the nonthermal (synchrotron) radiation is proportional to $\rm B^2$, the B vs. n correlation leads naturally to a localized ($\sim$ linear) correlation between nonthermal surface brightness and gas surface density. Another possibility is that the nonthermal radio and the CO~(6-5) correlate with each other because they are both powered by cosmic rays (CRs).
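The chain of scalings behind this magnetic-field explanation can be written out explicitly:

```latex
\epsilon_{\rm nth} \propto B^{2}, \qquad B \propto n^{k}
\quad\Longrightarrow\quad
\epsilon_{\rm nth} \propto n^{2k} \approx n
\qquad (k = 0.5 \pm 0.1),
```

so for $k \approx 0.5$ the synchrotron emissivity scales roughly linearly with the local gas density, which is the localized, nearly linear correlation between nonthermal surface brightness and gas surface density invoked above.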
Indeed, CSO observations of $\rm ^{12}$CO(6-5) and $\rm ^{13}$CO(6-5) by \citet{Hailey-Dunsheath2008} of the nuclear starburst in the central 180 pc of NGC~253 suggested that warm molecular gas is most likely to be heated by an elevated density of CRs or by turbulence. In order to test whether CRs dominate the heating of warm molecular gas in NGC~1614, we carried out model-fitting using theoretical models \citep{Meijerink2005, Kazandjian2012, Kazandjian2014} of the Cosmic Ray Dominated Regions (CDRs) and Photon Dominated Regions (PDRs) to fit the emission lines of $^{12}$CO, $^{13}$CO, HCN, HNC, and HCO$^+$. These are mostly single dish data for the entire system of NGC~1614 \citep{Sanders1991, Albrecht2007, Costagliola2011, Lu2014b}, plus some high resolution interferometry data for the central region taken from the literature \citep{Wilson2008, Olsson2010, Konig2013, Imanishi2013} and from this work. The results show that PDR models with strong mechanical heating (by turbulence) provide the best fit while CDR models fit the data rather poorly. Details of these results will be presented elsewhere (Meijerink et al., in preparation). This is consistent with \citet{Rosenberg2014a} who modeled the Herschel observations of $\rm ^{12}$CO up to upper J=13 and $\rm ^{13}$CO up to upper J=6, together with data of other submm lines taken from the literature, of NGC~253. They found that mechanical heating by turbulence is necessary to reproduce the observed molecular emission and CR heating is a negligible heating source. \citet{Rosenberg2014b} reached a similar conclusion for Arp~299A, a nuclear starburst in Arp~299 (a merger-induced LIRG). In principle, the turbulence can be related to the CRs through shocks generated by supernova remnants (SNRs) which can both power the turbulence \citep{Draine1980} and accelerate CRs \citep{Drury1994}. 
However, given the very different mechanisms for energizing low velocity turbulence and for CR acceleration by SNR shocks, it is unlikely that this can explain the localized correlation between $\rm \Sigma_{SFR,nth}$ and $\rm \Sigma_{Gas}$ in the starburst ring down to the linear scale of 100 pc. \subsection{NGC~1614 and NGC~34: A Tale of Two LIRGs} \begin{deluxetable}{lccc} \tabletypesize{\normalsize} \setlength{\tabcolsep}{0.03in} \tablecaption{Comparison between NGC~1614 and NGC~34 \label{tbl:comparison} } \tablehead{ & NGC~1614 & & NGC~34 } \startdata R.A. (J2000)$\rm ^a$ & $\rm 04{^h}34{^m}00{\fs}03$ & & $\rm 00{^h}11{^m}06{\fs}54$ \\ Dec. (J2000)$\rm ^a$ & $\rm -08{\degr}34{\arcmin}45{\farcs}1$&&$\rm -12{\degr}06{\arcmin}27{\farcs}5$ \\ Distance (Mpc) & 67.8 & & 84.1 \\ $\rm L_{IR}$ ($\rm L_\odot$)$\rm ^b$ & $10^{11.65}$ & &$10^{11.49}$ \\ $\rm M_{K}$ (mag)$\rm ^c$& $-24.59$ & & $-24.46$ \\ $\rm M_{HI}$ ($\rm M_\odot$)$\rm ^d$ & $10^{9.45}$ && $10^{9.72}$ \\ $\rm SFR_{tot}\; (M_\odot\; yr^{-1}$)$\rm ^e$ & 51.3 & & 34.7 \\ $\rm M_{H_2, tot}$ ($\rm M_\odot$)$\rm ^f$& $10^{10.12}$ && $10^{10.15}$ \\ $\rm M_{dust,tot}$ ($\rm M_\odot$)$\rm ^g$& $10^{7.60}$ && $10^{7.48}$ \\ AGN & No & & Yes \\ Merger mass ratio & 4:1 -- 5:1 & & 3:2 -- 3:1 \\ $\rm S_{8.4GHz, tot}$ (mJy)$\rm ^h$ & 41.1 & & \\ $\rm S_{CO~(6-5), tot}$ ($\rm Jy\; km\; s^{-1}$)$\rm ^i$ & $1423\pm 126$ & & $937\pm 63$ \\ $\rm S_{435\mu m, tot}$ (mJy)$\rm ^j$ & $831\pm 58$ & & $517\pm 36$ \\ & & & \\ {\bf Central Starburst:} & & & \\ Morphology & circum-nuclear ring & & nuclear disk \\ radius (pc) & $\rm r_{in}=100,\; r_{out}=350$ & & $\rm 100$ \\ $\rm S_{8.4GHz, cent}$ (mJy)$\rm ^k$ & 26.5 & & 15.2 \\ $\rm SFR_{cent}\; (M_\odot\; yr^{-1}$)$\rm ^l$ & 32.8 & & 26.0 \\ $\rm \Sigma_{SFR}$ ($\rm M_\odot\; yr^{-1}\; kpc^{-2}$)$\rm ^m$ & 92.8 & & 827.6 \\ $\rm M_{H_2, cent}$ ($\rm M_\odot$)$\rm ^n$ & $10^{8.97}$ & & $10^{8.76}$ \\ $\rm \Sigma_{Gas}$ ($\rm M_\odot\; pc^{-2}$)$\rm ^o$ & $10^{3.54}$ & & 
$10^{4.40}$ \\ $\rm S_{CO~(6-5), cent}$ ($\rm Jy\; km\; s^{-1}$)$\rm ^p$ & $898\pm 153$ & & $1004\pm 151$ \\ $\rm S_{435\mu m, cent}$ (mJy)$\rm ^q$ & $269\pm 46$ & & $275\pm 41$ \\ $\rm M_{dust, cent}$ ($\rm M_\odot$)$\rm ^r$ & $10^{7.11}$ & & $10^{6.97}$ \\ \enddata \tablecomments{{\small{ \\ {$\rm ^a$} Coordinates of the nucleus in the 8.4 GHz radio continuum.\\ {$^b$} IR luminosity between 8 -- 1000 $\mu m$ \citep{Armus2009}.\\ {$^c$} Absolute K band magnitude \citep{Rothberg2004}.\\ {$^d$} Total mass of neutral atomic hydrogen gas, taken from the compilation by \citet{Kandalyan2003}.\\ {$^e$} Total star formation rate \citep{U2012}.\\ {$^f$} Total mass of molecular hydrogen gas (assuming $\rm X_{CO} = 3\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$); NGC~1614: \citet{Sanders1991}; NGC~34: \citet{Krugel1990}. \\ {$^g$} Total dust mass; NGC~1614: this work; NGC~34: \citet{Esquej2012}.\\ {$^h$} Total flux of the 8.4 GHz radio continuum \citep{Schmitt2006}.\\ {$^i$} Total flux of the CO~(6-5) emission \citep{Lu2014b}.\\ {$^j$} Total flux of the 435$\mu m$ continuum emission; NGC~1614: this work; NGC~34: \citet{Xu2014}.\\ {$^k$} Flux of the 8.4 GHz radio continuum in the central region; NGC~1614: \citet{Herrero-Illana2014}; NGC~34: \citet{Condon1991}.\\ {$^l$} The SFR of the central starburst: $\rm SFR_{cent} = SFR_{tot}\times f_{cent}$, where $\rm f_{cent} = S_{8.4GHz,cent}/S_{8.4GHz,tot} = 0.64$ for NGC~1614, and $\rm f_{cent} = 0.75 $ for NGC~34.\\ {$^m$} Mean SFR column density of the central starburst.\\ {$^n$} Mass of molecular hydrogen gas in the central region; NGC~1614 ($\rm X_{CO} = 3\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$): \citet{Konig2013}; NGC~34 ($\rm X_{CO} = 0.5\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$): \citet{Fernandez2014}.\\ {$^o$} Mean gas column density of the central starburst ($\rm M_{Gas} = 1.36\times M_{H_2}$).\\ {$^p$} Flux of the CO~(6-5) emission in the central starburst region; NGC~1614: this work; NGC~34: \citet{Xu2014}.\\
{$^q$} Flux of the 435$\mu m$ continuum emission in the central starburst region; NGC~1614: this work; NGC~34: \citet{Xu2014}.\\ {$^r$} Dust mass in the central starburst region; NGC~1614: this work; NGC~34: \citet{Xu2014}.\\ }}} \end{deluxetable} In this section we compare NGC~1614 with NGC~34, another local LIRG observed by our team using ALMA band-9 receivers \citep{Xu2014}. Both galaxies are late-stage mergers \citep{Neff1990, Schweizer2007}. As shown in Table~\ref{tbl:comparison}, they have similar absolute K-band magnitudes $\rm M_K$ (indicating similar stellar masses), similar total gas masses obtained from HI and CO observations, and similar total SFRs derived from the IR+UV luminosities \citep{U2012}. On the other hand, as revealed by the ALMA observations and high angular resolution observations in other bands, the two galaxies are very different in the central kpc. First of all, our ALMA data ruled out a Compton-thick AGN in NGC~1614. By comparison, there is a weak AGN in NGC~34 according to the X-ray data \citep{Brightman2011a, Esquej2012}, and the ALMA results \citep{Xu2014} are consistent with the AGN being Compton thick. Nevertheless, for both galaxies, the central kpc is dominated by starburst activity and AGN contributions to both dust and gas heating are insignificant \citep{Xu2014, Stierwalt2013}. In NGC~34, the starburst is concentrated in a compact nuclear disk of $\rm r \sim 100\; pc$, with very high $\rm \Sigma_{SFR,th}$ and $\Sigma_{Gas}$. In NGC~1614, a starburst ring between $\rm r_{in}=100\; pc$ and $\rm r_{out}=350\; pc$ dominates the central region, with moderate mean $\rm \Sigma_{SFR,th}$ and mean $\Sigma_{Gas}$ compared to other local starbursts (Figure~\ref{fig:ksplot}).
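The mean surface densities in Table~\ref{tbl:comparison} follow from the tabulated $\rm SFR_{cent}$, $\rm M_{H_2,cent}$, and the quoted geometry alone. A sketch reproducing them (ring $100 < r < 350$ pc for NGC~1614; nuclear disk $r < 100$ pc for NGC~34):

```python
import math

def mean_sigma_sfr(sfr_msun_yr, r_out_kpc, r_in_kpc=0.0):
    """Mean SFR surface density (Msun/yr/kpc^2) over a disk or annulus."""
    return sfr_msun_yr / (math.pi * (r_out_kpc**2 - r_in_kpc**2))

def mean_log_sigma_gas(log_m_h2, r_out_pc, r_in_pc=0.0, helium=1.36):
    """Mean gas surface density, log10(Msun/pc^2), with M_gas = 1.36 M_H2."""
    area_pc2 = math.pi * (r_out_pc**2 - r_in_pc**2)
    return math.log10(helium * 10**log_m_h2 / area_pc2)

# NGC 1614 starburst ring and NGC 34 nuclear disk, using Table values:
sigma_sfr_1614 = mean_sigma_sfr(32.8, 0.35, 0.10)            # ~ 93 Msun/yr/kpc^2
sigma_sfr_34 = mean_sigma_sfr(26.0, 0.10)                    # ~ 828 Msun/yr/kpc^2
log_sigma_gas_1614 = mean_log_sigma_gas(8.97, 350.0, 100.0)  # ~ 3.55
log_sigma_gas_34 = mean_log_sigma_gas(8.76, 100.0)           # ~ 4.40
```

This recovers the tabulated 92.8 and 827.6 $\rm M_\odot\; yr^{-1}\; kpc^{-2}$ exactly and, to within rounding, the $10^{3.54}$ and $10^{4.40}$ $\rm M_\odot\; pc^{-2}$ entries.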
It is worth pointing out that different CO conversion factors have been adopted for the two cases: for the nuclear starburst in NGC~34, the ALMA observations showed that the molecular gas is concentrated in a well-organized disk controlled mostly by the gravity of stars \citep{Xu2014}. Therefore, we chose to use the conversion factor for (U)LIRGs: $\rm X_{CO} = 0.5\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$ \citep{Scoville1997, Downes1998}. On the other hand, for the starburst ring in NGC~1614, the ALMA observations presented here and the SMA observations of \citet{Konig2013} reveal that much of the CO emission is clumped in individual knots associated with giant molecular associations (GMAs), which might be self-gravitating. In this case, a standard Galactic CO conversion factor is more appropriate \citep{Papadopoulos2012a}: $\rm X_{CO} = 3.0\times 10^{20}\; cm^{-2}(K\; km\; s^{-1})^{-1}$. Nevertheless, these conversion factors are very uncertain and are the major error sources for the molecular gas mass estimates. \begin{figure}[!htb] \plotone{CO_SLED.ps} \caption{Plot of the $\rm L_{IR}$ normalized spectral line energy distributions (SLEDs) of NGC~1614 and NGC~34. The data points (all obtained by single-dish observations) are taken from the literature with the following references: for NGC~1614: \citet{Sanders1991} for CO~(1-0); \citet{Albrecht2007} for CO~(2-1); \citet{Wilson2008} for CO~(3-2); \citet{Lu2014b} for CO~(4-3) and other higher-J lines; for NGC~34: \citet{Albrecht2007} and \citet{Maiolino1997} for CO~(1-0); \citet{Papadopoulos1998} for CO~(2-1); Zhang et al. (2014, in preparation) for CO~(3-2); \citet{Lu2014b} for CO~(4-3) and other higher-J lines.
The solid (dotted) line is the model fit to the CO SLED of NGC~1614 (NGC~34).} \label{fig:co-sled} \end{figure} In Figure~\ref{fig:co-sled} we compare the spectral line energy distributions (SLEDs) of the total CO emission (measured by single-dish observations) of these two galaxies, taken from Herschel SPIRE FTS observations \citep{Lu2014b}. The CO SLED of NGC~1614 peaks around upper J = 5-7, while that of NGC~34 reaches a plateau after a rapid increase, and the peak is around upper J=9. In order to further investigate the physical conditions of these two galaxies, we modeled the observed CO SLEDs using simple two-component RADEX large velocity gradient (LVG) radiative transfer models \citep{vanderTak2007}, adopting a procedure similar to that in \citet{Kamenetzky2012}. Admittedly, results from such model fittings suffer significant degeneracy between parameters \citep{Rosenberg2014a}. Nevertheless, they are useful for translating the information in the CO SLED into quantitative estimates of physical parameters of the gas, albeit with large uncertainties. We find that both SLEDs can be well fitted by the combination of a cool and a warm component. Both galaxies have similar gas densities of ($10^{2.5}$, $10^{2.6}$) cm$^{-3}$ and ($10^4$, $10^4$) cm$^{-3}$, for the cool and warm components in (NGC~34, NGC~1614), respectively. However, the kinetic temperature of the warm component in NGC~34 (890 K) is twice as high as that of NGC~1614 (445 K), consistent with the fact that the nuclear starburst in NGC~34 is 5 times more compact than the circum-nuclear starburst ring in NGC~1614. It is worth noting that the AGN contribution to the warm gas in NGC~34 is insignificant \citep{Xu2014}. Given the overall similarities between the two host galaxies (Table~\ref{tbl:comparison}), it is likely that the stark difference between the two central starbursts is caused by the difference in the merging processes that the two LIRGs have experienced.
The morphology of NGC~1614---one prominent tail and one relatively small secondary tail---suggests an unequal mass encounter (mass ratio $\gtrsim 4:1$) and/or a scenario in which one of the galaxies experienced a retrograde passage. Several authors have argued for a high mass ratio encounter; \citet{Rothberg2006} note the isophotal shape of NGC 1614 and its correspondence with simulations of high mass ratio mergers, while \citet{Vaisanen2012} identify a possible remnant body of the lower-mass companion. Both \citet{Rothberg2006} and \citet{Vaisanen2012} come to the same conclusion---that NGC 1614 is a 4:1 mass ratio merger---but the former assumes the nuclei have already merged and the latter relies on the identification of an interacting galaxy. NGC~34 has no clear evidence for dual nuclei, suggesting that the two galaxies have already coalesced. Owing to the asymmetry in the integrated brightness of the two tidal tails, \citet{Schweizer2007} argue that this system is the result of a merger of two disk galaxies with a mass ratio between 3:2 and 3:1. The disky isophotal shape of the remnant (which shows no evidence for a disk in the K-band morphology) is consistent with a formation scenario of a major but unequal mass merger \citep{Rothberg2006,Naab2006}. Preliminary dynamical modeling of this system is consistent with the aforementioned mass ratio and both disks experiencing prograde interactions (G. C. Privon et al.\ \emph{in prep}). This dynamical model is consistent with the system being observed $\sim 250$--$300$ Myr after the first passage of the two galaxies, somewhat shorter than the suggested $400$ Myr age of the stellar disk \citep{Schweizer2007}. Hence, NGC~34 has experienced a major merger of two galaxies of similar mass, which was catastrophic and destroyed both progenitor disks \citep{Schweizer2007}.
Simulations by \citet{Cox2008} exploring the effect of mass ratio on merger-induced starbursts found a decreasing burst strength with increasing primary/secondary mass ratio; given the previously mentioned estimates of the mass ratios for NGC 34 and NGC 1614, the star formation surface densities are consistent with this interpretation. It might be that the higher mass ratio merger experienced by NGC~1614 caused less efficient torquing of the gas, leading to much of the central gas settling into the nuclear ring (with the help of either the inner Lindblad resonance associated with a bar \citep{Olsson2010} or the non-axisymmetric potential caused by a minor merger \citep{Combes1988, Knapen2004, Mazzuca2006}) rather than collecting in the center, as in NGC~34. This may also hint at the answer to the question of why NGC~1614 has not yet developed an AGN \citep{Vaisanen2012} while NGC~34 has one. According to \citet{Hopkins2012a}, the build-up of a centrally peaked dense gas disk is a necessary condition for triggering AGN activity in late-stage mergers. An alternative explanation for NGC~1614's comparatively lower $\rm \Sigma_{SFR}$, if the scenario proposed by \citet{Vaisanen2012} is accurate, is that the merger has simply not yet run to completion, and so the final funneling of gas toward the nucleus has not yet occurred \citep[e.g.,][]{Mihos1994b, Hopkins2012a}. \citet{Olsson2010} and \citet{Konig2013} both show that indeed most of the molecular gas in NGC~1614 sits in the dust lane (outside the ring) and even further out. It could be that the relatively minor perturbation of the first pass (which led to the northeast tail) created the outward propagating starburst (i.e. the ``wild fire''), as revealed by the nuclear ring and the weak and old nuclear starburst, while the final merger may trigger a much stronger nuclear starburst, as seen in NGC~34.
With current knowledge of the encounters in NGC~34 and NGC~1614, we cannot firmly identify the cause of the different starburst characteristics in the two systems. It is likely to be due to the effect of different mass ratios, but we cannot rule out other possible causes such as different current phases of the encounters and different encounter geometries. While NGC~34 represents a large population of LIRGs with starburst nuclei (e.g. Arp~220), NGC~1614 represents those with circum-nuclear starburst rings, which are also common in LIRGs. Among the GOALS sample, at least five other LIRGs (NGC~1068, NGC~5135, NGC~7469, NGC~7552, and NGC~7771) have such rings. Future dynamical models (G. C. Privon et al.\ \emph{in prep}) matched to the kinematics and morphology of NGC~34 and NGC~1614 may provide a more concrete answer to the question of how the two galaxies, and the two LIRG populations they represent, developed such different central starbursts over the merging process. \section{Summary}\label{sect:summary} We carried out ALMA observations of the CO~(6-5) line emission and of the 435~$\mu m$ dust continuum emission in the central kpc of NGC~1614, a local LIRG at a distance of 67.8 Mpc ($\rm 1\arcsec = 329\; pc$). The CO emission and the continuum are both well resolved by the ALMA beam ($\rm 0\farcs26\times 0\farcs20$) into a circum-nuclear ring. The integrated flux of CO~(6-5) is $\rm f_{CO~(6-5)} = 898\; (\pm 153) \; Jy\; km\; s^{-1}$, and the flux of the continuum is $\rm f_{435\mu m} = 269\; (\pm 46)\; mJy$. These are $\rm 63(\pm 12) \%$ and $\rm 32(\pm 6) \%$ of the total CO~(6-5) flux and 435~$\mu m$ continuum flux of NGC~1614 measured by Herschel, respectively. The molecular ring, located between $\rm 100\; pc < r < 350\; pc$, looks clumpy and includes several unresolved (or marginally resolved) knots with a median velocity dispersion of $\rm \delta v \sim 40\; km\; s^{-1}$.
These knots are associated with star formation regions with $\rm \Sigma_{SFR}\sim 100\; M_\odot\; yr^{-1}\; kpc^{-2}$ and $\rm \Sigma_{Gas}\sim 10^4\; M_\odot\; pc^{-2}$. The non-detections of the nucleus in both the CO~(6-5) and the 435 $\mu m$ continuum rule out, with relatively high confidence, a Compton-thick AGN in NGC~1614. Comparisons with the radio continuum show that the local correlation, on the linear scale of $\sim 100$~pc, between $\rm \Sigma_{Gas}$ and $\rm \Sigma_{SFR}$ (i.e. the Kennicutt-Schmidt law) is severely disturbed. In particular, the lower limit on the $\rm \Sigma_{SFR}$-to-$\Sigma_{Gas}$ ratio in the nucleus is about an order of magnitude above the nominal value of the standard Kennicutt-Schmidt law. This break-down of the star formation law could be caused by an outward propagation of the central starburst (i.e. the ``wild fire'' scenario proposed by \citealt{Alonso-Herrero2001}). Our results also show that the CO~(6-5) correlates more strongly with the nonthermal radio component than with either the total radio emission or the thermal radio component, possibly due to an in situ correlation between the magnetic field strength and the gas density. \vskip1truecm \noindent{\it Acknowledgments}: Adam Leroy and Tony Remijan from NAASC are thanked for their help with data reduction. An anonymous referee is thanked for constructive comments. Y.G. is partially supported by NSFC-11173059, NSFC-11390373, and CAS-XDB09000000. Y.Z. thanks the NSF of Jiangsu Province for partial support under grant BK2011888. V.C. would like to acknowledge partial support from the EU FP7 Grant PIRSES-GA-2012-316788. This paper makes use of the following ALMA data: ADS/JAO.ALMA-2011.0.00182.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
This research has made extensive use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \bibliographystyle{apj}
\section{INTRODUCTION} Two Galactic X-ray sources known to produce relativistic radio jets are GRS 1915+105 \cite{Mir94} and GRO J1655-40 \cite{Tin95,Hje95}. Optical observations of GRO J1655-40 have provided dynamical evidence for a 7 $M_{\odot}$ black hole \cite{Oro97} in a 2.6 day binary orbit with a $\sim$F4 IV companion star. GRS1915+105 is presumed to be a black hole binary, based on its high X-ray luminosity and similarities with GRO J1655-40. However, a direct measurement of the motion of its companion star has been prevented by interstellar extinction, which limits optical/IR studies of GRS1915+105 to wavelengths $> 1$ micron \cite{Mir94}. While each source was active at radio frequencies, H I absorption measurements were combined with Galactic rotation models to derive distances of 12.5 kpc and 3.2 kpc for GRS 1915+105 \cite{Mir94} and GRO J1655-40 \cite{Tin95}, respectively. GRS 1915+105 is a transient X-ray source, and the BATSE light curve (20--100 keV) indicates that bright X-ray emission began during May 1992 \cite{Har97}. Before the launch of the {\it Rossi X-ray Timing Explorer} ($RXTE$), observations in soft X-rays were sporadic, and GRS1915+105 may have persisted as a bright source in soft X-rays since 1992. When the All Sky Monitor (ASM) on $RXTE$ established regular coverage on 1996 Feb 22, the source was bright and highly variable, and it has remained so throughout 1996 and 1997. The ASM light curve, which is shown in Figure~\ref{fig:asm19}, illustrates both the extent of the intensity variations and also the repetitive character of particular variability patterns. The early ASM light curve was used to initiate $RXTE$ pointed observations (PCA and HEXTE instruments), which began on 1996 April 6. Since then the source has been observed once or twice per week, and most of the data are available in a public archive.
At the higher time resolution provided by PCA light curves, there are again dramatic and repetitive patterns of variations \cite{Gre96}. These results constitute one of the extraordinary chapters in the history of high-energy astronomy. \begin{figure*} \centerline{\psfig{figure=asm1915.ps,width=16cm,height=16cm} } \caption{ASM light curve (2--12 keV) of GRS1915+105 for 1996 and 1997. The Crab Nebula, for reference, yields 75.5 ASM c/s. The ASM hardness ratio, $HR2$, is defined as the count rate in the 5--12 keV band relative to the rate in the 3--5 keV band. The time intervals that correspond with our groups of combined X-ray power spectra (see Table 1) are shown above the light curve.} \label{fig:asm19} \end{figure*} Fourier analyses of the first 31 PCA observations \cite{Mor97} of GRS1915+105 revealed three different types of oscillations: a quasi-periodic oscillation (QPO) with a constant frequency of 67 Hz; dynamic, low-frequency (0.05 to 10 Hz) QPO with a large variety of amplitudes and widths; and complex, high-amplitude dip cycles ($10^{-3}$ to $10^{-1}$ Hz) that are related to the extreme X-ray variations noted above. The combined characteristics of the power spectra, light curves, and energy spectra were interpreted as representing four different emission states \cite{Mor97}, none of which resemble the canonical states of black hole binaries \cite{Van95}. The other microquasar, GRO J1655-40, was first detected with BATSE on 1994 July 27, and the correlation between hard X-ray activity and the ejections of relativistic radio jets \cite{Har95} was an important step in establishing the relationship between accretion changes and the formation of jets. During late 1995 and early 1996, GRO J1655-40 entered a quiescent accretion state, permitting optical spectroscopy of the companion star, which led to our knowledge of the binary constituents and mass of the black hole~\cite{Oro97}, as noted above.
The ASM recorded a renewed outburst from GRO J1655-40 \cite{Lev96} that began on 1996 April 25. The ASM light curve is shown in Figure~\ref{fig:asm16}. With great fortune, a concurrent optical campaign was in progress, and it was determined that optical brightening preceded the X-ray turn-on by 6 days, beginning first in the I band and then accelerating rapidly in the B and V bands. These results provide concrete evidence favoring the accretion disk instability as the cause of the X-ray nova episode. \begin{figure*} \centerline{\psfig{figure=asm1655.ps,width=16cm,height=16cm} } \caption{(top) ASM light curve (1.5--12 keV) of GRO J1655-40 for 1996 and 1997. The tick marks above the light curve show the times of RXTE pointed observations, either from the public archive (1997) or our guest observer program (1996). (bottom) The ASM hardness ratio, $HR2$, as defined previously.} \label{fig:asm16} \end{figure*} The $RXTE$ observations of GRO J1655-40 indicate a more stable form of accretion. X-ray spectral variations (see Fig.~\ref{fig:asm16}) resemble the canonical ``soft/high'' and ``very high'' states in black hole binaries \cite{Rem98,Van95}. There are X-ray QPOs in the range of 8--22 Hz, and there is also a transient, high-frequency QPO at 300 Hz~\cite{Rem98}. This QPO is detected only when the X-ray power-law component reaches its maximum strength. The efforts to explain the 67 Hz QPO in GRS1915+105 and the 300 Hz QPO in GRO J1655-40 commonly invoke effects rooted in General Relativity (GR). There are at least 4 proposed mechanisms that relate the QPO frequency to a natural time scale of the inner accretion disk in a black hole binary. These are: the last stable orbit \cite{Sha83,Mor97}, diskoseismic oscillations \cite{Per97,Now97}, frame dragging \cite{Cui98}, and an oscillation in the centrifugal barrier \cite{Tit98}. The physics of all of these phenomena invokes GR effects in the inner accretion disk.
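The first of these mechanisms admits a one-line estimate: for a non-rotating $7\,M_{\odot}$ black hole, the orbital frequency at the last stable orbit already lands close to the 300 Hz QPO. A back-of-the-envelope sketch (Schwarzschild geometry only, spin ignored; constants and variable names are ours):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M = 7.0 * 1.989e30   # black hole mass: 7 solar masses [kg]

# Schwarzschild last stable orbit at r = 6GM/c^2; the Keplerian frequency there is
# f = (1/2pi) * sqrt(GM/r^3) = c^3 / (2pi * G * M * 6^{3/2})
f_isco = c**3 / (2.0 * math.pi * G * M * 6.0**1.5)
print(f"Last-stable-orbit frequency for 7 Msun: {f_isco:.0f} Hz")  # -> roughly 314 Hz
```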
It has also been proposed that the high frequency QPOs may be caused by an inertial-acoustic instability in the disk \cite{Che95} (with non-GR origin), although the oscillation in GRO J1655-40 would extend this application to higher frequencies than had been argued previously. In this paper we advertise some recent work that associates jet formation in GRS1915+105 with features in the X-ray light curve. We then turn to the topic of X-ray QPOs. New results are presented on the reappearance of 67 Hz oscillations in GRS1915+105. Finally, we describe the various QPO tracks that appear in GRO J1655-40, and we explain how they behave in response to the strength of the power-law component in the X-ray spectrum. \section{CLUES FOR THE ORIGIN OF JETS IN GRS1915+105} Several groups have combined X-ray, radio, and/or infrared observations of GRS 1915+105 to probe the properties of jet formation and relate the ejection events to features in the X-ray light curves. Infrared jets were discovered \cite{Sam96}, and infrared flares were seen to occur after radio flares \cite{Fen97,Mir97}. These investigations provide solid evidence that the infrared flares represent synchrotron emission from rapidly evolving jets. It has been further demonstrated that the radio, infrared, and X-ray bands occasionally show strong oscillations with a quasiperiodic time scale of 20--40 min \cite{Rod97,Fen97,Eik98,Poo98}. In perhaps the most impressive of these studies to date, there was a series of infrared flares (with a 20 min recurrence time), and in six of six possible cases the flares were seen to follow dramatic dipping cycles in the X-ray light curve. Since these dips have been analyzed as representing the disappearance of the thermal X-ray emission from the inner disk \cite{Bel97a,Bel97b}, the infrared/X-ray correlation shows that the jet material originates in the inner accretion disk \cite{Eik98}.
Another conclusion drawn from the recent X-ray/radio/infrared studies is that there is a wide distribution of ``baby jets'' in which quantized impulses appear at $\sim30$ min intervals. The radio strength of these events is one to three orders of magnitude below the levels of the superluminal outbursts of 1994 \cite{Poo98,Mir94}. We expect that $RXTE$ will continue to support multifrequency observations of GRS1915+105 during 1998. There are opportunities for further analysis to characterize the distribution and expansion times of the jets, analyze the infrared and radio spectra of these events, and study the details of the X-ray light curve in the effort to constrain the physics of the trigger mechanism. \section{67 HZ OSCILLATIONS IN GRS1915+105} There have been many observations of GRS1915+105 with $RXTE$ since the six (1996 April 6--June 11) that provided detections of QPO at 67 Hz \cite{Mor97}. Given the importance of this QPO and also the variety of emission states recorded for GRS1915+105 (see Figure~\ref{fig:asm19}), we investigated the data archive for new detections of this QPO. We adopted a global perspective, and we divided the $RXTE$ observations into a sequence of X-ray state intervals, which we label as groups ``g1'' through ``g10'' in Figure~\ref{fig:asm19}. The groups were selected with consideration of both the ASM light curve and the characteristics of the PCA power spectra, and some observations between the group boundaries were ignored as representing transition states. In Table~\ref{tab:67hz} we list the time intervals (cols. 2, 3), the number of observations (col. 4), the X-ray state (col. 5), and the average X-ray flux in Crab units (col. 6) for each group. The typical observation has an exposure time of 10 ks. The X-ray state description follows the convention of Morgan et al. \cite{Mor97}, which describes GRS1915+105 as being relatively steady and bright (B), flaring (FL), chaotic (CH), or low-hard (LH).
\begin{table*} \newlength{\digitwidth} \settowidth{\digitwidth}{\rm 0} \catcode`?=\active \def?{\kern\digitwidth} \caption{The 67 Hz QPO in GRS1915+105} \label{tab:67hz} \begin{tabular*}{\textwidth}{@{}l@{\extracolsep{\fill}}llrccccc} \hline group & start & end & obs & state & flux (Crab) & freq. (Hz) & FWHM (Hz) & ampl. \\ \hline 1 & 1996 Apr 06 & 1996 May 14 & 7 & B & 1.06 & 64.5 & 4.0 & 0.0069 \\ 2 & 1996 May 21 & 1996 Jul 06 & 14 & FL & 1.00 & 65.7 & 2.3 & 0.0022 \\ 3 & 1996 Jul 14 & 1996 Aug 10 & 6 & LH & 0.58 & & & \\ 4 & 1996 Sep 16 & 1996 Oct 15 & 8 & B & 1.01 & 67.6 & 1.5 & 0.0016 \\ 5 & 1996 Nov 28 & 1997 May 08 & 28 & LH & 0.31 & 68.3 & 2.3 & 0.0023 \\ 6 & 1997 May 13 & 1997 Jun 30 & 18 & CH/B & 0.64 & & & \\ 7 & 1997 Jul 07 & 1997 Aug 21 & 17 & B & 1.33 & 66.9 & 4.3 & 0.0039 \\ 8 & 1997 Aug 24 & 1997 Sep 29 & 15 & CH/FL & 1.17 & & & \\ 9 & 1997 Oct 09 & 1997 Oct 25 & 4 & LH & 0.47 & & & \\ 10 & 1997 Oct 30 & 1997 Dec 22 & 15 & FL & 1.41 & 67.4 & 4.2 & 0.0035 \\ \hline \end{tabular*} \end{table*} We then combined the power spectra in each group, using the full energy coverage of the PCA instrument. We fit the results for a power continuum (with a power-law function) and a QPO feature (with a Lorentzian profile) over the range of 40--120 Hz. We emphasize that the location of the central QPO frequency is free to wander within this frequency interval. The average power spectra for the 10 groups (linear units) and the QPO fits for 6 cases are shown in Figure~\ref{fig:fit67hz}. \begin{figure*} \centerline{\psfig{figure=hmult67.ps,width=16cm,height=16cm} } \caption{Average power density spectra in the range of 20--120 Hz for RXTE PCA observations of 1996 and 1997, combined in 10 groups. For the 6 cases in which a QPO is detected (see Table 1), the QPO fit is shown with a solid line.} \label{fig:fit67hz} \end{figure*} The results derived from this analysis are listed in Table~\ref{tab:67hz}. The central QPO frequency is given in col.
7, and there is a narrow distribution of $66.7 \pm 1.4$ Hz. The QPO FWHM values (col. 8) have a mean value of $3.4 \pm 1.0$ Hz. Comparing these observing intervals, we conclude that the average X-ray luminosity of GRS1915+105 may vary by a factor of 4 with no significant change in the characteristics of the 67 Hz QPO. The integrated QPO amplitude is given in col. 9. The amplitudes (like the power spectra in Figure~\ref{fig:fit67hz}) are normalized by the mean X-ray count rate for GRS1915+105. The integrated power in the 67 Hz QPO is in the range of 0.2\%--0.7\% of the mean X-ray flux. The results for group 5 are particularly noteworthy. During this period the source was in the low-hard state for a long time (see Figure~\ref{fig:asm19}). The PCA light curves in 1 s time bins show variations limited to moderate flickering, with rms variations $\sim 10$\%. However, the continuum power at 40--120 Hz is relatively high during this interval (see Figure~\ref{fig:fit67hz}). The large number of observations in group 5 partially compensates for the losses in statistical sensitivity to QPO detection due to the lower count rate and elevated continuum power. Nevertheless the QPO search does find a small feature that is consistent in frequency (68.3 Hz), width (2.3 Hz), and amplitude (0.23\%) with the other detections. We estimate that the uncertainty in the amplitude is 0.09\%, so that the detection of the 67 Hz QPO in group 5 has a significance of 2.6 $\sigma$. For the 4 groups that do not yield QPOs in the range of 40--120 Hz, the uncertainties are slightly larger, and we cannot exclude the possibility that GRS1915+105 is $always$ emitting X-ray QPOs at 67 Hz with amplitudes in the range of 0.1\% or larger. There remain many avenues for further investigation of this QPO, e.g. time lags at 67 Hz, analysis of the energy spectrum for the groups with positive QPO detection, and segregation of data with alternative schemes such as the phases of jet-related dipping cycles.
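The continuum-plus-Lorentzian fitting procedure described above can be sketched on synthetic data; a minimal illustration (all numerical values are ours and purely illustrative, not actual PCA data):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(f, norm, alpha, amp, f0, fwhm):
    """Power-law continuum plus a Lorentzian QPO profile."""
    lorentzian = amp * (fwhm / 2.0)**2 / ((f - f0)**2 + (fwhm / 2.0)**2)
    return norm * f**(-alpha) + lorentzian

rng = np.random.default_rng(1)
f = np.linspace(40.0, 120.0, 400)              # fit window in Hz
true_params = (5.0, 1.0, 0.5, 67.0, 3.4)       # illustrative "true" values
psd = model(f, *true_params) * (1.0 + 0.02 * rng.standard_normal(f.size))

# the QPO centroid is left free to wander anywhere inside the fit window
popt, _ = curve_fit(model, f, psd, p0=(1.0, 1.0, 0.1, 65.0, 2.0))
print(f"QPO centroid {popt[3]:.1f} Hz, FWHM {popt[4]:.1f} Hz")
```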
All of these topics will be pursued during the next several months. \section{QPOs in GRO J1655-40} We have conducted similar analyses of PCA power spectra for individual observations of GRO J1655-40. As reported previously \cite{Rem98}, there are transient QPOs in the range of 8--30 Hz and there is a high frequency QPO near 300 Hz. All of these QPOs are associated with the strength of the power-law component. With respect to Figure~\ref{fig:asm16}, the QPOs at 8--30 Hz appear when observations have hard spectra that correspond with ASM HR2 values above 0.8, while the 300 Hz QPO is significant only when we combine the power spectra for the 7 ``hardest'' observations made with the PCA (1996 August and October). We fit the individual PCA power spectra for power continuum and QPOs, as described above, using frequency windows of 0.02--2 Hz and 5--50 Hz. In Figure~\ref{fig:qpo16} we show the central QPO frequencies as a function of the source count rate in the PCA energy channels above 13 keV (or above channel 35). We use an open triangle for narrow QPOs ($\nu / \delta\nu > 5$) and the ``*'' symbol for broad QPOs ($\nu / \delta\nu < 4$). In some observations, both narrow and broad QPOs appear in the same power spectrum (i.e. one 10 ks observation). The ``x'' symbol shows a narrow and weak QPO derived from the average power spectrum obtained during the 1997 PCA observations (MJD interval 50500--50650). \begin{figure*} \centerline{\psfig{figure=pub_fxqpo.ps,width=10cm,height=5.5cm} } \caption{The central frequency of X-ray QPOs in GRO J1655-40 as a function of the PCA count rate above 13 keV. The open triangles represent narrow QPOs, while the ``*'' symbols represent broad ones.} \label{fig:qpo16} \end{figure*} The results in Figure~\ref{fig:qpo16} show that the low-frequency QPOs in GRO J1655-40 are organized in three tracks.
A broad QPO appears to be stationary near 8 Hz, while the narrow QPO shifts to lower frequency as the hard X-ray flux increases. The QPO derived from the sum of 1997 observations appears to be a simple extension of this narrow QPO track, occurring when the X-ray flux above 13 keV is nearly zero. Very low frequency QPOs (0.085 and 0.11 Hz) are seen on two occasions when the hard X-ray flux is near maximum. These QPOs coexist with the 300 Hz QPO, and they are reminiscent of the 0.067 Hz QPOs in GRS1915+105. We speculate that the 0.1 Hz QPOs appear near the threshold of the chaotic light curves manifest in GRS1915+105. GRO J1655-40 approaches this threshold but does not cross the line into unstable light curves during the 1996--1997 outburst. In Figure~\ref{fig:asm16} we see that GRO J1655-40 fades below 20 mCrab on 1997 Aug 17. Whether there will be a renewed outburst in 1998 is anyone's guess, but the ASM will surely be monitoring this source for any signs of X-ray activity.
\section{Introduction} In this paper we investigate quantities of the form \begin{equation} \label{q-versie} % h^{\pm}(q_1,q_2):=\sum_{k=1}^\infty\frac{\q{k}}{1\pm \Q{k}},\qquad 0<q_1,q_2<1,\quad q_1\in\mathbb{Q},\quad q_2=1/p_2,\quad p_2\in\mathbb{N}\setminus\{1\}. \end{equation} Since we will assume $q_1 ,q_2$ to be fixed, we will write $h^{\pm}=h^{\pm}(q_1 ,q_2 )$. In the special case \begin{equation} \label{specgeval} % q_i=q^{r_i},\qquad q=1/p,\quad p\in\mathbb{N}\setminus\{1\},\quad r_i \in\mathbb{N},\end{equation} by writing $(1+ \Q{k})^{-1}=\sum_{j=0}^\infty (- \Q{k})^j$ and changing the order of summation, we clearly have \[\lim_{q\uparrow 1} (1-q)\,h^+=\sum_{j=0}^\infty \frac{(-1)^j}{r_1+jr_2}=\frac{1}{r_2 }\, \Psi(-1,1,\frac{r_1 }{r_2 })\] where $\Psi$ is the Lerch transcendent, which is a generalization of the Hurwitz zeta function and the polylogarithm function. Some particular cases are $h^+(q,q)=-\ln_q 2$ and $h^+(q,q^2)=\beta_q (1)$ which are $q$-extensions of $-\ln 2$ and $\beta(1)=\pi/4$, respectively. In the same manner $h^-$ can be seen as a $q$-analogue of the (harmonic) series $\sum_{k=1}^\infty (r_1 +kr_2)^{-1}$. In 1948 Erd\H{o}s proved that $h^-(q,q)=\zeta_q(1)$ is irrational when $q=1/2$, see \cite{Erdos}. Later, Peter Borwein \cite{bor3,bor1} showed that $\zeta_q(1)$ and $\ln_q 2$ are irrational whenever $q=1/p$ with $p$ an integer greater than $1$. Other irrationality proofs were found in, e.g., \cite{Amde,Bund,Matala2,Walter1,kelly,Zudilin1,Zudilin2}. To the best of our knowledge, the sharpest upper bounds for the irrationality measure of $\zeta_q(1)$ and $\ln_q 2$ which are known in the literature until now, are $2.42343562$ and $3.29727451$ respectively \cite{Zudilin1,Zudilin2}. In \cite{Matala} Matala-aho and Pr\'evost also considered quantities of the form \reff{q-versie}. However, not all the numbers we prove to be irrational are covered by their result. 
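The limit relation above is easy to test numerically; a minimal sketch for $r_1=r_2=1$, where $(1-q)\,h^+(q,q)$ should approach $\ln 2$ as $q\uparrow 1$ (the truncation length is an ad hoc choice of ours):

```python
import math

def h_plus(q1, q2, terms=200000):
    """Direct summation of h^+(q1, q2) = sum_{k>=1} q1^k / (1 + q2^k)."""
    total = 0.0
    pow1, pow2 = 1.0, 1.0
    for _ in range(terms):
        pow1 *= q1
        pow2 *= q2
        total += pow1 / (1.0 + pow2)
    return total

q = 0.999
approx = (1.0 - q) * h_plus(q, q)
print(approx, math.log(2))   # approx tends to ln 2 = 0.6931... as q -> 1
```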
To prove this irrationality we use a well-known lemma, which expresses the fact that a rational number can be approximated to order 1 by rational numbers and to no higher order \cite[Theorem 186]{Hardy}. \begin{lemma} \label{irrlemma} % Let $x$ be a real number. Suppose there exist integers $a_n,b_n \left(n\in \mathbb{N}\right)$ such that \begin{enumerate} \item[(i)] $b_nx-a_n\neq 0$ for all $n\in \mathbb{N}$; \item[(ii)] $\lim\limits_{n\rightarrow\infty}\left(b_nx-a_n\right)=0$. \end{enumerate} Then $x$ is irrational. \end{lemma} \begin{proof} Suppose $x$ is rational, so write $x=a/b$ with $a,b$ coprime. Then $b_na-a_nb=b\left(b_nx-a_n\right)$ is a nonzero integer sequence that tends to zero, which is a contradiction since a nonzero integer has absolute value at least $1$. \end{proof} In Section~\ref{sectionRA} we construct rational approximations to $h^{\pm}$. In particular, we extend the Pad\'e approximation technique applied in \cite{Walter1} to prove the irrationality of $\zeta_q(1)$ and $\ln_q 2$ and use little $q$-Jacobi polynomials (which are a generalization of the $q$-Legendre polynomials). Section~\ref{sectionirr} then mainly consists of calculating the asymptotic behaviour of the ``error term''. Section~\ref{sectionspec} points out what improvements can be made in the special case \reff{specgeval}.
If we define \begin{equation} \label{eta}\eta^-:=1+\frac{3}{\pi^2},\qquad \eta^+:=1+\frac{4}{\pi^2}, \end{equation} \begin{equation} \label{gamma-} \gamma^{-}(r_2 ):=\frac{3}{\pi^2}\left[1+2\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}\sum_{\substack{l=1\\ (l,r_2 )=1}}^{r_2 }\frac{1}{l^2}-\frac{1}{r_2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi}{\varpi+1}\right]<1+\frac{3}{\pi^2} \end{equation} and \begin{multline} \label{gamma+} % \gamma^+(r_2 ):=\frac{1}{\pi^2}\left[4 +6\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}\sum_{\substack{l=1\\ (l,r_2 )=1}}^{r_2 }\frac{1}{l^2} \right] \\ -\frac{1-(-1)^{r_2}}{2\pi^2}\left[ \frac{2}{r_2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi}{\varpi+1} +\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1} \sum_{\substack{l=\lceil \frac{r_2{}}{2} \rceil \\(l,r_2 )=1}}^{r_2 } \frac{1}{l^2}\right] < 1+\frac{4}{\pi^2}, \end{multline} then our main results are the following. \begin{theorem} \label{irrationality alg}% Let $q_2=1/p_2$ with $p_2\in\mathbb{N}\setminus\{1\}$ and $q_1\in\mathbb{Q}$ with $0<q_1<1$. Then the number $h^{\pm}$, defined as in \reff{q-versie}, is irrational. Moreover, there exist integer sequences $a_{n}^\pm$, $b_{n}^\pm$ such that \begin{equation} \label{rest asymp in p alg} % \lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm \right|^{1/n^2}\leq \P{\eta^\pm-\frac{3}{2}}<1 \end{equation} and \begin{equation} \label{asympbeta alg}% \lim_{n\rightarrow\infty}|b_n^\pm|^{1/n^2}\leq \P{\eta^\pm+\frac{3}{2}}. \end{equation} \end{theorem} \begin{theorem} \label{irrationality}% Let $q=1/p$ with $p\in\mathbb{N}\setminus\{1\}$, $q_i^{}=q^{r_i}$ and $p_i^{}=p^{r_i}$ with $r_i\in\mathbb{N}$, $i=1,2$ and $(r_1,r_2)=1$. Then the number $h^{\pm}$, defined as in \reff{q-versie}, is irrational. 
Moreover, there exist integer sequences $a_{n}^\pm$, $b_{n}^\pm$ such that \begin{equation} \label{rest asymp in p} % \lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm \right|^{1/n^2}\leq \P{\gamma^\pm(r_2 )-\frac{3}{2}}<1 \end{equation} and \begin{equation} \label{asympbeta}% \lim_{n\rightarrow\infty}|b_n^\pm|^{1/n^2}\leq \P{\gamma^\pm(r_2 )+\frac{3}{2}}. \end{equation} \end{theorem} \begin{remark} In fact, Theorem~\ref{irrationality} can also be applied if ${\rm gcd}(r_1,r_2)=\rho\not=1$. In that case just note that $h^{\pm}(q_1,q_2)=h^{\pm}(q'^{r_1'},q'^{r_2'})$ with $r_1'=r_1/\rho$, $r_2'=r_2/\rho$ and $q'=q^\rho$. \end{remark} As a side result of the irrationality, we also obtain an upper bound for the irrationality measure (Liouville-Roth number, order of approximation) for $h^{\pm}$. Recall that this measure is defined as \[\mu(x):=\inf\left\{t\ :\ \left|x-\frac{a}{b}\right|>\frac{1}{b^{t+\varepsilon}},\ \forall\varepsilon>0,\ \forall a,b \in\mathbb{Z},\ b \:\rm{sufficiently}\: \rm{large}\right\},\] see, e.g., \cite{bor2}. It is known that all rational numbers have irrationality measure 1, whereas irrational numbers have irrationality measure at least 2. Furthermore, if $b_nx-a_n\neq 0$ for all $n\in \mathbb{N}$, $\left|b_nx-a_n\right|=\mathcal{O}(b_n^{-s})$ with $0<s<1$ and $|b_n|<|b_{n+1}|<|b_n|^{1+o(1)}$, then the measure of irrationality satisfies $2\le\mu(x)\le 1+1/s$, see \cite[exercise 3, p.~376]{bor2}. 
Note that by \reff{rest asymp in p alg} and \reff{asympbeta alg}, respectively \reff{rest asymp in p} and \reff{asympbeta}, we get the asymptotic behaviour \begin{equation} \left|b_n^\pm h^{\pm}-a_n^\pm \right|=\mathcal{O}\Bigl((b_n^\pm)^{-\frac{3-2\,\eta^\pm}{3+2\,\eta^\pm }+\varepsilon}\Bigr),\qquad \mbox{for all } \varepsilon >0,\qquad n\to \infty, \end{equation} and for the special case \reff{specgeval} \begin{equation} \left|b_n^\pm h^{\pm}-a_n^\pm \right|=\mathcal{O}\Bigl((b_n^\pm)^{-\frac{3-2\,\gamma^\pm (r_2 )}{3+2\,\gamma^\pm (r_2 )}+\varepsilon}\Bigr),\qquad \mbox{for all } \varepsilon >0,\qquad n\to \infty, \end{equation} which then implies the following upper bound for $\mu(h^{\pm})$. \begin{corollary} \label{irrationality measure} % Under the assumptions of Theorem~\ref{irrationality alg} we have $2\le \mu(h^{\pm})\le \nu^\pm$; under the assumptions of Theorem~\ref{irrationality} we have $2\le \mu(h^{\pm})\le m^\pm (r_2)$ where \begin{equation} \label{upb} % m^\pm (r_2) = \left(\frac{3-2\,\gamma^\pm (r_2 )}{6}\right)^{-1}, \end{equation} with $m^\pm (r_2)\le \nu^\pm$, where $\nu^+=\frac{6\pi^2}{\pi^2-8}$ and $\nu^-=\frac{6\pi^2}{\pi^2-6}$. 
\end{corollary} \begin{table}[t] \begin{center} \begin{tabular}{|c|rcl|rcl|} \hline % $r_2$ & \multicolumn{3}{|c|}{$m^-(r_2)$}&\multicolumn{3}{|c|}{$m^+(r_2)$}\\ \hline & & & & & & \\[-2ex] 1 & $\frac{2\pi^2}{\pi^2-4}$&$\approx$&$ 3.362953864 $&$\frac{6\pi^2}{3\pi^2-14}$&$\approx$&$ 3.793858357 $ \\[0.5ex] 2 & $\frac{6\pi^2}{3\pi^2-20}$&$\approx$&$ 6.162845000 $&$\frac{2\pi^2}{\pi^2-8}$&$\approx$&$ 10.55796017 $ \\[0.5ex] 3 & $\frac{16\pi^2}{8\pi^2-57}$&$\approx$&$ 7.192005083 $&$\frac{96\pi^2}{48\pi^2-373}$&$\approx$&$ 9.405127174$ \\[0.5ex] 4 & $\frac{54\pi^2}{27\pi^2-205}$&$\approx$& $8.668909282 $&$\frac{54\pi^2}{27\pi^2-232}$&$\approx$&$ 15.45734242 $ \\[0.5ex] 5 & $\frac{1728\pi^2}{864\pi^2-6565}$&$\approx$&$ 8.690997496 $&$\frac{10368\pi^2}{5184\pi^2-42797}$&$\approx$&$ 12.22991528 $ \\[0.5ex] 6 & $\frac{300\pi^2}{150\pi^2-1211}$&$\approx$& $10.98899223 $&$\frac{150\pi^2}{75\pi^2-668}$&$\approx$&$ 20.49894619 $ \\[0.5ex] 7 & $\frac{86400\pi^2}{43200\pi^2-338681}$&$\approx$&$ 9.724867074 $&$\frac{103680\pi^2}{51840\pi^2-440701}$&$\approx$&$ 14.42473632 $ \\[0.5ex] 8 & $\frac{132300\pi^2}{66150\pi^2-534587}$&$\approx$& $11.03878708 $&$\frac{66150\pi^2}{33075\pi^2-294856}$&$\approx$&$ 20.67290169 $ \\[0.5ex] 9 & $\frac{940800\pi^2}{470400\pi^2-3801647}$&$\approx$&$ 11.04061736 $&$\frac{1128960\pi^2}{564480\pi^2-4937467}$&$\approx$&$ 17.58230823 $ \\[0.5ex] 10 & $\frac{71442\pi^2}{35721\pi^2-294473}$&$\approx$& $12.14040518 $&$\frac{71442\pi^2}{35721\pi^2-322256}$&$\approx$&$ 23.27373406 $ \\[0.5ex] \hline% \end{tabular} \caption{\label{tabel1} Some values of the upper bound $m^\pm(r_2)$ for the irrationality measure of $h^\pm$.} \end{center} \end{table} \begin{remark} In the case \reff{specgeval} with $r_2 =1$, we can sharpen the upper bound $m^\pm(r_2)$. We will discuss this in Section~\ref{finalremark}. 
In particular, we will show that $\mu(\zeta_q(1))\le \frac{2\pi^2}{\pi^2-2}\approx 2.508284762$, which was also found in \cite{Walter1}, and $\mu(\ln_q 2)\le \frac{6\pi^2}{3\pi^2-8}\approx 2.740438628$, which is a better upper bound than the one in \cite{Zudilin1}. \end{remark} \section{Rational approximation} \label{sectionRA} We first focus on the general case \reff{q-versie}; the special case \reff{specgeval} will be treated in Section~\ref{sectionspec}. We use the notation $q_1=s_1/t_1$ with $\gcd(s_1,t_1)=1$, and $p_1=1/q_1$. \subsection{Pad\'e approximation} To prove the irrationality of $h^{\pm}$ we will apply Lemma~\ref{irrlemma}. So we need a sequence of ``good'' rational approximations. To find these we will use the (well-known) idea of Pad\'e approximation applied to the Markov function \begin{equation} \label{Markov function} f(z):=\sum_{k=0}^\infty \frac{\q{k}}{z-\Q{k}} =\int_0^1\frac{q_1^{\log_{q_2}x}}{z-x}\, \frac{{\rm d}_{\Q{}}x}{x}, % \end{equation} where $\log_qx=\frac{\log x}{\log q}$ and the $q$-integration is defined as \begin{equation} \int_0^1 g(x)\,{\rm d}_q x:=\sum_{k=0}^\infty q^k g(q^k). \end{equation} So, we look for polynomials $P_n$ and $Q_n$ of degree $n$ such that \begin{equation} \label{PA} % Q_n(z)f(z)-P_n(z)=O\left(z^{-n-1}\right), \qquad z\rightarrow\infty. \end{equation} As is well known in the Pad\'e approximation theory (see, e.g., \cite{Nikishin}) the polynomials $Q_n$ then satisfy the orthogonality relations \begin{equation} \label{OP} % \int_0^1 Q_n(x)\, x^m \, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x=0,\qquad m=0,\dots,n-1. \end{equation} Little $q$-Jacobi polynomials satisfy \begin{equation} \label{defortholittleqjac} % \sum_{k=0}^\infty p_n(q^k;a,b|q)p_m(q^k;a,b|q)(aq)^k\frac{(bq;q)_k}{(q;q)_k}=0,\qquad m\neq n. \end{equation} Hence $Q_n$ are little $q$-Jacobi polynomials with a particular set of parameters, namely $Q_n(z)=p_n(z;\q{}\P{},1|\Q{})$, see, e.g., \cite[Section~3.12]{Koekoek}.
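Since the $q$-integral in \reff{OP} reduces to point masses $q_1^k$ at the nodes $x=q_2^k$, the orthogonality relations can be checked numerically straight from the terminating ${}_2\phi_1$ series for $Q_n$; a short sketch with arbitrarily chosen admissible parameters (helper names are ours):

```python
from math import prod

def qpoch(a, q, k):
    """q-Pochhammer symbol (a; q)_k = prod_{i=0}^{k-1} (1 - a q^i)."""
    return prod(1.0 - a * q**i for i in range(k))

def Q(n, z, q1, q2):
    """Q_n(z) as the terminating 2phi1(q2^{-n}, q1 q2^n; q1 | q2; q2 z) series."""
    return sum(
        qpoch(q2**-n, q2, k) * qpoch(q1 * q2**n, q2, k)
        / (qpoch(q1, q2, k) * qpoch(q2, q2, k)) * (q2 * z)**k
        for k in range(n + 1)
    )

q1, q2, n = 0.5, 1.0 / 3.0, 3
# discrete q-integral of Q_n(x) x^m against the weight q1^{log_{q2} x} x^{-1}:
# the measure reduces to the point masses q1^k at the nodes x = q2^k
moments = [
    sum(q1**k * q2**(k * m) * Q(n, q2**k, q1, q2) for k in range(120))
    for m in range(n)
]
print(moments)  # all entries vanish up to rounding, confirming the orthogonality
```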
\begin{lemma} The polynomials $Q_n$ have the explicit expressions \begin{eqnarray} \label{polynomialQ1} % Q_n(z) & = & % \sum_{k=0}^n \frac{(\P{n};\Q{})_k (\q{} \Q{n};\Q{})_k}{(\q{};\Q{})_{k}(\Q{};\Q{})_k}\,\Q{k}\, z^k,\\ \label{polynomialQ2} % & = & \frac{(\P{n};\Q{})_n}{(\q{};\Q{})_n}\sum_{k=0}^n \frac{(\P{n};\Q{})_k(\q{}\Q{n};\Q{})_k}{\left[(\Q{};\Q{})_{k}\right]^2}\,\Q{k}\, (\Q{}z;\Q{})_k. \end{eqnarray} \end{lemma} \begin{proof} From, e.g., \cite[Section 3.12]{Koekoek}, we know that the polynomials satisfying the orthogonality conditions \reff{OP} have the hypergeometric expression \[ Q_n(z)=\f{2}{\phi}{1}{\P{n},\q{}\Q{n}}{\q{}}{\Q{};\Q{}z}, \] which is \reff{polynomialQ1}. Next we apply the transformation formula \cite[(0.6.24)]{Koekoek} and find \[ Q_n(z)=\frac{(\P{n};\Q{})_n}{(\q{};\Q{})_n} \f{3}{\phi}{2}{\P{n},\q{}\Q{n},\Q{}z}{\Q{},0}{\Q{};\Q{}}, \] giving the expression \reff{polynomialQ2}. \end{proof} It is easily checked that the $P_n$ are connected with the polynomials $Q_n$ by the formula \begin{equation} \label{pade geeft Pn} % P_n(z)=\int_0^1\frac{Q_n(z)-Q_n(x)}{z-x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x. \end{equation} Indeed, then \begin{equation} \label{errorint} % Q_n(z)f(z)-P_n(z)=\int_0^1\frac{Q_n(x)}{z-x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x, \end{equation} and the conditions \reff{PA} are fulfilled. \begin{lemma} The polynomials $P_n$ have the explicit formulae \begin{eqnarray} \label{polynomialP1}% P_n(z) & = & \sum_{k=0}^n\frac{(\P{n};\Q{})_k (\q{}\Q{n};\Q{})_k}{(\q{};\Q{})_{k}(\Q{};\Q{})_k}\,\Q{k}\, \sum_{j=0}^{k-1}\frac{z^j}{1-\q{}\Q{k-j-1}},\\[1ex] \nonumber % & = & -\frac{(\P{n};\Q{})_n}{(\q{};\Q{})_n}\sum_{k=0}^n \frac{(\P{n};\Q{})_k (\q{}\Q{n};\Q{})_k}{\left[(\Q{};\Q{})_{k}\right]^2}\,\Q{k}\\ \label{polynomialP2} % & & \hspace{4cm} \times \sum_{j=1}^k\Q{j}\,(\Q{j+1}z;\Q{})_{k-j}\,\frac{(\Q{};\Q{})_{j-1}}{(\q{};\Q{})_j}. 
\end{eqnarray} \end{lemma} \begin{proof} First of all we mention that (for $k\in \mathbb{N}\cup \{0\}$) \[\int_0^1\frac{z^k-x^k}{z-x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x= \sum_{j=0}^{k-1}z^j\sum_{\ell=0}^{\infty}\Q{(k-1-j)\ell}\q{\ell}=\sum_{j=0}^{k-1}\frac{z^j}{1-\q{}\Q{k-j-1}}.\] So, applying \reff{polynomialQ1} to \reff{pade geeft Pn} we easily obtain \reff{polynomialP1}. Next, observe that \begin{equation} \label{z-x} % \frac{(\Q{} z;\Q{})_k-(\Q{} x;\Q{})_k}{z-x}=-\sum_{j=1}^k\Q{j}\,(\Q{j+1}z;\Q{})_{k-j}\,(\Q{} x;\Q{})_{j-1} \end{equation} which one can prove by induction. Moreover, using the $q$-binomial series \cite[Section~10.2]{Andrews}, \cite[Section~1.3]{Gasper} \[\sum_{n=0}^\infty\frac{(a;q)_n}{(q;q)_n}x^n=\frac{(ax;q)_\infty}{(x;q)_\infty},\qquad |q|<1,\ |x|<1,\] we get \begin{equation} \label{integral J} % \int_0^1(\Q{} x;\Q{})_{j-1}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x= (\Q{};\Q{})_{j-1}\sum_{\ell=0}^\infty \frac{(\Q{j};\Q{})_{\ell}}{(\Q{};\Q{})_{\ell}}\,\q{\ell} = \frac{(\Q{};\Q{})_{j-1}}{(\q{};\Q{})_j}. \end{equation} Combining \reff{pade geeft Pn}, \reff{polynomialQ2}, \reff{z-x} and \reff{integral J} we then finally establish \reff{polynomialP2}. \end{proof} \subsection{Rational approximants to $h^{\pm}$} \label{an bn integer} Notice that by the definition \reff{Markov function} of $f$ we have \[h^{\pm}=\mp \frac{\q{}}{\Q{}}\, f(\mp \P{}).\] Following the idea of Pad\'e approximation we could try to approximate $h^{\pm}$ by the sequence of rational numbers $\mp \q{} P_n(\mp \P{})/[\Q{}Q_n(\mp \P{})]$. However, we prefer the evaluation of $f$ at $\mp \P{n}$, which gives \begin{equation} \label{f in -p2n}% h^{\pm}=\sum_{k=1}^{n-1}\frac{\q{k}}{1\pm \Q{k}} \mp \left(\frac{\q{}}{\Q{}}\right)^n f(\mp \P{n}). \end{equation} In this way we can benefit from the fact that the finite sum on the right hand side of \reff{f in -p2n} already gives a good approximation for $h^{\pm}$. 
Moreover, the approximation \reff{PA} is useful for $z$ tending to infinity. Hence the evaluation at the point $\mp \P{n}$ makes more sense, especially since $n$ itself will tend to infinity too. So, a ``natural'' choice for rational approximations $a_n^{\pm}/b_n^{\pm}$ to $h^{\pm}$ is then given by the following expressions. \begin{definition} \label{defanbn} % Define \begin{align} \label{an} & a_n^\pm := e_n^\pm\left[\p{n}\, Q_n(\mp \P{n})\sum_{k=1}^{n-1}\frac{\q{k}}{1\pm \Q{k}}\mp \P{n}\,P_n(\mp \P{n})\right],\\[1ex] \label{bn} & b_n^\pm := e_n^\pm\ \p{n}\,Q_n(\mp \P{n}), \end{align} where $e_n^\pm$ are factors such that these are integer sequences. \end{definition} The following lemma gives a possible choice for the factors $e_n^\pm$. \begin{lemma} \label{lemma en alg} % By taking \begin{equation} \label{en alg} % e_n^\pm= % \left(\prod_{k=0}^{n-1}(t_1p_2^k-s_1)\right)^2\,s_1^n\,{\rm lcm}\left\{\left. p_2^k\pm1 \, \right| \, 1\le k \le n-1 \right\} \end{equation} the $a_n^\pm$ and $b_n^\pm$, defined as in \reff{an} and \reff{bn}, are integer sequences. \end{lemma} \begin{proof} It is not very convenient to prove that an expression is an integer when it depends on the rational $q_2$. So, we first write $Q_n(\mp \P{n})$ depending on the integer $\P{}$. By \reff{polynomialQ1} we get \begin{equation} \label{Qn in p} % Q_n(\mp \P{n}) = \sum_{k=0}^n \frac{(\P{n};\Q{})_k (\p{}\P{n};\P{})_k}{(\p{};\P{})_{k}(\P{};\P{})_k} \,\P{\frac{k^2-k}{2}}(\pm 1)^k. \end{equation} Notice that \[\frac{(\P{n};\Q{})_k}{(\P{};\P{})_k} = \frac{(\P{};\P{})_n}{(\P{};\P{})_{n-k}(\P{};\P{})_k} =\left[{n\atop k}\right]_{\P{}},\] which is an integer. Moreover, the possible denominators $t_1$ appear as often in the numerator as in the denominator of $Q_n(\mp \P{n})$. The factor $s_1^n$ in $e_n^\pm$ is needed because of the factor $p_1^n$ in $a_n^\pm$ and $b_n^\pm$.
So the only denominators in $Q_n(\mp \P{n})$ originate from $(\p{};\P{})_{k}$, and hence they are cancelled out by the first product in $e_n^\pm$. This already implies that $b_n^\pm$ is an integer. Obviously, then also \[e_n^\pm\,\p{n}\, Q_n(\mp \P{n})\sum_{k=1}^{n-1}\frac{\q{k}}{1\pm \Q{k}}=b_n^\pm \sum_{k=1}^{n-1}\frac{\q{k}}{1\pm \Q{k}}\] is an integer by the definition of $e_n^\pm$. So, what remains to prove is that $e_n^\pm\, \P{n}\,P_n(\mp \P{n})$ is an integer. By \reff{polynomialP1} we have \begin{equation} \label{Pn een}% P_n(\mp \P{n})=\sum_{k=0}^n\frac{(\P{n};\Q{})_k (\p{}\P{n};\P{})_k}{(\p{};\P{})_{k}(\P{};\P{})_k}\,(-1)^k\, \sum_{j=0}^{k-1}\frac{(\mp 1)^{j}\, \p{}\P{\frac{k^2+k}{2}+n(j-k)-j-1}}{\p{}\P{k-j-1}-1}. \end{equation} Since $(\P{};\P{})_k$ is a divisor of $(\P{n};\Q{})_k$, it is clear that by \reff{en alg} the only possible denominators in $e_n^\pm \, \P{n}\,P_n(\mp \P{n})$ are powers of $p_2$. The formula \reff{polynomialP2} leads to \begin{multline*} P_n(\mp \P{n})=\frac{(\P{n};\Q{})_n}{(\p{};\P{})_n}\sum_{k=0}^n \frac{(\P{n};\Q{})_k (\p{}\P{n};\P{})_k}{\left[(\P{};\P{})_{k}\right]^2}\,\p{n-k}\P{\binom{n-k}{2}}\,(-1)^{n-k}\\ \times \sum_{j=1}^k\left(\frac{\p{}}{\P{}}\right)^{j}\,(\mp\P{n-j-1};\Q{})_{k-j}\,\frac{(\P{};\P{})_{j-1}}{(\p{};\P{})_j}. \end{multline*} As in the case of $Q_n(\mp \P{n})$, the denominators coming from $(\p{};\P{})_n$ and $(\p{};\P{})_j$ are again cancelled out by the (squared) first product in $e_n^\pm$, while the negative powers $\P{-j}$, with $j\le n$, are compensated by the factor $\P{n}$. Hence $e_n^\pm\, \P{n}\,P_n(\mp \P{n})$ is indeed an integer. \end{proof} \begin{remark} Looking at \reff{Pn een} one could expect a power of $p_2$ in the denominator of $\P{n}\,P_n(\mp \P{n})$, of order $\P{n^2/2}$. This would totally ruin the asymptotics in the next section. However, Maple calculations showed the absence of a power of $p_2$ in the denominator. This is why we had to use an equivalent formula for $P_n$, which is given by \reff{polynomialP2}. \end{remark} \section{Irrationality of $h^{\pm}$} \label{sectionirr} In this section we look at the error term $b_n^\pm h^\pm - a_n^\pm$, where $a_n^\pm$ and $b_n^\pm$ are defined as in Definition~\ref{defanbn} and \reff{en alg}.
Using \reff{f in -p2n} and \reff{errorint} one easily sees that it has the integral representation \begin{align} \nonumber % b_n^\pm h^\pm - a_n^\pm & = \mp e_n^\pm\, \P{n} \Bigl[Q_n(\mp \P{n})f(\mp \P{n})-P_n(\mp \P{n})\Bigr]\\[1ex] & = e_n^\pm \, \P{n} \int_0^1\frac{Q_n(x)}{\P{n}\pm x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x. \end{align} We will show that this expression is different from zero for all $n\in \mathbb{N}$ and obtain its asymptotic behaviour. Here we study \begin{equation} \label{Rn} % R_n^\pm:=\int_0^1\frac{Q_n(x)}{\P{n}\pm x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}}x \end{equation} and $e_n^\pm$ separately. \subsection{Asymptotic behaviour of $R_n^\pm$} % \label{restterm asymp} We will need the following very general lemma for sequences of polynomials with uniformly bounded zeros. This can be found in, e.g., \cite[Lemma 3]{Walter1}, but we include a short proof for completeness. \begin{lemma} \label{hulplemma} % Let $\{\pi_n\}_{n\in \mathbb{N}}$ be a sequence of monic polynomials for which $\textrm{deg}(\pi_n)=n$ and the zeros $x_{j,n}$ satisfy $|x_{j,n}|\le M$, with $M$ independent of $n$. Then \[\lim_{n\to \infty} \left|\pi_n\left(cx^n\right)\right|^{1/n^2}=|x|, \qquad |x|>1,\ c\in\mathbb{C}\setminus\{0\}.\] \end{lemma} \begin{proof} Since $|x|>1$, for large $n$ we easily get \[0\le |cx^n|-M\le |cx^n-x_{j,n}|\le |cx^n|+M, \qquad j=1,\ldots,n.\] This implies \[\left(|cx^n|-M\right)^n\le \left|\pi_n\left(cx^n\right)\right|\le \left(|cx^n|+M\right)^n\] and \[|x|\left(|c|-\frac{M}{|x|^n}\right)^{1/n}\le \left|\pi_n\left(cx^n\right)\right|^{1/n^2}\le |x|\left(|c|+\frac{M}{|x|^n}\right)^{1/n}.\] The lemma then follows by taking limits. \end{proof} For $R_n^\pm$, defined as in \reff{Rn}, we have the following asymptotic result. Here we use reasoning similar to that in \cite{Walter1} for the irrationality of $\zeta_q(1)$.
\begin{lemma} \label{restterm asymp lemma} % Let $Q_n$ be the polynomials \reff{polynomialQ1} satisfying the orthogonality relations \reff{OP}. Then $R_n^\pm$ is different from zero for all $n$ and \begin{equation} \label{Rnasymp} \lim\limits_{n\to \infty} \left|R_n^\pm\right|^{1/n^2}=\P{-3/2}. \end{equation} \end{lemma} \begin{proof} First of all observe that \[ Q_n(\mp \P{n})\,R_n^\pm = \mp \int_0^1 Q_n(x)\,\frac{Q_n(\mp \P{n})-Q_n(x)}{\mp \P{n}-x} \, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}} x + \int_0^1 \frac{Q_n^2(x)}{\P{n}\pm x}\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}} x. \] The first integral on the right hand side vanishes because of the orthogonality relations \reff{OP} for the polynomial $Q_n$. Furthermore, note that $0\le x\le 1$ so that \begin{equation} \label{afschatting} % 0<\frac{1}{\P{n}+1}\int_0^1 Q_n^2(x)\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}} x \le Q_n(\mp \P{n})\,R_n^\pm \le \frac{1}{\P{n}-1}\int_0^1 Q_n^2(x)\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}} x. \end{equation} This already proves that $R_n^\pm\not=0$. Next, from \cite[(3.12.2)]{Koekoek} we get \[\int_0^1 Q_n^2(x)\, q_1^{\log_{q_2}x}\,x^{-1}\, {\rm d}_{\Q{}} x= \frac{\q{n}}{1-\q{}\Q{2n}}\left(\frac{(\Q{};\Q{})_n}{(\q{};\Q{})_n}\right)^2.\] Applying this on \reff{afschatting}, we easily establish \begin{equation} \label{R naar Q} % \lim\limits_{n\to \infty}\left|Q_n(\mp \P{n})\,R_n^\pm\right|^{1/n^2}=1. \end{equation} Now write $Q_n(x)=\kappa_n\, \hat Q_n(x)$ where $\hat Q_n$ is monic. From \reff{polynomialQ1} we get that the leading coefficient $\kappa_n$ has the expression \[\kappa_n=\frac{(\P{n};\Q{})_n(\q{}\Q{n};\Q{})_n}{(\q{};\Q{})_{n}(\Q{};\Q{})_{n}}\,\Q{n}.\] Since $\prod_{i=1}^n \P{i-1}\le |(\P{n};\Q{})_n| \le \prod_{i=1}^n \P{i}$, this gives the asymptotic behaviour \begin{equation} \label{kappa} % \lim_{n\to \infty}|\kappa_n|^{1/n^2}=\P{1/2}. 
\end{equation} Since the $Q_n$ are orthogonal polynomials with respect to a positive measure on $[0,1]$, their zeros are all in $[0,1]$. From Lemma~\ref{hulplemma} we then also get \begin{equation} \label{hat Q} % \lim\limits_{n\to \infty}\left|\hat Q_n\left(\mp \P{n}\right)\right|^{1/n^2}=\P{}. \end{equation} Applying \reff{kappa} and \reff{hat Q} to \reff{R naar Q} then completes the proof. \end{proof} \subsection{Asymptotic behaviour of $e_n^\pm$} We obviously have the asymptotic properties \begin{align} \label{enasymp alg 1} % & \lim_{n\rightarrow\infty}\left(\prod_{k=0}^{n-1}(t_1p_2^k-s_1)\right)^{1/n^2}=p_2^{1/2} ,\\[1ex] \label{enasymp alg 2} % & \lim_{n\rightarrow\infty}\left(s_1^n\right)^{1/n^2}=1 ,\\[1ex] \label{enasymp alg 3} % & \lim_{n\rightarrow\infty}\left({\rm lcm}\left\{\left. p_2^k-1 \, \right| \, 1\le k \le n-1 \right\}\right)^{1/n^2}\leq p_2^{3/\pi^2} ,\\[1ex] \label{enasymp alg 4} % & \lim_{n\rightarrow\infty}\left({\rm lcm}\left\{\left. p_2^k+1 \, \right| \, 1\le k \le n-1 \right\}\right)^{1/n^2}\leq p_2^{4/\pi^2}, \end{align} where the latter two are well-known properties of the least common multiple that can easily be deduced from the asymptotic results given in \reff{Azonderb}. This leads us to the following asymptotic behaviour of $e_n^\pm$. \begin{corollary} \label{en asymp alg} % For $e_n^\pm$ defined as in \reff{en alg} we have \begin{equation} \label{en asymp formule alg} % \lim_{n\rightarrow\infty}\left|e_n^\pm\right|^{1/n^2} \le \P{\eta^\pm}, \end{equation} where $\eta^\pm$ is defined as in \reff{eta}. As a result we now have \begin{equation} \label{restterm asymp lemma nieuw} \lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm \right|^{1/n^2}\le \P{\eta^\pm-\frac{3}{2}}. \end{equation} \end{corollary} \subsection{Proof of Theorem~\ref{irrationality alg}} In the previous sections we defined integer sequences $a_n^\pm$ and $b_n^\pm$ and managed to find the asymptotic behaviour of $b_n^\pm h^{\pm}-a_n^\pm$.
Putting these results together, we can now prove Theorem~\ref{irrationality alg}. \begin{varproof}{\bf of Theorem~\ref{irrationality alg}.} In Lemma~\ref{lemma en alg} we made sure that $a_n^\pm$ and $b_n^\pm$, defined as in \reff{an} and \reff{bn}, are integer sequences. Note that by \reff{kappa}, \reff{hat Q} and \reff{en asymp formule alg} we then get \begin{equation} \lim_{n\rightarrow\infty} |b_n^\pm|^{1/n^2}\leq \P{\eta^\pm+\frac{3}{2}}. \end{equation} Lemma~\ref{restterm asymp lemma} assures us that $b_n^\pm h^{\pm}-a_n^\pm\neq 0$ for all $n\in \mathbb{N}$ and since $\eta^\pm<\frac32$, \reff{restterm asymp lemma nieuw} guarantees that $\lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm\right|=0$. So, all the conditions of Lemma~\ref{irrlemma} are fulfilled and $h^{\pm}$ is irrational. \end{varproof} \section{Improvements on the results in the special case \reff{specgeval}} \label{sectionspec} Throughout this section we consider the special case given by \reff{specgeval}. The only difference from the general case is that we can (in some cases considerably) improve the factor $e_n^\pm$, which is needed to make the approximation sequences into integer sequences. The following lemma gives the enhanced formula for $e_n^\pm$, and can be seen as an analogue of Lemma~\ref{lemma en alg}. \begin{lemma} \label{lemma en} % By taking \begin{multline} \label{en} % e_n^\pm= % {\rm lcm}\left\{\left. {\rm denom} \left((\p{}\P{n};\P{})_k\,(\p{};\P{})_{k}^{-1}\right) \, \right| \, 0\le k \le n-1 \right\}\\[1ex] \times {\rm lcm}\left\{\left. \P{j}\pm 1, \, \p{}\P{k}-1 \, \right| \, 1\le j\le n-1, \, 0\le k \le n-1 \right\} \end{multline} the $a_n^\pm$ and $b_n^\pm$, defined as in \reff{an} and \reff{bn}, are integer sequences. \end{lemma} \begin{proof} The proof is completely analogous to the proof of Lemma~\ref{lemma en alg}. There is no factor $s_1^n$ needed since in this case $p_1$ is an integer.
\end{proof} \subsection{Asymptotic behaviour of $e_n^\pm$} In order to obtain some asymptotic results for the quantities $e_n^\pm$, see \reff{en}, we will use the cyclotomic polynomials \begin{equation} \label{cyclotomic def} % \Phi_n(x)=\prod_{\substack{k=1,\\(k,n)=1}}^n\left(x-e^{\frac{2\pi ik}{n}}\right). \end{equation} Their degree is given by Euler's totient function $\phi(n)$, the number of positive integers $\le n$ that are coprime with $n$. It is well-known \cite[Section~4.8]{Still} that \begin{equation} \label{nice P} % x^n-1=\prod_{d|n}\Phi_d(x),\qquad n=\sum_{d|n}\phi (d),% \end{equation} and that every cyclotomic polynomial is monic, has integer coefficients and is irreducible in $\mathbb{Q}[x]$. Furthermore, some interesting asymptotic properties are \begin{align} \label{Azonderb} % & \lim_{n\to \infty}\frac{1}{n^2} \sum_{j=0}^n\phi(aj) =\frac{3a}{\pi^2}\prod_{\substack{\varpi|a\\ \varpi\: {\rm prime}}}\frac{\varpi}{\varpi+1},\\[1ex] \label{Ametb} % & \lim_{n\to \infty}\frac{1}{n^2} \sum_{j=0}^n\phi(aj+b) =\frac{3a}{\pi^2}\prod_{\substack{\varpi |a\\\varpi \: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1},\\[1ex] \label{A2metb} % & \lim_{n\to \infty}\frac{1}{n^2} \sum_{j=0}^n\phi(2(aj+b)) =\frac{4a}{\pi^2}\prod_{\substack{\varpi |a,\, \varpi\ge 3\\\varpi \: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}, \end{align} where $(a,b)=1$, see~\cite{Bavencoffe,Bezivin,Matala}. They imply the following results. \begin{lemma} \label{lemmaLCM-} % Let $r_1 ,r_2 \in\mathbb{N}$ and $(r_1 ,r_2 )=1$. Then \begin{equation} \label{AsLCM-} % \lim_{n\to \infty}\left[{\rm lcm}\left\{\left.
\P{j}- 1, \, \p{}\P{k}-1 \, \right| \, 1\le j\le n-1, \, 0\le k \le n-1 \right\}\right]^{1/n^2}\le \P{\theta^{-} (r_2 )}, \end{equation} where \[ \theta^-(r_2 )=\gamma^-(r_2 ) - \frac{3}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}\sum_{\substack{l=1\\ (l,r_2 )=1}}^{r_2 }\frac{1}{l^2} \] with $0<\theta^{-}(r_2 )<\frac{3}{\pi^2}+\frac{1}{2}$ and $\gamma^-(r_2 )$ defined as in \reff{gamma-}. \end{lemma} \begin{proof} By \reff{nice P} and Lemma~\ref{lemmaA1} in the appendix we have that \begin{equation} \label{multipleM-} % M_n^-:=\prod_{d=1}^{n-1} \Phi_d(\P{})=\prod_{\substack{d|r_2 k \mbox{\scriptsize{ for}} \\\mbox{\scriptsize{some} } 1\le k\le n-1}}\Phi_d(p) \end{equation} is a common multiple of all $\P{j}-1$, $j=1,\ldots,n-1$. Next, for each $1\le l\le r_2-1$, $(l,r_2 )=1$ we define $1\le b_l \le r_2 -1$ by $b_l\equiv r_1 /l \mbox{ mod } r_2$. Notice that if $d\in\mathbb{N}$ satisfies $dl=r_2 k+r_1 $ for some $k\in\mathbb{Z}$, then $d\equiv b_l \mbox{ mod } r_2$ and $(l,r_2 )=1$ since $(r_1 ,r_2 )=1$. Hence \begin{equation} \label{Mn} % \mathcal{M}_n:= \prod_{\substack{d|r_2 k+r_1 \mbox{\scriptsize{ for}} \\\mbox{\scriptsize{some} } 0\le k\le n-1}}\Phi_d(p)=\prod_{\substack{l=1\\(l,r_2 )=1}}^{r_2 }\prod_{j=0}^ {\left\lfloor{\frac{n-1}{l}-\frac{lb_l-r_1 }{lr_2 }}\right\rfloor}\Phi_{jr_2 +b_l}(p) \end{equation} is a common multiple of all $\p{}\P{k}-1$, $k=0,\ldots,n-1$. So, $M_n^-\, \mathcal{M}_n$ is a multiple of $e_n^-$. However, there are some factors of the form $\Phi_{jr_2 +b_l}(p)$ appearing in both $\mathcal{M}_n$ and $M_n^-$. Looking at \reff{multipleM-}, since $(r_2 ,b_l)=1$ this means that $jr_2 +b_l$ should be a divisor of a natural number $k\leq n-1$. 
So, if $n$ is large enough the factor $\Phi_{jr_2 +b_l}(p)$ of $\mathcal{M}_n$ is also present in $M_n^-$ for $j$ from 0 up to $\lfloor\frac{n-1}{r_2 }\rfloor -1$ $(\leq\left\lfloor{\frac{n-1}{l}-\frac{lb_l-r_1 }{lr_2 }}\right\rfloor)$, meaning that they have the common factor \begin{equation} \label{common} % C_n^-:=\prod_{\substack{l=1\\(l,r_2 )=1}}^{r_2 } \prod_{j=0}^{\lfloor{\frac{n-1}{r_2 }}\rfloor -1}\Phi_{jr_2 +b_l}(p). \end{equation} We proved that $M_n^-\, \mathcal{M}_n \, /\, C_n^-$ is a multiple of $e_n^-$. Now we look at its asymptotic behaviour. Applying \reff{Azonderb} on \reff{multipleM-} we easily establish \begin{equation} \label{AsM-} % \log_{\P{}} \left[\lim_{n\to \infty} \left(M_n^-\right)^{1/n^2} \right] = \frac{3}{\pi^2}. \end{equation} Next, recall that $(r_2{},b_l)=1$. So, by \reff{Ametb} we also get \begin{align} \label{AsM} % & \log_{\P{}} \left[\lim_{n\to \infty} \left(\mathcal{M}_n\right)^{1/n^2} \right] = \frac{3}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}\sum_{\substack{l=1\\ (l,r_2 )=1}}^{r_2 }\frac{1}{l^2}<\frac{1}{2},\\[1ex] \label{AsC-} % & \log_{\P{}} \left[\lim_{n\to \infty} \left(C_n^-\right)^{1/n^2} \right] = \frac{3}{\pi^2}\frac{\phi(r_2)}{r_2^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}=\frac{3}{\pi^2}\frac{1}{r_2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi}{\varpi+1}, \end{align} where the last equality follows from the well-known fact \begin{equation} \label{propPhi} \frac{\phi(m)}{m}=\prod_{\substack{\varpi|m\\\varpi\: {\rm prime}}}\frac{\varpi-1}{\varpi}. \end{equation} Combining \reff{AsM-}, \reff{AsM} and \reff{AsC-} we then finally obtain \reff{AsLCM-}. \end{proof} \begin{remark} The common multiple $\mathcal{M}_n$ of all $\p{}\P{k}-1$, $k=0,\ldots,n-1$ and its asymptotic behaviour were discussed already in \cite[Lemma~2]{Matala}. 
\end{remark} \begin{lemma} \label{lemmaLCM+} % Let $r_1 ,r_2 \in\mathbb{N}$ and $(r_1 ,r_2 )=1$. Then \begin{equation} \label{AsLCM+} % \lim_{n\to \infty}\left[{\rm lcm}\left\{\left. \P{j}+ 1, \, \p{}\P{k}-1 \, \right| \, 1\le j\le n-1, \, 0\le k \le n-1 \right\}\right]^{1/n^2}\le \P{\theta^+ (r_2 )}, \end{equation} where \[ \theta^+(r_2 )=\gamma^+(r_2 ) - \frac{3}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1}\sum_{\substack{l=1\\ (l,r_2 )=1}}^{r_2 }\frac{1}{l^2} \] with $0<\theta^+(r_2 ) < \frac{4}{\pi^2}+\frac{1}{2}$ and $\gamma^+(r_2 )$ defined as in \reff{gamma+}. \end{lemma} \begin{proof} Recall from the proof of Lemma~\ref{lemmaLCM-} that $\mathcal{M}_n$, defined as in \reff{Mn}, is a common multiple of all $\p{}\P{k}-1$, $k=0,\ldots,n-1$. By the property $x^j+1=(x^{2j}-1)/(x^j-1)$, \reff{nice P} and Lemma~\ref{lemmaA2} in the appendix we also have that \begin{equation} \label{multipleM+} % M_n^+:=\prod_{d=1}^{n-1} \Phi_{2d}(\P{})=\prod_{\substack{d|2r_2 k,\, d \nmid \, r_2 k \mbox{\scriptsize{ for}} \\\mbox{\scriptsize{some} } 1\le k\le n-1}}\Phi_d(p) \end{equation} is a common multiple of all $\P{j}+1$, $j=1,\ldots,n-1$. So, $M_n^+\, \mathcal{M}_n$ is a multiple of $e_n^+$. If $r_2{}$ is even, then the index $jr_2{}+b_l$ is odd since $(r_2{},b_l)=1$. So, in this case there are no factors of the form $\Phi_{jr_2 +b_l}(p)$ appearing in both $\mathcal{M}_n$ and $M_n^+$. Now suppose that $r_2{}$ is odd. Then \[ M_n^+=\prod_{\substack{d|r_2 k \mbox{\scriptsize{ for}} \\\mbox{\scriptsize{some} } 1\le k\le n-1}}\Phi_{2d}(p). \] This implies that for a factor $\Phi_{jr_2 +b_l}(p)$ of $\mathcal{M}_n$ also appearing in $M_n^+$, we should have $j\equiv b_l \mod 2$ and $\frac{jr_2+b_l}{2}$ should be a divisor of a natural number $k\leq n-1$. 
So, in the case that $r_2{}$ is odd, $\mathcal{M}_n$ and $M_n^+$ have the common factor \begin{equation} % \label{Cn+} C_n^+:=\prod_{\substack{l=1\\(l,r_2 )=1}}^{r_2 } \prod_{\substack{j=0\\j\equiv b_l \mbox{\,{\scriptsize mod}\,} 2}}^{\min\left(\lfloor \frac{2(n-1)}{r_2 }\rfloor -1 ,\left\lfloor{\frac{n-1}{l}-\frac{lb_l-r_1 }{lr_2 }}\right\rfloor \right)}\Phi_{jr_2 +b_l}(p), \end{equation} and $M_n^+\, \mathcal{M}_n \, /\, C_n^+$ is a multiple of $e_n^+$. Now we are interested in the asymptotic behaviour of $M_n^+$ and $C_n^+$. Applying \reff{Azonderb} to \reff{multipleM+} we easily obtain \begin{equation} \label{AsM+} % \log_{\P{}} \left[\lim_{n\to \infty} \left(M_n^+\right)^{1/n^2} \right] = \frac{4}{\pi^2}<\frac{1}{2}. \end{equation} Next, we suppose $r_2{}$ is odd and look at \reff{Cn+}. Note that for $n$ large enough we have $\lfloor \frac{2(n-1)}{r_2 }\rfloor -1 > \left\lfloor{\frac{n-1}{l}-\frac{lb_l-r_1 }{lr_2 }}\right\rfloor$ if and only if $l> \frac{r_2}{2}$. Moreover, if $b_l$ is even, then $j=2i$ is even and we write $jr_2 +b_l=2(ir_2 +b_l/2)$. On the other hand, if $b_l$ is odd then $j=2i+1$ is odd and we write $jr_2 +b_l=2(ir_2 +(b_l+r_2{})/2)$. By \reff{A2metb} and \reff{propPhi} we then get \begin{align} \nonumber \log_{\P{}} \left[\lim_{n\to \infty} \left(C_n^+\right)^{1/n^2} \right] & = \frac{\phi(r_2)}{2r_2^2}\frac{4}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1} +\frac{1}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1} \sum_{\substack{l=\lceil \frac{r_2{}}{2} \rceil \\(l,r_2 )=1}}^{r_2 } \frac{1}{l^2}\\ \label{AsC+} % & =\frac{1}{r_2}\frac{2}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi}{\varpi+1} +\frac{1}{\pi^2}\prod_{\substack{\varpi|r_2\\ \varpi\: {\rm prime}}}\frac{\varpi^2}{\varpi^2-1} \sum_{\substack{l=\lceil \frac{r_2{}}{2} \rceil \\(l,r_2 )=1}}^{r_2 } \frac{1}{l^2}.
\end{align} Combining \reff{AsM+}, \reff{AsM} and \reff{AsC+} we then finally obtain \reff{AsLCM+}. \end{proof} By \reff{nice P} we obtain \[ \frac{(\p{}\P{n};\P{})_k}{(\p{};\P{})_{k}} = \frac{\prod_{i=n}^{n+k-1} \prod_{d|r_2 i+r_1} \Phi_d (p)}{\prod_{i=0}^{k-1} \prod_{d|r_2 i+r_1} \Phi_d (p)}. \] A closer look at this expression shows that its denominator is a divisor of $\mathcal{M}_n$, defined as in \reff{Mn}, for each $0\le k\le n-1$. As a corollary of Lemma~\ref{lemmaLCM-}, Lemma~\ref{lemmaLCM+} and \reff{AsM} we then get the following asymptotic behaviour for $e_n^\pm$. \begin{corollary} \label{en asymp} % Let $r_1 ,r_2 \in\mathbb{N}$ and $(r_1 ,r_2 )=1$. For $e_n^\pm$ defined as in \reff{en} we have \begin{equation} \label{en asymp formule} % \lim_{n\rightarrow\infty}\left|e_n^\pm\right|^{1/n^2} \le \P{\gamma^\pm(r_2 )}, \end{equation} where $\gamma^-(r_2 )$ and $\gamma^+(r_2 )$ are defined as in \reff{gamma-} and \reff{gamma+}. As a result we now have \begin{equation} % \label{errorasympspeciaal} \lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm \right|^{1/n^2}\le \P{\gamma^\pm(r_2 )-\frac{3}{2}}. \end{equation} \end{corollary} \subsection{Proof of Theorem~\ref{irrationality}} In the previous sections we defined integer sequences $a_n^\pm$ and $b_n^\pm$ and managed to find the asymptotic behaviour of $b_n^\pm h^{\pm}-a_n^\pm$. Putting these results together, we can now prove Theorem~\ref{irrationality}. \begin{varproof}{\bf of Theorem~\ref{irrationality}.} In Lemma~\ref{lemma en} we made sure that $a_n^\pm$ and $b_n^\pm$, defined as in \reff{an} and \reff{bn}, are integer sequences. Note that by \reff{kappa}, \reff{hat Q} and \reff{en asymp formule} we then get \begin{equation} \label{bn asymp lemma nieuw speciaal} \lim_{n\rightarrow\infty} |b_n^\pm|^{1/n^2}\leq \P{\gamma^\pm(r_2 )+\frac{3}{2}}.
\end{equation} Lemma~\ref{restterm asymp lemma} assures us that $b_n^\pm h^{\pm}-a_n^\pm\neq 0$ for all $n\in \mathbb{N}$ and since $\gamma^\pm(r_2 )<\frac32$, \reff{errorasympspeciaal} guarantees that $\lim_{n\rightarrow\infty}\left|b_n^\pm h^{\pm}-a_n^\pm\right|=0$. So, all the conditions of Lemma~\ref{irrlemma} are fulfilled and $h^{\pm}$ is irrational. Obviously, the irrationality also follows from the result in the general case, as given in Theorem~\ref{irrationality alg}. The sharper asymptotics for this case were only necessary to find the upper bounds for the irrationality measure, as proposed in Corollary~\ref{irrationality measure}. \end{varproof} \section{Final remark on the special case \reff{specgeval}} \label{finalremark} In Lemma~\ref{lemma en} we proposed a possible factor $e_n^\pm$ such that $a_{n}^\pm$ and $b_{n}^\pm$, defined as in \reff{an} and \reff{bn}, are integers. It seems that this choice is (asymptotically) not the optimal one. Empirically (using Maple) we observed that $e_{n}^\pm$ can be replaced by \begin{align} \label{xi-}\xi_{n}^- & = \frac{(\p{};\P{})_n}{(\P{};\P{})_{n-1}}\,{\rm lcm}\left\{\left. \P{j}- 1, \, \p{}\P{k}-1 \, \right| \, 1\le j\le n-1, \, 0\le k \le n-1 \right\},\\ \nonumber \xi_{n}^+ & = \frac{(\p{};\P{})_n}{(\P{};\P{})_{n}}\,\prod_{i=1}^{n}\frac{\P{m_i}-1}{p^{m_i}-1} \ \prod_{\substack{d|r_2,\, d\not=2\\ d \mbox{ \scriptsize{prime}}}} \, \prod_{k=1}^\infty \Bigl(\Phi_{d^k}(p)\Bigr)^{\left\lfloor \frac{n}{d^k} \right\rfloor} \\ % \label{xi+}& \qquad \qquad \qquad \qquad \times{\rm lcm}\left\{\left. \P{j}+ 1, \, \p{}\P{k}-1 \, \right| \, 1\le j\le n-1, \, 0\le k \le n-1 \right\}, \end{align} where $m_i$ is the largest odd divisor of $i$ and $\lfloor x\rfloor$ is the integer part of $x$. Then the $a_{n}^\pm$ and $b_{n}^\pm$ are still integers and ${\rm gcd} (a_{n}^\pm,b_{n}^\pm)$ turns out to be very small and asymptotically irrelevant. Up to now we do not have an exact proof for this, except for the case $r_2 =1$.
Having in mind the proof of Lemma~\ref{lemma en}, in this case the denominator of \[\frac{(\p{}p^n;p)_k}{(\p{};p)_k}=\left[{n+k+r_1-1\atop n}\right]_{p}\left[{n+r_1-1\atop n}\right]_{p}^{-1}=\left[{n+k+r_1-1\atop n}\right]_{p}\frac{(p;p)_n}{(\p{};p)_{n}}\] is clearly cancelled out by the first factor of \reff{xi-} and \reff{xi+}. Now define $\delta^-(r_2 ):=\theta^-(r_2 )$ and $\delta^+(r_2 ):=\theta^+(r_2 )+\frac{1}{3}-\frac{1}{3r_2}$, where $\theta^\pm(r_2 )$ is as in Lemma~\ref{lemmaLCM-} and Lemma~\ref{lemmaLCM+}. Following the arguments of this paper, with the adjustment mentioned above we could prove the existence of integer sequences $\alpha_n^\pm$, $\beta_n^\pm$ such that \begin{align*} & \lim_{n\rightarrow\infty}\left|\xi_n^\pm\right|^{1/n^2} \le \P{\delta^\pm(r_2 )}, \\ & \left|\beta_{n}^\pm h^\pm -\alpha_{n}^\pm\right| =\mathcal{O}\Bigl((\beta_n^\pm)^{-\frac{3-2\,\delta^\pm(r_2 )}{3+2\,\delta^\pm(r_2 )}+\varepsilon}\Bigr),\qquad \mbox{for all } \varepsilon >0,\qquad n\to \infty, \end{align*} implying \begin{equation} \label{upbfinal} \mu\left(h^\pm\right)\le \chi^\pm(r_2):=\left(\frac{3-2\,\delta^\pm(r_2 )}{6}\right)^{-1}. \end{equation} Comparing Table~\ref{tabel1} with Table~\ref{tabel2} we see that this would considerably improve the upper bound for the irrationality measure. 
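The decimal approximations in Table~\ref{tabel2} are straightforward to reproduce from the closed forms; the following sketch checks the first two rows in floating-point arithmetic:

```python
import math

pi2 = math.pi ** 2

# closed forms from the first two rows of the table, paired with the
# printed 9-decimal approximations
rows = [
    (2 * pi2 / (pi2 - 2),      2.508284762),   # chi^-(1)
    (6 * pi2 / (3 * pi2 - 8),  2.740438628),   # chi^+(1)
    (2 * pi2 / (pi2 - 4),      3.362953864),   # chi^-(2)
    (9 * pi2 / (4 * pi2 - 24), 5.738728718),   # chi^+(2)
]
for exact, printed in rows:
    assert abs(exact - printed) < 1e-8
```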
\begin{table}[t] \begin{center} \begin{tabular}{|c|rcl|rcl|} \hline % $r_2$ & \multicolumn{3}{|c|}{$\chi^-(r_2)$}&\multicolumn{3}{|c|}{$\chi^+(r_2)$}\\ \hline & & & & & & \\[-2ex] 1 & $\frac{2\pi^2}{\pi^2-2}$&$\approx$&$ 2.508284762 $&$\frac{6\pi^2}{3\pi^2-8}$&$\approx$&$ 2.740438628 $ \\[0.5ex] 2 & $\frac{2\pi^2}{\pi^2-4}$&$\approx$&$ 3.362953864 $&$\frac{9\pi^2}{4\pi^2-24}$&$\approx$&$ 5.738728718 $ \\[0.5ex] 3 & $\frac{32\pi^2}{16\pi^2-69}$&$\approx$&$ 3.552067296 $&$\frac{432\pi^2}{184\pi^2-1071}$&$\approx$&$ 5.722990389 $ \\[0.5ex] 4 & $\frac{54\pi^2}{27\pi^2-125}$&$\approx$& $3.767042717 $&$\frac{108\pi^2}{45\pi^2-304}$&$\approx$&$ 7.606512209 $ \\[0.5ex] 5 & $\frac{3456\pi^2}{1728\pi^2-8005}$&$\approx$&$ 3.769124031 $&$\frac{25920\pi^2}{10656\pi^2-68555}$&$\approx$&$ 6.986661787 $ \\[0.5ex] 6 & $\frac{300\pi^2}{150\pi^2-743}$&$\approx$& $4.015077389 $&$\frac{675\pi^2}{275\pi^2-1953}$&$\approx$&$ 8.752624192 $ \\[0.5ex] 7 & $\frac{172800\pi^2}{86400\pi^2-414281}$&$\approx$&$ 3.889740382 $&$\frac{1814400\pi^2}{734400\pi^2-4949917}$&$\approx$&$ 7.791520131 $ \\[0.5ex] 8 & $\frac{132300\pi^2}{66150\pi^2-327931}$&$\approx$& $4.018388861 $&$\frac{264600\pi^2}{106575\pi^2-766112}$&$\approx$&$ 9.139383255 $ \\[0.5ex] 9 & $\frac{1881600\pi^2}{940800\pi^2-4664047}$&$\approx$&$ 4.018510114 $&$\frac{25401600\pi^2}{10192000\pi^2-71413173}$&$\approx$&$ 8.592266790 $ \\[0.5ex] 10 & $\frac{71442\pi^2}{35721\pi^2-180973}$&$\approx$& $4.109498873 $&$\frac{178605\pi^2}{71442\pi^2-521890}$&$\approx$&$ 9.621306357 $ \\[0.5ex] \hline% \end{tabular} \caption{\label{tabel2} Some values of $\chi^\pm(r_2)$, see \reff{upbfinal}.} \end{center} \end{table} \section*{Acknowledgements} We would like to thank W. Van Assche for careful reading and useful discussions. This work was supported by INTAS project 03-51-6637, by FWO projects G.0184.02 and G.0455.05 and by OT/04/21 of K.U.Leuven. The first author is a postdoctoral researcher at the K.U.Leuven (Belgium).
\section{Introduction} \label{sec:introduction} The core problem when studying dynamical systems is to understand how they evolve as time progresses. For example, we want to understand the equilibrium of a stochastic process. \emph{Markov semigroup theory} mathematically describes the time evolution of dynamical systems. With a Markov semigroup operator $\{\mathsf{P}_t\}_{t\geq 0}$ acting on real-valued functions defined on some Polish space $\Omega$, for example, one can ask: is there an invariant measure $\mu$ such that \[ \int_{x\in\Omega} \mathsf{P}_t f(x) \, \mu(\mathrm{d}x) = \int_{x\in\Omega} f(x) \, \mu(\mathrm{d}x) \] holds for all such functions $f$? (In the following we use the shorthand $\int f \,\mathrm{d}\mu$ for $\int_{x\in\Omega} f(x) \,\mu(\mathrm{d}x)$.) If such an invariant measure $\mu$ on $\Omega$ does exist, how fast does the system evolution $\mathsf{P}_t f$ converge to the constant equilibrium $\int f \,\mathrm{d}\mu$ as $t$ goes to infinity? To address these problems, functional inequalities like \emph{spectral gap inequalities} (also called \emph{Poincar\'e inequalities}) and \emph{logarithmic Sobolev inequalities} (log-Sobolev) play crucial roles \cite{DS96, Bak94, Bak06, BGL13, Gui03, Sal97}. More explicitly, the spectral gap inequality with a constant $C>0$: \begin{align} \label{eq:Var2} \Var(f) \triangleq \int f^2 \, \mathrm{d}\mu - \left( \int f \, \mathrm{d}\mu \right)^2 \leq C\mathcal{E}(f,f) \end{align} where $\Var(f)$ denotes the variance of the real-valued function $f$ and $\mathcal{E}(f,f)$ is the ``energy'' of $f$ (see Section \ref{sec:semigroup} for formal definitions), is equivalent to the so-called $\mathbb{L}^2$ ergodicity of the semigroup $\{\mathsf{P}_t\}_{t\geq 0}$: \begin{align} \label{eq:Var} \Var(\mathsf{P}_t f) \leq \mathrm{e}^{-2t/C} \Var(f).
\end{align} On the other hand, the log-Sobolev inequality, well known from the seminal work of Gross \cite{Gro75}: \begin{align} \label{eq:Ent} \Ent(f^2) \triangleq \int f^2\log \left( \frac{f^2}{\int f^2 \, \mathrm{d}\mu} \right) \, \mathrm{d}\mu \leq C\mathcal{E}(f,f) \end{align} is equivalent to an exponential decrease of entropies: \begin{align} \label{eq:Ent2} \Ent(\mathsf{P}_t f) \leq \mathrm{e}^{-t/C} \Ent(f). \end{align} Chafa\"i \cite{Cha03} generalized the previous results and introduced the classical $\Phi$-entropy functionals to establish the equivalence between exponential decays of the $\Phi$-entropies and the $\Phi$-Sobolev inequalities, which interpolate between spectral gap and log-Sobolev inequalities \cite{LO00}. Consequently, the optimal constant in those functional inequalities directly determines the convergence rate of the Markov semigroups. Recently, Chen and Tropp \cite{CT14} introduced a \emph{matrix $\Phi$-entropy functional} for the matrix-valued function $\bm{f}:\Omega \to \mathbb{C}^{d\times d}$, extending its classical counterpart to include matrix objects, and proved a \emph{subadditive} property. This extension has received great attention, and has led to powerful matrix concentration inequalities \cite{Tro15, PMT14}. Furthermore, two of the present authors \cite{CH1} derived a series of matrix Poincar\'e inequalities and matrix $\Phi$-Sobolev inequalities for matrix-valued functions. This result partially generalized Chafa\"i's work \cite{Cha03, Cha06}.
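Before turning to the matrix setting, it may help to see the classical equivalence \eqref{eq:Var2}--\eqref{eq:Var} in the smallest possible example. The sketch below (a two-state chain with arbitrarily chosen jump rates $a$ and $b$; an illustration, not taken from the works cited above) verifies that the Poincar\'e inequality holds with $C=1/(a+b)$, in fact with equality, and that the variance then contracts at rate $\mathrm{e}^{-2t/C}$:

```python
import math

# Two-state chain on {0, 1} with jump rates a (0 -> 1) and b (1 -> 0);
# generator L = [[-a, a], [b, -b]], invariant measure pi = (b, a)/(a+b).
a, b = 0.7, 0.3
pi = (b / (a + b), a / (a + b))
C = 1.0 / (a + b)                 # inverse spectral gap

f = (2.0, -1.0)                   # an arbitrary test function
mean = pi[0] * f[0] + pi[1] * f[1]
var = pi[0] * f[0] ** 2 + pi[1] * f[1] ** 2 - mean ** 2
energy = a * b * (f[0] - f[1]) ** 2 / (a + b)   # E(f, f) = -<f, Lf>_pi

# Poincare inequality Var(f) <= C * E(f, f); equality for two states
assert abs(var - C * energy) < 1e-12

# Here P_t f = mean + exp(-t/C) (f - mean), so Var(P_t f) = e^{-2t/C} Var(f)
t = 0.8
Ptf = [mean + math.exp(-t / C) * (fx - mean) for fx in f]
var_t = pi[0] * Ptf[0] ** 2 + pi[1] * Ptf[1] ** 2 - mean ** 2
assert abs(var_t - math.exp(-2 * t / C) * var) < 1e-12
```

For two states the centered function $f-\mathbb{E}_\mu f$ is itself an eigenfunction of the generator, which is why both inequalities are saturated here.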
Equipped with the tools of matrix $\Phi$-entropies \cite{CT14} and the functional inequalities \cite{CH1}, we are in a position to explore a more general class of dynamical systems, namely systems consisting of matrix components whose evolution is governed by the Markov semigroup: \begin{align*} \mathsf{P}_t \bm{f}(x) = \int_{y\in\Omega} {\mathsf{T}}_t(x,\mathrm{d}y) \circ \bm{f}(y), \end{align*} where ${\mathsf{T}}_t(x,\mathrm{d}y):\mathbb{C}^{d\times d}\to \mathbb{C}^{d\times d}$ is a completely positive (CP) map and $\int_{y\in\Omega} {\mathsf{T}}_t(x,\mathrm{d}y)$ is unital. We are able to establish the equivalence conditions for the exponential decay of matrix $\Phi$-entropy functionals. \medskip The contributions of this paper are the following: \begin{enumerate} \item We propose a Markov semigroup acting on matrix-valued functions and define a non-commutative version of the carr\'e du champ operator $\bm{\Gamma}$ in Section \ref{sec:semigroup}. We obtain the time derivatives of matrix $\Phi$-entropy functionals, a generalization of de Bruijn's identity for matrix-valued functions, in Proposition \ref{prop:de}. The equivalence condition of the exponential decay of matrix $\Phi$-entropy functionals is established in Theorem \ref{theo:decay}. When $\Phi$ is the square function, our result generalizes Eqs.~\eqref{eq:Var2} and \eqref{eq:Var} to the equivalence condition of matrix spectral gap inequalities (Corollary \ref{coro:spectral}). On the other hand, when $\Phi(u)=u\log u$, we obtain the equivalence between exponential entropy decays and the modified log-Sobolev inequalities (Corollary \ref{coro:Ent}). This is slightly different from Eqs.~\eqref{eq:Ent} and \eqref{eq:Ent2}. \item We show that the introduced Markov semigroup has a connection with quantum information theory and can be used to characterize the dynamical evolution of quantum ensembles whose evolution does not depend on the history.
More precisely, when the outputs of the matrix-valued function are restricted to a set of quantum states ${\mathbf m}{f}(x) = {\mathbf m}{\rho}_x$ (i.e.~positive semi-definite matrices with unit trace), the measure $\mu$ together with the function ${\mathbf m}{f}$ yields a \emph{quantum ensemble} $\mathcal{S} \triangleq \left\{ \mu(x), {\mathbf m}{\rho}_x \right\}_{x\in\Omega}$. Its time evolution undergoing the semigroup $\{\mathsf{P}_t\}_{t\geq 0}$ can be described by $\mathcal{S}_t \triangleq \left\{ \mu(x), \int \mathsf{T}_t(x,\mathrm{d} y) \circ {\mathbf m}{\rho}_y \right\}_{x\in\Omega}$. Moreover, the matrix $\Phi$-entropy functional coincides with the \emph{Holevo quantity} $\chi(\{\mu(x), {\mathbf m}{\rho}_x\}_{x\in\Omega}) = \Ent({\mathbf m}{f})$. Our main theorem hence shows that the Holevo quantity of the ensemble $\mathcal{S}$ exponentially decays through the dynamical process: \[ \chi\left( \mathcal{S}_t \right) \leq \mathrm{e}^{-t/C} \chi \left( \mathcal{S} \right), \] where the convergence rate is determined by the modified log-Sobolev inequality\footnote{Here we assume there exists a unique \emph{invariant measure} (see Section \ref{sec:semigroup} for precise definitions) for the Markov semigroups, which ensures the existence of the average state. We discuss the conditions of uniqueness in Sections \ref{sec:exam} and \ref{sec:exam2}.}. This result directly strengthens the celebrated monotonicity of the Holevo quantity \cite{Pet03}. \item We study an example of matrix-valued functions defined on a Boolean hypercube $\{0,1\}^n$ with transition rates $p$ from state $0$ to $1$ and $(1-p)$ from $1$ to $0$ (the so-called \emph{Markovian jump process}). In this example, we can explicitly calculate the convergence rate of the Markovian jump process (Theorems \ref{theo:Var_Ber} and \ref{theo:Ent_Ber}) by exploiting the matrix Efron-Stein inequality \cite{CH1}.
\item We introduce a random walk of a quantum ensemble, where each vertex of the graph corresponds to a quantum state and the transition rates are determined by the weights of the edges. The time evolution of the ensemble can be described by a statistical mixture of the density operators. By using the Holevo quantity as the entropic measure, our main theorem shows that the states in the ensemble converge to the equilibrium---the average state of the ensemble. Moreover, we can upper bound the mixing time of the ensemble. \end{enumerate} \begin{comment} Finally, certain classical computational tasks can be significantly improved if quantum computation is allowed. In order to harness the power of quantum computing, the first step is to map classical messages $x \in \Omega$ whose distribution is $p_X(x)$ to an ensemble $\{p_X(x),\rho_x\}_{x\in\Omega}$ so that a quantum computer can further proceed. This type of statistical mixture model falls exactly into the category of the dynamical systems considered in this paper. Specifically, the above manipulations of quantum states can be mapped to a random walk of a quantum ensemble, where each vertex of the graph corresponds to a quantum state and the transition rates are determined by the weights of the edges. We assume that the evolution of the ensemble can be described by the Markov semigroups acting on density operators. As a result, our main theorems lead to upper bounds on the mixing time of the ensemble to its equilibrium---the average state of the ensemble\footnote{Here we assume there exists a unique \emph{invariant measure} (see Section \ref{sec:semigroup} for precise definitions) for the Markov semigroups, which ensures the existence of the average state. We discuss the conditions of uniqueness in Section \ref{sec:application}.}. This mixing time gives us a good estimate of the duration that each quantum computation needs to complete before all useful information disappears due to decoherence.
\end{comment} \subsection{Related Work} \label{ssec:related} When considering the case of discrete time and finite domain (i.e.~$\Omega$ is a finite set), our setting reduces to the \emph{discrete quantum Markov chain} introduced by Gudder \cite{Gud08}, and the family $\{\mathsf{T}_t(x,y)\}$ is called the \emph{transition operation matrices} (TOM). We note that this discrete model has been applied to quantum random walks by Attal \textit{et al.} \cite{APS+12, APS12, HJ14}, and to model-checking in quantum protocols by Feng \textit{et al.} \cite{FYY13, YYF+13}. If the state space is a singleton (i.e.~$\Omega = \{x\}$) and the trace-preserving property is imposed (i.e.~$\mathsf{T}_t$ is a quantum channel), our model reduces to the conventional quantum Markov processes (also called the \emph{quantum dynamical semigroups}) \cite{Lin76, TKR+10, KRW12, SW13, SRW14} and the Markov semigroups defined on non-commutative $\mathbb{L}_p$ spaces \cite{OZ99}. This line of research was initiated by Lindblad, who studied the time evolution of a quantum state $\mathsf{T}_t({\mathbf m}{\rho})$, and was recently extended by Kastoryano \textit{et al.} \cite{KRW12} and Szehr \cite{SW13, SRW14}, who analyzed its long-term behavior. Subsequently, Olkiewicz and Zegarlinski generalized Gross' log-Sobolev inequalities \cite{Gro75} to the non-commutative $\mathbb{L}_p$ space \cite{OZ99}. The connections between quantum Markov processes, hypercontractivity, and non-commutative log-Sobolev inequalities are hence established \cite{BZ00, Car04, TPK14, Kin14, CKMT15, MFW15, SK15}. The exponential decay properties in non-commutative $\mathbb{L}_p$ spaces have also been studied \cite{TKR+10, KT13, Car14,CM15}.
The major differences between our work and the non-commutative setting are the following: (1) the semigroup in the latter is applied to a single quantum state, i.e.~${\mathbf m}{\rho} \mapsto \mathsf{T}_t({\mathbf m}{\rho})$, while the novelty of this work is to propose a Markov semigroup that acts on matrix-valued functions, i.e.~${\mathbf m}{f} \mapsto \mathsf{P}_t {\mathbf m}{f}$. (2) Olkiewicz and Zegarlinski used an $\mathbb{L}_p$ relative entropy: \[ \Ent({\mathbf m}{\rho}) = \frac1d\Tr[{\mathbf m}{\rho}\log {\mathbf m}{\rho}] - \frac1d\Tr[{\mathbf m}{\rho}]\log\left(\frac1d\Tr[{\mathbf m}{\rho}]\right) \] to measure a single state. In this work, however, every state in the ensemble is endowed with a probability $\mu(x)$. Thus, we can use the matrix $\Phi$-entropy functionals \cite{CT14}: \[ H_\Phi({\mathbf m}{\rho}_X) = \sum_{x\in\Omega} \mu(x) \Tr\left[ {\mathbf m}{\rho}_x \log {\mathbf m}{\rho}_x \right] - \Tr\left[ \left( \sum_{x}\mu(x){\mathbf m}{\rho}_x \right) \log\left( \sum_{x}\mu(x){\mathbf m}{\rho}_x\right) \right] \] as the measure of the ensemble through the dynamical process. In other words, we investigate the time evolution and the long-term behavior of a quantum ensemble instead of a single quantum state. The key tools for developing the theory include techniques from operator algebra (e.g.~Fr\'echet derivatives and operator convex functions), the subadditivity of the matrix $\Phi$-entropy functionals, operator Jensen inequalities \cite{HP03, FZ07}, and the matrix $\Phi$-Sobolev inequalities \cite{CH1}. \medskip The paper is organized as follows. The notation and basic properties of matrix algebras are presented in Section \ref{sec:preliminaries}. The Markov semigroups acting on matrix-valued functions are introduced in Section \ref{sec:semigroup}. We establish the main results of exponential decays in Section \ref{sec:main}.
In Section \ref{sec:exam} we study the quantum ensemble going through a quantum unital channel and demonstrate the exponential decay of the Holevo quantity. We discuss another example of a statistical mixture of the semigroup in Section \ref{sec:exam2}. We prove an upper bound on the mixing time of a quantum random graph. Finally, we conclude this paper in Section \ref{sec:diss}. \section{Preliminaries and Notation} \label{sec:preliminaries} \subsection{Notation and Definitions} \label{ssec:notation} We denote by $\mathbb{M}^\text{sa}$ the space of all self-adjoint operators on some (separable) Hilbert space. When restricting to the case of $d\times d$ Hermitian matrices, we use the notation $\mathbb{M}_d^\text{sa}$. Denote by $\Tr$ the standard trace function. The Schatten $p$-norm is defined by $\| {\mathbf m}{M} \|_p \triangleq ( \Tr |{\mathbf m}{M}|^p )^{1/p}$ for $1\leq p<\infty$, and $\|{\mathbf m}{M}\|_\infty$ corresponds to the operator norm. For ${\mathbf m}{A},{\mathbf m}{B}\in\mathbb{M}^\textnormal{sa}$, ${\mathbf m}{A}\succeq {\mathbf m}{B}$ means that ${\mathbf m}{A}-{\mathbf m}{B}$ is positive semi-definite. Similarly, ${\mathbf m}{A} \succ {\mathbf m}{B}$ means ${\mathbf m}{A} - {\mathbf m}{B}$ is positive-definite. Denote by $\mathbb{M}^+$ (resp.~$\mathbb{M}_d^+$) the positive semi-definite operators (resp.~$d\times d$ positive semi-definite matrices). For any matrix-valued functions ${\mathbf m}{f},{\mathbf m}{g}:\Omega \to \mathbb{M}^\text{sa}$ defined on some Polish space $\Omega$\footnote{A Polish space is a separable and complete metric space, e.g.~a discrete space, $\mathbb{R}$, or the set of Hermitian matrices.}, we write ${\mathbf m}{f}-{\mathbf m}{g} \succeq {\mathbf m}{0}$ as shorthand for ${\mathbf m}{f}(x) - {\mathbf m}{g}(x) \succeq {\mathbf m}{0}$ for all $x\in\Omega$. Throughout the paper, italic boldface letters (e.g.~${\mathbf m}{X}$ or ${\mathbf m}{f}$) are used to denote matrices or matrix-valued functions.
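The two notational ingredients above, the Schatten $p$-norm and the semi-definite order, are easy to compute numerically. The following short sketch is our own illustration (the helper names are ours): the $p$-norm is obtained from the singular values, and ${\mathbf m}{A}\succeq{\mathbf m}{B}$ is tested via the smallest eigenvalue of ${\mathbf m}{A}-{\mathbf m}{B}$.

```python
import numpy as np

# Our own sketch of the notation above: the Schatten p-norm
# ||M||_p = (Tr |M|^p)^{1/p}, computed from singular values, and the
# semi-definite order A >= B, tested via the smallest eigenvalue of A - B.
def schatten(M, p):
    s = np.linalg.svd(M, compute_uv=False)   # singular values = spectrum of |M|
    return float(np.sum(s ** p) ** (1.0 / p))

def psd_geq(A, B, tol=1e-12):
    return bool(np.linalg.eigvalsh(A - B).min() >= -tol)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.5]])
print(schatten(A, 1), schatten(A, 2), psd_geq(A, B))
```

Note that for a positive semi-definite matrix the Schatten $1$-norm (trace norm) reduces to the trace, and the Schatten $2$-norm is the Frobenius norm.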
A linear map $\mathsf{T}:\mathbb{M}^\text{sa} \to \mathbb{M}^\text{sa}$ is \emph{positive} if $\mathsf{T}({\mathbf m}{A}) \succeq {\mathbf m}{0}$ for all ${\mathbf m}{A} \succeq {\mathbf m}{0}$. A linear map $\mathsf{T}:\mathbb{M}^\text{sa} \to \mathbb{M}^\text{sa}$ is \emph{completely positive} (CP) if, for every $d\in\mathbb{N}$, the map $\mathsf{T}\otimes \mathds{1}$ is positive on $\mathbb{M}^\text{sa} \otimes \mathbb{M}_d^\text{sa}$, where $\mathds{1}$ denotes the identity map on $\mathbb{M}_d^\text{sa}$. It is well-known that any CP map $\mathsf{T}:\mathbb{M}^\text{sa} \to \mathbb{M}^\text{sa}$ admits a \emph{Kraus decomposition} \[ \mathsf{T}({\mathbf m}{A}) = \sum_{i} {\mathbf m}{K}_i {\mathbf m}{A} {\mathbf m}{K}_i^\dagger. \] The CP map $\mathsf{T}$ is \emph{trace-preserving} (TP) if and only if $\sum_i {\mathbf m}{K}_i^\dagger {\mathbf m}{K}_i = {\mathbf m}{I}$ (the identity matrix in $\mathbb{M}^\text{sa}$), and is \emph{unital} if and only if $\sum_i {\mathbf m}{K}_i {\mathbf m}{K}_i^\dagger = {\mathbf m}{I}$ (see e.g.~\cite{MW09}). A CPTP map is often called a \emph{quantum channel} or \emph{quantum operation} in quantum information theory \cite{NC09}. We denote by $|i-1\rangle \langle i-1|$ the matrix that is zero everywhere except that its $i$-th diagonal entry is $1$. The set $\{|0\rangle, |1\rangle, \ldots, |d-1\rangle\}$ is the computational basis of the Hilbert space $\mathbb{C}^d$. \begin{defn}[Matrix $\Phi$-Entropy Functional {\cite{CT14}}] \label{defn:entropy} Let $\mathrm{\Phi}:[0,\infty)\rightarrow \mathbb{R}$ be a convex function. Given any probability space $(\Omega, \Sigma, \mathbb{P})$, consider a positive semi-definite random matrix ${\mathbf m}{Z}$ that is $\mathbb{P}$-measurable. Its expectation \[ \mathbb{E}[{\mathbf m}{Z}] \triangleq \int_\Omega {\mathbf m}{Z}\, \mathrm{d} \mathbb{P} = \int_{x\in\Omega} {\mathbf m}{Z}(x)\, \mathbb{P} (\mathrm{d}x) \] is a bounded matrix in $\mathbb{M}^+$.
Assume ${\mathbf m}{Z}$ satisfies the integrability conditions: $\Tr\left[\mathbb{E}| {\mathbf m}{Z}|\right]<\infty$ and $\Tr\left[\mathbb{E}| \mathrm{\Phi}({\mathbf m}{Z})|\right]<\infty$. The matrix $\mathrm{\Phi}$-entropy functional $H_\mathrm{\Phi}$ is defined as\footnote{ Chen and Tropp \cite[Definition 2.4]{CT14} defined the matrix $\Phi$-entropy functional for the random matrix ${\mathbf m}{Z}$ taking values in $\mathbb{M}_d^+$ with the normalized trace function: \[ H_\mathrm{\Phi}({\mathbf m}{Z})\triangleq \tr\left[\mathbb{E}\mathrm{\Phi}({\mathbf m}{Z})-\mathrm{\Phi}(\mathbb{E}{\mathbf m}{Z})\right], \] where $\tr[\cdot] \triangleq \frac1d \Tr[\cdot]$. In this paper we adopt the standard trace function; however, the results remain valid for $\tr$ as well. } \[ H_\mathrm{\Phi}({\mathbf m}{Z})\triangleq \Tr\left[\mathbb{E}\mathrm{\Phi}({\mathbf m}{Z})-\mathrm{\Phi}(\mathbb{E}{\mathbf m}{Z})\right]. \] Let $\mathcal{F}\subseteq \Sigma$ be a sub-sigma-algebra of $\Sigma$, and let $\mathbb{E}[{\mathbf m}{Z}|\mathcal{F}]$ denote the conditional expectation, which satisfies $\int_E \mathbb{E}[{\mathbf m}{Z}|\mathcal{F}] \,\mathrm{d}\mathbb{P} = \int_E {\mathbf m}{Z} \, \mathrm{d}\mathbb{P}$ for each measurable set $E\in\mathcal{F}$. The conditional matrix $\Phi$-entropy functional is then \[ H_\mathrm{\Phi}({\mathbf m}{Z}|\mathcal{F})\triangleq \Tr\left[\mathbb{E} \left[ \mathrm{\Phi}({\mathbf m}{Z})|\mathcal{F} \right] - \mathrm{\Phi}(\mathbb{E}\left[ {\mathbf m}{Z} | \mathcal{F} \right] )\right]. \] In particular, we write $\Ent({\mathbf m}{Z}) \triangleq H_\Phi({\mathbf m}{Z})$ when $\Phi(u)\equiv u\log u$ and call it the \emph{entropy functional}. \end{defn} \begin{theo}[Subadditivity of Matrix $\mathrm{\Phi}$-Entropy Functionals {\cite[Theorem 2.5]{CT14}}] \label{theo:sub} Let $X\triangleq (X_1, \ldots, X_n)$ be a vector of independent random variables taking values in a Polish space.
Consider a positive semi-definite random matrix ${\mathbf m}{Z}$ that can be expressed as a measurable function of the random vector $X$. Assume the integrability conditions $\Tr\left[\mathbb{E}| {\mathbf m}{Z}|\right]<\infty$ and $\Tr\left[\mathbb{E}| \mathrm{\Phi}({\mathbf m}{Z})|\right]<\infty$. If $\Phi(u)= u \log u$ or $\Phi(u) = u^p$ for $1\leq p\leq 2$, then \begin{eqnarray}\label{eq:entropy} H_\mathrm{\Phi}({\mathbf m}{Z})\leq \sum_{i=1}^n \mathbb{E} \Big[ H_\mathrm{\Phi}({\mathbf m}{Z}| X_{-i}) \Big], \end{eqnarray} where the random vector ${X}_{-i}\triangleq ({X}_1,\ldots,{X}_{i-1},{X}_{i+1},\ldots,{X}_n)$ is obtained by deleting the $i$-th entry of $X$. \end{theo} \subsection{Matrix Algebra} \label{ssec:matrix} Let $\mathcal{U},\mathcal{W}$ be Banach spaces. The \emph{Fr\'{e}chet derivative} of a function $\mathcal{L}:\mathcal{U} \rightarrow \mathcal{W}$ at a point ${\mathbf m}{X}\in\mathcal{U}$, if it exists\footnote{We assume the functions considered in the paper are Fr\'{e}chet differentiable. The reader is referred to~\cite{Pel85,Bic12} for conditions under which a function is Fr\'{e}chet differentiable. }, is the unique linear mapping $\mathsf{D}\mathcal{L}[{\mathbf m}{X}]:\mathcal{U}\rightarrow\mathcal{W}$ such that \[ \lim_{\|{\mathbf m}{E}\|_\mathcal{U} \rightarrow 0} \frac{\|\mathcal{L}({\mathbf m}{X}+{\mathbf m}{E}) - \mathcal{L}({\mathbf m}{X}) - \mathsf{D}\mathcal{L}[{\mathbf m}{X}]({\mathbf m}{E})\|_{\mathcal{W}}}{\|{\mathbf m}{E}\|_\mathcal{U}} = 0, \] where $\|\cdot\|_{\mathcal{U}}$ and $\|\cdot\|_{\mathcal{W}}$ denote the norms on $\mathcal{U}$ and $\mathcal{W}$, respectively. The notation $\mathsf{D}\mathcal{L}[{\mathbf m}{X}]({\mathbf m}{E})$ is then interpreted as ``the Fr\'{e}chet derivative of $\mathcal{L}$ at ${\mathbf m}{X}$ in the direction ${\mathbf m}{E}$''. The Fr\'echet derivative enjoys several properties of usual derivatives.
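As a quick numerical sanity check of the definition above (our own illustration, not part of the paper's development), the Fr\'echet derivative of the matrix square $\mathcal{L}({\mathbf m}{X}) = {\mathbf m}{X}^2$ is $\mathsf{D}\mathcal{L}[{\mathbf m}{X}]({\mathbf m}{E}) = {\mathbf m}{X}{\mathbf m}{E} + {\mathbf m}{E}{\mathbf m}{X}$, which a finite difference recovers up to $O(h)$:

```python
import numpy as np

# Sanity check (ours) of the Frechet derivative just defined:
# for L(X) = X^2 one has DL[X](E) = XE + EX, so the finite difference
# (L(X + hE) - L(X)) / h should match it up to O(h).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)); X = (X + X.T) / 2   # Hermitian test point
E = rng.standard_normal((3, 3)); E = (E + E.T) / 2   # Hermitian direction

h = 1e-6
finite_diff = ((X + h * E) @ (X + h * E) - X @ X) / h
frechet = X @ E + E @ X
gap = np.max(np.abs(finite_diff - frechet))   # equals h * max|E^2| here
print(gap)
```

Here the discrepancy is exactly $h{\mathbf m}{E}^2$, since the matrix square has no higher-order terms.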
\begin{prop}[Properties of Fr\'{e}chet Derivatives {\cite[Section 5.3]{AH09}}] \label{prop:properties} Let $\mathcal{U},\mathcal{V}$ and $\mathcal{W}$ be real Banach spaces. \begin{itemize} \item[1.] (Sum Rule) If $\mathcal{L}_1:\mathcal{U}\rightarrow\mathcal{W}$ and $\mathcal{L}_2:\mathcal{U}\rightarrow\mathcal{W}$ are Fr\'{e}chet differentiable at ${\mathbf m}{A}\in\mathcal{U}$, then so is $\mathcal{L} = \alpha \mathcal{L}_1 + \beta \mathcal{L}_2$ and $\mathsf{D}\mathcal{L}[{\mathbf m}{A}]({\mathbf m}{E}) = \alpha \cdot \mathsf{D}\mathcal{L}_1[{\mathbf m}{A}]({\mathbf m}{E}) + \beta \cdot \mathsf{D}\mathcal{L}_2[{\mathbf m}{A}]({\mathbf m}{E})$. \item[2.] (Product Rule) If $\mathcal{L}_1:\mathcal{U}\rightarrow\mathcal{W}$ and $\mathcal{L}_2:\mathcal{U}\rightarrow\mathcal{W}$ are Fr\'{e}chet differentiable at ${\mathbf m}{A}\in\mathcal{U}$ and the multiplication is well-defined in $\mathcal{W}$, then so is $\mathcal{L}=\mathcal{L}_1 \cdot \mathcal{L}_2$ and $\mathsf{D}\mathcal{L}[{\mathbf m}{A}]({\mathbf m}{E}) = \mathsf{D}\mathcal{L}_1[{\mathbf m}{A}]({\mathbf m}{E}) \cdot \mathcal{L}_2({\mathbf m}{A}) + \mathcal{L}_1({\mathbf m}{A}) \cdot \mathsf{D}\mathcal{L}_2[{\mathbf m}{A}]({\mathbf m}{E})$. \item[3.] (Chain Rule) Let $\mathcal{L}_1:\mathcal{U}\rightarrow\mathcal{V}$ and $\mathcal{L}_2:\mathcal{V}\rightarrow\mathcal{W}$ be Fr\'{e}chet differentiable at ${\mathbf m}{A}\in\mathcal{U}$ and $\mathcal{L}_1({\mathbf m}{A})$ respectively, and let $\mathcal{L} = \mathcal{L}_2 \circ \mathcal{L}_1$ (i.e.~$\mathcal{L}({\mathbf m}{A}) = \mathcal{L}_2\left( \mathcal{L}_1 ({\mathbf m}{A}) \right)$). Then $\mathcal{L}$ is Fr\'{e}chet differentiable at ${\mathbf m}{A}$ and $\mathsf{D}\mathcal{L}[{\mathbf m}{A}]({\mathbf m}{E}) = \mathsf{D}\mathcal{L}_2 [\mathcal{L}_1({\mathbf m}{A})] \left( \mathsf{D}\mathcal{L}_1[{\mathbf m}{A}]({\mathbf m}{E}) \right)$.
\end{itemize} \end{prop} For each self-adjoint and bounded operator ${\mathbf m}{A}\in\mathbb{M}^\textnormal{sa}$ with the spectrum $\sigma({\mathbf m}{A})$ and the spectral measure ${\mathbf m}{E}$, its \emph{spectral decomposition} can be written as ${\mathbf m}{A} = \int_{\lambda \in \sigma({\mathbf m}{A})} \lambda \, \mathrm{d} {\mathbf m}{E}(\lambda)$. As a result, each scalar function is extended to a \emph{standard matrix function} as follows: \[ f({\mathbf m}{A}) \triangleq \int_{\lambda \in \sigma({\mathbf m}{A})} f(\lambda) \, \mathrm{d} {\mathbf m}{E}(\lambda). \] A real-valued function $f$ is called \emph{operator convex} if for each ${\mathbf m}{A},{\mathbf m}{B}\in \mathbb{M}^\textnormal{sa}$ and $0\leq t \leq 1$, \[ f( t{\mathbf m}{A} + (1-t){\mathbf m}{B}) \preceq t f({\mathbf m}{A}) + (1-t) f({\mathbf m}{B}). \] \begin{prop} [Operator Jensen's Inequality for Matrix-Valued Measures \cite{HP03}, {\cite[Theorem 4.2]{FZ07}}] \label{prop:Jensen} Let $(\mathrm{\Omega},\mathrm{\Sigma},\mu)$ be a measure space and suppose that $I\subseteq \mathbb{R}$ is an open interval. Assume for every $x \in \Omega$, ${\mathbf m}{K}(x)$ is a (finite or infinite dimensional) square matrix and satisfies \[ \int_{x\in\Omega} {\mathbf m}{K}(x) {\mathbf m}{K}(x)^\dagger \, \mu(\mathrm{d}x) = {\mathbf m}{I} \] (identity matrix in $\mathbb{M}^\text{sa}$). If ${\mathbf m}{f}:\mathrm{\Omega}\rightarrow \mathbb{M}^\text{sa}$ is a measurable function for which $\sigma({\mathbf m}{f}(x)) \subset I$, for every $x\in \Omega$, then \[ \phi\left( \int_{x\in\Omega} {\mathbf m}{K}(x) {\mathbf m}{f}(x) {\mathbf m}{K}(x)^\dagger \, \mu(\mathrm{d}x) \right) \preceq \int_{x\in\Omega} {\mathbf m}{K}(x) \phi\left({\mathbf m}{f}(x)\right) {\mathbf m}{K}(x)^\dagger \, \mu(\mathrm{d}x) \] for every operator convex function $\phi:I\rightarrow \mathbb{R}$.
Moreover, \[ \Tr \left[ \phi\left( \int_{x\in\Omega} {\mathbf m}{K}(x) {\mathbf m}{f}(x) {\mathbf m}{K}(x)^\dagger \, \mu(\mathrm{d}x) \right) \right] \leq \Tr \left[ \int_{x\in\Omega} {\mathbf m}{K}(x) \phi\left({\mathbf m}{f}(x)\right) {\mathbf m}{K}(x)^\dagger \, \mu(\mathrm{d}x) \right] \] for every convex function $\phi:I\rightarrow \mathbb{R}$. \end{prop} \begin{prop}[{\cite[Theorem 3.23]{HP14}}] \label{prop:trace_Petz} Let ${\mathbf m}{A}, {\mathbf m}{X}\in\mathbb{M}^\text{sa}$ and $t\in\mathbb{R}$. Assume $f:I\to \mathbb{R}$ is a continuously differentiable function defined on an interval $I$ and assume that $\sigma({\mathbf m}{A}+t{\mathbf m}{X}) \subset I$. Then \[ \left.\frac{\mathrm{d} }{ \mathrm{d} t} \Tr f({\mathbf m}{A}+t {\mathbf m}{X})\right|_{t=t_0} = \Tr [ {\mathbf m}{X} f' ( {\mathbf m}{A} + t_0 {\mathbf m}{X}) ]. \] \end{prop} \section{Markov Semigroups on Matrix-Valued Functions} \label{sec:semigroup} We will introduce the theory of Markov semigroups in this section. We particularly focus on the Markov semigroup acting on matrix-valued functions. The reader may find general references on Markov semigroups acting on real-valued functions in \cite{Bak94, Bak06, BGL13, Gui03, Sal97}. Throughout this paper, we consider a probability space $(\mathrm{\Omega},\mathrm{\Sigma},\mu)$ with $\Omega$ being a discrete space or a compact connected smooth manifold (e.g.~$\Omega\equiv\mathbb{R}$). We consider the Banach space $\mathcal{B}$ of continuous, bounded and Bochner integrable (\cite{Die77, Mik78}) matrix-valued functions ${\mathbf m}{f}:\mathrm{\Omega}\rightarrow \mathbb{M}^\text{sa}$ equipped with the uniform norm (i.e.~$\@ifstar\@opnorms\@opnorm{{\mathbf m}{f}} \triangleq \sup_{x\in\Omega }\| {\mathbf m}{f}(x) \|_\infty$).
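The trace-derivative formula of Proposition \ref{prop:trace_Petz} can also be verified numerically. The sketch below is our own (the helper \texttt{herm\_fun} and the test matrices are our inventions): taking $f = \exp$ and $t_0 = 0$, a finite difference of $\Tr \exp({\mathbf m}{A}+t{\mathbf m}{X})$ matches $\Tr[{\mathbf m}{X}\exp({\mathbf m}{A})]$ up to $O(h)$.

```python
import numpy as np

# Numerical check (ours) of the trace-derivative formula with f = exp
# and t_0 = 0:  d/dt Tr exp(A + tX) |_{t=0} = Tr[X exp(A)].
def herm_fun(A, f):
    """Standard matrix function of a Hermitian matrix via spectral decomposition."""
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
X = rng.standard_normal((4, 4)); X = (X + X.T) / 2

h = 1e-6
fd = (np.trace(herm_fun(A + h * X, np.exp)) - np.trace(herm_fun(A, np.exp))) / h
exact = np.trace(X @ herm_fun(A, np.exp))
print(abs(fd - exact))   # O(h)
```

The same spectral-decomposition construction of a standard matrix function is used implicitly throughout the paper.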
We denote the expectation with respect to the measure $\mu$ for any measurable function ${\mathbf m}{f}:\Omega \to \mathbb{M}^\text{sa}$ by \[ \mathbb{E}_{\mu}[{\mathbf m}{f}] \triangleq \int_\Omega {\mathbf m}{f} \, \mathrm{d} \mu = \int_{x\in\Omega} {\mathbf m}{f}(x) \, \mu(\mathrm{d}x), \] where the integral is the Bochner integral. We also impose the integrability condition $\Tr\left[ \mathbb{E}_{\mu} | {\mathbf m}{f} | \right]< \infty$. Define a \emph{completely positive kernel} $ \mathsf{T}_t (x, \mathrm{d} y): \mathbb{M}^\text{sa} \to \mathbb{M}^\text{sa}$ to be a family of CP maps on $\mathbb{M}^\text{sa}$ depending on the parameter $t\in\mathbb{R}_+\triangleq[0,\infty)$ such that \begin{align} \label{eq:label} \int_{y\in\Omega} \mathsf{T}_t (x, \mathrm{d} y) \;\text{is a unital map}, \end{align} and satisfies the \emph{Chapman-Kolmogorov identity}: \begin{align} \label{eq:kernel} \int_{y\in\mathrm{\Omega}} \mathsf{T}_s(x,\mathrm{d}y) \circ \mathsf{T}_t(y,\mathrm{d}z) = \mathsf{T}_{s+t} (x,\mathrm{d}z)\quad \forall s,t\in\mathbb{R}_+. \end{align} In particular, we can impose the trace-preserving property: $\int_{y\in\Omega} \mathsf{T}_t (x, \mathrm{d} y)$ is a unital quantum channel. The central object investigated in this work is a family of operators $\{\mathsf{P}_t\}_{t\geq 0}$ acting on matrix-valued functions. These operators are called \emph{Markov semigroups} if they satisfy: \begin{defn} [Markov Semigroups on Matrix-Valued Functions] \label{defn:Markov} A family of linear operators $\{\mathsf{P}_t\}_{t\geq 0}$ on the Banach space $\mathcal{B}$ is a \emph{Markov semigroup} if and only if it satisfies the following conditions: \begin{itemize} \item[(a)] $\mathsf{P}_0 = \mathds{1}$, the identity map on $\mathcal{B}$ (\emph{initial condition}). \item[(b)] The map $t\to \mathsf{P}_t {\mathbf m}{f}$ is a continuous map from $\mathbb{R}_+$ to $\mathcal{B}$ (\emph{continuity property}).
\item[(c)] $\mathsf{P}_t \circ \mathsf{P}_s = \mathsf{P}_{s+t}$ for any $s,t\in\mathbb{R}_+$ (\emph{semigroup property}). \item[(d)] $\mathsf{P}_t {\mathbf m}{I} = {\mathbf m}{I}$ for any $t\in\mathbb{R}_+$, where ${\mathbf m}{I}$ is the constant identity matrix in $\mathbb{M}^\text{sa}$ (\emph{mass conservation}). \item[(e)] If ${\mathbf m}{f}$ is non-negative (i.e.~${\mathbf m}{f}(x)\succeq {\mathbf m}{0}$ for all $x\in\Omega$), then $\mathsf{P}_t {\mathbf m}{f}$ is non-negative for any $t\in\mathbb{R}_+$ (\emph{positivity preserving}). \end{itemize} \end{defn} Given the CP kernel $\{\mathsf{T}_t(x,\mathrm{d}y)\}_{t\geq 0}$, the Markov semigroup acting on the matrix-valued function ${\mathbf m}{f}$ is defined by \begin{align} \label{eq:Pt} \mathsf{P}_t {\mathbf m}{f}(x) \triangleq \int_{y\in\Omega} {\mathsf{T}}_t(x,\mathrm{d}y) {\mathbf m}{f}(y), \end{align} where $\mathsf{T}_t(x,\mathrm{d}y) {\mathbf m}{f}(y)$ refers to the output of the linear map $\mathsf{T}_t(x,\mathrm{d}y)$ acting on ${\mathbf m}{f}(y)$. If $\int_{y\in\Omega} \mathsf{T}_t (x, \mathrm{d} y)$ is a CPTP map, then the semigroup exhibits the \emph{contraction} property (with respect to the norm $\@ifstar\@opnorms\@opnorm{\,\cdot\,}$ on the Banach space $\mathcal{B}$): \begin{align} \label{eq:contraction} \begin{split} \@ifstar\@opnorms\@opnorm{ \mathsf{P}_t {\mathbf m}{f} } &= \sup_{x\in\Omega} \left\| \mathsf{P}_t {\mathbf m}{f}(x) \right\|_\infty \\ &= \sup_{x\in\Omega} \left\| \int_{y\in\Omega} \mathsf{T}_t(x,\mathrm{d} y) {\mathbf m}{f}(y) \right\|_\infty \\ &\leq \sup_{y\in\Omega} \left\| {\mathbf m}{f}(y) \right\|_\infty \\ &= \@ifstar\@opnorms\@opnorm{ {\mathbf m}{f} }, \end{split} \end{align} where we use the fact that CPTP maps are contractive \cite{PWP+06}.
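The defining properties can be exercised on a minimal toy instance of Eq.~\eqref{eq:Pt}. The following sketch is our own example (all names are ours): $\Omega = \{0,1\}$ with the uniform measure, and a kernel acting as $\mathsf{P}_t {\mathbf m}{f}(x) = \mathrm{e}^{-t} {\mathbf m}{f}(x) + (1-\mathrm{e}^{-t})\mathbb{E}_\mu[{\mathbf m}{f}]$, which arises from a CP and unital family $\mathsf{T}_t(x,\mathrm{d}y)$.

```python
import numpy as np

# Toy instance (ours) of a Markov semigroup on matrix-valued functions:
#   P_t f(x) = e^{-t} f(x) + (1 - e^{-t}) E_mu[f]   on  Omega = {0, 1}.
mu = np.array([0.5, 0.5])
f = [np.array([[2.0, 1.0], [1.0, 3.0]]),   # f(0)
     np.array([[1.0, 0.0], [0.0, 5.0]])]   # f(1)

def Emu(g):
    # expectation of a matrix-valued function under mu
    return sum(m * gx for m, gx in zip(mu, g))

def P(t, g):
    # semigroup action, evaluated at every point of Omega
    return [np.exp(-t) * gx + (1.0 - np.exp(-t)) * Emu(g) for gx in g]

s, t = 0.3, 0.9
# (c) semigroup property, (d) mass conservation, and the contraction bound
semigroup_gap = max(np.max(np.abs(a - b))
                    for a, b in zip(P(s, P(t, f)), P(s + t, f)))
mass_gap = max(np.max(np.abs(g - np.eye(2))) for g in P(t, [np.eye(2)] * 2))
opnorm = lambda g: max(np.linalg.norm(gx, 2) for gx in g)
contractive = opnorm(P(t, f)) <= opnorm(f) + 1e-12
print(semigroup_gap, mass_gap, contractive)
```

Positivity preservation (property (e)) also holds here, since $\mathsf{P}_t {\mathbf m}{f}$ is a convex combination of positive semi-definite matrices whenever ${\mathbf m}{f}$ is non-negative.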
From the operator Jensen's inequality (Proposition \ref{prop:Jensen}) we have the following two inequalities: \begin{align} \label{eq:Pt_convex} \mathsf{P}_t \left( \phi \circ {\mathbf m}{f} \right) = \mathsf{P}_t \phi\left( {\mathbf m}{f} \right) \succeq \phi\left( \mathsf{P}_t{\mathbf m}{f} \right) \end{align} for any operator convex function $\phi$, and \begin{align} \label{eq:Pt_convex2} \Tr \left[ \mathsf{P}_t \phi\left( {\mathbf m}{f} \right) \right] \geq \Tr \left[ \phi\left( \mathsf{P}_t{\mathbf m}{f} \right) \right] \end{align} for any convex function $\phi$. Since the map $t\to \mathsf{P}_t {\mathbf m}{f}$ is continuous (Definition \ref{defn:Markov}), the derivative of the operator $\mathsf{P}_t$ with respect to $t$, i.e.~the convergence rate of $\mathsf{P}_t$, is a main focus of the analysis of Markov semigroups. More precisely, we define the \emph{infinitesimal generator} for any Markov semigroup $\{\mathsf{P}_t\}_{t\geq0}$ by \begin{align} \label{eq:L} \mathsf{L}({\mathbf m}{f}) \triangleq \lim_{t\rightarrow 0^+} \frac1t( \mathsf{P}_t {\mathbf m}{f} - {\mathbf m}{f} ). \end{align} For convenience, we denote by $\mathcal{D}(\mathsf{L})$ the \emph{Dirichlet domain} of $\mathsf{L}$, which is the set of matrix-valued functions in $\mathcal{B}$ such that the limit in Eq.~\eqref{eq:L} exists. We provide an equivalent condition of $\mathcal{D}(\mathsf{L})$ in Appendix \ref{sec:HY}. Combining the linearity of the operators $\{\mathsf{P}_t\}_{t\geq 0 }$ with the semigroup property, we deduce that the generator $\mathsf{L}$ is the derivative of $\mathsf{P}_t$ at any time $t> 0$. That is, for $t,s>0$, \[ \frac1s \left( \mathsf{P}_{t+s} - \mathsf{P}_t \right) = \mathsf{P}_t \left( \frac1s \left( \mathsf{P}_s - \mathds{1} \right) \right) = \left( \frac1s \left( \mathsf{P}_s - \mathds{1} \right) \right) \mathsf{P}_t.
\] Letting $s\to 0$ shows that \begin{align} \label{eq:partial_Pt} \frac{\partial}{\partial t} \mathsf{P}_t = \mathsf{L} \mathsf{P}_t = \mathsf{P}_t \mathsf{L}. \end{align} The above equation combined with Eq.~$\eqref{eq:Pt_convex}$ implies the following proposition. \begin{prop} \label{prop:L} Let $\{\mathsf{P}_t \}_{t\geq 0}$ be a Markov semigroup with the infinitesimal generator $\mathsf{L}$. For any operator convex function $\phi:\mathbb{R}\to\mathbb{R}$ and ${\mathbf m}{f} \in \mathcal{D}(\mathsf{L})$, we have \begin{align} \label{eq:L_fre} \mathsf{L}\left( \phi({\mathbf m}{f}) \right) \succeq \mathsf{D}\phi[{\mathbf m}{f}]\left( \mathsf{L} {\mathbf m}{f} \right). \end{align} \end{prop} \begin{proof} For any $s>0$, Eq.~\eqref{eq:Pt_convex} implies \begin{align} \label{eq:L_fre1} \frac1s \big( \mathsf{P}_{s} \phi( {\mathbf m}{f} ) - \phi( \mathsf{P}_0 {\mathbf m}{f} ) \big) \succeq \frac1s \big( \phi( \mathsf{P}_{s} {\mathbf m}{f} ) - \phi( \mathsf{P}_0 {\mathbf m}{f} ) \big). \end{align} By letting $s\to 0^+$ and using the chain rule of the Fr\'echet derivative (Proposition \ref{prop:properties}), the right-hand side yields \begin{align} \label{eq:L_fre2} \begin{split} \lim_{s\to 0^+} \frac1s \big( \phi( \mathsf{P}_{0+s} {\mathbf m}{f} ) - \phi( \mathsf{P}_0 {\mathbf m}{f} ) \big) &= \left. \mathsf{D} \phi \left[ \mathsf{P}_t {\mathbf m}{f} \right] \left( \frac{\partial}{\partial t} \mathsf{P}_t {\mathbf m}{f} \right) \right|_{t=0} \\ &= \left. \mathsf{D} \phi \left[ \mathsf{P}_t {\mathbf m}{f} \right] \left( \mathsf{L} \mathsf{P}_t {\mathbf m}{f} \right) \right|_{t=0} \\ &= \mathsf{D} \phi \left[{\mathbf m}{f} \right] \left( \mathsf{L} {\mathbf m}{f} \right), \end{split} \end{align} where the second equality follows from Eq.~\eqref{eq:partial_Pt}. In the last line we apply the property $\mathsf{P}_0 {\mathbf m}{f} = {\mathbf m}{f}$ (item (a) in Definition \ref{defn:Markov}). 
On the other hand, the left-hand side of Eq.~\eqref{eq:L_fre1} can be rephrased as \begin{align} \label{eq:L_fre3} \begin{split} \lim_{s\to 0^+} \frac1s \big( \mathsf{P}_{s} \phi( {\mathbf m}{f} ) - \phi( \mathsf{P}_0 {\mathbf m}{f} ) \big) &= \lim_{s\to 0^+} \frac1s \big( \mathsf{P}_{s} \phi( {\mathbf m}{f} ) - \phi( {\mathbf m}{f} ) \big) \\ &= \mathsf{L} \left( \phi({\mathbf m}{f}) \right). \end{split} \end{align} Hence, combining Eqs.~\eqref{eq:L_fre3}, \eqref{eq:L_fre1} and \eqref{eq:L_fre2} yields the desired inequality \begin{align*} \mathsf{L}\left( \phi({\mathbf m}{f}) \right) \succeq \mathsf{D}\phi[{\mathbf m}{f}]\left( \mathsf{L} {\mathbf m}{f} \right). \end{align*} \end{proof} In the classical setup (i.e.~Markov semigroups acting on real-valued functions), the \emph{carr\'{e} du champ} operator (see e.g.~\cite{BGL13}) is defined by \begin{align} \label{eq:Gamma0} \Gamma(f,g) = \frac12 \big( \mathsf{L}(fg) - f \mathsf{L}(g) - g \mathsf{L}(f) \big). \end{align} Here we introduce a non-commutative version of the {carr\'{e} du champ} operator $\mathbf{\Gamma}:\mathcal{D}(\mathsf{L})\times \mathcal{D}(\mathsf{L}) \rightarrow \mathcal{D}(\mathsf{L})$ of the generator $\mathsf{L}$ by \begin{align} \label{eq:Gamma1} \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \triangleq \frac12 \big( \mathsf{L}({\mathbf m}{f}^2) - {\mathbf m}{f}\mathsf{L}({\mathbf m}{f}) - \mathsf{L}({\mathbf m}{f}){\mathbf m}{f}\big), \end{align} and its symmetric and bilinear extension \begin{align} \label{eq:Gamma2} \begin{split} \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) = \mathbf{\Gamma}({\mathbf m}{g},{\mathbf m}{f}) &\triangleq \frac12 \big( \mathbf{\Gamma}({\mathbf m}{f}+{\mathbf m}{g},{\mathbf m}{f}+{\mathbf m}{g}) - \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) - \mathbf{\Gamma}({\mathbf m}{g},{\mathbf m}{g}) \big) \\ &= \frac14 \left( \mathsf{L}( {\mathbf m}{f} {\mathbf m}{g} ) + \mathsf{L}( {\mathbf m}{g} {\mathbf m}{f} ) - {\mathbf m}{f}\mathsf{L}({\mathbf m}{g}) -
{\mathbf m}{g}\mathsf{L}({\mathbf m}{f}) - \mathsf{L}({\mathbf m}{f}){\mathbf m}{g} - \mathsf{L}({\mathbf m}{g}){\mathbf m}{f} \right). \end{split} \end{align} We note that when ${\mathbf m}{f}$ commutes\footnote{Here we mean that $[{\mathbf m}{f}(x),{\mathbf m}{g}(x)]={\mathbf m}{0}$ for all $x\in\mathrm{\Omega}$.} with ${\mathbf m}{g}$, the carr\'{e} du champ operator reduces to the conventional expression (cf.~Eq.~\eqref{eq:Gamma0}): \begin{align*} \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) \equiv \frac12 \left( \mathsf{L}({\mathbf m}{f}{\mathbf m}{g}) - {\mathbf m}{f}\mathsf{L}({\mathbf m}{g}) - {\mathbf m}{g}\mathsf{L}({\mathbf m}{f})\right). \end{align*} Recall that the square function $\phi(u) = u^2$ is operator convex. The formula of the Fr\'echet derivative: $\mathsf{D}\phi[{\mathbf m}{A}]({\mathbf m}{B}) = {\mathbf m}{A}{\mathbf m}{B} + {\mathbf m}{B}{\mathbf m}{A}$ together with Proposition \ref{prop:L} yields \[ \mathsf{L}({\mathbf m}{f}^2) \succeq {\mathbf m}{f}\mathsf{L}({\mathbf m}{f}) + \mathsf{L}({\mathbf m}{f}){\mathbf m}{f}. \] Hence the carr\'{e} du champ operator is positive semi-definite: $\mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \succeq {\mathbf m}{0}$. We can also observe that ${\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) = {\mathbf m}{0}$ implies that ${\mathbf m}{f}$ is essentially constant, i.e.~$\mathsf{P}_t {\mathbf m}{f} = {\mathbf m}{f}$ for all $t\geq 0$. Moreover, the non-negativity and the bilinearity of the carr\'{e} du champ operator directly yield a trace Cauchy-Schwarz inequality: \begin{prop} [Trace Cauchy-Schwarz Inequality for Carr\'e du Champ Operators] \label{prop:Cauchy} For all ${\mathbf m}{f},{\mathbf m}{g} \in \mathcal{D}(\mathsf{L})$, \[ \big( \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) \right] \big)^2 \leq \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \right] \cdot \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{g},{\mathbf m}{g}) \right].
\] \end{prop} \begin{proof} From Eq.~\eqref{eq:Gamma2}, for all $s\in \mathbb{R}$ it follows that \begin{align*} {\mathbf m}{\Gamma}( s{\mathbf m}{f} + {\mathbf m}{g}, s{\mathbf m}{f} + {\mathbf m}{g}) &= s^2 \cdot {\mathbf m}{\Gamma}( {\mathbf m}{f}, {\mathbf m}{f}) + 2s \cdot {\mathbf m}{\Gamma}( {\mathbf m}{f}, {\mathbf m}{g} ) + {\mathbf m}{\Gamma}({\mathbf m}{g},{\mathbf m}{g}) \succeq {\mathbf m}{0}. \end{align*} After taking the trace, the non-negativity of the above expression ensures that the discriminant is non-positive: \[ \big( 2\cdot \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) \right] \big)^2 - 4 \cdot \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \right] \cdot \Tr \left[ {\mathbf m}{\Gamma}({\mathbf m}{g},{\mathbf m}{g}) \right] \leq 0, \] as desired. \end{proof} Note that a bilinear map $ \langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ on some vector space $V$ is a \emph{scalar inner product} if it satisfies symmetry and non-negativity ($\langle x, x\rangle \geq0$, and $x=0$ when $\langle x, x\rangle =0$). As a result, the non-commutative carr\'e du champ operator, which exhibits the properties of symmetry, bilinearity, and non-negativity (${\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \succeq {\mathbf m}{0}$ and $\mathsf{L}{\mathbf m}{f} = {\mathbf m}{0}$ when ${\mathbf m}{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) = {\mathbf m}{0}$), can be viewed as a matrix-valued inner product (with respect to the generator $\mathsf{L}$) on the space $\mathcal{D}(\mathsf{L})$. Given the semigroup $\{\mathsf{P}_t\}_{t\geq 0}$, the measure $\mu$ is \emph{invariant} for the function ${\mathbf m}{f}\in \mathcal{D}(\mathsf{L})$ if \begin{align} \label{eq:invariance} \int \mathsf{P}_t {\mathbf m}{f}(x) \, \mu(\mathrm{d}x) = \int {\mathbf m}{f}(x) \,\mu(\mathrm{d} x) , \quad t\in\mathbb{R}_+.
\end{align} We can observe from Eqs.~\eqref{eq:L} and \eqref{eq:invariance} that any invariant measure $\mu$ satisfies \begin{align} \label{eq:L0} \int \mathsf{L}({\mathbf m}{f}) \, \mathrm{d} \mu = {\mathbf m}{0}. \end{align} We call the measure $\mu$ \emph{symmetric} if and only if \begin{align} \label{eq:symmetric} \int {\mathbf m}{f} \mathsf{L}({\mathbf m}{g}) + \mathsf{L}({\mathbf m}{g}) {\mathbf m}{f} \, \mathrm{d} \mu = \int {\mathbf m}{g} \mathsf{L}({\mathbf m}{f}) + \mathsf{L}({\mathbf m}{f}) {\mathbf m}{g} \, \mathrm{d} \mu, \end{align} which implies an integration by parts formula: \begin{align} \label{eq:by_part} -\frac12 \int {\mathbf m}{f} \mathsf{L}({\mathbf m}{g}) + \mathsf{L}({\mathbf m}{g}) {\mathbf m}{f} \, \mathrm{d} \mu = \int \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) \, \mathrm{d} \mu = \int \mathbf{\Gamma}({\mathbf m}{g},{\mathbf m}{f}) \, \mathrm{d} \mu. \end{align} The carr\'{e} du champ operator and the invariant measure $\mu$ (i.e.~Eq.~\eqref{eq:L0}) immediately lead to a symmetric bilinear \emph{Dirichlet form}: \begin{align} \label{eq:Dirichlet} \begin{split} {\mathbf m}{\mathcal{E}}({\mathbf m}{f},{\mathbf m}{g}) \triangleq \int \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{g}) \, \mathrm{d} \mu &= -\frac14 \int {\mathbf m}{f}\mathsf{L}({\mathbf m}{g}) + {\mathbf m}{g}\mathsf{L}({\mathbf m}{f}) + \mathsf{L}({\mathbf m}{f}){\mathbf m}{g} + \mathsf{L}({\mathbf m}{g}){\mathbf m}{f} \, \mathrm{d}\mu. \end{split} \end{align} The non-negativity of the carr\'e du champ operator also yields ${\mathbf m}{\mathcal{E}}({\mathbf m}{f},{\mathbf m}{f}) = \int \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{f}) \, \mathrm{d} \mu \succeq {\mathbf m}{0}$, which can be interpreted as a second-moment quantity, or the energy, of the function ${\mathbf m}{f}$.
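As an illustration of these definitions, the positivity of $\mathbf{\Gamma}$ and the trace Cauchy-Schwarz inequality of Proposition \ref{prop:Cauchy} can be verified numerically. The following sketch is not part of the formal development: it assumes \texttt{numpy}, and it uses the unit-rate depolarizing generator $\mathsf{L}({\mathbf m}{X}) = \Tr[{\mathbf m}{X}]\,{\mathbf m}{I}/d - {\mathbf m}{X}$ (a concrete choice anticipating Section \ref{sec:exam}) to evaluate Eq.~\eqref{eq:Gamma2} pointwise on random Hermitian matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def gen(X):
    # Illustrative generator: unit-rate depolarizing L(X) = Tr[X] I/d - X
    return np.trace(X) * np.eye(d) / d - X

def gamma(f, g):
    # Non-commutative carre du champ, Eq. (eq:Gamma2):
    # Gamma(f,g) = (1/4)( L(fg) + L(gf) - f L(g) - g L(f) - L(f) g - L(g) f )
    return 0.25 * (gen(f @ g) + gen(g @ f)
                   - f @ gen(g) - g @ gen(f)
                   - gen(f) @ g - gen(g) @ f)

def rand_herm():
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

f, g = rand_herm(), rand_herm()

# Positivity: Gamma(f,f) >= 0 (smallest eigenvalue nonnegative up to rounding)
assert np.linalg.eigvalsh(gamma(f, f)).min() > -1e-10

# Trace Cauchy-Schwarz: (Tr Gamma(f,g))^2 <= Tr Gamma(f,f) * Tr Gamma(g,g)
lhs = np.trace(gamma(f, g)).real ** 2
rhs = np.trace(gamma(f, f)).real * np.trace(gamma(g, g)).real
assert lhs <= rhs + 1e-10
```

Both checks pass for any Hermitian inputs, consistent with the fact that they hold pointwise in $x$ before integrating against $\mu$.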
For convenience, we use the shorthand $\mathbf{\Gamma}({\mathbf m}{f}) \equiv \mathbf{\Gamma}({\mathbf m}{f},{\mathbf m}{f})$ and ${\mathbf m}{\mathcal{E}}({\mathbf m}{f})\equiv {\mathbf m}{\mathcal{E}}({\mathbf m}{f},{\mathbf m}{f})$. In the following, we often refer to the \emph{Markov Triple} $(\Omega,\mathbf{\Gamma},\mu)$ with state space $\Omega$, carr\'e du champ operator $\mathbf{\Gamma}$ acting on the Dirichlet domain $\mathcal{D}(\mathsf{L})$ of matrix-valued functions, and invariant measure $\mu$. Additionally, we will apply Fubini's theorem to freely interchange the order of the trace and the expectation with respect to $\mu$. \section{Main Results: Exponential Decays of Matrix $\Phi$-Entropy Functionals} \label{sec:main} In this section, our goal is to show that the matrix $\Phi$-entropy functional decays exponentially along the Markov semigroup, and to relate this decay to the spectral gap inequalities and logarithmic Sobolev inequalities. With the invariant measure $\mu$ of the semigroup $\left\{\mathsf{P}_t\right\}_{t\geq0}$ and the Jensen inequality \eqref{eq:Pt_convex2}, we observe that \begin{align*} H_\Phi \left(\mathsf{P}_t {\mathbf m}{f} \right) &= \Tr \Big[ \mathbb{E}_{\mu}\big[\Phi \left(\mathsf{P}_t {\mathbf m}{f} \right) \big] - \Phi\big( \mathbb{E}_{\mu} \left[ \mathsf{P}_t {\mathbf m}{f} \right] \big) \Big] \\ &= \Tr \Big[ \mathbb{E}_{\mu}\big[\Phi \left(\mathsf{P}_t {\mathbf m}{f} \right) \big] - \Phi\big( \mathbb{E}_{\mu} {\mathbf m}{f} \big) \Big] \\ &\leq \Tr \Big[ \mathbb{E}_{\mu}\big[\mathsf{P}_t\Phi \left( {\mathbf m}{f} \right) \big] - \Phi\big( \mathbb{E}_{\mu} {\mathbf m}{f} \big) \Big] \\ &= H_\Phi \left( {\mathbf m}{f} \right), \end{align*} where in the second and the last lines we use the property of the invariant measure $\mu$, Eq.~\eqref{eq:invariance}. Thus, the matrix $\Phi$-entropy functional is non-increasing along the flow of the semigroup and behaves like the classical $\Phi$-entropy functionals (see e.g.~\cite{Cha03}).
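This monotonicity can also be observed numerically. Below is a minimal sketch, assuming \texttt{numpy}; the two-state ensemble and the depolarizing evolution are illustrative choices (anticipating Section \ref{sec:exam}), and the sketch evaluates $H_\Phi$ with $\Phi(u) = u\log u$ along the flow.

```python
import numpy as np

def mlog(X):
    # Matrix logarithm of a positive definite Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.conj().T

def phi(X):
    # Phi(u) = u log u, lifted to matrices by functional calculus
    return X @ mlog(X)

def h_phi(states, probs):
    # H_Phi(f) = Tr( E_mu[Phi(f)] - Phi(E_mu[f]) )
    avg = sum(p * s for p, s in zip(probs, states))
    return np.trace(sum(p * phi(s) for p, s in zip(probs, states)) - phi(avg)).real

def depolarize(rho, t, r=1.0):
    # Illustrative Markov evolution: depolarizing flow toward I/d with rate r
    d = rho.shape[0]
    return np.exp(-r * t) * rho + (1 - np.exp(-r * t)) * np.eye(d) / d

rho1 = np.diag([0.9, 0.1])
rho2 = np.diag([0.1, 0.9])
probs = [0.5, 0.5]

vals = [h_phi([depolarize(rho1, t), depolarize(rho2, t)], probs)
        for t in (0.0, 0.5, 1.0, 2.0)]
# The matrix Phi-entropy is non-increasing along the semigroup
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```

At $t=0$ the value equals $\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}\log{\mathbf m}{\rho}] + \log 2$ for this ensemble, and the sequence decreases toward zero as both states approach the maximally mixed state.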
Moreover, we can compute the time derivative of the matrix $\Phi$-entropy functional, which can be viewed as the \emph{Boltzmann H-Theorem} for matrix-valued functions. \begin{prop} [de Bruijn's Property for Markov Semigroups] \label{prop:de} Fix a probability space $(\Omega,\Sigma,\mu)$. Let $\left\{ \mathsf{P}_t \right\}_{t\geq 0} $ be a Markov semigroup with infinitesimal generator $\mathsf{L}$ and carr\'e du champ operator $\mathbf{\Gamma}$. Assume that $\mu$ is an invariant probability measure for the semigroup. Then, for any suitable matrix-valued function ${\mathbf m}{f}:\Omega \rightarrow \mathbb{M}^\text{sa}$ in the Dirichlet domain $\mathcal{D}(\mathsf{L})$ with $\mu$ being its invariant measure, \begin{align} \label{eq:deBru1} \frac{\partial}{\partial t} H_\Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) = \Tr \mathbb{E}_{\mu} \big[ \Phi'\left(\mathsf{P}_t {\mathbf m}{f}\right) \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big] \leq 0,\quad \forall t\in\mathbb{R}_+. \end{align} When $\mu$ is symmetric, one has the following formulation: \begin{align} \label{eq:deBru2} \frac{\partial}{\partial t} H_\Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) = - \Tr\mathbb{E}_{\mu} \big[ \mathbf{\Gamma} \left(\Phi' \left( \mathsf{P}_t {\mathbf m}{f} \right), \mathsf{P}_t {\mathbf m}{f} \right) \big],\quad \forall t\in\mathbb{R}_+. \end{align} \end{prop} \begin{proof} The proof directly follows from the definition of the matrix $\Phi$-entropy functional and the properties of the Markov semigroup.
Namely, \begin{align*} \frac{\partial}{\partial t} H_\Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) &= \frac{\partial}{\partial t} \Tr \Big[ \mathbb{E}_{\mu} \big[ \Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) \big] - \Phi \big( \mathbb{E}_{\mu}\left[ \mathsf{P}_t {\mathbf m}{f} \right] \big) \Big] \\ &= \frac{\partial}{\partial t} \Tr \Big[ \mathbb{E}_{\mu} \big[ \Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) \big] - \Phi \big( \mathbb{E}_{\mu}\left[ {\mathbf m}{f} \right] \big) \Big] \\ &= \frac{\partial}{\partial t} \Tr\mathbb{E}_{\mu} \big[ \Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) \big] \\ &= \Tr \mathbb{E}_{\mu} \big[ \mathsf{D}\Phi\left[\mathsf{P}_t {\mathbf m}{f} \right] \big( \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big) \big]\\ &= \Tr \mathbb{E}_{\mu} \big[ \Phi'\left(\mathsf{P}_t {\mathbf m}{f}\right) \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big], \end{align*} where the second equality is due to the invariance of $\mu$, Eq.~\eqref{eq:invariance}. The fourth equality is due to the chain rule of the Fr\'echet derivative (see Proposition \ref{prop:properties}) and Eq.~\eqref{eq:partial_Pt}. We obtain the last identity by Proposition \ref{prop:trace_Petz}. Proposition \ref{prop:L} yields $\mathsf{D}\Phi\left[\mathsf{P}_t {\mathbf m}{f} \right] \big( \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big) \preceq \mathsf{L}\Phi(\mathsf{P}_t {\mathbf m}{f})$. By the invariance of $\mu$, we deduce the non-positivity of Eq.~\eqref{eq:deBru1}: \begin{align*} \mathbb{E}_{\mu} \big[ \mathsf{D}\Phi\left[\mathsf{P}_t {\mathbf m}{f} \right] \big( \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big) \big] \preceq \mathbb{E}_{\mu} \big[ \mathsf{L}\Phi(\mathsf{P}_t {\mathbf m}{f}) \big] = {\mathbf m}{0}.
\end{align*} The symmetric case \eqref{eq:deBru2} follows by further applying the integration by parts formula, Eq.~\eqref{eq:by_part}, i.e.~ \begin{align*} \Tr \Big[ \mathbb{E}_{\mu} \big[{\mathbf m}{\Gamma}({\mathbf m}{g},{\mathbf m}{h}) \big] \Big] &= -\frac12 \Tr \Big[ \mathbb{E}_{\mu} \big[ {\mathbf m}{g} \cdot \mathsf{L}({\mathbf m}{h}) + \mathsf{L}({\mathbf m}{h}) \cdot {\mathbf m}{g} \big]\Big] \\ &= - \Tr \Big[ \mathbb{E}_{\mu} \big[ {\mathbf m}{g} \cdot \mathsf{L}({\mathbf m}{h}) \big] \Big], \end{align*} where we apply the cyclic property of the trace function. Hence, Eq.~\eqref{eq:deBru2} follows by taking ${\mathbf m}{g} \equiv \Phi'(\mathsf{P}_t {\mathbf m}{f})$ and ${\mathbf m}{h} \equiv \mathsf{P}_t {\mathbf m}{f}$. \end{proof} In the following, we first give the definitions of the spectral gap inequalities and logarithmic Sobolev inequalities related to Markov semigroups. The main result---the relation between the exponential decays of matrix $\Phi$-entropies and these functional inequalities---is presented in Theorem \ref{theo:decay}. \begin{defn} [Spectral Gap Inequality for Matrix-Valued Functions] \label{defn:spectral} A Markov Triple $(\Omega,\mathbf{\Gamma},\mu)$ is said to satisfy a spectral gap inequality with a constant $C>0$, if for all matrix-valued functions ${\mathbf m}{f}:\Omega \rightarrow \mathbb{M}_d^\text{sa}$ in the Dirichlet domain $\mathcal{D}(\mathsf{L})$ with $\mu$ being its invariant measure, \[ \textnormal{Var}({\mathbf m}{f}) \leq C\mathcal{E}({\mathbf m}{f}), \] where \[ \textnormal{Var}({\mathbf m}{f})\triangleq \Tr\mathbb{E}_{\mu}\left[ \big( {\mathbf m}{f}- \mathbb{E}_{\mu}[{\mathbf m}{f}] \big)^2 \right] \] denotes the variance of the function ${\mathbf m}{f}$ with respect to the measure $\mu$ and $\mathcal{E}({\mathbf m}{f})\triangleq \Tr\left[ {\mathbf m}{\mathcal{E}}({\mathbf m}{f}) \right]$. The infimum of the constants among all the spectral gap inequalities is called the \emph{spectral gap constant}.
\end{defn} \begin{defn} [Logarithmic Sobolev Inequality for Matrix-Valued Functions] \label{defn:log} A Markov Triple $(\Omega,\mathbf{\Gamma},\mu)$ is said to satisfy a logarithmic Sobolev inequality LS$(C,B)$ with constants $C>0$, $B\geq 0$, if for all matrix-valued functions ${\mathbf m}{f}:\Omega \rightarrow \mathbb{M}_d^\text{sa}$ in the Dirichlet domain $\mathcal{D}(\mathsf{L})$ with $\mu$ being its invariant measure, \[ \Ent\left( {\mathbf m}{f}^2 \right) \leq B \, \Tr\mathbb{E}_{\mu} \big[ {\mathbf m}{f}^2 \big] + C\mathcal{E}({\mathbf m}{f}). \] The logarithmic Sobolev inequality is called \emph{tight} and is denoted by LS$(C)$ when $B=0$. When $B>0$, the logarithmic Sobolev inequality LS$(C,B)$ is called \emph{defective}. We also say that the semigroup satisfies a \emph{modified logarithmic Sobolev inequality} (MLSI) if there exists a constant $C$ such that \[ \Ent\left( {\mathbf m}{f} \right) \leq -C \Tr\mathbb{E}_{\mu} \big[\left( {\mathbf m}{I} + \log{\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big]. \] \end{defn} \begin{theo} [Exponential Decay of Matrix $\Phi$-Entropy Functionals of Markov Semigroups] \label{theo:decay} Given a Markov triple $(\Omega, {\mathbf m}{\Gamma}, \mu)$, the following two statements are equivalent: there exists a $\Phi$-Sobolev constant $C \in (0,\infty]$ such that \begin{align} \label{eq:exp1} H_\Phi({\mathbf m}{f}) \leq -C \Tr \mathbb{E}_{\mu} \big[ \Phi'\left( {\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big], \end{align} and \begin{align} \label{eq:exp2} H_\Phi( \mathsf{P}_t{\mathbf m}{f}) \leq \mathrm{e}^{-t/C} H_\Phi({\mathbf m}{f}), \quad \forall t\geq 0 \end{align} for all ${\mathbf m}{f}\in\mathcal{D}(\mathsf{L})$ with $\mu$ being its invariant measure. \end{theo} \begin{proof} The theorem is a consequence of de Bruijn's property, Proposition \ref{prop:de}.
More precisely, Eq.~\eqref{eq:deBru1} and the inequality \eqref{eq:exp1} imply \begin{align*} \frac{\partial}{\partial t} H_\Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) &= \Tr \mathbb{E}_{\mu} \big[ \Phi'\left(\mathsf{P}_t {\mathbf m}{f}\right) \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big]. \end{align*} Recall that the measure $\mu$ is also invariant for the function $\mathsf{P}_t{\mathbf m}{f}$, which therefore satisfies Eq.~\eqref{eq:exp1}. Hence, the above identity can be turned into the differential inequality \begin{align*} \frac{\partial}{\partial t} H_\Phi \left( \mathsf{P}_t {\mathbf m}{f} \right) &= \Tr \mathbb{E}_{\mu} \big[ \Phi'\left(\mathsf{P}_t {\mathbf m}{f}\right) \mathsf{L}\mathsf{P}_t {\mathbf m}{f} \big]\\ &\leq -\frac1C H_\Phi(\mathsf{P}_t {\mathbf m}{f}), \end{align*} from which Gr\"onwall's lemma gives \[ H_\Phi( \mathsf{P}_t{\mathbf m}{f}) \leq \mathrm{e}^{-t/C} H_\Phi( \mathsf{P}_0{\mathbf m}{f}) =\mathrm{e}^{-t/C} H_\Phi({\mathbf m}{f}). \] Conversely, differentiating inequality \eqref{eq:exp2} at $t=0$ gives the desired inequality \eqref{eq:exp1}. \end{proof} From Theorem \ref{theo:decay}, we immediately establish the equivalence between the spectral gap inequality and the exponential decay of variance functions. \begin{coro}[Exponential Decay of Variance and Spectral Gap Inequalities] \label{coro:spectral} A Markov Triple $(\Omega,\mathbf{\Gamma},\mu)$ satisfies the spectral gap inequality with constant $C$ if and only if, for matrix-valued functions ${\mathbf m}{f}:\Omega\rightarrow \mathbb{M}_d^\text{sa}$ in the Dirichlet domain $\mathcal{D}(\mathsf{L})$, one has \[ \textnormal{Var}\big( \mathsf{P}_t {\mathbf m}{f} \big) \leq \mathrm{e}^{-2t/C} \cdot \textnormal{Var}\left( {\mathbf m}{f} \right). \] \end{coro} \begin{proof} Recall that $H_{u\mapsto u^2} ({\mathbf m}{f}) \equiv \Var({\mathbf m}{f})$.
Hence, by taking $\Phi(u)=u^2$ in Theorem \ref{theo:decay}, the corollary follows since the right-hand side of Eq.~\eqref{eq:exp1} can be rephrased as: \[ \Tr \mathbb{E}_{\mu} \big[ \Phi'\left( {\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big] = 2 \Tr \mathbb{E}_{\mu} \big[ {\mathbf m}{f} \cdot \mathsf{L} {\mathbf m}{f} \big] = - 2 \Tr\mathbb{E}_{\mu} \big[ \mathbf{\Gamma}\left({\mathbf m}{f}\right) \big] = -2 \mathcal{E}({\mathbf m}{f}); \] that is, the spectral gap constant $C$ corresponds to the $\Phi$-Sobolev constant $C/2$, which yields the decay rate $\mathrm{e}^{-2t/C}$. \end{proof} Similarly, by taking $\Phi(u) = u\log u$, we have the equivalence between the modified log-Sobolev inequalities and the exponential decay in entropy functionals. \begin{coro}[Exponential Decay of Entropy and Modified Log-Sobolev Inequalities] \label{coro:Ent} A Markov Triple $(\Omega,\mathbf{\Gamma},\mu)$ satisfies the modified log-Sobolev inequality with constant $C$ if and only if, for matrix-valued functions ${\mathbf m}{f}:\Omega\rightarrow \mathbb{M}_d^\text{sa}$ in the Dirichlet domain $\mathcal{D}(\mathsf{L})$, one has \[ \textnormal{Ent}\big( \mathsf{P}_t {\mathbf m}{f} \big) \leq \mathrm{e}^{-t/C} \cdot \textnormal{Ent}\left( {\mathbf m}{f} \right). \] \end{coro} \section{Time Evolutions of a Quantum Ensemble} \label{sec:exam} In this section, we discuss the applications of Markov semigroups in quantum information theory and study a special case of the Markov semigroup---the quantum unital channel. From the analysis in Section \ref{sec:main}, we demonstrate the exponential decays of the matrix $\Phi$-entropy functionals and give a tight bound on the monotonicity of the Holevo quantity: $\chi(\{\mu(x),\mathsf{T}_t ({\mathbf m}{\rho}(x))\}_x) \leq \mathrm{e}^{-t/C}\chi(\{\mu(x),{\mathbf m}{\rho}(x)\}_x)$, when $\mathsf{T}_t$ is a unital quantum dynamical semigroup \cite{Lin76}. To connect the whole machinery to quantum information theory, it is convenient to introduce some basic notation.
First of all, we will restrict to the following set of matrix-valued functions in this section: ${\mathbf m}{f}(x) = {\mathbf m}{\rho}_x$, where ${\mathbf m}{\rho}_x$ is a density operator (i.e.~a positive semi-definite matrix with unit trace) for all $x\in\Omega$. In other words, the function ${\mathbf m}{f}$ is a classical-quantum encoder that maps the classical message $x$ to a quantum state ${\mathbf m}{\rho}_x$. Therefore, a quantum ensemble is constructed from the classical-quantum encoder and the given measure $\mu$: $\mathcal{S}\triangleq \{ \mu(x), {\mathbf m}{\rho}_x\}_{x\in\Omega}$. The Holevo quantity of a quantum ensemble $\mathcal{S}$ is defined as \begin{align*} \chi(\mathcal{S}) &\triangleq -\Tr\big[ \overline{{\mathbf m}{\rho}} \log \overline{{\mathbf m}{\rho}} \, \big] + \int_{x\in\Omega} \Tr \big[ {\mathbf m}{\rho}_x \log {\mathbf m}{\rho}_x \big] \, \mu(\mathrm{d}x), \end{align*} where $\overline{{\mathbf m}{\rho}} \triangleq \int {\mathbf m}{\rho}_x \, \mu(\mathrm{d}x)$ denotes the average state. It is not hard to verify that the Holevo quantity is a special case of the matrix $\Phi$-entropy functionals \cite{CH1}: $ \chi(\mathcal{S})= \Ent({\mathbf m}{f}).$ If we impose the trace-preserving condition on the CP kernel $\int_{y\in\Omega} \mathsf{T}_t (x, \mathrm{d} y)$, it becomes a quantum unital channel. Consequently, the Markov semigroup $\{\mathsf{P}_t\}_{t\geq 0}$ acting on the matrix-valued function ${\mathbf m}{f}$ can be interpreted as a time evolution of the quantum ensemble $\mathcal{S}$, and the results for matrix-valued functions in Section \ref{sec:main} also apply to quantum ensembles. For each $t\in\mathbb{R}_+$, let the CP kernel be \begin{align*} \mathsf{T}_t(x,y) = \begin{cases} \mathsf{T}_t \; (\text{a quantum unital channel}), \; &\text{if } x = y \\ \mathsf{0} \; (\text{a zero map}), \; &\text{if } x\neq y.
\end{cases} \end{align*} The set of unital maps $\mathsf{T}_t: \mathbb{M}_d^\text{sa} \to \mathbb{M}_d^\text{sa}$ forms a \emph{quantum dynamical semigroup}, which satisfies the semigroup conditions: \begin{itemize} \item[(a)] $\mathsf{T}_0 ({\mathbf m}{X})= {\mathbf m}{X}$ for all ${\mathbf m}{X}\in\mathbb{M}_d^\text{sa}$. \item[(b)] The map $t\to \mathsf{T}_t ({\mathbf m}{X})$ is a continuous map from $\mathbb{R}_+$ to $\mathbb{M}_d^\text{sa}$. \item[(c)] The semigroup property: $\mathsf{T}_t \circ \mathsf{T}_s = \mathsf{T}_{s+t}$, for any $s,t\in\mathbb{R}_+$. \item[(d)] $\mathsf{T}_t ({\mathbf m}{I}) = {\mathbf m}{I}$ for any $t\in\mathbb{R}_+$, where ${\mathbf m}{I}$ is the identity matrix in $\mathbb{M}_d^\text{sa}$ (\emph{mass conservation}). \item[(e)] $\mathsf{T}_t$ is a positive map for any $t\in\mathbb{R}_+$. \end{itemize} We note that quantum dynamical semigroups have been studied in the context of quantum Markov processes (see Section \ref{ssec:related}). It is shown in \cite{Lin76, GKS76, AZ15} that any unital quantum dynamical semigroup is generated by a Liouvillian $\mathcal{L}: \mathbb{M}_d^\text{sa} \to \mathbb{M}_d^\text{sa}$ of the form \begin{align*} \mathcal{L}: {\mathbf m}{X} \mapsto \mathsf{\Psi}({\mathbf m}{X}) - \kappa{\mathbf m}{X} - {\mathbf m}{X}\kappa^\dagger, \end{align*} where $\kappa \in \mathbb{C}^{d\times d}$ and $\mathsf{\Psi}$ is a CP map such that $\mathsf{\Psi}({\mathbf m}{I}) = \kappa + \kappa^\dagger$. Therefore, each unital map can be expressed as $\mathsf{T}_t = \mathrm{e}^{t\mathcal{L}}$ for all $t\in\mathbb{R}_+$. The Markov semigroup acting on the matrix-valued function ${\mathbf m}{f}$ is hence defined by $\mathsf{P}_t {\mathbf m}{f} (x) = \mathsf{T}_t ( {\mathbf m}{f}(x))$ for all $x\in\Omega$.
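The semigroup conditions above and the representation $\mathsf{T}_t = \mathrm{e}^{t\mathcal{L}}$ can be checked numerically for a concrete Liouvillian. The following sketch assumes \texttt{numpy} and uses the depolarizing Liouvillian $\mathcal{L}({\mathbf m}{X}) = r(\Tr[{\mathbf m}{X}]\,{\mathbf m}{I}/d - {\mathbf m}{X})$ as an illustrative choice (not the general form above): it exponentiates the superoperator matrix of $\mathcal{L}$ and verifies unitality (d), the semigroup property (c), and the closed form of the resulting channel.

```python
import numpy as np

d, r = 2, 1.0
I = np.eye(d)

# Superoperator matrix K with vec(L(X)) = K vec(X) for the depolarizing
# Liouvillian L(X) = r(Tr[X] I/d - X); note Tr[X] = vec(I)^T vec(X).
v = I.reshape(-1, 1)
K = r * (v @ v.T / d - np.eye(d * d))

def T(t, X):
    # T_t = e^{tL}, computed by exponentiating the (symmetric) superoperator K
    w, U = np.linalg.eigh(K)
    E = (U * np.exp(t * w)) @ U.T
    return (E @ X.reshape(-1)).reshape(d, d)

X = np.array([[0.7, 0.2], [0.2, 0.3]])
t = 0.8

# (d) unitality: T_t(I) = I
assert np.allclose(T(t, I), I)
# (c) semigroup property: T_s(T_t(X)) = T_{s+t}(X)
assert np.allclose(T(0.3, T(0.5, X)), T(0.8, X))
# agreement with the closed form e^{-rt} X + (1 - e^{-rt}) Tr[X] I/d
assert np.allclose(T(t, X), np.exp(-r*t)*X + (1-np.exp(-r*t))*np.trace(X)*I/d)
```

The superoperator route generalizes to any Liouvillian whose matrix representation is available, while the closed form in the last assertion is specific to the depolarizing case.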
The invariant measure exists if \begin{align*} \int {\mathbf m}{f}(x) \, \mu(\mathrm{d} x) &= \int \mathsf{P}_t {\mathbf m}{f}(x) \, \mu(\mathrm{d}x) \\ &= \int \mathsf{T}_t ( {\mathbf m}{f}(x)) \, \mu(\mathrm{d}x) \\ &= \mathsf{T}_t \left( \int {\mathbf m}{f}(x) \, \mu(\mathrm{d}x) \right), \quad \forall t\in\mathbb{R}_+. \end{align*} In other words, the expectation $\mathbb{E}_\mu[{\mathbf m}{f}] = \int {\mathbf m}{f}(x) \, \mu(\mathrm{d} x)$ is a fixed point of the unital semigroup $\{\mathsf{T}_t\}_{t\geq0}$. From our main result, Theorem \ref{theo:decay}, we can establish the exponential decays of the matrix $\Phi$-entropy functionals through the quantum dynamical semigroup $\{\mathsf{T}_t\}_{t\geq 0}$: \begin{align*} H_\Phi({\mathbf m}{f}) \leq -C \Tr \mathbb{E}_\mu \big[ \Phi'\left( {\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big] \quad\text{if and only if} \quad H_\Phi\left( \mathsf{T}_t({\mathbf m}{f})\right) \leq \mathrm{e}^{-t/C} H_\Phi({\mathbf m}{f}), \quad \forall t\geq 0, \end{align*} where the infinitesimal generator is given by $\mathsf{L}{\mathbf m}{f}(x) = \mathcal{L}({\mathbf m}{f}(x))$, for all $x \in \Omega$. In the following, we consider the cases of the depolarizing and phase-damping channels, and demonstrate the exponential decay phenomenon when all the density operators converge to the same equilibrium. However, as will be shown in the case of the phase-damping channel, the $\Phi$-Sobolev constant is infinite when the density operators converge to different states. \subsection{Depolarizing Channel} Denote by ${\mathbf m}{\pi} \triangleq {\mathbf m}{I}/d$ the maximally mixed state on the Hilbert space $\mathbb{C}^d$, and let $r>0$ be a constant.
The quantum dynamical semigroup defined by the depolarizing channel is: \begin{align} \label{eq:pd} \mathsf{T}_t : {\mathbf m}{f}(x) \mapsto \mathrm{e}^{-rt} {\mathbf m}{f}(x) + \left( 1- \mathrm{e}^{-rt} \right) \Tr[{\mathbf m}{f}(x)] \cdot {\mathbf m}{\pi}, \quad \forall x\in\Omega,\, t\in\mathbb{R}_+. \end{align} It is not hard to verify that $\{\mathsf{T}_t\}_{t\geq 0}$ forms a Markov semigroup with a unique fixed point (also called the stationary state) $\Tr[{\mathbf m}{f}]{\mathbf m}{\pi}$. We assume $\mu$ is the invariant measure of ${\mathbf m}{f}$: $\mathsf{T}_t\left(\mathbb{E}_\mu[{\mathbf m}{f}]\right) = \mathbb{E}_\mu[{\mathbf m}{f}] = \Tr[{\mathbf m}{f}] {\mathbf m}{\pi}$. The infinitesimal generator and the Dirichlet form can be calculated as \begin{align*} \mathsf{L} {\mathbf m}{f} = \lim_{t\to 0} \frac1t \left( \mathsf{T}_t ({\mathbf m}{f}) - {\mathbf m}{f} \right) = r\left( \Tr[{\mathbf m}{f}]{\mathbf m}{\pi} - {\mathbf m}{f} \right); \end{align*} \begin{align*} {\mathbf m}{\mathcal{E}}({\mathbf m}{f}) = \frac12 \,\mathbb{E}_\mu \left[ \mathsf{L}{\mathbf m}{f}^2 - {\mathbf m}{f}\cdot\mathsf{L}{\mathbf m}{f} - \mathsf{L}{\mathbf m}{f}\cdot {\mathbf m}{f} \right] = \frac{r}2\left( \mathbb{E}_\mu[{\mathbf m}{f}^2] + \Tr\mathbb{E}_\mu[{\mathbf m}{f}^2]{\mathbf m}{\pi} - 2\left(\Tr[{\mathbf m}{f}]{\mathbf m}{\pi} \right)^2 \right). \end{align*} Now let the matrix-valued function correspond to a set of density operators---${\mathbf m}{f}(x):={\mathbf m}{\rho}_x$, $\forall x\in\Omega$, with the average state $\overline{{\mathbf m}{\rho}} = {\mathbf m}{\pi}$. 
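As a sanity check on the formulas above (a numerical sketch assuming \texttt{numpy}, not part of the derivation), one can verify the generator $\mathsf{L} {\mathbf m}{f} = r\left( \Tr[{\mathbf m}{f}]{\mathbf m}{\pi} - {\mathbf m}{f} \right)$ by finite differences, and compare $\Tr\mathbb{E}_\mu[\mathbf{\Gamma}({\mathbf m}{f})]$ with the value $r\big(\Tr\mathbb{E}_\mu[{\mathbf m}{f}^2] - \frac1d\big)$ obtained by taking the trace of the Dirichlet form for density operators.

```python
import numpy as np

d, r = 2, 0.7
pi = np.eye(d) / d

def T(t, X):
    # Depolarizing semigroup, Eq. (eq:pd)
    return np.exp(-r*t) * X + (1 - np.exp(-r*t)) * np.trace(X) * pi

def L(X):
    # Claimed infinitesimal generator L f = r(Tr[f] pi - f)
    return r * (np.trace(X) * pi - X)

def Gamma(X):
    # Carre du champ Gamma(f) = (1/2)(L(f^2) - f L(f) - L(f) f)
    return 0.5 * (L(X @ X) - X @ L(X) - L(X) @ X)

rho1 = np.array([[1., 0.], [0., 0.]])   # |0><0|
rho2 = np.array([[0., 0.], [0., 1.]])   # |1><1|
states, probs = [rho1, rho2], [0.5, 0.5]

# finite-difference check of the generator: L f = lim_{t->0} (T_t f - f)/t
t = 1e-6
assert np.allclose((T(t, rho1) - rho1) / t, L(rho1), atol=1e-5)

# Dirichlet form E(f) = Tr E_mu[Gamma(f)] against the trace of the closed form
E_direct = sum(p * np.trace(Gamma(s)) for p, s in zip(probs, states)).real
E_closed = r * (sum(p * np.trace(s @ s) for p, s in zip(probs, states)).real - 1/d)
assert np.isclose(E_direct, E_closed)
```

For this pure-state ensemble both expressions evaluate to $r/2$, in agreement with $\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X^2] = 1$.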
The constant $C_2$ in the spectral gap inequality is \begin{align} \label{eq:Var_const} C_2 = \sup_{{\mathbf m}{f}:\,\mathbb{E}_\mu[{\mathbf m}{f}] = {\mathbf m}{\pi}} \; \frac{\Var({\mathbf m}{f})}{\mathcal{E}({\mathbf m}{f})} = \sup_{{\mathbf m}{\rho}_X:\,\overline{{\mathbf m}{\rho}} = {\mathbf m}{\pi}} \; \frac{2}{r} \cdot \frac{\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X^2] - \frac1d}{2\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X^2] - \frac2d} = \frac1r, \end{align} where we denote by $X$ the random variable such that $\Pr(X=x)= \mu(x)$ for all $x\in\Omega$. Hence, the spectral gap constant is $C_2=\frac1r$, and we have the exponential decay of the variance from Corollary \ref{coro:spectral}: \begin{align} \label{eq:pd_var} \Var(\mathsf{T}_t({\mathbf m}{\rho}_X)) \leq \mathrm{e}^{-2rt} \cdot \Var({\mathbf m}{\rho}_X). \end{align} Similarly, the modified log-Sobolev constant $C_\chi$ can be calculated by \begin{align} \label{eq:Ent_const} \begin{split} C_\chi &= \sup_{{\mathbf m}{f}:\,\mathbb{E}_\mu[{\mathbf m}{f}] = {\mathbf m}{\pi}} \; \frac{\Ent({\mathbf m}{f})}{-\Tr\mathbb{E}_\mu\left[ ({\mathbf m}{I}+\log{\mathbf m}{f})\mathsf{L}{\mathbf m}{f}\right]} \\ &= \sup_{{\mathbf m}{\rho}_X:\,\overline{{\mathbf m}{\rho}} = {\mathbf m}{\pi}} \; \frac1{r}\cdot \frac{\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X\log {\mathbf m}{\rho}_X] + \log d}{\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X\log {\mathbf m}{\rho}_X] - \frac{\Tr\mathbb{E}_\mu[\log {\mathbf m}{\rho}_X]}{d}}. \end{split} \end{align} In the following proposition, we show that $C_\chi = \frac{1}{2r}$ when $d=2$. Therefore, we are able to establish the exponential decay of the Holevo quantity. \begin{prop} \label{prop:depolarizing} Consider a Hilbert space $\mathbb{C}^2$. Denote the quantum dynamical semigroup of the depolarizing channel by \begin{align*} \mathsf{T}_t : {\mathbf m}{\rho} \mapsto \mathrm{e}^{-rt} {\mathbf m}{\rho} + \left( 1- \mathrm{e}^{-rt} \right) \cdot {\mathbf m}{\pi}, \quad t\in\mathbb{R}_+.
\end{align*} For any quantum ensemble on $\mathbb{C}^2$ with the average state being ${\mathbf m}{\pi}$, the modified log-Sobolev constant is $C_\chi = \frac{1}{2r}$. Moreover, we have \begin{align} \label{eq:pd_Ent} \chi\left(\{\mu(x), \mathsf{T}_t\left( {\mathbf m}{\rho}_x\right)\}_{x\in\Omega} \right) \leq \mathrm{e}^{-2rt} \cdot \chi(\{\mu(x), {\mathbf m}{\rho}_x\}_{x\in\Omega}). \end{align} \end{prop} The proof can be found in Appendix \ref{proof:Prop15}. \medskip We simulate the depolarizing qubit channel with the initial states ${\mathbf m}{\rho}_1 = |0\rangle \langle 0|$, ${\mathbf m}{\rho}_2 = |1\rangle \langle 1|$ and the uniform distribution in Figure \ref{fig:depolarizing}. The blue dashed curve and the red solid curve show that the upper bounds for the exponential decays in Eqs.~\eqref{eq:pd_Ent} and \eqref{eq:pd_var} are quite tight. We remark that if every state ${\mathbf m}{\rho}_x$ in the ensemble goes through a different depolarizing channel with rate $r_x$, i.e. \begin{align*} \mathsf{T}_t(x,x) = \mathsf{T}_t^x : {\mathbf m}{f}(x) \mapsto \mathrm{e}^{-r_x t} {\mathbf m}{f}(x) + \left( 1- \mathrm{e}^{-r_x t} \right) \Tr[{\mathbf m}{f}(x)] \cdot {\mathbf m}{\pi}, \end{align*} then the Sobolev constants $C_2$ and $C_\chi$ will be dominated by the channel with the minimal rate $r_\text{inf} := \inf_{x\in\Omega} r_x$.
Namely, the spectral gap constant in Eq.~\eqref{eq:Var_const} becomes \begin{align*} C_2 = \sup_{{\mathbf m}{\rho}_X:\, \overline{{\mathbf m}{\rho}} = {\mathbf m}{\pi}} \; \frac{\Tr\mathbb{E}_\mu[ {\mathbf m}{\rho}_X^2] - \frac1d}{\Tr\mathbb{E}_\mu[r_X{\mathbf m}{\rho}_X^2] - \frac{\mathbb{E}_\mu[r_X]}d} \leq \frac{1}{r_\text{inf}}, \end{align*} and the modified log-Sobolev constant in Eq.~\eqref{eq:Ent_const} is \begin{align*} C_\chi \leq \frac{1}{r_\text{inf}} \cdot \frac{\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X\log {\mathbf m}{\rho}_X] + \log d}{\Tr\mathbb{E}_\mu[{\mathbf m}{\rho}_X\log {\mathbf m}{\rho}_X] - \frac{\Tr\mathbb{E}_\mu[\log {\mathbf m}{\rho}_X]}{d}}. \end{align*} \subsection{Phase-Damping Channel} Fix $d=2$, and denote the Pauli matrix by \begin{align*} {\mathbf m}{\sigma}_Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{align*} The quantum dynamical semigroup defined by the phase-damping channel is \begin{align*} \mathsf{T}_t : {\mathbf m}{f}(x) \mapsto \frac{\left(1+\mathrm{e}^{-rt}\right)}2 {\mathbf m}{f}(x) + \frac{\left(1-\mathrm{e}^{-rt}\right)}2 {\mathbf m}{\sigma}_Z{\mathbf m}{f}(x){\mathbf m}{\sigma}_Z, \quad \forall x\in\Omega,\, t\in\mathbb{R}_+ \end{align*} with the generator: \begin{align*} \mathsf{L}{\mathbf m}{f} = \lim_{t\to 0} \frac{1-\mathrm{e}^{-rt}}{2t} \left( {\mathbf m}{\sigma}_Z {\mathbf m}{f} {\mathbf m}{\sigma}_Z - {\mathbf m}{f} \right) = \frac{r}2 \left( {\mathbf m}{\sigma}_Z {\mathbf m}{f} {\mathbf m}{\sigma}_Z - {\mathbf m}{f} \right). \end{align*} It is well-known that any diagonal matrix (with respect to the computational basis) is a fixed point of the phase-damping channel $\mathsf{T}_t$. Now if we assume that the matrices $\mathsf{P}_t{\mathbf m}{f}(x)$ converge to different limits, i.e.~$\mathsf{T}_t({\mathbf m}{f}(x)) \neq \mathsf{T}_t({\mathbf m}{f}(y))$ for all $x\neq y$ and $t\in\mathbb{R}_+$, then the matrix $\Phi$-entropy functional $H_\Phi(\mathsf{P}_t {\mathbf m}{f})$ is non-zero for all $t\in\mathbb{R}_+$.
However, the infinitesimal generator approaches zero as $t$ goes to infinity, i.e.~ \begin{align*} \lim_{t\to \infty} \mathsf{L} \mathsf{P}_t {\mathbf m}{f} = \lim_{t\to \infty}\frac{r}2 \left( {\mathbf m}{\sigma}_Z \left(\mathsf{P}_t {\mathbf m}{f}\right) {\mathbf m}{\sigma}_Z - \mathsf{P}_t {\mathbf m}{f} \right) = {\mathbf m}{0}, \end{align*} which means that the $\Phi$-Sobolev constant $C$ in Theorem \ref{theo:decay} is infinite. In other words, the matrix $\Phi$-entropy $H_\Phi(\mathsf{P}_t {\mathbf m}{f})$ does not decay exponentially for the phase-damping channel. \begin{remark} The reason these two examples behave so differently is the uniqueness of the fixed point of the quantum dynamical semigroup $\mathsf{T}_t$. Since the depolarizing channel has a unique equilibrium state, all the matrices eventually converge to it. Hence, the Sobolev constants are finite, which leads to the exponential decay phenomenon. On the other hand, the phase-damping channel has multiple fixed points, which ensures that the matrix $\Phi$-entropy functionals never vanish. \end{remark} \begin{figure}[ht] \includegraphics[width=0.8\columnwidth]{depolarizing.pdf} \caption{This figure illustrates the exponential decay phenomenon of the variance and the Holevo quantity through the depolarizing qubit channel $(d=2)$ (see Eq.~\eqref{eq:pd}) with rate $r=1$. Assume ${\mathbf m}{\rho}_1 = |0\rangle\langle 0|$ and ${\mathbf m}{\rho}_2 = |1\rangle\langle 1|$ with uniform distribution. The blue dashed curve and the red solid curve represent the upper bounds of the Holevo quantity and the variance, respectively, i.e.~the right-hand sides of Eqs.~\eqref{eq:pd_Ent} and \eqref{eq:pd_var}. The actual variance and Holevo quantity through the time evolution of the depolarizing qubit channel are plotted by the `o' and `*' lines, which demonstrates the tightness of the exponential upper bounds.
} \label{fig:depolarizing} \end{figure} \section{The Statistical Mixture of the Markov Semigroup} \label{sec:exam2} In this section, we study a statistical mixing of Markov semigroups. The matrix-valued functions of interest are defined on the Boolean hypercube $\{0,1\}^n$; such functions arise in the context of Fourier analysis \cite{Wol08, FS08}. Moreover, a matrix hypercontractivity inequality has been established on this particular set of matrix-valued functions \cite{BRW08}. Our first example is the \emph{Markovian jump process} with transition rates $p$ from state $0$ to $1$ and $(1-p)$ from $1$ to $0$. We will calculate its convergence rate using the matrix Efron-Stein inequality \cite{CH1}. Second, we consider the statistical mixing of a quantum random graph where each vertex corresponds to a quantum state, and further bound its mixing time. \subsection{Markovian Jump Process on Symmetric Boolean Hypercube} \label{ssec:Jump} We consider a special case of the Markov semigroup induced by a classical Markov kernel: \begin{align} \label{eq:c_Markov} \mathsf{P}_t {\mathbf m}{f}(x) \triangleq \int_{y\in\Omega} {p}_t(x,\mathrm{d}y) {\mathbf m}{f}(y), \end{align} where $p_t(x,\mathrm{d} y)$ is a family of transition probabilities\footnote{For every $t\geq0$ and $x\in\Omega$, $p_t(x,\cdot)$ is a probability measure on $\Omega$, and $x\mapsto p_t(x,E)$ is measurable for every measurable set $E\in\Sigma$.} on $\Omega$ that satisfies the following Chapman-Kolmogorov identity: \[ \int_{y\in\mathrm{\Omega}} p_s(x,\mathrm{d}y) \, p_t(y,\mathrm{d}z) = p_{s+t} (x,\mathrm{d}z). \] In other words, the time evolution of the matrix-valued function ${\mathbf m}{f}$ is a statistical mixture according to Eq.~\eqref{eq:c_Markov}. Let the state space be a hypercube, i.e.~$\Omega\equiv \{0,1\}^n$, with the measure denoted by \[ \mu_{n,p}(x) = p^{\sum_{i=1}^n x_i} (1-p)^{\sum_{i=1}^n (1-x_i)}, \quad \forall x\in\{0,1\}^n.
\] We introduce the operator $\mathsf{\Delta}_i$ that acts on any matrix-valued function ${\mathbf m}{f}:\{0,1\}^n \to\mathbb{M}_d^\text{sa}$ as follows: \begin{align*} \mathsf{\Delta}_i {\mathbf m}{f} &= \begin{cases} (1-p) \mathsf{\nabla}_i {\mathbf m}{f}, &\text{if } x_i = 1\\ -p \mathsf{\nabla}_i {\mathbf m}{f}, &\text{if } x_i = 0 \end{cases} \\ &= {\mathbf m}{f} - \int {\mathbf m}{f}\, \mathrm{d} \mu_{1,p}(x_i), \end{align*} where \[ \mathsf{\nabla}_i {\mathbf m}{f} \triangleq {\mathbf m}{f}(x_1,\ldots,x_{i-1},1,x_{i+1},\ldots,x_n) - {\mathbf m}{f}(x_1,\ldots,x_{i-1},0,x_{i+1},\ldots,x_n). \] The semigroup $\{\mathsf{P}_t\}_{t\geq0}$ of the Markovian jump process is given by the generator $\mathsf{L}$ with transition rates $p$ from state $0$ to $1$ and $(1-p)$ from $1$ to $0$: \[ \mathsf{L} = -\sum_{i=1}^n \mathsf{\Delta}_i. \] Then, we are able to derive the rate of the exponential decay in variance functions. \begin{theo} [Exponential Decay of Variances for Symmetric Bernoulli Random Variables] \label{theo:Var_Ber} Given a Markov Triple $(\{0,1\}^n, \mathbf{\Gamma}, \mu_{n,p})$ of a Markovian jump process, one has \[ \textnormal{Var}\big( \mathsf{P}_t {\mathbf m}{f} \big) \leq \mathrm{e}^{-2t} \cdot \textnormal{Var}\left( {\mathbf m}{f} \right), \] for any matrix-valued function ${\mathbf m}{f}:\{0,1\}^n\rightarrow \mathbb{M}_d^\text{sa}$. \end{theo} \begin{proof} In Corollary \ref{coro:spectral} we show the equivalence between the exponential decay in variances and the spectral gap inequality (see Definition \ref{defn:spectral}). Therefore, it suffices to establish the spectral gap constant of the Markovian jump process. 
Notably, the spectral gap inequality of the Markov jump process is a special case of the \emph{matrix Efron-Stein inequality} in Ref.~\cite{CH1}: \begin{prop} [Matrix Efron-Stein Inequality {\cite[Theorem 4.1]{CH1}}] \label{prop:EF} For any measurable and bounded matrix-valued function ${\mathbf m}{f}:(\mathbb{M}_d^\text{sa})^n \rightarrow \mathbb{M}_d^\text{sa}$, we have \begin{align} \label{eq:EF} \textnormal{Var} ({\mathbf m}{f}) \leq \frac12 \Tr \mathbb{E} \left[ \sum_{i=1}^n \left( {\mathbf m}{f}(\underline{{\mathbf m}{X}}) - {\mathbf m}{f}\left(\widetilde{{\mathbf m}{X}}^{(i)}\right) \right)^2 \right], \end{align} where $\underline{{\mathbf m}{X}} \triangleq ({\mathbf m}{X}_1, \ldots, {\mathbf m}{X}_n) \in (\mathbb{M}_d^\text{sa})^n$ denotes an $n$-tuple random vector with independent elements, and $\widetilde{{\mathbf m}{X}}^{(i)} \triangleq ({\mathbf m}{X}_1,\ldots,{\mathbf m}{X}_{i-1}, {\mathbf m}{X}_i', {\mathbf m}{X}_{i+1}, \ldots,{\mathbf m}{X}_n)$ is obtained by replacing the $i$-th component of $\underline{{\mathbf m}{X}}$ by an independent copy ${\mathbf m}{X}_i'$ of ${\mathbf m}{X}_i$. \end{prop} Taking $\underline{{\mathbf m}{X}}$ to be an $n$-tuple Bernoulli random vector and observing that the right-hand side of Eq.~\eqref{eq:EF} coincides with $\Tr\left[ {\mathbf m}{\mathcal{E}({\mathbf m}{f})} \right]$ for the Markov jump process completes the proof. \end{proof} Similarly, the convergence rate of the exponential decay in entropy functionals can be calculated as follows. \begin{theo} [Exponential Decay of Matrix $\Phi$-Entropies for Symmetric Bernoulli Random Variables] \label{theo:Ent_Ber} Given a Markov Triple $(\{0,1\}^n, \mathbf{\Gamma}, \mu_{n,p})$ of a Markovian jump process, one has \[ \Ent\big( \mathsf{P}_t {\mathbf m}{f} \big) \leq \mathrm{e}^{-t} \cdot \Ent\left( {\mathbf m}{f} \right), \] for any matrix-valued function ${\mathbf m}{f}:\{0,1\}^n\rightarrow \mathbb{M}_d^\text{sa}$. 
\end{theo} \begin{proof} The theorem is equivalent to proving \[ \Ent\left( {\mathbf m}{f} \right) \leq - \Tr\mathbb{E} \big[\left( {\mathbf m}{I} + \log{\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big]. \] By virtue of the subadditivity property, we first establish the case $n=1$, i.e. \begin{align} \label{eq:Ent_Ber2} \Ent\left( {\mathbf m}{f} \right) \leq - \Tr\mathbb{E} \big[\left( {\mathbf m}{I} + \log{\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big], \quad \forall {\mathbf m}{f}:\{0,1\}\rightarrow \mathbb{M}_d^\text{sa}. \end{align} Taking $\Phi(u) = u \log u$, the first-order convexity property implies that \[ \Tr\big[ \Phi({\mathbf m}{Y}) - \Phi({\mathbf m}{X}) \big] \geq \Tr \big[ \mathsf{D} \Phi[{\mathbf m}{X}]\left( {\mathbf m}{Y}-{\mathbf m}{X} \right) \big], \quad \forall {\mathbf m}{X}, {\mathbf m}{Y} \in\mathbb{M}_d^\text{sa}. \] Let ${\mathbf m}{X}\equiv {\mathbf m}{f}$ and ${\mathbf m}{Y} \equiv \mathbb{E}{\mathbf m}{f}$. Then it follows that \[ \Tr\big[ \Phi(\mathbb{E}{\mathbf m}{f}) - \Phi({\mathbf m}{f}) \big] \geq \Tr \big[ \mathsf{D} \Phi[{\mathbf m}{f}]\left( \mathbb{E}{\mathbf m}{f}-{\mathbf m}{f} \right) \big], \quad \forall {\mathbf m}{f}:\{0,1\}\rightarrow \mathbb{M}_d^\text{sa} \] from which we apply the expectation again to obtain \begin{align} \label{eq:Ent_Ber3} \Tr\big[ \mathbb{E}\Phi({\mathbf m}{f}) - \Phi(\mathbb{E}{\mathbf m}{f}) \big] \leq \Tr \Big[ \mathbb{E} \big[ \mathsf{D} \Phi[{\mathbf m}{f}]\left( {\mathbf m}{f}-\mathbb{E}{\mathbf m}{f} \right) \big] \Big]. 
\end{align} Then by elementary manipulation, the right-hand side of Eq.~\eqref{eq:Ent_Ber3} leads to \begin{align*} \Tr \Big[ \mathbb{E} \big[ \mathsf{D} \Phi[{\mathbf m}{f}]\left( {\mathbf m}{f}-\mathbb{E}{\mathbf m}{f} \right) \big] \Big] &= \Tr \big[ p(1-p) \, \mathsf{D}\Phi[{\mathbf m}{f}(1)]\left({\mathbf m}{f}(1)-{\mathbf m}{f}(0) \right) + (1-p)p \, \mathsf{D}\Phi[{\mathbf m}{f}(0)]\left( {\mathbf m}{f}(0) - {\mathbf m}{f}(1) \right) \big]\\ &= - \Tr \mathbb{E}\big[ \Phi'({\mathbf m}{f})\cdot \mathsf{\Delta}_1 {\mathbf m}{f} \big], \end{align*} and hence we arrive at Eq.~\eqref{eq:Ent_Ber2}. Then the subadditivity of $\Phi$-entropy in Theorem \ref{theo:sub} yields \begin{align*} \Ent\left( {\mathbf m}{f} \right) &\leq \sum_{i=1}^n \mathbb{E} \Big[ \Ent^{(i)} \left( {\mathbf m}{f} \right) \Big] \\ &\leq - \sum_{i=1}^n \mathbb{E} \Big[ \Tr \mathbb{E}_i\big[ \Phi'({\mathbf m}{f})\cdot \mathsf{\Delta}_i {\mathbf m}{f} \big] \Big]\\ &= - \Tr\mathbb{E} \big[\left( {\mathbf m}{I} + \log{\mathbf m}{f}\right) \mathsf{L} {\mathbf m}{f} \big], \end{align*} which completes the proof. \end{proof} \subsection{Mixing Times of Quantum Random Graphs} \label{sec:application} In the following, we introduce a model of quantum states defined on a random graph and apply the above results to calculate the mixing time. Consider a directed graph $\Omega$ with finitely many vertices. Every arc $e=(x,y)$, $x,y\in\Omega$, of the graph corresponds to a non-negative weight $w({x,y})$ (with $y\neq x$), which represents the \emph{transition rate} from node $x$ to node $y$. Here we denote by $(L(x,y))_{x,y\in\Omega}$ the weight matrix, which satisfies $L(x,y)\geq 0$ for $x\neq y$. Moreover, a balance condition $\sum_{y\in\Omega} L(x,y) = 0$ for any $x\in\Omega$ is imposed. 
The Markov transition kernel can be constructed via the exponentiation of the weight matrix $L$ (see e.g.~\cite{Nor97, Bak06,BGL13}): \[ p_t(x,y) = \left( \mathrm{e}^{ t L} \right) (x,y), \] which stands for the probability of moving from node $x$ to node $y$ after time $t$. Now, each vertex $x$ of the graph is endowed with a density operator ${\mathbf m}{f}(x) = {\mathbf m}{\rho}_x$ on some fixed Hilbert space $\mathbb{C}^d$. The evolution of the quantum states in the graph is characterized by the Markov semigroup acting on the ensembles $\{{\mathbf m}{\rho}_x\}_{x\in\Omega}$ according to the rule: \begin{align} {\mathbf m}{\rho}_{t,x} \triangleq \mathsf{P}_t {\mathbf m}{f}(x) = \sum_{y\in\Omega} {\mathbf m}{\rho}_y \, p_t(x,y). \end{align} Thus ${\mathbf m}{\rho}_{t,x}$ is the quantum state at node $x$, mixed from the states at the other nodes according to the weights $p_t(x,y)$. It is not hard to observe that the measure $\mu$ is invariant for this Markov semigroup $\{\mathsf{P}_t\}_{t\geq 0}$ if \[ \mu(y) = \sum_{x\in\Omega} \mu(x) p_t (x,y), \; \forall y\in\Omega. \] We note that there always exists a probability measure satisfying the above equation. However, the probability measure is unique if and only if the Markov kernel $(p_t(x,y))$ is \emph{irreducible}\footnote{ A Markov kernel matrix is called irreducible if there exists a finite $t\geq 0$ such that $p_t(x,y)>0$ for all $x$ and $y$. In other words, it is possible to reach any state from any other state. However, the uniqueness of the invariant measure gets more involved when the state space $\Omega$ is uncountable. We refer the interested readers to reference \cite[Chapter 7]{Pra06} for further discussions. We also remark that it is still unclear whether the classical characterizations of the unique invariant measures can be directly extended to the case of matrix-valued functions. This problem is left as future work. } \cite{Nor97, Sal97}. 
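As a minimal numerical sketch of this construction (our own illustration, not part of the original analysis; it assumes the \texttt{numpy} library, and the two-node graph with its attached states is a hypothetical example), one can exponentiate a small weight matrix and watch the ensemble mix toward the average state:

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (sufficient for this example)."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

# weight matrix L of a two-node graph: non-negative off-diagonal rates,
# rows summing to zero (the balance condition)
L = np.array([[-1.0, 1.0],
              [1.0, -1.0]])
mu = np.array([0.5, 0.5])  # invariant measure of this kernel

# a density operator rho_x attached to each vertex
rhos = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def ensemble(t):
    """rho_{t,x} = sum_y p_t(x, y) rho_y with p_t = exp(t L)."""
    P = expm(t * L).real
    return [sum(P[x, y] * rhos[y] for y in range(2)) for x in range(2)]

def variance(ens):
    """Var(rho) = Tr[ sum_x mu(x) rho_x^2 - (sum_x mu(x) rho_x)^2 ]."""
    mean = sum(m * r for m, r in zip(mu, ens))
    second = sum(m * (r @ r) for m, r in zip(mu, ens))
    return float(np.trace(second - mean @ mean))

print(variance(ensemble(0.0)))  # 0.5
print(variance(ensemble(1.0)))  # 0.5 * exp(-4), about 0.00916
```

In this example the variance decays exactly exponentially, at a rate set by the spectral gap of $L$, in line with the mixing-time bounds below.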
As shown in Section \ref{sec:main}, all the states ${\mathbf m}{\rho}_{t,x} = \mathsf{P}_t {\mathbf m}{f}(x)$ will converge to the average state $\sum_{y\in\Omega} \mu(y)\cdot {\mathbf m}{\rho}_y =: \overline{{\mathbf m}{\rho}}$ as $t$ goes to infinity, where $\mu$ is the unique invariant measure for $\{\mathsf{P}_t\}_{t\geq 0}$. To measure how close the ensemble is to the average state $\overline{{\mathbf m}{\rho}}$, we exploit the matrix $\Phi$-entropy functionals (with respect to the invariant measure $\mu$) to capture the convergence rate. In particular, we choose $\Phi(u) = u^2$ and $\Phi(u) = u \log u$, which coincide with the variance function and the {Holevo quantity}, respectively. We define the $\mathbb{L}^2$ and Holevo mixing times as follows: \begin{defn} Let ${\mathbf m}{\rho}_t:x\mapsto {\mathbf m}{\rho}_{t,x}$ be the ensembles of quantum states after time $t$. The $\mathbb{L}^2$ and Holevo mixing times are defined as: \begin{align*} &\tau_2(\epsilon) \triangleq \inf\{ t: \Var({\mathbf m}{\rho}_t) \leq \epsilon\}\\ &\tau_{\chi}(\epsilon) \triangleq \inf\{ t: \chi({\mathbf m}{\rho}_t) \leq \epsilon\}. \end{align*} \end{defn} By applying our main result (Theorem~\ref{theo:decay}), we upper bound the mixing times of the Markov random graphs. \begin{coro} Let $C_2>0$ and $C_{\chi}>0$ be the spectral gap constant and the modified log-Sobolev constant of the Markov Triple $(\Omega, {\mathbf m}{\Gamma}, \mu)$. Then one has \begin{align} &\tau_2 (\epsilon) \leq \frac{C_2}2 \left( \log \Var({\mathbf m}{\rho}) + \log\frac1\epsilon \right) \label{eq:t_2}\\ &\tau_{\chi} (\epsilon) \leq {C_{\chi}} \left( \log \chi({\mathbf m}{\rho}) + \log\frac1\epsilon \right), \label{eq:t_chi} \end{align} where ${\mathbf m}{\rho}:x\mapsto {\mathbf m}{\rho}_x$ denotes the initial ensemble of the graph. \end{coro} \begin{proof} The corollary follows immediately from Theorem \ref{theo:decay}. Set $\Var({\mathbf m}{\rho}_t)=\epsilon$. 
Then we have \[ \epsilon \leq \mathrm{e}^{-2 \tau_2(\epsilon)/ C_2} \Var({\mathbf m}{\rho}), \] which implies the desired upper bound for the $\mathbb{L}^2$ mixing time. The upper bound for the mixing time of the Holevo quantity follows in a similar way. \end{proof} \begin{remark} Note that for every initial ensemble ${\mathbf m}{\rho}$, it follows that \begin{align*} \Var({\mathbf m}{\rho}) = \Tr \left[ \sum_{x\in\Omega} \mu(x) {\mathbf m}{\rho}_{x}^2 - \left( \sum_{x\in\Omega} \mu(x) {\mathbf m}{\rho}_{x} \right)^2 \right] \leq \max_{x\in\Omega} \mu(x)/d =: \mu^*/d. \end{align*} Equation \eqref{eq:t_2} can then be replaced by \[ \tau_2 (\epsilon) \leq \frac{C_2}2 \left( \log \frac{\mu^*}{d} + \log\frac1\epsilon \right). \] Moreover, it is well-known \cite{NC09} that the Holevo quantity $\chi(\{\mu(x), {\mathbf m}{\rho}_{x}\})$ is bounded by the Shannon entropy $H(\mu)$ of the probability distribution $\mu$. Hence, Eq. \eqref{eq:t_chi} can be rewritten as \begin{align*} \tau_{\chi} (\epsilon) \leq C_{\chi} \left( \log H(\mu) + \log\frac1\epsilon \right). \end{align*} \Endremark \end{remark} Consider the random graph generated by the Markovian jump process on a hypercube $\{0,1\}^n$ with probability $p=1/2$. Theorems \ref{theo:Var_Ber} and \ref{theo:Ent_Ber} give the spectral gap constant $C_2=1$ and the modified log-Sobolev constant $C_\chi=1$. Hence, for every initial ensemble ${\mathbf m}{\rho}$, the mixing times of this quantum random graph are \begin{align*} &\tau_2 (\epsilon) \leq \frac{1}2 \left( \log \Var({\mathbf m}{\rho}) + \log\frac1\epsilon \right) \\ &\tau_{\chi} (\epsilon) \leq \left( \log \chi({\mathbf m}{\rho}) + \log\frac1\epsilon \right). \end{align*} \section{Discussions} \label{sec:diss} Classical spectral gap inequalities and logarithmic Sobolev inequalities have proven to be a fundamental tool in analyzing Markov semigroups on real-valued functions. 
In this paper, we extend the definition of Markov semigroups to matrix-valued functions and investigate their equilibrium properties. Our main result shows that the matrix $\Phi$-entropy functionals decay exponentially along the Markov semigroup, with convergence rates determined by the coefficients of the matrix $\Phi$-Sobolev inequality \cite{CH1}. In particular, we establish the variance and entropy decays of the Markovian jump process using the subadditivity of matrix $\Phi$-entropies \cite{CT14} and tools from operator algebras. The Markov semigroup introduced in this paper is not only of independent interest in mathematics, but also has substantial applications in quantum information theory. In this work, we study the dynamical process of a quantum ensemble governed by the Markov semigroups, and analyze how the entropies of the quantum ensemble evolve in time. When the quantum dynamical process is a quantum unital map, our result yields a stronger version of the monotonicity of the Holevo quantity: $\chi(\{\mu(x),\mathsf{T}_t({\mathbf m}{\rho}_x)\}_x) \leq \mathrm{e}^{-t/C}\cdot\chi(\{\mu(x),{\mathbf m}{\rho}_x\}_x)$. \section*{Acknowledgements} MH would like to thank Matthias Christandl, Michael Kastoryano, Robert Koenig, Joel Tropp, and Andreas Winter for their useful comments. MH is supported by an ARC Future Fellowship under Grant FT140100574. MT is funded by a University of Sydney Postdoctoral Fellowship and acknowledges support from the ARC Centre of Excellence for Engineered Quantum Systems (EQUS).
\section{\label{sect.1}Introduction} The $5d$ transition-metal compounds have recently attracted much interest, since the interplay between the spin-orbit interaction (SOI) and the electron correlation strongly influences the electronic structures of such systems. A typical example is Sr$_2$IrO$_4$, which consists of two-dimensional IrO$_2$ layers, similar to the parent compound of cuprates, La$_2$CuO$_4$. \cite{Crawford1994} Sr$_2$IrO$_4$ is a canted antiferromagnet below 230 K. \cite{Crawford1994,Cao1998,Moon2006} The electronic structure has been calculated by the LDA+U method, \cite{Kim2008} and by the LDA combined with the dynamical mean-field theory. \cite{Arita2012} These studies have found that the system is an antiferromagnetic insulator with a finite gap for electron-hole pair creation, when the SOI is taken into account. A similar conclusion has been derived from a variational Monte-Carlo calculation for the Hubbard model. \cite{Watanabe2010} Such a spin-orbit-induced antiferromagnetic insulator can also be understood from the localized-electron picture. In contrast to the $(3d)^9$ configuration of the Cu atom in La$_2$CuO$_4$, five $5d$ electrons are present per Ir atom, and the energy of the $e_g$ orbitals is about 2 eV higher than that of the $t_{2g}$ orbitals due to the large crystal field. Therefore, one could regard the situation as one hole sitting in the $t_{2g}$ orbitals. The matrices of the orbital angular momentum operators with $L=2$ represented by the $t_{2g}$ states are the negative of those with $L=1$ represented by $|p_x\rangle$, $|p_y\rangle$, $|p_z\rangle$, if they are identified by $|yz\rangle$, $|zx\rangle$, $|xy\rangle$, respectively, where $yz$, $zx$, and $xy$ designate $t_{2g}$ orbitals. 
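This correspondence can be checked by a short numerical calculation (our own sketch, assuming only the \texttt{numpy} library; the value of the spin-orbit coupling is illustrative). Diagonalizing $\zeta_{\rm SO}\,{\bf L}\cdot{\bf S}$ in the six-dimensional $t_{2g}\otimes{\rm spin}$ space, with ${\bf L}$ taken as the negative of the $L=1$ matrices, yields a quartet and a doublet; the doublet becomes the lowest-energy level for a hole:

```python
import numpy as np

zeta = 0.36  # spin-orbit coupling in eV (a representative value)

# L = 1 matrices in the (p_x, p_y, p_z) basis: (L_k)_{ij} = -i * eps_{kij}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
L_p = [-1j * eps[k] for k in range(3)]

# t2g matrices in the (yz, zx, xy) basis: the negative of the p-orbital ones
L_t2g = [-L for L in L_p]
S = [0.5 * np.array(m) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# H_SO = zeta * L . S on the six-dimensional space (yz, zx, xy) x (up, down)
H_SO = zeta * sum(np.kron(L_t2g[k], S[k]) for k in range(3))
w = np.sort(np.linalg.eigvalsh(H_SO))
print(np.round(w, 3))  # a quartet at -zeta/2 and a doublet at +zeta

# the stated j_eff = 1/2 state (1/sqrt(3))(|xy,up> + |yz,down> + i|zx,down>)
v = np.array([0, 1, 0, 1j, 1, 0]) / np.sqrt(3)
print(np.allclose(H_SO @ v, zeta * v))  # True
```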
Therefore, under the SOI, the lowest-energy states of a hole are Kramers' doublet with the effective total angular momentum $j_{\rm eff}=1/2$: $\frac{1}{\sqrt{3}}\left(|yz,\mp\sigma\rangle\pm i|zx,\mp\sigma\rangle \pm|xy,\pm\sigma\rangle \right)$, where the spin component $\sigma=\uparrow$ or $\downarrow$. \cite{Kim2008,Kim2009} The degeneracy is lifted by the inter-site interaction. In the limit of strong Coulomb interaction, the effective spin Hamiltonian describing the low-lying excitations is derived by second-order perturbation theory with respect to the electron-transfer terms. Introducing the isospin operators acting on the doublet, we obtain the Heisenberg Hamiltonian with an antiferromagnetic coupling, consistent with the above findings.\cite{Jackeli2009,Jin2009,Kim2012} The effect of lattice distortion has also been analyzed.\cite{Wang2011} More importantly, it has been pointed out that small anisotropic terms emerge in addition to the isotropic term, when Hund's coupling is taken into account for the two-hole states in the intermediate state of the second-order perturbation.\cite{Jackeli2009,Kim2012} Such anisotropic terms are expected to substantially modify the excitation spectra, but have not been fully investigated yet. The purpose of this paper is to study such effects theoretically. We derive the effective spin Hamiltonian from the multi-orbital Hubbard model by taking full account of the Coulomb interaction in the intermediate state of the second-order perturbation. We obtain the exchange couplings consistent with the previous studies. \cite{Jackeli2009,Kim2012} Since the anisotropic terms favor the staggered moment lying in the $ab$ plane, we assume that the staggered moment points along the $a$ axis. 
Expanding the spin operators in terms of boson operators within the lowest order of $1/S$,\cite{Holstein1940} we introduce the Green's functions for the boson operators, which include the so-called anomalous type.\cite{Bulut1989} We solve the coupled equations of motion to obtain the Green's functions. It is found that the ``spin waves'' in the isotropic Heisenberg model are split by the anisotropic terms into two modes with slightly different energies over the entire Brillouin zone. At the $\Gamma$ point, one mode has zero excitation energy while the other has a finite energy. These excitation modes are to be clarified in future experiments. This paper is organized as follows. In Sec. II, we introduce the multi-orbital Hubbard model on the square lattice, and derive the effective spin Hamiltonian by the second-order perturbation. In Sec. III, we expand the spin operators in terms of boson operators, and solve the Green's functions for the boson operators. The excitation modes are discussed. Section IV is devoted to the concluding remarks. \section{\label{sect.2}Spin Hamiltonian for S\lowercase{r}$_2$I\lowercase{r}O$_4$} \subsection{Multi-orbital Hubbard model} The crystal structure of Sr$_2$IrO$_4$ belongs to the K$_2$NiF$_4$ type. \cite{Crawford1994} The oxygen octahedra surrounding an Ir atom are rotated about the crystallographic $c$ axis by about $11^{\circ}$. To take account of this crystal distortion, we describe the basis states in local coordinate frames rotated in accordance with the rotation of the octahedra.\cite{Jackeli2009,Wang2011} Since the crystal-field energy of the $e_g$ orbitals is about 2 eV higher than that of the $t_{2g}$ orbitals, we consider only the $t_{2g}$ orbitals. Electrons transfer between them at neighboring Ir sites in the square lattice. 
Then, the multi-orbital Hubbard model is defined by \begin{equation} H = H_{\rm kin}+H_{\rm SO}+H_{\rm I}, \end{equation} with \begin{eqnarray} H_{\rm kin} & = & \sum_{\left\langle i,i'\right\rangle } \sum_{n,n'\sigma}\left(t_{in,i'n'}d_{in\sigma}^{\dagger}d_{i'n'\sigma}+ {\rm H.c.} \right),\\ H_{\rm SO} & = & \zeta_{\rm SO}\sum_{i,n,n',\sigma,\sigma'} d_{in\sigma}^{\dagger}({\bf L})_{nn'} \cdot({\bf S})_{\sigma\sigma'}d_{in'\sigma'}, \\ H_{\rm I} & = & U\sum_{i,n} n_{in\uparrow}n_{in\downarrow} \nonumber \\ &+&\sum_{i,n<n'\sigma}[U' n_{in\sigma}n_{in'-\sigma} + (U'-J) n_{in\sigma}n_{in'\sigma}] \nonumber\\ &+&J\sum_{i,n\neq n',\sigma} (d_{in\uparrow}^{\dagger}d_{in'\downarrow}^{\dagger} d_{in\downarrow}d_{in'\uparrow} +d_{in\uparrow}^{\dagger}d_{in\downarrow}^{\dagger} d_{in'\downarrow}d_{in'\uparrow}), \nonumber \\ \end{eqnarray} where $d_{in\sigma}$ denotes the annihilation operator of an electron with orbital $n$ ($=yz,zx,xy$) and spin $\sigma$ at the Ir site $i$. The $H_{\rm kin}$ represents the kinetic energy with transfer integral $t_{in,i'n'}$. An electron on the $xy$ orbital can transfer to the $xy$ orbital at the nearest-neighbor sites through the intervening O $2p$ orbitals, while an electron on the $yz$ ($zx$) orbital can transfer to the $yz$ ($zx$) orbital at the nearest-neighbor sites only along the $y$ ($x$) direction. The $H_{\rm SO}$ represents the spin-orbit interaction of the $5d$ electrons, with ${\bf L}$ and ${\bf S}$ denoting the orbital and spin angular momentum operators. The $H_{\rm I}$ represents the Coulomb interaction between electrons, which satisfies $U=U'+2J$.\cite{Kanamori1963} \subsection{Strong coupling approach} Five electrons occupy the $t_{2g}$ orbitals at each Ir atom. This configuration can be regarded as one \emph{hole}. 
The matrices of the orbital angular momentum operators with $L=2$ represented by the $t_{2g}$ states are the negative of those with $L=1$ represented by $|p_x\rangle$, $|p_y\rangle$, $|p_z\rangle$, if the bases are identified by $|yz\rangle$, $|zx\rangle$, $|xy\rangle$, respectively. Therefore, the six-fold degenerate states are split into the states with the effective angular momentum $j_{\rm eff}=1/2$ and with $3/2$ under $H_{\rm SO}$. The lowest-energy states are the doublet with $j_{\rm eff}=1/2$, given by \begin{eqnarray} \left|+\frac{1}{2} \right\rangle &=& \frac{1}{\sqrt{3}}\left[ |xy\uparrow \rangle + |yz\downarrow \rangle + i|zx\downarrow\rangle \right],\\ \left|-\frac{1}{2} \right\rangle &=& \frac{1}{\sqrt{3}}\left[ -|xy\downarrow \rangle + |yz\uparrow\rangle - i|zx\uparrow\rangle\right]. \end{eqnarray} \begin{figure} \includegraphics[width=8.0cm]{fig.1.eps} \caption{\label{fig.process} (Color online) The second-order process with $H_{\rm kin}$. In the initial and final states, one hole sits at site 1 and another sits at site 2. In the intermediate state, two holes are on the same site, where the Coulomb interaction works. } \end{figure} We start from one hole sitting at site 1 and another at site 2, and carry out the second-order perturbation calculation, as illustrated in Fig.~\ref{fig.process}. The transfer integral between $xy$ orbitals at the nearest-neighbor sites may in general be different from those between $yz$ orbitals and between $zx$ orbitals, since the orbitals are defined in the local coordinate frames. Nevertheless, we assume them to take the same value, denoted by $t_1$, since the difference merely gives rise to minor corrections to the values of $J'_{z}$ and $J'_{xy}$ in Eqs. (\ref{eq.Hz}) and (\ref{eq.Hxy}). In the intermediate state, we take full account of the Coulomb interaction between the two holes. We numerically evaluate the second-order energy given in a $4\times 4$ matrix form. 
This matrix is expressed in terms of the spin operators ${\bf S}$ acting on the doublet. It is given by \begin{equation} H(1,2)=C+H^{(0)}(1,2) + H_{z}^{(1)}(1,2) + H_{xy}^{(1)}(1,2), \end{equation} with \begin{eqnarray} H^{(0)}(1,2) &=& J_{\rm ex}{\bf S}_1\cdot{\bf S}_2 ,\\ H_z^{(1)}(1,2) &=& J'_z S_1^z S_2^z, \label{eq.Hz}\\ H_{xy}^{(1)}(1,2) &=& \textrm{sgn}(1,2) J'_{xy} \left(S_1^x S_2^x - S_1^y S_2^y\right), \label{eq.Hxy} \end{eqnarray} where $\textrm{sgn}(i,j)$ gives $+1$ ($-1$) when the bond between the sites $i$ and $j$ is along the $x$ ($y$) axis, and $C$ is a constant. Table \ref{table.1} shows the calculated coupling constants for various parameter sets of the Hubbard model. Both $J'_z$ and $J'_{xy}$ vanish without Hund's coupling $J$; indeed, one can check that they are proportional to $J$. Note that $J'_{z}$ is negative and its absolute value is equal to $J'_{xy}$ within the significant figures. These tendencies are consistent with the previous studies. \cite{Jackeli2009,Kim2012} \begin{table} \caption{\label{table.1} Exchange couplings for various parameter sets, in units of eV. 
The transfer integral and the spin-orbit coupling are fixed at $t_1=0.36$ and $\zeta_{\rm SO}=0.36$.} \begin{ruledtabular} \begin{tabular}{rrrrrr} $U$ & $U'$ & $J$ & $J_{\rm ex}$ & $J'_{\rm z}$ & $J'_{\rm xy}$ \\ \hline $1.4$ & $1.4$ & $0$ & $0.165$ & $0$ & $0$ \\ $1.4$ & $0.98$ & $0.21$ & $0.223$ & $-0.0055$ & $0.0055$ \\ \hline $2.2$ & $2.2$ & $0$ & $0.105$ & $0$ & $0$ \\ $2.2$ & $1.78$ & $0.21$ & $0.124$ & $-0.0023$ & $0.0023$ \\ $2.2$ & $1.54$ & $0.33$ & $0.144$ & $-0.0046$ & $0.0046$ \\ \hline $3.0$ & $3.0$ & $0$ & $0.077$ & $0$ & $0$ \\ $3.0$ & $2.34$ & $0.33$ & $0.095$ & $-0.0023$ & $0.0023$ \\ $3.0$ & $2.1$ & $0.45$ & $0.107$ & $-0.0039$ & $0.0039$ \\ \end{tabular} \end{ruledtabular} \end{table} \section{Excitation Spectra} As a next step from the analysis of the preceding section, we consider the following spin Hamiltonian on the square lattice: \begin{equation} H= H^{(0)} + H^{(1)}, \end{equation} with \begin{eqnarray} H^{(0)}&=&\sum_{\langle i,j\rangle} H^{(0)}(i,j), \\ H^{(1)}&=&\sum_{\langle i,j\rangle} H_z^{(1)}(i,j) + H_{xy}^{(1)}(i,j). \end{eqnarray} The ground state takes the conventional antiferromagnetic spin configuration in the absence of the anisotropic term $H^{(1)}$, and the direction of the staggered moment is not determined. The $H_{z}^{(1)}$ makes the direction favor the $xy$ plane when $J'_z < 0$. This antiferromagnetic order breaks the rotational invariance of the isospin space in the $ab$ plane. We assume that the staggered moment points along the $x$ axis.\cite{Cao1998} It should be noted here that the antiferromagnetic order in the local coordinate frames indicates the presence of a weak ferromagnetic moment in the global coordinate frame. 
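The preference for an in-plane moment can be illustrated by a classical energy estimate (our own numerical sketch, assuming only \texttt{numpy}; the $4\times 4$ periodic cluster and the $S=1/2$ isospin length are our choices, with couplings taken from the last row of Table \ref{table.1}). The $\textrm{sgn}(i,j)$ factor averages out over the $x$ and $y$ bonds, while $J'_z<0$ penalizes the out-of-plane configuration:

```python
import numpy as np

# couplings (eV) from the last row of Table I; S = 1/2 isospins
J_ex, Jz_p, Jxy_p, S = 0.107, -0.0039, 0.0039, 0.5
Lx = Ly = 4  # small periodic cluster (even sizes keep the sublattice structure)

def neel_energy(n):
    """Classical energy per site for a staggered unit moment n."""
    E = 0.0
    for x in range(Lx):
        for y in range(Ly):
            s1 = S * n * (-1) ** (x + y)  # A/B sublattice sign
            # one x-bond (sgn = +1) and one y-bond (sgn = -1) per site
            for dx, dy, sgn in [(1, 0, +1), (0, 1, -1)]:
                s2 = S * n * (-1) ** (x + dx + y + dy)
                E += (J_ex * s1 @ s2 + Jz_p * s1[2] * s2[2]
                      + sgn * Jxy_p * (s1[0] * s2[0] - s1[1] * s2[1]))
    return E / (Lx * Ly)

E_in = neel_energy(np.array([1.0, 0.0, 0.0]))   # moment along x (in plane)
E_out = neel_energy(np.array([0.0, 0.0, 1.0]))  # moment along z
print(E_in, E_out)  # the in-plane configuration has the lower energy
```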
Labeling the $x$, $y$, and $z$ axes as $z'$, $x'$, and $y'$ axes, respectively, we express the spin operators by boson operators within the lowest order of the $1/S$-expansion:\cite{Holstein1940} \begin{eqnarray} S_i^{z'} &=& S - a_i^\dagger a_i , \quad S_i^{x'}+iS_i^{y'} = \sqrt{2S}a_i , \label{eq.boson1}\\ S_j^{z'} &=& -S + b_j^\dagger b_j , \quad S_j^{x'}+iS_j^{y'} = \sqrt{2S}b_j^\dagger ,\label{eq.boson2} \end{eqnarray} where $a_i$ and $b_j$ are boson annihilation operators, and $i$ ($j$) refers to sites on the A (B) sublattice. Using Eqs.~(\ref{eq.boson1}) and (\ref{eq.boson2}), $H^{(0)}$ and $H^{(1)}$ may be expressed as \begin{eqnarray} H^{(0)} &=& J_{\rm ex}S\sum_{\langle i,j \rangle}( a_i^\dagger a_i + b^\dagger_{j} b_{j} + a_i b_{j} + a_i^\dagger b_{j}^\dagger), \label{eq.h0} \\ H^{(1)} &=& J'_z S\frac{1}{2}\sum_{\langle i,j \rangle} (a_i - a_i^\dagger)(b_j - b_j^\dagger) \nonumber\\ &+&J'_{xy}S \sum_{\langle i,j \rangle}\textrm{sgn}(i,j) (a_i^{\dagger}a_i + b_j^{\dagger}b_j) \nonumber \\ & -&J'_{xy}S\frac{1}{2} \sum_{\langle i,j \rangle} \textrm{sgn}(i,j)(a_i +a_i^{\dagger})(b_j^{\dagger}+b_j), \label{eq.h1} \end{eqnarray} where an unimportant constant term has been neglected. The second term in Eq.~(\ref{eq.h1}) cancels upon summation over the bonds, since $\textrm{sgn}(i,j)$ takes opposite signs on the $x$ and $y$ bonds. Then we introduce the Fourier transforms of the boson operators in the magnetic Brillouin zone, \begin{eqnarray} a({\bf k}) &=& \sqrt{\frac{2}{N}}\sum_{i}a_{i} \exp(-i{\bf k}\cdot{\bf r}_i) , \\ b({\bf k}) &=& \sqrt{\frac{2}{N}}\sum_{j}b_{j} \exp(-i{\bf k}\cdot{\bf r}_j) , \label{eq.Fouriers2} \end{eqnarray} where $N$ is the number of sites, and $i$ ($j$) runs over the A (B) sublattice. 
We obtain \begin{eqnarray} H^{(0)} &=& J_{\rm ex}Sz\sum_{\bf k} \big[ a^{\dagger}({\bf k})a({\bf k}) + b^{\dagger}({\bf k})b({\bf k}) \nonumber \\ & +& \gamma({\bf k})[a^{\dagger}({\bf k})b^{\dagger}({\bf -k}) +a({\bf k})b({\bf -k})] \big], \\ H_{z}^{(1)} &=& J'_{z}(2S)\sum_{\bf k} \gamma({\bf k}) [a({\bf k})-a^{\dagger}({\bf -k})][b({\bf -k}) - b^{\dagger}({\bf k})], \nonumber \\ \\ H_{xy}^{(1)} &=& -J'_{xy}(2S)\sum_{\bf k} \eta({\bf k}) [a({\bf k})+a^{\dagger}({\bf -k})][b({\bf -k})+b^{\dagger}({\bf k})], \nonumber \\ \\ \end{eqnarray} where \begin{eqnarray} \gamma({\bf k}) &=& \frac{1}{2}(\cos k_x + \cos k_y), \\ \eta({\bf k}) &=& \frac{1}{2}(\cos k_x - \cos k_y). \end{eqnarray} Here $z$ is the number of nearest neighbors, {\it i.e.}, $z=4$. To find out the excitation modes, we introduce the Green's functions, \begin{eqnarray} G_{aa}({\bf k},t) &=& -i\langle T[a({\bf k},t)a^{\dagger}({\bf k},0)]\rangle, \\ F_{ba}({\bf k},t) &=& -i\langle T[b^{\dagger}({\bf -k},t)a^{\dagger}({\bf k},0)]\rangle, \\ G_{ba}({\bf k},t) &=& -i\langle T[b({\bf k},t)a^{\dagger}({\bf k},0)]\rangle, \\ F_{aa}({\bf k},t) &=& -i\langle T[a^{\dagger}({\bf -k},t)a^{\dagger}({\bf k},0)]\rangle, \end{eqnarray} where $T$ is the time-ordering operator, and $\langle X \rangle$ denotes the ground-state average of the operator $X$. The $F_{ba}({\bf k},t)$ and $F_{aa}({\bf k},t)$ belong to the so-called anomalous type. 
Defining their Fourier transforms by $G_{aa}({\bf k},\omega) = \int G_{aa}({\bf k},t){\rm e}^{i\omega t}{\rm d}t$ and so on, we derive the equation of motion for these functions, \begin{eqnarray} &&\left( \begin{array}{cccc} \omega -1 & -A({\bf k}) & B({\bf k}) & 0 \\ -A({\bf k}) & -(\omega+1) & 0 & B({\bf k}) \\ B({\bf k}) & 0 & \omega-1 & -A({\bf k}) \\ 0 & B({\bf k}) & -A({\bf k}) & -(\omega+1) \end{array} \right) \nonumber \\ &\times& \left( \begin{array}{c} G_{aa}({\bf k},\omega) \\ F_{ba}({\bf k},\omega) \\ G_{ba}({\bf k},\omega) \\ F_{aa}({\bf k},\omega) \end{array} \right) = \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array} \right) , \label{eq.matrix} \end{eqnarray} where \begin{eqnarray} A({\bf k}) &=& (1+g_z)\gamma({\bf k}) -g_{xy}\eta({\bf k}) , \\ B({\bf k}) &=& g_z\gamma({\bf k}) + g_{xy}\eta({\bf k}) , \\ g_z &=& J'_{z}/(2J_{\rm ex}), \quad g_{xy}=J'_{xy}/(2J_{\rm ex}). \end{eqnarray} Here the energy is measured in units of $J_{\rm ex}Sz$. Hence we finally obtain, \begin{equation} \left( \begin{array}{c} G_{aa}({\bf k},\omega) \\ F_{ba}({\bf k},\omega) \\ G_{ba}({\bf k},\omega) \\ F_{aa}({\bf k},\omega) \\ \end{array} \right)=\frac{1}{D({\bf k},\omega)} \left( \begin{array}{c} g_{aa}({\bf k},\omega) \\ f_{ba}({\bf k},\omega) \\ g_{ba}({\bf k},\omega) \\ f_{aa}({\bf k},\omega) \\ \end{array} \right), \end{equation} where \begin{eqnarray} D({\bf k},\omega) &=& \omega^4 -2[1+B({\bf k})^2 - A({\bf k})^2]\omega^2 +1-2A({\bf k})^2 \nonumber \\ &-&2B({\bf k})^2+[A({\bf k})^2-B({\bf k})^2]^2, \label{eq.determ} \\ g_{aa}({\bf k},\omega) &=& (\omega-1)(\omega+1)^2 - B({\bf k})^2(\omega-1) \nonumber \\ &+&A({\bf k})^2(\omega+1) ,\\ f_{ba}({\bf k},\omega) &=& -A({\bf k})[(\omega^2-1)-B({\bf k})^2 +A({\bf k})^2], \\ g_{ba}({\bf k},\omega) &=& B({\bf k}) [B({\bf k})^2-(\omega+1)^2-A({\bf k})^2], \\ f_{aa}({\bf k},\omega) &=& 2A({\bf k})B({\bf k}). 
\label{eq.faa} \end{eqnarray} In the absence of the anisotropic terms, we have $A({\bf k})=\gamma({\bf k})$ and $B({\bf k})=0$. Inserting these relations into Eqs.~(\ref{eq.determ})-(\ref{eq.faa}), we have \begin{eqnarray} G_{aa}({\bf k},\omega)&=&\frac{\omega+1}{\omega^2-(1-\gamma({\bf k})^2)}, \\ F_{ba}({\bf k},\omega)&=&-\frac{\gamma({\bf k})}{\omega^2-(1-\gamma({\bf k})^2)},\\ G_{ba}({\bf k},\omega)&=& F_{aa}({\bf k},\omega) = 0. \end{eqnarray} These forms are well known for the isotropic Heisenberg model.\cite{Bulut1989} In the presence of the anisotropic terms, Eq.~(\ref{eq.determ}) is rewritten as \begin{equation} D(\textbf{k},\omega) =[ \omega^2 - E_-^2(\textbf{k})] [\omega^2 - E_+^2(\textbf{k})], \end{equation} with \begin{equation} E_{\pm}(\textbf{k}) = \sqrt{ [1 \pm |B(\textbf{k})|]^2 - A^2(\textbf{k}) }. \label{eq.disp.1} \end{equation} This indicates that poles exist at $\omega=E_{\pm}({\bf k})$ in the domain $\omega> 0$. To clarify the behavior of the poles, we express the Green's function as \begin{equation} G_{aa}(\textbf{k},\omega)= \frac{1}{2} \left[ \frac{\omega + 1 +|B(\textbf{k})|} {\omega^2-E_+^2(\textbf{k})} + \frac{\omega + 1 -|B(\textbf{k})|} {\omega^2-E_-^2(\textbf{k})} \right]. \end{equation} This form indicates that the two poles have nearly equal weights in the domain $\omega>0$ in the case of weak anisotropic terms. At the $\Gamma$ point, $A({\bf k})=1+g_z$ and $B({\bf k})=g_z$, and hence $D(0,\omega)$ $=$ $\omega^2(\omega^2+4g_z)$. Therefore one mode has zero excitation energy while the other has a finite energy $2\sqrt{-g_z}$. The former may correspond to the Goldstone mode due to the breaking of the rotational invariance of the isospin in the $ab$ plane. At the X point, ${\bf k}=(\pi,0)$, $A({\bf k})=$ $-B({\bf k})$ $=g_{xy}$, and hence the two modes have excitation energies $\omega=\sqrt{1\pm 2g_{xy}}$. At the M point, ${\bf k}=(\pi/2,\pi/2)$, since $A({\bf k})=$ $B({\bf k})=0$, the two modes have the same excitation energy, $\omega=1$. 
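These special values can be confirmed numerically (our own check, assuming only \texttt{numpy}), by evaluating the two branches of Eq. (\ref{eq.disp.1}) at the $\Gamma$, X, and M points with the anisotropy ratios corresponding to the last row of Table \ref{table.1}:

```python
import numpy as np

g_z, g_xy = -0.018, 0.018  # anisotropy ratios J'_z/(2J_ex) and J'_xy/(2J_ex)

def E_pm(kx, ky):
    """(E_-, E_+) in units of J_ex*S*z, from E_pm = sqrt((1 +/- |B|)^2 - A^2)."""
    gam = 0.5 * (np.cos(kx) + np.cos(ky))
    eta = 0.5 * (np.cos(kx) - np.cos(ky))
    A = (1 + g_z) * gam - g_xy * eta
    B = g_z * gam + g_xy * eta
    return (np.sqrt((1 - abs(B)) ** 2 - A ** 2),
            np.sqrt((1 + abs(B)) ** 2 - A ** 2))

print(E_pm(0.0, 0.0))            # (0, 2*sqrt(-g_z)): Goldstone mode + gapped mode
print(E_pm(np.pi, 0.0))          # (sqrt(1 - 2*g_xy), sqrt(1 + 2*g_xy)) at X
print(E_pm(np.pi / 2, np.pi / 2))  # (1, 1): degenerate at M
```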
\begin{figure} \includegraphics[width=8.0cm]{fig.2.eps} \caption{\label{fig.dispersion} (Color online) Excitation energies of two modes $E_{\pm}(\textbf{k})$ as a function of ${\bf k}$ along the symmetry lines. (a) $E_{\pm}(\textbf{k})$ evaluated from Eq. (\ref{eq.disp.1}). The parameters are evaluated from the Hubbard model with $t_1=0.36$ eV, $\zeta_{\rm SO}=0.36$ eV, $U=3.0$ eV, $U'=2.1$ eV, and $J=0.45$ eV with $g_{z}=-0.018$ and $g_{xy}=0.018$ in units of $J_{\rm ex}Sz$. (b) $E_{\pm}(\textbf{k})$ evaluated from Eq. (\ref{eq.disp.2}) including $J_{\textrm{ex}}'/J_{\textrm{ex}}=-1/3$ and $J_{\textrm{ex}}''/J_{\textrm{ex}}=1/4$ with $J_{\textrm{ex}}=59$ meV. The parameters are evaluated from the Hubbard model with $t_1=0.3$ eV, $\zeta_{\rm SO}=0.4$ eV, $U=3.5$ eV, $U'=2.6$ eV, and $J=0.45$ eV with $g_{z}=-0.015$ and $g_{xy}=0.015$ in units of $J_{\rm ex}Sz$. } \end{figure} Figure \ref{fig.dispersion} shows the dispersion relation along the symmetry lines of ${\bf k}$ for $g_{z}=-0.018$ and $g_{xy}=0.018$. The parameters correspond to the Hubbard model with $t_1=0.36$ eV, $\zeta_{\rm SO}=0.36$ eV, $U=3.0$ eV, $U'=2.1$ eV, and $J=0.45$ eV (last row in Table \ref{table.1}). Although $J'_{z}$ and $J'_{xy}$ are two orders of magnitude smaller than $J_{\rm ex}$, they substantially modify the dispersion relation. Note that $J_{\rm ex}Sz(\equiv 2J_{\rm ex})=0.214$ eV, which is comparable to the excitation energy at the X-point observed in resonant inelastic x-ray scattering (RIXS) at the $L$-edge of Ir.\cite{J.Kim2012} Finally, let us consider what happens when the exchange interactions between the second- and third-neighbor sites, denoted as $J_{\textrm{ex}}'$ and $J_{\textrm{ex}}''$ respectively, are introduced in addition to $J_{\textrm{ex}}$.
It is known that a phenomenological isotropic model constructed from the $J_{\textrm{ex}}$, $J_{\textrm{ex}}'$, and $J_{\textrm{ex}}''$ couplings gives a much better dispersion curve.\cite{J.Kim2012} We expect that including the $J_{\textrm{ex}}'$ and $J_{\textrm{ex}}''$ terms improves the agreement with the experimental dispersion curve, since the contributions from the anisotropic terms are not significant over a wide range of the Brillouin zone, as shown in Fig. \ref{fig.dispersion} (a). The question is then whether the gap between the two modes around the $\Gamma$ point remains finite. We show below that the answer is in the affirmative. In the presence of the $J_{\textrm{ex}}'$ and $J_{\textrm{ex}}''$ terms, the coefficient matrix appearing in Eq. (\ref{eq.matrix}) is modified as \begin{widetext} \begin{equation} \left( \begin{array}{cccc} \omega -1+\xi(\textbf{k}) & -A({\bf k}) & B({\bf k}) & 0 \\ -A({\bf k}) & -(\omega+1-\xi(\textbf{k})) & 0 & B({\bf k}) \\ B({\bf k}) & 0 & \omega-1+\xi(\textbf{k}) & -A({\bf k}) \\ 0 & B({\bf k}) & -A({\bf k}) & -(\omega+1-\xi(\textbf{k})) \end{array} \right), \end{equation} \end{widetext} where \begin{eqnarray} \xi(\textbf{k})&=& \frac{J_{\textrm{ex}}'}{J_{\textrm{ex}}} ( 1-\gamma'(\textbf{k})) + \frac{J_{\textrm{ex}}''}{J_{\textrm{ex}}}( 1-\gamma''(\textbf{k})), \\ \gamma'(\textbf{k}) &=& \cos k_x \cos k_y, \\ \gamma''(\textbf{k}) &=& \frac{1}{2} [ \cos(2k_x) + \cos(2 k_y)]. \end{eqnarray} Then, the excitation energies of the two modes become \begin{equation} E_{\pm}(\textbf{k}) = \sqrt{ [1 - \xi(\textbf{k}) \pm |B(\textbf{k})|]^2 - A^2(\textbf{k}) }. \label{eq.disp.2} \end{equation} Since the extra term $\xi(\textbf{k})$ goes to zero as $|\textbf{k}| \rightarrow 0$, the gap between the two modes at the $\Gamma$ point is robust.
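As a numerical sanity check (a short Python sketch of ours, using the coupling ratios $J_{\textrm{ex}}'/J_{\textrm{ex}}=-1/3$ and $J_{\textrm{ex}}''/J_{\textrm{ex}}=1/4$ quoted from Ref. \cite{J.Kim2012} and the value $g_z=-0.015$ of Fig. \ref{fig.dispersion} (b)), one can confirm that $\xi(\textbf{k})$ vanishes at the zone center, so the $\Gamma$-point gap is unchanged by the further-neighbor couplings:

```python
import numpy as np

# The further-neighbor corrections enter E_pm only through xi(k), and
# xi(k) -> 0 as k -> 0, so the Gamma-point gap 2*sqrt(-g_z) is unaffected.
r1, r2 = -1.0/3.0, 1.0/4.0            # J'ex/Jex and J''ex/Jex

def xi(kx, ky):
    gamma1 = np.cos(kx)*np.cos(ky)
    gamma2 = 0.5*(np.cos(2*kx) + np.cos(2*ky))
    return r1*(1 - gamma1) + r2*(1 - gamma2)

def E_pm(A, B, x):                     # E_pm = sqrt((1 - xi ± |B|)^2 - A^2)
    return (np.sqrt((1 - x + abs(B))**2 - A**2),
            np.sqrt((1 - x - abs(B))**2 - A**2))

gz = -0.015                            # Gamma point: A = 1 + g_z, B = g_z
assert xi(0.0, 0.0) == 0.0
Ep, Em = E_pm(1 + gz, gz, xi(0.0, 0.0))
assert np.isclose(Em, 0.0) and np.isclose(Ep, 2*np.sqrt(-gz))
print("Gamma-point gap with J'ex, J''ex included:", Ep)
```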
The experimental dispersion curve can be reproduced well by setting $J_{\textrm{ex}}=60$ meV, $J_{\textrm{ex}}'=-J_{\textrm{ex}}/3$ and $J_{\textrm{ex}}''=J_{\textrm{ex}}/4$ in the phenomenological model without the anisotropic terms $J_z'$ and $J_{xy}'$.\cite{J.Kim2012} With the parameter set of $U=3.5$ eV, $U'=2.6$ eV, $J=0.45$ eV, $t_1=0.3$ eV, and $\zeta_{\textrm{SO}}=0.4$ eV, we obtain $J_{\textrm{ex}}\simeq 60$ meV, $J_{xy}'=-J_z'=1.8$ meV. Together with the relations $J_{\textrm{ex}}'=-J_{\textrm{ex}}/3$ and $J_{\textrm{ex}}''=J_{\textrm{ex}}/4$, $E_{\pm}(\textbf{k})$ can be evaluated numerically. As shown in Fig. \ref{fig.dispersion} (b), the inclusion of $J_{\textrm{ex}}'$ and $J_{\textrm{ex}}''$ improves the dispersion curve as expected. On the other hand, the splitting of the two modes, which is most prominent around the $\Gamma$ point, remains nearly intact, and the gap retains the same order of magnitude as that obtained in the absence of $J_{\textrm{ex}}'$ and $J_{\textrm{ex}}''$. \section{Concluding remarks} We have studied the low-lying excitations in Sr$_2$IrO$_4$ within the localized-electron picture. Having introduced the isospin operators acting on the Kramers doublet, we have derived the effective spin Hamiltonian from the multi-orbital Hubbard model by second-order perturbation theory in the electron transfer. This approach may be justified in the regime of strong Coulomb interaction. The Hamiltonian consists of the isotropic Heisenberg term and small anisotropic terms. Expanding the spin operators in terms of boson operators, we have introduced the Green's functions for the boson operators and have solved the coupled equations of motion for those functions. The excitation spectra have been obtained from the Green's functions. It is found that two modes emerge with slightly different energies due to the anisotropic terms, in contrast to the spin waves in the isotropic Heisenberg model.
The existence of the anisotropic terms, which is the hallmark of the interplay between the SOI and the Coulomb interaction, has not yet been corroborated by experiment.\cite{J.Kim2012,Fujiyama2012} Magnetic excitations are usually detected by inelastic neutron scattering, but this may be difficult for the present system owing to the strong absorption of neutrons by Ir atoms. Recently, the spin and orbital excitations have been observed and analyzed by RIXS at the Ir $L$-edge. Unfortunately, no indication of the splitting has been found in the spectral shape.\cite{J.Kim2012,Ament2011} Since the energy difference is around $60$ meV at most, it may be difficult to distinguish the two modes in the RIXS experiments. However, the observation may be within reach with the finer instrumental energy resolution of 36 meV available at the $L_{2,3}$ edges of Ir.\cite{MorettiSala2013} Finally, we comment on the validity of the strong-coupling approach. Within the nearest-neighbor coupling in the Heisenberg model, the excitation energy at the M point is the same as that at the X point. Since the energy at the M point is found to be nearly half of that at the X point in the RIXS data, a next-nearest-neighbor coupling as large as one third of the nearest-neighbor coupling is needed to account for this difference.\cite{J.Kim2012} In addition, the Mott-Hubbard gap is estimated as $\sim 0.4$ eV from the optical absorption spectra,\cite{Kim2008,Moon2009} a value comparable to the magnetic excitation energy $2J_{\rm ex}\sim 0.2$ eV. These facts indicate that the strong-coupling approach may not work well. It may be interesting to study the elementary excitations from the viewpoint of the itinerant-electron picture, which is applicable in the weak- and intermediate-coupling regimes. \begin{acknowledgments} We are grateful to M. Yokoyama for fruitful discussions. This work was partially supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government.
\end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction and results} According to Heisenberg and Dirac, Quantum Mechanics is obtained from Classical Mechanics by replacing classical Poisson brackets, multiplied by the Planck constant, by commutators. Two circles of problems arise from such an \lq\lq ansatz\rq\rq : one concerning the precise content of that prescription, which is not so well defined as it may appear, if it is to be interpreted as general and unique, and therefore independent from choices of coordinates, Hamiltonian, etc.; the second asking whether such a \lq\lq substitution\rq\rq\ has to be interpreted as a mere prescription or rather follows from more fundamental principles. Concerning the first kind of problems, it is well known that the substitution procedure is completely well defined for a particle in euclidean space: when applied to cartesian coordinates and their conjugated momenta, it defines the Heisenberg algebra, which, assuming exponentiability of the generators, gives rise to a unique $C^*$ algebra \cite{Slawny}, the Weyl algebra, with a unique Hilbert space representation continuous in the group parameters (von Neumann Theorem \cite{Thirring}). The situation changes completely if one considers classical mechanics on a manifold ${\cal M}$ and asks whether a similar, coordinate independent, construction provides a unique algebra, with a similar classification of representations. In this case, geometrical structures play an essential r\^ole and different strategies and constructions have been proposed. \lq\lq Phase space\rq\rq\ quantization methods start from the classical phase space $T^*({\cal M})$ and try to associate an element $Q(f)$ of an operator algebra to any classical variable, i.e. any (regular) function $f$ on $T^*({\cal M})$. 
Requests which seem {\em a priori} reasonable are however found to be inconsistent: $[Q(f), Q(g)] = i \hbar \{ f, g \}$, $[ \ , \ ]$ denoting the commutator and $\{ \ , \ \}$ the classical Poisson brackets, is in fact incompatible with $Q(g(f)) = g(Q(f))$, and also with linearity of $Q$, if irreducibility of the resulting algebra, or related conditions, are assumed \cite{Ali}. Possible solutions are given either by restricting the correspondence $Q$ to suitable subsets of the classical variables (\lq\lq Geometric quantization\rq\rq ), or by relaxing the relation between commutators and classical Poisson brackets, which is assumed to hold to order $\hbar$ (\lq\lq Deformation quantization\rq\rq ). In both cases, the construction depends on the introduction of additional structures, respectively geometrical and algebraic, and the result is not unique. On the other side, \lq\lq Canonical quantization\rq\rq\ developed into an analysis based only on the geometry of ${\cal M}$ and of its diffeomorphism group, and therefore into the study of the representations of the crossed product algebras $C^0 ({\cal M}) \times G$, $G$ a subgroup of Diff $({\cal M})$, defined by the action of $G$ on $C^0 ({\cal M})$. In the Segal approach \cite{Segal}, a maximality condition on $C^0({\cal M})$ reduces the analysis to the representations in $L^2({\cal M})$, with the Lebesgue measure. In the approach of Mackey \cite{Mackey} and Landsman \cite{LandsmanCP}, ${\cal M}$ is assumed to be a homogeneous space, ${\cal M} = G/H$, $G$ a finite dimensional Lie group, $H$ a subgroup. Given ${\cal M}$, the resulting algebra and representations (classified by the representations of $H$) substantially depend on the choice of $G$; leaving aside the interest of the additional degrees of freedom associated to $H$, the construction does not therefore provide a unique formulation of Quantum Mechanics for a particle on ${\cal M}$.
Segal's diffeomorphism invariant approach has been developed by Doebner \cite{Doebner}, who dropped Segal's maximality condition by assuming a local Hilbert space structure of any dimension, with a connection form which relates spaces at different points. The representations of the diffeomorphism group, associated in general to diffeomorphism invariance, have been studied by Goldin. The corresponding Lie algebra of functions and vector fields does not contain enough information for the identification of mechanics on ${\cal M}$; in fact, it has the interpretation of the (classical) current algebra and its representations appear therefore in many situations, in particular for all $N$ particle quantum (Schroedinger) systems on ${\cal M}$ \cite{Goldin}. The same considerations apply to the representations of the crossed product $C^*$ algebra $C^0 ({\cal M}) \times $ Diff $({\cal M})$. Clearly, the basic problem of the diffeomorphism invariant canonical approaches is the identification of degrees of freedom for the generalized momenta, which is not correctly given by the Lie structure of vector fields, since linearly independent vector fields define independent variables. The solution proposed in \cite{MS1} is to consider as fundamental the module structure of the Lie algebra of vector fields of compact support, denoted by Vect $({\cal M})$, on the algebra of $C^\infty$ functions, i.e. 
the {\em Lie-Rinehart} (LR) structure of ($C^\infty ({\cal M}),$ Vect $({\cal M})$) and to assume that the Lie-Rinehart product \begin{equation} f \circ v: (C^\infty({\cal M}) , {\rm Vect}({\cal M})) \rightarrow {\rm Vect}({\cal M}) \end{equation} is realized, in the algebra defining Quantum Mechanics (QM) on ${\cal M}$, by the {\em symmetric (Jordan) product\/}: \begin{equation} f \circ v = 1/2 \, (f \cdot v + v \cdot f) \label{LR} \end{equation} It turns out that the LR relations (\ref{LR}) can also be written in terms of the resolvents of the unbounded operators representing vector fields (of compact support) and define therefore, together with the crossed product relations between $C^\infty ({\cal M})$ and Diff $({\cal M})$, a unique $C^*$ algebra. Its Hilbert space representations have been classified, assuming regularity, i.e. strong continuity of one-parameter subgroups of Diff $({\cal M})$, as for the Weyl algebra, and shown to be in one-to-one correspondence with the unitary representations of the fundamental group of ${\cal M}$, describing the displacement of a particle along non-contractible closed paths \cite{MS1}. Such a classification of states reproduces, only assuming basic geometrical and algebraic structures, that obtained by Doebner \cite{Doebner} and by \cite{Zanghi}, the latter within an approach {\em a priori} based on trajectories. For the basic questions about the nature of quantization, clearly one has to identify general principles and ask to what extent they constrain both classical and quantum mechanics and which alternatives they leave open. The strategy proposed by Dirac \cite{Dirac}, with his analysis of \lq\lq proportionality between commutators and Poisson brackets\rq\rq , ends with the difficulties of phase space quantization.
As we shall see, Dirac's equations cannot be interpreted as directly relating classical and quantum algebras and the basic missing point is the very identification of the algebraic structures to which they apply. We start therefore from fundamental principles, given by the geometry of ${\cal M}$ and Vect $({\cal M})$. They are embodied in the commutative algebraic structure of $C^\infty ({\cal M})$, describing the manifold, and in the Lie structure of $C^\infty ({\cal M}) + $ Vect $({\cal M})$ defined by the Lie relation between vector fields and their action on $C^\infty ({\cal M})$. From the above discussion it is clear that also the module structure of Vect $({\cal M})$ over $C^\infty ({\cal M})$ plays an essential r\^ole, redefining linear dependence of vector fields according to multiplication by $C^\infty$ functions. Actually, the Lie-Rinehart algebra ($C^\infty ({\cal M}),$ Vect $({\cal M})$) is represented {\em faithfully} both in classical and in quantum mechanics, with the Lie product realized respectively as the Poisson and commutator brackets and the LR product realized as the symmetric product, eq.(\ref{LR}). We then propose to base the most general notion of mechanics on a set of variables indexed by $C^\infty ({\cal M})$ and Vect $({\cal M})$ with their Lie-Rinehart relations, $C^\infty$ functions being interpreted as position variables and Vect $({\cal M})$, describing \lq\lq small displacements\rq\rq , as \lq\lq generalized momenta\rq\rq . In order to obtain variables to which well defined values may be assigned in terms of a notion of spectrum, we consider (associative) algebras generated by them. The associative product is assumed 1) to extend the commutative product of $C^\infty({\cal M})$ and 2) to reproduce, with its {\em symmetric (Jordan) part}, the Lie-Rinehart product between $C^\infty ({\cal M})$ and Vect $({\cal M})$, i.e.
to satisfy eq.(\ref{LR}); as discussed above in the case of QM, condition 2) has a basic r\^ole for the identification of degrees of freedom and therefore for the characterization of Mechanics on ${\cal M}$, with respect to the most general diffeomorphism invariant system. The use of {\em the symmetric part} of the associative product is essential in the non-commutative case. We also assume 3) that the Lie product on $C^\infty ({\cal M}) + $ Vect $({\cal M})$ can be extended to a Lie product on such algebras, defining on them derivations, i.e., satisfying the Leibniz rule with respect to the associative product. This may be interpreted as the association of some (infinitesimal) operation to each variable, generalizing the action of vector fields and allowing for a general notion of symmetry transformation (which is essential, e.g., for the introduction of a time evolution). We extend therefore the Lie-Rinehart algebra of ${\cal M}$ to a {\em non-commutative (real) Poisson algebra}. In general, one obtains an enveloping non commutative Poisson algebra of a Lie-Rinehart algebra \cite{MS2}, a notion which extends that of Poisson enveloping algebra of a Lie algebra \cite{Voronov}. It should be emphasized that no relation is assumed between Lie products and commutators, only the Leibniz rule constraining the associative and Lie products. If no other restriction is assumed, the result is the {\em universal} enveloping (non-com\-mu\-tative) Poisson algebra of the LR algebra $(C^\infty ({\cal M})$, Vect $({\cal M}))$. Its uniqueness follows from the definition of universality (see below) and its construction has been given in \cite{MS2}. Such a non-commutative Poisson algebra will be called the Lie-Rinehart universal Poisson algebra of ${\cal M}$, or briefly the {\em Poisson-Rinehart} algebra of ${\cal M}$ and denoted by $\Lambda_R({\cal M})$. 
A unique linear involution is also defined (see Section 2) on $\LRM$, treating functions and vector fields as real variables, $f = f^*$, $ v = v^* $ and satisfying $(A B)^* = B^*\,A^* $, which then implies $\{\,A,\,B\}^* = \{\,A^*,\,B^*\,\}$. {\em A priori}, $\LRM$ is a very general algebraic structure, in fact the most general associative algebra, with a Lie bracket satisfying the Leibniz rule, generated by the LR algebra associated to ${\cal M}$ and Vect $({\cal M})$. The main result is that it describes {\em nothing else than classical and quantum mechanics}. This is obtained as follows: first, we recall that in general, in a Poisson algebra $\Lambda$, commutators and Lie products satisfy the following relation, already pointed out by Dirac \cite{Dirac} and rederived in refs. \cite{Voronov}~\cite{Farkas}: \begin{equation} {[\,A,\,B\,]\,\{\,C, \,D\,\} = \{\,A,\,B\,\}\,[\,C,\,D\,] \ \ \ \ \forall A, B, C, D \in \Lambda \, .} \label{Farkas} \end{equation} Clearly, if for some $C,D \in \Lambda$, $\{\,C, \,D\,\}$ has an inverse, eq. (\ref{Farkas}) allows one to express commutators in terms of Poisson brackets. More generally, the same holds for prime Poisson algebras, i.e. algebras without ideals which are divisors of zero~\cite{Farkas}; such conditions are not satisfied by $\LRM$, since (see below) any pair of functions with disjoint supports generates (bilateral) ideals $I_1, I_2$ with $I_1\cdot I_2 = 0$. However, in the case of compact ${\cal M}$, by summing commutators of locally conjugated variables in $\LRM$, we construct a unique variable $Z \in \LRM$ satisfying \begin{equation} [A, B] = Z \{A, B\} \ \ \ \ \forall A, B \in \LRM \, . \label{Z} \end{equation} $Z$ turns out to be central with respect to both the associative and the Lie product; in the non compact case, we construct a sequence $Z_n$ satisfying eq.(\ref{Z}) for $n \geq \bar n (A,B)$, allowing for an extension of $\LRM$ by an element satisfying eq.(\ref{Z}) and central in the same sense.
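We note in passing that eq. (\ref{Farkas}) is an immediate consequence of the Leibniz rule alone: expanding $\{\,A \cdot C,\, B \cdot D\,\}$ first in its right and then in its left argument gives \begin{equation} \{\,A \cdot C,\, B \cdot D\,\} = \{\,A,\,B\,\} \cdot C \cdot D + A \cdot \{\,C,\,B\,\} \cdot D + B \cdot \{\,A,\,D\,\} \cdot C + B \cdot A \cdot \{\,C,\,D\,\} \, , \end{equation} while expanding in the opposite order gives the same expression with $C \cdot D$ replaced by $D \cdot C$ in the first term and $B \cdot A$ by $A \cdot B$ in the last; subtracting the two expansions yields eq. (\ref{Farkas}).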
In both cases, $Z$ is antihermitian: $Z^* = -Z$. If the variable $Z$ is substituted by an (imaginary) number, the result may be seen as a precise version of an argument by Dirac, based on eq.(\ref{Farkas}), on the commutator prescription for QM; more properly, it shows that the general approach to mechanics on a manifold provided by the basic geometrical (LR) structure automatically yields a variable invariant under diffeomorphisms and under all the physical operations, expressed by the Lie brackets in $\LRM$, i.e. a \lq\lq universal constant\rq\rq, with the same r\^ole and interpretation as the Planck constant. To proceed with the analysis of $\LRM$, one considers the ideals generated by $Z^* Z - z^2 I$, $z \geq 0$, $I$ the identity of $\LRM$, given by $1 \in C^\infty({\cal M})$; they are stable with respect to the Poisson brackets with $\LRM$ and define therefore homomorphisms $\pi_z$ and quotient Poisson algebras $\LRM_z \equiv \pi_z (\LRM)$. For $z = 0$, one obtains the commutative Poisson algebra generated by $C^\infty(\M)$ and by the $C^\infty$ vector fields, with the natural module structure of Vect $({\cal M})$ on $C^\infty(\M)$, which is isomorphic to the commutative Poisson algebra of polynomials in the cotangent vectors on ${\cal M}$ with $C^\infty$ coefficients; under standard regularity conditions (Section 2), it has a unique Hilbert space representation, by multiplication operators in $L^2 (T^*({\cal M}))$, with Lie brackets represented by the classical Poisson brackets. $T^*({\cal M})$ arises as the spectrum, modulo a zero measure subset, of the commutative $C^*$ algebra generated by $C^\infty$ functions and exponentials of vector fields.
For $z > 0$, $\pi_z(Z) = \iota z$, $\iota^2 = -1$ and there is an isomorphism ${\varphi}$, mapping the real Poisson involutive algebra $\LRM_z $ into the complex algebra generated by $C^\infty({\cal M})$ and by the generalized momenta $T_v$ associated to the vector fields of Vect (${\cal M}$), satisfying $$[\,T_v, \,T_w \,] = i\, z\, T_{\{ v, \,w \}}\, , \ \ \ [T_v, \,f ] = i\, z\, \{ v , \, f\} \, , \ \ \ T_{f \circ v} = 1/2 \, (f T_v + T_v f) \, ,$$ with ${\varphi}(f) = f$, ${\varphi}(v) = T_v$, ${\varphi}(\iota) = i.$ This is the (unbounded, \lq\lq Lie-Rinehart\rq\rq ) quantum algebra introduced in ref. \cite{MS1}. Its regular (i.e. exponentiable) Hilbert space representations were studied in \cite{MS1} and shown to be in one-to-one correspondence with the unitary representations of the fundamental group of ${\cal M}$, $\pi_1({\cal M})$. The analysis of Hilbert space representations of $\LRM$ shows that the above classification is complete, i.e. Classical and Quantum Mechanics on ${\cal M}$ are the only {\em regular} and {\em factorial} representations of $\LRM$ (Section 2). The isomorphism between the real algebra $\LRM_z$, $z \neq 0$, and the above complex algebra also explains the origin of a complex structure in the standard formulation of quantum mechanics, through a non-zero complex number representation of the antihermitian variable $Z$; no complex structure arises in classical mechanics, which originates from the zero representation of $Z$. For ${\cal M} = {\bf R}^n$, if only cartesian vector fields are considered, a similar simplified construction applies, still providing a $Z$ variable and the above classification. The relevance of the full Rinehart structure is in fact only related to the geometrical identification of observables in a diffeomorphism invariant formulation (a question in fact at the origin of the above-mentioned difficulties and alternatives for the formulation of QM on manifolds).
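As a concrete illustration of the relations above (our sketch, not part of the construction in ref. \cite{MS1}), one can realize them for ${\cal M} = S^1$ and $v = d/d\theta$, with $T_v = i z\, d/d\theta$ in the sign convention of the displayed relations, and check $[T_v, f] = i z\, v(f)$ numerically using a spectrally exact derivative:

```python
import numpy as np

# M = S^1, v = d/dtheta.  T_v = i z d/dtheta, computed spectrally via the FFT
# (exact for band-limited functions on a periodic grid).
z, N = 0.5, 64
theta = 2 * np.pi * np.arange(N) / N
modes = np.fft.fftfreq(N, d=1.0 / N)           # integer Fourier mode numbers

def T(psi):                                     # T_v = i z d/dtheta
    return 1j * z * np.fft.ifft(1j * modes * np.fft.fft(psi))

f, v_of_f = np.cos(theta), -np.sin(theta)       # f and v(f) = df/dtheta
psi = np.exp(1j * theta)                        # band-limited test function

commutator = T(f * psi) - f * T(psi)            # [T_v, f] psi
assert np.allclose(commutator, 1j * z * v_of_f * psi)
print("[T_v, f] = i z v(f) verified on S^1")
```

Imposing instead the twisted boundary condition $\psi(\theta + 2\pi) = e^{i\alpha}\psi(\theta)$ shifts the eigenvalues of $T_v$ away from $z\,{\bf Z}$ by $z\alpha/2\pi$, illustrating how inequivalent regular representations are labelled by the unitary representations $e^{i\alpha}$ of $\pi_1(S^1) = {\bf Z}$.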
In all cases, the Lie, or Lie-Rinehart, algebra of momenta and (functions of) positions is {\em the same} in classical and in quantum mechanics. The classical-quantum alternative only arises when {\em polynomials in the momenta} are introduced and is uniquely given by the values of $Z$ in the universal enveloping algebra, with the LR constraint on the symmetric product. The above analysis unravels the basic r\^ole of the LR geometry of the configuration manifold, which in the quantum case is somewhat hidden in the observable $C^*$-algebra (Lie products being identified with commutators) and in the classical case goes beyond the abelian algebraic relations. The LR algebraic structure provides, through the Poisson-Rinehart algebra $\LRM$, a notion of {\em non commutative phase space\/} which coincides with that of a {\em general mechanical system\/}, exactly covering classical and quantum mechanics. In particular, the above construction shows that the Dirac ansatz of canonical quantization, in the form of the proportionality of the commutators {\em of variables in\/} $C^\infty ({\cal M}) +$ Vect $({\cal M})$ to their classical Poisson brackets, has no alternative, within the above rather general notion of mechanical system. The uniqueness of the commutators for $C^\infty$ functions and vector fields on the configuration manifold also explains the obstructions which arise by requiring proportionality of commutators to Poisson brackets {\em for all functions\/} on the classical phase space. In our approach, the extension of the commutation relations starting from $C^\infty({\cal M})$ and Vect(${\cal M}$) and the construction of quantum algebras does not in fact use the classical Poisson algebra; it is given by the Leibniz rule and by the identification of the LR product with the symmetric product and therefore, in a sense, it {\em automatically depends\/} on $Z$. 
Our results suggest a quite different approach to the relation between Classical and Quantum Mechanics with respect to phase space quantization: the classical phase space is {\em not} assumed as a starting point and rather arises from the same (non-commutative) Poisson algebra in correspondence with one of the values taken by the central variable $Z$, on the same footing as the quantum mechanical state space. We also emphasize that in the above approach the Planck constant {\em need not be introduced}. It automatically appears as a variable invariant under all physical transformations, i.e. a {\em universal constant}, in the Poisson-Rinehart algebra of a manifold. In the following Section the above notions will be formalized, together with their implications for classical and quantum mechanics. \section{The Poisson-Rinehart algebra of a manifold and its representations} \msk\noindent {\em The Lie-Rinehart algebra of ${\cal M}$}. \msk A general notion of mechanical variables on ${\cal M}$ should include regular ($C^\infty$) functions and vector fields, indexing generalized momenta; as we shall see, local variables are enough for a general notion of Mechanics on ${\cal M}$. We therefore consider the algebra generated by real functions of compact support in ${\cal M}$ and the identity, $C^\infty({\cal M})$, and the space Vect (${\cal M}$) of $C^\infty$ vector fields of compact support. Vect (${\cal M}$) is a Lie algebra of derivations $v: f \rightarrow v(f)$ on $C^\infty({\cal M})$, with Lie product $\{ \, v , \, w \, \}$ defined by \begin{equation} \{ \, v , \, w \, \} (f) = v(w(f)) - w(v(f)) \ ; \end{equation} its elements are integrable to one-parameter subgroups of the diffeomorphism group Diff (${\cal M}$) by compactness of their support and generate a subgroup of it, ${\G(\M)}$.
As a real vector space, Vect (${\cal M}$) is generated by an infinite number of linearly independent vector fields, which define independent variables; however, Vect (${\cal M}$) is also a module over $C^\infty ({\cal M})$ and, as such, it is locally generated by $n$ vector fields, $n$ the dimension of ${\cal M}$. The module structure of vector fields over $C^\infty({\cal M})$ is clearly an expression of the functional character of the Lie algebra Vect (${\cal M}$), to which a notion of linear dependence with functions as coefficients is naturally associated; clearly, it is crucial in order to describe the infinite dimensional diffeomorphism group and its Lie algebra in terms of a finite number of generators, which will be interpreted as independent generalized momenta. All together, the above algebraic structures give rise to the {\em Lie-Rinehart algebra of} ${\cal M}$, ${\cal L}_R(\M) $, defined \cite{Rinehart} as the pair ($C^\infty ({\cal M}), $ Vect $({\cal M})$) with the commutative (real) algebraic structure of $C^\infty({\cal M})$, the Lie product in Vect (${\cal M}$), the action of Vect (${\cal M}$) on $C^\infty({\cal M})$ as derivations and the Rinehart product $ (C^\infty({\cal M}) , {\rm Vect}({\cal M})) \rightarrow {\rm Vect}({\cal M})$, defined by its action as a derivation on $C^\infty({\cal M})$: \begin{equation} (f \circ v) (g) = f \, v(g) \ . \end{equation} The Rinehart product is distributive in both factors, associative in the first, \begin{equation} f \circ (g \circ v) = (f g) \circ v \end{equation} and is related to the Lie product by \begin{equation} \{v, \,f \circ w\} = v(f) \circ w + f \circ \{v, \,w\} \end{equation} for all $f,g \in C^\infty({\cal M})$, $v,w \in $ Vect (${\cal M}$). The identity $1$ of $C^\infty({\cal M})$ satisfies $ v (1) = 0 $, $1 \circ v = v$, $\forall v \in$ Vect (${\cal M}$). 
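The compatibility relation between the Lie product and the Rinehart product can be checked symbolically. The following sketch (ours, using polynomial and trigonometric test data on ${\bf R}^2$ rather than compactly supported fields on a general ${\cal M}$) verifies $\{v, \,f \circ w\} = v(f) \circ w + f \circ \{v, \,w\}$ componentwise:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def act(v, f):                      # v(f) = sum_j v_j d_j f
    return sum(vj * sp.diff(f, cj) for vj, cj in zip(v, coords))

def bracket(v, w):                  # {v, w}_i = v(w_i) - w(v_i)
    return [sp.expand(act(v, wi) - act(w, vi)) for vi, wi in zip(v, w)]

def module(f, v):                   # Rinehart product: (f o v)_i = f v_i
    return [sp.expand(f * vi) for vi in v]

# arbitrary smooth test data
f = x**2 + sp.sin(y)
v = [y, x*y]
w = [x**2, sp.cos(x)]

lhs = bracket(v, module(f, w))                       # {v, f o w}
rhs = [sp.expand(li + ri) for li, ri in
       zip(module(act(v, f), w), module(f, bracket(v, w)))]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
print("Lie-Rinehart compatibility verified componentwise on R^2")
```

The identity holds for any smooth $f$, $v$, $w$, since $\{v, f \circ w\}_i = v(f)\,w_i + f\,(v(w_i) - w(v_i))$ by the product rule; the symbolic check above merely makes this cancellation explicit.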
The action of Vect (${\cal M}$) on $C^\infty({\cal M})$ as derivations can also be written as an extension of the Lie product of Vect $({\cal M})$ to $C^\infty ({\cal M}) + $ Vect $({\cal M})$, which therefore becomes a Lie algebra, still denoted by ${\cal L}_R(\M)$: \begin{equation} \{ v , f \} \equiv v(f) \ , \ \ \{ f, g \} \equiv 0 \end{equation} for all $f,g \in C^\infty({\cal M})$, $v \in $ Vect (${\cal M}$). $ \mbox{Diff}(\M)$ defines a group of automorphisms of the Lie-Rinehart algebra ${\cal L}_R(\M) $. The action of the one-parameter group $g_{\lambda v}$, $\lambda \in {\rm \doppio R}$, generated by $v \in$ Vect (${\cal M}$) satisfies \begin{equation} (d/d\lambda)\, g_{\lambda v} (A) = \{\,v , \,g_{\lambda v}(A) \, \}, \ \ \ \forall A \in C^\infty ({\cal M}) + {\rm Vect} ({\cal M}) \label{ActDiff} \end{equation} with the derivative taken in the $C^\infty$ topology. \vskip 5mm\goodbreak\noindent {\em Non-commutative Poisson $*$ algebras}. \msk As discussed in the Introduction, in order to obtain variables taking well defined values through a notion of spectrum, multiplications should be allowed and an associative algebra $\Lambda$ should be considered. In order to preserve diffeomorphism invariance, see eq.(\ref{ActDiff}), the Lie action of vector fields on $C^\infty ({\cal M}) + $ Vect $({\cal M})$ should extend to derivations of $\Lambda$. If the interpretation of vector fields as generators of a symmetry can be extended to all the variables in $\Lambda$, one is led to assume that their action is described by an extension to $\Lambda$ of the Lie product of ${\cal L}_R(\M)$ satisfying the Leibniz rule, in both arguments as a consequence of antisymmetry. Substantially, this is the step advocated by Dirac by the introduction of {\em generalized Poisson brackets}, assumed \cite{Dirac} to satisfy the Leibniz rule in an associative algebra.
Moreover, a notion of reality should be defined in $\Lambda$ through an involution, leaving ${\cal L}_R(\M)$ pointwise invariant. $\Lambda$ should therefore have the structure of a {\em non-commutative Poisson $*$ algebra}. Non-commutative Poisson algebras have been formally introduced in refs. \cite{Voronov} \cite{FGV} \cite{Farkas} \cite{Dubois}. They are real associative algebras, with product denoted by $A \cdot B$, which are also Lie algebras, with Lie product, denoted by $\{A,\,B\}$, satisfying the Leibniz rule \begin{equation} {\{A, \,B \cdot C\}= \{A, \,B\} \cdot C + B \cdot \{A,\,C\}.} \end{equation} For a $*$ algebra, a linear involution must be defined, satisfying, as usual, $(A \cdot B)^* = B^* \cdot A^*$; the reality of the Lie structure in $\Lambda$ also requires $\{\,A , \,B\}^* = \{\,A^* , \, B^*\,\}$. No relation is assumed between the Lie product and the commutator $[\,A,\,B\,] \equiv A \cdot B - B \cdot A$; however, the following identity holds for all $A,B,C,D$ in a Poisson algebra \cite{Dirac} \cite{Voronov} \cite{Farkas}: \begin{equation} [\,A,\,B\,]\,\{\,C, \,D\,\} = \{\,A,\,B\,\}\,[\,C,\,D\,] \ . \label{DVF} \end{equation} \vskip 5mm\noindent {\em The universal Poisson-Rinehart algebra of ${\cal M}$}. \msk Following the above arguments, a general notion of mechanics on a manifold ${\cal M}$ is given by the Poisson $*$ algebra {\em generated by the Lie-Rinehart algebra of} ${\cal M}$.
More precisely, we consider the (non-commutative) {\em universal enveloping Poisson algebra} of the LR algebra ${\cal L}_R(\M) $, defined as follows \cite{MS2}: \begin{definition}{Definition} The {\bf LR universal Poisson algebra}, or {\bf Poisson-Rinehart algebra}, of a manifold ${\cal M}$ is the unique (non-commutative) Poisson algebra $\LRM$ with an injection $i: {\cal L}_R(\M) \rightarrow \LRM$ satisfying, \newline i) $i$ is a Lie algebra homomorphism, \begin{equation} i( \{l_1, \,l_2\}) = \{ i(l_1), \,i(l_2)\} \, , \ \ \ \forall l_1, \,l_2 \in {\cal L}_R(\M) \ , \end{equation} \newline ii) $i(1) \cdot i(l) = i(l)$, $\forall\, l \in {\cal L}_R(\M)$, \newline iii) $i(f g) = i(f) \cdot i(g) \ \ \ \forall\, f, g \in C^\infty(\M)$, \newline iv) $i(f \circ v) = {\scriptstyle{\frac{1}{2}}} (i(f) \cdot i(v) + i(v) \cdot i(f)) \ \ \ \forall\, f \in C^\infty(\M) , \, v \in {\rm Vect} ({\cal M})$ \newline and such that, if $\Lambda$ is a Poisson algebra with injection $i_\Lambda$ satisfying i) - iv), there is a unique homomorphism of Poisson algebras $\rho : \LRM \rightarrow \Lambda $ intertwining between the injections, $i_\Lambda = \rho\, \circ\, i \; $. \label{LRM} \end{definition} As in general for enveloping algebras, the uniqueness of the Poisson universal enveloping algebra of ${\cal L}_R(\M)$ follows immediately from the uniqueness of the homomorphism $\rho$. In the following, ${\cal L}_R(\M)$ will be identified with the image of its injection in $\LRM$.
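For orientation, condition iv) is a symmetric-ordering prescription (the following illustration is ours, not part of the construction): on ${\cal M} = {\rm \doppio R}$, with $f = x$ and $v = \partial/\partial x$, it reads

```latex
i\left( x \circ \frac{\partial}{\partial x} \right) =
  {\scriptstyle{\frac{1}{2}}} \left( i(x) \cdot i\!\left(\frac{\partial}{\partial x}\right)
  + i\!\left(\frac{\partial}{\partial x}\right) \cdot i(x) \right) ,
```

so that the Lie-Rinehart module product $f \circ v$ is represented by the symmetrized (Jordan) part of the associative product, in analogy with Weyl symmetric ordering of position and the generator of translations.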
In order to construct $\LRM$, one may start from the Poisson universal enveloping algebra of ${\cal L}_R(\M)$ {\em as a Lie algebra}, introduced in general by Voronov \cite{Voronov} for (graded) Lie algebras, and take quotients with respect to the ideals (in the sense of associative algebras) generated by the relations ii)-iv); such ideals are in fact stable under the bracket operations with all the elements in the universal enveloping algebra of ${\cal L}_R(\M)$ as a Lie algebra, and therefore the corresponding quotients define Poisson algebras. The only delicate point in the construction of $\LRM$ is the validity of the Leibniz rule {\em in both arguments} for the extended Lie brackets. In fact, the Leibniz rule on one side determines a unique extension of the Lie brackets to the tensor algebra of a Lie algebra. The Leibniz rule on the other side is obtained by Voronov through an explicit analysis of the quotient with the ideal generated by eq. (\ref{DVF}). The same result also follows by imposing antisymmetry, through the quotient with respect to the ideal generated by all the elements $\{\,A,\,B\,\} + \{\,B,\,A\,\}$ together with their repeated $\{\,\cdot, \,\cdot\,\}$ brackets with the tensor algebra; the Jacobi identity follows by induction. Injectivity of $i$ holds since the above ideals have $0$ intersection with $i({\cal L}_R(\M))$. With respect to the Poisson universal enveloping algebra of ${\cal L}_R(\M)$ as a Lie algebra, $\LRM$ includes the relations ii) - iv), so that, according to the requirements discussed above, it extends the algebraic relations of $C^\infty(\M)$ and the Lie-Rinehart product, identified with the symmetric (Jordan) part of the associative product. Such relations are {\em a priori} essential for the mechanical interpretation of $\LRM$ and will in fact be crucial for the derivation of the classical phase space and for the characterization of QM on ${\cal M}$.
They also enter the construction of the Planck constant as a central variable, even if conditions i) and ii) are sufficient for compact ${\cal M}$. If, in Definition 1, only condition i) is assumed, the result merely embodies the Lie relations between vector fields and functions on ${\cal M}$, so that it applies in general to $ \mbox{Diff}(\M)$ invariant systems; in particular, the resulting Poisson algebra appears in all $N$-particle systems on ${\cal M}$~\cite{Goldin}, with ${\cal L}_R(\M)$, {\em as a Lie algebra}, interpreted as the current algebra. On $\LRM$ there is a unique involution which leaves ${\cal L}_R(\M)$ pointwise invariant; in fact, $(A \cdot B)^* = B^* \cdot A^*$ uniquely extends the involution from ${\cal L}_R(\M)$ to its tensor algebra, where it leaves invariant the ideals defining $\LRM$; the involution is therefore well defined in $\LRM$ and, by construction of the Lie brackets in $\LRM$, satisfies $\{\,A , \,B\}^* = \{\,A^* , \, B^*\,\}$. $ \mbox{Diff}(\M)$ is a symmetry of all the above constructions and extends therefore to a group of automorphisms of $\LRM$ as a Poisson algebra (leaving ${\cal L}_R(\M)$ invariant). As before, the action of the one-parameter groups $g_{\lambda v}$, $\lambda \in {\rm \doppio R}$, satisfies eq.(\ref{ActDiff}), for all $ A \in \LRM $, with the derivative taken in the topology induced on $\LRM$ by the $C^\infty$ topology on the tensor algebra over ${\cal L}_R(\M)$. It should be noted that $\LRM$ is not an enveloping algebra in the usual sense \cite{Dix}, since the Lie product is not given by the commutator. With respect to the commutative and non-commutative Poisson algebras discussed in the literature \cite{FGV} \cite{Dubois} for classical mechanics and for quantum mechanics, the concept of (non-commutative) Poisson universal enveloping algebra is more general and only includes the basic geometrical (Lie and Lie-Rinehart) structures.
Its construction {\em includes neither classical nor quantum principles}, which are usually assumed in the form of abelianess or commutation relations. \vskip 5mm\noindent {\em The relation between commutators and Lie products}. \msk\noindent A central result in our analysis is the construction of a variable $Z$ which relates commutators and Lie products in $\LRM$. The essential ingredient is that any function of compact support, in particular the identity for compact ${\cal M}$, can be obtained as a sum of Lie products; by eq. (\ref{DVF}), the corresponding sum of commutators gives the required variable, which is then shown to be independent of the construction and central, both in the commutator and in the Lie sense. More precisely, we have \begin{theorem}{Theorem} \label{TH1} \cite{MS1} For a compact manifold ${\cal M}$, there exists a unique $Z \in \LRM$, such that, $\forall A, \,B \in \LRM$, \begin{equation} [\, A, \, B \,] = Z\,\cdot \{\, A,\, B\, \} \ . \label{Z1} \end{equation} It satisfies \begin{equation} \{\, Z,\, A\,\} = 0 = [\,Z,\,A\,] \ , \ \ Z = -Z^{*} \label{Z2} \end{equation} For a non-compact manifold, there exists a sequence $Z_n = - Z_n^{*}\in \LRM $ such that, $ \forall A, \,B \in \LRM$, $\exists\, \bar{n}(A, B) \in {\rm \doppio N} $, such that, \begin{equation} [\, A, \, B \,] = Z_n\,\cdot \{\,A,\,B\,\} \ , \ \ \{\, Z_n,\, A\,\} = 0 = [\,Z_n,\,A\,] \ \ \ \forall n > \bar{n} \ . \end{equation} One may therefore define an element $Z = - Z^{*}$, such that the Poisson algebra $\tilde\Lambda_R ({\cal M})$ generated by $\LRM$ and $Z$ satisfies eqs. (\ref{Z1}), (\ref{Z2}). \end{theorem}\goodbreak \noindent {\em Proof}. The proof simplifies for compact ${\cal M}$. In this case, the manifold can be covered by a finite number of open sets ${\cal O}_i$ homeomorphic to discs.
There are therefore functions $q_i$ and vector fields $w_i$, with compact support contained (in local coordinates) in larger discs ${\cal O}'_i$, satisfying $\{ q_i, w_i \} (x) = 1 \, , \ \forall x \in {\cal O}_i$. For any partition of unity $ \sum_i g_i = 1 $, with Supp $(g_i) \subset {\cal O}_i$, we have \begin{equation} 1 = \sum_i \, g_i \{ q_i, w_i \} = \sum_i \, \{ q_i, g_i \circ w_i \} \equiv \sum_i \, \{ q_i, p_i \} \end{equation} Then, eq.(\ref{DVF}) gives, for all $A, B \in \LRM$, \begin{equation} [\,A,\,B\,] = 1 \cdot [\,A,\,B\,] = \sum_i \, \{q_i, p_i\} \cdot [ A,\,B ] = \sum_i \, [\, q_i, \, p_i \, ] \cdot \{A,\,B\} \label{sqp} \end{equation} The sum in the r.h.s. of eq.(\ref{sqp}) is independent of the construction since, for any other choice of $\tilde {\cal O}_i , \tilde g_i, \tilde q_i, \tilde w_i$, eq.(\ref{sqp}) gives \begin{equation} \sum_i \, [\, \tilde q_i, \, \tilde p_i \, ] = \sum_j \, [\, q_j, \, p_j \, ] \cdot \sum_i \{ \tilde q_i, \, \tilde p_i \} = \sum_j \, [\, q_j, \, p_j \, ] \ . \end{equation} One may therefore define \begin{equation} Z \equiv \sum_i [\, q_i, \, p_i \, ] \end{equation} and eq.(\ref{Z1}) holds. By definition of the involution in $\LRM$, $Z = - Z^{*}$. By the Leibniz rule, $\forall A \in \LRM$, \begin{equation} \{ Z, \, A \} = \sum \{ [ q_i, \,p_i ], \,A\} = \sum ([ \{q_i, \,A\}, p_i ] + [ q_i, \{\,p_i, \,A\} ]) \ ; \label{ZA} \end{equation} using eq.(\ref{Z1}) and the Jacobi identity for the Lie product, the r.h.s. of eq.(\ref{ZA}) becomes \begin{equation} Z \cdot \sum (\{\,\{q_i, \,A\}, p_i\} + \{\,q_i,\,\{\,p_i, \,A\,\}\,\} \,) = Z \cdot \sum \{\,\{q_i, \,p_i\}, \,A\} = Z \cdot \{ 1, \, A\} = 0 \ . \end{equation} Then, eq.(\ref{Z1}) implies $[ Z, \, A ] = 0$, $\forall A \in \LRM$. $\rule{5pt}{5pt} $ Theorem 1 can be regarded as an answer to the problem raised by Dirac \cite{Dirac} about the origin and the uniqueness of the relation between commutators in Quantum Mechanics and classical Poisson brackets.
Dirac introduced the notion of {\em generalized Poisson brackets} (substantially, the notion of non-commutative Poisson algebra) as the basis for a generalization of Classical Mechanics and argued that Poisson brackets must be proportional to commutators on the basis of eq.(\ref{DVF}); however, the argument relies on the \lq\lq independence\rq\rq of $C,D$ from $A,B$ in eq.(\ref{DVF}) and is not conclusive, since invertibility of $\{ C, D\}$ is not discussed, and in fact the conclusion, even in the generalized form given by eq.(\ref{Z1}), does not hold in general in (non-commutative) Poisson algebras. E.g., one can derive, for the universal enveloping Poisson algebra of a finite dimensional Lie algebra ${\cal L}$, the relation \begin{equation} [A,B] \cdot Z_1 = \{ A , B\} \cdot Z_2 \end{equation} \begin{equation} Z_1 = \sum_{ij} g_{ij} L_i \cdot L_j \ , \ \ \ Z_2 = \sum_{ijk} c_{ijk} L_i \cdot L_j \cdot L_k \ , \end{equation} with $g_{ij}$ the Killing form and $c_{ijk}$ the structure constants of ${\cal L}$; $Z_1$ and $Z_2$ are central in the sense of Theorem 1, but $Z_1$ is not invertible and, in general, $A \cdot Z_1 = 0$ does not imply $A=0$. For the Poisson-Rinehart algebra of a manifold, eq.(\ref{Z1}) holds as a consequence of condition ii) in Definition 1 for ${\cal M}$ compact and from conditions ii) - iv) in general. Moreover, Dirac's analysis is not conclusive because it is unclear {\em to which algebra} it is meant to apply, so that the r.h.s. of eq.(\ref{Z1}) is {\em not\/} well defined. If it is identified, as perhaps implicit in Dirac's analysis, with the classical bracket in the {\em classical} Poisson algebra as a Lie algebra, leaving undetermined the associative product, one exactly meets the problems of {\em phase space quantization}.
If it is interpreted in the universal Poisson enveloping algebra of the {\em Lie algebra} of functions and vector fields, substantial information is lacking for the derivation of Quantum (and Classical) Mechanics on ${\cal M}$, as discussed above, and a conclusion can be obtained only in fixed coordinates, e.g. in ${\rm \doppio R}^n$ with cartesian coordinates (see below). The Lie-Rinehart relations and the construction of the Poisson-Rinehart universal enveloping algebra, with the LR product identified with the Jordan product, is therefore essential for the relevance of eq.(\ref{Z1}); in particular, the problems of phase space quantization are avoided since the construction of the classical Poisson algebra, also as a Lie algebra, depends on abelianess of the product, which does not hold in $\LRM$; in fact, the Lie algebra of functions on the phase space is {\em not\/} a common structure of classical and quantum mechanics, being given by {\em a quotient} of the common Poisson algebra $\LRM$. \vskip 5mm\noindent {\em The non-commutative Poisson algebra generated by cartesian coordinates and momenta}. \msk\noindent As discussed above, the r\^ole of the Lie-Rinehart relations is mainly that of allowing for the identification of $\LRM$ with the algebra of mechanical variables on a manifold. If only ${\rm \doppio R}^n$, with cartesian coordinates, is considered, a similar simplified construction still yields a Poisson algebra with a central element $Z$ satisfying eqs.(\ref{Z1}), (\ref{Z2}). For its construction, it is enough to consider the polynomial algebra in the cartesian coordinates $x_i , i = 1 \ldots n$ and the Lie algebra ${\cal L}_c$ generated by it and by momenta $p_i , i = 1 \ldots n$ with Lie product \begin{equation} \{ \, P(x) , \, p_i\} = \frac{\partial}{\partial x_i } P(x) \ , \ \ \ \{ \, p_i , \, p_j \} = \{ \, P_1(x) , \, P_2(x) \} = 0 \ . 
\end{equation} No Rinehart product between coordinates and momenta is present, since only the vector fields associated to translations are considered. Then, Definition 1, without condition iv), gives a unique universal enveloping Poisson algebra $\Lambda_c$ of ${\cal L}_c$, extending the (commutative) algebraic structure of polynomials in the coordinates. The proof of Theorem 1 shows that $ \, [\, x_i, \, p_i\, ] \, $ is independent of $i$ and defines an element $Z \in \Lambda_c$ satisfying eqs.(\ref{Z1}), (\ref{Z2}). A non-trivial point is that $\Lambda_c$ is not an explicit polynomial algebra; rather, it is uniquely determined by the requirements that i) it is a Poisson algebra, ii) it is generated by the polynomials in $x_i$ and by the momenta $p_i$, iii) it is universal, i.e. for any Poisson algebra $\Lambda$ satisfying i) and ii) there is a unique homomorphism $\rho: \Lambda_c \mapsto \Lambda$ acting as the identity on the generators. The same Poisson algebra is also obtained if one only starts with the {\em Lie} algebra ${\cal L}_H$ generated by $x_i$, $p_i$ and an element $I$, satisfying \begin{equation} \{ \, x_i , \, p_j \} = \delta_{ij} \, I \, , \ \end{equation} all the other Lie products vanishing. One considers the universal enveloping Poisson algebra given by Definition 1, dropping conditions iii) and iv) and keeping ii) for $I$, i.e. with $I$ as the identity. Then, introducing as before $Z \equiv [x_i, p_i]$ ($i$ fixed), abelianess of the polynomials in $x_i$ follows from eq.(\ref{Z1}) and the result is again the Poisson algebra $\Lambda_c$. Taking in $\Lambda_c$ the quotients defined by the ideals generated by $Z^* Z - z^2$, $z \geq 0$, one obtains the classical Poisson algebra of polynomials in coordinates and momenta and the Heisenberg algebra, with $\hbar = z$.
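For illustration (our paraphrase of the last step), in the quotient with $Z^{*} Z = z^{2}$, $z = \hbar > 0$, the central element acts as $Z = i\hbar\,1$ and eq.(\ref{Z1}) reproduces the Heisenberg commutation relations,

```latex
[\, x_i, \, p_j \,] = Z \cdot \{\, x_i, \, p_j \,\} = i \hbar \, \delta_{ij} \ ,
```

while for $z = 0$ eq.(\ref{Z1}) gives $[\,A,\,B\,] = 0$, so the associative product becomes abelian and the Lie product survives as the classical Poisson bracket.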
The Dirac ansatz for commutators between cartesian coordinates and momenta has therefore no alternative, precisely in the sense that the Heisenberg algebra and the classical polynomial Poisson algebra are the only Poisson algebras which envelop ${\cal L}_c$ or ${\cal L}_H$ in the above sense (and are therefore isomorphic to quotients in $\Lambda_c$) and represent $Z^*Z$ by a nonnegative number. A more complete discussion of central variables and ideals in Poisson algebras requires the introduction of bounded variables, which can be conveniently constructed in Hilbert space representations. \vskip 5mm\noindent {\em Regular factorial representations of $\LRM$. Classical and Quantum Mechanics}. \msk\noindent \begin{definition}{Definition} A {\bf representation} $\pi$ of a Poisson *-algebra $\Lambda$ in a complex Hilbert space $\mbox{${\cal H}$}$ is a homomorphism of $\Lambda$ into a Poisson *-algebra of operators in $\mbox{${\cal H}$}$, with both the operator product and a Lie product $\{.\,, \,.\}$ satisfying the Leibniz rule, having a common invariant dense domain $D$ on which $$\pi(A \cdot B) = \pi(A) \,\pi(B), \,\,\,\,\pi(\{\,A,\,B\,\}) = \{\,\pi(A),\,\pi(B)\,\}, \,\,\,\,\pi(A^*) = \pi(A)^*.$$ \noindent A representation $\pi$ of $\LRM$ is called {\bf regular} if $\pi(C_0^\infty({\cal M})) \neq 0$ and \newline i) ({\bf exponentiability}) $D$ is invariant under $\pi(C^\infty(\M))$ and the one parameter unitary groups $U(\lambda v)$, $U(\lambda Z) $, $\lambda \in {\rm \doppio R}$, generated by $T_v \equiv \pi(v)$ and $ - i\, \pi(Z)$, respectively, \newline ii) ({\bf diffeomorphism invariance}) the elements $g_{\lambda v} \in {\cal G}({\cal M})$ define strongly continuous automorphisms of the $C^*$ algebra ${\cal A}(\M)_\pi$ generated by $\pi(C^\infty(\M))$, $U(\lambda v)$, $U(\lambda Z) $, \begin{equation} g_{\mu w}: \pi(f) \rightarrow \pi(g_{\mu w} f),\,\,\,\, U(\lambda v) \rightarrow U(\lambda g_{\mu w}(v)), \,\,\,\,\, U(\lambda Z) \rightarrow U(\lambda Z).
\end{equation} \noindent A regular representation $\pi$ of $\LRM$ is called {\bf factorial} if the elements of the center ${\cal Z}_\pi$ of ${\cal A}(\M)_\pi'' $, the weak closure of ${\cal A}(\M)_\pi$, which are invariant under Diff(${\cal M}$) are multiples of the identity. \end{definition} Condition i) requires the existence of exponentials of the representatives of the vector fields and of $Z$; such exponentials are unique since stability of $D$ under them implies essential selfadjointness of the generators on $D$. Condition ii) amounts to exponentiability of the derivations defined by eq.(\ref{ActDiff}), in the representation $\pi$; it is implied by i) for representations with $z \neq 0$ (as a consequence of eq.(\ref{CP}) below). The action of diffeomorphisms on ${\cal A}(\M)_\pi'' $ is well defined as a consequence of their strong continuity (condition ii). The above condition on the center of ${\cal A}(\M)$ reflects the fact that $\LRM$ has both an associative product and a Lie product, related to Diff(${\cal M}$) by eq.(\ref{ActDiff}), so that diffeomorphism invariance of an element corresponds, in exponentiated form, to the vanishing of its Lie brackets with vector fields. For representations with $z \neq 0$, central elements are automatically diffeomorphism invariant, by eq.(\ref{CP}) below. For the analysis of regular factorial representations of $\LRM$, one has \cite{MS2}: \begin{lemma}{Proposition} In a representation $\pi$ of $\LRM$, $\pi(f)$ and $\pi(v)$, $f \in C^\infty(\M)$, $v \in$ Vect(${\cal M}$), are strongly continuous on $D$ in the $C^\infty$ topology of $C^\infty(\M)$ and Vect(${\cal M}$). Equation (\ref{ActDiff}) holds for $\pi(g_{\lambda v}(A))$, $A = f,\, v$, with the derivative taken in the strong topology. \noindent In a regular representation, the one-parameter unitary groups $U(\lambda v)$, $U(\lambda Z) $ satisfy \begin{equation} [\,U(\lambda v), \,U(\lambda Z)\,] = 0, \,\,\,\,\,\,[\,\pi(f), \,U(\lambda Z)\,] =0. 
\end{equation} \noindent In a regular factorial representation one has \newline i) $U(\lambda Z) = e^{- i \lambda z} I $, $z \in {\rm \doppio R}$; modulo the $ ^*$ involution in $\LRM$ (leaving $C^\infty({\cal M}) +$ Vect (${\cal M}$) pointwise invariant), one can take $ z \geq 0$, \newline ii) the one parameter groups $U(\lambda v)$ are strongly continuous in $v$ in the $C^\infty$ topology of the vector fields and \begin{equation} U(\lambda v)\,U(\mu\,w) = U(\mu g_{\lambda z v}(w)) \,U(\lambda v), \,\,\,\,\,\,\,\, U(\lambda v) f = g_{\lambda z v}(f)\, U(\lambda v) \, , \label{CP} \end{equation} \end{lemma} For $z \neq 0$, eq.(\ref{CP}) defines, with the obvious modification of a factor $z$ in the Lie algebra structure constants, the crossed product $\Pi({\cal M})$ of $C^\infty(\M)$ and $\tilde {\cal G}({\cal M})$, the universal covering group of Diff(${\cal M}$) (the usual definition corresponding to $z=1$). A regular representation of $\LRM$ gives a representation of $\Pi({\cal M})$ which is Lie-Rinehart regular in the sense of \cite{MS2}, since it is differentiable, the generators are strongly continuous in the $C^\infty$ topology of vector fields and they satisfy the Lie-Rinehart relations. We recall that two representations are called quasi equivalent if each of them is unitarily equivalent to a sum of subrepresentations of the other. 
Our main result is that the regular factorial representations of $\LRM$ exactly define classical and quantum mechanics on ${\cal M}$, with $z$ playing the r\^ole of $\hbar$: \begin{theorem}{Theorem} \cite{MS2} The regular factorial representations $\pi$ of $\LRM$ are classified, modulo the $ ^*$ involution, by the values \, $i z, \, z\geq 0$ of the central variable $Z$ and \vspace{0.5mm} \newline 1) for $z > 0$, they coincide, apart from a multiplicity, with the irreducible Lie-Rinehart regular representations of the crossed product $\Pi({\cal M}) \equiv C^\infty(\M) \times \tilde {\cal G} ({\cal M})$, defining \ {\bf Quantum Mechanics} on ${\cal M}$. As a result of \cite{MS1}, for each $z>0$, they are locally equivalent, up to a multiplicity, to the Schroedinger representation and they are classified by the unitary representations of the fundamental group of ${\cal M}$. \vspace{0.5mm} \newline 2) for $z = 0$, for separable representation spaces $\mbox{${\cal H}$}$, they are quasi equivalent to the representation $\pi_C$ in $L^2(T^* {\cal M}, d x \,d p)$, defined by multiplication operators ({\bf Classical Mechanics}): on $D = C_0^\infty(T^* {\cal M})$, in local coordinates, $\forall f \in C^\infty(\M)$, $\forall v = \sum_i g_i(x) \partial/\partial x_i$, supp\,$v \subset {\cal O}$, ${\cal O}$ homeomorphic to an open disc, \begin{equation} \pi_C(f) = f(x), \ \ \ \ \pi_C(v) = \sum_i g_i(x)\, p^i \ , \end{equation} $p^i$ denoting the coordinates in the basis dual to $\partial/\partial x_i$. The Lie product in $\pi_C(\LRM)$ is given by the standard Poisson brackets on $T^* {\cal M}$. If $ \mbox{Diff}(\M)$ is unitarily implemented, the representation is unitarily equivalent to a multiple of $\pi_C$. \end{theorem} \noindent {\em Proof}. By i) of Proposition 1, $Z = iz I$ in regular factorial representations of $\LRM$ and for $z \neq 0$ the classification follows from Proposition 1 and Theorems 3.7, 4.5, 4.6 of ref.\cite{MS1}.
For $z = 0$, by separability of $\mbox{${\cal H}$}$, modulo unitary equivalence, the representation is defined by multiplication operators in a denumerable sum of $L^2$ spaces over the spectrum of the abelian $C^*$ algebra ${\cal A}(\M)$. The proof \cite{MS2} then requires three steps: first, the spectrum of ${\cal A}(\M)$ is identified, apart from a set of zero measure, with the cotangent bundle $T^*({\cal M})$; in fact, by regularity of $\pi$, almost all the multiplicative functionals $\xi$ on ${\cal A}(\M)$ are determined by their value on $C^\infty(\M)$ and on the generators $\pi(v)$ of the one parameter groups, to which they extend by regularity; locally, \begin{equation} \xi (v) \equiv \xi \, (\sum_i g_i(x) \frac {\partial}{\partial x_i}) = \sum_i g_i(x_\xi) \, \xi (\frac {\partial}{\partial x_i}) \equiv \sum_i g_i(x_\xi) \, p^i_\xi \ , \end{equation} since ${\cal M}$ is the spectrum of the closure of $C^\infty(\M)$; therefore, $\xi = (x_\xi, p^i_\xi) \in T^*({\cal M})$ and \begin{equation} \pi(f)= f(x_\xi) \ , \ \ \ \pi(v) = \sum_i g_i(x_\xi) \, p^i_\xi \end{equation} as multiplication operators. The second point is the regularity of the above measures with respect to the Lebesgue measure on $T^*({\cal M})$, which follows using transitivity of the transformations of $T^*({\cal M})$ induced by diffeomorphisms of ${\cal M}$ (apart from a set of zero measure) and local coordinates defined (almost everywhere) in $T^*({\cal M})$ by suitable vector fields on ${\cal M}$. 
Third, the identification of the Lie brackets with the classical Poisson brackets on $\LRM$ follows, by the Leibniz rule, from its validity for $\pi (C^\infty(\M) + $ Vect(${\cal M}$)); from Proposition 1, one has, in local coordinates, $\forall A(x, p) \in \pi(C^\infty(\M) + $ Vect(${\cal M}$)), $v = \sum_i g_i(x) \, p^i$, $\, g_{\lambda v}(x,p)$ the canonical transformation defined on $T^*({\cal M})$ by the diffeomorphism $g_{\lambda v}$, \begin{equation} \{\sum_i g_i(x) \,p^i,\,A(x, p)\,\} = (d/ d \lambda) A(g_{\lambda v}^{-1}(x, p))|_{\lambda = 0} = \end{equation} \begin{equation} = \sum_i \Big{(} - \frac{\partial A(x, p)}{\partial x_i}\, g_i(x) + \frac{\partial A(x, p)}{\partial p^i}\,\frac{\partial g_j(x)}{\partial x_i}\, p^j \Big{)} = \{\sum_i g_i(x) \,p^i,\,A(x, p)\,\}_{Class} \ . \ \ \rule{5pt}{5pt} \end{equation}
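The final identification can be checked numerically in a simple one-dimensional instance (an illustrative sketch of ours; the functions $g$, $A$ and the sample point are arbitrary choices): for $v = g(x)\,p$, the right-hand side of the last displayed formula agrees with the standard Poisson bracket evaluated by finite differences.

```python
# One-dimensional check that {g(x) p, A} equals -dA/dx g + dA/dp g' p,
# i.e. the standard Poisson bracket of g(x) p with A, evaluated pointwise
# by central finite differences.

def poisson(f, A, x, p, h=1e-6):
    """Standard Poisson bracket df/dx dA/dp - df/dp dA/dx (finite differences)."""
    dfdx = (f(x + h, p) - f(x - h, p)) / (2 * h)
    dfdp = (f(x, p + h) - f(x, p - h)) / (2 * h)
    dAdx = (A(x + h, p) - A(x - h, p)) / (2 * h)
    dAdp = (A(x, p + h) - A(x, p - h)) / (2 * h)
    return dfdx * dAdp - dfdp * dAdx

g = lambda x: x**2           # coefficient of the vector field v = g(x) d/dx
v = lambda x, p: g(x) * p    # its symbol on phase space
A = lambda x, p: x**3 + x * p**2

x0, p0 = 1.3, 0.7
lhs = poisson(v, A, x0, p0)
# right-hand side of the displayed formula: -dA/dx g + dA/dp g' p
rhs = -(3 * x0**2 + p0**2) * g(x0) + (2 * x0 * p0) * (2 * x0) * p0
assert abs(lhs - rhs) < 1e-4
```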
\section{Introduction} For years, Atomic Force Microscopy (AFM) has been a powerful technique for analyzing the physical properties of materials down to the nanoscale. More recently, AFM has entered the biological world, where the fragile nature of the samples has prompted new challenges and has led to considerable optimization of AFM techniques\cite{raman,tando,tetard}. One issue is linked to the softness of the samples with respect to the AFM tip, which imposes minimizing the contact between tip and sample; a second is the increasingly urgent need to extract quantitative values for the mechanical and chemical properties of the studied systems. In this letter we address specifically these two aspects. In conventional atomic force microscopy a cantilever with a nanosized tip is used to explore the entire range of tip-sample forces in one single vibrational cycle. The tip-sample interaction couples the eigenmodes of the cantilever, but typically only the information contained in the first eigenmode is studied. However, the coupling with higher modes is highly nonlinear for relatively large amplitudes of oscillation and energy is transferred to higher harmonics \cite{stark:5111, stark347}. Consequently, if the cantilever response is measured only at the excitation frequency, close to the first eigenmode, then part of the tip-sample interaction is masked and not measured. To overcome the problem, methods have been developed where several modes and/or harmonics are measured simultaneously \cite{raman, kareem, garcia}. Among the difficulties in dealing with biological samples is the fact that the measurements are usually carried out in liquid. When imaging in liquid, the AFM cantilevers have to be stiff enough to maintain an acceptable quality factor to run dynamic AFM measurements. This, together with the large amplitudes of oscillation imposed, results in large excitation energies compared to the thermal energy.
Exciting also at other frequencies further increases the excitation energy and the consequent energy transfer to the sample via the tip-sample interaction. To decrease both the excitation energy and the pressure exerted on the sample by the tip, it then becomes of paramount importance to decrease both the cantilever stiffness and the amplitude of oscillation. Moreover, small oscillation amplitudes result in a negligible coupling to higher harmonics. Recently this strategy has been implemented in a new instrument called the \textit{Force Feedback Microscope} (FFM) \cite{rodrigues:203105}, where very soft cantilevers and small amplitudes of oscillation are adopted to minimize the interaction energy. The cantilever stiffness is typically kept on the order of $0.01$ N/m and the oscillation amplitude is about $0.3$ nm. The typical excitation energy imposed on the cantilever is then $E = k x^2 \approx 10 \times 10^{-22}$ J, while $k_B T \approx 41 \times 10^{-22}$ J, implying that the excitation energy is kept below the thermal energy. This can be compared to normal AFM measurements, where the amplitude and the stiffness are at least a factor of 10 higher ($3$ nm and $0.2$ N/m), which makes for a factor of about 1000 in energy. Moreover, using small amplitudes of oscillation also offers the advantage that at any given distance the tip-sample interaction can be assumed to be linear, justifying the use of very simple equations to describe the interaction. In turn this has the consequence that the changes in normalized oscillation amplitude and phase can be mapped directly into stiffness and damping of the sample. One central aspect of the FFM is that rather than \emph{assuming} a certain dynamic behavior of the cantilever, it is possible to \emph{calibrate} its dynamics as a function of a measured reference interaction. This makes it possible to quantify easily the tip-sample interaction regardless of the cantilever response spectrum.
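The order-of-magnitude comparison above can be reproduced numerically (an illustrative sketch; the stiffness and amplitude values are those quoted in the text, and room temperature is our assumption):

```python
# Excitation energy E = k x^2 of the cantilever vs thermal energy,
# with the stiffness/amplitude values quoted in the text.
k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                    # room temperature, K (our assumption)

k_ffm, x_ffm = 0.01, 0.3e-9  # FFM: 0.01 N/m stiffness, 0.3 nm amplitude
E_ffm = k_ffm * x_ffm**2     # ~9e-22 J, i.e. ~10 x 10^-22 J
E_thermal = k_B * T          # ~41 x 10^-22 J

k_afm, x_afm = 0.2, 3e-9     # conventional AFM: 0.2 N/m, 3 nm
E_afm = k_afm * x_afm**2
ratio = E_afm / E_ffm        # ~2e3, consistent with the factor-1000 estimate

assert E_ffm < E_thermal
```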
In liquid conditions, and in particular when soft cantilevers are used, it is often difficult to precisely obtain a quality factor (Q) or even to identify the resonance frequency $f$ of the cantilever. These two constants are essentially irrelevant when using the method described here. The frequency used during the measurements is arbitrarily chosen and kept constant during a measurement. The frequency responses of the liquid and of other mechanical parts do not influence the quantitative analysis. The results reported in this letter show how the FFM makes it possible to map the topography, the force, the force gradient and the dissipation in one single scan, and how the interaction can be measured quantitatively solely from knowledge of the spring constant of the cantilever. The range of the xyz scanner used was rather large ($100 \times 100 \times 100\ \mu m^3$), limiting the spatial resolution. This does not limit the significance of our results, since our main goal is to demonstrate the possibilities offered by the method. We used three different samples: DNA, lipids and protein complexes in liquid media. For all three samples the substrate was mica. \section{Materials and method} \subsection{Force Feedback Microscopy} Before taking an image, a set of approach curves onto the mica substrate is performed to calibrate the cantilever dynamics. A typical curve is shown in figure \ref{fig:1}. The FFM feedback loop keeps the position of the tip constant relative to the laboratory reference frame. The force supplied by the loop is then equal and opposite to the tip-sample interactions \cite{rodrigues:203105}. The calibration is a measurement of how the cantilever responds elastically and inelastically to forces at the chosen frequency and in the chosen medium. To perform the calibration, the oscillation amplitude, the excitation amplitude and the phase are recorded as a function of the distance (or interaction), resulting in a so-called approach curve.
\begin{equation} \nabla F = a \left[\cos(\phi_\infty)-n \cos(\phi)\right] \label{eq:eq1} \end{equation} \begin{equation} \gamma = \frac{a}{\omega}\left[\sin(\phi_\infty)-n \sin(\phi)\right] \label{eq:eq2} \end{equation} Equations \ref{eq:eq1} and \ref{eq:eq2} are used to convert the measured data to interaction parameters \cite{rodrigues:203105}, namely to the force gradient $\nabla F$ and the viscous damping $\gamma$. The tip-sample force gradient corresponds to the negative of the tip-sample stiffness, and for that reason we may use stiffness or force gradient to refer to the same physical characteristic of the interaction. In the equations above $a$ and $\phi_{\infty}$ are calibrated constants, $n$ is the normalized amplitude (i.e. the ratio of the excitation amplitude to the oscillation amplitude, normalized to one at infinity) and $\omega$ is the angular frequency of the excitation. The strategy consists in finding which constants, $a$ and $\phi_{\infty}$, satisfy the condition that the integral of the force gradient equals the force. Since the force is simply $F = k \Delta x$, the only constant required for calibrating the cantilever dynamics at that specific frequency and in the specific medium is the cantilever stiffness $k$, which allows one to obtain the force. Note that the force is actually the amount of force that the feedback loop needs to supply to the tip to maintain it at equilibrium. The method to calibrate the cantilever is described in more detail in reference \cite{rodrigues:203105}. To obtain the images a second feedback loop is used. This second feedback loop operates in the same way as in any other typical AFM measurement, moving the sample to and fro while maintaining a chosen signal constant, typically the amplitude of oscillation. Here, instead of the amplitude of oscillation, we have used either the phase of oscillation or the tip-sample force.
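Equations \ref{eq:eq1} and \ref{eq:eq2} translate directly into code. A minimal sketch follows (the calibration constants $a$, $\phi_\infty$ and the sign conventions are illustrative, not taken from an actual calibration); far from the surface, where $n = 1$ and $\phi = \phi_\infty$, both quantities must vanish, which provides a simple self-check:

```python
import math

def force_gradient(a, phi_inf, n, phi):
    """Tip-sample force gradient, eq. (1): a [cos(phi_inf) - n cos(phi)]."""
    return a * (math.cos(phi_inf) - n * math.cos(phi))

def damping(a, phi_inf, n, phi, omega):
    """Viscous damping, eq. (2): (a/omega) [sin(phi_inf) - n sin(phi)]."""
    return (a / omega) * (math.sin(phi_inf) - n * math.sin(phi))

# Illustrative (hypothetical) calibration constants; the excitation
# frequency is the one used for the DNA measurement in the text.
a, phi_inf = 2.0e-2, 1.1
omega = 2 * math.pi * 3555.0    # 3.555 kHz

# Far from the surface: n = 1 and phi = phi_inf, so no interaction.
assert abs(force_gradient(a, phi_inf, 1.0, phi_inf)) < 1e-15
assert abs(damping(a, phi_inf, 1.0, phi_inf, omega)) < 1e-18
```

The tip-sample stiffness is then $-\nabla F$, and integrating $\nabla F$ along an approach curve recovers the force $F = k \Delta x$ used for the calibration.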
\begin{figure}[htp] \centering \includegraphics[width=\linewidth]{figure1.eps} % \caption{Approach curve for calibration of the force sensor. The sample is clusters of TBK1 and OPTN complexes on mica. (a) normalized excitation, (b) phase difference, (c) tip-sample stiffness, (d) damping coefficient, (e) negative of integrated force gradient and (f) force (red) compared with tip-sample stiffness (blue, thinner line). } \label{fig:1} \end{figure} \subsection{DNA} The sample was prepared using a solution containing Mg$^{2+}$ divalent cations to bind the DNA to freshly cleaved mica \cite{dna2,dna3,dna4}. In our experiments we used 1000 base-pair DNA and supercoiled DNA. A buffer of $10$ mM HEPES, $5$ mM MgCl$_2$ at pH $6.5$ was used to dilute the DNA to a concentration of $1$ nM. A drop of $10$ $\mu$L of the DNA solution was deposited on freshly cleaved mica and left to incubate for 20 minutes. The drop was then rinsed with $500$ $\mu$L of $10$ mM HEPES, $5$ mM MgCl$_2$ at pH $6.5$ and the sample was imaged in buffer with the FFM. \subsection{Phospholipids} The phospholipid 1,2-Distearoyl-sn-glycero-3-phosphoethanolamine (DSPE) was used to obtain self-assembled lipid layers on mica. Lipids were diluted in chloroform at a concentration of $0.1$ g/L. About $20$ $\mu$L of solution was applied directly onto freshly cleaved mica at room temperature. The specimen was incubated for 15 minutes and then washed several times with deionized water. The sample was imaged in $20$ mM Tris, $150$ mM NaCl at pH $7.5$ with the FFM. \subsection{Tank Binding Kinase (TBK1) and Optineurin (OPTN) protein complexes} The TBK1·OPTN sample was prepared by mixing the two purified proteins in equimolar ratio and purifying the 1:1 complex via size exclusion chromatography. The complex was diluted with deposition buffer ($20$ mM HEPES and $5$ mM MgCl$_2$) to $34$ nM.
Sample grids were prepared by applying $20$ $\mu$L of poly-L-lysine to freshly cleaved mica to render the surface positively charged \cite{proteins2, lipids5}. After 5 min of incubation the mica was rinsed with dH$_2$O and dried with gaseous nitrogen. Subsequently $2$ $\mu$L of the TBK1·OPTN sample was added to the mica and incubated for 10 min. The mica was rinsed with deposition buffer and imaged with the FFM in non-contact mode. \section{Results} The first case we present here is that of DNA on mica. Imaging DNA in a $\mathrm{MgCl_2}$ solution is particularly difficult. \begin{figure}[htp] \centering \includegraphics{figure2r.eps} \caption{FFM images of DNA deposited on mica in liquid solution. (a) topography, (b) force, (c) stiffness and (d) damping. The full color scale is 3 nm, 600 pN, 0.025 N/m and 1 $\mu$kg/s respectively and the scale bar is 500 nm. Here the feedback signal used for imaging is the phase of oscillation in the repulsive regime, yielding an almost constant damping image.} \label{fig:2} \end{figure} \noindent Figure \ref{fig:2} shows the topography, force, stiffness and damping images of DNA. A cantilever with a stiffness of 0.02 N/m was used. The excitation frequency was $3.555$ kHz and the oscillation amplitude was about $0.3$ nm. To obtain the topography, the phase difference between excitation and oscillation was used as the set point in the topography feedback loop. This corresponds to imaging at constant damping. Accordingly, the damping image in figure \ref{fig:2} resembles an error image. The same figure also provides local force changes (Fig. 2b) and local stiffness changes (Fig. 2c). We observe that the interaction force is close to zero when the tip is on top of the mica. When the tip is on top of the DNA we observe an interaction force of $150$ pN. The indentation on the DNA is therefore larger than that on mica, indicating the DNA to be less viscous than mica.
The measured local stiffness of the DNA is larger than that of the mica, indicating the DNA to be stiffer than mica \textbf{at this dissipation}.\\ \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{new_dna.eps} % \caption{FFM images of supercoiled DNA deposited on mica in liquid solution. (a) topography, (b) force, (c) stiffness and (d) damping. The full color scale is 2 nm, 300 pN, 0.13 N/m and 4.8 $\mu$kg/s respectively and the scale bar is 400 nm. Here the feedback signal used for imaging is the force in the repulsive regime. } \label{fig:new} \end{figure} In figure \ref{fig:new} supercoiled DNA has been imaged at a constant repulsive force of 100 pN. The excitation frequency was $3.57$ kHz, the oscillation amplitude was $0.3$ nm and the cantilever stiffness $0.02$ N/m. In this imaging mode we acquired the stiffness and the damping coefficient simultaneously with the topography. In this case the DNA is softer than mica by one order of magnitude (figure 3c). Moreover, the DNA is less viscous than mica (figure 3d), in agreement with the measurement presented in figure \ref{fig:2}. We conclude that the constant-dissipation imaging mode can be seen as a measurement of the local stiffness at different interaction forces, since the damping coefficient changes strongly as a function of the sample being probed. In summary, the local stiffness of the 1000 base-pair DNA at $150$ pN is found to be $0.025$ N/m (figure 2), whereas for the supercoiled DNA at $100$ pN it is found to be $0.01$ N/m (figure 3).\\ As a second case we show an image of lipid membranes. Phospholipids are the major components of all cell membranes, constituting the matrix for the membrane proteins. Cells perform many physiological functions through the membrane, including molecular recognition, intracellular communication and cell adhesion \cite{lipids1}, but the direct observation of these biological events at the nanoscale is still a challenge.
Simplified two-dimensional systems called artificial membranes are used to simulate cell membranes. Membranes assembled from phospholipids are intensively studied with the AFM as a model for the cell membrane \cite{lipids3} and for membrane/protein interactions \cite{lipids4}. The measurements were performed at a constant repulsive force of $50$ pN. A small oscillation amplitude of $0.2$ nm at $7.01$ kHz was imposed on the tip. In the topography, figure 4a, the thickness of the DSPE layers is found to be 6.5 nm $\pm$ 1 nm, indicating that the DSPE forms a bilayer \cite{lipidsa}. The images clearly provide more information than just the topography. In figure 4c, for example, the color contrast indicates when the tip is over the membranes, as they appear locally softer than the substrate. This is in agreement with the measurements performed over the last decade with static force curves and \textit{Peak Force} techniques \cite{lipidsb,lipidsc}. Moreover, we observe that thicker layers of lipids turn out to be softer and less viscous than a single bilayer. This is likely due to the weaker influence of the substrate. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{new_lipids.eps} \caption{FFM images of DSPE deposited on mica in liquid solution. (a) topography, (b) force, (c) stiffness and (d) damping. The full color scale is 23 nm, 100 pN, 0.13 N/m and 2.5 $\mu$kg/s respectively. The scale bar corresponds to 1000 nm. The signal chosen for the feedback was a small repulsive force of 50 pN.} \label{fig:3} \end{figure} The last example we show is a non-contact image of clusters of the Tank Binding Kinase (TBK1) and Optineurin (OPTN) protein complexes. The characterization of biologically relevant protein-protein complexes is essential for understanding fundamental cellular processes. TBK1 is a vital protein involved in the innate immune signaling pathway. TBK1 forms a complex with the scaffold protein OPTN.
This complex (TBK1·OPTN) has not yet been characterized structurally due to its large size and intrinsic flexibility. Structural characterization would help to elucidate how the complex is involved in reducing the proliferation of invading bacteria \cite{proteins1}. A small oscillation amplitude of $0.2$ nm at $2.2$ kHz was imposed on the tip and the cantilever was calibrated. The calibration curves for this measurement are presented in figure \ref{fig:1}. Despite possible contamination of the tip, which might induce artifacts in the images, the clear and stable presence of short-range attractive forces between the tip and the sample gives the opportunity to acquire a non-contact image. In the context of biomechanics this is an important instrumental challenge, even if for the moment it does not add anything to our knowledge of the studied system. The phase difference between excitation and tip oscillation was used as the set point for the acquisition of the topography, due to its monotonicity as a function of the tip-sample distance. Here again the images clearly provide more information than just the topography. In this case, at variance with the first two examples, the stiffness (Figure 5c) is not linked to the usual sample stiffness, as such a property would imply direct contact with the sample, which in this specific situation is absent. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{figure4.eps} \caption{FFM images of clusters of TBK1 and OPTN complexes deposited on mica in aqueous solution. (a) topography, (b) force, (c) stiffness and (d) damping. The full color scale corresponds to 24 nm, -300 pN, -0.04 N/m and 5 $\mu$kg/s respectively and the scale bar to 1 $\mu$m. This image was taken in the attractive regime using the phase as the feedback signal.} \label{fig:4} \end{figure} \section{Discussion} In all three cases presented, the images are provided with absolute values, based on the experimental cantilever calibration.
The error propagation associated with the measurements comes from the cantilever stiffness alone. In conclusion, we have shown that the FFM can provide quantitative images of the mechanical properties of biological samples in liquid media. Depending on the property of interest, a particular signal for the feedback loop can be selected. We have shown two configurations: one where the signal to the topography loop is the force and another where the phase is used; other signals, such as the amplitude, can be used as well. The phase seems in general to be a good candidate because it is often monotonic regardless of the nature of the interaction. The monotonicity is mainly due to the fact that the chosen frequency is far from resonances. In FFM, the high sensitivity is determined by the small cantilever stiffness chosen, not by resonance phenomena. \section{Conclusions} These \emph{proof of principle} experiments underscore the general philosophy of Force Feedback Microscopy and the benefits of the FFM in providing qualitative and quantitative in situ characterization of biological samples. Furthermore, the FFM can provide quantitative data on viscoelasticity at any given frequency, because the choice of working frequency is arbitrary and independent of the cantilever resonant modes. Thus, it is possible to obtain images or approach curves at different frequencies to explore the local mechanical impedance of samples. \vspace{9pt} \noindent \textbf{ACKNOWLEDGMENTS} \\ Luca Costa acknowledges COST Action TD 1002. Mario S. Rodrigues acknowledges financial support from Fundação para a Ciência e Tecnologia, grant SFRH/BPD/69201/2010. The authors acknowledge Leide Cavalcanti for help in preparing the lipid samples. \hspace{8pt}
\section{Introduction} \label{S introduction} \subsection{Lagrange optimal control problem: classical setting} \label{SS classical setting} Consider a Lagrange optimal control problem with control-affine dynamics: \begin{align} J(x,u) = & \int_0^1 L(x(t), u (t)) \, dt \rightarrow \min, \label{lag} \\ \notag \dot x (t) = & f(x(t)) + \sum \limits _{i =1}^kg_i(x(t))u_i(t)=\\ \label{affine system} = & f(x(t))+G(x(t))u(t) \qquad \text{a.e. } t \in [0,1], \\ \label{boundary conditions} x(0)=&x^0, \qquad x(1) = x^1 . \end{align} We assume the vector fields $f$, $g_i, \ i=1,2,\ldots , k$ to be locally Lipschitz in $\mathbb R^n$, and the function $(x,u) \mapsto L(x,u)$ to be continuous in $\mathbb R^{n+k}$ and convex with respect to $u$. Regarding {\it existence of minimizers} for this problem, the classical approach, pioneered by L.~Tonelli and D.~Hilbert more than a century ago (see the monograph \cite{Cesa} for historical remarks and bibliography), introduced the following assumptions on the Lagrangian $L$: \begin{itemize} \item[{\bf A)}] convexity of the Lagrangian with respect to $u$ for each fixed $x$; \item[{\bf B)}] boundedness of the Lagrangian from below and {\it superlinear growth} of the Lagrangian as $|u| \to \infty$. \end{itemize} In addition, one must require the existence of an admissible trajectory of the controlled dynamics \eqref{affine system} satisfying the boundary conditions \eqref{boundary conditions}. These assumptions guarantee the existence of a minimizing control $\tilde{u}(t) \in L_1^k[0,1]$, which we call an {\it ordinary minimizing control}. It has been well known, at least since the work of L.~C.~Young in the 1930s, that without convexity of $L$ in $u$, ordinary minimizing controls of the Lagrange problem may fail to exist, and the minimum can be achieved by so-called {\it relaxed controls}. By now a rich theory of relaxed controls has been developed (see \cite{Wa}). We will not deal with relaxed controls, {\bf assuming below that the convexity assumption (A) holds}.
Instead, we will {\it weaken the condition of superlinear growth} of the Lagrangian as $|u| \to \infty$. \subsection{Weakening the growth assumption and generalized minimizers} \label{SS linear growth} If instead of the superlinear growth assumption {\bf (B)} we assume merely {\it linear growth} \begin{itemize} \item[{\bf B$_\ell$)}]$ \ L(x,u)\geq a+ b|u|, \qquad a \in \mathbb{R}, \ b >0$, \end{itemize} then an ordinary minimizer for the problem \eqref{lag}--\eqref{boundary conditions} may fail to exist, as the following simple example shows. \begin{example}[transfer with minimal fuel consumption] Consider the optimal control problem \begin{align} & J(u) = \int_0^1 | u (t)| dt \rightarrow \min, \\ & \dot x (t) = x(t) + u(t), \ \ u \in \mathbb{R}, \ x(0)=0, \ x(1)=e, \label{km} \end{align} which describes the transfer of a point along a line, with the cost interpreted as the fuel consumed by the transfer. From the differential equation and the boundary conditions we get $$e=x(1)=e\int_0^1e^{-\tau}u(\tau)d\tau$$ and then, for any control $u(\cdot) \in L_1[0,1]$ compatible with the boundary conditions \eqref{km}, $$\int_0^1|u(\tau)|d\tau > \left|\int_0^1e^{-\tau}u(\tau)d\tau \right|=1. $$ On the other hand, for the sequence of needle-like controls $$u_i(\tau)=\bar{u}_i\chi_{[0,1/i]}(\tau), \qquad \bar{u}_i=1/(1-e^{-1/i}),$$ which are compatible with the boundary conditions, there holds $J(u_i) \to 1$ as $i \to \infty $. It is easy to see that the sequence $\{u_i\}$ converges in the $W_{-1,1}$-norm to the Dirac measure, an optimal {\it impulsive generalized control} $\tilde{u}=\delta(\tau)$; the corresponding generalized trajectory is a discontinuous function: $\tilde{x}(\tau)=e, \ \forall \tau >0$, with $\tilde{x}(0)=0. \ \square$ \end{example} If the Lagrangian $L$ has linear growth with respect to the control, then each minimizing sequence of controls $\{u_i\}_{i \in \mathbb N}$ is bounded in the $L_1$-norm.
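The convergence $J(u_i) \to 1$ claimed for the needle-like controls in the example can be checked directly: since $\int_0^{1/i} e^{-\tau}\, d\tau = 1 - e^{-1/i}$, the boundary condition holds exactly for every $i$, and the cost reduces to $\bar u_i / i$. The following sketch verifies this numerically (the function name is ours, introduced purely for illustration).

```python
import math

def needle_cost(i):
    """Cost J(u_i) of the needle control u_i = u_bar * chi_[0,1/i],
    with u_bar = 1/(1 - exp(-1/i)) chosen so that x(1) = e exactly.
    The integral of |u_i| over [0, 1/i] reduces to u_bar / i."""
    u_bar = 1.0 / (1.0 - math.exp(-1.0 / i))
    return u_bar / i

# The costs decrease monotonically towards the unattained infimum 1.
print([needle_cost(i) for i in (1, 10, 100, 10**6)])
```

Every value stays strictly above 1, in agreement with the strict inequality derived from the boundary condition, while the limit 1 is attained only by the impulsive generalized control.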
If the fields $f, g_1, \ldots, g_k$ have linear or sublinear growth with respect to the state variables, then the corresponding sequence of trajectories $\{x_{u_i} \}_{i \in \mathbb N}$ is bounded in total variation. Helly's selection theorem \cite{Durrett} guarantees that there is a function $x:[0,1] \mapsto \mathbb R^n$ of bounded variation (not necessarily continuous) such that $x_{u_i}$ converges pointwise to $x$ at every point of continuity of $x$. It is reasonable to conjecture that there is a space of {\it generalized trajectories} including discontinuous curves, and a space of {\it generalized controls} including impulses, for which the problem \eqref{lag}--\eqref{boundary conditions} admits a solution. \subsection{Generalized and impulsive controls in involutive and non-involutive cases} \label{SS involutive case} The study of optimal impulsive controls for linear systems was initiated in the 1950s, particularly for applications in spacecraft dynamics. Later, a more general {\it nonlinear theory} was developed; it encompasses the problem \eqref{lag}--\eqref{boundary conditions} in the cases where the controlled vector fields $\{g_1, \ldots , g_k\}$ in \eqref{affine system} form an {\it involutive system}. It turns out that in such cases one can provide the space of `ordinary' controls $u(\cdot)$ (say $L_{1}^{k}[0,T]$) and the space of trajectories $x(\cdot)$ with weak topologies for which one can still guarantee uniform continuity of the {\it input-to-trajectory map} $u(\cdot) \mapsto x(\cdot)$. Then one can extend this map by continuity onto the topological completion of the space of controls, which contains distributions. Results obtained for {\it nonlinear} control systems by this approach since the 1970s can be found in \cite{Br87,KrPo,Or,Sa88}.
In particular, the method allows one to extend the input-to-trajectory map onto the space $W_{-1,\infty}$ of generalized derivatives of measurable essentially bounded functions, with generalized trajectories belonging to $L_\infty$. Some representation formulae for the generalized trajectories via the generalized primitives of the inputs can be found in \cite{Sa88}. In the linear-quadratic case, this approach allows for the extension of both the input-to-trajectory map \emph{and} the cost functional. Indeed, linear-quadratic Lagrange problems admit a generalized minimizer in some Sobolev space of sufficiently large negative index, provided the boundary conditions can be satisfied and the quadratic functional is bounded from below \cite{Guerra00,ZavalishchinSesekin}. Problems with the continuous extension of the input-to-trajectory map, which arise in the non-involutive case, were identified already in the 1950s (see \cite{Kuzw}). It has been proved in \cite{KrPo} that involutivity of the system of controlled vector fields is necessary for continuity of the map in the weak topology -- a property coined in \cite{KrPo} as {\it vibrocorrectness}. To see why vibrocorrectness fails in the non-involutive case, consider the following simple example. \begin{example} \label{Ex noninvolutive system} Consider the system \[ \dot{x}_1=u_1, \ \ \dot{x}_2=u_2, \ \ \dot{x}_3=x_2u_1, \qquad x(0)=(0,0,0) , \] and three bi-dimensional controls, which are concatenations of needles: \begin{align*} & u^{1,\varepsilon} (t) = \left( \frac 1 \varepsilon \chi_{[0,\varepsilon]}(t), \frac 1 \varepsilon \chi_{[\varepsilon, 2 \varepsilon ]}(t)\right), \\ & u^{2,\varepsilon} (t) = \left( \frac 1 \varepsilon \chi_{[0,\varepsilon]}(t),\frac 1 \varepsilon \chi_{[0,\varepsilon ]}(t) \right), \\ & u^{3,\varepsilon} (t) = \left( \frac 1 \varepsilon \chi_{[\varepsilon,2\varepsilon]}(t),\frac 1 \varepsilon \chi_{[0,\varepsilon ]}(t) \right).
\end{align*} As $\varepsilon \rightarrow 0^+$, all three concatenations tend in $W^2_{-1,1}$ to the bi-dimensional impulsive control $u(t) = (\delta (t), \delta(t) )$, while the corresponding trajectories converge pointwise to different discontinuous curves, with $x(0^+)=(1,1,0)$, $x(0^+) = \left( 1,1, \frac 1 2 \right)$, and $x(0^+) =(1,1,1)$ respectively. $\square$ \end{example} Thus, in the noninvolutive case an extension of the input-to-trajectory map onto classical spaces of distributions and/or Sobolev spaces of negative order seems to be impossible. One approach to the study of noninvolutive systems with impulsive controls proceeds by construction of an appropriate Lie extension of the original system \cite{BressanRampazzo94,Jurdjevic}. The extension is a new system such that: {\it (i)} the extended system of controlled fields is involutive, {\it (ii)} all the trajectories of the original system are trajectories of the new system, and {\it (iii)} the trajectories of the extended system can be approximated by trajectories of the original system. This reduces the noninvolutive case to the involutive one and, after some further transformation, to the commutative case. However, any relation between the controls of the extended system and the controls of the original system is indirect. An alternative approach, providing a unique extension of the input-to-trajectory map, is one of the main issues treated in this contribution. \subsection{Time-reparametrization and ``graph completion'' techniques in the noncommutative case} \label{SS noninvolutive case} For the noncommutative case, a different approach has been adopted. It is based on a technique of time reparametrization introduced by R.~W. Rischel \cite{Rischel65} and J. Warga \cite{Warga65}, and further developed by other authors \cite{ArutyunovKaramzinPereira10,ArutyunovKaramzinPereira12, BressanRampazzo88,DykhtaSamsonyuk09,MiRu,MottaRampazzo96,PereiraSilva00,SilvaVinter96,Wa,WargaZhu94}.
For a detailed monograph and further references, see \cite{MiRu}. The approach proceeds by introducing a new independent variable with respect to which the trajectories become absolutely continuous. This creates an auxiliary control system which includes time as an additional state variable. Several authors \cite{ArutyunovKaramzinPereira10,ArutyunovKaramzinPereira12, DykhtaSamsonyuk09,MiRu,MottaRampazzo96,PereiraSilva00,SilvaVinter96} use the auxiliary system to obtain representations of generalized solutions of \eqref{affine system} by solutions of systems having Radon measures as generalized controls and (right-continuous) functions of bounded variation as generalized trajectories. The definitions introduced have a `sequential' form: couples $(x(\cdot), U(\cdot))$ of functions of bounded variation, which are, respectively, the generalized trajectory and the primitive of the generalized control, are weak$^*$ limits in {\bf BV} of couples $\left( x_n(\cdot), U_n(\cdot) \right)$ of classical trajectories $x_n$ and primitives $U_n$ of the classical controls $u_n$ which generate $x_n(\cdot)$, with $\sup\limits_n\|u_n(\cdot)\|_{L_1} < \infty$. It is known that within this approach, for the same $U$, different sequences $x_n(\cdot)$, driven by different $U_n$, may converge to different limits; i.e., each generalized input defines a `funnel' of generalized trajectories, rather than a well-defined unique trajectory. A different line of argument has been followed in \cite{BressanRampazzo88}. Any function $x:[0,1] \mapsto \mathbb R^n$ can be identified with its {\it graph}, that is, the set $\Gamma_x = \left\{ (t,x(t)): t \in [0,1] \right\} \subset \mathbb R^{1+n}$. If the function $x$ is not continuous, then its graph is not connected. However, if the total variation of $x$ is finite, then there is a {\it graph completion} of $\Gamma_x$ which is connected.
In \cite{BressanRampazzo88}, each control $u \in L_1^k[0,1]$ is identified with the graph of its primitive $U(t)= \int_0^tu(\tau) d \tau $. The spaces of generalized controls and generalized trajectories are spaces of graph completions of functions of bounded variation. The input-to-trajectory map is shown to be continuous from sets of generalized controls equibounded in variation, provided with an appropriate metric, into the space of generalized trajectories provided with the Hausdorff metric on the graph completions of the generalized primitives. \subsection{Fr\'echet curves approach to the noncommutative case} \label{SS new approach} As regards the construction of generalized inputs and trajectories, our approach is rather close to that of \cite{BressanRampazzo88}. It is easy to observe that the graph completions introduced in \cite{BressanRampazzo88} are Fr\'echet curves \cite{Frechet08,Leoni09}, and the metric introduced in the space of generalized controls is the classical Fr\'echet metric. We prove a stronger version of the main result in \cite{BressanRampazzo88}: the input-to-trajectory map is continuous with respect to a strengthened Fr\'echet metric in both the domain and the image. Notice that the Fr\'echet metric is topologically stronger than the Hausdorff metric. Since ordinary controls are densely embedded in the space of Fr\'echet curves, this proves existence and uniqueness of a continuous extension of the input-to-trajectory map to the space of generalized controls. This map admits a simple representation in the form of the input-to-trajectory map of an equivalent {\it auxiliary system}. \subsection{Fr\'echet generalized minimizers for Lagrange problems with functionals of linear growth} \label{SS extended Lagrange problem} This continuity result, together with the representations of generalized trajectories, contributes to a proper extension of the cost functional \eqref{lag} onto the space of Fr\'echet generalized controls.
Thus, we extend the Lagrange variational problem \eqref{lag}--\eqref{boundary conditions} to the class of Fr\'echet generalized controls and trajectories so that \begin{itemize} \item the cost functional \eqref{lag} is lower semicontinuous in the space of Fr\'echet generalized controls and, under the linear growth assumption on the integrand, the problem possesses a Fr\'echet generalized minimizer; \item there may exist a (Lavrentiev-type) gap between the infimum of the cost functional in $L_1^k[0,1]$ and the infimum in the space of generalized controls; \item one can formulate regularity conditions which preclude the occurrence of the gap. \end{itemize} We do not claim to have found the weakest topology in the space of controls which provides continuity of the input-to-trajectory map. In fact, the study in \cite{LiuSussmann99} indicates that, in the absence of {\it involutivity}, the weakest topology should depend on the structure of the Lie algebra generated by the vector fields $f, g_1, \ldots, g_k$. The topology we introduce does not depend on it. However, it allows for a proper extension of Lagrange variational problems onto a set of generalized controls which is broad enough to guarantee existence of generalized minimizers for integral functionals of low (in particular, linear) growth. \subsection{Structure of the paper} \label{SS structure of the paper} This paper is organized as follows. In Section \ref{S Generalized trajectories and generalized controls}, we discuss the spaces of Fr\'echet curves and their topologies. We prove the {\it continuous canonical selection} theorem (Theorem \ref{T continuity Frechet to W}), and introduce the spaces of generalized controls and generalized trajectories. Section \ref{S input-to-trajectory map} deals with the definition of the generalized input-to-trajectory map. Section \ref{S Problem reduction} discusses the auxiliary problem.
In Section \ref{S the cost functional}, we discuss the extension of the cost functional and its properties. Existence of minimizers for the extended problems is settled in Section \ref{S existence}. The possible occurrence of a Lavrentiev gap is discussed in Section \ref{S Lavrentiev phenomenon}. In Section \ref{S Example} we present an example of a problem with an integrand of linear growth whose minimizers are all generalized. The proofs of some technical results are collected in the appendix (Section \ref{S appendix}). \section{Fr\'echet generalized controls and generalized paths} \label{S Generalized trajectories and generalized controls} The goal of this section is to introduce the spaces of generalized controls and generalized paths. Subsection \ref{SS Frechet curves} contains definitions and some basic facts about Fr\'echet curves. Subsection \ref{SS Continuity of canonical selector} contains the key theorem of {\it continuous canonical selection} (Theorem \ref{T continuity Frechet to W}). Subsection \ref{SS Frechet curves in space-time} specialises to Fr\'echet curves defined in space-time. In Subsections \ref{SS Generalized controls} and \ref{SS Generalized trajectories}, we define what we call the spaces of {\it Fr\'echet generalized controls and generalized paths}. \subsection{Fr\'echet curves} \label{SS Frechet curves} Various slightly different definitions of Fr\'echet curves can be found in the literature \cite{Frechet08,Leoni09}. In this paper we consider curves that are rectifiable and oriented. Such curves admit absolutely continuous parameterizations, which is a natural requirement when dealing with ordinary differential equations. We allow Fr\'echet curves to be parameterized by non-compact intervals, which is a convenient way to account for solutions of \eqref{affine system} with a blow-up time in the interval $[0,1]$. Below we state the exact definitions and basic properties.
\medskip We say that a set $\gamma \subset \mathbb R^n$ is a \emph{parameterized curve} if there is an absolutely continuous function $g:[0,+\infty[ \mapsto \mathbb R^n$ such that $\gamma = g([0,+\infty[)$. A parameterization provides the curve with a terminal point $g(+\infty)$ only if a finite limit $\lim\limits_{t \rightarrow +\infty} g(t) $ exists. In that case we do not distinguish between $\gamma $ and $\gamma \cup \{ g(+\infty) \} $. \begin{definition} \label{D orientation} Two absolutely continuous curves $g_1,g_2:[0,+\infty[ \mapsto \mathbb R^n$ are equivalent if \begin{equation} \label{Eq orientation} \inf_{\alpha \in \mathcal T} \left\| g_1-g_2 \circ \alpha \right\|_{L_\infty[0,+\infty [} = 0, \end{equation} where $\mathcal T$ denotes the set of monotonically increasing absolutely continuous bijections $\alpha :[0,+\infty[ \mapsto [0,+\infty[$ admitting an absolutely continuous inverse. $\square$ \end{definition} The following lemma relates the previous definition to alternative formulations; its proof can be found in the Appendix (Subsection \ref{SP L orientation}). \begin{lemma} \label{L orientation} Two absolutely continuous parameterizations $g_1,g_2:[0,+\infty[ \mapsto \mathbb R^n$ are equivalent if and only if there are absolutely continuous nondecreasing functions $\alpha_1, \alpha_2: [0,+\infty[ \mapsto[0,+\infty[ $ satisfying the following conditions: \begin{itemize} \item[{\bf a)}] $g_1\circ \alpha_1 (t) = g_2 \circ \alpha_2 (t) \qquad \forall t \in [0,+\infty[$; \item[{\bf b)}] $\alpha_1(0)=\alpha_2(0)=0$ and $\alpha_i([0,+\infty[ ) = [0,+\infty[ $ for at least one $i \in \{1,2\}$; \item[{\bf c)}] if $\alpha_i (\infty )=T<+\infty $, then $g_i(t)=g_i(T^-)$ for every $t\geq T$. $\square$ \end{itemize} \end{lemma} Definition \ref{D orientation} introduces an equivalence relation.
The equivalence class of a function $g:[0,+\infty[ \mapsto \mathbb R^n$ is \begin{equation}\label{Fre_class} [g] = \left\{ h \in AC \left( [0,+\infty[, \mathbb R^n \right): \inf_{\alpha\in \mathcal T} \left\| g- h \circ \alpha \right\|_{L_\infty[0,+\infty[} = 0 \right\} , \end{equation} and is called an \emph{absolutely continuous Fr\'echet curve} in $\mathbb R^n$ or, for the sake of brevity, a \emph{Fr\'echet curve}; each $\tilde g \in [g]$ will be called either a {\it representative} or a {\it parameterization} of $[g]$, depending on the context. The space of Fr\'echet curves in $\mathbb R^n$ is provided with the \emph{Fr\'echet metric} \[ d\left( [g_1],[g_2] \right) = \inf_{\alpha \in \mathcal T} \left\| g_1 - g_2 \circ \alpha \right\|_{L_\infty[0,+\infty[} . \] Parameterizations by bounded intervals, i.e., absolutely continuous functions $g:[0,T[ \mapsto \mathbb R^n$ or $g:[0,T] \mapsto \mathbb R^n$ with $T<+\infty$, are included in the present definition of Fr\'echet curves: $[g]$ stands for $[g \circ \alpha ]$, where $\alpha:[0,+\infty[ \mapsto [0,T[$ is any monotonically increasing absolutely continuous bijection with absolutely continuous inverse. For every subset $A \subset \mathbb R^n$ and any Fr\'echet curve $[g]$, we set \[ A \cap [g] = \left\{ g(t): t \in[0,+\infty[, \ g(t) \in A \right\} . \] We say that $A \cap [g]$ is a \emph{segment} if the set $\{ t \geq 0 : g(t) \in A \}$ is an interval; by the above, a nonempty segment is also a Fr\'echet curve. For every nondecreasing $\alpha:[0,+\infty[ \mapsto [0,+\infty[$, we introduce the function $\alpha^{\#}:[0,+\infty[ \mapsto [0,+\infty]$, defined as \[ \alpha^{\#}(t) = \sup \{ s\geq 0: \alpha(s)\leq t \} \qquad t \in [0,+\infty[. \] If $\alpha$ is continuous, then $\alpha^{\#}$ is the right-inverse of $\alpha$, that is, $\alpha \circ \alpha^{\#}(t) =t$ for every $t< \alpha (+\infty )$.
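The right-inverse $\alpha^{\#}$ has a direct discrete analogue: on a grid of samples of a nondecreasing $\alpha$, the supremum in its definition becomes a binary search. The sketch below is only an illustration (the function name and grid conventions are ours, not part of the formal development).

```python
import bisect

def alpha_sharp(alpha_vals, s_grid, t):
    """Discrete analogue of alpha^#(t) = sup{ s >= 0 : alpha(s) <= t }.
    alpha_vals[j] = alpha(s_grid[j]) with alpha nondecreasing; returns the
    largest grid point s with alpha(s) <= t."""
    j = bisect.bisect_right(alpha_vals, t) - 1
    # For t below alpha(0) the supremum ranges over an empty set; we
    # return the left endpoint of the grid in that degenerate case.
    return s_grid[j] if j >= 0 else s_grid[0]
```

On an interval where $\alpha$ is constant, $\alpha^{\#}$ jumps to the right endpoint of that interval, reflecting the supremum in the definition.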
\begin{lemma} \label{L AC reparameterization} Let $\alpha:[0,+\infty[ \mapsto [0,+\infty[$ be nondecreasing and absolutely continuous, and $g:[0,+\infty[ \mapsto \mathbb R^n$ be absolutely continuous. Then: \begin{itemize} \item[{\bf a)}] $\dot \alpha \circ \alpha^{\#} (t) >0 $ a.e. on $\left[ \alpha(0), \alpha (\infty) \right[$. \item[{\bf b)}] $g \circ \alpha^{\#}$ is absolutely continuous in $\left[ \alpha(0), \alpha (\infty)\right[$ if and only if the set \linebreak $ \left\{ t \geq 0: \dot \alpha (t) =0, \ \dot g (t) \neq 0 \right\}$ has zero Lebesgue measure. \item[{\bf c)}] If $g \circ \alpha^{\#}$ is absolutely continuous in $\left[ \alpha(0), \alpha (\infty)\right[$, then \begin{equation} \label{Eq derivative of reparameterized curve} \frac{d}{dt}\left( g \circ \alpha^{\#} \right)(t) = \frac{\dot g}{\dot \alpha} \circ \alpha^{\#} (t) \qquad \text{for a.e. } t \in \left[ \alpha(0),\alpha (\infty ) \right[ . \end{equation} \item[{\bf d)}] If $g \circ \alpha^{\#}$ is absolutely continuous in $\left[ \alpha(0), \alpha (\infty)\right[$ and $g$ is constant on each interval $\left[0,\alpha(0) \right[$, $ \left] \alpha(\infty),+\infty \right[$, then $g \circ \alpha^\# \in \left[ g \right]$. $\square$ \end{itemize} \end{lemma} \begin{proof} See Subsection \ref{SP L AC reparameterization}. \end{proof} An absolutely continuous parameterization $g:[0,+\infty[ \mapsto \mathbb R^n$ generates an arc-length function $\ell_g:[0,+\infty[ \mapsto [0,+\infty[$, defined as \[ \ell_g(t) = \int_0^t \left| \dot g (s) \right| ds \qquad \forall t \geq 0. \] By Lemma \ref{L AC reparameterization}, the function $g \circ \ell_g^{\#}$ belongs to $[g]$ and satisfies $\left| \frac d {dt} \left( g \circ \ell_g^{\#} \right) \right| \equiv 1$. Further, $\tilde g \circ \ell_{\tilde g}^{\#} = g \circ \ell_g^{\#}$ for every $\tilde g \in [g]$. That is, the transformation $[g] \mapsto g \circ \ell_g^{\#}$ does not depend on the particular representative $g$ of $[g]$.
This transformation selects one particular element of the class $[g]$. Since it plays an important role in our approach, we introduce the following definition: \begin{definition} \label{D canonical selector} We call $g \circ \ell_g^{\#}$ the \emph{canonical representative} or \emph{canonical parameterization} of $[g]$, and the mapping $[g] \mapsto g \circ \ell_g^{\#}$ is the \emph{canonical selector}. $\square$ \end{definition} The total length of a Fr\'echet curve $[g]$, $\ell_g(\infty)$, does not depend on the particular parameterization $g$. We call a Fr\'echet curve $[g]$ \emph{infinite} if $\ell_g(\infty)=+\infty$. Otherwise, we call $[g]$ \emph{finite}. The space of finite Fr\'echet curves can be provided with the \emph{strengthened Fr\'echet metric} \[ d^+\left( [g_1],[g_2] \right) = d\left( [g_1],[g_2] \right) + \left| \ell_{g_1}(\infty ) - \ell_{g_2}(\infty ) \right| . \] This metric provides a stronger topology than the Fr\'echet metric, as can be seen from the following example. \begin{example} \label{Ex strong Frechet metric} For the sequence \[ g_i(t) = \frac 1 i \left( \cos (i^2t), \sin (i^2t) \right) \qquad t \in [0,1], \ i \in \mathbb N , \] $[g_i]$ converges to $[0]$ with respect to the Fr\'echet metric $d$. However $\ell_{g_i}(1) = i$ and therefore $[g_i]$ does not converge in the metric $d^+$. $\square$ \end{example} Every finite Fr\'echet curve $[g]$ has a well defined terminal point $g(\infty)$. Thus, we adopt the following \begin{convention} \label{Cv extension canonical} Any parameterization of a finite Fr\'echet curve by a compact interval $g:[0,T] \mapsto \mathbb R^n$ is extended to the interval $[0,+\infty[$ by \[ g(t) = g(T) \qquad \forall t \geq T. \] In particular, a canonical parameterization $t \mapsto g \circ \ell_g^\#(t)$ is extended to the interval $[0,+\infty[$ by \[ g \circ \ell_g^{\#}(t) = g(\infty) \qquad \forall t \geq \ell_g(\infty) .
\ \square \] \end{convention} \subsection{Continuity of canonical selector} \label{SS Continuity of canonical selector} The space of Fr\'echet curves consists of sets (classes of curves), on which we introduced the strengthened Fr\'echet metric. A crucial fact is that choosing the canonical representative of each class provides us with a $C_0$-continuous selector. \begin{proposition} \label{P continuity Frechet-to-length} The canonical selector $[g] \mapsto g \circ \ell_g^{\#}$ is a continuous one-to-one mapping of the space of finite Fr\'echet curves in $\mathbb R^n$ provided with the strengthened Fr\'echet metric $d^+$ into the space $AC^n[0,+\infty[$ provided with the topology of $C_0$-convergence. $\square$ \end{proposition} \begin{proof} Since for each $g$ the canonical representative $g \circ \ell_g^{\#}$ belongs to $[g]$ and is uniquely defined, the correspondence $[g] \mapsto g \circ \ell_g^{\#}$ is one-to-one. It remains to prove that the canonical selector is continuous. Fix a finite Fr\'echet curve $[ \gamma ]$, with canonical representative $\gamma $, and pick a small $\varepsilon >0$. There exists a partition $0=t_0<t_1<t_2< \ldots < t_N < +\infty$ such that \begin{align*} & \sum_{i=1}^N \left| \gamma (t_i) - \gamma(t_{i-1}) \right| > \ell_\gamma (\infty )- \varepsilon , \qquad t_N > \ell_\gamma(\infty) + \varepsilon . \end{align*} This implies that the length of any segment $\gamma|_{[t_j,t_{j+k}]}$ admits the bounds \begin{align} \label{Eq length approximation 2} \sum_{i=1}^k \left| \gamma (t_{j+i}) - \gamma(t_{j+i-1}) \right| \leq \ell_{\gamma|_{[t_j,t_{j+k}]} } \leq \sum_{i=1}^k \left| \gamma (t_{j+i}) - \gamma(t_{j+i-1}) \right| + \varepsilon . \end{align} Pick an arbitrary Fr\'echet curve $[g]$ such that \begin{equation} \label{Z006} d^+\left( [g],[\gamma] \right) < \frac \varepsilon N , \end{equation} and let $g$ be the canonical representative of $[g]$.
The bound \eqref{Z006} implies \begin{align} \label{Eq bounds g gamma} \left| \ell_g (\infty) - \ell_\gamma (\infty) \right| < \frac \varepsilon N, \qquad \left\| g \circ \alpha - \gamma \right\|_{L_\infty}< \frac \varepsilon N , \end{align} for some (absolutely continuous monotonically increasing) function $\alpha \in \mathcal T$. Without loss of generality, we may assume that $\alpha (t) =t $ for every $t \geq t_N > \max \left\{ \ell_g(\infty), \ell_\gamma(\infty) \right\}$. Let $\theta_i = \alpha (t_i)$ for $i = 0,1,2, \ldots , N$. By \eqref{Eq bounds g gamma}, we have \begin{equation} \label{Eq bounds g gamma 2} |g(\theta_i) - \gamma (t_i) | < \frac \varepsilon N \qquad \text{for } i=0,1,2, \ldots , N. \end{equation} Let \[ M= \left\| g - \gamma \right\|_{L_\infty[0,+\infty[} = \max \left\{ |g(t) - \gamma (t) |: t \in [0, t_N ] \right\} = |g(\hat t) - \gamma(\hat t)| . \] We may add the point $\hat t$ to the partition and (with a small abuse of notation) think that $\hat t=t_k$ for some $k \in \{0,1,2, \ldots , N \}$. We obtain \begin{align*} \ell_g(\infty) = & \theta_k +\ell_g(\infty) - \ell_g(\theta_k) = \theta_k+ \ell_{g |_{[ \theta_k, \theta_N]}} \geq \theta_k + \sum_{i=k+1}^N \left| g(\theta_i)-g(\theta_{i-1}) \right| \geq \\ \geq & \theta_k + \sum_{i=k+1}^N \left( \left| \gamma(t_i)-\gamma(t_{i-1}) \right| - \left| g(\theta_i)-\gamma(t_i) \right| - \left| g(\theta_{i-1})-\gamma(t_{i-1}) \right| \right) , \end{align*} and by \eqref{Eq bounds g gamma 2}: $$ \ell_g(\infty) \geq \theta_k + \sum_{i=k+1}^N \left| \gamma(t_i)-\gamma(t_{i-1}) \right| - 2 \varepsilon . $$ By virtue of \eqref{Eq length approximation 2}, we get \begin{align} \ell_g(\infty) \geq & \theta_k + \ell_{\gamma |_{[t_k,t_N]}} - 3 \varepsilon = \ell_\gamma(\infty) + \theta_k - t_k -3 \varepsilon .
\label{Z020} \end{align} A similar computation, based on \eqref{Eq bounds g gamma 2} and \eqref{Eq length approximation 2}, yields \begin{align} \notag \ell_g(\infty) = & \ell_g(\theta_k) + \ell_g(\infty) - \theta_k \geq \sum_{i=1}^k \left| g(\theta_i) - g(\theta_{i-1}) \right| + \ell_g(\infty) - \theta_k \geq \\ \label{Z021} \geq & \sum_{i=1}^k \left| \gamma(t_i) - \gamma(t_{i-1}) \right| - 2 \varepsilon + \ell_g(\infty) -\theta_k \geq \ell_\gamma(\infty) + \ell_\gamma(t_k) - \theta_k - 4 \varepsilon = \\ \notag = & \ell_\gamma(\infty) + t_k - \theta_k - 4 \varepsilon . \end{align} Joining \eqref{Z020} and \eqref{Z021}, one concludes \[ |\theta_k - t_k | < \ell_g(\infty) - \ell_\gamma(\infty) + 4 \varepsilon . \] Finally, we have the estimate \begin{align*} | \theta_k - t_k| = & \left| \ell_g (\theta_k) - \ell_g(t_k) \right| \geq \left| g (\theta_k) - g(t_k) \right| = \left| g \circ \alpha(t_k)- \gamma(t_k) + \gamma(t_k) - g(t_k) \right| \geq \\ \geq & \left| \gamma(t_k) - g(t_k) \right| - \left| g \circ \alpha(t_k)- \gamma(t_k) \right| \geq M - \frac \varepsilon N . \end{align*} Thus, \eqref{Z006} implies $\left\| g-\gamma \right\|_{L_\infty}=M < 5 \varepsilon $. \end{proof} It is somewhat surprising that the continuity of the canonical selector can be strengthened to continuity into $W_{1,p}^n[0,+\infty[$. \begin{theorem} \label{T continuity Frechet to W} The canonical selector $[g] \mapsto g \circ \ell_g^{\#}$ is a continuous map from the space of finite Fr\'echet curves in $\mathbb R^n$ provided with the strengthened Fr\'echet metric $d^+$ into the Sobolev space $W_{1,p}^n[0,+\infty[$, for each $p \in [1,+\infty[$. $\square$ \end{theorem} \begin{remark} \label{Rm W[0,infty[} Since we are dealing with finite Fr\'echet curves, the derivative of each canonical representative is supported in some compact interval.
Therefore, for each pair of finite Fr\'echet curves $[g],[h]$, there is some $T<+\infty$ such that \[ \left\| g \circ \ell_g^\# - h \circ \ell_h^\# \right\|_{W_{1,p}^n[0,+\infty[}=\left\| g \circ \ell_g^\# - h \circ \ell_h^\# \right\|_{W_{1,p}^n[0,T]} . \] However, we cannot fix a priori one such $T$ for every finite $[g],[h]$. $\square$ \end{remark} \begin{proof} Fix $[\gamma]$, a finite Fr\'echet curve with canonical representative $\gamma$. Let a sequence of finite Fr\'echet curves $\{ [ \gamma_i] \}_{i \in \mathbb N}$, with canonical representatives $\gamma_i$, $i \in \mathbb N$, converge to $[\gamma]$: $\lim\limits_{i \rightarrow \infty} d^+ \left( [\gamma_i], [\gamma] \right) = 0$. Since the lengths of $\gamma, \gamma_i, \ i \in \mathbb N$, are bounded by some $T$, the interval $[0,T]$ contains the supports of $\dot \gamma, \ \dot \gamma_i$, $i \in \mathbb N$. According to Proposition \ref{P continuity Frechet-to-length}, $\lim\limits_{i \rightarrow \infty} \|\gamma_i -\gamma\|_{L^n_\infty[0,T]} = 0$. We wish to prove that $\lim\limits_{i \rightarrow \infty} \left\| \dot \gamma_i - \dot \gamma \right\|_{L^n_p[0,T]}=0,$ and hence \[ \lim\limits_{i \rightarrow \infty} \|\gamma_i -\gamma\|_{W^n_{1,p}[0,T]} = 0 . \] First, we show that $\dot \gamma_i$ converges to $\dot \gamma$ in the weak$^*$ topology of $L^n_\infty[0,T]$. Indeed, seeing $\dot \gamma_i$ as a functional on $L^n_1[0,T]$, we note that \[ \forall t \in [0,T], \ \forall v \in \mathbb R^n: \quad \langle \gamma_i(t), v \rangle = \int_0^t \langle \dot \gamma_i(s),v \rangle ds = \left\langle \dot \gamma_i, v \chi_{[0,t]}\right\rangle \] is the result of the action of the functional $\dot \gamma_i$ on the vector-function $s \mapsto v \chi_{[0,t]}(s)$.
Since the space of linear combinations of the functions $v \chi_{[0,t]}(\cdot)$ is dense in $L^n_1[0,T]$, and the $L_\infty$-norms of $\dot \gamma_i$ are bounded by $1$, we conclude that \[ \lim_{i \rightarrow \infty} \int_0^T \langle \dot \gamma_i , \varphi \rangle dt = \int_0^T \langle \dot \gamma, \varphi \rangle dt , \qquad \forall \varphi \in L^n_1[0,T]. \] Since $L^n_q[0,T] \subset L^n_1[0,T]$, $\forall q \in [1,+\infty]$, this shows that $\dot \gamma_i \rightharpoondown \dot \gamma$ weakly in $L^n_p[0,T]$ for every $p \in ]1, +\infty[$. Note that from the $d^+$-convergence of $\gamma_i$ to $\gamma$, and since $\left| \dot \gamma_i \right| \equiv 1$ a.e. on the support of $\dot \gamma_i$, it follows that \[ \lim \left\| \dot \gamma_i \right\|_{L^n_p[0,T]} = \lim \ell_{\gamma_i}(\infty)^{1/p} = \ell_{\gamma}(\infty)^{1/p} = \left\| \dot \gamma \right\|_{L^n_p[0,T]} \qquad \forall p \in ]1,+\infty[. \] Therefore, the Radon-Riesz theorem \cite{Riesz-Nagy} guarantees that \[ \lim \left\| \dot \gamma_i - \dot \gamma \right\|_{L^n_p[0,T]} =0 \qquad \forall p \in ]1,+\infty[ , \] which implies $\lim \left\| \dot \gamma_i - \dot \gamma \right\|_{L^n_1[0,T]} =0$. \end{proof} \subsection{Fr\'echet curves in space-time} \label{SS Frechet curves in space-time} Let $\mathcal Y_n$ denote the set of absolutely continuous functions $(\theta,y):[0,+\infty[ \mapsto \mathbb R^{1+n}$ such that \begin{align} \label{Eq time monotonicity} & \theta (0) =0, \qquad \dot \theta (t) \geq 0 \quad \text{a.e. } t \geq 0 . \\ \label{Eq infinite length} & \ell_{(\theta,y)}(\infty) = + \infty . \end{align} The first coordinate $\theta$ represents time. Thus, each $(\theta,y) \in \mathcal Y_n$ is a parameterization of a curve in space-time, defined in the time interval $[0,\theta(\infty)[$. The condition \eqref{Eq time monotonicity} reflects the fact that time should be a monotonically increasing variable. We don't require it to be strictly increasing because we are interested in jumps and impulses, i.e., processes that evolve instantaneously.
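The following one-dimensional sketch (with $n=1$ and functions chosen purely for illustration) shows how an instantaneous jump is encoded in $\mathcal Y_1$: take $\theta(t)=\max\{t-1,0\}$ and $y(t)=\min\{t,1\}$. The pair $(\theta,y)$ satisfies \eqref{Eq time monotonicity} and \eqref{Eq infinite length}, the state $y$ moves from $0$ to $1$ while the time $\theta$ stands still on $[0,1]$, and $\theta^{\#}(t)=t+1$ yields $y \circ \theta^{\#}(t)=1$ for every $t \geq 0$: seen in real time, the state jumps instantaneously to $1$ at the initial instant.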
The condition \eqref{Eq infinite length} means that the time interval $[0,\theta(\infty)[$ is maximal, that is, the curve parameterized by $(\theta, y)$ cannot be prolonged beyond the time $\theta(\infty) \in [0,+\infty]$. For each $T >0$, let $\mathcal Y_{n,T}$ be the set of all $(\theta,y) \in \mathcal Y_n$ such that $\theta(\infty ) >T$, i.e., the set of all absolutely continuous parameterizations of curves in space-time, which are well defined on the compact time interval $[0,T]$. For each $(\theta,y) \in \mathcal Y_n $, define $(\theta_T, y_T)$ as \begin{equation} \label{Eq (vT,yT)} \begin{array}{ll} (\theta_T,y_T)(t) = (\theta , y )(t) & \text{for } t \in [0, \theta^\#(T) [, \smallskip \\ (\theta_T,y_T)(t) = \left(T , y(\theta^\#(T))\right) & \text{for } t \geq \theta^\#(T) ; \end{array} \end{equation} in particular, $(\theta_T,y_T)$ coincides with $ (\theta , y ) $ if $\theta^\#(T)=+\infty$. Now we introduce a family of semimetrics in $\mathcal Y_n$, $\{ \rho_T \}_{T \in ]0,+\infty[}$, defined as \[ \rho_T \left((\theta,y), (\tilde \theta,\tilde y) \right) = \left\| (\theta_T, y_T)- ( \tilde \theta_T, \tilde y_T) \right\|_{L_\infty[0,+\infty[} . \] Each $\rho_T$ becomes a metric if we don't distinguish between $(\theta,y),(\tilde \theta, \tilde y) \in \mathcal Y_n$ such that $\theta^{\#}(T)= \tilde \theta^{\#}(T)$ and $(\theta, y)$ coincides with $(\tilde \theta, \tilde y)$ in $[0,\theta^{\#}(T)[$. \begin{convention} \label{Cv extension Y} When dealing with the metric $\rho_T$, we identify $(\theta,y) \in \mathcal Y_n$ with the corresponding $(\theta_T,y_T)$, defined in \eqref{Eq (vT,yT)}. $\square$ \end{convention} Let $\mathcal F_n$ and $\mathcal F_{n,T}$ be the spaces of Fr\'echet curves in $\mathbb R^{1+n}$ corresponding to $\mathcal Y_n$ and $\mathcal Y_{n,T}$, that is \[ \mathcal F_n = \left\{ [(\theta,y)]: (\theta,y ) \in \mathcal Y_n \right\} , \qquad \mathcal F_{n,T} = \left\{ [(\theta,y)]: (\theta,y ) \in \mathcal Y_{n,T} \right\} .
\] Due to condition \eqref{Eq infinite length}, $\mathcal F_n$ is a set of infinite Fr\'echet curves. Each semimetric $\rho_T$ induces a semimetric $d_T$ in the space $\mathcal F_n$: \begin{align*} & d_T \left( [(\theta_1,y_1)], [(\theta_2,y_2)] \right) = \\ = & \inf \left\{ \rho_T \left( (\tilde \theta_1,\tilde y_1), (\tilde \theta_2,\tilde y_2) \right): (\tilde \theta_i,\tilde y_i) \in [(\theta_i,y_i)], \ i=1,2 \right\} . \end{align*} By Convention \ref{Cv extension Y}, $d_T$ becomes a metric; $d_T\left( [(\theta_1, y_1)], [(\theta_2, y_2)] \right)$ coincides with the Fr\'echet distance between the segments $[(\theta_1, y_1)] \cap \left( [0,T] \times \mathbb R^n \right)$ and $[(\theta_2, y_2)] \cap \left( [0,T] \times \mathbb R^n \right)$, for any $[(\theta_1, y_1)], [(\theta_2, y_2)] \in \mathcal F_{n}$. When dealing with the metric $d_T$ we identify each $[(\theta,y)] \in \mathcal F_n$ with the segment $[(\theta, y)] \cap \left( [0,T] \times \mathbb R^n \right)$. \medskip Consider an absolutely continuous function $x:[0,T[ \mapsto \mathbb R^n$, where $T \in ]0,+\infty]$ is maximal in the sense that $x$ does not admit an absolutely continuous extension onto any interval $[0,\hat T[$ with $\hat T>T$. Then, the function $t \in [0,T[ \mapsto (t,x(t)) \in \mathbb R^{1+n}$ is an element of $ \mathcal Y_n$ and hence $[(t,x)]$ is an element of $\mathcal F_n$. The correspondences $x \mapsto (t,x)$ and $x \mapsto [(t,x)]$ are one-to-one. Conversely, Lemma \ref{L orientation} implies that for every $[(\theta, y)] \in \mathcal F_n$, the function $y \circ \theta^\#: \left[ 0, \theta (\infty) \right[ \mapsto \mathbb R^n $ does not depend on the particular representative of $[(\theta,y)]$. In particular, $y \circ \theta^\# = x$ for every $(\theta, y) \in [(t,x)]$. Following this argument, we identify each absolutely continuous function $x $ defined on a maximal interval with the corresponding Fr\'echet curve $[(t,x)]$.
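A short computation (with a function chosen purely for illustration) makes this identification concrete: for $x(t)=t^2$ and the reparameterization $\alpha(s)=s^2 \in \mathcal T$, the pair $(\theta,y)(s)=(s^2,s^4)$ is a representative of $[(t,x)]$; here $\theta^{\#}(t)=\sqrt t$, so $y \circ \theta^{\#}(t)=t^2=x(t)$, as claimed.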
For every $[(\theta, y)] \in \mathcal F_n$, the function $y \circ \theta^\#$ has bounded variation on compact subintervals of $[0, \theta (\infty)[$. Due to Lemma \ref{L AC reparameterization}, $y \circ \theta^\#$ is absolutely continuous if and only if the set $\left\{ t \geq 0: \dot \theta (t) =0 , \ \dot y(t) \neq 0 \right\}$ has zero Lebesgue measure. The following proposition shows that every function of locally bounded variation can be ``lifted'' to a Fr\'echet curve by virtue of the transformation $[(\theta,y)] \mapsto y \circ \theta^\#$. \begin{proposition} \label{P lift of functions of bounded variation} If a function $x:[0,T[ \mapsto \mathbb R^n$ has finite variation on every compact subinterval of $[0,T[$, then there exists $(\theta,y) \in \mathcal Y_n$ such that $x(t) = y \circ \theta^\#(t)$ for every continuity point $t \in [0,T[$ of $x$. $\square$ \end{proposition} \begin{proof} See Appendix (Subsection \ref{SP P lift of functions of bounded variation}). \end{proof} Note that the mapping $[(\theta,y)] \mapsto y \circ \theta^\#$ is not one-to-one. Indeed, for each function of locally bounded variation $x:[0,T[ \mapsto \mathbb R^n$, there are infinitely many $[(\theta,y)] \in \mathcal F_n$ such that $y\circ \theta^\#=x$. To see this, consider the following example. \begin{example} \label{Ex lift of discontinuous curve} Consider the discontinuous function $x:[0,+\infty[ \mapsto \mathbb R^2$ defined by \[ \begin{array}{ll} x_1(t)=x_2(t)=0, & \text{for } t<1, \smallskip \\ x_1(t)=0, \ \ x_2(t)=1, & \text{for } t\geq 1.
\end{array} \] Every Fr\'echet curve $[(\theta,y)]$ in space-time consisting of the concatenation of the following arcs \begin{itemize} \item[{\bf i.}] the segment of straight line from the point $(0,0,0)$ to the point $(1,0,0)$; \item[{\bf ii.}] an absolutely continuous Fr\'echet curve from the point $(1,0,0)$ to the point $(1,0,1)$, contained in the plane $\left\{ (1,x_1,x_2): (x_1,x_2) \in \mathbb R^2 \right\}$; \item[{\bf iii.}] the ray $\left\{ (1+t,0,1), t \geq 0 \right\}$ \end{itemize} satisfies $y \circ \theta^\# (t) =x(t) \ \forall t \geq 0$ (see Figure \ref{Fg lifts of function}). $\square$ \begin{figure}[h] \vspace{-0.7cm} \begin{center} \begin{minipage}[t]{12cm} \includegraphics*[height=10cm]{f901.pdf} \vspace{-2.8cm} \hspace{10cm} $t$ \vspace{-3.4cm} \hspace{8.5cm} $x_1$ \vspace{-3.4cm} \hspace{3.2cm} $x_2$ \vspace{5.9cm} \hspace{4.5cm} $t=1$ \end{minipage} \end{center} \vspace{-1.5cm} \caption{ \label{Fg lifts of function} The discontinuous function $x$ (solid line) and some Fr\'echet curves satisfying $y\circ \theta ^\#=x$ (different dashed lines). } \end{figure} \end{example} \begin{remark} \label{Rm graphs} The \emph{graph} of a function $x:[0,T[ \mapsto \mathbb R^n$ is the set $\Gamma_x = \left\{ (t,x(t)): t \in [0,T[ \right\}$. If $x$ is absolutely continuous, then the function $t \mapsto (t,x(t))$ is an absolutely continuous parameterization of $\Gamma_x$. Thus, Fr\'echet curves in space-time can be seen as generalizations of absolutely continuous graphs to the class of functions of locally bounded variation. $\square$ \end{remark} Fr\'echet curves in space-time coincide with {\it graph completions} in the terminology of \cite{BressanRampazzo88}. \subsection{Fr\'echet generalized controls} \label{SS Generalized controls} By \emph{ordinary control}, we understand any locally integrable function $u: [0,+\infty[ \mapsto \mathbb R^k$.
The control system \eqref{affine system} can be represented as \begin{equation} \label{Eq affine system U} \dot x(t)= f(x(t)) + G(x(t)) \dot U (t) , \end{equation} where $U(t) = \int _0^t u(s) ds $. The function $t \mapsto (t,U(t))$ is a representative of the Fr\'echet curve $[(t,U)]$. By construction, $U(0)=0$; besides, local integrability of $u$ guarantees that $(t,U) \in \mathcal Y_{k,T}$, and thus $[(t,U)] \in \mathcal F_{k,T}$, for every $T\in ]0,+\infty[$. For each $T \in ]0,+\infty[$, we introduce the set \[ \mathcal Y_{k,T}^0 = \left\{ (V,W) \in \mathcal Y_{k,T}: W(0)=0 \right\} , \] and define the space of \emph{Fr\'echet generalized controls} on the interval $[0,T]$ to be the set of Fr\'echet curves whose representatives belong to $\mathcal Y_{k,T}^0$, that is \[ \mathcal F_{k,T}^0 = \{ [(V,W)] \in \mathcal F_{k,T}: W(0)=0 \} . \] According to Convention \ref{Cv extension Y}, we identify each generalized control $[(V,W)] \in \mathcal F_{k,T}^0$ with its segment $[(V,W)] \cap \left( [0,T] \times \mathbb R^k \right)$, and provide this space with the strengthened Fr\'echet metric: \begin{align*} d_T^+\left( [(V_1,W_1)], [(V_2,W_2)] \right) = & d_T \left( [(V_1,W_1)], [(V_2,W_2)] \right) + \\ & + \left| \ell_{(V_1,W_1)} (V_1^{\#}(T)) - \ell_{(V_2,W_2)} (V_2^{\#}(T)) \right| . \end{align*} A Fr\'echet curve $[(V,W)] \in \mathcal F_{k,T}^0$ coincides with an ordinary control in the interval $[0,T]$ if and only if the set $\{ t: \dot V(t) = 0 \ \text{and} \ \dot W(t) \neq 0 \}$ has zero Lebesgue measure. In that case, $U=W\circ V^\#$ is absolutely continuous, and the corresponding ordinary control is $u(t) = \frac{d}{dt}\left( W \circ V^\# \right)(t)$. The following proposition demonstrates that every generalized control can be approximated by sequences of ordinary controls.
\begin{proposition} \label{P density of ordinary controls} The space of ordinary controls $\left\{ \left[(t,\int_0^t u(s) ds ) \right] : u \in L_1^k[0,T] \right\} $ is dense in $\left( \mathcal F_{k,T}^0, d_T^+\right) $. $\square$ \end{proposition} \begin{proof} We introduce in $\mathcal Y_{k,T}$ a semimetric $\rho_T^+$: \begin{align} \notag \rho_T^+ \left( (V_1,W_1), ( V_2, W_2) \right) = & \rho_T \left( (V_1,W_1), ( V_2, W_2) \right) + \\ \label{Eq RhoPlus} & + \left| \ell_{(V_1,W_1)} (V_1^{\#}(T)) - \ell_{(V_2,W_2)} (V_2^{\#}(T)) \right| . \end{align} Fix arbitrary $(V,W ) \in \mathcal Y_{k,T}^0$. For each $\varepsilon>0$, let \[ V_\varepsilon(t) = \left\{ \begin{array}{ll} \frac{T}{\int_0^{ V^{\#}(T)} \max ( \dot V, \frac{\varepsilon}{T}) d\tau} \int_0^t \max ( \dot V, \frac{\varepsilon}{T}) d\tau , & \text{for } t \leq V^\#(T), \smallskip \\ T+t- V^\#(T) & \text{for } t > V^\#(T) . \end{array} \right. \] $V_\varepsilon $ admits an absolutely continuous inverse, and therefore $U_\varepsilon (t) = W \circ V_\varepsilon^{-1}(t)$ is absolutely continuous with $U_\varepsilon(0)=0$. One can check that $V_\varepsilon^{-1}(T)= V^{\#}(T) $, $V_\varepsilon$ converges to $V$ uniformly in $[0,V^\#(T)]$, as $\varepsilon \rightarrow 0^+$, and $$\lim\limits_{\varepsilon \rightarrow 0^+} \ell_{(V_\varepsilon,W)}(V^\#(T)) = \ell_{(V ,W)}(V^\#(T)) . $$ Therefore, $\lim\limits_{\varepsilon \rightarrow 0^+} \rho_T^+ \left( (V_\varepsilon,W), ( V, W) \right)=0 , $ which implies $ \lim\limits_{\varepsilon \rightarrow 0^+} d_T^+ \left( [(t,U_\varepsilon)], [( V, W)] \right)=0 .$ \end{proof} The space of generalized controls has the following compactness property: \begin{proposition} \label{P compactness} Every sequence of ordinary controls bounded in $L_1^k[0,T]$ admits a subsequence $\{u_i\}_{i \in \mathbb N}$ such that $\left\{ \left[ (t,U_i) \right] \right\}_{i \in \mathbb N}$ converges in $\left( \mathcal F_{k,T}^0, d_T^+ \right)$.
$\square$ \end{proposition} \begin{proof} Let a sequence $\left\{ u_i \in L_1^k[0,T] \right\}_{i \in \mathbb N}$ be such that \begin{equation} \label{Z005} \|u_i \|_{L_1^k[0,T]} \leq M, \qquad \forall i \in \mathbb N . \end{equation} For each $i \in \mathbb N$, let $(V_i,W_i)= \left( \ell_{(t,U_i)}^{-1},U_i \circ \ell_{(t,U_i)}^{-1} \right) $ be the canonical parameterization of $\left[ (t,U_i) \right]$. By \eqref{Z005}, $\ell_{(t,U_i)}(T) \leq M+T, \ \forall i \in \mathbb N$. The sequence $\left\{ (V_i,W_i) \right\}_{i \in \mathbb N}$ is uniformly bounded and equicontinuous in $[0,T+M]$. Therefore, by the Ascoli-Arzel\`a theorem, it admits a uniformly converging subsequence. It follows that the corresponding subsequence $\left\{ \left[ (V_{i_j},W_{i_j} ) \right] \right\}_{j \in \mathbb N}$ converges in $\left( \mathcal F_{k,T}^0, d_T^+ \right)$. \end{proof} Let us revisit Example \ref{Ex noninvolutive system}. \begin{example} \label{Ex noninvolutive system c1} Consider the controls $u^{i,\varepsilon}, \ i=1,2,3$ from Example \ref{Ex noninvolutive system}. Let $U^{i,\varepsilon}$ be the respective primitives. It can be shown that the limits \[ \left[ (V_i,W_i) \right] = \lim_{\varepsilon \rightarrow 0^+} \left[ (t,U^{i,\varepsilon}) \right] , \qquad i=1,2,3, \] exist in $\left( \mathcal F_{n,T}^0, d_T^+ \right)$, and differ on the segment from the point $(0,0,0) $ to the point $(0,1,1)$, as shown in Figure \ref{Fg generalized controls}. \begin{figure}[h] \vspace{-0.7cm} \begin{center} \begin{minipage}[t]{12cm} \includegraphics*[height=10cm]{f902.pdf} \vspace{-2.5cm} \hspace{10cm} $t$ \vspace{-2.3cm} \hspace{4.6cm} $U_1$ \vspace{-5.4cm} \hspace{1.2cm} $U_2$ \end{minipage} \end{center} \vspace{-1.5cm} \caption{ \label{Fg generalized controls} The generalized controls from Example \ref{Ex noninvolutive system c1}: $\left[ (V_1,W_1) \right]$ (solid line), $\left[ (V_2,W_2) \right]$ (dashed line), and $\left[ (V_3,W_3) \right]$ (dotted line).
} \end{figure} One can directly compute \begin{align*} & d_T^+ \left( \left[ (V_1,W_1) \right], \left[ (V_2,W_2) \right] \right) = d_T^+ \left( \left[ (V_3,W_3) \right], \left[ (V_2,W_2) \right] \right) = 2 - \frac{1}{\sqrt{2}}, \\ & d_T^+ \left( \left[ (V_1,W_1) \right], \left[ (V_3,W_3) \right] \right) = 1 . \ \square \end{align*} \end{example} The ability to characterize impulses by a path in space-time provides a ``resolution'' of an impulse, which takes place in zero time, as Example \ref{Ex noninvolutive system c1} illustrates. This feature is crucial for dealing with the case where the controlled vector fields in the system \eqref{affine system} do not commute. \subsection{Fr\'echet generalized paths} \label{SS Generalized trajectories} We call an \emph{ordinary path} any absolutely continuous function $x:[0,T[ \mapsto \mathbb R^n$, with $T \in ]0,+\infty]$ being maximal in the sense that $x$ does not admit an absolutely continuous extension onto any interval $[0,\hat T[\supset [0,T]$. Using the notation of Section \ref{SS Frechet curves in space-time}, $x \mapsto (t,x)$ and $x \mapsto [(t,x)]$ are one-to-one mappings from the space of ordinary paths into $\mathcal Y_n$ and $\mathcal F_n$, respectively. We identify each ordinary path $x$ with the corresponding Fr\'echet curve $\left[ (t,x) \right] \in \mathcal F_n$, and define the space of \emph{generalized paths} to be the space $\mathcal F_n$, provided with the metric $d_T$ (with $T\in ]0,+\infty[$ fixed). Since the metric $d_T$ is fixed, we follow Convention \ref{Cv extension Y} and identify each generalized path $[(\theta,y)] \in \mathcal F_n$ with its segment $[(\theta,y)] \cap \left( [0,T] \times \mathbb R^n \right)$. In particular, any ordinary path $x:[0,\tilde T[ \mapsto \mathbb R^n$, with $\tilde T>T$, is identified with its restriction to the interval $[0,T]$.
According to Section \ref{SS Frechet curves in space-time}, $\mathcal F_{n,T}$ is the space of generalized paths defined on the time interval $[0,T]$ and extendable beyond $[0,T]$. Taking into account the conventions above, for any ordinary path $x$, $(t,x) $ is a representative of some $[(\theta,y)] \in \mathcal F_{n,T}$ if and only if $x \in AC^n[0,T]$. That is, we identify the space of ordinary paths that can be extended beyond $[0,T]$ with the set \[ \left\{ [(t,x)]: x \in AC^n[0,T] \right\} \subset \mathcal F_{n,T} . \] For generalized paths, we have an analogue of Proposition \ref{P density of ordinary controls}: \begin{proposition} \label{P density of ordinary trajectories} The space of ordinary paths defined in the interval $[0,T]$, $\{[(t,x)] : x \in AC^n[0,T] \}$, is dense in $\left( \mathcal F_{n,T}, d_T^+ \right)$ (and therefore, it is also dense in $\left( \mathcal F_{n,T}, d_T \right)$). $\square$ \end{proposition} \begin{proof} A trivial adaptation of the proof of Proposition \ref{P density of ordinary controls}. \end{proof} \section{The input-to-trajectory map} \label{S input-to-trajectory map} This section is centered on Theorem \ref{T extension of input-to trajectory map}, which establishes the existence and uniqueness of a continuous extension of the input-to-trajectory map onto the space of Fr\'echet generalized controls. The extended map takes values in the space of generalized paths. \medskip We use capital letters for the elements of $\mathcal Y_{k,T}^0$ and lowercase letters for their first-order derivatives (i.e., $(v,w)=(\dot V, \dot W)$, with $(V,W) \in \mathcal Y_{k,T}^0$). For each locally integrable $u :[0,+\infty [ \mapsto \mathbb R^k$, let $x_u$ denote the corresponding trajectory of the system \eqref{affine system}, starting at the point $x_u(0)=x^0$ and defined on a maximal interval.
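To illustrate the maximality of the interval of definition (with data chosen purely for illustration), take $n=k=1$, $f \equiv 0$, $G(x)=x^2$ and the ordinary control $u \equiv 1$ in \eqref{affine system}: for $x^0=1$, the trajectory is $x_u(t)=1/(1-t)$, defined on the maximal interval $[0,1[$, since $x_u(t) \rightarrow +\infty$ as $t \rightarrow 1^-$.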
We will show that the input-to-trajectory mapping $u \mapsto x_u$ defines a unique mapping $[(t,U)] \mapsto [(t,x_u)]$, which in its turn admits a unique continuous extension onto $\mathcal F_{k,T}^0$. The intuition behind this is the following. Fix an ordinary control $u$, and suppose that $x_u$ is well defined in the interval $[0,T]$. Pick a representative $(V,W) \in [(t,U)]$, and let $y=x_u \circ V$. It follows that \begin{align} \label{Eq reduced system y} y(0)=x^0, \qquad \dot y(t) = f(y(t))v(t) + G(y(t))w(t) \quad \text{a.e. } t \in [0, V^\#(T)]. \end{align} Thus, if we denote by $y_{(v,w)}$ the unique trajectory of \eqref{Eq reduced system y}, then one may expect that $(V,y_{(v,w)}) \in [(t,x_u)]$ for every $(V,W) \in [(t,U)]$. The following proposition shows that this is also true for generalized controls, thus providing a ``natural'' extension of the input-to-trajectory map. \begin{proposition} \label{P parameterization invariance of trajectories} The mapping \begin{align} \label{Eq input-to-trajectory map} [(V,W)] \in \mathcal F_{k,T}^0 \mapsto \left[ (V,y_{(v,w)}) \right] \in \mathcal F_n \end{align} is properly defined, i.e., it does not depend on the choice of the representative $(V,W) \in [(V,W)]$. $\square$ \end{proposition} \begin{proof} Fix a generalized control $[(\hat V,\hat W)] \in \mathcal F_{k,T}^0$ with canonical representative $(\hat V, \hat W)$. For any $(V,W) \in [(\hat V, \hat W)]$, \[ (\hat V, \hat W) = (V,W) \circ \ell_{(V,W)}^{\#}, \qquad (\hat v, \hat w) = \frac{(v,w)}{\sqrt{v^2+|w|^2}} \circ \ell_{(V,W)}^{\#} , \] and it follows that \[ \left( V, y_{(v,w)} \right) \circ \ell_{(V,W)}^{\#} = \left( V\circ \ell_{(V,W)}^{\#} , y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) = \left( \hat V , y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) .
\] Since $\{ t: (v(t),w(t))=0 \wedge \dot y_{(v,w)}(t) \neq 0 \} $ has zero Lebesgue measure, Lemma \ref{L AC reparameterization} guarantees that the function $t \mapsto y_{(v,w)} \circ \ell_{(V,W)}^{\#}(t)$ is absolutely continuous and \begin{align*} & \frac d {dt}\left( y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) = \left( \dot y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) \frac{1}{\sqrt{v^2+|w|^2}} \circ \ell_{(V,W)}^{\#} = \\ = & f\left( y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) \frac{v}{\sqrt{v^2+|w|^2}} \circ \ell_{(V,W)}^{\#} + G\left( y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) \frac{w}{\sqrt{v^2+|w|^2}} \circ \ell_{(V,W)}^{\#} = \\ = & f\left( y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) \hat v + G\left( y_{(v,w)} \circ \ell_{(V,W)}^{\#} \right) \hat w ; \end{align*} hence $y_{(\hat v, \hat w)}= y_{(v,w)}\circ \ell_{(V,W)}^{\#} $, and therefore $\left[(V,y_{(v,w)}) \right] = \left[ (\hat V, y_{(\hat v, \hat w)}) \right]$. \end{proof} To formulate the result on continuity of the input-to-trajectory map, we introduce the set \[ \mathcal W_T = \left\{ [(V,W)] \in \mathcal F_{k,T}^0: [(V, y_{(v,w)})] \in \mathcal F_{n,T} \right\} \] of generalized controls such that the generalized trajectory assigned to them by \eqref{Eq input-to-trajectory map} is well defined on the time interval $[0,T]$. The following is a stronger version of Theorem 2, Corollary 1 in \cite{BressanRampazzo88}. \begin{theorem} \label{T extension of input-to trajectory map} The set $\mathcal W_T$ is an open subset of $\left( \mathcal F_{k,T}^0, d_T^+ \right)$. The mapping $[(V,W)] \in \mathcal W_T \mapsto \left[ (V,y_{(v,w)}) \right] \in \mathcal F_{n,T}$ is the unique extension of the input-to-trajectory map that is continuous with respect to the metrics $d_T^+$ in the domain and in the image.
$\square$ \end{theorem} \begin{proof} The transformation $[( V, W)] \mapsto \left[ ( V,y_{( v, w)}) \right] $ can be decomposed into a chain of mappings \begin{align} \notag [(V, W)] \in \mathcal F_{k,T}^0 \mapsto & (V, W) \in W_{1,1}^{1+k}[0,+\infty[ \mapsto \\ \mapsto & \label{Eq transformation chain} (V,y_{(v,w)}) \in \mathcal Y_{n,T} \mapsto [(V,y_{(v,w)})] \in \mathcal F_{n,T} , \end{align} where the first transformation is the canonical selector, which is continuous by Theorem \ref{T continuity Frechet to W}. We endow the space $\mathcal Y_{n,T}$ with the metric $\rho_T^+$ defined in \eqref{Eq RhoPlus}. By definition, $d_T^+\left( [(\theta_1,y_1)],[(\theta_2,y_2)] \right) \leq \rho_T^+ \left( (\theta_1,y_1) , (\theta_2,y_2) \right) $ for every $(\theta_1,y_1) $, $(\theta_2,y_2) \in \mathcal Y_{n,T}$. Hence, the last transformation in \eqref{Eq transformation chain} is continuous. Fix a generalized control $[(\hat V,\hat W)] \in \mathcal W_T$. Under Conventions \ref{Cv extension canonical} and \ref{Cv extension Y}, the support of $(\hat v,\hat w)$ is contained in the compact interval $[0,\hat V^\#(T)]$, and, for every $\varepsilon>0$ and every $[( V, W)] \in \mathcal F_{k,T}^0 $ such that $ d_T^+ \left( [( V, W)], [(\hat V, \hat W)] \right) < \varepsilon $, the support of $( v, w)$ is contained in $[0, \hat V^\#(T)+\varepsilon]$. By a standard continuity result, there is some $\varepsilon>0$ such that the trajectory of the system \eqref{Eq reduced system y} is well defined on the interval $[0,\hat V^\#(T)+\varepsilon]$ for every $( V, W) \in W_{1,1}^{1+k}[0,+\infty[$ such that $\left\| ( v, w) - (\hat v, \hat w) \right\|_{L_1^{1+k}[0,+\infty[} < \varepsilon$. Since the input-to-trajectory map of system \eqref{Eq reduced system y}, $(v,w) \mapsto y_{(v,w)}$, is continuous with respect to the norms $\| \cdot \|_{L_1^{1+k}[0, \hat V^\#(T) + \varepsilon]}$ in the domain and $\| \cdot \|_{W_{1,1}^{1+n}[0, \hat V^\#(T) + \varepsilon]}$ in the image, the Theorem follows. 
\end{proof} We compare Theorem \ref{T extension of input-to trajectory map} with the corresponding result (Theorem 2, Corollary 1) of \cite{BressanRampazzo88}. There, the generalized controls have equibounded variations and the metric in the space of impulses is (in our terminology) the Fr\'echet metric. The topology in the space of generalized trajectories is defined by the Hausdorff metric on the graphs of generalized trajectories (i.e., in the images of the curves $(\theta (t), y(t) )$, $t \in [0, \theta^\#(T)]$). By introducing the strengthened Fr\'echet metric $d^+$, we automatically require convergence of the full variations, and therefore guarantee equiboundedness of converging sequences of inputs. The main difference lies in the fact that we prove continuity of the input-to-trajectory map when the space of generalized trajectories is provided with the $d^+$ metric instead of the weaker Hausdorff metric. Continuity of the canonical selector (Theorem \ref{T continuity Frechet to W}) is essential for this result. To see that the topology generated by $d^+$ is strictly stronger than the topology generated by the Hausdorff metric, consider the following simple example: \begin{example} \label{Ex Hausdorff} Consider two curves $[(\theta, y)],[(\tilde \theta, \tilde y)] \in \mathcal F_{2,T} $, with canonical elements $(\theta, y), (\tilde \theta, \tilde y)$. Suppose that $(\theta(t), y(t))= (\tilde \theta(t), \tilde y(t))$ for $t\geq 2\pi $ and \[ (\theta(t), y(t))= ( 0, \cos t, \sin t ) , \ \ (\tilde \theta(t), \tilde y(t))= ( 0, \cos (2\pi -t), \sin (2\pi- t) ) \quad \text{for } t \in [0,2\pi] . \] It is simple to check that the Hausdorff distance between $[(\theta, y)]$ and $[(\tilde \theta, \tilde y)]$ is zero but $d\left( [(\theta, y)],[(\tilde \theta, \tilde y)] \right) = d^+\left( [(\theta, y)],[(\tilde \theta, \tilde y)] \right) = 2$. 
$\square$ \end{example} To illustrate the extended input-to-trajectory mapping, we return to Example~\ref{Ex noninvolutive system}: \begin{example} \label{Ex noninvolutive system c2} Consider Example \ref{Ex noninvolutive system}. By Theorem \ref{T extension of input-to trajectory map}, the input-to-trajectory map is well defined. To compute the generalized trajectories corresponding to the generalized controls in the example, notice that here the system \eqref{Eq reduced system y} reduces to \[ \dot y_1=w_1, \quad \dot y_2 = w_2, \quad \dot y_3 = y_2 w_1 . \] Let $(V_i,W_i), \ i=1,2,3$ be the parameterizations by arc length of the generalized controls $[(V_i,W_i)], \ i=1,2,3$ in Example \ref{Ex noninvolutive system c1}. Then, \begin{align*} & (v_1,w_1)(t)= (0,1,0) \chi_{[0,1]}(t) + (0,0,1) \chi_{]1,2]}(t) + (1,0,0) \chi_{]2,+\infty[}(t), \\ & (v_2,w_2)(t)= \left(0,\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right) \chi_{[0,\sqrt{2}]}(t) + (1,0,0) \chi_{]\sqrt{2},+\infty[}(t), \\ & (v_3,w_3)(t)= (0,0,1) \chi_{[0,1]}(t) + (0,1,0) \chi_{]1,2]}(t) + (1,0,0) \chi_{]2,+\infty[}(t) . \end{align*} Therefore, \begin{align*} & (V_1,y_{(v_1,w_1)})(t) = (0,t,0,0) \chi_{[0,1]}(t) + (0,1,t-1,0) \chi_{]1,2]}(t) + (t-2,1,1,0) \chi_{]2,+\infty[}(t), \\ & (V_2,y_{(v_2,w_2)})(t) = \left(0,\frac{t}{\sqrt{2}},\frac{t}{\sqrt{2}}, \frac{t^2}{4}\right) \chi_{[0,\sqrt{2}]}(t) + \left(t-\sqrt{2},1,1,\frac 1 2 \right) \chi_{]\sqrt{2},+\infty[}(t) , \\ & (V_3,y_{(v_3,w_3)})(t) = (0,0,t,0) \chi_{[0,1]}(t) + (0,t-1,1,t-1) \chi_{]1,2]}(t) + (t-2,1,1,1) \chi_{]2,+\infty[}(t) \end{align*} are parameterizations of the generalized trajectories corresponding to $[(V_i,W_i)], \ i=1,2,3$, respectively (see Figure \ref{Fg generalized trajectories}). 
\begin{figure}[h] \vspace{-0.3cm} \begin{center} \begin{minipage}[t]{12cm} \includegraphics*[height=10cm]{f903.pdf} \vspace{-1.2cm} \hspace{8.5cm} $x_1$ \vspace{-2.9cm} \hspace{4.8cm} $x_2$ \vspace{-6.5cm} \hspace{1.2cm} $x_3$ \end{minipage} \end{center} \vspace{-0.9cm} \caption{ \label{Fg generalized trajectories} The jumps of the generalized trajectories from Example \ref{Ex noninvolutive system c2}: $\left[ (V_1,y_{(v_1,w_1)}) \right]$ (solid line), $\left[ (V_2,y_{(v_2,w_2)}) \right]$ (dashed line), and $\left[ (V_3,y_{(v_3,w_3)}) \right]$ (dotted line). } \end{figure} Notice that \begin{align*} & y_{(v_1,w_1)} \circ V_1^\#(t) =(1,1,0), \qquad y_{(v_2,w_2)} \circ V_2^\#(t) =\left(1,1,\frac 1 2 \right), \\ & y_{(v_3,w_3)} \circ V_3^\#(t) =(1,1,1), \end{align*} for every $t >0$. $\square$ \end{example} To compare our approach with the one developed by Miller and Rubinovich \cite{MiRu}, we formulate the following result, which shows that every generalized trajectory of system \eqref{affine system}, as defined in \cite{MiRu}, coincides with some $y_{(v,w)}\circ V^\#$, with $(V,W) \in \mathcal Y^0_{k,T}$. \begin{proposition} \label{P Miller trajectories} Consider a sequence of ordinary controls $\left\{ u_i \in L_1^k[0,T] \right\}_{i \in \mathbb N} $, equibounded in $L_1[0,T]$-norm, such that the corresponding sequence of trajectories $\left\{ x_{u_i} \right\}_{i \in \mathbb N} $ is equibounded in $L_\infty[0,T]$-norm. There is a subsequence $\{ u_{i_j} \}$ such that $\left[ ( t,U_{i_j}) \right] $ converges towards some $[(V,W)] \in \mathcal F_{k,T}^0$, and $x_{u_{i_j}} (t) $ converges pointwise to $y_{(v,w)} \circ V^\#(t)$ in $[0,T]$, with the possible exception of a countable set of points. $\square$ \end{proposition} \begin{proof} Proposition \ref{P compactness} guarantees the existence of a convergent subsequence $\left\{ \left[ ( t,U_i) \right] \right\}_{i \in \mathbb N} $. 
Thus, we can assume without loss of generality that $\left\{ \left[ ( t,U_i) \right] \right\}_{i \in \mathbb N} $ converges to some $[(V,W)] \in \mathcal F_{k,T}^0$. Let $(V,W)$ be the canonical parameterization of $[(V,W)]$, and $(V_i,W_i)$ be the canonical parameterization of $\left[ ( t,U_i) \right]$, for $i \in \mathbb N$. Notice that $x_{u_i} = y_{(v_i,w_i)} \circ V_i^\# $ for every $i \in \mathbb N$. Since the sequence $\left\{ x_{u_i} \right\}_{i \in \mathbb N} $ is equibounded, we can assume that the vector fields $f$, $g_1, \ldots , g_k$ have compact support. In that case, the sequence $\left\{ (V_i,y_{(v_i,w_i)} ) \right\}$ is uniformly Lipschitz and converges uniformly to $(V,y_{(v,w)})$ in the interval $\left[ 0, 1 + V^\#(T)\right]$. Writing $y_i = y_{(v_i,w_i)}$ and $y = y_{(v,w)}$, we obtain, for any $t \in [0,T]$: \begin{align*} \left| y_i \circ V_i^\#(t) - y \circ V^\#(t) \right| \leq & \left| y_i \circ V_i^\#(t) - y_i \circ V^\#(t) \right| + \left| (y_i-y) \circ V^\#(t) \right| \leq \\ \leq & C \left| V_i^\#(t) - V^\#(t) \right| + \left\| y_i-y \right\|_{L_\infty\left[0,1 + V^\#(T)\right]} . \end{align*} Since \[ \liminf_{i \rightarrow \infty} V_i^\# (t) \geq V^\#(t^-) , \quad \limsup_{i \rightarrow \infty} V_i^\# (t) \leq V^\#(t^+), \qquad \forall t \in [0,T], \] we see that $\lim\limits_{i \rightarrow \infty} x_{u_i}(t) = y \circ V^\#(t)$, with exceptions only at the points of discontinuity of $V^\#$. This set is at most countable, and the result follows. \end{proof} \section{Auxiliary problem} \label{S Problem reduction} Coming back to the optimal control problem \eqref{lag}--\eqref{boundary conditions}, we show that parameterization by the arc length of the curves $[(t,U)]$ leads to an equivalent reformulation. Canonical parameterizations were introduced in \cite{BressanRampazzo88} for Cauchy problems. Here we extend the analysis to Lagrange problems \eqref{lag}--\eqref{boundary conditions}. A similar reparameterization was introduced in \cite{MiRu}. 
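As a numerical aside (not part of the formal development), the arc-length reparameterization can be checked on a toy instance. The sketch below uses a hypothetical constant scalar control $u \equiv 2$ for the system $\dot x = u$: it verifies that the reparameterized control pair $(v,w)$ lies on the unit sphere and that the trajectory endpoint is preserved under the time change.

```python
import numpy as np

# Hypothetical toy instance: scalar system xdot = u on [0,1] with constant control u = 2.
u = 2.0
speed = np.sqrt(1.0 + u**2)          # sqrt(1 + |u|^2), constant here

# Arc-length function tau_u(t) = t * speed and the new (free) horizon tau_u(1).
tau = lambda t: t * speed
T = tau(1.0)

# Reparameterized controls (v, w) = (1, u) / sqrt(1 + |u|^2).
v, w = 1.0 / speed, u / speed
assert abs(v**2 + w**2 - 1.0) < 1e-12      # controls lie on the unit sphere

# Original trajectory x_u(t) = u * t; the reparameterized trajectory solves
# y' = f(y) v + G(y) w = w (here f = 0, G = 1), so y(s) = w * s.
x_end = u * 1.0                            # endpoint of x_u at t = 1
y_end = w * T                              # endpoint of y at s = tau_u(1)
assert abs(x_end - y_end) < 1e-12          # same endpoint after reparameterization
print(T, v, w, y_end)
```

Note that $\int_0^T v \, dt = vT = 1$ also holds here, matching the normalization constraint that appears in the reduced problem below.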
\medskip For each ordinary control $u \in L_1^k[0,1]$, the length function $\ell_{(t,U)}$ coincides with the function $\tau_u:[0,+\infty[ \mapsto [0,+\infty[$, given by \begin{equation} \label{Eq arc length function U} \tau_u(t) = \int_0^t \sqrt{1+ |u(s)|^2} \, ds, \end{equation} and the canonical parameterization of the curve $[(t,U)]$ is the function $(V,W)= \left( \tau_u^{-1}, U \circ \tau_u^{-1} \right) $. The function $\tau_u$ admits an absolutely continuous inverse $\tau_u^{-1}$ with $\frac{d}{dt} \tau_u^{-1}= \frac{1}{\sqrt{1+|u|^2}}\circ \tau_u^{-1}$. The reparameterized trajectory $x_u \circ \tau_u^{-1}$ coincides with $y_{(v,w)}$, the unique solution of \eqref{Eq reduced system y} with \begin{equation} \label{Eq transformation u to vw} (v,w)= \left( \frac{1}{\sqrt{1+|u|^2}}\circ \tau_u^{-1}, \frac{u}{\sqrt{1+|u|^2}}\circ \tau_u^{-1} \right) . \end{equation} By the change of variable $t= \tau_u^{-1}$, the cost functional \eqref{lag} becomes \begin{align*} \int_0^1 \! L(x_u,u) dt = \int_0^{\tau_u(1)} \!\!\! L(y_{(v,w)}, u \circ \tau_u^{-1}) \frac{1}{\sqrt{1+|u|^2}}\circ \tau_u^{-1} dt = \int_0^{\tau_u(1)} \!\!\! L\left(y_{(v,w)}, \frac w v \right) v \, dt . \end{align*} Conversely, for any $T \in ]0,+\infty[$ and any measurable function $t \mapsto (v(t),w(t)) \in \mathbb R^{1+k}$ satisfying \begin{align*} & v(t) >0, \ \ v(t)^2+|w(t)|^2 = 1 \quad \text{a.e. } t \in [0,T] , \qquad \int_0^Tv(t) dt =1, \end{align*} there is a unique $u \in L_1^k[0,1]$ satisfying \eqref{Eq transformation u to vw}. Therefore, the problem \eqref{lag}--\eqref{boundary conditions} is equivalent to \begin{align} \label{Eq reduced lagrangian} & I(v,w,T)=\int_0^TL\left( y(t), \frac{w(t)}{v(t)}\right) v(t) dt \rightarrow \min , \\ \label{Eq reduced system} & \dot \theta(t) = v(t) , \quad \dot y(t) = f(y(t))v(t) + G(y(t))w(t), \\ \label{Eq control constraints} & v(t) >0, \quad v(t)^2+|w(t)|^2 =1 \qquad \text{a.e. 
} t \in [0,T], \\ \label{Eq boundary conditions} & \theta(0)=0, \ \theta(T)=1, \ y(0)=x^0, \ y(T)=x^1, \end{align} with free $T\in ]0,+\infty[$. It is crucial that the integrand in \eqref{Eq reduced lagrangian} is parametric in the terminology of L.~C. Young \cite{Young}, i.e., invariant with respect to the dilation $(v,w) \to (\kappa v, \kappa w), \ \kappa \in \mathbb{R}_+$. This will later allow us to extend the functional \eqref{Eq reduced lagrangian} onto the class of Fr\'echet generalized controls. The proof of the following Lemma is given in the Appendix (Subsection \ref{SP L convexity of reduced lagrangean}). \begin{lemma} \label{L convexity of reduced lagrangean} 1. For every $y \in \mathbb R^n$, the function $(v,w)\mapsto L\left( y, \frac w v \right) v $ is convex in $]0,+\infty[\times \mathbb R^k$. 2. For every $(\hat y, \hat v, \hat w) \in \mathbb R^n \times [0,+\infty[\times \mathbb R^k $, \begin{align*} \liminf_{\scriptsize\begin{array}{c} (y,v,w) \rightarrow (\hat y, \hat v, \hat w) \\ v>0 \end{array}} L\left( y, \frac w v \right) v = & \liminf_{\scriptsize \begin{array}{c} (v,w) \rightarrow (\hat v, \hat w) \\ v>0 \end{array}} L\left( \hat y, \frac w v \right) v = \\ = & \lim_{v \rightarrow \hat v, \ v >0 } L\left( \hat y, \frac {\hat w} v \right) v . \ \square \end{align*} \end{lemma} We define the new Lagrangian on $\mathbb{R}^n \times \mathbb{R}_+ \times \mathbb{R}^k$: \begin{align}\label{lambda_extended} & \lambda (y,v,w) = \left\{ \begin{array}{ll} L\left( y, \frac w v \right) v, & \text{for } v>0, \smallskip \\ \lim\limits_{\eta \rightarrow 0^+} L\left( y, \frac w \eta \right) \eta , & \text{for } v=0 . \end{array} \right. \end{align} Lemma \ref{L convexity of reduced lagrangean} implies \begin{corollary} The function $\lambda (y,v,w)$ defined by \eqref{lambda_extended} is the lower semicontinuous envelope of the function $(y,v,w) \mapsto L\left( y, \frac w v \right) v. 
\ \square$ \end{corollary} \begin{remark} Since the lower semicontinuous envelope of a convex function is convex, we conclude that $\lambda $ is convex with respect to $(v,w) \in [0,+\infty[\times \mathbb R^k$. $\square$ \end{remark} Replacing the condition $v(t)>0 $ by $v(t)\geq 0 $ in \eqref{Eq control constraints}, one obtains a problem with controls taking values in the compact set $\left\{ (v,w) \in \mathbb R^{1+k}: v\geq 0, v^2+|w|^2 = 1 \right\}$. This is the so-called compactification technique, which probably originated in \cite{Gam}. For a recent contribution, see \cite{GuSar}. By relaxing the control values to the convex hull $B_k^+$ of this set, we introduce the relaxed problem \begin{align} \label{Eq relaxed lagrangian} & \widehat I(v,w,T)=\int_0^T \lambda \left( y(t), v(t), w(t) \right) dt \rightarrow \min , \\ \label{Eq relaxed system} & \dot \theta(t) = v(t) , \quad \dot y(t) = f(y(t))v(t) + G(y(t))w(t), \\ \label{Eq control relaxed constraints} & (v(t),w(t)) \in B_k^+=\{(v,w)| \ v \geq 0, \quad v^2+| w |^2 \leq 1\} \qquad \text{a.e. } t\in[0,T], \\ \label{Eq boundary conditions relaxed} & \theta(0)=0, \ \theta(T)=1, \ y(0)=x^0, \ y(T)=x^1, \end{align} with free $T\in [0,+\infty[$. \section{The cost functional for Fr\'echet generalized trajectories and controls} \label{S the cost functional} The argument used above shows that, for an ordinary control $u(t)$ with $(t,U(t))$ the graph of its primitive, one gets for each $(V,W) \in [(t,U)]$: \[ J(x_u,u) = \int_0^{V^{\#}(1)} \lambda\left( y_{(v,w)},v,w \right) d \tau . \] This suggests that the cost functional \eqref{lag} can be extended onto the space $\mathcal F_{k,1}^0$ and that this extension should coincide with the functional \begin{equation} \label{Eq extended cost} I([(V,W)])= \int_0^{V^{\#}(1)} \lambda\left( y_{(v,w)},v,w \right) d \tau \qquad [(V,W)] \in \mathcal W_1 . 
\end{equation} First, note that the functional \eqref{Eq extended cost} is properly defined, that is: \begin{proposition} \label{P parameterization invariance of cost} The mapping \[ [(V,W)] \in \mathcal W_1 \mapsto \int_0^{V^{\#}(1)} \lambda\left( y_{(v,w)},v,w \right) d \tau \] does not depend on the particular representative $(V,W) \in [(V,W)]. \ \square$ \end{proposition} \begin{proof} It suffices to repeat the argument of the proof of Proposition \ref{P parameterization invariance of trajectories}. \end{proof} The following property of the functional \eqref{Eq extended cost} holds: \begin{proposition} \label{P cost lower semicontinuity} The functional $I([(V,W)]) = \int_0^{V^{\#}(T)}\lambda (y_{(v,w)},v,w) dt $ is lower semicontinuous in the space of generalized controls $\left( \mathcal F_{k,T}^0,d_T^+ \right)$. $\square$ \end{proposition} \begin{proof} For any $0< \delta < \varepsilon < 1$, $y \in \mathbb R^n$ and $(v,w) \in B_k^+$ (satisfying the constraints \eqref{Eq control relaxed constraints}), we get \begin{align*} L\left( y, \frac{w}{v+\varepsilon} \right)(v+\varepsilon) = & L\left( y, \frac{v+\delta}{v+\varepsilon} \frac{w}{v+\delta} + \frac{\varepsilon-\delta}{v+\varepsilon} 0\right)(v+\varepsilon) \leq \\ \leq & L\left( y, \frac{w}{v+\delta} \right)(v+\delta) +L\left( y, 0 \right)(\varepsilon-\delta) . \end{align*} Passing to the limit in the right-hand side as $\delta \to 0^+$, we invoke Lemma \ref{L convexity of reduced lagrangean} to conclude that \[ \lambda (y,v,w) \geq L\left( y, \frac{w}{v+\varepsilon} \right)(v+\varepsilon) -L\left( y, 0 \right)\varepsilon , \] and thus $\lambda$ is bounded from below on compact sets. Fix a sequence $\left\{ [(V_i,W_i)] \in \mathcal F_{k,T}^0 \right\}_{i \in \mathbb N} $ converging to $[(V,W)]$ with respect to the metric $d_T^+$, and let $\{ (V_i,W_i)\}_{i \in \mathbb N}$, $(V,W)$ be the respective canonical representatives. 
By Theorem \ref{T extension of input-to trajectory map}, $y_{(v_i,w_i)}$ converges uniformly to $y_{(v,w)}$. By Theorem \ref{T continuity Frechet to W}, $(v_i,w_i)$ converges to $(v,w)$ with respect to the $L_1$-norm. Therefore, there is a subsequence $(v_{i_j},w_{i_j})$ converging pointwise almost everywhere to $(v,w)$. Using Fatou's Lemma: \begin{align*} & \liminf_{j \rightarrow \infty} I\left( [(V_{i_j},W_{i_j})] \right) = \liminf_{j \rightarrow \infty} \int_0^{V_{i_j}^{\#}(T)}\lambda (y_{(v_{i_j},w_{i_j})},v_{i_j},w_{i_j}) dt \geq \\ \geq & \int_0^{V^{\#}(T)}\liminf_{j \rightarrow \infty} \lambda (y_{(v_{i_j},w_{i_j})},v_{i_j},w_{i_j}) dt \geq \int_0^{V^{\#}(T)}\lambda (y_{(v,w)},v,w) dt = I\left( [(V,W)] \right) . \end{align*} \end{proof} In order to extend the optimal control problem \eqref{lag}--\eqref{boundary conditions} onto the class of Fr\'echet generalized controls $\mathcal F_{k,1}^0$, let us recall some of the previously obtained results: \begin{itemize} \item The functional $[(V,W)] \mapsto I\left( [(V,W)] \right)$ is lower semicontinuous in $\mathcal F_{k,1}^0$ and \[ I\left( [(t,U)] \right) = J(x_u,u) \qquad \forall u \in L_1^k[0,1] . \] \item The input-to-trajectory map $[(V,W)] \mapsto [(V,y_{(v,w)})]$ is the unique continuous extension of the input-to-trajectory map of system \eqref{affine system}. A generalized trajectory has a well defined endpoint given by $y_{(v,w)}\circ V^\#(1)$. This point does not depend on a particular $(V,y_{(v,w)}) \in [(V,y_{(v,w)})]$; it is the $x$-component of the point at which the Fr\'echet curve $[(V,y_{(v,w)})]$ crosses (leaves) the hyperplane $\{ (t,x)\in \mathbb R^{1+n}: t=1 \}$. \end{itemize} Based on these considerations, we introduce the extended problem: \begin{align} \label{Eq generalized cost} & I\left( [(V,W)] \right) \rightarrow \min , \\ & \label{Eq generalized boundary conditions} [(V,W)] \in \mathcal F_{k,1}^0, \quad y_{(v,w)} \circ V^\#(1) = x^1 . 
\end{align} This problem is equivalent to the relaxed problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}, in the following sense: \begin{itemize} \item[{\bf i)}] If $(v,w)$ is an optimal control for the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}, then $[(V,W)]= \left[\int_0^{( \cdot )} (v,w) d \tau \right]$ is optimal for the problem \eqref{Eq generalized cost}--\eqref{Eq generalized boundary conditions} and the corresponding generalized trajectory is $\left[ ( V,y_{(v,w)})\right]$; \item[{\bf ii)}] If $[(V,W)]$ is optimal for the problem \eqref{Eq generalized cost}--\eqref{Eq generalized boundary conditions} and $(V,W)$ is its canonical representative, then $(v,w)=(\dot V, \dot W)$ is an optimal control for the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}. \end{itemize} \section{Existence of Fr\'echet generalized minimizers for integrands with linear growth } \label{S existence} By the foregoing, the problem \eqref{lag}--\eqref{boundary conditions} admits a generalized solution if and only if the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed} admits a solution. In this Section we prove the existence of a minimizer for \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}. To this end we start with a classical Ascoli-Arzel\`a-Filippov argument to obtain a general necessary and sufficient condition (Proposition \ref{P existence of solution}) for the existence of a minimizer of the relaxed problem. Then we prove that this condition is satisfied when the integrand has linear growth in the control variables. \medskip Consider the set $B_k^+$ defined by \eqref{Eq control relaxed constraints} and, for each $y \in \mathbb R^n$, let \begin{align*} Q(y) = \Big\{ (\phi, v, f(y)v+G(y)w ) : (v,w) \in B_k^+ , \phi \geq \lambda (y,v,w) \Big\} . 
\end{align*} We pass to the differential inclusion form of the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}, following the scheme developed in \cite{Cesa}: \begin{align} \label{Eq nonparametric lagrangian} & C(T) \rightarrow \min, \\ \label{Eq nonparametric dynamics} & \left( \dot C(t), \dot \theta(t), \dot y(t) \right) \in Q(y(t)) \quad \text{a.e. } t \in [0,T], \\ \label{Eq nonparametric boundary 1} & C(0)=0, \ \ \theta (0) = 0, \ \ \theta(T) = 1, \\ \label{Eq nonparametric boundary 2} & y(0)=x^0, \ \ y(T) = x^1, \end{align} with free $T\in[0,+\infty[$. For each $y \in \mathbb R^n$, $\varepsilon >0 $, let $ Q(B_\varepsilon(y)) = \bigcup\limits_{|z-y|<\varepsilon} Q(z)$. The following Lemma shows that the set-valued map $y \mapsto Q(y)$ defining the differential inclusion \eqref{Eq nonparametric dynamics} is continuous. \begin{lemma} \label{L continuity inclusion} For every $y \in \mathbb R^n$, $\ Q(y) = \bigcap\limits_{\varepsilon >0} Q(B_\varepsilon(y))$. $\square$ \end{lemma} \begin{proof} Fix $(\phi,q) \in \bigcap\limits_{\varepsilon >0} Q(B_\varepsilon(y)) $. By definition, there is a sequence \linebreak $\left\{ (y_i,v_i,w_i) \in B_{\frac 1 i}(y) \times B_k^+ \right\}_{i \in \mathbb N}$ such that \begin{align*} &q = \left(v_i, f(y_i)v_i+G(y_i)w_i \right), \qquad \phi \geq \lambda (y_i,v_i,w_i) \end{align*} for every $i \in \mathbb N $. Since $\{(v_i,w_i)\}$ is bounded, we can assume, passing to a subsequence, that it converges to some $(v,w) \in B_k^+$. By continuity of $f,G$, we have $q=\left(v,f(y)v+G(y)w \right)$. By Lemma \ref{L convexity of reduced lagrangean}, $\lambda(y,v,w) \leq \liminf \lambda (y_i,v_i,w_i) \leq \phi$. Therefore, $(\phi, q) \in Q(y)$. \end{proof} The classical Filippov selection theorem requires the Lagrangian to be continuous, a condition that is not guaranteed for the auxiliary Lagrangian $\lambda$. However, we can prove that the conclusion of Filippov's theorem still holds for $\lambda $. 
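The possible discontinuity of $\lambda$ at $v=0$ can be seen numerically. In the sketch below (our own illustration; both sample Lagrangians are hypothetical, not taken from the text), $L(y,w/v)\,v$ is evaluated as $v \to 0^+$: for linear growth $L(x,u)=1+|u|$ the values tend to the finite limit $|w|$, while for quadratic growth $L(x,u)=1+|u|^2$ they diverge when $w \neq 0$, i.e., $\lambda(y,0,w)=+\infty$.

```python
def reduced_lagrangian(L, y, v, w):
    """lambda(y, v, w) = L(y, w/v) * v for v > 0 (the auxiliary Lagrangian)."""
    return L(y, w / v) * v

L_linear = lambda y, u: 1.0 + abs(u)   # linear growth in u (hypothetical sample)
L_quad   = lambda y, u: 1.0 + u * u    # quadratic growth in u (hypothetical sample)

w = 1.0
for k in (1, 3, 5, 7):
    v = 10.0 ** (-k)
    lin = reduced_lagrangian(L_linear, 0.0, v, w)
    quad = reduced_lagrangian(L_quad, 0.0, v, w)
    print(f"v = 1e-{k}:  linear -> {lin:.7f}   quadratic -> {quad:.1f}")

# Linear growth: v + |w| -> |w| = 1, so lambda extends continuously to v = 0.
assert abs(reduced_lagrangian(L_linear, 0.0, 1e-9, w) - 1.0) < 1e-6
# Quadratic growth: v + w^2 / v -> +infinity, so lambda(y, 0, w) = +inf for w != 0.
assert reduced_lagrangian(L_quad, 0.0, 1e-9, w) > 1e8
```

Only under linear growth does $\lambda$ remain finite at $v=0$, which is why impulses (trajectory segments with $v=0$) can carry finite cost in that case.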
\begin{proposition} \label{P Filippov} Fix $(C,\theta,y)$, a trajectory of the differential inclusion \eqref{Eq nonparametric dynamics}, defined on the interval $[0,T]$. There is a measurable control $(v,w):[0,T] \mapsto B_k^+ $ such that \begin{align*} & \dot \theta(t) = v(t), \quad \dot y(t) = f(y(t))v(t) + G(y(t))w(t) , \\ & \dot C(t) \geq \lambda (y(t),v(t),w(t)) , \end{align*} for a.e. $t \in [0,T]. \ \square$ \end{proposition} \begin{proof} See the Appendix (Section \ref{SP P Filippov}). \end{proof} \begin{corollary} \label{C equivalence nonparametric} If $\{ (v_i,w_i)\}_{i \in \mathbb N}$ is a minimizing sequence for the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}, then \begin{equation} \label{Eq minimizing sequence nonparametric} \left\{ \left(\int_0^{(\cdot)} \lambda (y_{(v_i,w_i)},v_i,w_i) dt, V_i, y_{(v_i,w_i)} \right) \right\}_{i \in \mathbb N} \end{equation} is a minimizing sequence for the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2}. The problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed} admits a solution if and only if the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2} does. $\square$ \end{corollary} \begin{proof} By Proposition \ref{P Filippov}, if \eqref{Eq minimizing sequence nonparametric} failed to be a minimizing sequence for the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2}, then there would exist an admissible control $(\hat v, \hat w)$ for which \[ \int_0^{\hat V^\#(1)} \lambda (y_{(\hat v, \hat w)},\hat v,\hat w) dt < \liminf \int_0^{V_i^\#(1)} \lambda (y_{(v_i,w_i)},v_i,w_i) dt, \] which is a contradiction. 
It follows that, whenever $(v,w)$ is a solution of the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}, \[ \left(\int_0^{(\cdot)} \lambda (y_{(v,w)},v,w) dt, V, y_{(v,w)} \right) \] must be a solution of the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2}. Now, suppose that the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2} admits a solution $(C,\theta,y)$. By Proposition \ref{P Filippov}, there is an admissible control $(v,w)$ such that $I(v,w) \leq C(T)$. Since the infimum of problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed} cannot be strictly less than the minimum of \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2}, it follows that $(v,w)$ must be a solution of \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}. \end{proof} The following Proposition provides a necessary and sufficient condition for the existence of a solution of the problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2}. \begin{proposition} \label{P existence of solution} The problem \eqref{Eq nonparametric lagrangian}--\eqref{Eq nonparametric boundary 2} admits a solution if and only if it admits a minimizing sequence $\left\{ (C_i, \theta_i, y_i, T_i) \right\}_{i \in \mathbb N}$ for which the sequences \[ \{T_i \}_{i \in \mathbb N}, \quad \left\{ \left\|(C_i, \theta_i, y_i) \right\|_{L_\infty[0,T_i]} \right\}_{i \in \mathbb N} \] are bounded. $\square$ \end{proposition} \begin{proof} The condition is clearly necessary. To verify sufficiency, note that such a sequence is bounded and equicontinuous. Therefore, the Ascoli-Arzel\`a Theorem guarantees the existence of a subsequence converging uniformly to a limit. Lemma \ref{L continuity inclusion} guarantees that the limit solves the differential inclusion \eqref{Eq nonparametric dynamics} and hence the limit is optimal. 
\end{proof} Corollary \ref{C equivalence nonparametric} and Proposition \ref{P existence of solution} immediately imply the following: \begin{corollary} \label{C existence of solution} The relaxed problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed} admits a solution if and only if it has a finite infimum and admits a minimizing sequence $\{ ( v_i, w_i )\} $ for which the sequences \[ \left\{ T_i \right\}_{i \in \mathbb N}, \quad \left\{ \left\| y_{(v_i,w_i)} \right\|_{L_\infty[0,T_i]} \right\}_{i \in \mathbb N} \] are bounded. $\square$ \end{corollary} Using this Corollary, we will prove the existence of generalized solutions when the Lagrangian in \eqref{lag} has linear growth with respect to the controls: \begin{proposition} \label{P existence linear growth} Suppose the following conditions hold: \begin{itemize} \item[i)] There are constants $a \in \mathbb R$, $b>0$ such that \[ L(x,u) \geq a+b|u| \qquad \forall (x,u) \in \mathbb R^{n+k} . \] \item[ii)] There are constants $ \tilde a, \tilde b < +\infty $ such that \begin{align*} & |G(x)u| \leq (\tilde a + \tilde b |x|)|u| + \tilde b L(x,u), \\ &|f(x)| \leq \tilde a+ \tilde b (|x|+ L(x,u)) \qquad \forall (x,u) \in \mathbb R^{n+k} . \end{align*} \end{itemize} Then, the relaxed problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed} admits a minimizer, i.e., the original problem \eqref{lag}--\eqref{boundary conditions} admits a Fr\'echet generalized minimizer. $\square$ \end{proposition} \begin{proof} Adding a suitable constant to the Lagrangian $L$, we may replace the conditions (i), (ii) by \begin{itemize} \item[i$^\prime$)] There is a constant $b>0$ such that \[ L(x,u) \geq b(1+|u|) \qquad \forall (x,u) \in \mathbb R^{n+k} . \] \item[ii$^\prime$)] There is a constant $ \tilde b < +\infty $ such that \begin{align*} & |G(x)u| \leq \tilde b (|x-x^0||u| + L(x,u)), \\ &|f(x)| \leq \tilde b (|x-x^0|+ L(x,u)) \qquad \forall (x,u) \in \mathbb R^{n+k} . 
\end{align*} \end{itemize} Fix a minimizing sequence $\{ (v_i, w_i )\} $ for the problem \eqref{Eq relaxed lagrangian}--\eqref{Eq boundary conditions relaxed}. Due to Propositions \ref{P parameterization invariance of trajectories} and \ref{P parameterization invariance of cost}, $ ( v_i, w_i )\circ \ell_{( V_i, W_i )}^\# $ is also a minimizing sequence. Thus, we can assume that \begin{equation} \label{Eq control standartization} v_i(t)^2 + |w_i(t)|^2 =1 \qquad \text{a.e. } t\geq 0, \ \forall i \in \mathbb N . \end{equation} In that case, condition (i$^\prime$) guarantees that $\lambda (y,v_i,w_i) \geq \frac{b}{\sqrt{2}}\sqrt{v_i^2+|w_i|^2} = \frac{b}{\sqrt{2}}$. Hence $\hat I(v_i,w_i,T_i) \geq \frac{b}{\sqrt{2}} T_i$, and therefore the infimum of the problem is finite and the sequence $\{ T_i \}$ is bounded. From condition (ii$^\prime$), we get \begin{align*} & |y_{(v_i,w_i)}(t)-x^0| \leq \int_0^t | f(y_{(v_i,w_i)}) |v_i + |G(y_{(v_i,w_i)})w_i| d \tau \leq \\ \leq & 2 \int_0^t \tilde b |y_{(v_i,w_i)}-x^0| + \tilde b \lambda(y_{(v_i,w_i)},v_i,w_i) d \tau \leq 2\tilde b \hat I(v_i,w_i,T_i)+ 2\tilde b \int_0^t |y_{(v_i,w_i)}-x^0| d \tau , \end{align*} and, by Gronwall's Lemma, the sequence $\{\|y_{(v_i,w_i)}\|_{L_\infty[0,T_i]} \}$ is bounded. \end{proof} \section{Lavrentiev gap for ordinary and generalized controls} \label{S Lavrentiev phenomenon} We briefly discuss what we call the Lavrentiev gap for the classes of ordinary and Fr\'echet generalized controls. We say that the functional $I$ exhibits an $L_1^k[0,1]$-$\mathcal F_{k,1}^0$ {\it Lavrentiev(-type) gap} if \[ \inf\limits_{u \in L_1^k[0,1]} I\left( [(t,U)] \right) > \inf\limits_{[(V,W)] \in \mathcal F_{k,1}^0} I \left( [(V,W)] \right) . \] This definition is complete only after we specify how to deal with the boundary conditions \eqref{boundary conditions}. One possibility is to consider approximations of generalized controls by ordinary controls that satisfy the boundary conditions exactly. 
That is, to take the infima over the $u \in L_1^k[0,1]$ satisfying \eqref{boundary conditions} and over the $[(V,W)] \in \mathcal F_{k,1}^0$ satisfying \eqref{Eq generalized boundary conditions}. Alternatively, we may consider approximations of generalized controls by ordinary controls that satisfy the boundary conditions only \emph{approximately}. We adopt the latter point of view, which leads to the following definition. \begin{definition}\label{D Lavrentiev gap} The functional $I$ exhibits an $L_1^k[0,1]$-$\mathcal F_{k,1}^0$ {\it Lavrentiev gap} if \[ \lim_{\varepsilon \rightarrow 0^+} \inf_{\scriptsize \begin{array}{c} u \in L_1^k[0,1] \\ |x_u (1)- x^1| \leq \varepsilon \end{array}} I\left( [(t,U)] \right) > \inf_{\scriptsize \begin{array}{c} [(V,W)] \in \mathcal F_{k,1}^0 \\ y_{(v,w)} \circ V^\#(1) = x^1 \end{array}} I \left( [(V,W)] \right) . \ \square \] \end{definition} The original Lavrentiev phenomenon has been studied in the classical problem of the calculus of variations, where simple examples with a $W_{1,\infty}$-$W_{1,1}$ gap are known \cite{BallMizel,Mania}. Some generalizations can be found in \cite{Sa97}. Therefore, the occurrence of an $L_1^k[0,1]$-$\mathcal F_{k,1}^0$ gap is not surprising; the following example shows that such a gap is a real possibility: \begin{example} \label{Ex Lavrentiev gap 1} Consider the optimal control problem \begin{align*} & J(u) = \int_0^1|x_1(t)|+ h\left(x_1(t),u(t) \right) dt \rightarrow \min, \\ & \dot x_1 = x_1+x_2, \quad \dot x_2 = u, \quad x(0) = (0,-1), \quad x(1) =(0,0) , \end{align*} with \[ h(x_1,u) = \left\{ \begin{array}{ll} \max \left(|u| - \frac{1}{\sqrt{|x_1|}}, 0 \right) & \text{for } x_1 \neq 0 , \smallskip \\ 0 & \text{for } x_1 = 0 . \end{array} \right. \] Note that the integrand $|x_1|+h(x_1,u)$ is a continuous function. 
The problem is equivalent to \begin{align} & I(v,w) = \int_0^T |y_1|v+h\left(y_1,\frac{w}{v}\right) v dt \rightarrow \min , \quad T \ \mbox{free},\label{Eq Ex Lavrentiev functional} \\ & \dot y_1 = (y_1+y_2)v, \quad \dot y_2 = w, \quad \dot{V}=v, \quad v\geq 0, \ v^2+w^2 = 1, \label{Eq Ex Lavrentiev constraints} \\ & y_1(0) = 0, \ y_2(0)=-1, \quad V(0)=0, \quad V(T)=1, \quad y_1(T) = y_2(T)=0 .\label{Eq Ex Lavrentiev boundary cond} \end{align} The control $(\hat v, \hat w ) = (0,1) \chi_{[0,1]} + (1,0) \chi_{]1,+\infty[} $, $T=2$, satisfies the boundary condition and $I(\hat v, \hat w)=0$. Thus, it is optimal. It corresponds to a generalized control containing an impulse which is optimal for the initial problem. We will show that for the problem \eqref{Eq Ex Lavrentiev functional}--\eqref{Eq Ex Lavrentiev boundary cond} there is a constant $C>0$ such that $I(v,w)\geq C$ whenever $v(t)>0$ almost everywhere, $V(T)=1$, and $|y_2(T)|$ is sufficiently small, i.e., whenever a control in the original problem is ordinary and generates a trajectory with endpoint in a neighbourhood of the boundary condition $x(1)=(0,0)$. Fix an arbitrary triple $(v,w,T)$ with $v(t)>0$ and $v(t)^2+w(t)^2 =1 $ for a.e. $t \geq 0$, such that $V(T)=1$ and $y_2(T)>-\frac 1 2$. Let $ T_1= \min\left\{ t \in [0,T]: y_2(t) = -\frac 1 2 \right\}$, hence $y_2(t) \leq -\frac 1 2$ on $[0,T_1]$. Given that $|\dot{y}_2(t)|=|w(t)|<1 $ we conclude $T_1>1/2$. Then for $t \in [0,T_1]$: \begin{equation}\label{Eq_Lavrentiev_dynam_y1} y_1(t)=\int_0^t e^{\int_s^tv(\tau)d\tau}v(s)y_2(s)ds<0, \end{equation} and $ \dot{y}_1(t)=v(t)(y_1(t)+y_2(t))<0$. Hence $ \dot y_1=v(t) (y_1(t)+y_2(t) )\leq v(t) y_2(t)$, and \begin{align} \label{Z013} | y_1(t) | = & -y_1(t) \geq \int_0^t-y_2(s)v(s)ds \geq \frac 1 2 \int_0^tv(s)ds=\frac 1 2 V(t) \qquad \forall t \in [0,T_1].
\end{align} Then \[ \int_0^T|y_1(t)|v(t)dt \geq \int_0^{T_1} \frac 1 2 V(t)v(t)dt=\frac 1 4 (V(T_1))^2 , \] and from \eqref{Eq Ex Lavrentiev functional} \[ I(v,w) \geq \frac 1 4 (V(T_1))^2 + \int_0^{T_1}\left(|w(t)|-\sqrt{\frac{2}{V(t)}}v(t)\right)dt \geq \frac 1 4 (V(T_1))^2 +\frac 1 2 -\sqrt{\frac{V(T_1)}{2}}; \] here we used that $\int_0^{T_1}|w(t)|dt \geq |y_2(T_1)-y_2(0)|=\frac 1 2$. Given that $V(T_1) \in [0,1]$ we conclude that \[ I(v,w) \geq \min_{z \in [0,1]}\left(\frac 1 4 z^2 +\frac 1 2 - \sqrt{\frac z 2}\right)=\frac 1 2 - \frac{3}{2^{8/3}} \geq 0.0275 , \] the minimum being attained at $z=2^{-1/3}$. $\square$ \end{example} Now, we present some conditions that exclude an $L_1^k[0,1]$-$\mathcal F_{k,1}^0$ Lavrentiev gap. \begin{proposition} \label{P no gap continuous} If the auxiliary Lagrangian $\lambda $ is continuous in $\mathbb R^n \times B^+_k$ (see \eqref{Eq control relaxed constraints}), then the problem \eqref{lag}--\eqref{boundary conditions} does not exhibit a Lavrentiev gap. $\square$ \end{proposition} \begin{proof} Pick a generalized control $[(V,W)]$ with canonical element $(V,W)$, and $T \in ]0,+\infty[$, satisfying the boundary condition $y_{(v,w)}(T) =x^1$, $V(T)=1$. For each $\varepsilon >0$, let \[ \left( V_\varepsilon(t), W_\varepsilon(t) \right) = \left( V(\frac{t}{1+\varepsilon}) + \frac{\varepsilon t}{1 + \varepsilon }, W(\frac{t}{1+\varepsilon}) \right) , \qquad t \geq 0 , \] and let $T_\varepsilon $ be the unique $t$ solving $ V_\varepsilon(t) = 1$. Then, $\left[ (V_\varepsilon, W_\varepsilon) \right] $ is an ordinary control and Lebesgue's dominated convergence theorem guarantees that \[ \lim_{\varepsilon \rightarrow 0^+} \int_0^{T_\varepsilon} \lambda \left( y_{(v_\varepsilon,w_\varepsilon)}, v_\varepsilon, w_\varepsilon \right) dt = \int_0^T \lambda \left( y_{(v,w)}, v, w \right) dt .
\] \end{proof} The following example shows that there are problems in which the Fr\'echet generalized minimizer contains jumps along discontinuities of the auxiliary Lagrangian, and which nevertheless exhibit no Lavrentiev gap. Thus, continuity of the auxiliary Lagrangian is \emph{not} a necessary condition to exclude the existence of a gap. \begin{example} \label{Ex Lavrentiev gap 2} Consider the optimal control problem \begin{align*} & J(u) = \int_0^1|x_1(t)|^\alpha u(t)^2 dt \rightarrow \min, \\ & \dot x_1 = x_1+x_2, \quad \dot x_2 = u, \quad x(0) = (0,-1), \quad x(1) =(0,0) , \end{align*} with $\alpha >0$ constant. The auxiliary Lagrangian is \[ \lambda(y_1,y_2,v,w) = \lambda(y_1,v,w) = \left\{\begin{array}{ll} |y_1|^\alpha \frac{w^2}{v}, & \text{if } v \neq 0, \\ 0 , & \text{if } w=0 \ \text{or } y_1=0,\\ + \infty , & \text{if } v=0, \ y_1 \neq 0, \ w \neq 0 . \end{array} \right. \] Clearly, it is discontinuous at the points $(0,0,w)$, $w \in \mathbb R$, for every positive $\alpha$. The auxiliary problem is \begin{align*} & I(v,w) = \int_0^{T} \lambda (y_1,v,w) dt \rightarrow \min , \\ & \dot y_1 = (y_1+y_2)v, \quad \dot y_2 = w, \quad v\geq 0, \ v^2+w^2 = 1, \\ & y(0) = (0,-1), \quad V(T)=1, \quad y(T) = (0,0) . \end{align*} The control $(\hat v, \hat w ) = (0,1) \chi_{[0,1]} + (1,0) \chi_{]1,+\infty[} $ satisfies the boundary condition with $T=2$ and $I(\hat v, \hat w)=0$. Thus, it is optimal. It corresponds to a Fr\'echet generalized control with an impulse at $t=0$. Now, consider the approximation of the generalized minimizer by ordinary controls corresponding to $(v_\eta, \hat w ) = (\eta,1) \chi_{[0,1]} + (1,0) \chi_{]1,+\infty[} $. A simple computation shows that $V_\eta(2-\eta)=1$, $y_{(v_\eta, \hat w)}(2-\eta) = O(\eta)$ and $I(v_\eta, \hat w) = O(\eta^{\alpha -1})$. Thus, the problem has no Lavrentiev gap when $\alpha >1$. The argument breaks down for $\alpha \leq 1$. Indeed, it can be shown that the problem has a Lavrentiev gap when $\alpha \in ]0,1[$.
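The computation behind these estimates is elementary and can be sketched as follows (we do not track the constants). On $[0,1]$ one has $v_\eta = \eta$ and $\hat w = 1$, hence $y_2(t) = t-1$ and, by variation of constants, \[ y_1(t) = \eta \int_0^t e^{\eta (t-s)} (s-1) \, ds = O(\eta) \quad \text{uniformly on } [0,1] , \] so the integrand equals $|y_1|^\alpha \, \hat w^2 / v_\eta = O(\eta^\alpha)\eta^{-1}$ and the contribution of $[0,1]$ to $I(v_\eta,\hat w)$ is $O(\eta^{\alpha-1})$. On $[1, 2-\eta]$ one has $\hat w = 0$, so the integrand vanishes, $y_2 \equiv y_2(1) = 0$, and $\dot y_1 = y_1$ gives $y_1(2-\eta) = y_1(1)\, e^{1-\eta} = O(\eta)$. Finally, $V_\eta(t) = \eta t$ on $[0,1]$ and $V_\eta(t) = \eta + (t-1)$ for $t \geq 1$, whence $V_\eta(2-\eta) = 1$. Here the normalization $v^2+w^2=1$ is disregarded, which is legitimate by the reparameterization invariance established in Propositions \ref{P parameterization invariance of trajectories} and \ref{P parameterization invariance of cost}.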
$\square$ \end{example} Proposition \ref{P no gap continuous} has the following immediate corollary. \begin{corollary} \label{C no gap homogeneous} Suppose that the Lagrangian can be written as \[ L(x,u) = L_1(x)+L_2(x,u), \qquad \forall (x,u) \in \mathbb R^{n+k} , \] with $u \mapsto L_2(x,u)$ positively homogeneous of degree $1$ for every $x \in \mathbb R^n$. Then, the problem \eqref{lag}--\eqref{boundary conditions} has no Lavrentiev gap in the sense of Definition \ref{D Lavrentiev gap}. $\square$ \end{corollary} \begin{proof} If the assumption holds, then $ \lambda(y,v,w) = L_1(y)v+L_2(y,w) $ is continuous, and Proposition \ref{P no gap continuous} applies. \end{proof} \begin{remark} \label{Rmk 3} Linear growth of the Lagrangian with respect to the control does not guarantee the absence of a Lavrentiev gap. To see this, consider the same dynamics and boundary conditions as in Example \ref{Ex Lavrentiev gap 1}, and introduce the modified functional \[ \tilde J(u) = \int_0^1|x_1(t)|+ h\left(x_1(t),u(t) \right) + \varepsilon |u(t)| dt, \] with $\varepsilon>0$, a small constant. Existence of a generalized minimizer is guaranteed by Proposition \ref{P existence linear growth}. Since $\tilde J(u) \geq J(u)$, the inequality $\tilde J(u) \geq C$ holds for every ordinary control satisfying the boundary condition, with $C$ the constant from Example \ref{Ex Lavrentiev gap 1}. However, for the generalized minimizer given in Example \ref{Ex Lavrentiev gap 1}, we have $\tilde I(\hat v, \hat w) = \varepsilon $, and therefore $\inf\limits_{[(v,w)] \in \mathcal F_{1,1}^0} \tilde I \left( [(v,w)] \right) < \inf\limits_{u \in L_1[0,1]} \tilde J(u)$ for sufficiently small $\varepsilon>0$. $\square$ \end{remark} We conclude this section with two further cases where a Lavrentiev gap cannot occur. \begin{proposition} \label{Rm no Lavrenteev} If $L(x,u) = L_1(x) + L_2(u)$ with $L_1$ continuous and $L_2$ convex, then the optimal control problem does not have a Lavrentiev gap.
$\square$ \end{proposition} \begin{proof} Since $y_{(v+\eta, w)} \rightarrow y_{(v,w)}$ uniformly when $\eta \rightarrow 0^+$, and since convexity of $L_2$ yields \begin{align*} &\left( L_1(y_{(v+\eta,w)}) + L_2\left( \frac w{v+\eta} \right) \right) (v+\eta ) \leq \\ \leq & \lambda (y_{(v,w)},v,w) + \left( L_1(y_{(v+\eta,w)}) - L_1(y_{(v,w)}) \right) (v+\eta ) + \left( L_1(y_{(v,w)}) + L_2(0) \right) \eta , \end{align*} the result follows from Lebesgue's dominated convergence theorem. \end{proof} \begin{proposition} \label{P no gap no drift} If $f\equiv 0$ (i.e., the system \eqref{affine system} has no drift), then the problem \eqref{lag}--\eqref{boundary conditions} has no Lavrentiev gap in the sense of Definition \ref{D Lavrentiev gap}. $\square$ \end{proposition} \begin{proof} If the system \eqref{affine system} has no drift, then $y_{(v,w)}=y_{(\tilde v, w)}$ for every $v, \tilde v, w$. Since $L(y, \frac{w}{v+\varepsilon})(v+\varepsilon) \leq \lambda(y,v,w) + \varepsilon L(y,0)$, it follows that \[ \lim_{\varepsilon \rightarrow 0^+} I(V+\varepsilon t,W) = I(V,W) \] for every generalized control $[(V,W)]$. \end{proof} \section{Example} \label{S Example} We provide an example of a Lagrange variational problem with a functional of linear growth, for which the minimum is attained at a generalized minimizer. The set of all horizontal curves in the Heisenberg group can be identified with the set of trajectories of the control system \begin{align} \label{Eq system Heisenberg} \dot x_1 = u_1, \quad \dot x_2 = u_2, \quad \dot x_3 =2x_2u_1-2x_1u_2 .
\end{align} By adding a smooth drift $f$, one obtains a control-affine system \begin{align} \label{Eq system Heisenberg affine} \left( \begin{array}{c} \dot x_1 \\ \dot x_2 \\ \dot x_3 \end{array} \right) = \left( \begin{array}{c} f_1(x) \\ f_2(x) \\ f_3(x) \end{array} \right) + \left( \begin{array}{c} 1 \\ 0 \\ 2x_2 \end{array} \right) u_1 + \left( \begin{array}{c} 0 \\ 1 \\ -2x_1 \end{array} \right) u_2 . \end{align} We wish to minimize the functional \begin{align} \label{Eq cost example} J(u_1,u_2) = \int_0^1 \sqrt{1+u_1^2+u_2^2} \ dt, \end{align} under the boundary conditions $x(0)=\overline{x}$, $x(1)=\overline{\overline{x} }$. This problem satisfies the assumptions of Proposition \ref{P existence linear growth}, provided $f$ does not have superlinear growth with respect to $x$. Therefore, it has a generalized solution in the class $\mathcal F_{2,1}^0$. The auxiliary Lagrangian is \[ \lambda (y,v,w) = v\sqrt{1+\left( \frac{w_1}{v} \right)^2 +\left( \frac{w_2}{v} \right)^2} = \sqrt{v^2+w_1^2+w_2^2} . \] Therefore, Proposition \ref{P no gap continuous} guarantees that the problem \eqref{Eq system Heisenberg affine}--\eqref{Eq cost example} does not have an $L_1^2[0,1]$-$\mathcal F_{2,1}^0$ Lavrentiev gap. The extension of the problem \eqref{Eq system Heisenberg affine}--\eqref{Eq cost example} is equivalent to the problem \begin{align} \label{Eq example reduced Lagrangian} & T \rightarrow \min , \\ & \label{Eq example reduced dynamics} \left( \begin{array}{c} \dot y_1 \\ \dot y_2 \\ \dot y_3 \end{array} \right) = \left( \begin{array}{c} f_1(y) \\ f_2(y) \\ f_3(y) \end{array} \right) v + \left( \begin{array}{c} 1 \\ 0 \\ 2y_2 \end{array} \right) w_1 + \left( \begin{array}{c} 0 \\ 1 \\ -2y_1 \end{array} \right) w_2 , \quad \dot{V}=v, \\ & \label{Eq example control values} v \geq 0, \quad v^2+w_1^2+w_2^2 =1, \\ & \label{Eq example boundary conditions} y(0) = \overline{x}, \quad V(0)=0, \quad V(T)=1, \quad y (T) = \overline{ \overline{x} } .
\end{align} Optimal controls for this problem satisfy the Pontryagin maximum principle with Hamiltonian \begin{align*} H=& \left( \lambda_1 f_1 (y)+\lambda_2 f_2(y)+\lambda_3 f_3(y) + \lambda_4 \right)v + \left( \lambda_1+2\lambda_3y_2\right) w_1 + \left( \lambda_2-2\lambda_3y_1\right) w_2 . \end{align*} An optimal trajectory of the problem \eqref{Eq system Heisenberg affine}--\eqref{Eq cost example} exhibits a jump if there is an interval where the corresponding extremal of the problem \eqref{Eq example reduced Lagrangian}--\eqref{Eq example boundary conditions} satisfies \[ \lambda_1 f_1 (y)+\lambda_2 f_2(y)+\lambda_3 f_3(y) + \lambda_4 \leq 0 , \] and hence, by the Pontryagin maximum principle, $v(t) =0$. Jump paths are sub-Riemannian geodesics of the Heisenberg group. The presence or absence of jumps in optimal solutions depends on the drift vector field $f$. \subsection{Constant drift} For example, if the drift is a constant vector field of the form $f \equiv (0,0,C)$, one can easily conclude that all optimal trajectories are continuous. Indeed, in this case the Hamiltonian amounts to \[ H= \left( \lambda_3 C + \lambda_4 \right)v + \left( \lambda_1+2\lambda_3y_2\right) w_1 + \left( \lambda_2-2\lambda_3y_1\right) w_2 , \] and the extremals satisfy $\dot \lambda_3= \dot \lambda_4 \equiv 0,\ v(t)=\max\{0,\lambda_3C+\lambda_4 \}$. It follows that $\lambda_3C+\lambda_4$ is constant. If this constant is positive, then $v(\cdot)$ does not vanish and the extremal trajectory is continuous; if it is non-positive, then $v(\cdot)$ vanishes identically. The latter possibility is incompatible with the condition $\int_0^T v(s)ds=1$. \subsection{Case of linear drift} In contrast with the case above, for the linear drift vector field $f(x)=(0,0,-x_3)$ optimal trajectories may have a jump. This happens, for example, for the boundary conditions $\overline{x}=0$, $\overline{\overline{x}}=(0,0,C)$, whenever $C>0$ is large enough.
We prove that in this case the optimal trajectory consists of an analytic arc in the interval $[0,1[$ and a final jump at $t=1$. Let us write the equations of the Pontryagin maximum principle with the Hamiltonian \[ H= \left( \lambda_4 - \lambda_3y_3 \right)v + \left( \lambda_1+2\lambda_3y_2\right) w_1 + \left( \lambda_2-2\lambda_3y_1\right) w_2 . \] The adjoint vector satisfies the system \begin{equation} \label{Eq dynamic adjoint vector} \dot \lambda_1 = 2 \lambda_3 w_2, \quad \dot \lambda_2 = -2 \lambda_3 w_1, \quad \dot \lambda_3 = \lambda_3 v, \quad \dot \lambda_4 = 0. \end{equation} Since the Hamiltonian $H$ is positively homogeneous with respect to the adjoint variables, we may restrict ourselves to the abnormal case, $H\equiv 0$, and the normal one, $H\equiv 1$. \begin{lemma} Abnormal extremals for this problem are trivial: $y(t) \equiv 0. \ \square$ \end{lemma} \begin{proof} The identity $H \equiv 0$ implies \begin{equation} \label{Z016} \lambda_1+2\lambda_3y_2 \equiv 0 , \quad \lambda_2-2\lambda_3 y_1 \equiv 0, \quad \lambda_4-\lambda_3y_3 \leq 0 . \end{equation} Differentiating the first two equalities, one obtains \[ \lambda_3(y_2v+2w_2) = \lambda_3(y_1v+2w_1)=0 . \] Besides, \begin{equation} \label{Eq lambda3} \lambda_3(t)=e^{V(t)}\lambda_3(0) , \end{equation} and if $\lambda_3$ vanishes at one point, then $ \lambda_3\equiv 0 $, and by \eqref{Z016} and nontriviality of the multiplier, $\lambda_1 \equiv \lambda_2 \equiv 0$, $\lambda_4 <0$, and $v \equiv 0$, meaning that the end-point condition $V(T)=1$ cannot be achieved. If $y_2v+2w_2 = y_1v+2w_1 \equiv 0 $, then $1=v^2+w_1^2+w_2^2=v^2(1+y_1^2/4+y_2^2/4) $ and $v= \frac{2}{\sqrt{4+y_1^2+y_2^2}}$ is absolutely continuous with \[ \frac{dv}{dt}= 2 \frac{y_1^2+y_2^2}{\left( 4+y_1^2+y_2^2 \right)^2} = \frac 1 2 v^2(1-v^2) .
\] Besides $v(0)= \frac{2}{\sqrt{4+(y_1(0))^2+(y_2(0))^2}}=1$, and hence $v(t)\equiv 1$, $w_1(t)=w_2(t) \equiv 0$, which results in the trivial trajectory $y \equiv 0$. \end{proof} Now, consider an extremal $(y,\lambda)=(y_1,y_2,y_3,\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ such that $H\equiv 1$, $y(0)=0$ and $y(T)=(0,0, C)$, $C>1$. The extremal controls are \begin{equation} \label{Eq example optimal controls} v=\max (0,\lambda_4-\lambda_3 y_3), \qquad w_1= \lambda_1+2\lambda_3y_2, \qquad w_2= \lambda_2-2\lambda_3y_1 . \end{equation} We will use the following three lemmata. \begin{lemma} \label{L decreasing v} For the imposed boundary conditions, the extremal control $v(\cdot)$ is monotonically decreasing and $0<\lambda_4 \leq 1$. $\square$ \end{lemma} \begin{proof} Let $\sigma_v(t)=\lambda_4-\lambda_3(t) y_3(t)$ be the switching function, which determines $v(t)$ along the extremal. From \eqref{Eq example optimal controls} and the dynamics \eqref{Eq example reduced dynamics}, \eqref{Eq dynamic adjoint vector}, we have \begin{align*} \frac{d}{dt}\left( \lambda_1y_2-\lambda_2y_1\right) = 2 \lambda_3w_2y_2 +\lambda_1w_2 +2\lambda_3w_1y_1-\lambda_2w_1 = w_1w_2-w_2w_1=0 , \end{align*} and therefore \begin{equation} \label{Eq conserved quantity} \lambda_1y_2-\lambda_2y_1 \equiv 0 \end{equation} along any extremal trajectory with $y(0)=0$. Further, \begin{align*} \frac{d}{dt}(\lambda_3y_3) = & \lambda_3vy_3 + \lambda_3(-y_3v+2y_2w_1-2y_1w_2) = \\ = & 2\lambda_3(\lambda_1y_2-\lambda_2y_1)+4\lambda_3^2(y_1^2+y_2^2) = 4\lambda_3^2(y_1^2+y_2^2) \geq 0. \end{align*} Therefore, $\frac{d}{dt}(\lambda_4 - \lambda_3 y_3) = -\frac{d}{dt}(\lambda_3y_3) \leq 0$ and hence the extremal control $v$ is a monotonically decreasing function. Since the boundary condition \eqref{Eq example boundary conditions} requires $v$ to be positive in some interval, we see that $\lambda_4>0$. The equality $H \equiv 1$ implies $\lambda_4 = \lambda_4- \lambda_3(0) y_3(0) \leq 1$.
\end{proof} \begin{lemma} \label{L rescaling} The trajectories of the system \[ \dot y_1=w_1, \qquad \dot y_2=w_2, \qquad \dot y_3=-y_3v+2y_2w_1-2y_1w_2, \qquad y(0)=0 , \] are invariant with respect to the dilation $$(v,w_1,w_2,y_1,y_2,y_3) \to (v,\eta w_1, \eta w_2, \eta y_1,\eta y_2,\eta^2 y_3). \ \square $$ \end{lemma} \begin{proof} A direct verification. \end{proof} \begin{lemma} \label{L attainable set bounded} For each $T \geq 0$ the attainable set of the system \eqref{Eq example reduced dynamics} is bounded; in particular, the time $T_C$ needed to attain the point $(0,0,C)$ grows to $+\infty$ as $C \to +\infty$. $\square$ \end{lemma} \begin{proof} Notice that the right-hand side of \eqref{Eq example reduced dynamics} is bounded by a linear function of $|y|$, so the first claim follows from Gronwall's inequality, and the second is an immediate consequence. \end{proof} Now, let $(\hat y,\hat \lambda)$ be an extremal satisfying the boundary conditions \eqref{Eq example boundary conditions}, and let $(\hat v,\hat w) =(\hat v,\hat w_1,\hat w_2) $ be the corresponding extremal control. Assume that the extremal value of the functional is $\hat T$ and that $\hat v(t)>0$ on $[0, \hat T[$. We will prove that if $C>0$ is large enough, the extremal cannot be optimal. We proceed by showing that there is a control $(v,w)$ with $v\geq 0$ such that the corresponding trajectory of \eqref{Eq example reduced dynamics} satisfies $V(\hat T)=1$, $y(\hat T) =(0,0,C) $, and \[ \int_0^{\hat T} \sqrt{v^2+w_1^2+w_2^2} dt < \hat T . \] This inequality requires that $v^2+w_1^2+w_2^2\not \equiv 1$, but Propositions \ref{P parameterization invariance of trajectories} and \ref{P parameterization invariance of cost} show that $(v,w)$ can be transformed by a time reparameterization into a control satisfying \eqref{Eq example control values} and \eqref{Eq example boundary conditions} for $T = \int_0^{\hat T} \sqrt{v^2+w_1^2+w_2^2} dt$ and therefore $\hat T$ is not minimal. For each $\varepsilon \in ]0 , \hat T [$, let $a= \int_{\hat T-\varepsilon}^{\hat T} \hat v dt $.
We avoid the notation $a(\varepsilon)$, but keep the dependence on $\varepsilon$ in mind. In particular, $0 < a \leq \varepsilon$. Fix $\varepsilon$ and consider the modified control $\tilde v$, defined as \[ \tilde v(t) = \left\{ \begin{array}{ll} \hat v(t)+1, & t \in [0,a], \smallskip \\ \hat v(t), & t \in [a,\hat T- \varepsilon [, \smallskip \\ 0, & t \in [\hat T- \varepsilon, \hat T], \end{array} \right. \] and let $\tilde y$ be the trajectory of the system \eqref{Eq example reduced dynamics} for the control $(\tilde v, \hat w)$. Then, $\tilde V(\hat T) = \hat V(\hat T) =1$, $\tilde y_1 \equiv \hat y_1$, and $\tilde y_2\equiv \hat y_2$. Further, for any $t \geq 0$: \begin{align*} \tilde y_3(t) - \hat y_3(t) = & \int_0^t \hat y_3 \hat v - \tilde y_3 \tilde v d \tau = \int_0^t -(\tilde y_3 - \hat y_3) \tilde v + \hat y_3 (\hat v - \tilde v ) d \tau . \end{align*} It follows that \begin{align*} \tilde y_3(\hat T) - \hat y_3(\hat T) = e^{-1}\int_0^{\hat T} e^{\tilde V(\tau)}\hat y_3(\hat v - \tilde v) d \tau = - \int_0^a e^{\tilde V(\tau)-1}\hat y_3 d \tau + \int_{\hat T - \varepsilon}^{\hat T} \hat y_3 \hat v d \tau . \end{align*} Since $|\hat y_3(t)| \leq 2t^2$ and $\lim\limits_{t \rightarrow \hat T} \hat y_3(t) =C$, there is a constant $k \in ]0,+\infty[$ such that \begin{align} \label{Z022} \tilde y_3(\hat T) > C(1+a-k\varepsilon a) , \end{align} for every $C \in ]0,+\infty[$ and every sufficiently small $\varepsilon >0$. Let $\eta = \sqrt{\frac{C}{\tilde y_3(\hat T)}}$.
Due to Lemma \ref{L rescaling}, the control $(v,w) = ( \tilde v, \eta \hat w_1, \eta \hat w_2) $ satisfies the boundary conditions \eqref{Eq example boundary conditions}, and we estimate the functional \begin{align*} & \int_0^{\hat T} \sqrt{v^2+w_1^2+w_2^2} dt = \\ = & \int_0^a \sqrt{(1+\hat v)^2 + \eta^2 (1- \hat v^2)} dt + \int_a^{\hat T-\varepsilon} \sqrt{ \hat v ^2 + \eta^2 (1- \hat v^2)} dt + \int_{\hat T-\varepsilon}^{\hat T} \sqrt{ \eta^2 (1- \hat v^2)} dt \leq \\ \leq & \int_0^{\hat T} \sqrt{ \hat v ^2 + \eta^2 (1- \hat v^2)} dt + \int_0^a \sqrt{ 1+2\hat v + \hat v^2 + \eta^2 (1- \hat v^2)} - \sqrt{ \hat v ^2 + \eta^2 (1- \hat v^2)} dt . \end{align*} Since $\sqrt{ 1+2\hat v + \hat v^2 + \eta^2 (1- \hat v^2)} - \sqrt{ \hat v ^2 + \eta^2 (1- \hat v^2)} \leq \sqrt{1+2\hat v} \leq \sqrt 3$, the second integral is bounded by $\sqrt 3 a$ and therefore \begin{align*} & \int_0^{\hat T} \sqrt{v^2+w_1^2+w_2^2} dt \leq \int_0^{\hat T} \sqrt{1-(1-\eta^2)(1-\hat v^2)} dt + \sqrt 3 a \leq \\ \leq & \int_0^{\hat T} 1-\frac{1-\eta^2}2(1-\hat v^2) dt + \sqrt 3 a \leq \hat T - \frac{1-\eta^2}2\int_0^{\hat T} 1 - \hat v dt + \sqrt 3 a = \\ = & \hat T - \frac{1-\eta^2}2 (\hat T- 1) + \sqrt 3 a . \end{align*} Since \eqref{Z022} implies $ 1- \eta^2 > \frac{1-k\varepsilon}{1+(1-k\varepsilon)a} a $, the estimate above yields \begin{align*} & \int_0^{\hat T} \sqrt{v^2+w_1^2+w_2^2} dt < \hat T - \left( \frac{\hat T -1}2 \frac{1-k\varepsilon}{1+(1-k\varepsilon)a} - \sqrt 3 \right) a < \hat T , \end{align*} provided $\varepsilon >0$ is sufficiently small and $\hat T > 1 +2 \sqrt 3$. Due to Lemma \ref{L attainable set bounded}, this last condition holds for every sufficiently large $C>0$. For such $C$ no extremal satisfying $\hat v >0$ in $[0,\hat T[ $ can be optimal.
\section{Appendix: proofs of technical results} \label{S appendix} \subsection{Proof of Lemma~\ref{L orientation}} \label{SP L orientation} \begin{proof} Suppose that \eqref{Eq orientation} holds and pick a sequence $\{ \beta_i \in \mathcal T \}_{i \in \mathbb N}$ such that $$\lim\limits_{i \rightarrow \infty} \left\| g_1-g_2 \circ \beta _i \right\|_{L_\infty[0,+\infty [} = 0.$$ For each $i \in \mathbb N$, let $\alpha_{1,i}$ denote the inverse function of $t \mapsto t + \beta_i(t)$, and let $\alpha_{2,i}= \beta_i \circ \alpha_{1,i} $. Since \[ \dot \alpha_{1,i} = \frac{1}{1+\dot \beta_i \circ \alpha_{1,i} } , \qquad \dot \alpha_{2,i} = \frac{\dot \beta_i \circ \alpha_{1,i}}{1+\dot \beta_i \circ \alpha_{1,i} }, \] the sequence $(\alpha_{1,i}, \alpha_{2,i} )$ is uniformly bounded and equicontinuous in compact intervals. Due to the Ascoli-Arzel\`a theorem, it admits a subsequence converging uniformly in compact intervals towards some absolutely continuous nondecreasing functions $(\alpha_1, \alpha_2)$. Due to continuity of $g_1,g_2$, $(\alpha_1, \alpha_2)$ satisfy {\bf (a)}. Since $\alpha_{1,i}+\alpha_{2,i}=Id$, it follows that $\alpha_1+\alpha_2=Id$ and therefore {\bf (b)} holds. Suppose that $ \alpha_1(\infty ) = T < +\infty $. Due to continuity of $g_1$, $g_1(T^-)= g_1(T)$. For any $t>T$, and any $i \in \mathbb N$: \begin{align*} & \left| g_1(T)- g_1(t) \right| = \left| g_1(T)- g_1\circ \alpha_{1,i}\left( \alpha_{1,i}^{-1}(t)\right) \right| \leq \\ \leq & \left| g_1(T)- g_2\circ \alpha_{2,i}\left( \alpha_{1,i}^{-1}(t)\right) \right| + \left| g_2\circ \alpha_{2,i}\left( \alpha_{1,i}^{-1}(t)\right)- g_1\circ \alpha_{1,i}\left( \alpha_{1,i}^{-1}(t)\right) \right| \leq \\ \leq & \left| g_1(T)- g_2\circ \alpha_{2,i}\left( \alpha_{1,i}^{-1}(t)\right) \right| + \left\| g_2\circ \alpha_{2,i} - g_1\circ \alpha_{1,i} \right\|_{L_\infty[0,+\infty[} . 
\end{align*} By assumption, $\lim\limits_{i \rightarrow \infty} \alpha_{1,i}^{-1}(t) = + \infty$ and therefore $\lim\limits_{i \rightarrow \infty} \alpha_{2,i} \left( \alpha_{1,i}^{-1}(t)\right) = + \infty$. Since the condition {\bf (a)} implies that $\lim\limits_{s \rightarrow + \infty} g_2(s) = g_1(T)$, {\bf (c)} holds. Now, suppose there are $\alpha_1, \alpha_2$ satisfying {\bf (a)}, {\bf (b)}, and {\bf (c)}. First, consider the case where $\alpha_1([0,+\infty[ ) = \alpha_2([0,+\infty[ ) =[0,+\infty[ $. Then, there is a sequence $\{T_j \}_{j \in \mathbb N}$ such that \[ \lim T_j = +\infty, \quad \text{and} \quad \alpha_i(T_j) < \alpha_i(T_{j+1}) \ \ \forall j \in \mathbb N, \ i =1,2 . \] For any sequence $\{ \varepsilon_j \in]0,1[ \}_{j \in \mathbb N}$, the functions \begin{align*} \alpha_i^\varepsilon (t) = \sum_{j=1}^\infty \Bigg( & \alpha_i(T_{j-1}) + (1-\varepsilon_j) (\alpha_i(t) - \alpha_i(T_{j-1}) ) + \\ & + \varepsilon_j \frac{\alpha_i(T_j) - \alpha_i(T_{j-1}) }{T_j-T_{j-1}}(t-T_{j-1}) \Bigg) \chi_{[T_{j-1},T_j[}(t) \end{align*} belong to $\mathcal T$ and therefore $\alpha_2^\varepsilon \circ \left( \alpha_1^\varepsilon \right)^{-1} \in \mathcal T$. Also, \[ \left| \alpha_i^\varepsilon (t) - \alpha_i (t) \right| \leq \sum_{j=1}^\infty \varepsilon_j \left| \alpha_i(T_j) - \alpha_i(T_{j-1}) \right| \chi_{[T_{j-1},T_j[}(t) \qquad \forall t \geq 0 . \] Since $g_1, g_2$ are uniformly continuous in compact intervals, for every $\delta >0$ there is some sequence $\{ \varepsilon_j \in]0,1[ \}_{j \in \mathbb N}$ such that $ \left\| g_1 \circ \alpha_1^\varepsilon - g_2 \circ \alpha_2^\varepsilon \right\|_{L_\infty[0,+\infty[} < \delta $. Since $ \left\| g_1 \circ \alpha_1^\varepsilon - g_2 \circ \alpha_2^\varepsilon \right\|_{L_\infty[0,+\infty[} = \left\| g_1 - g_2 \circ \alpha_2^\varepsilon \circ \left(\alpha_1^\varepsilon \right)^{-1} \right\|_{L_\infty[0,+\infty[} $, we see that \eqref{Eq orientation} holds. 
In the case where $\alpha_1([0,+\infty[ ) =[0,+\infty[ $ and $ \alpha_2(\infty ) = T < +\infty$, there is a sequence $\{T_j \}_{j \in \mathbb N}$ such that \[ \lim T_j = +\infty, \quad \text{and} \quad \alpha_1(T_j) < \alpha_1(T_{j+1}) \ \ \forall j \in \mathbb N. \] Then we can apply a similar argument to the functions \begin{align*} \alpha_1^\varepsilon (t) = \sum_{j=1}^\infty \Bigg( & \alpha_1(T_{j-1}) + (1-\varepsilon_j) (\alpha_1(t) - \alpha_1(T_{j-1}) ) + \\ & + \varepsilon_j \frac{\alpha_1(T_j) - \alpha_1(T_{j-1}) }{T_j-T_{j-1}}(t-T_{j-1}) \Bigg) \chi_{[T_{j-1},T_j[}(t), \\ \alpha_2^\varepsilon (t) = \alpha_2(t) &+ \varepsilon_1 t , \end{align*} and this completes the proof. \end{proof} \subsection{Proof of Lemma~\ref{L AC reparameterization}} \label{SP L AC reparameterization} \begin{proof} To prove {\bf (a)}: Pick $t \in \left] \alpha(0), \alpha(\infty) \right[$, and let $\hat \theta = \alpha^\#(t)$. By continuity of $\alpha$, there is some $s \in ]0,+\infty[$ such that $t=\alpha(s)$, and $\hat \theta= \max \left\{ \theta: \alpha(\theta) = \alpha (s) \right\}$, that is, $\alpha (\hat \theta) = \alpha (s) =t$. The equality $\dot \alpha \circ \alpha^\#(t) =0$ reduces to $\dot \alpha (\hat \theta) =0$. Therefore, $\dot \alpha \circ \alpha^\#(t) =0$ implies $t \in \alpha \left(\left\{ \theta: \dot \alpha =0 \right\} \right)$. Since this set has zero Lebesgue measure, we proved {\bf (a)}. To prove {\bf (b)} and {\bf (c)}: Let $A= \left\{ t: \dot \alpha (t) = 0, \ \dot g (t) \neq 0 \right\}$, and let $\mu$ denote the Lebesgue measure. For each $\varepsilon >0$ there is a sequence of intervals $\left\{ ]a_i,b_i[ \right\}_{i \in \mathbb N}$ such that \[ A \subset \bigcup_{i=1}^\infty]a_i,b_i[, \qquad \sum_{i=1}^\infty (b_i-a_i) < \mu(A)+\varepsilon , \qquad \sum_{i=1}^\infty \left(\alpha(b_i)-\alpha(a_i) \right) < \varepsilon . 
\] Fix $\varepsilon$ and a sequence as above and let \[ t_i= \alpha (b_i), \quad s_i=\alpha(a_i) - \frac{\varepsilon}{2^i}, \qquad i \in \mathbb N . \] Notice that $\sum\limits_{i=1}^\infty (t_i-s_i) < 2 \varepsilon $ and $\alpha^\#(t_i) \geq b_i$, $\alpha^\#(s_i)< a_i$ for every $i \in \mathbb N$. Therefore, \begin{align*} \sum_{i=1}^\infty \int_{\alpha^\#(s_i)}^{\alpha^\#(t_i)} | \dot g(s)| ds \geq \sum_{i=1}^\infty \int_{a_i}^{b_i} | \dot g(s)| ds \geq \int_A | \dot g(s)| ds . \end{align*} Thus, $g \circ \alpha^\#$ cannot be absolutely continuous when $\mu (A)>0$. Now, suppose that $\mu (A) = 0$. In order to prove that $g \circ \alpha^\#$ is absolutely continuous and satisfies \eqref{Eq derivative of reparameterized curve}, we may assume that $g$ is scalar. Taking the decomposition $g=g^+-g^-$, where $g^+(t) = g(0) + \int_0^t \max \left(0, \dot g(s)\right) ds$ and $g^-(t) = \int_0^t \max \left(0, - \dot g(s)\right) ds$, we may further assume that $g:[0,+\infty[ \mapsto \mathbb R $ satisfies $\dot g (s) \geq 0 $ for a.e. $s \geq 0$. Fix $t_1< t_2$ with $t_1 \geq 0$, $t_2 < \alpha(\infty) $, and fix $T \in ] \alpha^{\#}(t_2), + \infty[$. For each $i \in \mathbb N$, let \[ \alpha_i(s) = \sup\left\{ \alpha (\tilde s) + \frac{s-\tilde s}{i} : \tilde s \in [0,s] \right\} \qquad \forall s \in [0,T] , \] and let $B_i= \left\{ s \in [0,T]: \ \alpha(s) < \alpha_i(s) \right\} $. Since $\dot g (s) = 0$ for a.e.
$s \in \left[ \alpha^{\#}(t^-) , \alpha^{\#}(t) \right]$, and $\lim\limits_{i \rightarrow \infty} \alpha_i^{-1}(t) = \alpha^{\#}(t^-)$, we have \begin{align*} & g \circ \alpha^{\#}(t_2) - g \circ \alpha^{\#}(t_1)= \int_{ \alpha^{\#}(t_1^-)}^{ \alpha^{\#}(t_2^-)} \dot g ds = \lim_{i \rightarrow \infty} \int_{ \alpha_i^{-1}(t_1)}^{ \alpha_i^{-1}(t_2)} \dot g ds = \\ = & \lim_{i \rightarrow \infty} \left( \int_{ [\alpha_i^{-1}(t_1), \alpha_i^{-1}(t_2)] \setminus B_i} \dot g ds + \int_{ [\alpha_i^{-1}(t_1), \alpha_i^{-1}(t_2)] \cap B_i} \dot g ds \right) . \end{align*} Since $ \bigcap\limits _{i \in \mathbb N} B_i \setminus \{ \dot \alpha =0 \} $ is a set of zero Lebesgue measure, it follows that \begin{align*} & g \circ \alpha^{\#}(t_2) - g \circ \alpha^{\#}(t_1)= \lim_{i \rightarrow \infty} \int_{ [\alpha_i^{-1}(t_1), \alpha_i^{-1}(t_2)] \setminus B_i} \dot g ds . \end{align*} Notice that $\alpha_i$ is absolutely continuous and $ \dot \alpha_i = \dot \alpha \chi_{B_i^c} + \frac 1 i \chi_{B_i} \geq \frac 1 i $. Therefore, \begin{align*} g \circ \alpha^{\#}(t_2) - g \circ \alpha^{\#}(t_1)= & \lim_{i \rightarrow \infty} \int_{ [t_1, t_2] \setminus \alpha_i(B_i)} \frac{\dot g}{\dot \alpha_i} \circ \alpha_i^{-1} ds = \\ =& \lim_{i \rightarrow \infty} \int_{ [t_1, t_2] \setminus \alpha_i(B_i)} \frac{\dot g}{\dot \alpha} \circ \alpha^{\#} ds . \end{align*} Since $B_{i+1} \subset B_i$ and the set $\alpha_i(B_i)$ has Lebesgue measure no greater than $\frac T i$, the Lebesgue monotone convergence theorem guarantees that \[ g \circ \alpha^{\#}(t_2) - g \circ \alpha^{\#}(t_1)= \int_{t_1}^{ t_2} \frac{\dot g}{\dot \alpha} \circ \alpha^{\#} ds . \] Thus, $g \circ \alpha^\#$ is absolutely continuous and satisfies \eqref{Eq derivative of reparameterized curve}. To prove {\bf (d)}: Notice that $\alpha (s) = \alpha(t) $ for every $s \in \left[ t, \alpha^\# \circ \alpha(t) \right]$.
Since the set $\{ t: \dot \alpha (t) =0, \ \dot g (t) \neq 0 \}$ has zero Lebesgue measure, we see that $g (s) = g(t) $ for every $s \in \left[ t, \alpha^\# \circ \alpha(t) \right]$. In particular, $g\circ \alpha^\# \circ \alpha(t) = g(t)$. Thus, the result follows from Lemma \ref{L orientation}. \end{proof} \subsection{Proof of Proposition \ref{P lift of functions of bounded variation}} \label{SP P lift of functions of bounded variation} \begin{proof} Without loss of generality, we can assume that $x$ has finite variation in the interval $[0,T]$ and is constant on $[T, + \infty[$. Consider a sequence of partitions of the interval $[0,T]$ \[ P_k = \left\{ 0 =t_{k,0} < t_{k,1} < \cdots < t_{k,k} =T \right\} \qquad k \in \mathbb N, \] such that $P_k \subset P_{k+1}$ for every $k \in \mathbb N$, and $\bigcup\limits_{k \in \mathbb N} P_k$ is dense in $[0,T]$. Let $x_k:[0,+\infty[ \mapsto \mathbb R^n$ be the piecewise linear function interpolating the points $x(t_{k,i})$, $i=0,1, \ldots, k$, and $x_k(t) =x(T) $ for every $t>T$. Then, $\left\{ (\theta_k,y_k)=\left( \ell_{(t,x_k)}^{-1}, x_k \circ \ell_{(t,x_k)}^{-1} \right) \right\}_{k \in \mathbb N} $ is a sequence in $\mathcal Y_n$. The length of the graph of $x_k$ on the interval $[0,T]$ is \begin{align*} \ell_{(t,x_k)}(T)= & \sum_{i=1}^k\sqrt{(t_{k,i}-t_{k,i-1})^2 + \left|x(t_{k,i})-x(t_{k,i-1})\right|^2} \leq T+{\rm V}_{[0,T]}(x) . \end{align*} Thus, the sequence $\left\{ \left( \theta_k,y_k \right) \right\} $ is uniformly bounded and equicontinuous on the interval $\left[0,T+{\rm V}_{[0,T]}(x) \right]$, and the Ascoli-Arzel\`a theorem guarantees that it has a subsequence converging uniformly towards some $(\theta,y) \in \mathcal Y_n$. Without loss of generality, we assume that this subsequence is $\{ (\theta_k,y_k)\}$. Notice that $\left\{ \theta_k^{-1}(t) \right\}$ may fail to converge towards $\theta^\#(t)$ if $t$ is a discontinuity point of $\theta^\#$.
Instead, we take the sequence $\tilde \theta_k = \left( \theta_k- \left\| \theta_k - \theta \right\|_{L_\infty[0,T+{\rm V}_{[0,T]}(x) ]} \right)^+ $. Notice that $\left\{ (\tilde \theta_k,y_k) \right\}$ converges uniformly towards $(\theta,y)$ and $\lim \limits_{k \rightarrow \infty} \tilde \theta _k^\#(t) = \theta^\#(t)$ for every $t \in [0,T ]$. Therefore, \[ \lim_{k \rightarrow \infty} y_k \circ \tilde{\theta}_k^\#(t) = y \circ \theta^\# (t) \qquad \forall t \in [0,T] . \] Now, suppose that $x$ is continuous at the point $t \in [0,T]$. By continuity, for every $\varepsilon>0$ there is some $\delta>0$ such that $|x(\tau )- x(t)|< \varepsilon $ for every $\tau \in \left] t - \delta, t + \delta \right[ $. This implies $|x_k(\tau) -x(t) | < \varepsilon $ for every sufficiently large $k$ and every $\tau \in \left] t- \frac \delta 2, t+ \frac \delta 2 \right[$, because then $x_k(\tau)$ is a convex combination of points in $B_\varepsilon(x(t))$. Thus, \[ y \circ \theta^\# (t) = \lim_{k \rightarrow \infty} y_k \circ \tilde \theta_k^\# (t) = \lim_{k \rightarrow \infty} x_k\left(t+ \| \theta_k-\theta \|_{L_\infty[0,T+{\rm V}_{[0,T]}(x)]} \right) = x(t) . 
\] \end{proof} \subsection{Proof of Lemma~\ref{L convexity of reduced lagrangean}} \label{SP L convexity of reduced lagrangean} \begin{proof} Notice that \begin{align*} & L \left( y, \frac{\lambda w +(1-\lambda ) \hat w}{\lambda v +(1-\lambda ) \hat v} \right) \left( \lambda v +(1-\lambda ) \hat v \right) = \\ = & L \left( y, \frac{\lambda v}{\lambda v +(1-\lambda ) \hat v} \frac w v + \frac{(1-\lambda) \hat v}{\lambda v +(1-\lambda ) \hat v} \frac{\hat w}{\hat v} \right) \left( \lambda v +(1-\lambda ) \hat v \right) \leq \\ \leq & \left( \frac{\lambda v}{\lambda v +(1-\lambda ) \hat v} L \left( y, \frac w v \right) + \frac{(1-\lambda) \hat v}{\lambda v +(1-\lambda ) \hat v} L \left( y, \frac{\hat w}{\hat v} \right) \right) \left( \lambda v +(1-\lambda ) \hat v \right) = \\ = & \lambda L \left( y, \frac w v \right) v + (1-\lambda) L \left( y, \frac{\hat w}{\hat v} \right) \hat v . \end{align*} Therefore, $(v,w) \mapsto L \left( y, \frac w v \right) v$ is convex. The inequality \begin{align*} & \liminf_{\scriptsize \begin{array}{c} (y,v,w) \rightarrow (\hat y, \hat v, \hat w) \\ v>0 \end{array}} L\left( y, \frac w v \right) v \leq \\ \leq & \liminf_{\scriptsize \begin{array}{c} (v,w) \rightarrow (\hat v, \hat w) \\ v>0 \end{array}} L\left( \hat y, \frac w v \right) v \leq \liminf_{v \rightarrow \hat v, \ v>0} L\left( \hat y, \frac {\hat w} v \right) v \end{align*} holds trivially. Therefore, we only need to prove that \begin{align} & \limsup_{v \rightarrow \hat v, \ v>0} L\left( \hat y, \frac {\hat w} v \right) v \leq \liminf_{\scriptsize \begin{array}{c} (y,v,w) \rightarrow (\hat y, \hat v, \hat w) \\ v>0 \end{array}} L\left( y, \frac w v \right) v . \label{Z001} \end{align} Due to continuity of $L$, this inequality holds for every $\hat v >0$. Suppose $\hat v =0$ and fix $b< \limsup\limits_{v \rightarrow 0^+} L \left( \hat y, \frac{\hat w}{v} \right) v $, and $\varepsilon >0$.
Then, we can pick $a \in ]0, \varepsilon]$ such that $L \left( \hat y , \frac{ \hat w}{ a} \right) > \frac{b}{a}$. By continuity of $L$, there is some $\delta >0$ such that \begin{equation} \label{Z025} L\left( y, \frac{w}{a} \right) > \frac{b}{a}, \quad \text{and} \quad \left| L(y,w)- L(\hat y, \hat w) \right| < \varepsilon \end{equation} for every $(y,w)$ such that $|y- \hat y| < \delta$ and $|w-\hat w| < \delta$. Due to convexity of $w \mapsto L(y,w)$, we have \begin{align*} L\left( y, \frac w v \right) \geq & L(y,w) + \frac{L\left( y, \frac{w}{a}\right) - L(y,w)}{\frac{1}{a}-1 } \left( \frac 1 v -1 \right) = \\ = & L(y,w) + \frac{a}{1-a} \left( L\left( y, \frac{w}{a}\right) - L(y,w)\right) \frac{1-v}{v} \qquad \forall v \in ]0, a] . \end{align*} Using the estimates \eqref{Z025}, this yields \begin{align*} L \left( y, \frac w v \right) v \geq & \left( L \left( \hat y, \hat w \right) - \varepsilon \right) v + \frac{a}{1-a} \left( \frac b a -L\left(\hat y, \hat w \right) - \varepsilon \right) (1-v) = \\ = & \frac{1-v}{1-a}b + L\left( \hat y , \hat w \right) \left( v-a \frac{1-v}{1-a} \right) - \varepsilon \left( v+a \frac{1-v}{1-a} \right) , \end{align*} that is, \[ \liminf_{\scriptsize \begin{array}{c} (y,v,w) \rightarrow (\hat y, \hat v, \hat w) \\ v>0 \end{array}} L\left( y, \frac w v \right) v \geq \frac{1}{1-a}b - \left( L\left( \hat y , \hat w \right) +\varepsilon \right) \frac{a}{1-a} . \] Making $\varepsilon$ tend to zero and $b$ tend to $\limsup\limits_{v \rightarrow 0^+} L \left( \hat y, \frac{\hat w}{v} \right) v $, this implies \eqref{Z001}. \end{proof} \subsection{Proof of Proposition \ref{P Filippov}} \label{SP P Filippov} \begin{proof} Fix $(C,\theta,y)$, a trajectory of the differential inclusion \eqref{Eq nonparametric dynamics}, and let $V_t= \left( \dot \theta(t), \dot y(t) \right)$ for almost every $t \in [0,T]$.
For each compact set $K \subset \mathbb R^{1+k}$, consider the function $F_K:[0,T] \mapsto \overline{\mathbb R}$, defined almost everywhere by \[ F_K(t) = \inf \left\{ \lambda (y(t),v,w) : (v,w) \in B^+ \cap K , \left( v,f(y(t))v+G(y(t))w \right) =V_t \right\}, \] it being understood that $\inf \emptyset = + \infty$. First, we show that the functions $F_K$ are measurable. For any set $A \subset \mathbb R^k$ and any $\varepsilon >0$, let $B_\varepsilon(A) = \bigcup\limits_{x \in A} B_\varepsilon(x)$. Then, lower semicontinuity of $\lambda$ implies that for any $\alpha \in \mathbb R$, \begin{align*} & F_K^{-1}\left( ]-\infty, \alpha [ \right) = \\ = & \left\{ t: \exists (v,w) \in B^+ \cap K , \left( v,f(y(t))v+G(y(t))w \right) =V_t, \lambda(y(t),v,w) < \alpha \right\} = \\ = & \bigcap_{i \in \mathbb N} \begin{array}[t]{l} \Big\{ t: \exists (v,w) \in B_{\frac 1 i} \left( B^+ \cap K \right), \left| \left( v,f(y(t))v+G(y(t))w \right) -V_t \right|< \frac 1 i, \\ \hspace{2cm} \lambda(y(t),v,w) < \alpha \Big\} . \end{array} \end{align*} Due to Lemma \ref{L convexity of reduced lagrangean}, this is \begin{align*} & F_K^{-1}\left( ]-\infty, \alpha [ \right) = \\ = & \bigcap_{i \in \mathbb N} \bigcup_{v \in \mathbb Q \cap ]0,1]} \begin{array}[t]{l} \Big\{ t: \exists w \in \mathbb R^k, (v,w) \in B_{\frac 1 i} \left( B^+ \cap K \right) , \\ \hspace{0.3cm} \left| \left( v,f(y(t))v+G(y(t))w \right) -V_t \right|< \frac 1 i, L\left( y(t),\frac w v \right)v < \alpha \Big\} . \end{array} \end{align*} Due to continuity of $L$, this further reduces to \begin{align*} & F_K^{-1}\left( ]-\infty, \alpha [ \right) = \\ = & \bigcap_{i \in \mathbb N} \bigcup_{v \in \mathbb Q \cap ]0,1]} \bigcup_{{\scriptsize \begin{array}{c}w \in \mathbb Q^k: \\ (v,w) \in B_{\frac 1 i} \left( B^+ \cap K \right)\end{array}}} \begin{array}[t]{l} \Big\{ t: \left| \left( v,f(y(t))v+G(y(t))w \right) -V_t \right|< \frac 1 i, \\ \hspace{1cm} L\left( y(t),\frac w v \right)v < \alpha \Big\} .
\end{array} \end{align*} Since $y$, $V$ are measurable and $f,G,L$ are continuous, it follows that $F_K$ is measurable. Now, we construct a sequence $\{ \mathcal A_i \}_{ i \in \mathbb N }$ with the following properties: \begin{itemize} \item[{\rm (a)}] Each $\mathcal A_i = \{ A_{i,1}, A_{i,2}, \ldots , A_{i,h_i} \}$ is a finite ordered collection of measurable subsets of $B^+ $; \item[{\rm (b)}] All the members of each collection $\mathcal A_i$ are pairwise disjoint, $B^+ = \bigcup \limits _{A \in \mathcal A_i} A$, and each element of $\mathcal A_i$ is contained in a ball of radius $\frac 1 i$; \item[{\rm (c)}] For any $i<j$, every element of $\mathcal A_j$ is a subset of some element of $\mathcal A_i$. All elements of $\mathcal A_j$ that are contained in $A_{i,h}$ precede (in the order of $\mathcal A_j$) any element of $\mathcal A_j$ contained in $A_{i,h+1}$. \end{itemize} To see that such a sequence exists, let $\mathcal A_0=\left\{ B^+ \right\}$. For each $i \in \mathbb N$, let $\mathcal B_i = \{ B_{i,1},B_{i,2}, \ldots , B_{i,j_i} \}$ be a finite cover of $B^+ $ by balls of radius $\frac 1 i$, and let \[ C_{i,h} = B_{i,h} \setminus \bigcup_{l<h}B_{i,l}, \qquad h=1,2, \ldots , j_i . \] For each $i \in \mathbb N$, let $\mathcal A_i$ be the collection of intersections \[ A \cap C_{i,h}, \qquad A \in \mathcal A_{i-1}, \quad h = 1,2, \ldots , j_i , \] ordered in any way such that any $C_{i,h}\cap A_{i-1,l}$ precedes every $C_{i,s}\cap A_{i-1,l+1}$ (discard empty intersections). So, $\{ \mathcal A_i \}_{i \in \mathbb N }$ satisfies (a)--(c). Fix a sequence $\{ \mathcal A_i \}_{i \in \mathbb N }$ as above and for each $i \in \mathbb N$, $j \in \{ 1, 2, \ldots , h_i\}$, fix $(v,w)_{i,j}=(v_{i,j}, w_{i,j}) \in A_{i,j}$.
For each $i \in \mathbb N$, consider a function $j(i, \cdot ):[0,T] \mapsto \mathbb N$ defined almost everywhere by \begin{equation} \label{Z002} j(i,t) = \min \left\{ h\in \{ 1,2, \ldots , h_i\}: F_{\overline{A_{i,h}}}(t)=F_{B^+ }(t) \right\} , \end{equation} and consider the sequence $\{(v_i,w_i): [0,T] \mapsto B^+ \}_{i \in \mathbb N}$ defined as \[ (v_i,w_i)(t)=(v,w)_{i,j(i,t)}, \qquad i \in \mathbb N , \ t \in [0,T]. \] Notice that $(v_i,w_i)([0,T])\subset \left\{ (v,w)_{i,j}, j \in \{ 1, 2, \ldots , h_i\} \right\}$ is a finite set and \begin{align*} & \{ t: (v_i,w_i)(t)=(v,w)_{i,j} \} = \left\{ t: F_{\overline{A_{i,j}}}(t)=F_{B^+ }(t) \right\} \setminus \bigcup_{h<j} \left\{ t: F_{\overline{A_{i,h}}}(t)=F_{B^+ }(t) \right\} . \end{align*} Therefore, measurability of $F_K$ guarantees measurability of $(v_i,w_i)$. For almost every $t \in [0,T]$, we have: \begin{itemize} \item[] $\left(v_i(t),f(y(t))v_i(t) + G(y(t))w_i(t) \right)= V_t \qquad \forall i \in \mathbb N$, and \item[] $ \{ (v_i,w_i)(t) \}_{i \in \mathbb N} $ is a Cauchy sequence. \end{itemize} Thus, $(v,w)(t)= \lim\limits_{i \rightarrow \infty} (v_i,w_i)(t)$ is a measurable function satisfying \[ \left(v(t),f(y(t))v(t) + G(y(t))w(t) \right)= V_t \qquad \text{a.e. }t \in [0,T]. \] Lower semicontinuity of $\lambda$ and \eqref{Z002} imply that \begin{align*} & \lambda (y(t),v(t),w(t)) = \\ = & \inf\left\{ \lambda (y(t),\tilde v, \tilde w): (\tilde v, \tilde w) \in B^+ , \left(\tilde v, f(y(t))\tilde v +G(y(t)) \tilde w \right)=V_t \right\} \leq \\ \leq & \dot C(t) \end{align*} for almost every $t \in [0,T]$. \end{proof} \section*{ACKNOWLEDGMENTS} The research of the first coauthor has been supported by FCT--Funda\c c\~ao para a Ci\^encia e Tecnologia (Portugal) via strategic project PEst-OE/EGE/UI0491/2013; he is grateful to INDAM (Italy) for supporting his visit to the University of Florence in January 2014.
The research of the second coauthor has been supported by MIUR (Italy) via the national project (PRIN) 200894484E; he is also grateful to CEMAPRE (Portugal) for supporting his research stay at ISEG, University of Lisbon, in May 2013.
\section{Introduction}\label{intro} The aim of this article is to study the complexity of approximating rational points of a projective variety defined over a function field of characteristic zero. Our motivation is work of McKinnon-Roth \cite{McKinnon-Roth}, and our main results, which we state in \S \ref{main:results}, show how the subspace theorem can be used to prove \emph{Roth type theorems}, by analogy with those formulated in the number field setting, see \cite[p.~515]{McKinnon-Roth}. Indeed, we obtain lower bounds for approximation constants of rational points. More precisely, we show how extensions of the subspace theorem can be used to obtain lower bounds which are independent of fields of definition and which can be expressed in terms of local measures of positivity; we also give sufficient conditions for approximation constants to be computed on a proper subvariety. As it turns out, these kinds of theorems are related to rational curves lying in projective varieties, see \S \ref{motivation} and Corollary \ref{corollary1.3}. As we explain in \S \ref{motivation}, an important aspect of the Roth type theorems obtained in \cite{McKinnon-Roth}, in the number field setting, is a theorem of Faltings-W\"{u}stholz, \cite[Theorem 9.1]{Faltings:Wustholz}. Understanding the role that this theorem plays in the work \cite{McKinnon-Roth} was one of the original sources of motivation for the present article. On the other hand, one key feature of our approach here is that we use Schmidt's subspace theorem, for function fields, to derive a function field analogue of \cite[Theorem 9.1]{Faltings:Wustholz}. We then use this result, Corollary \ref{corollary5.5}, to prove Roth type theorems in a manner similar to what is done in \cite{McKinnon-Roth}.
\subsection{Motivation}\label{motivation} The starting point for this article is \cite[Theorem 9.1]{Faltings:Wustholz}, an interesting theorem of Faltings-W\"{u}stholz, and its relation to work of McKinnon-Roth \cite{McKinnon-Roth}. To motivate and place what we do here in its proper context let us describe the results of \cite{McKinnon-Roth} in some detail. To this end, let $\mathbf{K}$ be a number field, $\overline{\mathbf{K}}$ an algebraic closure of $\mathbf{K}$, $X$ an irreducible projective variety defined over $\mathbf{K}$, and $x \in X(\overline{\mathbf{K}})$. The main focus of \cite{McKinnon-Roth} is the definition and study of an extended real number $\alpha_x(L)$ depending on a choice of ample line bundle $L$ on $X$ defined over $\mathbf{K}$. The intuitive idea is that the invariant $\alpha_x(L)$ provides a measure of how expensive it is to approximate $x$ by infinite sequences of distinct $\mathbf{K}$-rational points of $X$. A key insight of \cite{McKinnon-Roth} is that this arithmetic invariant is related not only to local measures of positivity for $L$ about $x$, namely the Seshadri constant $\epsilon_x(L)$, and $\beta_x(L)$ the relative asymptotic volume constant of $L$ with respect to $x$, but also to the question of existence of rational curves in $X$, passing through $x$ and defined over $\mathbf{K}$. More specifically, in \cite{McKinnon-Roth}, \cite[Theorem 9.1]{Faltings:Wustholz} was used to prove \cite[Theorem 6.2]{McKinnon-Roth} which asserts: if $g$ denotes the dimension of $X$, then either $$\alpha_x(L) \geq \beta_x(L) \geq \frac{g}{g+1} \epsilon_x(L)$$ or $$\alpha_x(L) = \alpha_x(L|_{W})$$ for some proper $\mathbf{K}$-subvariety $W$ of $X$. 
A consequence of this result is \cite[Theorem 6.3]{McKinnon-Roth}, which states that $\alpha_x(L) \geq \frac{1}{2}\epsilon_x(L)$, with equality if and only if both $\alpha_x(L)$ and $\epsilon_x(L)$ are computed on a $\mathbf{K}$-rational curve $C$ such that $C$ is unibranch at $x$, $\kappa(x) \not = \mathbf{K}$, $\kappa(x) \subseteq \mathbf{K}_v$, and $\epsilon_{x,C}(L|_{C}) = \epsilon_{x,X}(L)$. (Here $\kappa(x)$ denotes the residue field of $x$ and $\mathbf{K}_v$ the completion of $\mathbf{K}$ with respect to a place $v$ of $\mathbf{K}$.) In light of these results, D. McKinnon has conjectured: \noindent {\bf Conjecture} (Compare also with \cite[Conjecture 4.2]{McKinnon-Roth-Louiville}){\bf .} Let $X$ be a smooth projective variety defined over a number field $\mathbf{K}$, $\overline{\mathbf{K}}$ an algebraic closure of $\mathbf{K}$, $x \in X(\overline{\mathbf{K}})$, and $L$ an ample line bundle on $X$ defined over $\mathbf{K}$. If $\alpha_x(L) < \infty$, then there exists a $\mathbf{K}$-rational curve $C \subseteq X$ containing $x$ and also containing a \emph{sequence of best approximation to $x$}. Our purpose here is to give content to these concepts in the setting of projective varieties defined over function fields. \subsection{Statement of results and outline of their proof}\label{main:results} Our main results rely on work of Julie Wang \cite{Wang:2004} and provide an analogue of \cite[Theorem 6.2]{McKinnon-Roth} for the case of projective varieties defined over function fields. To describe our results in some detail, let $\overline{\mathbf{k}}$ be an algebraically closed field of characteristic zero and $Y \subseteq \PP^r_{\overline{\mathbf{k}}}$ an irreducible projective variety and non-singular in codimension $1$. Let $\mathbf{K}$ denote the function field of $Y$, $\overline{\mathbf{K}}$ an algebraic closure of $\mathbf{K}$, let $X \subseteq \PP^n_{\mathbf{K}}$ be a geometrically irreducible subvariety, and let $L=\Osh_{\PP^n_{\mathbf{K}}}(1)|_{ X}$.
Given a prime (Weil) divisor $\mathfrak{p} \subseteq Y$ and a point $x \in X(\overline{\mathbf{K}})$ we define an extended non-negative real number $\alpha_x(L) = \alpha_{x,X}(L;\mathfrak{p})=\alpha_x(L;\mathfrak{p}) \in [0,\infty]$, depending on $L$, which, roughly speaking, gives a measure of the cost of approximating $x$ by an infinite sequence of distinct $\mathbf{K}$-rational points $\{y_i\} \subseteq X(\mathbf{K})$ with unbounded height and converging to $x$. Our goal is twofold: on the one hand, we would like to relate $\alpha_x(L;\mathfrak{p})$ to local measures of positivity of $L$ about $x$ and, on the other hand, we would like to give sufficient conditions for $\alpha_x(L;\mathfrak{p})$ to be computed on a proper $\mathbf{K}$-subvariety of $X$. This is achieved by analogy with the program of \cite{McKinnon-Roth}. More precisely, we relate $\alpha_x(L;\mathfrak{p})$ to two invariants of $x$ with respect to $L$. To do so, let $\mathbf{F}$ be the field of definition of $x$ and $X_\mathbf{F}$ the base change of $X$ with respect to the field extension $\mathbf{K} \rightarrow \mathbf{F}$. Next, let $\pi : \widetilde{X} = \mathrm{Bl}_x(X) \rightarrow X_\mathbf{F}$ denote the blow-up of $X_\mathbf{F}$ at the closed point corresponding to $x \in X(\overline{\mathbf{K}})$ and let $E$ denote the exceptional divisor of $\pi$. If $\gamma \in \RR_{\geq 0}$, then let $L_\gamma$ denote the $\RR$-line bundle $\pi^*L_\mathbf{F}-\gamma E$; here $L_\mathbf{F}$ denotes the pullback of $L$ to $X_\mathbf{F}$ and, in what follows, we let $L_{\gamma,\overline{\mathbf{K}}}$ denote the pullback of $L_\gamma$ to $\widetilde{X}_{\overline{\mathbf{K}}}$, the base change of $\widetilde{X}$ with respect to $\mathbf{K} \rightarrow \overline{\mathbf{K}}$.
The first invariant, the \emph{relative asymptotic volume constant of $L$ with respect to $x$}, is defined by McKinnon-Roth in \cite{McKinnon-Roth} to be: $$\beta_x(L) = \int_0^{\gamma_{\mathrm{eff}}} \frac{\mathrm{Vol}(L_{\gamma})} {\mathrm{Vol}(L)} d\gamma; $$ here $\mathrm{Vol}(L_\gamma)$ and $\mathrm{Vol}(L)$ denote the volume of the line bundles $L_{\gamma}$ and $L$ on $\widetilde{X}$ and $X$ respectively and the real number $\gamma_{\mathrm{eff}}$ is defined by: $$\gamma_{{\mathrm{eff}}} = \gamma_{{\mathrm{eff}},x}(L) = \sup \{\gamma \in \RR_{\geq 0} : L_{\gamma,\overline{\mathbf{K}}} \text{ is numerically equivalent to an effective divisor} \}. $$ The second invariant is the \emph{Seshadri constant of $x$ with respect to $L$}: $$\epsilon_x(L) = \sup \{ \gamma \in \RR_{\geq 0} : L_{\gamma, \overline{\mathbf{K}}} \text{ is nef}\}\text{.} $$ Having described briefly our main concepts, our main result, which we prove in \S \ref{proof:main:results}, reads: \begin{theorem}\label{theorem1.1} Let $\mathbf{K}$ be the function field of an irreducible projective variety $Y \subseteq \PP^r_{\overline{\mathbf{k}}}$, defined over an algebraically closed field $\overline{\mathbf{k}}$ of characteristic zero, assume that $Y$ is non-singular in codimension $1$ and fix a prime divisor $\mathfrak{p} \subseteq Y$. Fix an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$ and suppose that $X \subseteq \PP^n_{\mathbf{K}}$ is a geometrically irreducible subvariety, that $x \in X(\overline{\mathbf{K}})$, and that $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_{X}$. In this setting, either $$\alpha_x(L;\mathfrak{p}) \geq \beta_x(L) \geq \frac{\dim X}{\dim X + 1}\epsilon_x(L) $$ or $$\alpha_{x,X}(L;\mathfrak{p}) = \alpha_{x,W}(L|_{W};\mathfrak{p}) $$ for some proper subvariety $W \subsetneq X$. 
\end{theorem} In particular, note that Theorem \ref{theorem1.1} implies that $\alpha_x(L;\mathfrak{p})$ is computed on a proper $\mathbf{K}$-subvariety of $X$ provided that $\alpha_x(L;\mathfrak{p}) < \beta_x(L)$. By analogy with \cite{McKinnon-Roth}, Theorem \ref{theorem1.1} has the following consequence: \begin{corollary}\label{corollary1.2} In the setting of Theorem \ref{theorem1.1}, we have that $\alpha_x(L;\mathfrak{p}) \geq \frac{1}{2} \epsilon_x(L)$. If equality holds then $\alpha_{x,X}(L;\mathfrak{p}) = \alpha_{x,C}(L|_{ C};\mathfrak{p})$ for some curve $C \subseteq X$ defined over $\mathbf{K}$. \end{corollary} In the case that $\mathbf{K}$ has transcendence degree $1$, Corollary \ref{corollary1.2} takes the more refined form: \begin{corollary}\label{corollary1.3} Assume that $\mathbf{K}$ is the function field of a smooth projective curve over an algebraically closed field of characteristic zero. Let $X$ be a geometrically irreducible projective variety defined over $\mathbf{K}$ and $L$ a very ample line bundle on $X$ defined over $\mathbf{K}$. If $x$ is a $\overline{\mathbf{K}}$-rational point of $X$, then the inequality $\alpha_x(L) \geq \frac{1}{2}\epsilon_x(L)$ holds true. If equality holds, then $\alpha_{x,X}(L) = \alpha_{x,B}(L|_B)$ for some rational curve $B\subseteq X$ defined over $\mathbf{K}$. \end{corollary} Theorem \ref{theorem1.1} and Corollary \ref{corollary1.2} are proven in \S \ref{proof:main:results}, while we prove Corollary \ref{corollary1.3} in \S \ref{9.6}. Our techniques used to prove Theorem \ref{theorem1.1} and Corollary \ref{corollary1.2} are similar to those used to establish \cite[Theorem 6.3]{McKinnon-Roth}. Indeed, we first define approximation constants for projective varieties defined over a field $\mathbf{K}$ of characteristic zero together with a set $M_\mathbf{K}$ of absolute values which satisfy the product rule. 
The definition we give here extends that given in \cite{McKinnon-Roth} for the case that $\mathbf{K}$ is a number field. We then restrict our attention to the case that $\mathbf{K}$ is a function field. In this setting, the effective version of Schmidt's subspace theorem given in \cite{Wang:2004}, which is applicable to function fields of higher dimensional varieties, plays the role of the theorem of Faltings-W\"{u}stholz \cite[Theorem 9.1]{Faltings:Wustholz}. More precisely, in \S \ref{5} we first give an extension of the subspace theorem obtained in \cite{Wang:2004}. We then use this extension to obtain a function field analogue of the Faltings-W\"{u}stholz theorem. Finally, we apply this result, in a manner similar to what is done in \cite{McKinnon-Roth}, to obtain Theorem \ref{theorem1.1} and Corollary \ref{corollary1.2}. A key aspect of deducing Corollary \ref{corollary1.3} from Corollary \ref{corollary1.2} is to first establish Theorem \ref{theorem9.4}, which determines the nature of approximation constants for rational points of Abelian varieties over function fields of curves. This theorem and its proof are similar to the corresponding statement in the number field setting, see for instance \cite[Second theorem on p.~98]{Serre:Mordell-Weil-Lectures}. As some additional comments, again to place our results in their proper context, let us emphasize that in order for the results of this article to have content one encounters the question of existence of $\mathbf{K}$-rational points for varieties defined over function fields. To this end, we recall the main result of \cite{Graber:Harris:Starr}, which asserts that if $\mathbf{K}$ is the function field of a complex curve, then every rationally connected variety defined over $\mathbf{K}$ has a $\mathbf{K}$-rational point. \noindent {\bf Acknowledgements.} This paper has benefited from comments and suggestions from Steven Lu, Mike Roth and Julie Wang. I also thank Mike Roth for suggesting the problem to me.
Portions of this work were completed while I was a postdoctoral fellow at McGill University and also while I was a postdoctoral fellow at the University of New Brunswick, where I was financially supported by an AARMS postdoctoral fellowship. Finally, I thank anonymous referees for carefully reading this work and for their comments, suggestions and corrections. \section{Preliminaries: Absolute values, product formulas, and heights}\label{2} In this section, to fix notation and conventions which we will require in subsequent sections, we recall some concepts and results about absolute values, product formulas, and heights. Some standard references, on which much of our presentation is based, are \cite{Lang:Algebra}, \cite{Lang:Diophantine}, and \cite{Bombieri:Gubler}. Throughout this section $\mathbf{K}$ denotes a field of characteristic zero. In \S \ref{2.4}--\ref{2.7} we will place further restrictions on $\mathbf{K}$. Indeed, there $\mathbf{K}$ will also be a function field. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.1} {\bf Absolute values.} By an \emph{absolute value on $\mathbf{K}$} we mean a real-valued function $$|\cdot|_{v} : \mathbf{K} \rightarrow \RR$$ having the properties that: \begin{enumerate} \item[(a)]{$|x|_{v} \geq 0$ for all $x \in \mathbf{K}$ and $|x|_{v} = 0$ if and only if $x = 0$;} \item[(b)]{$|xy|_{v} = |x|_{v}|y|_{v}$, for all $x,y \in \mathbf{K}$; } \item[(c)]{$|x+y|_{v} \leq |x|_{v} + |y|_{v}$, for all $x,y \in \mathbf{K}$.} \end{enumerate} We say that an absolute value $|\cdot|_{v}$ is \emph{non-archimedean} if it has the property that: $$|x+y|_{v} \leq \max(|x|_{v},|y|_{v} ) \text{, for all $x,y\in\mathbf{K}$.} $$ If an absolute value is not non-archimedean, then we say that it is \emph{archimedean}. Every absolute value $|\cdot|_{v}$ defines a metric on $\mathbf{K}$; the distance of two elements $x,y\in\mathbf{K}$ with respect to this metric is defined to be $|x-y|_{v}$.
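To keep a standard example in mind (not needed in what follows): on the field $\mathbb{Q}$ the usual absolute value is archimedean, whereas for a prime number $p$ the $p$-adic absolute value $$|x|_{p} = p^{-\mathrm{ord}_p(x)} \text{, for $x \in \mathbb{Q}^\times$, and } |0|_{p}=0,$$ where $\mathrm{ord}_p(x)$ denotes the exponent of $p$ in the prime factorization of $x$, is non-archimedean. The absolute values we construct in \S \ref{2.4} below are of this latter, non-archimedean, kind.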
If $|\cdot|_{v}$ is an absolute value on $\mathbf{K}$, then we let $\mathbf{K}_{v}$ denote the completion of $\mathbf{K}$ with respect to $|\cdot |_{v}$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.2}{\bf The product formula.} Let $M_{\mathbf{K}}$ denote a collection of absolute values on $\mathbf{K}$. We assume that our set $M_{\mathbf{K}}$ has the property that if $x \in \mathbf{K}^\times$ then $|x|_v = 1$ for almost all $|\cdot|_v \in M_{\mathbf{K}}$. We do not require $M_{\mathbf{K}}$ to consist of inequivalent absolute values. We say that \emph{$M_{\mathbf{K}}$ satisfies the product formula} if for each $x \in \mathbf{K}^{\times}$ we have: \begin{equation}\label{eqn2.1} \prod\limits_{|\cdot|_{v} \in M_{\mathbf{K}}} |x|_{v} = 1 \text{.} \end{equation} \noindent {\bf Remark.} Note that the definition given above is similar to \cite[Axiom 1, p.~473]{Artin:Whaples} except that we do not require $M_{\mathbf{K}}$ to consist of inequivalent absolute values. The definition we give here is motivated by the discussion given in \cite[p.~24]{Lang:Diophantine}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.3}{\bf Heights.} Let $M_\mathbf{K}$ be a set of absolute values on $\mathbf{K}$ which satisfies the product rule and $\PP^n_{\mathbf{K}} = \mathrm{Proj} \ \mathbf{K}[x_0,\dots,x_n]$. If $y = [y_0:\cdots : y_n] \in \PP^n(\mathbf{K})$ then let \begin{equation}\label{eqn2.2} H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) = \prod\limits_{|\cdot|_{v} \in M_\mathbf{K}} \max\limits_i |y_i|_{v}. \end{equation} The fact that $M_\mathbf{K}$ satisfies the product rule ensures that the righthand side of equation \eqref{eqn2.2} is well defined. 
The number $H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y)$ is called the \emph{multiplicative height of $y$ with respect to $\Osh_{\PP^n_{\mathbf{K}}}(1)$ and $M_\mathbf{K}$} and the function \begin{equation}\label{eqn2.3} H_{\Osh_{\PP^n_{\mathbf{K}}}(1)} : \PP^n(\mathbf{K}) \rightarrow \RR \end{equation} is called the \emph{multiplicative height function of $\PP^n_{\mathbf{K}}$ with respect to the tautological line bundle and the set $M_\mathbf{K}$}. If $X \subseteq \PP^n_{\mathbf{K}}$ is a projective variety then the multiplicative height of $x \in X(\mathbf{K})$ with respect to $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_{ X}$ is defined by pulling back the function \eqref{eqn2.3} and is denoted by $H_L(x)$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.4}{\bf Example.} Let $\overline{\mathbf{k}}$ be an algebraically closed field of characteristic zero, $Y$ an irreducible projective variety over $\overline{\mathbf{k}}$ and non-singular in codimension $1$. By a \emph{prime (Weil) divisor} of $Y$ we mean a closed integral subscheme $\mathfrak{p} \subseteq Y$ of codimension $1$. Let $\eta$ denote the generic point of $Y$ and $\mathbf{K} = \Osh_{Y,\eta}$ the field of fractions of $Y$. If $\eta_{\mathfrak{p}}$ denotes the generic point of a prime divisor $\mathfrak{p}\subseteq Y$, then its local ring $\Osh_{Y,\eta_\mathfrak{p}} \subseteq \Osh_{Y,\eta}$ is a discrete valuation ring and we let \begin{equation}\label{eqn2.4} \mathrm{ord}_{\mathfrak{p}} : \mathbf{K}^\times \rightarrow \ZZ \end{equation} denote the valuation determined by $\Osh_{Y,\eta_{\mathfrak{p}}}$. Fix an ample line bundle $\mathcal{L}$ on $Y$. If $\mathfrak{p} \subseteq Y$ is a prime divisor, then we let $\deg_{\mathcal{L}}(\mathfrak{p})$ denote the degree of $\mathfrak{p}$ with respect to $\mathcal{L}$, see for instance \cite[A.9.38]{Bombieri:Gubler}. 
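To illustrate these notions in the simplest case: take $Y = \PP^1_{\overline{\mathbf{k}}}$ with affine coordinate $t$ and $\mathcal{L} = \Osh_{\PP^1_{\overline{\mathbf{k}}}}(1)$, so that $\mathbf{K} = \overline{\mathbf{k}}(t)$. The prime divisors of $Y$ are its closed points, each of degree $1$ with respect to $\mathcal{L}$, and for $a \in \overline{\mathbf{k}}$ the valuation $\mathrm{ord}_a$ measures the order of vanishing at $a$; for instance $$\mathrm{ord}_0\left( \tfrac{t^2}{t-1} \right) = 2, \qquad \mathrm{ord}_1\left( \tfrac{t^2}{t-1} \right) = -1, \qquad \mathrm{ord}_\infty\left( \tfrac{t^2}{t-1} \right) = -1.$$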
Next fix $0 < \mathbf{c} < 1$ and for each prime divisor $\mathfrak{p}\subseteq Y$, let \begin{equation}\label{eqn2.5} |x|_{\mathfrak{p},\mathbf{K}} = \begin{cases} \mathbf{c}^{\mathrm{ord}_{\mathfrak{p}}(x) \deg_{\mathcal{L}}(\mathfrak{p})} & \text{ for $x \not = 0$} \\ 0 & \text{ for $x = 0$.} \end{cases} \end{equation} The absolute values $|\cdot|_{\mathfrak{p},\mathbf{K}}$, defined for each prime divisor $\mathfrak{p}\subseteq Y$ and depending on our fixed ample line bundle $\mathcal{L}$, are non-archimedean, proper and the set \begin{equation}\label{eqn2.6} M_{(Y,\mathcal{L})} = \{ |\cdot|_{\mathfrak{p},\mathbf{K}} : \mathfrak{p}\subseteq Y \text{ is a prime divisor}\} \end{equation} is a proper set of absolute values which satisfies the product rule. Since the set $M_{(Y,\mathcal{L})}$ satisfies the product rule we can define the multiplicative and logarithmic height functions of $\PP^n_\mathbf{K}$ with respect to the tautological line bundle $\Osh_{\PP^n_\mathbf{K}}(1)$. Specifically if $y=[y_0:\dots:y_n] \in \PP^n(\mathbf{K})$, then the multiplicative height of $y$ is given by \begin{equation}\label{eqn2.7} H_{\Osh_{\PP^n_\mathbf{K}}(1)}(y) = \prod\limits_{|\cdot|_{\mathfrak{p},\mathbf{K}} \in M_{(Y,\mathcal{L})}} \max\limits_i |y_i|_{\mathfrak{p}, \mathbf{K}}, \end{equation} the logarithmic height of $y$ is given by \begin{equation}\label{eqn2.8} h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) = - \sum\limits_{|\cdot|_{\mathfrak{p},\mathbf{K}} \in M_{(Y,\mathcal{L})}} \min\limits_i (\mathrm{ord}_{\mathfrak{p}}(y_i)\deg_{\mathcal{L}}(\mathfrak{p})) \end{equation} and the logarithmic and multiplicative height functions are related by \begin{equation}\label{eqn2.9} - \log_{\mathbf{c}} H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) = h_{\Osh_{\PP^n_\mathbf{K}}(1)}(y). 
\end{equation} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.5}{\bf Example.} We continue with the situation of \S \ref{2.4}, we let $\overline{\mathbf{K}}$ be an algebraic closure of the function field $\mathbf{K}$ and we fix $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, a finite extension of $\mathbf{K}$. Let $\phi : Y' \rightarrow Y$ be the normalization of $Y$ in $\mathbf{F}$. As in \S \ref{2.4}, every ample line bundle $\mathcal{L}'$ on $Y'$ determines a proper set of absolute values $M_{(Y',\mathcal{L}')}$ which satisfies the product rule. In this setting, we denote elements of $M_{(Y',\mathcal{L}')}$ by $|\cdot|_{\mathfrak{p}',\mathbf{F}}$ for $\mathfrak{p}'$ a prime divisor of $Y'$. In particular, we can take $\mathcal{L}' = \phi^*\mathcal{L}$ for $\mathcal{L}$ an ample line bundle on $Y$. In this case, if $\mathfrak{p}'$ is a prime divisor of $Y'$ lying over $\mathfrak{p}$ a prime divisor of $Y$, we then set \begin{equation}\label{eqn2.10} |x|_{\mathfrak{p}'/\mathfrak{p}} = |\mathrm{N}_{\mathbf{F}_{\mathfrak{p}'}/\mathbf{K}_{\mathfrak{p}}}(x)|_{\mathfrak{p}, \mathbf{K}}^{1/[\mathbf{F}_{\mathfrak{p}'}:\mathbf{K}_{\mathfrak{p}}]} = |x|_{\mathfrak{p}',\mathbf{F}}^{1/[\mathbf{F}_{\mathfrak{p}'}:\mathbf{K}_{\mathfrak{p}}]}; \end{equation} here $\operatorname{N}_{\mathbf{F}_{\mathfrak{p}'}/\mathbf{K}_{\mathfrak{p}}}$ denotes the field norm from $\mathbf{F}_{\mathfrak{p}'}$ to $\mathbf{K}_{\mathfrak{p}}$. As explained in \cite[\S 1.3.6]{Bombieri:Gubler}, the absolute value \begin{equation}\label{eqn2.11} |\cdot|_{\mathfrak{p}'/\mathfrak{p}} : \mathbf{F} \rightarrow \RR \end{equation} extends the absolute value \begin{equation}\label{eqn2.12} |\cdot|_{\mathfrak{p},\mathbf{K}} : \mathbf{K} \rightarrow \RR. \end{equation} We can also normalize the absolute values of $\mathbf{F}$ relative to $\mathbf{K}$. 
In particular, given a prime divisor $\mathfrak{p}'$ of $Y'$ we let $|\cdot|_{\mathfrak{p}',\mathbf{K}}$ denote the absolute value \begin{equation}\label{eqn2.13} |x|_{\mathfrak{p}',\mathbf{K}} = |x|_{\mathfrak{p}',\mathbf{F}}^{1/[\mathbf{F}:\mathbf{K}]} \text{, for $x \in \mathbf{F}$,} \end{equation} compare with \cite[Example 1.4.13]{Bombieri:Gubler}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.6}{\bf Height functions and field extensions.} Since the sets $M_{(Y,\mathcal{L})}$ and $M_{(Y',\mathcal{L}')}$, defined in \S \ref{2.4} and \S \ref{2.5}, satisfy the product formula we can consider the height functions that they determine. To compare these height functions, we first note that, as explained in \cite[Example 1.4.13]{Bombieri:Gubler}, given a prime divisor $\mathfrak{p} \subseteq Y$, the set of places of $\mathbf{F}$ lying over the place of $\mathbf{K}$ determined by $\mathfrak{p}$ is in bijection with the set of prime divisors of $Y'$ lying over $\mathfrak{p}$. Given a prime divisor $\mathfrak{p}$ of $Y$ and a prime divisor $\mathfrak{p}'$ of $Y'$ we sometimes use the notation $\mathfrak{p}' | \mathfrak{p}$ to indicate that $\mathfrak{p}'$ lies above $\mathfrak{p}$. Next, note that by \cite[Corollary 1.3.2]{Bombieri:Gubler}, given a prime divisor $\mathfrak{p}$ of $Y$, we have \begin{equation}\label{eqn2.14} \sum_{\mathfrak{p}' | \mathfrak{p}} [\mathbf{F}_{\mathfrak{p}'} : \mathbf{K}_{\mathfrak{p}}] = [\mathbf{F}:\mathbf{K}]. 
\end{equation} Also, since the absolute value $|\cdot|_{\mathfrak{p}'/\mathfrak{p}} = |\cdot|_{\mathfrak{p}',\mathbf{F}}^{1/[\mathbf{F}_{\mathfrak{p}'} :\mathbf{K}_{\mathfrak{p}}] } $ extends the absolute value $|\cdot|_{\mathfrak{p}, \mathbf{K}}$, it follows, using \eqref{eqn2.14} and \eqref{eqn2.10}, that if $H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(\cdot)$ denotes the height function on $\PP^n(\mathbf{K})$ determined by $M_{(Y,\mathcal{L})}$ and if $H_{\Osh_{\PP^n_{\mathbf{F}}}(1)}(\cdot)$ denotes the height function on $\PP^n(\mathbf{F})$ determined by $M_{(Y',\mathcal{L}')}$, then \begin{equation}\label{eqn2.15} H_{\Osh_{\PP^n_\mathbf{K}}(1)}(y) = H_{\Osh_{\PP^n_{\mathbf{F}}}(1)}(y)^{1/[\mathbf{F} : \mathbf{K}]}, \end{equation} for all $y = [y_0:\dots:y_n] \in \PP^n(\mathbf{K})$. At the level of logarithmic heights, the relation \eqref{eqn2.15} implies that \begin{equation}\label{eqn2.16} [\mathbf{F} : \mathbf{K}] h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) = h_{\Osh_{\PP^n_{\mathbf{F}}}(1)}(y), \end{equation} for all $y \in \PP^n(\mathbf{K})$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{2.7}{\bf Height functions, field extensions, and projective varieties.} Considerations similar to \S \ref{2.6} apply to an arbitrary projective variety $X$ over $\mathbf{K}$. In particular, given a very ample line bundle $L$ on $X$, defined over $\mathbf{K}$, let $H_L(\cdot)$ and $h_L(\cdot)$ denote, respectively, the multiplicative and logarithmic heights obtained by pulling back $H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(\cdot)$ and $h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(\cdot)$ with respect to some embedding $X \hookrightarrow \PP^n_{\mathbf{K}}$ afforded by $L$.
Similarly, if $X_{\mathbf{F}}= X\times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ denotes the base change of $X$ with respect to the extension $\mathbf{F} / \mathbf{K}$ and $L_{\mathbf{F}}$ the pullback of $L$ to $X_{\mathbf{F}}$, then we denote by $H_{L_{\mathbf{F}}}(\cdot)$ and $h_{L_{\mathbf{F}}}(\cdot)$, respectively, the height functions determined by pulling back $H_{\Osh_{\PP^n_{\mathbf{F}}}(1)}(\cdot)$ and $h_{\Osh_{\PP^n_{\mathbf{F}}}(1)}(\cdot)$, respectively, with respect to any embedding $X_{\mathbf{F}} \hookrightarrow \PP^n_{\mathbf{F}}$ afforded by $L_{\mathbf{F}}$. From this point of view, we have the relations \begin{equation}\label{eqn1.10} H_{L}(y) = H_{L_{\mathbf{F}}}(y)^{1/ [\mathbf{F}:\mathbf{K}]} \end{equation} and \begin{equation}\label{eqn1.11} [\mathbf{F}:\mathbf{K}] h_L(y) = h_{L_{\mathbf{F}}}(y), \end{equation} for all $y \in X(\mathbf{K})$, compare with \eqref{eqn2.15} and \eqref{eqn2.16}. \section{Distance functions and approximation constants}\label{3} Let $\mathbf{K}$ be a field of characteristic zero and $|\cdot|$ a non-archimedean absolute value on $\mathbf{K}$. In this section we define, by analogy with \cite{McKinnon-Roth}, projective distance functions and approximation constants, with respect to $|\cdot|$, for pairs $(X,L)$ with $X$ a projective variety over $\mathbf{K}$ and $L$ a very ample line bundle on $X$. In \S \ref{4} we record some properties of these distance functions which are needed in subsequent sections. \noindent{\bf Projective distance functions.} We define (normalized) distance functions for projective varieties over $\mathbf{K}$ with respect to non-archimedean places of $\mathbf{K}$. The reason we include a discussion about normalizing our distance functions is so that we can state Lemma \ref{lemma3.1} below which we need later in \S \ref{4}.
In that section we also record various properties of these distance functions; these properties are needed in \S \ref{9} where we establish Corollary \ref{corollary1.3}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.5} Given a nontrivial absolute value $|\cdot|_{v,\mathbf{K}}$ on $\mathbf{K}$, we also denote by $|\cdot|_{v,\mathbf{K}}$ an extension of $|\cdot|_{v,\mathbf{K}}$ to an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$. We fix a collection of non-archimedean places of $\mathbf{K}$ which we denote by $M_{\mathbf{K}}$. Let $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, be a finite extension and $w$ a place of $\mathbf{F}$ lying over the place $v$ of $\mathbf{K}$. We let $\mathbf{F}_w$ and $\mathbf{K}_v$ denote, respectively, the completions of $\mathbf{F}$ and $\mathbf{K}$ with respect to $w$ and $v$. Finally, we write $M_\mathbf{F}$ for the set of places of $\mathbf{F}$ lying above elements of $M_\mathbf{K}$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.6} For each $v \in M_\mathbf{K}$ and each $w \in M_\mathbf{F}$, lying over $v$, we define absolute values by \begin{equation}\label{eqn3.1} ||x||_w = |\mathrm{N}_{\mathbf{F}_w / \mathbf{K}_v}(x)|_{v,\mathbf{K}} \end{equation} and \begin{equation}\label{eqn3.2} |x|_{w,\mathbf{K}} = |\mathrm{N}_{\mathbf{F}_w / \mathbf{K}_v}(x)|_{v,\mathbf{K}}^{1/[\mathbf{F} : \mathbf{K}]}; \end{equation} here $\operatorname{N}_{\mathbf{F}_w/\mathbf{K}_v}$ denotes the field norm from $\mathbf{F}_w$ to $\mathbf{K}_v$. The absolute value $|\cdot|_{w,\mathbf{K}}$ is a representative of $w$ and the absolute value \begin{equation}\label{eqn3.3} ||\cdot||_w^{1/[\mathbf{F}_w : \mathbf{K}_v]} \end{equation} is a representative of $w$ extending $|\cdot|_{v,\mathbf{K}}$.
In particular, \begin{equation}\label{eqn3.4} |x|_{v,\mathbf{K}} = ||x||_v = ||x||_w^{1/[\mathbf{F}_w : \mathbf{K}_v]}, \end{equation} for $x \in \mathbf{K}$, \cite[1.3.6, p.~6]{Bombieri:Gubler}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.7} We can use the absolute values defined by \eqref{eqn3.1} to define projective distance functions corresponding to places $w \in M_{\mathbf{F}}$. When we do this, we say that this distance function is \emph{normalized relative to $\mathbf{F}$} and we denote it by $d_w(\cdot,\cdot)$ or $d_v(\cdot,\cdot)_\mathbf{F}$ for a place $v$ lying below $w$ if we wish to emphasize the fact that it is normalized relative to $\mathbf{F}$. More specifically, given $w \in M_{\mathbf{F}}$, we fix an extension of $||\cdot||_w$ to $\overline{\mathbf{K}}$ and we define $$d_w (\cdot, \cdot) : \PP^n(\overline{\mathbf{K}}) \times \PP^n(\overline{\mathbf{K}}) \rightarrow [0,1] $$ by \begin{equation}\label{eqn3.5} d_v(x,y)_{\mathbf{F}} = d_w(x,y) = \frac{\max_{0\leq i < j \leq n}(||x_i y_j - x_j y_i||_w )}{\max_{0\leq i \leq n}(||x_i||_w) \max_{0 \leq j \leq n}(||y_j||_w)}, \end{equation} for $x = [x_0:\dots:x_n]$ and $y=[y_0:\dots : y_n] \in \PP^n(\overline{\mathbf{K}})$ and $v \in M_\mathbf{K}$ lying below $w$. We remark: \begin{lemma}\label{lemma3.1} If $v \in M_\mathbf{K}$ and $w \in M_\mathbf{F}$ lies over $v$ then $$ d_v(\cdot,\cdot)_\mathbf{K}^{[\mathbf{F}_w : \mathbf{K}_v]} = d_v(\cdot,\cdot)_{\mathbf{F}} = d_w(\cdot,\cdot).$$ \end{lemma} \begin{proof} Immediate from the definitions. 
\end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.8} If $X$ is a projective variety defined over $\mathbf{K}$ and $L$ a very ample line bundle on $X$, then every embedding \begin{equation}\label{eqn3.5'} X \hookrightarrow \PP^n_{\mathbf{K}}, \end{equation} obtained by choosing a basis of a very ample linear system $V$ with $\dim V = n+1$, determines, by pulling back the distance function defined in \eqref{eqn3.5}, a projective distance function on $X$ \begin{equation}\label{eqn3.5''} d_v(\cdot,\cdot) = d_{|\cdot|_v}(\cdot,\cdot) : X(\overline{\mathbf{K}}) \times X(\overline{\mathbf{K}}) \rightarrow [0,1]. \end{equation} Such functions also behave in the same way as in Lemma \ref{lemma3.1} with respect to normalization relative to field extensions. \medskip \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.8'}\noindent{\bf Approximation constants.} Let $(X,L)$ be a pair consisting of a projective variety $X$ and $L$ a very ample line bundle on $X$. We assume that $(X,L)$ is defined over $\mathbf{K}$. Fix an embedding $X \hookrightarrow \PP^n_{\mathbf{K}}$ determined by a very ample linear system $V \subseteq \H^0(X,L)$, fix a set $M_\mathbf{K}$ of absolute values on $\mathbf{K}$ satisfying the product rule and, as in \S \ref{2.3}, let $H_L(\cdot)$ denote the multiplicative height of $X$ with respect to $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$ and our set $M_\mathbf{K}$. Given a non-archimedean absolute value $|\cdot|_v \in M_\mathbf{K}$, let $d_{|\cdot|_v}(\cdot,\cdot)$ denote the corresponding distance function defined in \eqref{eqn3.5''}. Here we define approximation constants; our definition extends that given in \cite[Definitions 2.8 and 2.9]{McKinnon-Roth}. \noindent {\bf Definition.} Fix $x \in X(\overline{\mathbf{K}})$.
For every infinite sequence $\{y_i\} \subseteq X(\mathbf{K})$ of distinct points with unbounded height and $d_{|\cdot|_v}(x,y_i) \to 0$ (which we sometimes denote by $\{y_i\} \to x$) define: \begin{equation}\label{alpha:x:seq} \alpha_x(\{y_i\},L) = \inf \{\gamma \in \RR : d_{|\cdot|_v}(x,y_i)^{\gamma} H_L(y_i) \text{ is bounded from above} \} \end{equation} and define $$\alpha_{x,X}(L; |\cdot|_{v})=\alpha_x(L;|\cdot|_v) = \alpha_x(L) $$ by: \begin{multline}\label{alpha:x} \alpha_x(L) = \inf \{ \alpha_x(\{y_i\},L) : \{y_i\}\subseteq X(\mathbf{K}) \text{ is an infinite sequence } \\ \text{of distinct points with unbounded height and $d_{|\cdot|_v}(x,y_i) \to 0$} \} \text{.}\end{multline} The intuitive idea is that $\alpha_x(L)$ provides a measure of the cost of approximating $x \in X(\overline{\mathbf{K}})$ by infinite sequences of distinct $\mathbf{K}$-rational points with unbounded height and converging to $x$. \noindent {\bf Remarks.} \begin{enumerate} \item[(a)]{As a matter of convention, if $\{y_i\} \subseteq X(\mathbf{K})$ is an infinite sequence of distinct points with unbounded height and not converging to $x$, then we define $\alpha_x(\{y_i\},L) = \infty$. Similarly, if there exists no infinite sequence of distinct points $\{y_i\}\subseteq X(\mathbf{K})$ with unbounded height and converging to $x$, then we define $\alpha_x(L) = \infty$.} \item[(b)]{In the definitions \eqref{alpha:x:seq} and \eqref{alpha:x}, the reason that we restrict our attention to infinite sequences of distinct points with unbounded height is that, in general, for instance when $\mathbf{K}$ is a function field, there may exist infinite sequences of distinct points with bounded height. 
On the other hand, if $\{y_i\} \subseteq X(\mathbf{K})$ is an infinite sequence of distinct points with unbounded height, then $\{y_i\}$ admits a subsequence $\{y_i'\}$ with $H_L(y_i') \to \infty$.} \item[(c)]{Let $\{y_i\} \subseteq X(\mathbf{K})$ be an infinite sequence of distinct points with unbounded height and $\{y_i\} \to x$. It then follows from the definitions that if $\{y_i'\} \subseteq X(\mathbf{K})$ is a subsequence of distinct points with unbounded height then $\{y_i'\} \to x$ and $\alpha_x(\{y_i'\},L) \leq \alpha_x(\{y_i\},L)$, for all $x \in X(\overline{\mathbf{K}})$. } \item[(d)]{If $\mathbf{K}$ is a number field and $\{y_i\} \subseteq X(\mathbf{K})$ an infinite sequence of distinct points then the sequence $\{H_L(y_i)\}$ is unbounded and thus the definitions \eqref{alpha:x:seq} and \eqref{alpha:x} extend those given in \cite[Definitions 2.8 and 2.9]{McKinnon-Roth}.} \end{enumerate} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.9}{\bf Example.}\label{PP1:eg} In the case that $\mathbf{K}$ is a number field and $x \in \PP^n(\mathbf{K})$, then in \cite[Lemma 2.13]{McKinnon-Roth}, it is shown that $\alpha_x(\Osh_{\PP^n_{\mathbf{K}}}(1)) = 1$. The same is true for the case that $\mathbf{K}$ is the function field of a smooth projective complex curve $C$. To see why, as in the proof of \cite[Lemma 2.13]{McKinnon-Roth}, we have $\alpha_x(\Osh_{\PP^n_{\mathbf{K}}}(1)) \geq 1$. To see that this lower bound can be achieved, as in \cite[Lemma 2.13]{McKinnon-Roth} it suffices to treat the case $n=1$ and $x = [1:0]$. To see that $\alpha_x(\Osh_{\PP^1_{\mathbf{K}}}(1)) = 1$, let $p$ be the point of $C$ corresponding to the absolute value which we used to define $\alpha_x(\Osh_{\PP^n_{\mathbf{K}}}(1))$. Let $g$ be the genus of $C$ and let $d>2g$ be an integer. Let $s \in \mathbf{K}$ denote the global section of $\Osh_C(dp)$ with $\mathrm{div}(s)=dp$. Then $\mathrm{ord}_p(s) = d$ and $\mathrm{ord}_q(s) = 0$ for $p \not= q$. 
Since $d > 2g$, $h^0(C,\Osh_C(dp))\geq g+2$ and thus $|dp|$ is base point free, so we can find a $t \in \mathbf{K}$ which is a global section of $\Osh_C(dp)$ and which does not vanish at $p$. Let $y_i=[1:s^it^{-i}]$, for $i \geq 0$. Then $d_{|\cdot|_p}(x,y_i) \to 0$ and $H_{\Osh_{\PP^1_{\mathbf{K}}}(1)}(y_i) \to \infty$ as $i \to \infty$ and also $d_{|\cdot|_p}(x,y_i) H_{\Osh_{\PP^1_{\mathbf{K}}}(1)}(y_i) = 1$ for all $i$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.10'}{\bf Example.} Let $\mathbf{K}$ be the function field of a smooth projective curve over an algebraically closed field of characteristic $0$. In \S \ref{9} we compute $\alpha_x(L)$ for $x \in A(\overline{\mathbf{K}})$, for $A$ an abelian variety defined over $\mathbf{K}$ and $L$ a very ample line bundle on $A$. Specifically, we establish an approximation theorem similar to \cite[p.~98]{Serre:Mordell-Weil-Lectures}, proven there in the number field setting, and it follows that $\alpha_x(L) = \infty$, see Theorem \ref{theorem9.4} and Corollary \ref{corollary9.4'}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{3.10}{\bf Example.}\label{curve:function:field:setting} Let $C$ be a non-singular curve defined over $\mathbf{K}$, the function field of a smooth projective curve over an algebraically closed field of characteristic $0$, and suppose that the genus of $C$ is at least one. If $L$ is a very ample line bundle on $C$ and $x \in C(\overline{\mathbf{K}})$, then $\alpha_x(L) = \infty$ as we prove in Theorem \ref{theorem9.1'}. To get a sense for some of the ideas involved, we consider the Abel-Jacobi map $C \rightarrow A$; here $A = \operatorname{Jac}(C)$ is the Jacobian of $C$. Let $\Theta$ be the theta divisor of $A$ and identify $C$ with its image in $A$. Then, in this notation, we have that $\alpha_x(\Theta^{\otimes 3}|_C) \geq \alpha_x(\Theta^{\otimes 3})$, compare with \cite[Proposition 2.14 (c)]{McKinnon-Roth}.
Now note that since $\alpha_x(\Theta^{\otimes 3}) = \infty$, see \S \ref{3.10'} or Theorem \ref{theorem9.4} and Corollary \ref{corollary9.4'}, it follows that $\alpha_x(\Theta^{\otimes 3}|_C) = \infty$ too. Finally, it follows from our definition of approximation constants, in conjunction with properties of height functions, that $\alpha_x(L) = \infty$ for all very ample line bundles $L$ on $C$. The same is true for singular curves with geometric genus at least $1$, see Theorem \ref{theorem9.1'}. \section{Properties of projective distance functions}\label{4} In this section we record some properties of the distance functions defined in \eqref{eqn3.5} and \eqref{eqn3.5''}. In the number field setting, similar properties were established in \cite[\S 2]{McKinnon-Roth}. The only major difference between what we do here and what is done there is that we work with bounded sets instead of compact sets. We omit the proofs of these properties since they are evident adaptations of the corresponding statements given in \cite[\S 2]{McKinnon-Roth}. The main reason that we record these properties is that they are needed to establish Lemma \ref{lemma6.1} and Theorem \ref{theorem9.4}. Throughout this section we fix a field $\mathbf{K}$ of characteristic zero, an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$ and a place $v$ of $\mathbf{K}$ which we extend to $\overline{\mathbf{K}}$ and also denote by $v$. In what follows we also fix an absolute value $|\cdot| = |\cdot|_v$ on $\overline{\mathbf{K}}$ representing $v$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{4.1} Let $X$ be a projective variety over $\mathbf{K}$. Besides the Zariski topology on $X(\overline{\mathbf{K}})$ we have a topology which is induced by that of $\mathbf{K}$ with respect to $v$.
We call this topology the \emph{$v$-topology} on $X$ and it is the topology which is induced locally by open balls with respect to closed embeddings of affine open subsets of $X$ into affine spaces and the max norm with respect to $|\cdot|$. This topology is independent of the embeddings and the equivalence class of $|\cdot|$. As $\mathbf{K}$ need not be compact with respect to the $v$-topology, the $v$-topology on $X(\overline{\mathbf{K}})$ need not be compact in general. On the other hand, to understand $X(\overline{\mathbf{K}})$ in terms of $|\cdot|$ it is useful instead to work with the concept of \emph{bounded sets} of $X$, in the sense of \cite[Definition 2.6.2]{Bombieri:Gubler} or \cite[\S 6.1]{Serre:Mordell-Weil-Lectures}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{4.2} For the sake of completeness, we include some discussion about \emph{bounded sets} and we first consider the affine case. To this end, let $U$ be an affine $\mathbf{K}$-variety with coordinate ring $\mathbf{K}[U]$. We say that a subset $E \subseteq U(\overline{\mathbf{K}})$ is \emph{bounded} in $U$, if for every $f \in \mathbf{K}[U]$, the function $|f|$ is bounded on $E$. If $\{f_1,\dots, f_N\}$ are generators of $\mathbf{K}[U]$ as a $\mathbf{K}$-algebra and if the inequality $$ \sup_{P \in E} \max_{j=1,\dots, N} |f_j(P)| < \infty$$ holds true for a subset $E \subseteq U(\overline{\mathbf{K}})$, then $E$ is bounded in $U$, \cite[Lemma 2.2.9]{Bombieri:Gubler}. Also if $\{U_\ell\}$ is a finite open covering of $U$ and if $E$ is bounded in $U$, then there are bounded subsets $E_\ell$ of $U_\ell$ such that $E = \bigcup_\ell E_\ell$, \cite[Lemma 2.2.10]{Bombieri:Gubler}. 
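To make the affine boundedness criterion of \cite[Lemma 2.2.9]{Bombieri:Gubler} concrete, here is a small illustrative sketch, not part of the formal development: we take $U$ the affine line over $\mathbf{K} = \overline{\mathbf{k}}(t)$ with the absolute value of \eqref{eqn2.5} attached to the point $t = 0$, normalized with our own choices $\mathbf{c} = 1/2$ and degree $1$, a set $E$ of $\mathbf{K}$-points on which the coordinate $u$ is bounded, and an arbitrary sample polynomial $f$. The ultrametric inequality then bounds $|f|$ on $E$ by the largest coefficient absolute value; the sample points and $f$ below are hypothetical data, and zeros and poles are taken rational so that they can be located symbolically.

```python
from fractions import Fraction
import sympy as sp

t = sp.symbols('t')
C = Fraction(1, 2)  # the constant 0 < c < 1 fixed in Section 2.4 (our choice)

def ord0(f):
    """Order of vanishing at t = 0 of a nonzero rational function in k(t)."""
    num, den = sp.fraction(sp.cancel(f))
    trail = lambda p: min(mon[0] for mon in sp.Poly(p, t).monoms())
    return trail(num) - trail(den)

def absv(f):
    """The t-adic absolute value |f| = c^{ord_0(f)}, as in eqn (2.5)."""
    return Fraction(0) if f == 0 else C ** ord0(f)

# U = A^1 over K = k(t), coordinate ring K[u]; take E = {a in K : |a| <= 1}.
# The generator u is bounded on E, and the criterion predicts every f in K[u]
# is bounded on E; the ultrametric inequality gives |f(a)| <= max_i |c_i|.
u = sp.symbols('u')
f = t * u**2 + u / t + 3  # an arbitrary sample element of K[u]
coeff_bound = max(absv(c) for c in sp.Poly(f, u).all_coeffs())

for a in (t, 1 + t, t**3 / (1 + t), sp.Integer(2)):  # sample points of E
    assert absv(a) <= 1
    assert absv(f.subs(u, a)) <= coeff_bound

print(coeff_bound)  # 2 = |1/t|, the largest coefficient absolute value
```

In particular the bound on $E$ is uniform in the point, exactly as the boundedness criterion requires.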
Next, given an arbitrary variety $X$ over $\mathbf{K}$, a subset $E \subseteq X(\overline{\mathbf{K}})$ is called \emph{bounded} in $X$, if there is a finite covering $\{U_i \}_{i \in I}$ of $X$ by affine open subsets and sets $E_i$ with $E_i \subseteq U_i(\overline{\mathbf{K}})$ such that $E_i$ is bounded in $U_i$ and $E = \bigcup_{i \in I} E_i$, \cite[Definition 2.6.2]{Bombieri:Gubler}. If $E$ is bounded in $X$, then for every finite covering $\{U_i\}_{i \in I}$ of $X$ by affine open subsets, there is a subdivision $$ E = \bigcup_{i \in I} E_i,$$ with $E_i \subseteq U_i(\overline{\mathbf{K}})$ such that each $E_i$ is bounded in $U_i$, \cite[Remark 2.6.3]{Bombieri:Gubler}. Finally, as explained in \cite[Example 2.6.5]{Bombieri:Gubler}, see also \cite[\S 6]{Serre:Mordell-Weil-Lectures}, the set $\PP^n(\overline{\mathbf{K}})$ is bounded in $\PP^n_{\mathbf{K}}$ and one way to see this is to use the standard affine covering $$X_i := \{ x =[x_0:\dots : x_n] \in \PP^n_{\mathbf{K}} : x_i \not = 0 \}, $$ for $i \in \{0,\dots, n \}$, of $\PP^n_\mathbf{K}$ together with the decomposition $$E_i := \{ x = [x_0:\dots:x_n] \in \PP^n(\overline{\mathbf{K}}) : |x_i | = \max\limits_{j=0,\dots,n} |x_j| \} $$ of $E := \PP^n(\overline{\mathbf{K}})$. One consequence of the boundedness of $\PP^n(\overline{\mathbf{K}})$ is that the set of $\overline{\mathbf{K}}$-rational points $X(\overline{\mathbf{K}})$ for $X$ a projective variety over $\mathbf{K}$ is bounded; it also follows that if $X$ is a projective variety defined over $\mathbf{K}$ and $x \in X(\overline{\mathbf{K}})$ a $\overline{\mathbf{K}}$-rational point of $X$, then there exists an affine open subset $U\subseteq X$ with $x \in U(\overline{\mathbf{K}})$ and a subset $E \subseteq U(\overline{\mathbf{K}})$ bounded in $U$ and containing $x$. We refer to such a subset as a \emph{bounded neighbourhood of $x$} in what follows.
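The decomposition $E = \bigcup_i E_i$ just described can be checked in coordinates. The following sketch is illustrative only: it works with $\mathbf{K}$-points of $\PP^2$ for $\mathbf{K} = \overline{\mathbf{k}}(t)$ and the absolute value of \eqref{eqn2.5} at the point $t = 0$, with the hypothetical normalization $\mathbf{c} = 1/2$. Each sample point is assigned to the piece $E_i$ on which its $i$-th coordinate has maximal absolute value, and on the corresponding chart $X_i$ the affine coordinates $x_j/x_i$ are verified to have absolute value at most $1$, which is the boundedness statement.

```python
from fractions import Fraction
import sympy as sp

t = sp.symbols('t')
C = Fraction(1, 2)  # the constant 0 < c < 1 fixed in Section 2.4 (our choice)

def ord0(f):
    """Order of vanishing at t = 0 of a nonzero rational function in k(t)."""
    num, den = sp.fraction(sp.cancel(f))
    trail = lambda p: min(mon[0] for mon in sp.Poly(p, t).monoms())
    return trail(num) - trail(den)

def absv(f):
    """The t-adic absolute value |f| = c^{ord_0(f)} on K = k(t)."""
    return Fraction(0) if f == 0 else C ** ord0(f)

def chart_index(x):
    """Index i with |x_i| maximal, so that the point x belongs to the piece E_i."""
    vals = [absv(xi) for xi in x]
    return vals.index(max(vals))

# Sample K-points of P^2; each lands in some E_i, and on the corresponding
# standard chart X_i the affine coordinates x_j / x_i all satisfy |.| <= 1.
points = [
    [sp.Integer(1), t, t**2],
    [t**3, 1 / t, sp.Integer(5)],
    [1 / (1 + t), t, 1 / t**2],
]
for x in points:
    i = chart_index(x)
    affine = [sp.cancel(xj / x[i]) for xj in x]
    assert all(absv(coord) <= 1 for coord in affine)
    print(i)
# prints 0, then 1, then 2
```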
\vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{4.3} The key point to establishing our desired properties of the distance functions, which we defined in \S \ref{3.7} and \S \ref{3.8}, is Lemma \ref{lemma4.1} below. To state it, let $X$ be a variety over $\operatorname{Spec} \mathbf{K}$ and $U$ an affine open subset of $X_{\mathbf{F}} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ for some finite extension $\mathbf{F} / \mathbf{K}$ with $\mathbf{F} \subseteq \overline{\mathbf{K}}$. Suppose given two collections of elements $u_1,\dots, u_r$ and $u_1',\dots, u_s'$ of $\Gamma(U,\Osh_{X_{\mathbf{F}}})$ which generate the same ideal. \begin{lemma}[Compare with {\cite[Lemma 2.2]{McKinnon-Roth}}]\label{lemma4.1} In the above setting, the functions \begin{equation}\label{eqn4.1} \max(|u_1(\cdot)|_v,\dots,|u_r(\cdot)|_v) \end{equation} and \begin{equation}\label{eqn4.2} \max(|u_1'(\cdot)|_v,\dots,|u_s'(\cdot)|_v) \end{equation} are equivalent on every subset $E \subseteq U(\overline{\mathbf{K}})$ which is bounded in $U$. \end{lemma} \begin{proof} In light of the discussion given in \S \ref{4.2}, the proof of Lemma \ref{lemma4.1} is an evident adaptation of the proof of \cite[Lemma 2.2]{McKinnon-Roth}. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{4.4} Now let $L$ and $L'$ be two very ample line bundles on a projective variety $X$, defined over $\mathbf{K}$, $V\subseteq \H^0(X,L)$ and $W \subseteq \H^0(X,L')$ two very ample linear systems, $s = \dim V -1$, $r = \dim W - 1$ and fix two embeddings $j : X \hookrightarrow \PP^s$ and $j' : X \hookrightarrow \PP^r$ obtained by choosing bases for $V$ and $W$ respectively. We wish to compare the distance functions determined by the embeddings $j$ and $j'$. We denote these distance functions by $d_v(\cdot,\cdot)$ and $d_v'(\cdot,\cdot)$ respectively. 
The main point is Proposition \ref{proposition4.3} which shows that the functions $d_v(\cdot,\cdot)$ and $d_v'(\cdot,\cdot)$ are equivalent. Before stating Proposition \ref{proposition4.3}, we record: \begin{lemma}[Compare with {\cite[Lemma 2.3]{McKinnon-Roth}}]\label{lemma4.2} Let $\mathbf{F}/\mathbf{K}$ be a finite extension, $\mathbf{F} \subseteq \overline{\mathbf{K}}$. Then for every point $x \in X(\mathbf{F})$ and every rational map $f: \PP^s \dashrightarrow \PP^r$ defined at $j(x)$ and such that $f\circ j = j'$ near $x$, there is a subset $E \subseteq X(\overline{\mathbf{K}}) \times X(\overline{\mathbf{K}})$ bounded in $X \times X$ and containing $(x,x)$ such that $d_v(\cdot,\cdot)$ and $d_v'(\cdot,\cdot)$ are equivalent on $E$. \end{lemma} \begin{proof} The proof of Lemma \ref{lemma4.2} uses Lemma \ref{lemma4.1} and is an evident adaptation of the proof of \cite[Lemma 2.3]{McKinnon-Roth}. \end{proof} As mentioned, the distance functions determined by distinct embeddings are equivalent: \begin{proposition}[Compare with {\cite[Proposition 2.4]{McKinnon-Roth}}]\label{proposition4.3} Let $d_v$ and $d_v'$ be two distance functions coming from different embeddings of $X$. Then for all finite extensions $\mathbf{F}/\mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, $d_v$ is equivalent to $d_v'$ on $X(\mathbf{F}) \times X(\mathbf{F})$. \end{proposition} \begin{proof} The proof of Proposition \ref{proposition4.3} uses Lemma \ref{lemma4.2} and, considering the discussion of \S \ref{4.2}, is an evident adaptation of the proof of \cite[Proposition 2.4]{McKinnon-Roth}. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{4.5} Proposition \ref{proposition4.5} below, which is useful for working with distance functions locally, is a consequence of the following useful auxiliary observation. 
\begin{lemma}[Compare with {\cite[Lemma 2.5]{McKinnon-Roth}}]\label{lemma4.4} Let $x$ be a point of $X(\overline{\mathbf{K}})$ and $\mathbf{F} \subseteq \overline{\mathbf{K}}$ a finite extension of $\mathbf{K}$ over which $x$ is defined. Then there exists an affine open subset $U$ of $X_{\mathbf{F}} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ containing $x$ and elements $u_1,\dots,u_r$ of $\Gamma(U,\Osh_{X_{\mathbf{F}}})$ which generate the maximal ideal of $x$ and positive real constants $c \leq C$ so that $$ c d_v(x,y) \leq \min(1,\max(|u_1(y)|_v,\dots,|u_r(y)|_v) ) \leq C d_v(x,y),$$ for all $y \in U(\mathbf{F})$. \end{lemma} \begin{proof} This is an evident adaptation of the proof of \cite[Lemma 2.5]{McKinnon-Roth} and uses Lemma \ref{lemma3.1}. \end{proof} Lemma \ref{lemma4.4} is needed to prove the following useful result. \begin{proposition}[Compare with {\cite[Lemma 2.6]{McKinnon-Roth}}]\label{proposition4.5} Let $x$ be a point of $X(\overline{\mathbf{K}})$ and $\mathbf{F} \subseteq \overline{\mathbf{K}}$ a finite extension of $\mathbf{K}$ over which $x$ is defined. Let $U$ be an affine open subset of $X_{\mathbf{F}} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ containing $x$. Let $u_1,\dots,u_r$ be elements of $\Gamma(U,\Osh_{X_{\mathbf{F}}})$ which generate the maximal ideal of $x$. Then for every sequence of points $\{x_i \} \subseteq U(\mathbf{K})$ such that $d_v(x,x_i) \to 0$ as $i \to \infty$, the functions \begin{equation}\label{eqn4.6} d_v(x,\cdot) \end{equation} and \begin{equation}\label{eqn4.7} \max(|u_1(\cdot)|_v,\dots, |u_r(\cdot)|_v) \end{equation} are equivalent on $\{x_i\}$. In particular, there exist positive constants $c \leq C$ so that for all $i \geq 0$, we have that $$c d_v(x,x_i) \leq \max(|u_1(x_i)|_v,\dots, |u_r(x_i)|_v) \leq C d_v(x,x_i).
$$ \end{proposition} \begin{proof} This is an evident adaptation of \cite[Lemma 2.6]{McKinnon-Roth} and relies on Lemma \ref{lemma4.4}. \end{proof} \section{Wang's effective Schmidt's subspace theorem}\label{5} In subsequent sections we study the approximation constants that we defined in \S \ref{3.8'} for the case that $\mathbf{K}$ is a function field. Our approach relies on a slight extension of a theorem of Julie Wang \cite{Wang:2004} and here we describe this extension. First we make some preliminary remarks. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.1} Our setting is that of \S \ref{2.4}. In particular, $\overline{\mathbf{k}}$ is an algebraically closed field of characteristic zero, $Y$ is an irreducible projective variety over $\overline{\mathbf{k}}$, non-singular in codimension $1$ and we have fixed an ample line bundle $\mathcal{L}$ on $Y$. We also let $M_{(Y,\mathcal{L})}$ denote the set of absolute values of the form \begin{equation}\label{eqn5.1} |\cdot|_{\mathfrak{p},\mathbf{K}} : \mathbf{K} \rightarrow \RR, \end{equation} for $\mathbf{K}$ the field of fractions of $Y$ and $\mathfrak{p}$ a prime divisor of $Y$, defined in \eqref{eqn2.5}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.2}{\bf $\mathfrak{p}$-adic metrics.} Important to our extension of the subspace theorem is the concept of $\mathfrak{p}$-adic metrics. Such metrics are determined by prime divisors of $Y$. To define such metrics, first let $\PP^n_{\mathbf{K}} = \operatorname{Proj} \mathbf{K}[x_0,\dots,x_n]$.
Every prime divisor $\mathfrak{p}$ of $Y$ determines a $\mathfrak{p}$-adic metric on the tautological line bundle $\Osh_{\PP^n_{\mathbf{K}}}(1)$ given locally by: \begin{equation}\label{eqn5.2} ||\sigma(y)||_{\mathfrak{p},\mathbf{K}} = \frac{|\sigma(y)|_{\mathfrak{p},\mathbf{K}}}{\max_{j,k}|a_j y_k|_{\mathfrak{p},\mathbf{K}}} = \min_{j,k}\left( \frac{|\sigma(y)|_{\mathfrak{p},\mathbf{K}}}{|a_j y_k|_{\mathfrak{p},\mathbf{K}}} \right), \end{equation} for nonzero $$ \sigma = \sum_{j=0}^n a_j x_j \in \H^0(\PP^n_{\mathbf{K}},\Osh_{\PP^n_{\mathbf{K}}}(1)), \text{ with } a_j \in \mathbf{K}.$$ Fix an extension of $|\cdot|_{\mathfrak{p},\mathbf{K}}$ to $\overline{\mathbf{K}}$ and, by abuse of notation, denote this extension also by $|\cdot|_{\mathfrak{p},\mathbf{K}}$. Having fixed such an extension, we obtain a $\mathfrak{p}$-adic metric on $\Osh_{\PP^n_{\mathbf{F}}}(1)$ for all finite extensions $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$. We also denote this metric by $||\cdot||_{\mathfrak{p},\mathbf{K}}$ and it is defined by \eqref{eqn5.2} for all nonzero sections $$\sigma = \sum_{j=0}^n a_j x_j \in \H^0(\PP^n_{\mathbf{F}},\Osh_{\PP^n_{\mathbf{F}}}(1)), \text{ with } a_j \in \mathbf{F}. $$ If $X \subseteq \PP^n_{\mathbf{K}}$ is a subvariety and $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_X$, then we let $||\cdot||_{\mathfrak{p},\mathbf{K}}$ denote the $\mathfrak{p}$-adic metric on $L$ obtained by pulling back the metric \eqref{eqn5.2}. Similarly, given a finite extension $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, we let $||\cdot||_{\mathfrak{p},\mathbf{K}}$ denote the $\mathfrak{p}$-adic metric on $L_{\mathbf{F}}$, the pull-back of $L$ to $X_{\mathbf{F}} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$, obtained by a fixed extension of $|\cdot|_{\mathfrak{p},\mathbf{K}}$ to an absolute value on $\overline{\mathbf{K}}$.
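In the simplest case $Y = \PP^1_{\overline{\mathbf{k}}}$, so that $\mathbf{K} = \overline{\mathbf{k}}(t)$ and every prime divisor is a point of degree $1$, the metric \eqref{eqn5.2} can be evaluated directly. The following sketch, with the hypothetical choices $\mathbf{c} = 1/2$, $\mathfrak{p}$ the point $t = 0$, and sample data $\sigma$ and $y$, computes $||\sigma(y)||_{\mathfrak{p},\mathbf{K}}$ and checks that it is at most $1$ (the ultrametric inequality) and independent of the homogeneous coordinates chosen for $y$.

```python
from fractions import Fraction
import sympy as sp

t = sp.symbols('t')
C = Fraction(1, 2)  # the constant 0 < c < 1 fixed in Section 2.4 (our choice)

def ord0(f):
    """Order of vanishing at t = 0 of a nonzero rational function in k(t)."""
    num, den = sp.fraction(sp.cancel(f))
    trail = lambda p: min(mon[0] for mon in sp.Poly(p, t).monoms())
    return trail(num) - trail(den)

def absv(f):
    """The p-adic absolute value |f| = c^{ord_p(f)} for p the point t = 0."""
    return Fraction(0) if f == 0 else C ** ord0(f)

def metric(a, y):
    """||sigma(y)||_{p,K} of eqn (5.2) for sigma = sum_j a_j x_j and y in P^n(K)."""
    sigma_y = sum(aj * yj for aj, yj in zip(a, y))
    denom = max(absv(aj * yk) for aj in a for yk in y)
    return absv(sigma_y) / denom

a = [sp.Integer(1), -sp.Integer(1)]  # sigma = x_0 - x_1
y = [sp.Integer(1), 1 + t**3]        # sigma(y) = -t^3 vanishes to order 3 at p

v = metric(a, y)
print(v)  # 1/8 = c^3: the metric records the order of contact with the hyperplane

# ||sigma(y)|| <= 1 by the ultrametric inequality, and the value does not
# depend on the choice of homogeneous coordinates for y:
assert v <= 1
for scale in (t**2, 1 / (1 + t)):
    rescaled = [sp.cancel(scale * yi) for yi in y]
    assert metric(a, rescaled) == v
```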
\vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.3} {\bf Weil functions.} We also need to make some remarks concerning Weil functions. To do so, let $$\sigma = \sum_{j=0}^n a_j x_j \in \H^0(\PP^n_{\mathbf{K}},\Osh_{\PP^n_{\mathbf{K}}}(1)) \text{, with $a_j \in \mathbf{K}$}, $$ be a nonzero section and let $\operatorname{Supp}(\sigma)$ denote the hyperplane that it determines. The \emph{Weil function} of $\sigma$ with respect to a prime divisor $\mathfrak{p}$ of $Y$ has domain $\PP^n(\mathbf{K}) \backslash \operatorname{Supp}(\sigma)(\mathbf{K})$ and is defined by \begin{equation}\label{eqn5.3} \lambda_{\sigma,|\cdot|_{\mathfrak{p},\mathbf{K}}}(y) = ( \mathrm{ord}_{\mathfrak{p}}(\sigma(y)) - \min_j(\mathrm{ord}_{\mathfrak{p}}(y_j))- \min_j(\operatorname{ord}_{\mathfrak{p}}(a_j)))\deg_{\mathcal{L}}(\mathfrak{p}). \end{equation} The Weil function $\lambda_{\sigma,|\cdot|_{\mathfrak{p},\mathbf{K}}}$ and the $\mathfrak{p}$-adic metric $||\cdot||_{\mathfrak{p},\mathbf{K}}$ are related by \begin{equation}\label{eqn5.4} \lambda_{\sigma,|\cdot|_{\mathfrak{p},\mathbf{K}}}(y) = \log_{\mathbf{c}} || \sigma(y)||_{\mathfrak{p},\mathbf{K}}, \end{equation} for each $y \in \PP^n(\mathbf{K})\backslash \operatorname{Supp}(\sigma)(\mathbf{K})$. When we fix an extension of $|\cdot|_{\mathfrak{p},\mathbf{K}}$ to an absolute value $|\cdot|_{\mathfrak{p},\mathbf{K}} : \overline{\mathbf{K}} \rightarrow \RR$ we can use the relation \eqref{eqn5.4} to consider Weil functions of nonzero sections $$ \sigma = \sum_{j=0}^n a_j x_j \in \H^0(\PP^n_{\mathbf{F}},\Osh_{\PP^n_{\mathbf{F}}}(1)), \text{ with $a_j \in \mathbf{F}$,} $$ for $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, a finite extension.
In particular, given a nonzero section $\sigma \in \H^0(\PP^n_{\mathbf{F}},\Osh_{\PP^n_{\mathbf{F}}}(1))$, we define its Weil function with respect to $\mathfrak{p}$ to be the function $\lambda_{\sigma,|\cdot|_{\mathfrak{p},\mathbf{K}}}$ defined by \begin{equation}\label{eqn5.5} \lambda_{\sigma,|\cdot|_{\mathfrak{p},\mathbf{K}}}(y) = \log_{\mathbf{c}}||\sigma(y)||_{\mathfrak{p},\mathbf{K}}, \end{equation} for all $y \in \PP^n(\mathbf{F})\backslash \operatorname{Supp}(\sigma)(\mathbf{F})$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.4}{\bf The subspace theorem.} Before we establish our extension of the subspace theorem, we state the main result of \cite{Wang:2004}. Here we state this result in a slightly more general form than considered in \cite{Wang:2004} and \cite{Ru:Wang:2012}. Indeed, there $Y$ is assumed to be non-singular and the absolute values are considered with respect to a very ample line bundle on $Y$. Here we assume only that $Y$ is non-singular in codimension $1$ and we consider absolute values with respect to an ample line bundle $\mathcal{L}$ on $Y$. This more general setting is important to what we do here. \begin{theorem}[See {\cite[p.~811]{Wang:2004}} or {\cite[Theorem 17]{Ru:Wang:2012}}]\label{theorem5.1} Fix a finite set $S$ of prime divisors of $Y$, a collection of linear forms $\sigma_1,\dots,\sigma_q$ in $\mathbf{K}[x_0,\dots,x_n]$ and let $\PP^n_{\mathbf{K}} = \operatorname{Proj} \mathbf{K}[x_0,\dots,x_n]$.
There exists an effectively computable finite union of proper linear subspaces $Z \subsetneq \PP^n_\mathbf{K}$ such that the following holds true: Given $\epsilon > 0$, there exist effectively computable constants $a_\epsilon$ and $b_\epsilon$ such that for every $x \in \PP^n(\mathbf{K}) \backslash Z(\mathbf{K})$ either: \begin{enumerate} \item{$h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) \leq a_\epsilon$ or} \item{$\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j,|\cdot|_{\mathfrak{p},\mathbf{K}}}(x) \leq (n+1+\epsilon) h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) + b_{\epsilon}$;} here the maximum is taken over all subsets $J \subseteq \{1,\dots,q\}$ such that the $\sigma_j$, for $j\in J$, are linearly independent. \end{enumerate} \end{theorem} \begin{proof} This is implied by the main result of \cite{Wang:2004} and the remark \cite[bottom of p.~812]{Wang:2004}. See also \cite[Theorem 17]{Ru:Wang:2012} and \cite[Remark 1]{Ru:Wang:2012}. \end{proof} \noindent {\bf Remark.} As explained in \cite[Remark 18]{Ru:Wang:2012}, the constants $a_\epsilon$ and $b_\epsilon$ appearing in Theorem \ref{theorem5.1} depend on $\epsilon$, the degree, with respect to $\mathcal{L}$, of a canonical class of $Y$, the sum of the degrees of the $\mathfrak{p} \in S$ with respect to $\mathcal{L}$, and the heights of the linear forms $\sigma_1,\dots,\sigma_q \in \mathbf{K}[x_0,\dots,x_n]$, with respect to $\mathcal{L}$, as defined by \cite[(1.5) and (1.6)]{Ru:Wang:2012}. For a description of the union of linear subspaces $Z$ appearing in Theorem \ref{theorem5.1}, we refer to \cite[\S 3]{Wang:2004}.
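\vspace{3mm}\noindent {\bf Remark.} For orientation, it may help to record the shape of Theorem \ref{theorem5.1} in the simplest case $n = 1$. There the $\sigma_j$ are linear forms in $\mathbf{K}[x_0,x_1]$, any two non-proportional $\sigma_j$ are linearly independent, and so the subsets $J$ have cardinality at most $2$. Moreover, the proper linear subspaces making up $Z$ are points, so $Z(\mathbf{K})$ is finite, and alternative (2) reads $$\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j,|\cdot|_{\mathfrak{p},\mathbf{K}}}(x) \leq (2+\epsilon) h_{\Osh_{\PP^1_{\mathbf{K}}}(1)}(x) + b_{\epsilon}. $$ This is a Roth-type inequality for $\PP^1$ over the function field $\mathbf{K}$.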
\vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.5} By changing the order of quantifiers slightly in Theorem \ref{theorem5.1} and using our conventions about Weil functions given in \S \ref{5.3}, especially the definition given in \eqref{eqn5.5}, we can extend Wang's subspace theorem so as to allow for linear forms having coefficients in $\overline{\mathbf{K}}$. We state this result in Theorem \ref{theorem5.2} below and I am grateful to Julie Wang for her interest in an earlier version of this work, for telling me that such an extension should follow from her \cite[p.~811]{Wang:2004}, and for suggesting a method of proof. Having properly defined, in \S \ref{5.2} and \S \ref{5.3}, the concepts we need, the proof of Theorem \ref{theorem5.2} is standard and can be compared with \cite[Remark 7.2.3]{Bombieri:Gubler}. \begin{theorem}\label{theorem5.2} Fix a finite set $S$ of prime divisors of $Y$, fix a collection of linear forms $\sigma_1,\dots,\sigma_q \in \overline{\mathbf{K}}[x_0,\dots,x_n]$ and let $\PP^n_{\overline{\mathbf{K}}} = \operatorname{Proj} \overline{\mathbf{K}}[x_0,\dots,x_n]$. Then given $\epsilon > 0$, there exist an effectively computable finite union of proper linear subspaces $W \subsetneq \PP^n_\mathbf{K}$ and effectively computable positive constants $a_\epsilon$ and $b_\epsilon$ such that for every $x \in \PP^n(\mathbf{K}) \backslash W(\mathbf{K})$ either: \begin{enumerate} \item{$h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) \leq a_{\epsilon}$ or} \item{$\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j,|\cdot|_{\mathfrak{p},\mathbf{K}}}(x) \leq (n+1+\epsilon)h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) + b_\epsilon $;} \end{enumerate} here the maximum is taken over all subsets $J \subseteq \{1,\dots, q\}$ such that the $\sigma_j$, for $j \in J$, are linearly independent.
\end{theorem} \noindent{\bf Remark.} In Theorem \ref{theorem5.2}, the Weil functions are given by \eqref{eqn5.5} and they depend on our fixed choice of extension $|\cdot|_{\mathfrak{p},\mathbf{K}} : \overline{\mathbf{K}} \rightarrow \RR$ of the absolute value $|\cdot|_{\mathfrak{p},\mathbf{K}} : \mathbf{K} \rightarrow \RR$ to $\overline{\mathbf{K}}$. \begin{proof}[Proof of Theorem \ref{theorem5.2}] Let $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, be a finite Galois extension containing the coefficients of each of the $\sigma_j$ and let $\phi : Y' \rightarrow Y$ be the normalization of $Y$ in $\mathbf{F}$. Let $S'$ be the set of prime divisors of $Y'$ lying over the elements of $S$. For each $\mathfrak{p} \in S$ and each $\mathfrak{p}' \in S'$ lying over $\mathfrak{p}$, recall that the absolute value $|\cdot|_{\mathfrak{p}'/\mathfrak{p}} : \mathbf{F} \rightarrow \RR$, given by \eqref{eqn2.11}, extends the absolute value $|\cdot|_{\mathfrak{p},\mathbf{K}}$. Furthermore, the extension $\mathbf{F}/\mathbf{K}$ is Galois.
Thus, by \cite[Corollary 1.3.5]{Bombieri:Gubler}, there exists for each $\mathfrak{p} \in S$ and each $\mathfrak{p}' \in S'$ lying over $\mathfrak{p}$ a $g_{\mathfrak{p}'/\mathfrak{p}} \in \operatorname{Gal}(\mathbf{F} /\mathbf{K})$ so that \begin{equation}\label{eqn5.6} |x|_{\mathfrak{p},\mathbf{K}} = |g_{\mathfrak{p}'/\mathfrak{p}}(x)|_{\mathfrak{p}'/\mathfrak{p}} \text{, for $x \in \mathbf{F}$.} \end{equation} On the other hand, considering \eqref{eqn2.10}, we have \begin{equation}\label{eqn5.7} |\cdot|_{\mathfrak{p}'/\mathfrak{p}} = |\cdot|_{\mathfrak{p}',\mathbf{F}}^{1/[\mathbf{F}_{\mathfrak{p}'}:\mathbf{K}_{\mathfrak{p}}]} \end{equation} and it follows that \begin{equation}\label{eqn5.8} |x|_{\mathfrak{p},\mathbf{K}} = |g_{\mathfrak{p}'/\mathfrak{p}}(x)|_{\mathfrak{p}',\mathbf{F}}^{1/[\mathbf{F}_{\mathfrak{p}'} : \mathbf{K}_{\mathfrak{p}}]} \text{, for all $x \in \mathbf{F}$.} \end{equation} For each $\mathfrak{p} \in S$ and each $\mathfrak{p}' \in S'$ lying over $\mathfrak{p}$, let $g_{\mathfrak{p}'/\mathfrak{p}} \in \operatorname{Gal}(\mathbf{F} / \mathbf{K})$ be so that \eqref{eqn5.6} holds true and set \begin{equation}\label{eqn5.9} \sigma_{\mathfrak{p}',j} = g_{\mathfrak{p}'/\mathfrak{p}}(\sigma_j). \end{equation} Then $\sigma_{\mathfrak{p}',j}$ is the linear form in $\mathbf{F}[x_0,\dots,x_n]$ obtained by applying $g_{\mathfrak{p}'/\mathfrak{p}}$ to the coefficients of $\sigma_j$. Let $x \in \PP^n(\mathbf{K})$ be such that $x \not \in \operatorname{Supp}(\sigma_j)$ for $j=1,\dots,q$.
Then, considering \eqref{eqn5.8}, the definition \eqref{eqn5.5} and the relation \eqref{eqn2.14}, it follows that \begin{equation}\label{eqn5.10} \left(\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j,|\cdot|_{\mathfrak{p},\mathbf{K}}}(x) \right)[\mathbf{F} : \mathbf{K}] = \sum\limits_{\mathfrak{p}' \in S'} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_{\mathfrak{p}',j},|\cdot|_{\mathfrak{p}',\mathbf{F}}} (x); \end{equation} here the maximum in the left-hand side of \eqref{eqn5.10} is taken over all $J \subseteq \{1,\dots,q\}$ so that the $\sigma_j$, $j \in J$, are linearly independent, whereas the maximum in the right-hand side of equation \eqref{eqn5.10} is taken over all $J \subseteq \{1,\dots,q\}$ so that the $\sigma_{\mathfrak{p}',j}$, for $j \in J$ and fixed $\mathfrak{p}'$, are linearly independent. The right-hand side of \eqref{eqn5.10} is at most \begin{equation}\label{eqn5.11} \sum\limits_{\mathfrak{p}' \in S'} \max\limits_{J'} \sum\limits_{(\mathfrak{q}',j) \in J'} \lambda_{\sigma_{\mathfrak{q}',j},|\cdot|_{\mathfrak{p}',\mathbf{F}}}(x); \end{equation} here the maximum in \eqref{eqn5.11} is taken over all subsets $J' \subseteq \{ (\mathfrak{q}',j) : \mathfrak{q}' \in S', 1 \leq j \leq q \}$ for which the $\sigma_{\mathfrak{q}',j}$ are linearly independent.
Considering \eqref{eqn5.11}, \eqref{eqn5.10}, and \eqref{eqn2.16}, it follows, by applying Theorem \ref{theorem5.1} over $\mathbf{F}$ with respect to the linear forms $\sigma_{\mathfrak{p}',j}$, $\mathfrak{p}' \in S'$, $j=1,\dots, q$, that there exists an effectively computable finite union of proper linear subspaces $Z \subsetneq \PP^n_{\mathbf{F}}$ so that for all $\epsilon > 0$ there exist effectively computable constants $a_\epsilon$ and $b_\epsilon$ such that for every $x \in \PP^n(\mathbf{K}) \backslash Z(\mathbf{K})$ either \begin{enumerate} \item[(a)]{$h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) \leq \frac{a_\epsilon}{[\mathbf{F}:\mathbf{K}]}$ or} \item[(b)]{$\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j,|\cdot|_{\mathfrak{p},\mathbf{K}}} (x) \leq (n+1+\epsilon) h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) + \frac{b_\epsilon}{[\mathbf{F}:\mathbf{K}]}$,} \end{enumerate} where the maximum in (b) above is taken over all $J \subseteq \{1,\dots,q\}$ for which the $\sigma_j$, $j \in J$, are linearly independent. In particular, these conclusions hold for our given fixed $\epsilon > 0$. To produce the asserted union $W$ defined over $\mathbf{K}$, write $Z = \bigcup_i \Lambda_i$ for linear subspaces $\Lambda_i \subseteq \PP^n_{\mathbf{F}}$ and, for each $i$, replace $\Lambda_i$ by the linear span of all solutions $x \in \Lambda_i(\mathbf{K})\cap \PP^n(\mathbf{K})$ to the system: \begin{enumerate} \item[(a')]{$h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) > \frac{a_\epsilon}{[\mathbf{F}:\mathbf{K}]}$ and} \item[(b')]{$\sum\limits_{\mathfrak{p} \in S} \max\limits_J \sum\limits_{j \in J} \lambda_{\sigma_j, |\cdot|_{\mathfrak{p},\mathbf{K}}}(x) > (n+1+\epsilon) h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(x) + \frac{b_\epsilon}{[\mathbf{F}:\mathbf{K}]}$.} \end{enumerate} The union $W$ of these linear spans is defined over $\mathbf{K}$ and the conclusion of Theorem \ref{theorem5.2} holds true for all $x \in \PP^n(\mathbf{K}) \backslash W(\mathbf{K})$.
\end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{5.6} We now consider consequences of Theorem \ref{theorem5.2}. To begin with we have the following result which we state in multiplicative form. \begin{corollary}\label{corollary5.3} Let $\mathbf{F} / \mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, be a finite extension, fix a finite set $S$ of prime divisors of $Y$, and fix a collection of linearly independent linear forms $\sigma_1,\dots,\sigma_q \in \mathbf{F} [x_0,\dots,x_n]$. Then given $\epsilon > 0$, there exists a proper subvariety $Z \subsetneq \PP^n_{\mathbf{K}}$ and positive constants $A_\epsilon$ and $B_\epsilon$ such that if $y \in \PP^n(\mathbf{K})$ satisfies the conditions \begin{enumerate} \item{$H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) > A_{\epsilon}$; and} \item{$\prod_{\mathfrak{p}\in S} \prod_{j =1}^q ||\sigma_j(y)||_{\mathfrak{p},\mathbf{K}} < B_{\epsilon} H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y)^{-n-1-\epsilon}$; and} \item{$y \not \in \operatorname{Supp}(\sigma_i)$, for $i = 1,\dots, q$,} \end{enumerate} then $y \in Z(\mathbf{K})$. \end{corollary} \begin{proof} Use the relation $h_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y) = - \log_{\mathbf{c}} H_{\Osh_{\PP^n_{\mathbf{K}}}(1)}(y)$ to write the conclusion of Theorem \ref{theorem5.2} in multiplicative form. \end{proof} Next, we give an extension of Corollary \ref{corollary5.3}. Indeed, using Corollary \ref{corollary5.3}, we obtain a function field analogue of the Faltings-W\"{u}stholz theorem, \cite[Theorem 9.1]{Faltings:Wustholz}. This result, which we state as Corollary \ref{corollary5.5} below, should also be compared with the discussion given in \cite[bottom of p.~1301]{Evertse:Ferretti:2002}. We also note that in formulating this result, for the sake of simplicity, we restrict our attention to the case of a single prime divisor. 
Finally, we remark that Corollary \ref{corollary5.5} below plays a key role in our approach to proving Roth type theorems, as we will see in Proposition \ref{proposition6.2}. In order to state Corollary \ref{corollary5.5}, fix a non-degenerate projective variety $X \subseteq \PP^n_\mathbf{K}$, let $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$, fix a finite extension $\mathbf{F}$ of $\mathbf{K}$, $\mathbf{F} \subseteq \overline{\mathbf{K}}$, and let $L_\mathbf{F}$ denote the pullback of $L$ to $X_\mathbf{F} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ via the base change $\mathbf{K} \rightarrow \mathbf{F}$. Let $H_L(\cdot) : X(\mathbf{K}) \rightarrow \RR$ denote the height function determined by $L$, fix a prime divisor $\mathfrak{p}$ of $Y$ and let $||\cdot||_{\mathfrak{p}, \mathbf{K}}$ be the $\mathfrak{p}$-adic metric on $L$ obtained by pulling back the metric given in \eqref{eqn5.2}. Finally, fix an extension of the absolute value $|\cdot|_{\mathfrak{p}, \mathbf{K}}$ to $\overline{\mathbf{K}}$. Then, in this way, we obtain an extension of the metric $||\cdot||_{\mathfrak{p},\mathbf{K}}$ to a $\mathfrak{p}$-adic metric on $L_\mathbf{F}$. We can now state: \begin{corollary}\label{corollary5.5} Assume the setting just described; in particular, $X \subseteq \PP^n_\mathbf{K}$ is a non-degenerate projective variety, $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_X$ and $s_0,\dots,s_n \in \H^0(X,L)$ are the pull-backs of the coordinate functions $x_0,\dots,x_n$. Let $\sigma_1,\dots,\sigma_q \in \H^0(X_{\mathbf{F}},L_{\mathbf{F}})$ be a collection of $\mathbf{F}$-linearly independent $\mathbf{F}$-linear combinations of the $s_0,\dots,s_n$. Fix real numbers $c_1,\dots,c_q \geq 0$ with the property that $c_1+\dots+c_q > n+1$.
If $\epsilon = c_1+\dots+c_q - n - 1$, then there exists a proper subvariety $Z \subsetneq X$ and positive constants $A_\epsilon$ and $B_\epsilon$ such that the following is true: if $y \in X(\mathbf{K})$ satisfies the conditions: \begin{enumerate} \item{$H_L(y) > A_\epsilon$; and} \item{$||\sigma_i(y)||_{\mathfrak{p},\mathbf{K}} < B_\epsilon H_L(y)^{-c_i} $, for $i = 1,\dots, q$; and } \item{$y \not \in \operatorname{Supp}(\sigma_i)$, for $i=1,\dots, q$,} \end{enumerate} then $y \in Z(\mathbf{K})$. \end{corollary} \begin{proof} Applying Corollary \ref{corollary5.3} with $S = \{ \mathfrak{p} \}$ and using the definitions of $H_L(\cdot)$ and $||\cdot||_{\mathfrak{p},\mathbf{K}}$, we conclude that there exists a proper subvariety $Z \subsetneq X$ and positive constants $A_\epsilon$ and $B_\epsilon$ such that if $y \in X(\mathbf{K})$ satisfies the conditions: \begin{itemize} \item[(a)]{ $H_L(y) > A_\epsilon$; and } \item[(b)]{ $\prod_{j=1}^q ||\sigma_j(y) ||_{\mathfrak{p},\mathbf{K}} < B_\epsilon H_L(y)^{-n-1-\epsilon}$; and} \item[(c)]{ $y \not \in \operatorname{Supp}(\sigma_i)$, for $i = 1,\dots, q$, } \end{itemize} then $y \in Z(\mathbf{K})$. Now suppose that $y \in X(\mathbf{K})$ satisfies the conditions that: \begin{itemize} \item[(a')]{$H_L(y) > A_\epsilon$; and} \item[(b')]{ $ || \sigma_i(y)||_{\mathfrak{p}, \mathbf{K}} < B_\epsilon^{\frac{1}{q}}H_L(y)^{-c_i}$, for $i=1,\dots,q$; and } \item[(c')]{ $y \not \in \operatorname{Supp}(\sigma_i)$, for $i = 1,\dots,q$.} \end{itemize} We then conclude that $y$ must be contained in $Z$, since if (b') holds true, then it is also true that: $$ \prod_{i=1}^q || \sigma_i(y)||_{\mathfrak{p}, \mathbf{K}} < B_\epsilon H_L(y)^{- \sum_{i=1}^q c_i} = B_\epsilon H_L(y)^{-n-1-\epsilon}. 
$$ \end{proof} \section{Computing approximation constants for varieties over function fields}\label{6} Let $\overline{\mathbf{k}}$ be an algebraically closed field of characteristic zero, $Y$ an irreducible projective variety over $\overline{\mathbf{k}}$ which is non-singular in codimension $1$ and $\mathcal{L}$ an ample line bundle on $Y$. Let $\mathbf{K}$ be the field of fractions of $Y$ and $X \subseteq \PP^n_\mathbf{K}$ a geometrically irreducible projective variety. In this section we give sufficient conditions for approximation constants $\alpha_x(L)$, for $x \in X(\overline{\mathbf{K}})$ and $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$, to be computed on a proper $\mathbf{K}$-subvariety of $X$. Our conditions are related to the existence of \emph{vanishing sequences} which are \emph{Diophantine constraints}. We define these concepts in \S \ref{6.2}. In \S \ref{7}, especially Theorem \ref{theorem7.1}, we show how the relative asymptotic volume constants of McKinnon-Roth, \cite{McKinnon-Roth}, can be used to give sufficient conditions for the existence of such vanishing sequences which are Diophantine constraints. Throughout this section we fix a prime divisor $\mathfrak{p} \subseteq Y$. We also fix an extension of $|\cdot|_{\mathfrak{p},\mathbf{K}}$, the absolute value of $\mathfrak{p}$ with respect to $\mathcal{L}$, which we defined in \eqref{eqn2.5}, to $\overline{\mathbf{K}}$, a fixed algebraic closure of $\mathbf{K}$. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{6.1} Since $X \subseteq \PP^n_{\mathbf{K}}$, we obtain a projective distance function \begin{equation}\label{eqn6.1} d_{\mathfrak{p}}(\cdot,\cdot) = d_{|\cdot|_{\mathfrak{p}}}(\cdot,\cdot) : X(\overline{\mathbf{K}}) \times X(\overline{\mathbf{K}}) \rightarrow [0,1] \end{equation} by pulling back the function \eqref{eqn3.5}.
The function \eqref{eqn6.1} is the projective distance function of $X$ with respect to $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_{ X}$, the prime divisor $\mathfrak{p} \subseteq Y$, and the sections $s_0,\dots, s_n \in \H^0(X,L)$ obtained by pulling back the coordinate functions $x_0,\dots,x_n \in \H^0(\PP^n_{\mathbf{K}},\Osh_{\PP^n_{\mathbf{K}}}(1))$. If $x \in X(\overline{\mathbf{K}})$ and $\mathbf{F}$ is its field of definition, then we let $X_{\mathbf{F}} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ and let $L_{\mathbf{F}}$ denote the pull-back of $L$ to $X_{\mathbf{F}}$ via the base change $\operatorname{Spec} \mathbf{F} \rightarrow \operatorname{Spec} \mathbf{K}$. The following lemma is used in the proof of Proposition \ref{proposition6.2}. Its main purpose is to show how, under suitable hypotheses, the metric $||\cdot||_{\mathfrak{p},\mathbf{K}}$ behaves with respect to the distance function $d_\mathfrak{p}(\cdot,\cdot)$. \begin{lemma}\label{lemma6.1} In the above setting, fix $x \in X(\overline{\mathbf{K}})$, let $\mathbf{F}$ denote the field of definition of $x$ and suppose that a nonzero global section $\sigma = \sum_{j=0}^n a_j s_j \text{, with } a_j \in \mathbf{F}\text{,}$ of $L_{\mathbf{F}}$ vanishes to order at least $m$ at $x$. In particular, locally $\sigma \in \mathfrak{m}^m_x \Osh_{X_{\mathbf{F}},x}$, where $\mathfrak{m}_x$ denotes the maximal ideal of the local ring of $X_{\mathbf{F}}$ at $x$. Let $\{y_i\} \subseteq X(\mathbf{K})$ be an infinite sequence of distinct points with the property that $d_{\mathfrak{p}}(x,y_i) \to 0$ as $i \to \infty$. Then for all $\delta > 0$ and all $i \gg 0$, depending on $\delta$, $$||\sigma(y_i)||_{\mathfrak{p}, \mathbf{K}} \leq d_{\mathfrak{p}}(x,y_i)^{m-\delta}.
$$ \end{lemma} \begin{proof} If $z \in X(\mathbf{K})$ has homogeneous coordinates $z=[z_0:\dots : z_n]$ then locally we know: $$||\sigma(z)||_{\mathfrak{p}, \mathbf{K}} = \min_{j,k} \left(\left| \frac{\sigma}{a_j s_k}(z)\right|_{\mathfrak{p}, \mathbf{K}} \right) $$ and, locally, by assumption at least one $$\frac{\sigma}{a_j s_k} \in \mathfrak{m}^m_{x} \Osh_{X_{\mathbf{F}},x}. $$ This fact together with Proposition \ref{proposition4.5} implies that for all $i \gg 0$ $$||\sigma(y_i)||_{\mathfrak{p}, \mathbf{K}} \leq \mathrm{C} d_{\mathfrak{p}}(x,y_i)^m $$ for some constant $\mathrm{C}$ independent of $i$. We also have that $$d_{\mathfrak{p}}(x,y_i) \to 0 $$ as $i \to \infty$. Thus, for $i \gg 0$, $d_{\mathfrak{p}}(x,y_i)$ is very small and so, for all $\delta > 0$, $d_{\mathfrak{p}}(x,y_i)^{-\delta}$ will exceed $\mathrm{C}$ for all $i \gg 0$. In particular, $$||\sigma(y_i)||_{\mathfrak{p}, \mathbf{K}} \leq \mathrm{C} d_{\mathfrak{p}}(x,y_i)^m \leq d_{\mathfrak{p}}(x,y_i)^{m-\delta} $$ for all $i \gg 0$. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{6.2}{\bf Vanishing sequences, Diophantine constraints and computing approximation constants.} We now introduce the concepts of \emph{vanishing sequences} and of vanishing sequences which are \emph{Diophantine constraints}, see \S \ref{6.2.2} and \S \ref{6.2.3} respectively. The main motivation for these notions is that, in conjunction with the subspace theorem, especially Corollary \ref{corollary5.5}, they allow for sufficient conditions for approximation constants to be computed on a proper subvariety, see Proposition \ref{proposition6.2} and Theorem \ref{theorem6.3}. We should also emphasize that the results proven here are, in some sense, unsatisfactory: for them to be of use, we are faced with the issue of constructing vanishing sequences which are Diophantine constraints.
As we will see in \S \ref{7}, one approach to resolving this issue is related to local positivity and especially the asymptotic volume constant in the sense of McKinnon-Roth \cite{McKinnon-Roth}. \vspace{3mm}\refstepcounter{subsubsection}\noindent{\bf \thesubsubsection.} {}\label{6.2.1} In what follows, we fix a geometrically irreducible projective variety $X \subseteq \PP^n_\mathbf{K}$ and we let $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$. We also fix $x \in X(\overline{\mathbf{K}})$, we let $\mathbf{F} \subseteq \overline{\mathbf{K}}$ be the field of definition of $x$, and we let $X_\mathbf{F} = X \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ denote the base change of $X$ with respect to the field extension $\mathbf{F} / \mathbf{K}$. Finally, we let $L_\mathbf{F}$ denote the pullback of $L$ to $X_\mathbf{F}$ via the base change map $\mathbf{K} \rightarrow \mathbf{F}$. Fix a positive integer $m \in \ZZ_{>0}$ and let $s_0,\dots, s_N \in \H^0(X,L^{\otimes m})$ denote a basis of the $\mathbf{K}$-vector space $\H^0(X,L^{\otimes m})$. Let $\sigma_1,\dots,\sigma_q \in \H^0(X_{\mathbf{F}}, L^{\otimes m}_{\mathbf{F}})$ denote a collection of $\mathbf{F}$-linearly independent $\mathbf{F}$-linear combinations of the $s_0,\dots, s_N$. Fix rational numbers $\gamma_1,\dots,\gamma_q \in \QQ_{>0}$ with the property that $m\gamma_j \in \ZZ$ for all $j$. Having fixed our setting, we are now able to make two definitions. \subsubsection{Definition}\label{6.2.2} We say that data $(m,\gamma_\bullet,\sigma_\bullet)$ as in \S \ref{6.2.1} is a \emph{vanishing sequence for $L$ at $x$ with respect to $m$ and the rational numbers $\gamma_1,\dots,\gamma_q \in \QQ_{>0}$, and defined over $\mathbf{F}$} if, locally, the pullback of each $\sigma_i$ is an element of $\mathfrak{m}_x^{m \gamma_i} \Osh_{X_{\mathbf{F}},x}$.
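\vspace{3mm}\noindent {\bf Example.} A simple instance of Definition \ref{6.2.2}, included here only for illustration, is the following. Take $X = \PP^1_{\mathbf{K}}$, $L = \Osh_{\PP^1_{\mathbf{K}}}(1)$ and $x = [0:1]$, so that $\mathbf{F} = \mathbf{K}$. For a positive integer $m$, the monomials $\sigma_k = x_0^k x_1^{m-k}$, for $k = 1,\dots,m$, are $\mathbf{K}$-linearly independent elements of $\H^0(\PP^1_{\mathbf{K}},L^{\otimes m})$ and, in the local coordinate $t = x_0/x_1$ at $x$, the pullback of $\sigma_k$ is $t^k \in \mathfrak{m}_x^k \Osh_{\PP^1_{\mathbf{K}},x}$. Thus, setting $\gamma_k = k/m$, so that $m\gamma_k = k \in \ZZ$, the data $(m,\gamma_\bullet,\sigma_\bullet)$ is a vanishing sequence for $L$ at $x$ with respect to $m$ and the rational numbers $\gamma_1,\dots,\gamma_m$, defined over $\mathbf{K}$.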
\subsubsection{Definition}\label{6.2.3} Fix a real number $R>0$ and fix a vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ for $L$ at $x$ with respect to $m$ and the rational numbers $\gamma_1,\dots,\gamma_q \in \QQ_{>0}$. We say that the vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ is a \emph{Diophantine constraint for $L$ with respect to $R$ and $m$ at $x$ and defined over $\mathbf{F}$} if there exists a proper subvariety $Z \subsetneq X$ and positive constants $A$ and $B$ so that if $y \in X(\mathbf{K})$ satisfies the conditions that \begin{enumerate} \item[(a)]{$H_{L^{\otimes m}}(y) > A$; and} \item[(b)]{$||\sigma_i(y)||_{\mathfrak{p},\mathbf{K}} < B H_{L^{\otimes m}}(y)^{-\gamma_iR}$, for $i = 1,\dots,q$; and} \item[(c)]{$y \not \in \operatorname{Supp}(\sigma_i)$, for $i = 1,\dots,q$, } \end{enumerate} then $y \in Z(\mathbf{K})$. \subsubsection{Example} Suppose we are given a vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ for $L$ at $x$ with respect to $m$ and rational numbers $\gamma_1,\dots, \gamma_q \in \QQ_{>0}$ and defined over $\mathbf{F}$. Fix a real number $R > 0$ and suppose that $$\gamma_1 + \dots + \gamma_q > \frac{h^0(X,L^{\otimes m})}{R}.$$ Then, as implied by Corollary \ref{corollary5.5}, the data $(m,\gamma_\bullet,\sigma_\bullet)$ is a Diophantine constraint for $L$ with respect to $R$ and $m$ at $x$ and defined over $\mathbf{F}$. \vspace{3mm}\refstepcounter{subsubsection}\noindent{\bf \thesubsubsection.} {}\label{6.2.5} As mentioned above, vanishing sequences and Diophantine constraints are related to approximation constants: \begin{proposition}\label{proposition6.2} Let $x \in X(\overline{\mathbf{K}})$ have field of definition $\mathbf{F}$.
Let $R>0$ be a real number, $m \in \ZZ_{>0}$ a positive integer and suppose that there exists a vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ for $L$ at $x$, with respect to $m$ and the rational numbers $\gamma_1,\dots,\gamma_q \in \QQ_{>0}$ and defined over $\mathbf{F}$, which is also a Diophantine constraint with respect to $R$. Then there exists a proper Zariski closed subset $W \subsetneq X$ defined over $\mathbf{K}$, containing $x$ as a $\overline{\mathbf{K}}$-point, and with the property that $$\alpha_{x,X}(\{y_i\},L) \geq \frac{1}{R} $$ for all infinite sequences $\{y_i\} \subseteq X(\mathbf{K}) \backslash W(\mathbf{K})$ of distinct points with unbounded height. \end{proposition} \begin{proof} Let $c_j = \gamma_j R$, for $j = 1,\dots, q$. The fact that the vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ is a Diophantine constraint with respect to $R$ implies that there exist positive constants $A$, $B$ and a proper Zariski closed subset $W \subsetneq X$ defined over $\mathbf{K}$ with the property that the collection of $y \in X(\mathbf{K})$ having the properties that \begin{enumerate} \item[(a)]{$H_{L^{\otimes m}}(y) > A$; and } \item[(b)]{$||\sigma_i(y)||_{\mathfrak{p}, \mathbf{K}} < B H_{L^{\otimes m}}(y)^{-c_i} $, for $i=1,\dots,q$; and} \item[(c)]{$y \not \in \bigcup_{i=1}^q \operatorname{Supp}(\sigma_i)$} \end{enumerate} is contained in $W$. The collection of $y \in X(\mathbf{K})$ satisfying (a) and (b) alone is then contained in the union of $W$ with $\bigcup_{i=1}^q\operatorname{Supp}(\sigma_i)$ which, since $X$ is irreducible, is a proper Zariski closed subset of $X$. Thus, by enlarging $W$ if necessary, we can assume that $W$ contains $\bigcup_{i=1}^q\operatorname{Supp}(\sigma_i)(\mathbf{K})$ and $x$. Suppose the proposition is false for this $W$.
Then there exists an infinite sequence $\{y_i\} \subseteq X(\mathbf{K}) \backslash W(\mathbf{K})$ of distinct points with unbounded height such that \begin{equation}\label{eqn6.2} \alpha_{x,X}(\{y_i\},L) = \frac{1}{m}\alpha_{x,X}(\{y_i\},L^{\otimes m}) < \frac{1}{R}. \end{equation} Trivially \eqref{eqn6.2} implies that \begin{equation}\label{eqn6.2'} \alpha_{x,X}(\{y_i\},L^{\otimes m}) < m/R \end{equation} and using \eqref{eqn6.2'}, in conjunction with the definition of $\alpha_{x,X}(\{y_i\},L^{\otimes m})$, it follows that \begin{equation}\label{eqn6.2''} d_{\mathfrak{p}}(x,y_i)^{\frac{m}{R}-\delta'} H_{L^{\otimes m}}(y_i) \to 0, \end{equation} as $i \to \infty$ for all $0 < \delta' \ll 1$. Note also that the definition of $\alpha_{x,X}(\{y_i\},L^{\otimes m})$ implies that \begin{equation}\label{eqn6.2'''} d_{\mathfrak{p}}(x,y_i) \to 0, \end{equation} as $i \to \infty$. We now make the following deductions. To begin with, using Lemma \ref{lemma6.1}, we deduce that for all $\delta > 0$ and all $j$: \begin{equation}\label{eqn6.3} ||\sigma_{j}(y_i)||^{\frac{1}{R\gamma_j}}_{\mathfrak{p}, \mathbf{K}} \leq d_{\mathfrak{p}}(x,y_i)^{\frac{m}{R} - \left(\frac{\delta}{R\gamma_j} \right)} \end{equation} for all $i \gg 0$ depending on $\delta$. (Here we use the fact that, locally, $\sigma_{j} \in \mathfrak{m}^{m \gamma_j}_x \Osh_{X_{\mathbf{F}},x}$, combined with \eqref{eqn6.2'''}, so that the hypothesis of Lemma \ref{lemma6.1} is satisfied.) Next choose $\delta$ so that each $\delta'_j = \frac{\delta}{R\gamma_j}$ is sufficiently small. Then, using \eqref{eqn6.3} and \eqref{eqn6.2''} above, we deduce: \begin{equation}\label{eqn6.4} \frac{H_{L^{\otimes m}}(y_i) ||\sigma_{j}(y_i)||_{\mathfrak{p}, \mathbf{K}}^{\frac{1}{R\gamma_j}}}{B^{\frac{1}{R\gamma_j}}} \leq \frac{H_{L^{\otimes m}}(y_i)d_{\mathfrak{p}}(x,y_i)^{\frac{m}{R}-(\frac{\delta}{R\gamma_j})}} {B^{\frac{1}{R\gamma_j}}} < 1\text{,} \end{equation} for all $j$ and all $i \gg 0$.
Equation \eqref{eqn6.4} has the consequence that: $$||\sigma_{j}(y_i)||_{\mathfrak{p}, \mathbf{K}} < B H_{L^{\otimes m}}(y_i)^{-R \gamma_j} $$ for all $j$ and all $i \gg 0$. Since, by passing to a subsequence if necessary, $H_L(y_i) \to \infty$, as $i \to \infty$, it follows that $H_{L^{\otimes m}}(y_i) \to \infty$ too and so we must have that $y_i \in W$ for all $i \gg 0$. This is a contradiction. \end{proof} \vspace{3mm}\refstepcounter{subsubsection}\noindent{\bf \thesubsubsection.} {}\label{6.2.6} Proposition \ref{proposition6.2} implies: \begin{theorem}\label{theorem6.3} Let $X \subseteq \PP^n_\mathbf{K}$ be a geometrically irreducible subvariety, put $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$ and let $x \in X(\overline{\mathbf{K}})$. Fix a real number $R>0$ and a positive integer $m \in \ZZ_{>0}$. If $\alpha_{x,X}(L) < \frac{1}{R}$ and if there exists a vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ for $L$ at $x$ with respect to $m$ and defined over $\mathbf{F} \subseteq \overline{\mathbf{K}}$, the field of definition of $x$, which is also a Diophantine constraint with respect to $R$, then $$\alpha_{x,X}(L) = \alpha_{x,W}(L|_{ W}) $$ for some proper subvariety $W \subsetneq X$ having dimension at least $1$ containing $x$ as a $\overline{\mathbf{K}}$-point. \end{theorem} \begin{proof} By assumption $\alpha_{x,X}(L) < \frac{1}{R}$ and there exists a vanishing sequence $(m,\gamma_\bullet,\sigma_\bullet)$ for $L$ at $x$ with respect to $m$ which is also a Diophantine constraint with respect to $R$. By Proposition \ref{proposition6.2}, there exists a proper subvariety $W \subsetneq X$ defined over $\mathbf{K}$ and containing $x$ as a $\overline{\mathbf{K}}$-point so that $$ \alpha_{x,X}(\{y_i\},L) \geq \frac{1}{R}$$ for all infinite sequences $\{y_i\} \subseteq X(\mathbf{K}) \backslash W(\mathbf{K})$ of distinct points with unbounded height. 
As a consequence, if $\alpha_{x,X}(\{y_i\},L) < \frac{1}{R}$ for $\{y_i\} \subseteq X(\mathbf{K})$ an infinite sequence of distinct points with unbounded height, then almost all of the $y_i$ must lie in $W(\mathbf{K})$. In particular, $W$ must have infinitely many $\mathbf{K}$-rational points and so $W$ has dimension at least $1$. Since $x \in W(\overline{\mathbf{K}})$, the definitions immediately imply that $\alpha_{x,X}(L) = \alpha_{x,W}(L|_{ W})$. \end{proof} \section{Asymptotic volume functions and vanishing sequences}\label{7} Let $\overline{\mathbf{k}}$ be an algebraically closed field of characteristic zero, $Y\subseteq \PP^r_{\overline{\mathbf{k}}}$ an irreducible projective variety, non-singular in codimension $1$, $\mathbf{K}$ the field of fractions of $Y$, $X \subseteq \PP^n_\mathbf{K}$ a geometrically irreducible projective variety and $L = \Osh_{\PP^n_\mathbf{K}}(1)|_X$. In this section, we relate the theory we developed in \S \ref{6} to local measures of positivity for $L$ near $x \in X(\overline{\mathbf{K}})$. Our main result is Theorem \ref{theorem7.1}, which we prove in \S \ref{7.2}, and which shows how the number $\beta_x(L)$ introduced by McKinnon-Roth can be used to construct a vanishing sequence for $L$ about $x$ which is also a Diophantine constraint. In \S \ref{7.3} we include a short discussion about $\epsilon_x(L)$, the Seshadri constant of $L$ about $x$, and indicate how it is related to the number $\beta_x(L)$. The reason for including this discussion is that the inequality given in \eqref{eqn7.2} below, established by McKinnon-Roth in \cite[Corollary 4.4]{McKinnon-Roth}, is needed in \S \ref{proof:main:results}, where we prove the results we stated in \S \ref{motivation}.
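\vspace{3mm}\noindent {\bf Example.} To fix ideas, we recall a standard computation, which can be compared with the examples of \cite{McKinnon-Roth}; it uses the notation of \S \ref{7.1} below. If $X = \PP^d_{\mathbf{K}}$, $L = \Osh_{\PP^d_{\mathbf{K}}}(1)$ and $x \in X(\mathbf{K})$, then, on the blow-up $\pi : \widetilde{X} \rightarrow \PP^d_{\mathbf{K}}$ of $\PP^d_{\mathbf{K}}$ at $x$, with exceptional divisor $E$, the mixed terms $(\pi^* L)^{d-k} \cdot E^k$ vanish for $0 < k < d$ while $E^d = (-1)^{d-1}$, so that $$\operatorname{Vol}(\pi^* L - \gamma E) = (\pi^* L - \gamma E)^d = 1 - \gamma^d \text{, for } 0 \leq \gamma \leq 1\text{,}$$ and $\gamma_{\mathrm{eff}} = 1$. Consequently $$\beta_x(L) = \int_0^1 (1 - \gamma^d)\, d\gamma = \frac{d}{d+1}.$$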
\vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{7.1}{\bf Expectations and volume functions.} Let $x \in X(\overline{\mathbf{K}})$, $\mathbf{F}$ the field of definition of $x$, $\pi : \widetilde{X} = \operatorname{Bl}_x(X_{\mathbf{F}}) \rightarrow X_\mathbf{F}$ the blow-up of $X_\mathbf{F}$, the base change of $X$ with respect to the field extension $\mathbf{K} \rightarrow \mathbf{F}$, at the closed point of $X_{\mathbf{F}}$ corresponding to $x$, and let $E$ denote the exceptional divisor of $\pi$. In what follows we let $\operatorname{N}^1(\widetilde{X})_\RR$ denote the real N\'{e}ron--Severi space of $\RR$-Cartier divisors on $\widetilde{X}$ modulo numerical equivalence and we let $\operatorname{Vol}(\cdot)$ denote the volume function $$\operatorname{Vol}(\cdot) : \operatorname{N}^1(\widetilde{X})_{\RR} \rightarrow \RR. $$ In particular if $g$ equals the dimension of $\widetilde{X}$, and if $\ell$ denotes the numerical class of an integral Cartier divisor $D$ on $\widetilde{X}$, then $$\operatorname{Vol}(\ell) = \limsup_{m\to \infty} \frac{h^0(\widetilde{X},\Osh_{\widetilde{X}}(mD))}{m^g / g!},$$ see \cite[\S 2.2.C, p.~148]{Laz}. If $\gamma \in \RR_{\geq 0}$, then let $L_\gamma$ denote the $\RR$-line bundle $$L_\gamma = \pi^* L_{\mathbf{F}} - \gamma E$$ on $\widetilde{X}$; here $L_\mathbf{F}$ denotes the pullback of $L$ to $X_\mathbf{F}$. In what follows we also let $L_{\gamma,\overline{\mathbf{K}}}$ denote the pullback of $L_\gamma$ to $\widetilde{X}_{\overline{\mathbf{K}}}$, the base change of $\widetilde{X}$ with respect to $\mathbf{K} \rightarrow \overline{\mathbf{K}}$. In addition let $\gamma_{{\mathrm{eff}},x}(L)$ be defined by $$\gamma_{{\mathrm{eff}}} = \gamma_{{\mathrm{eff}},x}(L) = \sup \{\gamma \in \RR_{\geq 0} : L_{\gamma, \overline{\mathbf{K}}} \text{ is numerically equivalent to an effective divisor} \}.
$$ As explained in \cite[\S 4]{McKinnon-Roth}, we have: \begin{enumerate} \item[(a)]{$\gamma_{{{\mathrm{eff}}}} < \infty$;} \item[(b)]{$\operatorname{Vol}(L_\gamma) > 0$ for all $\gamma \in [0,\gamma_{{{\mathrm{eff}}}})$;} \item[(c)]{$\operatorname{Vol}(L_\gamma) = 0$ for all $\gamma > \gamma_{{\mathrm{eff}}}$; and} \item[(d)]{$\operatorname{Vol}(L_{\gamma_{\mathrm{eff}}}) = 0$.} \end{enumerate} In \cite[\S 4]{McKinnon-Roth} the constant $\beta_x(L)$ is defined by: $$\beta_x(L) = \int^\infty_0 \frac{\operatorname{Vol}(L_\gamma)}{\operatorname{Vol}(L)} d \gamma = \int^{\gamma_{{\mathrm{eff}}}}_0 \frac{\operatorname{Vol}(L_\gamma)}{\operatorname{Vol}(L)} d \gamma \text{,}$$ see \cite[p.~545, Definition 4.3, and Remark p.~548]{McKinnon-Roth}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{7.2}{\bf Volume functions and existence of vanishing sequences.} We wish to show how the number $\beta_x(L)$ is related to vanishing sequences and Diophantine constraints. Indeed, we use techniques, similar to those employed in the proof of \cite[Theorem 5.1]{McKinnon-Roth}, to prove: \begin{theorem}\label{theorem7.1} Let $X \subseteq \PP^n_\mathbf{K}$ be a geometrically irreducible subvariety and let $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_X$. Fix a real number $R > 0$ and a $\overline{\mathbf{K}}$-rational point $x \in X(\overline{\mathbf{K}})$. Let $\mathbf{F}$ denote the field of definition of $x$. If $\beta_x(L) > \frac{1}{R}$, then there exists a positive integer $m \in \ZZ_{>0}$ and a vanishing sequence $(m,\gamma_\bullet, \sigma_\bullet)$ for $L$ at $x$ with respect to $m$ and defined over $\mathbf{F} \subseteq \overline{\mathbf{K}}$ which is also a Diophantine constraint with respect to $R$. 
\end{theorem} \begin{proof} Let $X_{\mathbf{F}}$ denote the base change of $X$ with respect to the finite field extension $\mathbf{F} / \mathbf{K}$ and let $\pi : \widetilde{X} \rightarrow X_{\mathbf{F}} $ be the blow-up of $X_{\mathbf{F}}$ at the closed point of $X_{\mathbf{F}}$ corresponding to $x \in X(\mathbf{F})$. Let $E$ denote the exceptional divisor, let $L_{\mathbf{F}}$ denote the pull-back of $L$ to $X_{\mathbf{F}}$ and let $L_\gamma$ denote the $\RR$-line bundle $ L_\gamma = \pi^* L_{\mathbf{F}} - \gamma E$ on $\widetilde{X}$, for $\gamma \in \RR_{\geq 0}$. Since $X$ is assumed to be geometrically irreducible, we have: $$\beta_x(L) = \int_0^{\gamma_{\mathrm{eff}}} \frac{\operatorname{Vol}(L_\gamma)}{\operatorname{Vol}(L)} d \gamma = \int_0^{\gamma_{\mathrm{eff}}} f(\gamma) d \gamma;$$ here $$ f(\gamma) = \frac{\operatorname{Vol}(L_\gamma)}{\operatorname{Vol}(L)}.$$ By assumption we have $\beta_x(L) > \frac{1}{R}$. This assumption in conjunction with \cite[Lemma 5.5]{McKinnon-Roth} implies the existence of a positive integer $r$ and rational numbers $$0 < \gamma_1 < \dots < \gamma_r < \gamma_{\mathrm{eff},x}(L)$$ so that, if we set $\gamma_{r+1} = \gamma_{{\mathrm{eff}},x}(L)$, we have: $$ \sum\limits_{j=1}^r c_j ( f(\gamma_j) - f(\gamma_{j+1})) > 1;$$ here $c_j = R\gamma_j$, for $j=1,\dots, r$.
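Although we invoke \cite[Lemma 5.5]{McKinnon-Roth} as a black box, it may help to indicate why such a partition exists. By summation by parts, and using that $f(\gamma_{r+1}) = f(\gamma_{{\mathrm{eff}},x}(L)) = 0$, we have the identity $$\sum\limits_{j=1}^r \gamma_j ( f(\gamma_j) - f(\gamma_{j+1})) = \gamma_1 f(\gamma_1) + \sum\limits_{j=2}^r (\gamma_j - \gamma_{j-1}) f(\gamma_j). $$ Since $f$ is non-increasing, the right-hand side is a lower Riemann sum for $\int_0^{\gamma_{\mathrm{eff}}} f(\gamma) d \gamma = \beta_x(L)$ and so tends to $\beta_x(L)$ as the mesh of the partition tends to $0$. Multiplying by $R$, and using the assumption that $R \beta_x(L) > 1$, any sufficiently fine partition has the desired property.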
We now have, for all $\gamma \geq 0$: $$\lim\limits_{m \to \infty} \frac{ h^0(\widetilde{X},(L^{\otimes m})_{m \gamma})}{h^0(X, L^{\otimes m}) } = f(\gamma) $$ and it follows that by taking $m \gg 0$ we can ensure that each $$\frac{ h^0(\widetilde{X}, (L^{\otimes m})_{m\gamma_j})}{ h^0(X, L^{\otimes m}) }$$ is sufficiently close to $f(\gamma_j)$ so that: \begin{equation}\label{claim3:eqn2} 1 < \frac{1}{h^0(X, L^{\otimes m})} \left( \sum\limits_{j=1}^r c_j (h^0(\widetilde{X}, (L^{\otimes m})_{m\gamma_j}) - h^0(\widetilde{X}, (L^{\otimes m})_{m\gamma_{j+1}}) )\right)\text{.} \end{equation} In addition, by increasing $m$ if necessary we may assume that the $\gamma_j m$ are integers and also, by \cite[Lemma 5.4.24, p.~310]{Laz} for instance, that \begin{equation}\label{claim3:eqn2'}\pi_* \Osh_{\widetilde{X}}(-m\gamma_j E) = \mathcal{I}_x^{m \gamma_j}, \end{equation} for all $j = 1,\dots,r$. In what follows we fix such a large integer $m$ and our goal is to construct a vanishing sequence for $L$ at $x$ with respect to $m$ which is defined over $\mathbf{F}$ and which is a Diophantine constraint with respect to $R$. To this end, let $V$ denote the $\mathbf{F}$-vector space $\Gamma(X_{\mathbf{F}}, L_{\mathbf{F}}^{\otimes m})$, let $N = \dim V - 1$ and $V^j = \Gamma(\widetilde{X},(L^{\otimes m} )_{m\gamma_j})$, for $j = 1,\dots, r$. Using \eqref{claim3:eqn2'} we deduce: \begin{enumerate} \item[(a)]{$V^j = \H^0(X_{\mathbf{F}}, \mathcal{I}_x^{m \gamma_j} \otimes L_{\mathbf{F}}^{\otimes m})$, for $j = 1,\dots, r$;} \item[(b)]{$ V^{j+1} \subseteq V^j$, for $j = 1,\dots, r-1$; and} \item[(c)]{each element $\sigma_j$ of $V^j$ is locally an element of $\mathfrak{m}_x^{m \gamma_j} \Osh_{X_{\mathbf{F}},x}$.} \end{enumerate} Let $V^0 = V$, $\ell_j = \dim V^j$, for $j=0,\dots, r$, and let $s_{r,1},\dots, s_{r,\ell_r}$ be an $\mathbf{F}$-basis for $V^r$. 
We can extend this to a basis for $V^{r-1}$ which we denote by: $s_{r,1},\dots, s_{r,\ell_r},s_{r-1,\ell_r + 1},\dots, s_{r-1,\ell_{r-1}}. $ Recursively, we can construct an $\mathbf{F}$-basis for $V^j$ extending the $\mathbf{F}$-basis for $V^{j+1}$, for $j = 1,\dots, r-1$, and we denote such a basis as: $ s_{r,1},\dots, s_{r,\ell_r},\dots, s_{j,\ell_{j+1}+1},\dots, s_{j,\ell_j}$. In this way, we obtain $\ell_1$ $\mathbf{F}$-linearly independent elements of the $\mathbf{F}$-vector space $V$: $$s_{r,1},\dots, s_{r,\ell_r},\dots, s_{j,\ell_{j+1}+1},\dots, s_{j,\ell_j},\dots, s_{1,\ell_2 + 1},\dots, s_{1,\ell_1}. $$ Since the very ample line bundle $L^{\otimes m}$ is defined over $\mathbf{K}$, if $s_0,\dots, s_N$ denotes a $\mathbf{K}$-basis for the $\mathbf{K}$-vector space $\H^0(X,L^{\otimes m})$, then each of the $\mathbf{F}$-linearly independent sections $s_{j,k}$ of $L_{\mathbf{F}}^{\otimes m}$ is an $\mathbf{F}$-linear combination of the $s_0,\dots, s_N$. Let $\ell_{r+1} = 0$, for each $1 \leq j \leq r$ and each $\ell_{j+1} + 1 \leq k \leq \ell_j$ let the sections $s_{j,k} \in V^j$ have weight $c_{j,k} = c_j$, and let $\eta_{j,k} = \gamma_j$. In this notation equation \eqref{claim3:eqn2} implies that: $$ \sum\limits_{j = 1}^r \sum\limits_{k = \ell_{j+1} + 1}^{\ell_j} c_{j,k} > N+1 $$ and it follows, in light of Corollary \ref{corollary5.5}, that $(m,\eta_\bullet,\sigma_\bullet)$ with $\eta_\bullet = ( \eta_{j,\ell})$ and $\sigma_\bullet = (s_{j,\ell})$, for $1 \leq j \leq r$ and $\ell_{j+1} + 1 \leq \ell \leq \ell_j$, is a vanishing sequence for $L$ with respect to $m$ at $x$ and defined over $\mathbf{F}$ which is also a Diophantine constraint with respect to $R$. \end{proof} Theorem \ref{theorem7.1} has the following consequence: \begin{corollary}\label{corollary7.2} Fix a real number $R> 0$. 
Continuing with the assumptions that $X$ is geometrically irreducible and $x \in X(\overline{\mathbf{K}})$, if $\beta_x(L) > \frac{1}{R}$, then there exists a proper subvariety $W \subsetneq X$ defined over $\mathbf{K}$ and containing $x$ so that $$ \alpha_{x,X}(\{y_i\},L) \geq \frac{1}{R}$$ for all infinite sequences $\{y_i\} \subseteq X(\mathbf{K}) \backslash W(\mathbf{K})$ of distinct points with unbounded height. \end{corollary} \begin{proof} Consequence of Theorem \ref{theorem7.1} and Proposition \ref{proposition6.2}. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{7.3}{\bf Asymptotic volume functions and their relation to Seshadri constants.} Here, in order to prepare for \S \ref{proof:main:results}, we make a few remarks about Seshadri constants and how they are related to the asymptotic relative volume constants of McKinnon-Roth \cite{McKinnon-Roth}. To do so let $$\epsilon_x(L)= \sup \{\gamma \in \RR_{\geq 0}: L_{\gamma, \overline{\mathbf{K}}} \text{ is nef} \}$$ denote the Seshadri constant of $L$ at $x \in X(\overline{\mathbf{K}})$. We refer to \cite[\S 3]{McKinnon-Roth} and \cite{Laz} for more details regarding Seshadri constants. A basic result is that, if we identify $x$ with the closed point of $X_{\overline{\mathbf{K}}}$ that it determines, then $$\epsilon_{x,X}(L) =\inf\limits_{x \in C\subseteq X_{\overline{\mathbf{K}}}} \left\{ \frac{L_{\overline{\mathbf{K}}}.C}{\mathrm{mult}_x(C)} \right\}, $$ where the infimum is taken over all reduced irreducible curves $C$ passing through $x$, see \cite[Proposition 3.2]{McKinnon-Roth} or \cite[Proposition 5.1.5, p.~270]{Laz} for instance. In \cite[Corollary 4.4]{McKinnon-Roth} it is shown that \begin{equation}\label{eqn7.2} \beta_x(L) \geq \frac{\dim X}{\dim X+1} \epsilon_x(L);\end{equation} this inequality is important in the proof of our main results stated in \S \ref{main:results}. 
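To illustrate these invariants, and to see that the inequality \eqref{eqn7.2} can be an equality, consider the standard example $X = \PP^n_{\mathbf{K}}$, $L = \Osh_{\PP^n_{\mathbf{K}}}(1)$ and $x \in X(\mathbf{K})$; the computation which follows is well known, compare with \cite{McKinnon-Roth}. Let $H$ denote the hyperplane class. On the blow-up of $\PP^n_{\overline{\mathbf{K}}}$ at $x$ the class $L_{\gamma,\overline{\mathbf{K}}} = \pi^* H - \gamma E$ is nef for $0 \leq \gamma \leq 1$ and, since the mixed intersection numbers $(\pi^* H)^{n-k}.E^k$ vanish for $0 < k < n$ while $E^n = (-1)^{n-1}$, we find that $$\operatorname{Vol}(L_\gamma) = (\pi^* H - \gamma E)^n = 1 - \gamma^n, $$ for $0 \leq \gamma \leq 1$. On the other hand, an irreducible hypersurface of degree $d$ has multiplicity at most $d$ at $x$ and so $\gamma_{{\mathrm{eff}},x}(L) = 1$. It follows that $$\beta_x(L) = \int_0^1 (1 - \gamma^n)\, d \gamma = \frac{n}{n+1}, $$ while the infimum defining $\epsilon_x(L)$ is achieved by the lines through $x$, so that $\epsilon_x(L) = 1$. In particular equality holds in \eqref{eqn7.2} for this choice of $X$, $L$ and $x$.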
\section{Proof of main results}\label{proof:main:results} In this section we prove the main results of this paper, namely Theorem \ref{theorem1.1} and Corollary \ref{corollary1.2}, which we stated in \S \ref{main:results}. For convenience of the reader we restate these results as Theorem \ref{theorem8.1} and Corollary \ref{corollary8.1} below. To prepare for these results, let $\mathbf{K}$ be the function field of an irreducible projective variety $Y \subseteq \PP^r_{\overline{\mathbf{k}}}$ defined over an algebraically closed field $\overline{\mathbf{k}}$ of characteristic zero and non-singular in codimension $1$ and fix a prime divisor $\mathfrak{p} \subseteq Y$. Fix an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$ and suppose that $X \subseteq \PP^n_{\mathbf{K}}$ is a geometrically irreducible subvariety. The results we prove here show how the subspace theorem can be used to relate $\alpha_x(L)$, for $x \in X(\overline{\mathbf{K}})$ and $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_{ X}$, to $\beta_x(L)$. \begin{theorem}\label{theorem8.1} Let $\mathbf{K}$ be the function field of an irreducible projective variety $Y \subseteq \PP^r_{\overline{\mathbf{k}}}$, defined over an algebraically closed field $\overline{\mathbf{k}}$ of characteristic zero, assume that $Y$ is non-singular in codimension $1$ and fix a prime divisor $\mathfrak{p} \subseteq Y$. Fix an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$ and suppose that $X \subseteq \PP^n_{\mathbf{K}}$ is a geometrically irreducible subvariety, that $x \in X(\overline{\mathbf{K}})$, and that $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_{X}$. In this setting, either $$\alpha_x(L;\mathfrak{p}) \geq \beta_x(L) \geq \frac{\dim X}{\dim X + 1}\epsilon_x(L) $$ or $$\alpha_{x,X}(L;\mathfrak{p}) = \alpha_{x,W}(L|_{W};\mathfrak{p}) $$ for some proper subvariety $W \subsetneq X$. 
\end{theorem} \begin{proof}[Proof of Theorem \ref{theorem8.1} and Theorem \ref{theorem1.1}] It suffices to show that if $\alpha_x(L;\mathfrak{p}) < \beta_x(L)$, then $X$ has dimension at least two and $\alpha_{x,X}(L) = \alpha_{x,W}(L|_{W})$ for some proper subvariety $W \subsetneq X$ having dimension at least $1$ and containing $x$. To this end, if $\alpha_x(L;\mathfrak{p}) < \beta_x(L)$, then we can choose $R> 0$ so that $$\alpha_x(L;\mathfrak{p}) < 1/R < \beta_x(L).$$ Since $\beta_x(L) > \frac{1}{R}$, Theorem \ref{theorem7.1} implies existence of a vanishing sequence $(m,\gamma_\bullet, \sigma_\bullet)$ for $L$ at $x$ with respect to some positive integer $m$ which is also a Diophantine constraint with respect to $R$. In addition we have $\alpha_{x,X}(L; \mathfrak{p})<\frac{1}{R}$. The hypothesis of Theorem \ref{theorem6.3} is satisfied and its conclusion implies that $\alpha_{x,X}(L) = \alpha_{x,W}(L|_{W})$ for some proper subvariety $W \subsetneq X$ having dimension at least $1$ and containing $x$. \end{proof} Theorem \ref{theorem8.1} has the following consequence: \begin{corollary}\label{corollary8.1} In the setting of Theorem \ref{theorem8.1}, we have that $\alpha_x(L;\mathfrak{p}) \geq \frac{1}{2} \epsilon_x(L)$. If $\alpha_x(L;\mathfrak{p}) = \frac{1}{2} \epsilon_x(L)$, then $\alpha_{x,X}(L;\mathfrak{p}) = \alpha_{x,C}(L|_{ C};\mathfrak{p})$ for some curve $C \subseteq X$ defined over $\mathbf{K}$. \end{corollary} \begin{proof} Follows from Theorem \ref{theorem8.1} using induction. In more detail, let $g$ denote the dimension of $X$. If $g \geq 1$, then $$\frac{g}{g+1}\epsilon_x(L) \geq \frac{1}{2}\epsilon_x(L). $$ Thus if $\alpha_x(L;\mathfrak{p}) \geq \frac{g}{g+1}\epsilon_x(L)$, then $\alpha_x(L;\mathfrak{p}) \geq \frac{1}{2}\epsilon_x(L)$. 
If $\alpha_x(L;\mathfrak{p}) < \frac{g}{g+1}\epsilon_x(L)$, then Theorem \ref{theorem8.1} implies that $\alpha_x(L;\mathfrak{p}) = \alpha_{x,W}(L|_{W};\mathfrak{p})$ for some proper subvariety $W \subsetneq X$ and \cite[Lemma 2.17]{McKinnon-Roth} (proven for the case that $\mathbf{K}$ is a number field but equally valid for the case that $\mathbf{K}$ is a function field) implies that we may take $W$ to be irreducible over $\overline{\mathbf{K}}$. By induction, $\alpha_{x,W}(L|_{W};\mathfrak{p}) \geq \frac{1}{2}\epsilon_x(L|_{W})$. On the other hand, $\alpha_x(L;\mathfrak{p}) = \alpha_{x,W}(L|_{W};\mathfrak{p})$ and $\epsilon_x(L) \leq \epsilon_{x,W}(L|_{W})$, by \cite[Proposition 3.4 (c)]{McKinnon-Roth}, and it follows that $$\alpha_x(L;\mathfrak{p}) = \alpha_{x,W}(L|_{W};\mathfrak{p}) \geq \frac{1}{2} \epsilon_{x}(L|_{W}) \geq \frac{1}{2}\epsilon_x(L). $$ Finally if $\alpha_x(L;\mathfrak{p}) = \frac{1}{2}\epsilon_x(L|_{W})$, then we conclude that $W$ is a curve defined over $\mathbf{K}$. \end{proof} \section{Approximation constants for Abelian varieties and curves}\label{9} Throughout this section, we let $\mathbf{K}$ be the function field of a smooth projective curve $C$ over an algebraically closed field $\overline{\mathbf{k}}$ of characteristic zero. We also fix an algebraic closure $\overline{\mathbf{K}}$ of $\mathbf{K}$. Our goal here is to use the properties of the distance functions, which we recorded in \S \ref{4}, to prove an approximation theorem for rational points of an abelian variety $A$ over $\mathbf{K}$. This theorem, Theorem \ref{theorem9.4}, and its proof are very similar to what is done in the number field setting, see for example \cite[p.~98--99]{Serre:Mordell-Weil-Lectures}. We then use Theorem \ref{theorem9.4} to study approximation constants for $\overline{\mathbf{K}}$-rational points of an irreducible projective curve $B$ over $\mathbf{K}$.
Specifically, in \S \ref{9.6} we prove Corollary \ref{corollary1.3} which we stated in \S \ref{main:results}. \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{9.2} We recall a special case of Roth's theorem for $\PP^1$ from which we deduce an approximation result, Theorem \ref{theorem9.2}, applicable to projective varieties over $\mathbf{K}$. To state Roth's theorem for $\PP^1$, fix $p \in C(\overline{\mathbf{k}})$ and let $$ d_p(\cdot,\cdot) : \PP^1(\overline{\mathbf{K}}) \times \PP^1(\overline{\mathbf{K}}) \rightarrow [0,1]$$ denote the projective distance function that it determines. \begin{theorem}[Roth's theorem for $\PP^1$]\label{theorem9.1} Let $\mathbf{K}$ be the function field of a smooth projective curve over an algebraically closed field of characteristic zero. Let $x \in \PP^1(\overline{\mathbf{K}})$ and $\delta > 2$. Then there is no infinite sequence $\{x_i \} \subseteq \PP^1(\mathbf{K})$ of distinct points with unbounded height so that $$d_p(x,x_i) \to 0 \text{ and }d_p(x,x_i)H_{\Osh_{\PP^1}(1)}(x_i)^{\delta} \leq 1, $$ as $i \to \infty$. \end{theorem} \begin{proof} This is implied by the main theorem of \cite{wang:1996} for example. \end{proof} As in \cite[\S 7.3]{Serre:Mordell-Weil-Lectures}, combined with the local description of the distance functions given by Lemma \ref{lemma4.4}, Roth's theorem for $\PP^1$ implies: \begin{theorem}[Compare with {\cite[First theorem on p.~98]{Serre:Mordell-Weil-Lectures}}]\label{theorem9.2} Suppose that $\mathbf{K}$ is the function field of a smooth projective curve over an algebraically closed field of characteristic zero. Let $X \subseteq \PP^n_{\mathbf{K}}$ be a projective variety and $L = \Osh_{\PP^n_{\mathbf{K}}}(1)|_X$. If $x \in X(\overline{\mathbf{K}})$ and $\delta > 2$, then there is no infinite sequence $\{x_i\} \subseteq X(\mathbf{K})$ of distinct points with unbounded height such that $$ d_p(x,x_i) \to 0 \text{ and } d_p(x,x_i) H_L(x_i)^\delta \leq 1,$$ as $i \to \infty$.
\end{theorem} \begin{proof} As in \cite[p.~98]{Serre:Mordell-Weil-Lectures}, using the local description of the distance functions $d_p(x,\cdot)$ given by Lemma \ref{lemma4.4}, Theorem \ref{theorem9.2} follows from Theorem \ref{theorem9.1} applied to the coordinates of the $x_i$. \end{proof} Theorem \ref{theorem9.2} can be used to give a lower bound for the approximation constant $\alpha_x(L)$. \begin{corollary}\label{corollary9.2'} In the setting of Theorem \ref{theorem9.2}, $\alpha_x(L) \geq 1/2$. \end{corollary} \begin{proof} Immediate considering the definition of $\alpha_x(L)$ in conjunction with the fact that the approximation constant is the reciprocal of the approximation exponent. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{9.3} Our approximation theorem for rational points of an abelian variety $A$ over $\mathbf{K}$ is proved in a manner similar to what is done in the number field case, see for example \cite[\S 7.3]{Serre:Mordell-Weil-Lectures}, and relies on the weak Mordell-Weil theorem. Specifically, we use Theorem \ref{theorem9.2}, combined with the properties of the distance functions that we stated in \S \ref{4}, to prove the following result. \begin{theorem}[Compare with {\cite[Second theorem on p.~98]{Serre:Mordell-Weil-Lectures}}]\label{theorem9.4} Let $\mathbf{K}$ be the function field of a smooth projective curve $C$ over an algebraically closed field $\overline{\mathbf{k}}$ of characteristic zero and fix $p \in C(\overline{\mathbf{k}})$. If $L$ is a very ample line bundle on an abelian variety $A$ over $\mathbf{K}$, $x \in A(\overline{\mathbf{K}})$ and $\delta > 0$, then there is no infinite sequence of distinct points $\{x_i \} \subseteq A(\mathbf{K})$ with unbounded height and having the property that $$d_p(x,x_i) \to 0 \text{ and }d_p(x,x_i)H_L(x_i)^\delta \leq 1, $$ for all $i \gg 0$. 
\end{theorem} \begin{proof} Choose an embedding $A \hookrightarrow \PP^n$ afforded by $L$, let $\delta > 0$, choose an integer $m \geq 1$ with $(m^2-1)\delta > 3$, and let $\{x_i \} \subseteq A(\mathbf{K})$ be a sequence of distinct points with unbounded height with the property that \begin{equation}\label{eqn9.2} d_p(x,x_i) \to 0 \text{ and }d_p(x,x_i)H_L(x_i)^\delta \leq 1, \end{equation} as $i \to \infty$. The weak Mordell-Weil theorem, \cite[Theorem 10.5.14]{Bombieri:Gubler}, implies that the group $A(\mathbf{K})/mA(\mathbf{K})$ is finite. By passing to a subsequence with unbounded height if necessary, it follows that there exists $a \in A(\mathbf{K})$ and $x_i' \in A(\mathbf{K})$ so that \begin{equation}\label{eqn9.3} x_i = m x_i' + a, \end{equation} for all $i$. Let $d_p'(\cdot,\cdot)$ denote the distance function obtained by using the embedding $A \xrightarrow{\tau_a} A \hookrightarrow \PP^n; $ here $\tau_a : A \rightarrow A$ denotes translation by $a \in A(\mathbf{K})$ in the group law. The distance functions $d'_p(\cdot,\cdot)$ and $d_p(\cdot,\cdot)$ are equivalent by Proposition \ref{proposition4.3}. Thus \eqref{eqn9.3} together with the fact that $d_p(x,x_i) \to 0$, as $i \to \infty$, implies that $ d_p(x-a,mx_i') \to 0,$ as $i \to \infty$. In particular $ \{mx_i'\} \to x - a,$ as $i \to \infty$. Now note that since $A(\overline{\mathbf{K}})$ is a divisible group, there exists $b \in A(\overline{\mathbf{K}})$ so that $mb = x-a.$ As a consequence, since $ \{mx_i'\} \to x - a$ as $i \to \infty$, we have that $ d_p(mb,mx_i') \to 0,$ as $i \to \infty$. Next consider the morphism $[m]_A : A \rightarrow A$, defined by multiplication by $m$ in the group law, near $b$. In particular using the fact that $[m]_A$ is \'{e}tale in conjunction with Lemma \ref{lemma4.4} and Proposition \ref{proposition4.5}, we deduce that \begin{equation}\label{eqn9.3'} d_{p}(b,x_i') \to 0, \end{equation} as $i \to \infty$ too. 
In more detail, let $\mathbf{F}$ be the field of definition of $b$. Lemma \ref{lemma4.4} implies that there exists an affine open subset $U$ of $A_\mathbf{F} = A \times_{\operatorname{Spec} \mathbf{K}} \operatorname{Spec} \mathbf{F}$ and elements $u_1,\dots,u_r$ of $\Gamma(U,\Osh_{A_\mathbf{F}})$ which generate the maximal ideal of $b$ and positive constants $c\leq C$ so that the inequality \begin{equation}\label{eqn9.4} c d_p(b,x_i') \leq \min(1,\max(|u_1(x_i')|_p,\dots,|u_r(x_i')|_p)) \leq C d_p(b,x_i') \end{equation} holds true for all $x_i' \in U(\mathbf{K})$. Now let $\mathfrak{m}_{mb}$ denote the maximal ideal of $mb$ and $\mathfrak{m}_b$ the maximal ideal of $b$. Then, since the morphism $[m]_A$ is \'{e}tale, we have $\mathfrak{m}_{mb}\cdot\Osh_b = \mathfrak{m}_b$, see for example \cite[Ex. III 10.3]{Hart}. Thus, if $u_1',\dots,u_r'$ are generators for $\mathfrak{m}_{mb}$, the maximal ideal of $mb$, then their pullbacks $[m]_A^* u_1',\dots, [m]_A^* u_r'$ generate $\mathfrak{m}_b$ too. We now consider implications of Lemma \ref{lemma4.1}. Specifically, Lemma \ref{lemma4.1} implies that the functions \begin{equation}\label{eqn9.4'}\max(|u_1(\cdot)|_p,\dots,|u_r(\cdot)|_p) \end{equation} and \begin{equation}\label{eqn9.4''}\max(|[m]_A^* u_1'(\cdot)|_p,\dots, |[m]_A^* u_r'(\cdot)|_p) \end{equation} are equivalent on every subset $E \subseteq U(\overline{\mathbf{K}})$ which is bounded in $U$. On the other hand, since $$\max(|[m]_A^* u_1'(x_i')|_p,\dots, |[m]_A^*u_r'(x_i')|_p) = \max(|u_1'(mx_i')|_p,\dots,|u_r'(mx_i')|_p ) \to 0, $$ as $i \to \infty$, it follows, by the equivalence of the functions \eqref{eqn9.4'} and \eqref{eqn9.4''}, that \begin{equation}\label{eqn9.4'''} \max(|u_1(x_i')|_p,\dots,|u_r(x_i')|_p) \to 0, \end{equation} as $i \to \infty$ too. Combining \eqref{eqn9.4} and \eqref{eqn9.4'''} we have that $d_p(b,x_i') \to 0$ as $i \to \infty$ too.
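Before turning to heights, we recall, following \cite[\S 9.2 and \S 9.3]{Bombieri:Gubler}, the mechanism behind the quadratic behaviour used in the next step. On $A(\mathbf{K})$ there is a decomposition $$\log H_L = q + \ell + O(1), $$ where $q$ is a non-negative quadratic form, with associated bilinear form $b(\cdot,\cdot)$, and where $\ell$ is additive and satisfies $|\ell(y)| \leq c_1 q(y)^{1/2} + c_2$ for positive constants $c_1$ and $c_2$. Expanding $q(my + a) = m^2 q(y) + 2m b(y,a) + q(a)$ and applying the Cauchy-Schwarz inequality $|b(y,a)| \leq q(y)^{1/2} q(a)^{1/2}$, we obtain $$\log H_L(my + a) = m^2 q(y) + O\big(q(y)^{1/2} + 1\big), $$ where the implied constant depends on $m$ and $a$ but not on $y$; the limit \eqref{eqn9.5} below is deduced from this estimate.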
Now, by the theory of N\'{e}ron--Tate heights, see for instance \cite[\S 9.2 and \S 9.3]{Bombieri:Gubler}, the function $\log H_L$ is quadratic up to a bounded function. In particular, since $x_i = mx_i' + a$, it follows that $\{x_i'\}$ has unbounded height and also that \begin{equation}\label{eqn9.5} \frac{\log H_L(x_i)}{\log H_L(x_i')} \to m^2, \end{equation} and thus \begin{equation}\label{eqn9.6} H_L(x_i')^{m^2-1} \leq H_L(x_i), \end{equation} for all $i \gg 0$. Consider now the finite \'{e}tale morphism $\tau_a \circ [m]_A : A \rightarrow A $ and let $u_1,\dots,u_r$ be regular functions which generate the maximal ideal of $b$. By Proposition \ref{proposition4.5}, there exists a positive constant $c$ so that \begin{equation}\label{eqn9.7} c d_p(b,x_i') \leq \max(|u_1(x_i')|_p,\dots,|u_r(x_i')|_p). \end{equation} On the other hand, since $\tau_a \circ [m]_A$ is an \'{e}tale morphism we can choose $u_1,\dots, u_r$ so that there exist regular functions $u_1',\dots,u_r'$ which generate the maximal ideal of $x$ and which have the property that $$u_j'(mx_i'+a)=(\tau_a \circ [m]_A)^*u'_j(x_i') = u_j(x_i').$$ Thus, by Proposition \ref{proposition4.5}, there exists a positive constant $C$ so that \begin{equation}\label{eqn9.8} \max(|u_1(x_i')|_p,\dots,|u_r(x_i')|_p ) = \max(|u_1'(mx_i'+a)|_p,\dots,|u_r'(mx_i'+a)|_p) \leq C d_p(x,x_i). \end{equation} Combining \eqref{eqn9.7} and \eqref{eqn9.8} we obtain \begin{equation}\label{eqn9.9} c d_p(b,x_i') \leq C d_p(x,x_i) \end{equation} and, using \eqref{eqn9.3'}, \eqref{eqn9.9} and \eqref{eqn9.6}, it follows from \eqref{eqn9.2} that, by passing to a subsequence if necessary so that $H_L(x_i') \to \infty$, $$d_p(b,x_i') \to 0 \text{ and } d_p(b,x_i') < H_L(x_i')^{-k} ,$$ for some $k > 2$ and all $i \gg 0$, which contradicts Theorem \ref{theorem9.2}. \end{proof} Theorem \ref{theorem9.4} has the following consequence which we will use in \S \ref{9.5} and \S \ref{9.6}.
\begin{corollary}\label{corollary9.4'} In the setting of Theorem \ref{theorem9.4}, we have $\alpha_x(L) = \infty$. \end{corollary} \begin{proof} Considering the definition of $\alpha_x(L)$ given in \S \ref{3.8'}, the conclusion of Corollary \ref{corollary9.4'} follows immediately from Theorem \ref{theorem9.4}. \end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{9.5} We now explain how Theorem \ref{theorem9.4} and Corollary \ref{corollary9.4'} allow for the calculation of the approximation constant $\alpha_x(L)=\alpha_x(L;p)$ for $L$ a very ample line bundle on $B$ an irreducible curve over $\mathbf{K}$ and $x \in B(\overline{\mathbf{K}})$. This is the content of Theorem \ref{theorem9.1'} which is proved in a manner similar to \cite[Theorem 2.16]{McKinnon-Roth}. \begin{theorem}\label{theorem9.1'} Let $\mathbf{K}$ be the function field of a smooth projective curve over an algebraically closed field of characteristic zero. If $\alpha_x(L) < \infty$ for $L$ a very ample line bundle on $B$ an irreducible curve over $\mathbf{K}$ and $x \in B(\overline{\mathbf{K}})$, then $B$ has geometric genus equal to zero. \end{theorem} \begin{proof} Let $\phi : \widetilde{B} \rightarrow B$ be the normalization morphism. Since the pullback of $L$ to $\widetilde{B}$ via $\phi$ is ample, \cite[Ex. III 5.7 (d)]{Hart}, and since $\alpha_x(L) = N \alpha_x(L^{\otimes N})$, for $N > 0$, without loss of generality we may assume that the pullback of $L$ to $\widetilde{B}$ is very ample. We next note that given an infinite sequence $\{x_i \} \subseteq B(\mathbf{K})$ of distinct points with unbounded height and $\{x_i\} \to x$, we may, by passing to a subsequence with unbounded height, assume that none of the $x_i$ are the finitely many points where $\phi$ is not an isomorphism. 
We may also assume that the sequence $\{\phi^{-1}(x_i)\}$ converges with respect to $d_p(\cdot,\cdot)$, the distance function on $\widetilde{B}$ determined by $\phi^* L$, to one of the points $q \in \phi^{-1}(x)$. Indeed, if the branch corresponding to $q$ has multiplicity $m_q$, then locally $\phi$ is described by functions in the $m_q$th power of the maximal ideal of $q$. Consequently, as can be deduced from Lemma \ref{lemma4.4}, $d_p(x,\phi(q_i))$ is equivalent to $d_p(q,q_i)^{m_q}$ as $i \to \infty$. Conversely, it is clear that given an infinite sequence $\{q_i\} \subseteq \widetilde{B}$ of distinct points with unbounded height and $\{q_i\} \to q$ for some $q \in \phi^{-1}(x)$, we then have that $\{\phi(q_i)\} \to x$. Thus, to compute $\alpha_x(L)$, it suffices to consider $\alpha_x(\{x_i\},L)$ for those infinite sequences of distinct points with unbounded height $\{x_i\} \subseteq B(\overline{\mathbf{K}})$ which arise from infinite sequences $\{q_i \} \subseteq \widetilde{B}(\mathbf{K})$ with unbounded height and converging to some $q \in \widetilde{B}(\overline{\mathbf{K}})$ with $q \in \phi^{-1}(x)$. Now given such a sequence $\{q_i \} \to q$, we have \begin{equation}\label{eqn5.1} H_{\phi^*L}(q_i) \sim H_L(\phi(q_i)), \end{equation} for all $i$. Since $d_p(x,\phi(q_i))$ is equivalent to $d_p(q,q_i)^{m_q}$ as $i \to \infty$, this fact combined with \eqref{eqn5.1} implies that \begin{equation}\label{eqn9.2'} \alpha_x(\{\phi(q_i)\},L) \sim \frac{1}{m_q} \alpha_q(\{q_i\}, \phi^*L) \end{equation} and so, by considering the Abel-Jacobi map of $\widetilde{B}$, compare with the discussion given in \S \ref{3.10}, Corollary \ref{corollary9.4'} implies that the righthand side of \eqref{eqn9.2'} is infinite if $B$ has geometric genus at least $1$. 
\end{proof} \vspace{3mm}\refstepcounter{subsection}\noindent{\bf \thesubsection.} {}\label{9.6} Having established Theorem \ref{theorem9.1'}, we are now able to refine Corollary \ref{corollary1.2} and, in particular, establish Corollary \ref{corollary1.3}. \begin{theorem}\label{theorem9.1''} Assume that $\mathbf{K}$ is the function field of a smooth projective curve over an algebraically closed field of characteristic zero. Let $X$ be a geometrically irreducible projective variety defined over $\mathbf{K}$ and $L$ a very ample line bundle on $X$ defined over $\mathbf{K}$. If $x$ is a $\overline{\mathbf{K}}$-rational point of $X$, then the inequality $\alpha_x(L) \geq \frac{1}{2}\epsilon_x(L)$ holds true. If equality holds, then $\alpha_{x,X}(L) = \alpha_{x,B}(L|_B)$ for some rational curve $B\subseteq X$ defined over $\mathbf{K}$. \end{theorem} \begin{proof}[Proof of Theorem \ref{theorem9.1''} and Corollary \ref{corollary1.3}] By Corollary \ref{corollary8.1}, the given inequalities hold true and if equality holds true then the approximation constant is computed on an irreducible curve $B$ defined over $\mathbf{K}$ and passing through $x$. On the other hand, since $\epsilon_x(L) < \infty$ it follows that $\alpha_x(L)$ must be finite too and so $B$ must be rational by Theorem \ref{theorem9.1'}. \end{proof} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} \label{section:introduction} One of the classic questions about stochastic processes is whether they can hit a given set. That is, for a process $X_t$ taking values in a space $S$, and for $A\subset S$, do we have \begin{equation*} \mathbf{P}(X_t\in A\text{ for some $t$})>0? \end{equation*} For example, consider the Bessel process $R_t$ with parameter $n$, which satisfies \begin{equation*} dR=\frac{n-1}{2R}dt+dW \end{equation*} where $W(t)$ is a one-dimensional Brownian motion and we assume that $R_0>0$. It is well known that if we allow $n$ to take nonnegative real values, then $R_t$ can hit 0 iff $n<2$. For Markov processes such as $R_t$, harmonic functions and potential theory are powerful tools which have led to rather complete answers to such questions; see \cite{mper10} or most other books on Markov processes. For stochastic partial differential equations (SPDE), potential theory becomes less tractable due to the infinite-dimensional state space of solutions, and hitting questions have not been as thoroughly studied. To be specific, solutions $u(t,x)$ usually depend on a time parameter $t$ and a spatial parameter $x$. So for a fixed time $t$, the solution $u(t,x)$ is a function of $x$, and the state space of the process is an infinite dimensional function space. Nonetheless, hitting questions have been studied for certain SPDE, see \cite{DKN07,DS10,DS15,mt01,NV09} among others. These papers deal with the stochastic heat and wave equations either with no drift or with well-behaved drift. As for SPDE analogues of the Bessel process, the only results known to the authors are in Mueller \cite{mue98} and Mueller and Pardoux \cite{mp99}. Here we assume that $u(t,x)$ is scalar valued, and as before $t>0$. But now we let $x$ lie in the unit circle $[0,1]$ with endpoints identified. We also assume that $u(0,x)$ is continuous and strictly positive. Here and throughout the paper we write $\dot{W}(t,x)$ for two-parameter white noise.
Suppose $u$ satisfies the following SPDE. \begin{equation*} \partial_tu(t,x)=\Delta u(t,x)+u^{-\alpha}(t,x)+g(u(t,x))\dot{W}(t,x) \end{equation*} where there exist constants $0<c_0<C_0<\infty$ for which $c_0\leq g(u)\leq C_0$ for all values of $u$. Let $\tau$ be the first time at which $u$ hits 0, and let $\tau=\infty$ if $u$ does not hit 0. Then $\mathbf{P}(\tau<\infty)>0$ if $\alpha<3$, see Corollary 1.1 of \cite{mue98}. Also, $\mathbf{P}(\tau<\infty)=0$ if $\alpha>3$, see Theorem 1 of \cite{mp99}. The situation for vector-valued solutions $u(t,x)$ of the stochastic heat equation is unclear. Indeed, the curve $x\to u(t,x)$ may wind around 0, and perhaps then $u$ will contract to 0 in cases where it would ordinarily stay away from 0. The purpose of this paper is to study hitting questions for the stochastic wave equation with scalar solutions and with strong drift. As is well known, there are crucial differences between the heat and wave equations. For example, the heat equation satisfies a maximum principle while the wave equation does not. The same holds for the comparison principle, which states that if the stochastic heat equation has two solutions with the first solution initially larger than the second, then the first solution will almost surely remain larger than the second as time goes on. So while certain arguments from the heat equation case carry over, new ideas are required. Here is the setup for our problem. Again, we let $t\geq0$, and $x$ lies in the circle \begin{equation*} \mathbf{I}=[0,J] \end{equation*} with endpoints identified. We study scalar-valued solutions $u(t,x)$ to the following equation. \begin{align} \label{eq:stoch-wave} \partial_t^2 u(t,x) &=\Delta u(t,x)+u^{-\alpha}(t,x)+g(u(t,x))\dot{W}(t,x) \\ u(0,x)&=u_0(x) \nonumber\\ \partial_tu(0,x)&=u_1(x). \nonumber \end{align} As usual, $u$ and our two-parameter white noise $\dot{W}$ depend on a random parameter $\omega$ which we suppress.
As for $x$ taking values in higher-dimensional spaces, it is well known that (\ref{eq:stoch-wave}) is well-posed only in one spatial dimension. Indeed, in two or more spatial dimensions we would expect that the solution $u$ only exists as a generalized function, but then it is hard to give meaning to nonlinear terms such as $u^{-\alpha}$ or $g(u)$. Next, we define the first time that $u$ hits 0. Let \begin{equation*} \tau_\infty=\inf\Big\{t>0: \inf_{0\leq s<t}\inf_{x\in\mathbf{I}}u(s,x)=0\Big\} \end{equation*} and let $\tau_\infty=\infty$ if the set in the above definition is empty. Before stating our main theorems, we give some assumptions. \medskip \noindent \textbf{Assumptions} \begin{enumerate} \item[(i)] $u_0$ is H\"older continuous of order $1/2$ on $\mathbf{I}$. \item[(ii)] There exist constants $0<c_0<C_0<\infty$ such that $c_0\leq u_0(x)\leq C_0$ for all $x\in \mathbf{I}$. \item[(iii)] $u_1$ is H\"older continuous of order $1/2$ on $\mathbf{I}$ and hence bounded. \item[(iv)] There exist constants $0<c_g<C_g<\infty$ such that $c_g\leq g(y)\leq C_g$ for all $y\in\mathbf{R}$. \end{enumerate} \medskip Here are our main theorems. \begin{theorem} \label{th:2} Suppose that $u(t,x)$ satisfies (\ref{eq:stoch-wave}), and that the above assumptions hold. Then $\alpha>3$ implies \begin{equation*} \mathbf{P}(\tau_\infty<\infty)=0. \end{equation*} That is, $u$ does not hit 0. \end{theorem} \begin{theorem} \label{th:1} Suppose that $u(t,x)$ satisfies (\ref{eq:stoch-wave}), and that the above assumptions hold. Then $0<\alpha<1$ implies \begin{equation*} \mathbf{P}(\tau_\infty<\infty)>0. \end{equation*} That is, $u$ can hit 0. \end{theorem} Here is the plan for the paper. In Section \ref{sec:technicalities} we give a rigorous formulation of (\ref{eq:stoch-wave}); in particular, the solution is only defined up to the first time $t$ that $u(t,x)=0$ for some $x$, since $u^{-\alpha}(t,x)$ blows up there. The same is true for the stochastic heat equation discussed earlier.
In Section \ref{sec:pf-thm-2} we prove Theorem \ref{th:2}, and in Section \ref{sec:pf-thm-1} we prove Theorem \ref{th:1}. Note the gap between $\alpha<1$ and $\alpha>3$. Since there is no comparison principle for the wave equation, we cannot be certain that there exists a critical value $\alpha_0$ such that $u$ can hit 0 for $\alpha<\alpha_0$ but not for $\alpha>\alpha_0$. We strongly believe that such a critical value exists, but we leave the existence and identification of $\alpha_0$ as an open problem. \section{Technicalities} \label{sec:technicalities} \subsection{Rigorous Formulation of the Wave SPDE} For the most part we follow Walsh \cite{wal86}, although we could also use the formulation found in Da Prato and Zabczyk \cite{dz92}. First we recall the definition of the one-dimensional wave kernel on $x\in\mathbf{R}$: \begin{equation*} S(t,x)=\frac{1}{2}\mathbf{1}(|x|\leq t). \end{equation*} See \cite{eva98} for this classical material. If we regard $S(t,x)$ as a Schwartz distribution, then for $t\geq0$ we can write \begin{equation*} \partial_tS(t,x)=\frac{1}{2}\delta(x-t)+\frac{1}{2}\delta(x+t). \end{equation*} From now on, we interpret such expressions as Schwartz distributions. Now we switch to the circle $x\in\mathbf{I}$, as defined earlier. It is also a classical result that for $x\in\mathbf{I}$, the wave kernel $S_\mathbf{I}$ and its time derivative are given by \begin{align*} S_\mathbf{I}(t,x)&=\sum_{n\in\mathbf{Z}}S(t,x+nJ) \\ \partial_tS_\mathbf{I}(t,x)&=\frac{1}{2}\sum_{n\in\mathbf{Z}}\Big(\delta(nJ+x-t)+\delta(nJ+x+t)\Big). \end{align*} Again, we regard $\partial_tS_\mathbf{I}$ as a Schwartz distribution. Let $w(t,x)$ be the solution of the linear deterministic wave equation on $x\in\mathbf{I}$, with the same initial data as $u$.
That is, \begin{align*} \partial_t^2 w(t,x) &=\Delta w(t,x) \\ w(0,x)&=u_0(x) \\ \partial_tw(0,x)&=u_1(x) \end{align*} with periodic boundary conditions, so that \begin{align*} w(t,x) &= \int_{0}^{J}\Big(\partial_tS_\mathbf{I}(t,x-y)u_0(y) +S_\mathbf{I}(t,x-y)u_1(y)\Big)dy \\ &= \frac{1}{2}\big(u_0(x-t)+u_0(x+t)\big) +\int_{0}^{J}S_\mathbf{I}(t,x-y)u_1(y)dy \end{align*} where expressions such as $x-y$, $x-t$, and $x+t$ are interpreted using arithmetic modulo $J$. We note that by Assumptions (i) and (iii), we can conclude that $w(t,x)$ is H\"older continuous of order $1/2$ in $(t,x)$ jointly. Using Duhamel's principle, if $u^{-\alpha}$ and $g(u(s,y))\dot{W}$ were smooth, we could write \begin{align} \label{eq:stoch-wave-int-eq} u(t,x)=&w(t,x)+\int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y)u(s,y)^{-\alpha}dyds \\ & +\int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y)g(u(s,y))W(dyds). \nonumber \end{align} If $u^{-\alpha}$ had no singularities, we could use this \textit{mild form} to give rigorous meaning to (\ref{eq:stoch-wave}), where we define the final double integral using Walsh's theory of martingale measures, see \cite{wal86}. One could also use the Hilbert space theory given in Da Prato and Zabczyk \cite{dz92}. To deal with the singularity of $u^{-\alpha}$, we use truncation and then take the limit as the truncation is removed. For $N=1,2,\ldots$ define $u_N(t,x)$ as the solution of \begin{align} \label{eq:stoch-wave-truncated} u_N(t,x) =&w(t,x)+\int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y) \Big[u_N(s,y)\vee(1/N)\Big]^{-\alpha}dyds \nonumber\\ &+ \int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y)g(u_N(s,y))W(dyds). \end{align} Here $a\vee b=\max(a,b)$. Note that if $\alpha>0$, then $[u\vee(1/N)]^{-\alpha}$ is a Lipschitz function of $u$, with Lipschitz constant $\alpha N^{\alpha+1}$. It is well known that SPDE such as (\ref{eq:stoch-wave-truncated}) with Lipschitz coefficients have unique strong solutions valid for all time, see \cite{wal86}, Chapter III.
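To make the truncated equation (\ref{eq:stoch-wave-truncated}) concrete, here is a rough leapfrog discretization (ours alone; the grid sizes, parameter values, and crude noise scaling are illustrative assumptions, not part of the paper). The noise mass of a space-time cell of size $\Delta t\times\Delta x$ is taken to be Gaussian with variance $\Delta t\,\Delta x$:

```python
import numpy as np

def simulate_truncated_wave(alpha=0.5, N=100, J=1.0, T=0.5, nx=64, seed=0):
    """Leapfrog scheme for the truncated stochastic wave equation
        u_tt = u_xx + [u v (1/N)]^(-alpha) + g(u) Wdot,   with g = 1,
    on the circle [0, J], with u(0,.) = 1 and u_t(0,.) = 0.
    Returns the field u at time T as an array of length nx."""
    rng = np.random.default_rng(seed)
    dx = J / nx
    dt = 0.5 * dx                 # CFL condition dt <= dx for stability
    steps = int(T / dt)
    u_prev = np.ones(nx)          # u at time level m-1
    u = np.ones(nx)               # u at time level m
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        drift = np.maximum(u, 1.0 / N) ** (-alpha)  # Lipschitz truncation
        # One cell carries noise mass dW ~ N(0, dt*dx); the second-order
        # update then picks up dt^2 * Wdot ~ dt * dW / dx.
        dW = rng.normal(0.0, np.sqrt(dt * dx), nx)
        u_next = 2.0 * u - u_prev + dt**2 * (lap + drift) + dt * dW / dx
        u_prev, u = u, u_next
    return u
```

This is only a caricature (convergence of such schemes for SPDE is delicate), but it shows how the truncation $[u\vee(1/N)]^{-\alpha}$ keeps the drift bounded on the grid.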
It follows for each $N=1,2,\ldots$ that (\ref{eq:stoch-wave-truncated}) has a unique strong solution $u_N$ valid for all $t\geq0, x\in[0,J]$. Now let \begin{equation*} \tau_N=\inf\Big\{t>0: \inf_{x\in[0,J]}u_N(t,x)\leq1/N\Big\}. \end{equation*} From the definition, we see that almost surely \begin{equation*} u_{N_1}(t,x)=u_{N_2}(t,x) \end{equation*} for all $t\in[0,\tau_{N_1}\wedge\tau_{N_2})$ and $x\in\mathbf{I}$. Here $a\wedge b=\min(a,b)$. It also follows that $\tau_1\leq\tau_2\leq\cdots$ and so we can almost surely define \begin{equation*} \tau=\sup_N\tau_N. \end{equation*} We allow the possibility that $\tau=\infty$. Note that this definition of $\tau$ is consistent with the definition given in the introduction. So, for $t<\tau$ and $x\in\mathbf{I}$, we can define \begin{equation*} u(t,x)=\lim_{N\to\infty}u_N(t,x) \end{equation*} since for $t<\tau$ and $x\in\mathbf{I}$ the sequence $\big(u_N(t,x)\big)_{N\geq1}$ is eventually constant in $N$. It follows that $u(t,x)$ satisfies (\ref{eq:stoch-wave-int-eq}) for $0\leq t<\tau$. Finally, we define $u(t,x)$ for all times $t$ by setting \begin{equation*} u(t,x)=\mathbf{\Delta} \end{equation*} for $t\geq\tau$. Here $\mathbf{\Delta}$ is a cemetery state. \subsection{Multi-parameter Girsanov Theorem} The proof of Theorem \ref{th:1} is based on Girsanov's theorem for two-parameter white noise. This approach was used earlier in Mueller and Pardoux \cite{mp99} for the stochastic heat equation, but we need to do some work to adapt the argument to the stochastic wave equation. Girsanov's theorem will allow us to remove the drift from our equation (\ref{eq:stoch-wave}), at least up to time $\tau$. If this Girsanov transformation gives us an absolutely continuous change of probability measure, then we only need to verify that the stochastic wave equation (\ref{eq:stoch-wave}) without the drift has a positive probability of hitting 0.
Assume that our white noise $\dot{W}(t,x)$, and hence also $u(t,x)$, are defined on a probability space $(\Omega,\mathcal{F},\mathbf{P})$. As in Walsh \cite{wal86}, we define $\dot{W}(t,x)$ in terms of a random set function $W(A,\omega)$ on measurable sets $A\subset[0,\infty)\times\mathbf{I}$. Let $(\mathcal{F}_t)_{t\geq0}$ be the filtration defined by \begin{equation*} \mathcal{F}_t=\sigma(W(A): A\subset[0,t]\times\mathbf{I}). \end{equation*} Nualart and Pardoux \cite{np94} give the following version of Girsanov's theorem. \begin{theorem} \label{th:girsanov} Let $T>0$ be a given constant, and define the probability measure $\mathbf{P}_T$ to be $\mathbf{P}$ restricted to sets in $\mathcal{F}_T$. Suppose that $W$ is a space-time white noise random measure on $[0,T]\times\mathbf{R}$ with respect to $\mathbf{P}_T$, and that $h(t,x)$ is a predictable process such that the exponential process \begin{equation*} \mathcal{E}_h(t)=\exp{\left(\int_0^t\int_\mathbf{R}h(s,y)W(dyds) -\frac{1}{2}\int_0^t\int_\mathbf{R}h(s,y)^2dyds\right)} \end{equation*} is a martingale for $t\in[0,T]$. Then the measure \begin{equation} \label{eq:girsanov_measure} \tilde{W}(dx dt) = W(dx dt) - h(t, x) \ dx dt \end{equation} is a space-time white noise random measure on $[0,T]\times\mathbf{R}$ with respect to the probability measure $\mathbf{Q}_T$, where $\mathbf{Q}_T$ and $\mathbf{P}_T$ are mutually absolutely continuous and \begin{equation} \label{eq:girsanov_derivative} d\mathbf{Q}_T = \mathcal{E}_h(T) \ d\mathbf{P}_T. \end{equation} \end{theorem} We recall Novikov's sufficient condition for $\mathcal{E}_h(t)$ to be a martingale. \begin{prop} \label{prop:girsanov} Let $h(t,x)$ be a predictable process with respect to the filtration $(\mathcal{F}_t)_{t\in[0,T]}$.
If \begin{equation} \label{eq:novikov} \mathbf{E}\left[\exp{\left(\frac{1}{2}\int_0^T\int_\mathbf{R}h(s,y)^2dyds\right)}\right]<\infty \end{equation} then $\mathcal{E}_h(t)$ is a uniformly integrable $\mathcal{F}_t$-martingale for $0\leq t\leq T$. \end{prop} \subsection{H\"older continuity of the stochastic convolution} For an almost surely bounded predictable process $\rho(t,x)$, we define the stochastic convolution as follows. \begin{equation*} N_\rho(t,x)=\int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y)\rho(s,y)W(dyds). \end{equation*} Note that the double integral in (\ref{eq:stoch-wave-int-eq}) is equal to $N_{g(u)}(t,x)$ for $t<\tau$. We conveniently define $g(\mathbf{\Delta})=0$, so that $N_{g(u)}(t,x)$ is defined for all time. The proofs of both main theorems depend on the H\"older continuity of $N_{g(u)}(t,x)$. Although such results are common in the SPDE literature, unfortunately we could not find the exact result we needed. So for completeness, we state it here. \begin{theorem} \label{th:holder} Let $\rho(t,x)$ be an almost surely bounded predictable process. For any $T>0$ and $\beta<1/2$, there exists a random variable $Y$ with finite expectation, with $\mathbf{E}|Y|$ depending only on $\beta$ and $T$, such that \begin{equation} \label{eq:holder} \left|N_\rho(t+h,x+k)-N_\rho(t,x)\right|\leq Y\left(h^\beta+k^\beta\right) \end{equation} almost surely for all $h,k$ where $t,t+h\in[0,T]$. \end{theorem} We will prove Theorem \ref{th:holder} in the appendix. 
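We remark, for orientation (this is a standard observation, recorded here for later use), that if $h$ is uniformly bounded, say $|h(t,x)|\leq C$ on $[0,T]\times[0,J]$ and $h=0$ elsewhere, then the Novikov condition (\ref{eq:novikov}) holds automatically, since \begin{equation*} \mathbf{E}\left[\exp{\left(\frac{1}{2}\int_0^T\int_\mathbf{R}h(s,y)^2dyds\right)}\right] \leq\exp\left(\frac{C^2JT}{2}\right)<\infty. \end{equation*} This covers, for example, a constant shift $h\equiv K$ on $[0,T]\times[0,J]$, as well as any predictable $h$ whose squared integral is bounded by a deterministic constant.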
\section{Proof of Theorem \ref{th:2}} \label{sec:pf-thm-2} \subsection{Outline and Preliminaries} We write the mild solution to (\ref{eq:stoch-wave}) in the following form: \begin{equation} \label{eq:wave_drifted_sol_short} u(t,x)=V_u(t,x)+D_u(t,x)+N_u(t,x) \end{equation} where \begin{align*} V_u(t,x)&=\frac{1}{2}\big(u_0(x+t)+u_0(x-t)\big)+\int_0^Ju_1(y)S_{\mathbf{I}}(t,x-y)dy\\ D_u(t,x)&=\int_0^t\int_0^Ju(s,y)^{-\alpha}S_{\mathbf{I}}(t-s,x-y)dyds\\ N_u(t,x)&=\int_0^t\int_0^Jg(u(s,y))S_{\mathbf{I}}(t-s,x-y)W(dyds). \end{align*} We will prove Theorem \ref{th:2} by contradiction. First, we assume that $\tau<\infty$ with positive probability. Then, on the sample paths where this is the case (i.e., all $u(\omega)$ such that $\tau(\omega)<\infty$), we go backwards in time from where $u$ hits zero. The upward drift term $D_u(t,x)$ will then push downwards, since we are going backwards in time. We show that this downward push must overwhelm the modulus of continuity of the $N_u(t,x)$ term, implying the existence of another time $\tau_1<\tau$ such that $ u $ hits zero at $\tau_1$. However, this contradicts the minimality of $\tau$, thus proving the theorem. \subsection{A Regularity Lemma} Let $\mathbf{A}=\{\tau<\infty\}$. By assumption, $\mathbf{P}(\mathbf{A})>0 $. We then show the following: \begin{lemma} \label{lem:upper_holder} On the event $\mathbf{A}$, $V_u(t,x)+N_u(t,x)$ is almost surely $\beta$-H\"older continuous on $[0,\tau)\times[0,J]$ for any $\beta<1/2$. The H\"older constant is a random variable depending only on $\beta$ and $\omega$. \end{lemma} \begin{proof} Let $\beta<1/2$ be given. Then by (\ref{eq:holder}) we know that $N_u(t,x)$ is almost surely $\beta$-H\"older continuous on $[0,\tau)\times[0,J]$, with random H\"older constant $Y$ depending only on $\beta$ and $\tau$. 
Since $u_1$ is continuous on $\mathbf{I}$, the Riemann integral \begin{equation*} \int_0^Ju_1(y)S_{\mathbf{I}}(t,x-y)dy=\frac{1}{2}\int_{x-t}^{x+t}u_1(y)dy \end{equation*} is jointly differentiable (and thus $\beta$-H\"older continuous) on $(t,x)\in[0,\tau)\times[0,J]$ as well. Finally, by assumption, $u_0$ is $\beta$-H\"older continuous on $[0,J]$, so it follows that $\frac{1}{2}(u_0(x+t)+u_0(x-t))$ is $\beta$-H\"older continuous as well. Thus $V_u(t,x)+N_u(t,x)$ is almost surely $\beta$-H\"older continuous on $[0,\tau)\times[0,J]$. As the H\"older constant of $V_u$ depends only on $u_0$ and $u_1$, the H\"older constant of $V_u+N_u$ is a random variable depending only on $\beta$. \end{proof} \subsection{The Backwards Light Cone} Given $(t,x)\in\mathbf{R}_+\times\mathbf{R}$, define the \textit{backwards light cone} as \begin{equation*} \mathbf{L}(t,x)=\left\{(s,y):\left|x-y\right|<t-s\right\}. \end{equation*} Note that the light cone cannot include points $(s,y)$ for which $s>t$. It follows that $ D_u(t, x) $ can be rewritten as \begin{equation} \label{eq:upper_d} \begin{split} D_u(t,x)&=\int_0^t\int_0^Ju(s,y)^{-\alpha}S_{\mathbf{I}}(t-s,x-y)dyds\\ &=\int_0^t\int_{\mathbf{R}}u\left(s,y^*\right)^{-\alpha}S(t-s,x-y)dyds\\ &=\iint_{\mathbf{L}(t,x)}u\left(s,y^*\right)^{-\alpha}dyds \end{split} \end{equation} where \begin{equation} \label{eq:upper_wrap} y^* = y \bmod J \end{equation} and $y^*\in[0,J]$. \begin{lemma} \label{lem:upper_drift} Let $(t,x)\in[0,\tau)\times[0,J]$. Then for any $(s,y)\in\mathbf{L}(t,x)$, we have \begin{equation*} D_u(s,y)-D_u(t,x)<0. \end{equation*} \end{lemma} \begin{proof} Since $u(s,y)>0$ on $[0,\tau)$, using (\ref{eq:upper_d}) the result follows from the fact that $\mathbf{L}(s,y)\subsetneq\mathbf{L}(t,x)$. \end{proof} \subsection{Theorem \ref{th:2}, Conclusion} Since $\alpha>3$ by assumption, define $\epsilon\in(0,1/2)$ sufficiently small such that \begin{equation*} \frac{3-\alpha}{2}+\epsilon(\alpha+1)<0.
\end{equation*} Using Lemma \ref{lem:upper_holder}, on the event $\mathbf{A}$ we define $Y$ to be a (random) $(1/2-\epsilon)$-H\"older constant of $V_u(t,x)+N_u(t,x)$, depending only on $\epsilon$. By our choice of $\epsilon$, the exponent of $R$ in the expression \begin{equation*} \frac{\pi Y^{-1-\alpha}}{2^{\alpha+2}}R^{\frac{3-\alpha}{2}+\epsilon(\alpha+1)} \end{equation*} is negative. Hence, on $\mathbf{A}$ we can pick a sufficiently small random $R>0$, depending on $\epsilon$ and $Y$, such that both \begin{equation} \label{eq:upper_r} \frac{\pi Y^{-1-\alpha}}{2^{\alpha+2}}R^{\frac{3-\alpha}{2}+\epsilon(\alpha+1)}>1 \quad\text{and}\quad R<\frac{\tau}{2}. \end{equation} Finally, on $\mathbf{A}$ we pick a random $\delta>0$ sufficiently small such that both \begin{equation} \label{eq:upper_delta} \delta<\min\left(\inf_{x\in[0,J]}u_0(x),YR^{\frac{1}{2}-\epsilon}\right) \end{equation} and \begin{equation} \label{eq:upper_delta_tau} \tau_{\delta}=\inf\left\{t>0:\inf_{x\in[0,J]}u(t,x)<\delta\right\}>\frac{\tau}{2}, \end{equation} which is possible since $u(t,x)$ is continuous in $t$ for $t<\tau$. Here, $\tau_{\delta}$ need not be a stopping time. Note that $\tau_{\delta}$ is the first time that $u(t,x)$ reaches $\delta$, and that by continuity of $u(t,x)$ in $x$, there exists some $x_{\delta}\in[0,J]$ such that $u\left(\tau_{\delta},x_{\delta}\right)=\delta$. We define the differences \begin{align*} \Delta V(t,x)&=V_u(t,x)-V_u\left(\tau_{\delta},x_{\delta}\right)\\ \Delta D(t,x)&=D_u(t,x)-D_u\left(\tau_{\delta},x_{\delta}\right)\\ \Delta N(t,x)&=N_u(t,x)-N_u\left(\tau_{\delta},x_{\delta}\right) \end{align*} and for all $(t,x)\in\mathbf{L}(\tau_{\delta},x_{\delta})$, we decompose \begin{equation} \label{eq:upper_decomp} \begin{split} u(t,x)&=u(t,x)-u\left(\tau_{\delta},x_{\delta}\right)+\delta\\ &=\Delta V(t,x)+\Delta D(t,x)+\Delta N(t,x)+\delta.
\end{split} \end{equation} We recall that by construction, \begin{equation} \label{eq:upper_initial_noise} \Delta V(t,x) + \Delta N(t,x) <Y\left|(t,x)-\left(\tau_{\delta},x_{\delta}\right)\right|^{1/2-\epsilon} \end{equation} almost surely on $\mathbf{A}$ with $\mathbf{E}\left[Y;\mathbf{A}\right]<\infty$. From Lemma \ref{lem:upper_drift}, we find that \begin{equation*} \Delta D(t,x)<0 \end{equation*} almost surely. Hence, for all $(t,x)\in\mathbf{L}\left(\tau_{\delta},x_{\delta}\right)$ we obtain the bound \begin{equation} \label{eq:upper_initial} \begin{split} u(t,x)&=\Delta V(t,x)+\Delta D(t,x)+\Delta N(t,x)+\delta\\ &< \Delta V(t,x) + \Delta N(t,x) + \delta \\ &<Y\left|(t,x)-\left(\tau_{\delta},x_{\delta}\right)\right|^{1/2-\epsilon} + \delta \end{split} \end{equation} almost surely on $\mathbf{A}$. We define the sector \begin{equation*} B_R=\left\{(t,x)\in\mathbf{L}\left(\tau_{\delta},x_{\delta}\right) :\left|\left(\tau_{\delta},x_{\delta}\right)-(t,x)\right|\leq R\right\}, \end{equation*} noting from (\ref{eq:upper_r}) and (\ref{eq:upper_delta_tau}) that $t>0$ on $B_R$. We then denote the curved part of the boundary of $B_R$ by \begin{equation*} \partial B_R=\left\{(t,x)\in B_R: \left|\left(\tau_{\delta},x_{\delta}\right)-(t,x)\right|=R\right\}. 
\end{equation*} Then for all $(t,x)\in\partial B_R$, using (\ref{eq:upper_d}), (\ref{eq:upper_initial}), and (\ref{eq:upper_delta}) we find that \begin{equation} \label{eq:upper_drift} \begin{split} \Delta D(t,x)&=-\iint_{\mathbf{L}\left(\tau_{\delta},x_{\delta}\right) \setminus\mathbf{L}(t,x)}u\left(s,y^*\right)^{-\alpha}dyds\\ &\leq-\iint_{B_R}u\left(s,y^*\right)^{-\alpha}dyds\\ &\leq-\left|B_R\right|\left(YR^{\frac{1}{2}-\epsilon}+\delta\right)^{-\alpha}\\ &<-\left|B_R\right|\left(2YR^{\frac{1}{2}-\epsilon}\right)^{-\alpha}\\ &=-\frac{\pi R^2}{2^{\alpha+2}}Y^{-\alpha}R^{-\alpha\left(\frac{1}{2}-\epsilon\right)}\\ &=-\frac{\pi Y^{-\alpha}}{2^{\alpha+2}}R^{2-\left(\frac{1}{2}-\epsilon\right)\alpha} \end{split} \end{equation} on the event $\mathbf{A}$. Recall that on $\partial B_R$, $\left|(t,x)-\left(\tau_{\delta},x_{\delta}\right)\right|=R$. Hence from (\ref{eq:upper_decomp}), (\ref{eq:upper_initial_noise}), and (\ref{eq:upper_drift}) we find that for all $(t,x)\in\partial B_R$, \begin{equation} \label{eq:upper_boundary} \begin{split} u(t,x)&<YR^{\frac{1}{2}-\epsilon}-\frac{\pi Y^{-\alpha}}{2^{\alpha+2}} R^{2-\left(\frac{1}{2}-\epsilon\right)\alpha}\\ &=YR^{\frac{1}{2}-\epsilon}\left(1-\frac{\pi Y^{-1-\alpha}}{2^{\alpha+2}} R^{2-\left(\frac{1}{2}-\epsilon\right)\alpha -\left(\frac{1}{2}-\epsilon\right)}\right)\\ &=YR^{\frac{1}{2}-\epsilon}\left(1-\frac{\pi Y^{-1-\alpha}}{2^{\alpha+2}} R^{\frac{3-\alpha}{2}+\epsilon(\alpha+1)}\right) \end{split} \end{equation} almost surely on $\mathbf{A}$. From (\ref{eq:upper_r}) and (\ref{eq:upper_boundary}) it then follows that $u(t,x)<0 $ for all $(t,x)\in\partial B_R$, almost surely on $\mathbf{A}$. Since $\mathbf{P}(\mathbf{A})>0 $ by assumption, the event that $u(t,x)<0$ for all $(t,x)\in\partial B_R$ occurs with positive probability. However, since $R>0$, we know that $t<\tau_{\delta}<\tau$ for all $(t,x)\in\partial B_R$, which is a contradiction, since $\tau$ is defined to be the first hitting time for $u(t,x)\leq0$. 
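The exponent arithmetic used above can be checked mechanically. The following throwaway script (ours, not part of the proof) verifies the identity $2-\left(\frac{1}{2}-\epsilon\right)\alpha-\left(\frac{1}{2}-\epsilon\right)=\frac{3-\alpha}{2}+\epsilon(\alpha+1)$ from (\ref{eq:upper_boundary}), and that for $\alpha>3$ the exponent is negative whenever $\epsilon<\frac{\alpha-3}{2(\alpha+1)}$:

```python
import random

def wave_exponent(alpha, eps):
    """Exponent of R in the final factor of the boundary estimate."""
    return (3.0 - alpha) / 2.0 + eps * (alpha + 1.0)

def factored_exponent(alpha, eps):
    """The same exponent written as 2 - (1/2-eps)*alpha - (1/2-eps)."""
    return 2.0 - (0.5 - eps) * alpha - (0.5 - eps)

random.seed(0)
# The two expressions agree for all alpha and eps.
for _ in range(1000):
    a = random.uniform(0.0, 10.0)
    e = random.uniform(0.0, 0.5)
    assert abs(wave_exponent(a, e) - factored_exponent(a, e)) < 1e-9

# For alpha > 3, any eps below (alpha-3)/(2(alpha+1)) gives a negative exponent.
for _ in range(1000):
    a = random.uniform(3.001, 10.0)
    e = random.uniform(0.0, 0.999 * (a - 3.0) / (2.0 * (a + 1.0)))
    assert wave_exponent(a, e) < 0.0
```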
Hence we conclude that $\mathbf{P}(\mathbf{A})=0$. This finishes the proof of Theorem \ref{th:2}. \section{Proof of Theorem \ref{th:1}} \label{sec:pf-thm-1} \subsection{Equation without the Drift} Now we use Proposition \ref{prop:girsanov} to prove Theorem \ref{th:1}. Consider the stochastic wave equation with initial conditions identical to (\ref{eq:stoch-wave}) but without drift: \begin{align} \label{eq:stoch-wave-no-drift} \partial_t^2 v(t,x) &=\Delta v(t,x)+g(v(t,x))\dot{W}(t,x) \\ v(0,x)&=u_0(x) \nonumber\\ \partial_tv(0,x)&=u_1(x). \nonumber \end{align} Here $x\in[0,J]$, as before. Since there are no singular terms in (\ref{eq:stoch-wave-no-drift}), we can give this equation rigorous meaning using the mild form: \begin{equation} \label{eq:wave_undrifted_sol} v(t,x)=w(t,x)+\int_{0}^{t}\int_{0}^{J}S_\mathbf{I}(t-s,x-y)g(v(s,y))W(dyds) \end{equation} where $w(t,x)$ is, as before, the solution to the deterministic wave equation. First we verify that $v(t,x)$ can hit 0. \begin{lemma} \label{lem:v-hit-0} Suppose that $v(t,x)$ is a solution to (\ref{eq:wave_undrifted_sol}). Then \begin{equation*} \mathbf{P}(v(t,x)=0 \text{ for some $t>0,\, x\in[0,J]$})>0. \end{equation*} \end{lemma} \begin{proof} Let $V(t)=\int_0^Jv(t,x)dx$. By the almost sure continuity of $v(t,x)$ (see \cite{wal86}, Chapter III), it suffices to show that \begin{equation} \label{eq:v-below-0} \mathbf{P}\big(V(t)<0\big)>0. \end{equation} Since $\int_0^JS_{\mathbf{I}}(t,x-y)dy=t$ by the definition of the one-dimensional wave kernel, and since $\int_0^J\frac{1}{2}\big(u_0(x+t)+u_0(x-t)\big)dx=\int_0^Ju_0(x)dx$, \begin{equation*} V(t)=\int_0^Ju_0(x)dx+t\int_0^Ju_1(x)dx +\int_{0}^{t}\int_0^J(t-s)g(v(s,y))W(dyds). \end{equation*} Here we have used the stochastic Fubini theorem (see \cite{wal86}, Theorem 2.6) to change the order of integration in the double integral. Let us define $N_v(t)$ as the double integral: \begin{equation*} N_v(t)=\int_{0}^{t}\int_0^J(t-s)g(v(s,y))W(dyds).
\end{equation*} The question would be easy if $g\equiv1$, as $N_v(t)$ would be a Gaussian variable, with a positive probability of taking values below any desired level. Since this is not necessarily the case, we use another Girsanov transformation to bound $N_v(t)$ by a Gaussian process. Fix $t>0$. Choose $K$ sufficiently large so that \begin{equation} \label{eq:lower_undrifted_k_bound} \frac{c_gJKt^2}{2}-\int_0^Ju_0(x)dx-t\int_0^Ju_1(x)dx>0. \end{equation} Using Theorem \ref{th:girsanov}, we define $ \tilde{W} $ as a $\tilde{\mathbf{P}}$ white noise, where $\mathbf{P}$ and $\tilde{\mathbf{P}}$ are equivalent and \begin{equation*} W(dyds)=\tilde{W}(dyds)-Kdyds. \end{equation*} Decompose $ N_v(t) = N_v^{(1)}(t) - N_v^{(2)}(t) $, where \begin{align*} N_v^{(1)}(t)&=\int_0^t\int_0^J(t-s)g(v(s,y))\tilde{W}(dyds)\\ N_v^{(2)}(t)&=\int_0^t\int_0^J(t-s)g(v(s,y))Kdyds. \end{align*} Since $g(v(s,y))$ is bounded below by $c_g>0$, we have: \begin{equation*} N_v^{(2)}(t)\geq\frac{c_g JKt^2}{2}. \end{equation*} Hence to show (\ref{eq:v-below-0}), it suffices to prove that \begin{equation*} \mathbf{P}\left(N_v^{(1)}(t)<\frac{c_gJKt^2}{2}-\int_0^Ju_0(x)dx -t\int_0^Ju_1(x)dx\right)>0 \end{equation*} and since $\mathbf{P}$ and $ \tilde{\mathbf{P}} $ are equivalent, we can show instead that \begin{equation} \label{eq:lower_undrifted_bound} \tilde{\mathbf{P}}\left(N_v^{(1)}(t)<\frac{c_gJKt^2}{2}-\int_0^Ju_0(x)dx -t\int_0^Ju_1(x)dx\right)>0. \end{equation} We define the process \begin{equation*} M_t(r)=\int_0^r\int_0^J(t-s)g(v(s,y))\tilde{W}(dyds). \end{equation*} Since $g$ is bounded, $M_t(r)$ is an $\mathcal{F}_r$-martingale in $r$, for $r\leq t$. 
Hence, from Theorem V.1.6 in Revuz and Yor \cite{ry99}, there exists a one-dimensional standard Brownian motion $B$ such that $M_t(r)=B\left(\sigma(r)\right)$, where the time change $\sigma(r)$ (written $\sigma$ rather than $\tau$ to avoid confusion with the hitting time) is given by the predictable process: \begin{equation*} \begin{split} \sigma(r)&=\int_0^r\int_0^J(t-s)^2g^2(v(s,y))dyds \\ &\leq C_g^2\int_0^r\int_0^J(t-s)^2dyds \\ &=\frac{C_g^2J}{3}t^3-\frac{C_g^2J}{3}(t-r)^3. \end{split} \end{equation*} Then let \begin{equation*} L=\frac{C_g^2J}{3}t^3 \end{equation*} so we have $\sigma(t)\leq L$. Using this, we find that: \begin{equation*} N_v^{(1)}(t)=M_t(t)=B\left(\sigma(t)\right)\leq\sup_{0\leq q\leq L}B(q). \end{equation*} Due to (\ref{eq:lower_undrifted_k_bound}), we can use the reflection principle to find that \begin{align*} \begin{split} \tilde{\mathbf{P}}&\left(\sup_{0\leq q\leq L}B(q) \geq\frac{c_gJKt^2}{2}-\int_0^Ju_0(x)dx-t\int_0^Ju_1(x)dx\right) \\ &\leq2\tilde{\mathbf{P}}\left(B(L)\geq\frac{c_gJKt^2}{2}-\int_0^Ju_0(x)dx -t\int_0^Ju_1(x)dx\right) \\ &< 1 \qquad\qquad\qquad \text{(since $ B(L) \sim \mathcal{N}(0, L) $)} \end{split} \end{align*} from which (\ref{eq:lower_undrifted_bound}) follows, and the proof of Lemma \ref{lem:v-hit-0} is complete. \end{proof} \subsection{Removing the Drift Term} To finish the proof of Theorem \ref{th:1}, it suffices to show that up to the first time that $u$ and $v$ hit 0, these two processes induce equivalent probability measures on the canonical paths consisting of continuous functions $f(t,x)$ on $[0,\tau^{(f)}]\times[0,J]$. Given a (possibly random) function $f:[0,\infty)\times[0,J]\rightarrow\mathbf{R}$, define the hitting times \begin{align*} \tau^{(f)}&=\inf\left\{t>0:\inf_{x\in[0,J]}f(t,x)\leq0\right\} \\ \alpha_m^{(f)}&=\inf\left\{t>0:\int_0^t\int_0^Jf(s,x)^{-2\alpha}dxds>m\right\} \end{align*} and for a constant $ T > 0 $, let \begin{equation*} T_m(f)=\tau^{(f)}\wedge\alpha_m^{(f)}\wedge T.
\end{equation*} Then define the truncated function $ f^{T_m(f)} $ by: \begin{equation*} f^{T_m(f)}(t,x)=f(t,x)\mathbf{1}_{\{t\leq T_m(f)\}}. \end{equation*} Let $\mathbf{P}_u^{T_m(u)}$, $\mathbf{P}_v^{T_m(v)}$ be the measures on path space $\mathcal{C}\left([0,\infty)\times[0,J],\mathbf{R}\right) $ induced by $u^{T_m(u)}(t,x)$, $v^{T_m(v)}(t,x)$ respectively, and let \begin{equation} h(r) = \begin{cases} \label{eq:h-drift} \frac{r^{-\alpha}}{g(r)}&\text{ if }r\neq0 \\ 0&\text{ if }r=0. \end{cases} \end{equation} We then obtain the following Girsanov transformation: \begin{lemma} \label{lem:lower_girsanov} For each $ m\in\mathbf{N} $, the measures $\mathbf{P}_u^{T_m(u)}$ and $\mathbf{P}_v^{T_m(v)}$ are equivalent, with \begin{align*} \frac{d\mathbf{P}_u^{T_m(u)}}{d\mathbf{P}_v^{T_m(v)}} =\exp\bigg(\int_0^{T_m(v)}&\int_0^Jh\left(v(t,x)\right)W(dxdt) \\ & -\frac{1}{2}\int_0^{T_m(v)}\int_0^Jh\left(v(t,x)\right)^2dxdt\bigg). \end{align*} \end{lemma} \begin{proof} First, we note that $h(v^{T_m(v)}(t,x))$ satisfies the Novikov condition given in (\ref{eq:novikov}); indeed, by the definition of $\alpha_m^{(v)}$ and Assumption (iv), we have $\int_0^{T_m(v)}\int_0^Jh\left(v(t,x)\right)^2dxdt\leq m/c_g^2$. Then, define the probability measure $\mathbf{Q}^{T_m(v)}$ by the derivative \begin{equation*} \begin{split} \frac{d\mathbf{Q}^{T_m(v)}}{d\mathbf{P}_v^{T_m(v)}} &=\exp\bigg(\int_0^T\int_0^Jh\left(v^{T_m(v)}(t,x)\right)W(dxdt) \\ &\qquad\qquad-\frac{1}{2}\int_0^T\int_0^Jh\left(v^{T_m(v)}(t,x)\right)^2dxdt\bigg) \\ &=\exp\bigg(\int_0^{T_m(v)}\int_0^Jh\left(v(t,x)\right)W(dxdt) \\ &\qquad\qquad -\frac{1}{2}\int_0^{T_m(v)}\int_0^Jh\left(v(t,x)\right)^2dxdt\bigg). \end{split} \end{equation*} Then, from Theorem \ref{th:girsanov}, it follows that \begin{equation*} \tilde{W}(dxdt)=W(dxdt)-h\left(v^{T_m(v)}(t,x)\right)dxdt \end{equation*} is a space-time white noise random measure under $\mathbf{Q}^{T_m(v)}$.
Note that $\mathbf{Q}^{T_m(v)}$ is the measure on $\mathcal{C}\left(\left[0,\infty\right)\times[0,J],\mathbf{R}\right)$ induced by $f^{T_m(f)}(t,x)$ where $f(t,x)$ satisfies \begin{equation*} \begin{split} f(t,x)&=\frac{1}{2}\left(u_0(x+t)+u_0(x-t)\right)+\int_0^Ju_1(y)S_{\mathbf{I}}(t,x-y)dy \\ &\quad+\int_0^t\int_0^Jg(f(s,y))S_{\mathbf{I}}(t-s,x-y)W(dyds) \\ &=\frac{1}{2}\left(u_0(x+t)+u_0(x-t)\right)+\int_0^Ju_1(y)S_{\mathbf{I}}(t,x-y)dy \\ &\quad+\int_0^t\int_0^Jf(s,y)^{-\alpha}S_{\mathbf{I}}(t-s,x-y)dyds \\ &\quad+\int_0^t\int_0^Jg(f(s,y))S_{\mathbf{I}}(t-s,x-y)\tilde{W}(dyds). \end{split} \end{equation*} where the last term is a Walsh integral with respect to the underlying measure $\mathbf{Q}^{T_m(v)}$. However, these are just the paths of $u^{T_m(u)}(t,x)$, so the measure $\mathbf{Q}^{T_m(v)}=\mathbf{P}_u^{T_m(u)}$. Then Lemma \ref{lem:lower_girsanov} follows. \end{proof} Now we wish to apply Lemma \ref{lem:lower_girsanov} with $h$ as defined in (\ref{eq:h-drift}). This depends on the integral in the definition of $\alpha_m^{(v)}$ remaining finite up to time $\tau^{(v)}\wedge T$, so that $\alpha_m^{(v)}\geq\tau^{(v)}\wedge T$ for sufficiently large $m$. Thus Theorem \ref{th:1} follows from the following lemma, which we prove by using the regularity of the stochastic wave equation. \begin{lemma} \label{lem:lower_main} For any constant $T>0$, \begin{equation} \label{eq:lower_main} \int_0^{\tau^{(v)}\wedge T}\int_0^Jv(t,x)^{-2\alpha}dxdt<\infty \end{equation} almost surely. \end{lemma} \subsection{Proof of Lemma \ref{lem:lower_main}} For this entire section, we let $v(t,x)$ be as given in (\ref{eq:wave_undrifted_sol}). \subsubsection{A Rectangular Grid} For each $ K > 0 $ define the event: \begin{equation} \label{eq:lower_k} \mathbf{A}(K):=\left\{\sup_{(t,x)\in[0,T]\times[0,J]}v(t,x)\leq K\right\}. \end{equation} Since $(t,x)\mapsto v(t,x)$ is almost surely continuous, the above supremum is almost surely finite, so \begin{equation*} \lim_{K\rightarrow\infty}\mathbf{P}\left\{\mathbf{A}(K)\right\}=1.
\end{equation*} We split the interval $ (0, K] $ into dyadic subintervals \begin{equation} (0,K]=\bigcup_{n=0}^\infty\left(2^{-n-1}K,2^{-n}K\right] \end{equation} and observe that on the event $ \mathbf{A}(K) $, \begin{equation} \label{eq:lower_lebesgue_bound} \begin{split} &\int_0^{\tau^{(v)}\wedge T}\int_0^Jv(t,x)^{-2\alpha}dxdt\\ &=\sum_{n=0}^{\infty}\left[\int_0^{\tau^{(v)}\wedge T}\int_0^Jv(t,x)^{-2\alpha} 1_{\left\{2^{-n-1}K<v(t,x)\leq2^{-n}K\right\}}dxdt\right]\\ &\leq\sum_{n=0}^{\infty}\left[2^{2\alpha(n+1)}K^{-2\alpha} \int_0^{\tau^{(v)}\wedge T}\int_0^J1_{\left\{2^{-n-1}K<v(t,x)\leq2^{-n}K\right\}} dxdt\right]\\ &=\sum_{n=0}^{\infty}\Big[2^{2\alpha(n+1)}K^{-2\alpha}\\ &\qquad\times\mu\left(\left\{(t,x)\in\left[0,\tau^{(v)}\wedge T\right]\times[0,J] :2^{-n-1}K<v(t,x)\leq2^{-n}K\right\}\right)\Big] \end{split} \end{equation} where $\mu$ denotes Lebesgue measure. Define a constant $\epsilon>0$ such that \begin{equation*} 0<2\epsilon<1-\alpha \end{equation*} and for each $n\in\mathbf{N}$, consider the rectangle \begin{equation} \label{eq:lower_square} D_n=\left\{(t,x)\in\left[0,\lambda_n\right]\times\left[0,2\lambda_n\right]\right\} \end{equation} where \begin{equation} \label{eq:lower_lambda} \lambda_n=2^{-(1-2\epsilon)n}. \end{equation} As far as the optimality of this choice of $\lambda_n$, the real issue is why the same factor applies to both $t$ and $x$. This is because the stochastic wave equation with white noise has the same regularity in both $t$ and $x$, namely it is H\"older $1/2-\varepsilon$. So we do not believe that the result for $\alpha<1$ can be improved by better choosing $\lambda_n$. Next, for each $ (t, x) \in D_n $, define the grids of points: \begin{align*} \Gamma_n(t,x)&=\Biggl[[0,T]\times[0,J]\Biggr]\bigcap \left[\bigcup_{k,\ell\in\mathbb{N}}\Big(t+k\lambda_n,x+2\ell\lambda_n\Big)\right] \\ \overline{\Gamma}_n(t,x)&=\Biggl[\left[0,\tau^{(v)}\right]\times[0,J]\Biggr]\bigcap\Gamma_n(t, x). 
\end{align*} Let $\#$ denote the number of points in a set, and define the strip \begin{equation*} J_n=\left\{(t,x)\in\left[0,\lambda_n\right]\times[0,J]:2^{-n-1}K <v(t,x)\leq2^{-n}K\right\}. \end{equation*} Then we have \begin{multline} \label{eq:lower_lebesgue_count} \mu\left(\left\{(t,x)\in\left[0,\tau^{(v)}\wedge T\right]\times[0,J]:2^{-n-1}K <v(t,x)\leq2^{-n}K\right\}\right)\\ \leq\iint_{D_n}\#\left\{(s,y)\in\overline{\Gamma}_n(t,x):v(s,y) \leq2^{-n}K\right\}dxdt +\mu\left(J_n\right). \end{multline} Since $v(t,x)$ is continuous on $[0,T]\times[0,J]$ and $\inf_{x\in[0,J]}u_0(x)>0$, we have that $\mu\left(J_n\right)=0$ for all sufficiently large (random) $n$. Hence, \begin{equation} \label{eq:lower_strip_bound} \sum_{n=0}^{\infty}\left[2^{2\alpha(n+1)}K^{-2\alpha}\mu\left(J_n\right)\right]<\infty \end{equation} almost surely. We now place a bound on \begin{equation*} \#\left\{(s,y)\in\overline{\Gamma}_n(t,x):v(s,y)\leq2^{-n}K\right\} \end{equation*} in the upcoming lemmas. \subsubsection{The Shifted Equation} Let $(t,x)$ be an arbitrary point in $D_n$, as defined in (\ref{eq:lower_square}), and let $\theta$ be the time shift operator, defined by $\theta_s W(dxdt)=W(dxd(t + s))$. Then for given $(s,y)\in\Gamma_n(t,x)$, define \begin{equation*} s_n^-=s-\lambda_n. \end{equation*} Now, we take the approach of considering $W$ as a cylindrical Wiener process, as described in \cite{dz92}. Furthermore, by Theorem 9.15 on page 256 of Da Prato and Zabczyk \cite{dz92}, there is a version of our solution $\Phi_t$ which is a strong Markov process with respect to the Brownian filtration $(\mathcal{F}_t)_{t\geq0}$. 
Using the strong Markov property of solutions, we restart the equation at time $s_n^-$: \begin{multline} \label{eq:lower_restart} v(s,y)=\frac{1}{2}\left(v\left(s_n^-,y+\lambda_n\right) +v\left(s_n^-,y-\lambda_n\right)\right) \\ +\int_0^JS_{\mathbf{I}}\left(\lambda_n,y-z\right)\frac{\partial v}{\partial t}\left(s_n^-,z\right)dz\\ +\int_0^{\lambda_n}\int_0^JS_{\mathbf{I}}\left(\lambda_n-r,y-z\right) g\left(v\left(s_n^-+r,z\right)\right)\theta_{s_n^-}W(dzdr). \end{multline} Here, $\frac{\partial v}{\partial t}$ is regarded as a Schwartz distribution. We analyze \eqref{eq:lower_restart} term by term. Decompose \begin{equation*} v(s,y)=V_n(s,y)+N_n(s,y)+E_n(s,y) \end{equation*} where \begin{align*} V_n(s,y)&=\frac{1}{2}\left(v\left(s_n^-,y+\lambda_n\right) +v\left(s_n^-,y-\lambda_n\right)\right) \\ &\quad+\int_0^JS_{\mathbf{I}}\left(\lambda_n,y-z\right)\frac{\partial v}{\partial t}\left(s_n^-,z\right)dz\\ N_n(s,y)&=\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}\left(\lambda_n-r,y-z\right)g(v(s_n^-,y)) \theta_{s_n^-}W(dzdr)\\ E_n(s,y)&=-N_n(s,y) \\ &\quad+\int_0^{\lambda_n}\int_0^JS_{\mathbf{I}}(\lambda_n-r,y-z) g(v(s_n^-+r,z))\theta_{s_n^-}W(dzdr). \end{align*} More specifically, \begin{itemize} \item First, we take $V_n$ to be the first two terms, representing the contribution to $v(s,y)$ from the shifted initial conditions (both position and velocity). \item Next, we realize the stochastic term as the sum of a conditionally Gaussian term and an error term. The former is the stochastic term integrated over the light cone contained in a translate of the rectangle $D_n$, with the diffusion coefficient $g$ frozen at $v(s_n^-,y)$. We call this term the noise term, $N_n$. \item Finally, as mentioned above, the error term $E_n$ is the difference between the stochastic term of $v(s,y)$ and the noise term defined above. 
\end{itemize} As alluded to above, the noise term can be rewritten as: \begin{equation} \label{eq:lower_noise_dist} \begin{split} N_n(s,y)&=g(v(s_n^-,y))\int_0^{\lambda_n}\int_{\left\{|z-y| \leq\lambda_n\right\}}S_{\mathbf{I}}\left(\lambda_n-r,y-z\right) \theta_{s_n^-}W(dzdr) \\ &=g(v(s_n^-,y))c_nZ \end{split} \end{equation} where $c_n^2$ is the quadratic variation of the above double integral and $Z$ is a standard normal random variable. Moreover, for sufficiently small $\lambda_n$ relative to $J$, we have: \begin{equation} \label{eq:lower_noise_var} \begin{split} c_n^2&=\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}}S_{\mathbf{I}}^2(r,y-z)dzdr\\ &=\frac{\lambda_n^2}{4}=2^{-2(1-2\epsilon)n-2}. \end{split} \end{equation} \subsubsection{A Regularity Lemma} Now, we find bounds for $E_n$ and $N_n$ by using H\"older continuity of $v$. Define the events \begin{align*} \mathbf{B}_n&=\left\{\sup_{(s,y)\in\Gamma_n(t,x)}\left|E_n(s,y)\right| \leq2^{-n}\right\}\\ \mathbf{C}_n&=\left\{\sup_{(s,y)\in\Gamma_n(t,x)}\left|N_n(s,y)\right| \leq2^{-(1-3\epsilon)n}\right\}\\ \mathbf{A}_n&=\mathbf{A}(K)\cap\mathbf{B}_n\cap\mathbf{C}_n. \end{align*} Then we assert the following: \begin{lemma} \label{lem:lower_regularity} $\sum_{n=1}^{\infty}\mathbf{P}\left(\mathbf{B}_n^c\right)<\infty$ and $\sum_{n=1}^{\infty}\mathbf{P}\left(\mathbf{C}_n^c\right)<\infty$. \end{lemma} To prove this lemma, we first establish a bound on the error term $E_n$. 
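As a quick numerical sanity check of \eqref{eq:lower_noise_var} (a sketch only, not part of the proof), one can use the fact that $S_{\mathbf{I}}^2(r,z)=\frac{1}{4}\mathbf{1}_{\{|z|<r\}}$ away from the boundary of $\mathbf{I}$, as in \eqref{eq:lower_reg_bound_2}, and verify by a Riemann sum that the double integral equals $\lambda_n^2/4$; the values of $\epsilon$ below are illustrative.

```python
# Numerical check of c_n^2 = lambda_n^2 / 4 (Eq. lower_noise_var), assuming
# S_I^2(r, z) = 1/4 on the interior light cone |z| < r (illustrative sketch).

def kernel_sq(r, z):
    # S_I^2(r, z) = 1/4 on the light cone |z| < r, zero outside
    return 0.25 if abs(z) < r else 0.0

def noise_variance(lam, m=400):
    # Midpoint Riemann sum for \int_0^lam \int_{|z|<=lam} S_I^2(r, z) dz dr
    dr = lam / m
    dz = 2 * lam / (2 * m)
    total = 0.0
    for i in range(m):
        r = (i + 0.5) * dr
        for j in range(2 * m):
            z = -lam + (j + 0.5) * dz
            total += kernel_sq(r, z) * dz * dr
    return total

eps = 0.1                                 # illustrative epsilon
for n in range(1, 4):
    lam = 2 ** (-(1 - 2 * eps) * n)       # lambda_n from Eq. (lower_lambda)
    exact = lam ** 2 / 4                  # claimed value of c_n^2
    assert abs(noise_variance(lam) - exact) < 1e-2 * exact
```

The inner integral over $z$ has length $2r$, so the exact value $\int_0^{\lambda_n}\frac{2r}{4}\,dr=\lambda_n^2/4$ is recovered up to discretization error.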
Recall its definition: \begin{equation*} \begin{split} E_n&=-N_n(s,y)+\int_0^{\lambda_n}\int_0^JS_{\mathbf{I}} \left(\lambda_n-r,y-z\right)g(v(s_n^-+r,z))\theta_{s_n^-}W(dzdr)\\ &=-\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}}S_{\mathbf{I}} \left(\lambda_n-r,y-z\right)g(v(s_n^-,y))\theta_{s_n^-}W(dzdr)\\ &\quad+\int_0^{\lambda_n}\int_0^JS_{\mathbf{I}}\left(\lambda_n-r,y-z\right) g(v(s_n^-+r,z))\theta_{s_n^-}W(dzdr). \end{split} \end{equation*} Note that in the integrals above, $S_{\mathbf{I}}(\lambda_n-r,y-z)=0$ outside of the light cone $|z-y|\leq\lambda_n$. Thus, we restrict the domain of integration of $z$: \begin{equation*} \begin{split} E_n=&-\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}\left(\lambda_n-r,y-z\right)g(v(s_n^-,y)) \theta_{s_n^-} W(dz dr) \\ &+\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}(\lambda_n-r,y-z)g(v(s_n^-+r,z)) \theta_{s_n^-}W(dzdr)\\ =&\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}\left(\lambda_n-r,y-z\right)\\ &\qquad\times\left[g(v(s_n^-+r,z)) -g(v(s_n^-,y))\right]\theta_{s_n^-}W(dzdr). \end{split} \end{equation*} We define the rectangle \begin{equation*} \Delta_n(s,y)=\left\{(r,z)\in\mathbb{R}_+\times[0,J]:|r-s|\leq\lambda_n,|z-y| \leq\lambda_n\right\} \end{equation*} and let $p$ be a positive integer. 
Then it follows that \begin{multline*} \mathbf{E}\left[E_n^{2p}\right] =\mathbf{E}\bigg(\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}\left(\lambda_n-r,y-z\right)\\ \left[g(v(s_n^-+r,z))-g(v(s_n^-,y) )\right]\theta_{s_n^-}W(dzdr)\bigg)^{2p} \end{multline*} and since the stochastic integral above is a continuous martingale in its upper limit of integration, we can use the Burkholder-Davis-Gundy inequality to obtain: \begin{equation*} \begin{split} \mathbf{E}\left[ E_n^{2p} \right] &\lesssim_p\mathbf{E}\Bigg(\int_{s_n^-}^{s_n^-+\lambda_n}\int_{\left\{|z-y|\leq \lambda_n\right\}}S_{\mathbf{I}}^2\left(\lambda_n-\left(r-s_n^-\right),y-z\right)\\ &\quad\times\left[g(v(r,z)) -g(v(s_n^-,y))\right]^2dzdr\Bigg)^{p}\\ &\lesssim_p\mathbf{E}\Bigg(\int_{s_n^-}^s\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}^2\left(s-r,y-z\right)\\ &\quad\times\left[g(v(r,z)) -g(v(s_n^-,y))\right]^2dzdr\Bigg)^{p}. \end{split} \end{equation*} As usual, the notation $a(x)\lesssim_{p}b(x)$ means that $a(x)\leq C_{p}b(x)$ for some constant $C_p$ depending only on $p$. Since $g$ is Lipschitz and since $(s_n^-,y)\in\Delta_n(s,y)$, \begin{align*} |g(v(r,z))&-g(v(s_n^-,y))|^2 \\ &\leq L_g^2\left|v(r,z)-v\left(s_n^-,y\right)\right|^2\\ &\leq L_g^2\left(2\left|v(r,z)-v(s,y)\right|^2 +2\left|v(s,y)-v\left(s_n^-,y\right)\right|^2\right)\\ &\leq4L_g^2\sup_{(r,z)\in\Delta_n(s,y)}\left|v(r,z)-v(s,y)\right|^2. \end{align*} With this bound, we get: \begin{multline} \label{eq:lower_reg_power_bound} \mathbf{E}\left[ E_n^{2p} \right] \lesssim\mathbf{E}\left[\sup_{(r,z)\in\Delta_n(s,y)} \left[\left|v(r,z)-v(s,y)\right|^{2p}\right]\right]\\ \left(\int_{s_n^-}^s\int_{\left\{|z-y|\leq\lambda_n\right\}} S_{\mathbf{I}}^2\left(s-r,y-z\right)dzdr\right)^{p}. \end{multline} Recall that $v(s,y)$ is almost surely $\beta$-H\"older continuous for any $\beta<\frac{1}{2}$. 
Setting $\beta=\frac{1}{2}-\frac{1}{2p}$, we obtain \begin{align} \label{eq:modulus-v-moment} \mathbf{E} &\left[ \sup_{(r,z) \in \Delta_n(s,y)} \left[\left|v(r,z)-v(s,y)\right|^{2p}\right]\right] \\ &\qquad\lesssim_{g,p}\sup_{(r,z)\in\Delta_n(s,y)}\left(|r-s|^{\frac{1}{2}-\frac{1}{2p}} +|z-y|^{\frac{1}{2}-\frac{1}{2p}}\right)^{2p}\nonumber\\ &\qquad\lesssim_{g,p}\left(\lambda_n^{\frac{1}{2}-\frac{1}{2p}}\right)^{2p} =\lambda_n^{p-1}=2^{-(1-2\epsilon)n(p-1)}. \nonumber \end{align} Recalling \eqref{eq:lower_lambda}, we then bound the integral: \begin{align} \label{eq:lower_reg_bound_2} \bigg(\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}}&S_{\mathbf{I}}^2(r,y-z)dzdr\bigg)^p \nonumber\\ &=\left(\frac{1}{4}\int_0^{\lambda_n}\int_{\left\{|z-y|\leq\lambda_n\right\}} \mathbf{1}_{\left\{|y-z|<r \right\}}dzdr\right)^p \\ &\lesssim \lambda_n^{2p} \nonumber\\ &\lesssim 2^{-2 (1 - 2 \epsilon) np} \nonumber \end{align} so by \eqref{eq:lower_reg_power_bound}, \eqref{eq:modulus-v-moment}, and \eqref{eq:lower_reg_bound_2}, we obtain a bound on the error term: \begin{equation} \label{eq:lower_reg_bound} \mathbf{E}\left[E_n^{2p}\right]\lesssim_{g,p}2^{-(1-2\epsilon)n(3p-1)}. \end{equation} \begin{proof}[Proof of Lemma \ref{lem:lower_regularity}] Recalling that $\#\left\{\Gamma_n(t,x)\right\}\lesssim\lambda_n^{-2}=2^{(2-4\epsilon)n}$, we find: \begin{align*} \mathbf{P}\left(\mathbf{B}_n^c\right) &=\mathbf{P}\left\{\sup_{(s,y)\in\Gamma_n(t,x)} \left|E_n(s,y)\right|>2^{-n}\right\}\\ &\leq\sum_{(s,y)\in\Gamma_n(t,x)}\mathbf{P}\left\{ \left|E_n(s,y)\right|>2^{-n}\right\}\\ &\lesssim2^{(2-4\epsilon)n}\ \mathbf{P}\left\{ \left|E_n(s,y)\right|>2^{-n}\right\}. 
\end{align*} By Markov's inequality, we can continue as follows, \begin{equation*} \mathbf{P}\left(\mathbf{B}_n^c\right) \lesssim2^{(2-4\epsilon)n+2np}\ \mathbf{E}\left[E_n^{2p}\right] \end{equation*} after which we use \eqref{eq:lower_reg_bound} to obtain the bound: \begin{equation*} \mathbf{P}\left(\mathbf{B}_n^c\right)\lesssim2^{(3-6\epsilon)n-(1-6\epsilon)np}. \end{equation*} Thus, the summation $\sum_{n=1}^{\infty}\mathbf{P}\left(\mathbf{B}_n^c\right)$ converges when $p>\frac{3-6\epsilon}{1-6\epsilon}$. With a similar decomposition, we obtain: \begin{equation*} \mathbf{P}\left(\mathbf{C}_n^c\right)\lesssim2^{(4-4\epsilon)n} \ \mathbf{P}\left\{\left|N_n(s,y)\right|>2^{-(1-3\epsilon)n}\right\}. \end{equation*} Recalling \eqref{eq:lower_noise_dist} and \eqref{eq:lower_noise_var}, this implies: \begin{equation*} \begin{split} \mathbf{P}\left(\mathbf{C}_n^c\right)&\lesssim2^{(4-4\epsilon)n} \ \mathbf{P}\left\{\left|g\left(v\left(s_n^-,y\right)\right)Z\right| >2^{(1-2\epsilon)n}2^{-(1-3\epsilon)n}\right\}\\ &\lesssim2^{(4-4\epsilon)n}\ \mathbf{P}\left\{|Z|>C_g^{-1}2^{\epsilon n}\right\} \end{split} \end{equation*} where $Z$ is a standard normal random variable. Now, we use a standard tail estimate for the normal (often called the Chernoff bound) to conclude \begin{equation*} \mathbf{P}\left\{|Z|>C_g^{-1}\ 2^{\epsilon n}\right\} \leq2\exp{\left(-C_g^{-2}\ 2^{2\epsilon n-1}\right)} \end{equation*} and it follows that $\sum_{n=1}^{\infty}\mathbf{P}\left(\mathbf{C}_n^c\right)$ converges. \end{proof} \subsubsection{A Counting Lemma} \begin{lemma} \label{lem:lower_counting} For each $K>0$ there exists a constant $c_K$ such that for every $n\in\mathbf{N}$ and $(t,x)\in D_n$, \begin{equation*} \mathbf{E}\left[\#\left\{(s,y)\in\overline{\Gamma}_n(t,x):v(s,y)\leq2^{-n}K\right\}; \mathbf{A}_n\right]\leq c_K. \end{equation*} \end{lemma} The proof of Lemma \ref{lem:lower_counting} will require several preliminary steps. 
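Before turning to these steps, we note that the Gaussian tail estimate invoked in the proof of Lemma \ref{lem:lower_regularity} is elementary to check numerically, using the exact identity $\mathbf{P}\{|Z|>a\}=\mathrm{erfc}(a/\sqrt{2})$; the sketch below also illustrates (for illustrative values of $\epsilon$ and $C_g$, which are not taken from the text) how the doubly exponential tail beats the exponential prefactor in the summands bounding $\mathbf{P}(\mathbf{C}_n^c)$.

```python
import math

def normal_two_sided_tail(a):
    # P{|Z| > a} for a standard normal Z, via the complementary error function
    return math.erfc(a / math.sqrt(2))

# Check the Gaussian tail bound P{|Z| > a} <= 2 exp(-a^2 / 2)
for a in [0.5, 1.0, 2.0, 4.0, 8.0]:
    assert normal_two_sided_tail(a) <= 2 * math.exp(-a * a / 2)

# Illustrative parameters: the summands 2^{(4-4e)n} * 2 exp(-C_g^{-2} 2^{2en-1})
eps, C_g = 0.1, 2.0
summands = [2 ** ((4 - 4 * eps) * n)
            * 2 * math.exp(-(2 ** (2 * eps * n - 1)) / C_g ** 2)
            for n in range(1, 200)]

# Ratio test at moderate n: consecutive ratio far below 1, so the tail of the
# series is dominated by a rapidly convergent geometric series
ratio = summands[60] / summands[59]
assert ratio < 1e-3
assert sum(summands) < float('inf')
```

The summands grow at first and then collapse double-exponentially, which is exactly why $\sum_n\mathbf{P}(\mathbf{C}_n^c)$ converges.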
We fix $(t,x)$ and order the points in $\Gamma_n(t,x)$ lexicographically, calling the $i$th point $(s_i,y_i)$ for some $i\in\mathcal{I}(t,x)=\{1,2,\dots,\#\{\Gamma_n(t,x)\}\}$ - i.e., if $i<j$ then $s_i\leq s_j$, and if $s_i=s_j$ then $x\leq y_i<y_j\pmod J$. For given $(t,x)$, we define the set $\Delta^n(s,y)$ as follows: \begin{equation*} \Delta^n(s,y)= \begin{cases} \left[0,s_n^-\right]\times\mathbf{I} &y=x\\ \left(\left[0,s_n^-\right]\times\mathbf{I}\right) \bigcup\left(\left[s_n^-,s\right]\times \left[x-\lambda_n,y-\lambda_n\right] \right)&y\neq x \end{cases} \end{equation*} where the interval $[x-\lambda_n,y-\lambda_n]$ on $\mathbf{I}$ is taken modulo $J$, wrapping around whenever $x-\lambda_n>y-\lambda_n$. (Note that this is not the same as the previously defined $\Delta_n(s,y)$.) Let $\mathcal{F}^n_i$ be the $\sigma$-algebra generated by $\dot{W}$ in the set $\Delta^n\left(s_i,y_i\right)$. Then $V_n\left(s_i,y_i\right)$ is $\mathcal{F}^n_i$-measurable for all $i\in\mathbb{N}$. Recall from (\ref{eq:lower_noise_dist}) that \begin{equation} \label{eq:lower_noise_n_dist} N_n\left(s_i,y_i\right)=g\left(v\left(\left(s_i\right)_n^-,y_i\right)\right)c_nZ_i \end{equation} where $c_n=2^{-(1-2\epsilon)n-1}$ and $Z_i\sim\mathcal{N}(0,1)$ is $\mathcal{F}^n_{i+1}$-measurable but independent of $\mathcal{F}^n_i$. Let $\mathbf{P}^n_i$ denote the conditional probability with respect to $\mathcal{F}^n_i$ and let $\delta>1$ be a constant depending only on $K$. Define \begin{equation*} \overline{v}_n(s,y)=V_n(s,y)+N_n(s,y). \end{equation*} We now prove the following lemma: \begin{lemma} \label{lem:lower_initial} There exists $ d_K > 0 $ such that for all $ i \in \mathcal{I}(t, x) $, \begin{equation} \label{eq:lower_initial} \mathbf{P}^n_i\Big[\overline{v}_n\left(s_i,y_i\right)\leq-2^{-n}\ \Big|\ \overline{v}_n\left(s_i,y_i\right)\leq2^{-n}(K+1)\Big]\geq d_K \end{equation} almost surely on the event $\{V_n(s_i,y_i)\leq\delta2^{-(1-\epsilon)n}\}$. 
\end{lemma} \begin{proof} From the definition of conditional probability, and since the first event in (\ref{eq:lower_initial}) is contained in the second, the left hand side of (\ref{eq:lower_initial}) is: \begin{equation*} \begin{split} H&:=\mathbf{P}^n_i\Big[\overline{v}_n\left(s_i,y_i\right)\leq-2^{-n} \ \Big|\ \overline{v}_n\left( s_i, y_i \right) \leq 2^{-n} (K + 1) \Big] \\ &= \frac{\mathbf{P}^n_i\left\{ \overline{v}_n\left( s_i, y_i \right) \leq-2^{-n} \right\}} {\mathbf{P}^n_i\left\{ \overline{v}_n\left( s_i, y_i \right) \leq 2^{-n} (K + 1) \right\}} \\ &= \frac{\mathbf{P}^n_i\left\{ N_n\left( s_i, y_i \right) \leq -2^{-n} - V_n\left( s_i, y_i \right) \right\}} {\mathbf{P}^n_i\left\{ N_n\left( s_i, y_i \right) \leq 2^{-n}(K + 1) - V_n\left( s_i, y_i \right) \right\}}. \end{split} \end{equation*} Using \eqref{eq:lower_noise_n_dist}, we find that \begin{equation*} H = \frac{\mathbf{P}^n_i\left\{ g\left( v\left( \left( s_i \right)_n^-, y_i \right) \right) c_n Z_i \leq -2^{-n} - V_n\left( s_i, y_i \right) \right\}} {\mathbf{P}^n_i\left\{ g\left( v\left( \left( s_i \right)_n^-, y_i \right) \right) c_n Z_i \leq 2^{-n}(K + 1) - V_n\left( s_i, y_i \right) \right\}} \end{equation*} and using \eqref{eq:lower_noise_var}, we find that \begin{equation*} H = \frac{\mathbf{P}^n_i\left\{ g\left( v\left( \left( s_i \right)_n^-, y_i \right) \right) Z_i \leq -2^{1-2 \epsilon n} - 2^{1+(1 - 2 \epsilon) n} V_n\left( s_i, y_i \right) \right\}} {\mathbf{P}^n_i\left\{ g\left( v\left( \left( s_i \right)_n^-, y_i \right) \right) Z_i \leq 2^{1-2 \epsilon n}(K + 1) - 2^{1+(1 - 2 \epsilon) n} V_n\left( s_i, y_i \right)\right\}}. \end{equation*} Then define $\rho_{n,i}=2g(v((s_i)_n^-,y_i))^{-1}$. Note that, by the upper and lower bounds on $g$, $\rho_{n,i}$ is almost surely bounded above and bounded away from $0$, both uniformly in $n$ and $i$. 
Plugging this into the above equation, we find: \begin{equation} \label{eq:lower_counting_ratio} H = \frac{\mathbf{P}^n_i\left\{ Z_i \leq -\rho_{n, i}\left( 2^{-2 \epsilon n} + 2^{(1 - 2 \epsilon) n} V_n\left( s_i, y_i \right) \right) \right\}} {\mathbf{P}^n_i\left\{ Z_i \leq -\rho_{n, i}\left( -2^{-2 \epsilon n}(K + 1) + 2^{(1 - 2 \epsilon) n} V_n\left( s_i, y_i \right) \right) \right\}}. \end{equation} Now we examine $ H $ in two cases. The first case is on the event \begin{equation} \label{eq:lower_counting_one} \left\{-2^{-2\epsilon n}(K+1)+2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right) \leq 0 \right\}. \end{equation} Since the denominator in (\ref{eq:lower_counting_ratio}) is less than or equal to $1$, we can bound $H$ below by its numerator: \begin{equation*} H\geq\mathbf{P}^n_i\left\{Z_i\leq-\rho_{n,i}\left(2^{-2\epsilon n} +2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right)\right)\right\}. \end{equation*} Using the decomposition \begin{equation*} \begin{split} &2^{-2\epsilon n}+2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right)\\ &=\left(2^{-2\epsilon n}(K+2)\right)+\left(-2^{-2\epsilon n}(K+1) +2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right)\right)\\ &\leq\left(2^{-2\epsilon n}(K+2)\right) \end{split} \end{equation*} and the assumption in \eqref{eq:lower_counting_one}, we find \begin{equation} \label{eq:lower_counting_one_bound} H\geq\mathbf{P}^n_i\left\{Z_i\leq-\rho_{n,i}2^{-2\epsilon n}(K+2)\right\}. \end{equation} Since $\rho_{n,i}$ is uniformly bounded above, we note that for all $K>0$, $\rho_{n,i}2^{-2\epsilon n}(K+2)\rightarrow0$ as $n\rightarrow\infty$. So for sufficiently large $n$ (depending on $K$), $H\geq1/3$. Hence in the case given by (\ref{eq:lower_counting_one}), Lemma \ref{lem:lower_initial} follows. The second case is on the event \begin{equation} \label{eq:lower_counting_two} \left\{-2^{-2\epsilon n}(K+1)+2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right) > 0 \right\}. 
\end{equation} Here, we use the following inequality from Lemma 8 of Mueller and Pardoux \cite{mp99}: for $a,b>0$ and $Z$ a standard normal random variable, \begin{equation*} \frac{\mathbf{P}\{Z>a\}}{\mathbf{P}\{Z>a+b\}}\leq\frac{1}{2\mathbf{P}\{Z>1\}} \vee\left(1+\frac{\sqrt{e}}{1-e^{-1}}(a+b)be^{ab+\frac{b^2}{2}}\right). \end{equation*} Let $a=\rho_{n,i}(-2^{-2\epsilon n}(K+1)+2^{(1-2\epsilon)n}V_n(s_i,y_i))$ and $b=\rho_{n,i}2^{-2\epsilon n}(K+2)$. Recalling that, by assumption, $V_n(s_i,y_i)\leq\delta2^{-(1-\epsilon)n}$ almost surely, so that $2^{(1-2\epsilon)n}V_n(s_i,y_i)\leq\delta2^{-\epsilon n}$, we find that: \begin{equation*} \begin{split} (a+b)b&=\rho_{n,i}^2\left(2^{-2\epsilon n}+2^{(1-2\epsilon)n}V_n\left( s_i,y_i\right)\right)(K+2)2^{-2\epsilon n}\\ &\leq\rho_{n,i}^2\left(2^{-2\epsilon n}+2^{-\epsilon n}\delta\right)(K+2) 2^{-2 \epsilon n} \\ &\leq\rho_{n,i}^2(\delta+1)(K+2)2^{-3\epsilon n}\\ &\leq\rho_{n,i}^2(\delta+1)(K+2) \end{split} \end{equation*} and \begin{equation*} \begin{split} \left(a+\frac{b}{2}\right)b&=\rho_{n,i}^2\left(-(0.5K)2^{-2\epsilon n} +2^{(1-2\epsilon)n}V_n\left(s_i,y_i\right)\right)(K+2)2^{-2\epsilon n}\\ &\leq\rho_{n,i}^2\left(\delta2^{-\epsilon n}-(0.5K)2^{-2\epsilon n}\right) (K+2) 2^{-2 \epsilon n} \\ &= \rho_{n, i}^2 \left(\delta - 0.5 K 2^{-\epsilon n}\right)(K + 2) 2^{-3 \epsilon n} \\ &\leq\rho_{n,i}^2(\delta-0.5)(K+2)\qquad\qquad\text{(recalling that $K>1$)} \end{split} \end{equation*} almost surely. Using these results with (\ref{eq:lower_counting_ratio}), we find: \begin{equation*} \begin{split} H &= \frac{\mathbf{P}\{Z \leq -(a + b)\}}{\mathbf{P}\{Z \leq -a\}} \\ &= \frac{\mathbf{P}\{Z > a + b\}}{\mathbf{P}\{Z > a\}} \\ &\geq \left( 2 \mathbf{P}\{Z > 1\} \right) \wedge \left(1+\frac{\sqrt{e}}{1-e^{-1}}(a+b)be^{ab+\frac{b^2}{2}}\right)^{-1}\\ &\geq \left( 2 \mathbf{P}\{Z > 1\} \right) \wedge \left(1+\frac{\sqrt{e}}{1-e^{-1}}\rho_{n,i}^2(\delta+1)(K+2) e^{\rho_{n,i}^2(\delta-0.5)(K+2)}\right)^{-1}. 
\end{split} \end{equation*} Since $\rho_{n,i}$ is almost surely uniformly bounded in $n$ and $i$, there exists a constant $C_g'>0$, depending only on $g$, such that $\rho_{n,i}^2\leq C_g'$. Since the expression below is decreasing in $\rho_{n,i}^2$ (recall $\delta>1$): \begin{multline*} \left(1+\frac{\sqrt{e}}{1-e^{-1}}\rho_{n,i}^2(\delta+1)(K+2) e^{\rho_{n, i}^2 (\delta - 0.5)(K + 2)} \right)^{-1} \\ \geq \left(1+\frac{\sqrt{e}}{1-e^{-1}}C_g'(\delta+1)(K+2) e^{C_g'(\delta-0.5)(K+2)}\right)^{-1} \end{multline*} and since $\delta$ depends only on $K$, the right hand side above is bounded below by some $\gamma_{K,g}>0$. Then $H$ is bounded below by \begin{equation*} d_K = \left(2 \mathbf{P}\{Z > 1\}\right) \wedge \gamma_{K,g} > 0 \end{equation*} in the case given by \eqref{eq:lower_counting_two} as well. Hence Lemma \ref{lem:lower_initial} follows in both cases. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:lower_counting}] Define \begin{equation*} \xi_n=\mathbf{E}\Big[\#\Big\{(s,y)\in\overline{\Gamma}_n(t,x): v(s, y) \leq 2^{-n} K \Big\}; \mathbf{A}_n \Big]. \end{equation*} From the definitions of $\mathbf{A}_n$ and $\overline{\Gamma}$, it follows that $\xi_n$ is bounded by: \begin{multline*} \xi_n\leq\mathbf{E}\Big[\#\Big\{(s,y)\in\overline{\Gamma}_n(t,x): 0<v(s,y)\leq2^{-n}K,\\ \left|E_n(s,y)\right|\leq2^{-n},\left|N_n(s,y)\right|\leq2^{-2(1-3\epsilon)n}\Big\}\Big]. \end{multline*} Recall that by definition, $\overline{v}_n=V_n(s,y)+N_n(s,y)=v(s,y)-E_n(s,y)$. Thus, we obtain the bound \begin{multline*} \xi_n\leq\mathbf{E}\Big[\#\Big\{(s,y)\in\overline{\Gamma}_n(t,x): -2^{-n}<\overline{v}_n(s,y)\leq2^{-n}(K+1),\\ \left|E_n(s,y)\right|\leq2^{-n},\left|N_n(s,y)\right|\leq2^{-2(1-3\epsilon)n}\Big\}\Big]. \end{multline*} Note that if $V_n(s,y)+N_n(s,y)\leq2^{-n}(K+1)$ and $|N_n(s,y)|\leq2^{-2(1-3\epsilon)n}$, then for some $\delta>1$ depending on $K,\epsilon$ we have $V_n(s,y)\leq\delta2^{-(1-\epsilon)n}$. 
So we can write: \begin{multline*} \xi_n\leq\mathbf{E}\Big[\#\Big\{(s,y)\in\overline{\Gamma}_n(t,x): -2^{-n} < \overline{v}_n(s, y) \leq 2^{-n}(K + 1), \\ V_n(s, y) \leq \delta 2^{-(1 - \epsilon) n} \Big\} \Big]. \end{multline*} Let $\{\sigma_n(k)\}_{k\in\mathbb{N}}$ be the sequence of indices $i\in\mathcal{I}$, in lexicographical order, such that both $\overline{v}_n(s_i,y_i)\leq2^{-n}(K+1)$ and $V_n(s_i,y_i)\leq\delta2^{-(1-\epsilon)n}$. Out of the set of points on $\Gamma_n$ such that $\overline{v}_n\leq2^{-n}(K+1)$, one looks at the points where $\overline{v}_n\leq-2^{-n}$: since $\left|E_n\right|\leq2^{-n}$ on $\mathbf{B}_n$ and $v=\overline{v}_n+E_n$, this would force $v$ to be nonpositive. Thus we define the event \begin{equation*} \mathbf{D}_k=\left\{ \overline{v}_n\left(s_{\sigma_n(k)},y_{\sigma_n(k)}\right)\leq-2^{-n}\right\} \end{equation*} and for $k\in\mathbb{N}$, we define the indicator random variable \begin{equation*} I_k=1_{\mathbf{D}_k}. \end{equation*} From Lemma \ref{lem:lower_initial}, it is clear that \begin{equation*} \mathbf{P}\left\{I_1=1\right\}\geq d_K \end{equation*} and moreover, since $V_n(s_i,y_i)$ and $N_n(s_i,y_i)$ are $\mathcal{F}^n_j$-measurable for all $i<j$, we can also use Lemma \ref{lem:lower_initial} to find that \begin{equation*} \mathbf{P}\Big[I_k=1\ \Big|\ I_1,\dots,I_{k-1}\Big]\geq d_K \end{equation*} for $k>1$. Finally, let \begin{equation*} \overline{\sigma}_n=\inf\left\{k;I_k=1\right\}. \end{equation*} Since $v(s,y)\geq0$ for all $(s,y)\in\overline{\Gamma}_n$, it follows that $\xi_n\leq\mathbf{E}\overline{\sigma}_n$. Note that the $I_k$'s are not independent. We couple $\left\{I_k\right\}$ with a sequence of independent random variables $\left\{Y_k\right\}$ as follows: Let $\left\{U_k\right\}_{k\geq1}$ be a sequence of mutually independent random variables that are globally independent of the $I_k$'s and have uniform law on $[0,1]$. 
Then define \begin{equation*} Y_k = \begin{cases} 0 & \text{if } I_k = 0 \text{ or } U_k > d_K/ \mathbf{P}\Big[ I_k = 1 \ \Big|\ I_1, \dots, I_{k-1} \Big] \\ 1 & \text{if } I_k = 1 \text{ and } U_k \leq d_K/ \mathbf{P}\Big[ I_k = 1 \ \Big|\ I_1, \dots, I_{k-1} \Big] \end{cases} \end{equation*} for $k\geq1$. Then clearly, \begin{equation*} Y_k\leq I_k \end{equation*} and for $ k > 1 $, \begin{equation*} \begin{split} \mathbf{P}\Big[Y_k=1\ \Big|\ Y_1,\dots,Y_{k-1}\Big] &=\mathbf{P}\Big[Y_k=1\ \Big|\ I_1,\dots,I_{k-1}\Big]\\ &=d_K \end{split} \end{equation*} so $\left\{ Y_k \right\}$ is a sequence of i.i.d.\ Bernoulli($d_K$) random variables. Let $ \tilde{\sigma} = \inf\left\{ k; Y_k = 1 \right\} $. Then \begin{align*} \overline{\sigma}_n &= \text{1st } k \text{ such that } I_k = 1 \\ \tilde{\sigma} &= \text{1st } k \text{ such that } Y_k = 1 \end{align*} and since $ Y_k \leq I_k $, it follows that $ \overline{\sigma}_n \leq \tilde{\sigma} $. So \begin{equation*} \mathbf{E}\overline{\sigma}_n \leq \mathbf{E}\tilde{\sigma} = d_K^{-1}, \end{equation*} since $\tilde{\sigma}$ is a geometric random variable with success probability $d_K$. \end{proof} \subsection{Lemma \ref{lem:lower_main}, Conclusion} Finally, we cite a measure-theoretic result related to the Borel-Cantelli Lemma: \begin{lemma} \label{lem:borel} Let $\left\{X_n\right\}$ be a sequence of $[0,\infty)$-valued random variables, and $\left\{\mathbf{F}_n\right\}$ be a sequence of events, such that \begin{gather*} \sum_{n=0}^{\infty} \mathbf{P}\left( \mathbf{F}_n^c \right) < \infty \\ \sum_{n=0}^{\infty} \mathbf{E}\left[ X_n; \mathbf{F}_n \right] < \infty. \end{gather*} Then $\sum_{n=0}^{\infty}X_n<\infty$ almost surely. \end{lemma} \begin{proof} Let $\mathbf{F}=\{\sum_{n=0}^{\infty}X_n=+\infty\}$. Then on the event $\mathbf{F}\cap\left(\liminf\mathbf{F}_n\right)$, we have $\sum_{n=0}^{\infty}X_n\mathbf{1}_{\mathbf{F}_n}=+\infty$. So from the second condition, we get $\mathbf{P}(\mathbf{F}\cap\liminf\mathbf{F}_n)=0$. 
However, from the first condition and Borel-Cantelli, we find: \begin{equation*} \mathbf{P}\left( \limsup \mathbf{F}_n^c \right) = 0 \Rightarrow \mathbf{P}\left( \liminf \mathbf{F}_n \right) = 1. \end{equation*} It follows that $\mathbf{P}(\mathbf{F})=0$, which is the desired result. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:lower_main}] From equations \eqref{eq:lower_lebesgue_bound} and \eqref{eq:lower_lebesgue_count}, we have: \begin{equation*} \begin{split} & 1_{A(K)} \int_0^{\tau^{(v)} \wedge T} \int_\mathbf{I} v(t, x)^{-2 \alpha} \ dx dt \\ &\leq 1_{A(K)} \sum_{n=0}^{\infty} \Big[ 2^{2 \alpha (n + 1)} K^{-2 \alpha} \\ &\qquad \times\mu \left( \left\{ (t, x) \in \left[0, \tau^{(v)} \wedge T \right] \times \mathbf{I}:2^{-n - 1}K<v(t,x)\leq2^{-n}K\right\}\right)\Big]. \end{split} \end{equation*} Now consider the above expression $\mu(\cdots)$. First, we can bound this expression above by dropping the inequality $2^{-n-1}K<v(t,x)$, which enlarges the set under consideration. Secondly, we note that \begin{equation*} \bigcup_{(t,x)\in D_n}\overline{\Gamma}_n(t,x) \end{equation*} covers $\left[0,\tau^{(v)}\wedge T\right]\times[0,J]$, because $\overline{\Gamma}_n(t,x)$ consists of the corners of rectangles which are translations of $D_n$, further translated by $(t,x)$. So we can continue as follows. \begin{equation*} \begin{split} 1_{A(K)}& \int_0^{\tau^{(v)} \wedge T} \int_\mathbf{I} v(t, x)^{-2 \alpha} \ dx dt \\ &\leq 1_{A(K)} \sum_{n=0}^{\infty} \Big[ 2^{2 \alpha (n + 1)} K^{-2 \alpha} \\ &\qquad \times\iint_{D_n} \# \left\{ (s, y) \in \overline{\Gamma}_n(t, x) : v(s, y) \leq 2^{-n} K \right\} \ dx dt \Big] \\ &\quad + 1_{A(K)} \sum_{n=0}^{\infty} \Big[ 2^{2 \alpha (n + 1)} K^{-2 \alpha} \ \mu\left( J_n \right) \Big]. 
\end{split} \end{equation*} Now consider the summation of expectations \begin{equation*} \begin{split} &\sum_{n=0}^{\infty}\mathbf{E}\bigg[1_{A(K)}\ 2^{2\alpha(n+1)}K^{-2\alpha}\\ &\qquad\iint_{D_n}\#\left\{(s,y)\in\overline{\Gamma}_n(t,x):v(s,y) \leq 2^{-n} K \right\} \ dx dt ; B_n \cap C_n \bigg] \\ &=\sum_{n=0}^{\infty}\mathbf{E}\bigg[2^{2\alpha(n+1)}K^{-2\alpha}\\ &\qquad \iint_{D_n} \# \left\{ (s, y) \in \overline{\Gamma}_n(t, x) : v(s, y) \leq 2^{-n} K \right\} \ dx dt ; A_n \bigg]. \end{split} \end{equation*} Recalling that $D_n$ was defined in \eqref{eq:lower_square}, we note that \begin{equation*} \iint_{D_{n}} \ dx dt = 2^{-(2 - 4 \epsilon) n + 1} \end{equation*} so using Lemma \ref{lem:lower_counting}, we find: \begin{equation*} \begin{split} & \sum_{n=0}^{\infty} \mathbf{E} \bigg[ 2^{2 \alpha (n + 1)} K^{-2 \alpha} \\ & \qquad \times\iint_{D_n} \# \left\{ (s, y) \in \overline{\Gamma}_n(t, x) : v(s, y) \leq 2^{-n} K \right\} \ dx dt ; A_n \bigg] \\ &= \sum_{n=0}^{\infty} 2^{2 \alpha (n + 1)} K^{-2 \alpha} \\ &\qquad \times\iint_{D_n} \mathbf{E} \left[ \# \left\{ (s, y) \in \overline{\Gamma}_n(t, x) : v(s, y) \leq 2^{-n} K \right\} ; A_n \right] \ dx dt \\ &\leq \sum_{n=0}^{\infty} 2^{2 \alpha (n + 1)} K^{-2 \alpha} c_K \ \iint_{D_{n}} \ dx dt \\ &= \sum_{n=0}^{\infty} c_K K^{-2 \alpha} 2^{2 \alpha + 1} 2^{(2 \alpha - 2 + 4\epsilon) n} \end{split} \end{equation*} and since $\alpha<1-2\epsilon$, the summation converges. Thus, using Lemma \ref{lem:lower_regularity}, Lemma \ref{lem:borel}, and \eqref{eq:lower_strip_bound}, we have \begin{equation} \label{eq:lower_main_k} 1_{A(K)}\int_0^{\tau^{(v)}\wedge T}\int_0^Jv(t,x)^{-2\alpha}dxdt<\infty \end{equation} almost surely. 
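The convergence claimed above is that of a geometric series: since $\alpha<1-2\epsilon$, the common ratio $2^{2\alpha-2+4\epsilon}$ is strictly less than one. A short numerical sketch (with illustrative values of $\alpha$ and $\epsilon$ satisfying $0<2\epsilon<1-\alpha$) compares partial sums with the closed form:

```python
# Geometric series sum_{n>=0} q^n with q = 2^(2*alpha - 2 + 4*eps):
# it converges to 1/(1-q) exactly when alpha < 1 - 2*eps, i.e. q < 1.
alpha, eps = 0.8, 0.05          # illustrative values with 0 < 2*eps < 1 - alpha
assert 0 < 2 * eps < 1 - alpha

q = 2 ** (2 * alpha - 2 + 4 * eps)
assert q < 1                    # exponent 2*alpha - 2 + 4*eps = -0.2 here

partial = sum(q ** n for n in range(2000))
closed_form = 1 / (1 - q)
assert abs(partial - closed_form) < 1e-9
```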
Observe that (\ref{eq:lower_k}) and (\ref{eq:lower_main_k}) imply (\ref{eq:lower_main}). Indeed, (\ref{eq:lower_main_k}) holds simultaneously for all integers $K$ on an event of full probability, and since $(t,x)\mapsto v(t,x)$ is almost surely continuous on $[0,T]\times[0,J]$, by (\ref{eq:lower_k}) almost every sample path belongs to $\mathbf{A}(K)$ for some (random) integer $K$. For such $K$ the indicator in (\ref{eq:lower_main_k}) equals one, so \begin{equation*} \int_0^{\tau^{(v)}\wedge T}\int_\mathbf{I}v(t,x)^{-2\alpha}dxdt<\infty \end{equation*} almost surely. \end{proof}
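The coupling used in the proof of Lemma \ref{lem:lower_counting} can be illustrated with a short simulation: given dependent indicators $I_k$ whose conditional success probabilities all dominate $d_K$, the auxiliary uniforms $U_k$ thin them down to i.i.d.\ Bernoulli($d_K$) variables $Y_k\leq I_k$, so the first success time is stochastically dominated by a geometric variable with mean $d_K^{-1}$. In the sketch below the conditional probabilities $p_k$ vary arbitrarily with $k$, purely for illustration.

```python
import random

random.seed(0)
d = 0.25                          # stand-in for d_K
trials = 20000
first_success = []                # samples of sigma~, the first k with Y_k = 1
for _ in range(trials):
    k = 0
    while True:
        k += 1
        # dependent indicators: conditional success probability p_k >= d,
        # chosen arbitrarily here (cycles through d, 1.5d, 2d)
        p_k = min(1.0, d * (1 + 0.5 * (k % 3)))
        i_k = random.random() < p_k
        # thinning: Y_k = 1 iff I_k = 1 and U_k <= d / p_k, so P(Y_k = 1) = d
        y_k = i_k and (random.random() <= d / p_k)
        if y_k:
            first_success.append(k)
            break

mean_sigma = sum(first_success) / trials
# E[sigma~] = 1/d for a geometric first-success time with parameter d
assert abs(mean_sigma - 1 / d) < 0.15
```

Since $Y_k\leq I_k$ pathwise, the (unsimulated) first success of the $I_k$'s can only come earlier, matching $\overline{\sigma}_n\leq\tilde{\sigma}$.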
\section{Introduction} \label{introduction} For this interim review we summarize the state of the art of Fiber Fabry-Perot Cavities (FFPCs), i.e. Fabry-Perot resonators integrated on optical fibers. We discuss their basic properties and applications (with focus on gaseous media) and we conclude with an outlook into future perspectives. The article may also serve as a resource letter through its collection of references with relevance for FFPCs. \subsection{Fabry-Perot interferometers} Ever since their invention in the late 19$^\text{th}$ century, Fabry-Perot interferometers (FPIs)~ \cite{fabryperot1899} have played an important role in advancing the resolution as well as the applications of optical spectrometers. In the simplest case they consist of two parallel, plane, partially transmitting mirrors spaced by a length $\ell$. Multiple reflections of a light beam back and forth between its mirrors cause interference, where for normal incidence sequentially reflected beams have a path difference $2\ell$. Hence, constructive interference occurs when an integer number of wavelengths $\lambda$ covers the round trip path, $N\cdot \lambda = 2\ell$~\cite{bornwolf1959,siegman1986lasers}. The integer number $N$ is called the order of the interferometer and is typically large, of order $N \approx 10^3 - 10^6$. The path difference is cast into a resonance condition for the frequency of a light field ($\nu=c/\lambda$) by \begin{equation}\label{eq:fpires} \nu_N= N\cdot\Delta\nu_\text{FSR}\ , \end{equation} where $\Delta\nu_\text{FSR}=c/2\ell$ is the free spectral range of the FPI. The width of an isolated resonance is determined by the reflectivities $R$ of the mirrors. A conveniently accessible measure for the spectral quality of the FPI is given by the ratio of the free spectral range to the Lorentzian line width $\Delta_\text{1/2}$ (FWHM) of a single spectral line, which is called finesse ${\cal F}$. 
In the simplest case of a symmetric cavity with two mirrors of identical reflectivity and negligible losses $A \ll 1{-}R$, it reads \begin{equation}\label{eq:finesse} {\cal{F}} = \frac{\Delta\nu_\text{FSR}}{\Delta_\text{1/2}}=\frac{\pi\sqrt{R}}{1-R}\ , \end{equation} where for the frequent situation $R\to 1$ we have ${\cal{F}}\simeq \pi/(1-R) \gg 1$. With this, FPIs have offered the highest spectroscopic resolution in classic optical instrumentation before the advent of laser spectroscopy, acting essentially as narrow band optical filters with resolution~$\nu_N/\Delta_\text{1/2} = N\cdot{\cal{F}}$. \subsection{Fabry-Perot cavities} The light field circulates back and forth between the mirrors with a round trip time $\tau_\text{circ}=1/\Delta\nu_\text{FSR}$. FPIs have therefore given birth to the notion of Fabry-Perot resonators or Fabry-Perot cavities (FPCs), where the focus is on the properties of the internally stored light field rather than the transmitted or reflected intensity. An initially stored light field is attenuated as a consequence of the finite mirror transmissivity $(1{-}R)$ and other losses $A$. For small losses and transmission, the relaxation rate is approximately given by $1/\tau_\text{cav}=(1{-}R)/\tau_\text{circ}$, which translates into the mean number of round trips \begin{equation}\label{eq:circ} n_\text{round} = \frac{\tau_\text{cav}}{\tau_\text{circ}} \simeq \frac{{\cal{F}}}{\pi}\ . \end{equation} The decay constant $\kappa$ of the amplitude of the stored light field (energy relaxation rate $2\kappa$) is the relevant rate describing dynamic properties of any FPC. A spectrum of an FPC shows a Lorentzian line width (FWHM = $\Delta_\text{1/2}$), which is related to the relaxation rate by $\Delta_\text{1/2} = \kappa/\pi$. FPCs have strongly inspired the advent of the laser -- an FPI/FPC containing an amplifier for optical waves -- which has become one of the most important tools in contemporary science and technology. 
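As a worked example of Eqs.~(\ref{eq:finesse}) and (\ref{eq:circ}), the following short script evaluates the finesse, free spectral range, mean number of round trips and cavity line width; the mirror reflectivity and cavity length are illustrative values of typical FFPC magnitude, not taken from a specific experiment.

```python
import math

c = 299_792_458.0             # speed of light in m/s

def finesse(R):
    # F = pi * sqrt(R) / (1 - R), symmetric lossless cavity (Eq. finesse)
    return math.pi * math.sqrt(R) / (1 - R)

R = 0.9999                    # illustrative mirror reflectivity
l = 50e-6                     # illustrative cavity length: 50 micrometers

F = finesse(R)
fsr = c / (2 * l)             # free spectral range Delta_nu_FSR, in Hz
n_round = F / math.pi         # mean number of round trips (Eq. circ)
linewidth = fsr / F           # Lorentzian FWHM Delta_1/2 = FSR / F

assert 31000 < F < 32000      # F ~ 3.1e4 for R = 0.9999
assert linewidth < fsr        # resolved resonances require F > 1
```

For these values the line width comes out near $100\,$MHz, illustrating how a short, high-reflectivity cavity combines a huge free spectral range with a narrow resonance.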
The geometry of the coherent light fields provided by lasers is usually described in terms of Gaussian field modes, which are extensively covered in textbooks such as~\cite{siegman1986lasers}. \begin{figure} \centering \includegraphics[width=\columnwidth]{cavmode.pdf} \caption{Basic geometric parameters of a TEM$_\text{00}$ mode supported by an optical fiber Fabry-Perot cavity (FFPC). Wavefronts of the cavity mode field match the radii of curvature of the integrated mirrors. The coupling strength with incoming light fields is determined by the overlap of the travelling guided and the cavity modes.} \label{fig:cavmode} \end{figure} The most widely used Gaussian mode is the so-called TEM$_\text{00}$ mode, which is schematically shown in Fig.~\ref{fig:cavmode}. The properties of the fundamental cavity modes can be expressed in terms of the dimensionless cavity parameters $G_i$ (mirrors $i=1,2$), which are related to the mirror geometry and the cavity length $\ell$, \begin{equation} G_i = 1-\ell/r_i. \end{equation} The stability condition binds the individual $G_i$ values according to $0 \leq G_1G_2 \leq 1$ (i.e. eigensolutions of the paraxial Helmholtz equation, see~\cite{siegman1986lasers}), with special cases $G_1 = G_2 = 1$ (plane-plane cavity), $0$ (confocal) and $-1$ (concentric). A stable cavity mode shows wave fronts matching the curvature of the mirrors with radii $r_i$. Most FPCs are arranged to have the focus, the position of highest field strength, within their volume. The waist is conveniently calculated from \begin{equation}\label{eq:w0g1g2} w_0^2 = \frac{\ell \lambda}{\pi}\sqrt{\frac{G_1 G_2(1-G_1G_2)}{(G_1+G_2-2G_1G_2)^2}}\,, \end{equation} and the resonator mode size at the position of mirror M$_1$ is given by \begin{equation}\label{eq:wmg1g2} w^2_{M_1} = \frac{\ell \lambda}{\pi}\sqrt{\frac{G_2}{G_1(1-G_1G_2)}}\,, \end{equation} where $\lambda=c/\nu_\text{cav}$ is the resonant wavelength of the cavity.
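As a numerical sketch of Eqs.~\ref{eq:w0g1g2} and \ref{eq:wmg1g2}, the following Python snippet evaluates the waist and the mirror spot radius; the parameters $\ell = \SI{50}{\micro\meter}$, $r_1 = r_2 = \SI{150}{\micro\meter}$ and $\lambda = \SI{780}{\nano\meter}$ are assumed illustrative values of typical FFPC scale.

```python
import math

def mode_radii(length, r1, r2, wavelength):
    """Waist w0 and spot radius on mirror M1 of the TEM00 mode,
    computed from the cavity parameters G_i = 1 - l/r_i."""
    g1, g2 = 1.0 - length / r1, 1.0 - length / r2
    assert 0.0 <= g1 * g2 <= 1.0, "cavity not stable"
    pref = length * wavelength / math.pi
    w0_sq = pref * math.sqrt(g1 * g2 * (1.0 - g1 * g2)
                             / (g1 + g2 - 2.0 * g1 * g2) ** 2)
    wm1_sq = pref * math.sqrt(g2 / (g1 * (1.0 - g1 * g2)))
    return math.sqrt(w0_sq), math.sqrt(wm1_sq)

# Assumed example: l = 50 um, r1 = r2 = 150 um, lambda = 780 nm
w0, wm1 = mode_radii(50e-6, 150e-6, 150e-6, 780e-9)
# w0 ~ 3.7 um at the cavity center, wm1 ~ 4.1 um on the mirror
```

The mode is only slightly larger on the mirror than at the waist, since this cavity is still far from the concentric limit.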
To illustrate the physical properties, the symmetric case is usually sufficient. With $G=G_1=G_2$ Eq.~\ref{eq:w0g1g2} reduces to \begin{equation}\label{eq:w0} w_0^2 = \frac{\ell \lambda}{2\pi}\sqrt{\frac{1+G}{1-G}} = \frac{\ell \lambda}{2\pi}\sqrt{\frac{r}{\ell/2}-1}\,, \end{equation} where it is easy to see that the mirrors of the FPC have to fulfill the condition $r>\ell/2$ for their radii of curvature. The smallest waist radii $w_0$ are obtained in the limit $r\to \ell/2$. In this case of the concentric cavity, however, the paraxial approximation of Gaussian beams is no longer valid, and diffraction limits the minimal waist to a radius of order $w_0 \sim \lambda/2$. For the most relevant Gaussian TEM$_\text{00}$ mode of the FPC, the resonance condition Eq.~\ref{eq:fpires} is slightly modified by the so-called Gouy shift~\cite{siegman1986lasers}, \begin{equation}\label{eq:HGModes} \nu_{N} = \frac{c}{2\ell} \left(N+\frac{\arccos{(G)}}{\pi}\right). \end{equation} For all practical purposes discussed here, this small correction ($\arccos{G}/\pi \ll N$) is included in an effective cavity length $\ell = \ell_\text{eff}$, which also accounts for the penetration of the cavity field into the stack of dielectric layers of the mirror coating. FPCs are frequently used to maximize light matter coupling, and hence atoms, membranes, or spectroscopic samples are positioned at the waist of the cavity mode with diameter $2w_0$. The field strength of the cavity field can be considered a function of the total energy $U$ stored in the FPC. An important measure for the ratio of the stored electromagnetic energy to the maximum field strength is given by the mode volume $V_\text{mode}$.
It is calculated by spatially integrating the field distribution inside the cavity normalized to the maximum field strength $|\vec{{\cal{E}}}_\text{max}|$ at the waist, resulting in \begin{equation}\label{eq:vmode} V_\text{mode} = \frac{\pi}{4}w_0^2\ell = V_\lambda \cdot \left(\frac{\ell}{\lambda} \right)^2\left(\frac{r}{\ell/2}-1\right)^{1/2},\ \end{equation} with minimal mode volume $V_\lambda=(\lambda/2)^3$ for a cubic cavity resonant at wavelength $\lambda$. The total electromagnetic energy $U$ stored in the FPC relates to the maximum field amplitude $|\vec{{\cal{E}}}_\text{max}|$, usually located at the waist, through \begin{equation}\label{eq:uvse} U = \frac{1}{2}\epsilon_0\int_{V_\text{cav}} \text{d}V\, |\vec{{\cal{E}}}(\vec{r})|^2 = \frac{1}{2}\epsilon_0|\vec{{\cal{E}}}_\text{max}|^2\cdot V_\text{mode}. \end{equation} \section{Fiber Fabry-Perot cavities} One route of the present evolution of photonics is the miniaturization of optical devices. In the past, FPCs have successfully been integrated into optical fibers through the inscription of Bragg mirrors directly into the glass fiber material, serving e.g. as optical filters. The Fiber Fabry-Perot Cavities (FFPCs) introduced by~\cite{hunger2010fiber} and discussed here are intrinsically fiber-connected, too. However, as scaled-down versions of classic FPCs with mirrors spaced by an empty volume, they enable versatile access to the field mode stored in the cavity, which allows applications as outlined in Sec.~\ref{sec:applications}. While the formal treatment of FFPCs is not different from the conventional description of macroscopic FPCs outlined above in terms of Gaussian beams and field modes, it is the limit of small typical length scales $\ell \sim 10$ to $\SI{1000}{\micro\meter}$ which makes FFPCs a special limiting case. A breakthrough of FFPCs was achieved when D. Hunger, J.
Reichel and colleagues~\cite{hunger2010fiber,Steinmetz2006} developed the integration of mirrors with high optical quality on the end facets of optical fibers. With their continued work on FFPCs, it became clear that FFPCs offer several technical advantages over macroscopic devices due to their compactness and robustness (Fig.~\ref{fig:ffpcfunc}), including: \begin{itemize} \item High field concentration \item Integration with optical fibers \item High optical quality \item Small footprint \item Open geometry \item Integration with other functional components \end{itemize} The following sections will briefly summarize the physical quantities associated with these aspects. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{FFPC.pdf} \caption{Concept of a Fiber Fabry-Perot Cavity (FFPC) integrated with optical fiber links and functional components.} \label{fig:ffpcfunc} \end{figure} \subsection{FFPCs for high field concentration} \label{sec:fieldconc} The realization of high local field strengths is at the heart of many FFPC applications. Of all Gaussian modes sustained by FFPCs, the TEM$_\text{00}$ mode offers the highest fields at its waist with diameter $2w_0$. According to Eqs.~\ref{eq:vmode} and \ref{eq:uvse}, short cavity lengths $\ell$ and $\ell/r \to 2$ (cf. Eq.~\ref{eq:w0}), approaching the concentric cavity case, let the mode volume decrease and the local field strength ${\cal{E}}_\text{cav}$ increase. The global geometric scale of FFPCs for the relevant parameters like the cavity length $\ell$ and the radii of mirror curvatures $r_i$ is set by the standard diameter of optical fibers of $\SI{125}{\micro\meter}$ ($D$ in Fig.~\ref{fig:cavmode}). Choosing a small waist $w_0$ for high field concentration also means high divergence of the field mode.
The divergence angle of a Gaussian mode is $\Theta_\text{div}=\lambda/(\pi w_0)$, and hence in order to avoid clipping losses by the fiber end facets we need to make sure that $D/2 > \Theta_\text{div}\cdot(\ell/2)$, yielding \begin{equation}\label{eq:lmax} \ell < \pi w_0 D/\lambda\ , \end{equation} in agreement with the Rayleigh focusing limit for an aperture of diameter $D$. We use Eq.~\ref{eq:vmode} to estimate the range of mode volumes available for FFPCs with \begin{equation} V_\text{mode}/V_\lambda = \frac{\pi}{4}\left(\frac{w_0}{\lambda/2}\right)^3\left(\frac{\ell}{w_0}\right). \end{equation} For realistic focusing conditions $5 \leq 2w_0/\lambda \leq 50$ we therefore obtain a wide range of possible mode volumes ranging from $V_\text{mode}/V_\lambda \sim 10^2 \cdot (\ell/w_0)$ to $\sim 10^5 \cdot (\ell/w_0)$, where Eq.~\ref{eq:lmax} sets an upper limit. All experimental realizations of FFPCs indeed fall into this regime, where the factor $\ell/w_0$ may approach values below 10 when favorably small mode volumes are of interest. As the mode volume reduces with the radii of curvature of the mirrors, high field concentrations are reached for small $r$ that, however, still have to fulfill the condition $r>\ell/2$. For strongly focusing mirrors the radii $r$ will, therefore, typically be on the order of $\ell$, i.e. in the range of some 10 to $\SI{1000}{\micro\meter}$. \subsection{Mode matching for FFPCs} Successful use of FFPCs implies efficient coupling of the cavity mode to the waveguide modes propagating within the fibers.
Assuming perfect collinear alignment of the cavity axis and fiber axis, the coupling efficiency into the cavity can be written as~\cite{hunger2010fiber} \begin{equation} \epsilon_\text{TEM00} = \frac{4}{(\frac{w_\text{f}}{w_\text{m}}+\frac{w_\text{m}}{w_\text{f}})^2+(\frac{\pi n_\text{f} w_\text{f}w_\text{m}}{\lambda r})^2} \end{equation} with $w_\text{m}$ and $w_\text{f}$ the waists of the cavity mode and of the fiber mode at the mirror, respectively, and $n_\text{f}$ the index of refraction of the fiber. The mode field waist of a standard single mode fiber, for instance, allows for a maximum coupling value of $\epsilon \approx 0.78$, considering a cavity for $\lambda = \SI{780}{\nano\meter}$ with $\ell = \SI{50}{\micro\meter}$, $r = r_1 = r_2 = \SI{150}{\micro\meter}$ and thus $w_0 \approx \SI{3.7}{\micro\meter}$ and $w_{m}= \SI{4.1}{\micro\meter}$, as well as $w_{f}=\SI{2.53}{\micro\meter}$ and $n_{f}=1.46$. The mismatch in mode geometry of the incoming guided mode and the mode sustained by the FFPC can be mitigated by stacking fibers of different types to achieve mode shaping inside the fiber (cf. Sec.~\ref{FFPCdevices} on GRIN fibers). In practice, imperfect centering of the mirror curvature on the fiber center during fabrication and/or cavity misalignment leads to degraded coupling efficiencies. Detailed mode overlap calculations for misaligned cavity and fiber modes have been performed in Refs.~\cite{gallego2016high,Bick2016,gulati2017fiber}, providing an explanation of asymmetric cavity line shapes as pure interference effects. The interplay of direct single mode fiber and cavity mode coupling results in alignment-dependent cavity line shapes in the reflection signal at the incoupling fiber that are well described by the superposition of a Lorentzian and a dispersive component~\cite{gallego2016high}.
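The quoted example can be reproduced with a few lines of Python implementing the coupling-efficiency formula above; all numbers are those given in the text.

```python
import math

def coupling_efficiency(w_f, w_m, n_f, wavelength, r):
    """Power coupling of the fiber-guided mode into the TEM00 cavity mode
    for perfect collinear alignment (formula of Hunger et al. 2010)."""
    geometric = (w_f / w_m + w_m / w_f) ** 2            # waist mismatch
    curvature = (math.pi * n_f * w_f * w_m
                 / (wavelength * r)) ** 2               # wavefront mismatch
    return 4.0 / (geometric + curvature)

# Values from the text: lambda = 780 nm, r = 150 um,
# w_m = 4.1 um, w_f = 2.53 um, n_f = 1.46
eps = coupling_efficiency(2.53e-6, 4.1e-6, 1.46, 780e-9, 150e-6)
# eps ~ 0.78
```

Both mismatch terms matter here: even for equal waists ($w_\text{f} = w_\text{m}$) the wavefront-curvature term keeps $\epsilon$ below unity unless $r$ is large.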
\subsection{Finesse of FFPCs} In addition to determining the fiber-to-cavity coupling, the alignment-dependent cavity mode properties also influence the FFPC finesse due to mirror clipping losses, cf. Eq.~\ref{eq:lmax}. The reduction in cavity finesse for longer cavities prior to the stability limit has been well explained by considering the cavity mode footprint at the position of the fiber mirrors to be clipped by mirrors with effective finite diameters~\cite{hunger2010fiber}. More recently, sharp variations in finesse at specific cavity lengths have been well modelled in position and magnitude as mirror-profile-dependent crossings of the fundamental cavity mode with higher-order transverse modes. Deviations of the cavity mirrors from the idealized spherical mirror surface have also been identified as the origin of the birefringence which is observed in many fiber cavity systems~\cite{benedikter2015transverse,podoliak2017harnessing,benedikter2019transverse}. \subsection{Polarization properties and birefringence} The dominant contribution to the frequency splitting of polarization eigenmodes in fiber Fabry-Perot cavities has been shown to result from the elliptic shapes of the cavity mirrors forming the FFPC. The lifting of the degeneracy of polarization modes expected from scalar field theory (cf. Eq.~\ref{eq:HGModes}) can be explained by extensions to vector theory that allow for field components along the cavity axis~\cite{Uphoff2015,garcia2018dual}. For many applications cavities with low birefringence are desirable, and mirror manufacturing techniques (cf.~Sec.~\ref{fabrication}), including e.g. the rotation of the fiber during multi-shot sequences~\cite{takahashi2014}, have been introduced for this purpose.
Recently, however, it was also shown that the birefringence of optical cavities can be turned into an advantage by making more field modes available for controlled light matter interaction~\cite{barrett2019,barrett2020}, e.g. for further enhancing the Purcell effect, see Sec.~\ref{ssec:cqed}. \subsection{Fabrication of high quality mirrors for FFPCs} \label{fabrication} The small radii of curvature required for cavities with small mode volumes are not within the reach of traditional polishing techniques. Successful techniques to fabricate the required small, smooth depressions (depth of order a few \textmu m) on glass surfaces are surface etching methods and laser ablation. Wet or ion-based etching has proven to be a useful tool for providing radii of curvature below $\SI{5}{\micro\meter}$, but with surface roughness at the nanometer scale~\cite{laliotis2012icppolish,trichet2015,Qing2019}, which limits the achievable mirror reflectivities. The most successful technique to date for producing curved mirror shapes with ultra-low surface roughness is \chem{CO_2}-laser ablation (Fig.~\ref{fig:fabrication}(a))~\cite{hunger2010fiber,muller2010lowmode,hunger2012micro}. The strong absorption of fused silica (\chem{SiO_2}) for light between 9 and $\SI{11}{\micro\meter}$ wavelength is used to strongly heat the top layer of the fiber end surface by illumination with a focused \chem{CO_2} laser beam. For suitable intensity, focus and pulse duration, the Gaussian laser intensity profile melts and evaporates part of the glass surface and produces a depression with a similar profile~\cite{hunger2010fiber,brandstaetter2013,takahashi2014}. The spherical part at the bottom of the resulting structure exhibits a low roughness comparable to that of superpolished mirrors. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{FabricationBig.pdf} \caption{Treatment of fiber end facets for mirror fabrication.
(a) Thermal ablation by pulsed CO$_2$ laser illumination. (b) Phase shift interferometry of the fiber end facet for quality control. (c) Numerical reconstruction of the fiber end facet surface through phase unwrapping.} \label{fig:fabrication} \end{figure} In order to characterize the ablation process and to ensure the desired shape and alignment of the manufactured depressions, the fiber mirrors are optically inspected. A typical setup uses a high numerical aperture microscope to image an LED-illuminated fiber end in a Michelson interferometer configuration to obtain interferogram images of the fiber surfaces with high spatial resolution (see Fig.~\ref{fig:fabrication}(b) and (c)). The ablation technologies have proven to be suitable for producing a wide range of fiber end microstructures, from micrometer-long cavities~\cite{kaupp2016smallmode} to elaborate geometric shooting patterns that allow the creation of fiber resonators in the millimeter range~\cite{ott2016} and customized mirror ellipticities for tailored cavity birefringence~\cite{garcia2018dual}. Subsequent to the mirror surface molding, multilayer dielectric coatings are applied to realize low-loss, high-reflectivity mirrors in the visible, infrared or ultraviolet domain. For this purpose up to tens of alternating layers of materials with high (\chem{Ta_2O_5}, $n=2.10$) and low (\chem{SiO_2}, $n=1.45$) index of refraction and individual optical thicknesses of $\lambda/4$ are deposited on the fiber surface using ion-beam sputtering techniques~\cite{muller2010lowmode}. The resulting fiber-based mirrors with the shape of the original fiber surface depression can feature transmission, scattering and absorption losses (after annealing~\cite{atanassova1995}) down in the few parts per million range and are ready to be aligned to build high-finesse, low-mode-volume resonators.
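To illustrate how such a quarter-wave stack reaches ppm-level residual transmission, the following Python sketch computes the normal-incidence reflectance of an idealized, lossless $\lambda/4$ stack with the indices quoted above, using the standard characteristic-matrix method; the choice of 15 layer pairs and the silica substrate index are assumptions for illustration, not the actual coating design of any cited work.

```python
import numpy as np

def stack_reflectance(layers, wavelength, n_in=1.0, n_sub=1.45):
    """Normal-incidence reflectance of a lossless dielectric stack via the
    characteristic-matrix method. layers: list of (index, thickness) from
    the incidence side; n_in/n_sub: incident medium and substrate index."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength
        m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    b, c = m @ np.array([1.0, n_sub])
    y = c / b                          # input admittance of the coated surface
    r = (n_in - y) / (n_in + y)
    return abs(r) ** 2

# Assumed quarter-wave design at lambda = 780 nm:
# 15 pairs of Ta2O5 (n = 2.10) / SiO2 (n = 1.45) on a silica substrate
lam = 780e-9
pairs = [(2.10, lam / (4 * 2.10)), (1.45, lam / (4 * 1.45))] * 15
R = stack_reflectance(pairs, lam)
# 1 - R is a few tens of ppm for 15 pairs; each extra pair lowers it further
```

For an ideal quarter-wave stack the result agrees with the textbook admittance formula $Y = (n_H/n_L)^{2N} n_s$, so the residual transmission drops geometrically with the number of pairs; real coatings are additionally limited by scattering and absorption.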
\subsection{FFPC devices} \label{FFPCdevices} Depending on the specific application, the geometries of FFPC implementations can vary strongly, resulting in a large range of reachable specifications with regard to cavity and mode sizes, mode properties, degree of accessibility and stability of the devices. FFPC devices can consist of either two opposing fiber mirrors or a single fiber mirror together with a usually flat, macroscopic mirror substrate~\cite{Steinmetz2006}. As shown in Fig.~\ref{fig:fiberRealizations}~(a)~and~(b), the implementations differ with respect to the position of the smallest cavity waist, which is either located between the fiber mirrors or on the macroscopic mirror. Solid state quantum emitters on substrates are interfaced using the latter cavity geometry, whereas experiments with trapped atoms or ions usually employ a symmetric cavity geometry in order to maximize the coupling to the emitter. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fiberRealizations.pdf} \caption{(a) shows a hemi-cavity, where the smallest mode waist is located on a flat mirror substrate. The amber colored fiber in the image is the reflection of the fiber mirror approached from the left. (b) in contrast is a symmetric cavity with the smallest mode waist located in the center. The former is used to interface structures on the surface of the flat mirror, while the latter is used to interact with gases, atoms or ions that can be trapped in the center of the cavity mode volume. (c) shows a schematic of mode-matching optics using a graded index fiber (GRIN) piece. The stack of different fiber pieces allows shaping the mode incident on the mirror from the fiber side and matching it to the mode of the cavity~\cite{gulati2017fiber}.} \label{fig:fiberRealizations} \end{figure} As many applications benefit from high $Q$ optical resonators, the finesse $\mathcal{F} = Q/N$ is one of the key specifications for FFPCs.
In accordance with Eq.~\ref{eq:finesse}, it is determined by the losses $\mathcal{A}$ during a cavity round trip, with $\mathcal{F}=2\pi/\sum_i \mathcal{A}_i$. In FFPCs, the reachable finesse is usually limited by the quality of the reflective coating, the surface quality and the size of the fiber mirrors. Absorption in ion beam sputtered layer systems with high reflectivity usually occurs on the order of some parts per million. Annealing of these mirrors at moderate temperatures ($\leq \SI{350}{\celsius}$) can reduce the amount of absorption even further. Mirror carving using laser ablation creates smooth surfaces, but is hard to control close to the rim of the facet, reducing the usable mirror size. Small effective mirror diameters, however, lead to clipping losses that increase with the separation of the fiber mirrors and reduce the finesse. Record finesses for FFPCs on the order of $10^5$ have been reported~\cite{hunger2010fiber,rochau2021dynamical}. The cavity length $\ell$ of FFPC devices can be customized depending on the target specification for mode volume and linewidth (cf. Sec.~\ref{sec:fieldconc}). Furthermore, it is a key parameter to set the resonator's free spectral range $\Delta\nu_\text{FSR}$, which requires consideration if multiple cavity resonances are employed. The realization of specific cavity lengths requires the fabrication of tailored fiber mirrors with respect to the usable mirror size and the radius of curvature of the employed fiber mirrors. Record-long FFPCs reach lengths on the order of \SI{1}{\milli\meter}~\cite{ott2016}. The length of the cavity is thereby limited by the size of the mode on the mirror with respect to the usable area, the ultimate limit being the size of the optical fiber (cf. Eq.~\ref{eq:lmax}). For mirrors fabricated using laser ablation the usable mirror size is usually much smaller than the full fiber facet.
Mode waists larger than the spherical part of the mirror lead to clipping losses and hence strongly reduced finesse values. Another challenge, especially for long FFPCs, is mode matching of the fiber-guided and the cavity mode at the in-coupling mirror, as the cavity mode size is increasingly large compared to usual fiber-guided mode field diameters. For extremely short FFPCs with small mirror radii of curvature, the mismatch of the wavefront curvature leads to a similar effect. To compensate for this effect in large cavities, either fibers with a large mode-field diameter~\cite{ott2016} or fiber-integrated mode matching optics~\cite{gulati2017fiber} are required. The latter uses graded index (GRIN) fiber pieces of well-defined length that are spliced to the in-coupling fiber. The mirror fabrication is then performed on the last surface of the fiber stack. This approach uses the re-focusing properties of GRIN fibers to shape the fiber-side mode. A schematic depiction is shown in Fig.~\ref{fig:fiberRealizations}~(c). \subsection{Tunable and stable FFPCs} \label{sec:locking} A key parameter in any application of FFPCs is their frequency stability and their ability to be tuned and locked to a certain frequency reference. Frequency noise in high-finesse FFPCs arises from thermal drifts, picked-up acoustic noise from the environment, electric noise in the cavity resonance tuning and from the intrinsic mechanical noise at non-zero temperature caused by vibration modes~\cite{saavedra2021tunable,janitz2017high,lee2021novel}. The first three can be reduced by appropriate thermal and acoustic isolation and by using low-noise electrical components in the experiment. How strongly ambient acoustic noise is picked up by the device and the influence of the vibration modes onto the device stability are largely determined by the design geometry~\cite{saavedra2021tunable}.
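An order-of-magnitude estimate illustrates the role of the geometry: treating a protruding fiber stub as a clamped cylindrical cantilever, Euler-Bernoulli beam theory gives its fundamental flexural resonance. The numbers below (stub length, generic fused-silica parameters) are illustrative assumptions, not taken from the cited designs.

```python
import math

def cantilever_f1(length, diameter, youngs_modulus=72e9, density=2200.0):
    """Fundamental flexural resonance (Hz) of a clamped cylindrical
    cantilever from Euler-Bernoulli beam theory (beta_1 * L = 1.875)."""
    area = math.pi * diameter**2 / 4.0       # cross-sectional area
    inertia = math.pi * diameter**4 / 64.0   # second moment of area
    beta1 = 1.875104                         # first clamped-free mode root
    return (beta1**2 / (2.0 * math.pi)) * math.sqrt(
        youngs_modulus * inertia / (density * area)) / length**2

# Assumed: a standard 125-um silica fiber protruding 5 mm from its mount
f1 = cantilever_f1(5e-3, 125e-6)   # a few kHz, scaling as 1/L^2
```

The $1/L^2$ scaling shows why short, rigidly supported fiber stubs push the lowest mechanical modes well above typical acoustic frequencies.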
As FFPCs are miniaturized devices, their intrinsic vibration modes can occur at comparably high acoustic frequencies ($>\SI{1}{\kilo\hertz}$). Whilst in macroscopic cavities low frequency vibration modes can be actively damped~\cite{whittle2021approaching}, passive stability in FFPC devices is reached by pushing the lowest order vibration modes to higher frequencies, reducing the amount of thermal vibration noise in the system. It also allows for high locking bandwidths to actively stabilize against environmental disturbances through an electric actuation of the cavity length, e.g. via piezo-electric elements~\cite{gallego2016high,janitz2017high,saavedra2021tunable}. This design paradigm is achieved through rigid device geometries, where the opposing mirrors of the FFPC are supported in close vicinity to a common base~\cite{saavedra2021tunable,janitz2017high,lee2019microelectromechanical}. Aside from electronic locking, other mechanisms can be realized as well, for example through the optomechanical spring effect of an integrated membrane~\cite{rohse2020cavity} or via thermal locking~\cite{brachmann2016photothermal,gallego2016high}. The latter is based on a self-locking mechanism, where thermal drifts caused by the heat induced through the intra-cavity field lead to a stabilization of the cavity resonance. Similarly, these thermal nonlinearities can even lead to deformation of the FFPC mirrors or drive the FFPC resonance into photothermal self-oscillations~\cite{konthasinghe2017self,konthasinghe2018dynamics}. \subsection{FFPC integration and microstructured mirrors}\label{sect:functint} To increase the functionality of FFPCs, other elements such as mechanical or electrical components can be integrated into the FFPC geometry, or the FFPC itself is embedded into miniaturized experiment designs, e.g. chip-based ion trap experiments~\cite{lee2019microelectromechanical}.
In the particular example of ion traps, the advantage of small mode volumes of FFPCs is usually not fully exploited because of uncontrolled charging of the dielectric fiber mirrors. Usually, this is mitigated by employing long cavities; an alternative approach to reduce the effect of charging, however, is to integrate a microstructure such as a metallic shield directly on the fiber mirror. This metallic mask is deposited on the end facet of the fiber, leaving out the central part around the core. As an example for direct microstructuring of FFPC mirrors, a possible procedure to fabricate a metallic mask is detailed in the following. In the first step, a polymer cover (see Fig.~\ref{fig:fiberfinesse}~(a)) is printed on top of the fiber-end facet by three-dimensional laser lithography~\cite{kawata2001finer}. Subsequently, the fiber as well as the polymer cover are metallized by thermal evaporation in a deposition chamber. In order to achieve a uniform coating, the fiber is rotated during the deposition. In the last fabrication step, the polymer cover is removed with a micromanipulator. Fig.~\ref{fig:fiberfinesse}~(b) exemplifies a fiber-end facet with a copper mask that we fabricated using this procedure. In order to show that the fabrication does not deteriorate the optical properties of the fiber mirrors, we characterized the FFPC before and after the application of the metallic masks. The data shown in Fig.~\ref{fig:fiberfinesse}~(c) demonstrate that the metallic mask does not have a negative effect on the cavity finesse. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{FFPCFinesse2.pdf} \caption{(a) CAD drawing of the polymer cover printed on top of the fiber end-facet. (b) Optical micrograph of the end facet of a metallized fiber.
(c) Finesse of an FFPC before and after application of a metallic mask on the end-facet and the cladding of the fibers.} \label{fig:fiberfinesse} \end{figure} As shown here, the integration of microstructures into FFPCs can expand their capabilities, add new functions and at the same time maintain a miniaturized geometry. Future FFPC device developments are expected to make more extensive use of such additional integrated elements and to embed FFPCs into other miniaturized experimental platforms. \section{Applications}\label{sec:applications} All applications of FFPCs make use of the miniaturized geometry leading to a small mode volume and a small footprint, see Fig.~\ref{fig:ffpcfunc}. Here we concentrate on three aspects: Small mode volumes allow enhancing the electric field strength of a single photon to the level where strong coupling with single atoms realizes the quantum regime (Sec.~\ref{ssec:cqed}); short cavities increase the interaction with miniaturized mechanical resonators that are easily introduced due to the open geometry (Sec.~\ref{sec:optomech}); and FFPCs also allow constructing very compact spectrometers for applications in sensing (Sec.~\ref{sec:sensing}). \subsection{FFPCs for Cavity QED: The Quantum Limit of Light Matter Interaction} \label{ssec:cqed} While the radiative decay of excited atoms appears to be an unchangeable property of nature, it is not. It was realized early on by E. Purcell~\cite{purcell1946rfemission} in the context of the invention of NMR that electromagnetic cavities would allow manipulating the coupling strength of light matter interaction by controlling the density of electromagnetic states coupling to a quantum emitter. Later on, D. Kleppner~\cite{kleppner1981vacuum} coined the phrase \textit{turning off the vacuum}, which put forward the idea of \textit{cavity QED} or \textit{cQED} to investigate the quantum limit of light matter interaction, i.e.
quantum emitters such as single atoms coupled to the electromagnetic vacuum or few photons. An extensive amount of information about the general status of cavity supported interaction of atoms and fields at the quantum level after more than four decades of experimenting is found in~\cite{haroche2006quantum}, and, focusing on more recent applications as light matter interfaces in future quantum networks, in~\cite{reiserer2010cqed}. Today, miniaturized optical cavities integrated with optical fibers, the subject of this article, have opened yet another experimental avenue to control light matter interaction at the full quantum level. \subsubsection{The role of the FFPC in atom-field coupling} FFPCs typically have small waist diameters $2w_0$ and accordingly small mode volumes (Eq.~\ref{eq:vmode}). In a simplified rate model, we may characterize the interaction of light and matter by considering the absorption cross section $\sigma=3\lambda^2/2\pi$ for atomic quantum emitters at resonance wavelength $\lambda$ and the waist area $A=\pi w_0^2$ of a photon propagating with a Gaussian beam shape (Fig.~\ref{fig:atomcav}). \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{AtomCavIA.pdf} \caption{Simplified parameters for light matter interaction at the atom-photon level inside a cavity. The most interesting strong coupling limit occurs when the cross sections for absorption and mode at the interaction point are matched.} \label{fig:atomcav} \end{figure} The cavity recycles the travelling photon $n_\text{round}={\cal{F}}/\pi$ times, with ${\cal{F}}$ the finesse from Eq.~\ref{eq:finesse}. The number of chances that the photon gets to be absorbed by the atom is then estimated by the ratio of $\sigma$ and $A$ times the number of round trips, i.e. \begin{equation}\label{eq:Cprob} C = \frac{\sigma}{\pi w_0^2}{\cal{F}}\ .
\end{equation} One can show that this definition of the cooperativity $C$ is consistent with the more fundamental definition \begin{equation}\label{eq:coop} C = \frac{g^2}{2\kappa\gamma} = \frac{1}{2}\cdot\left(\frac{g}{\gamma}\right)^2\left(\frac{\kappa}{\gamma}\right)^{-1}\ , \end{equation} where the cooperativity $C$ compares the rate of evolution $g$ of the combined atom-field system with the dynamic parameters governing its constituents: the free space decay rate $\gamma$ of the atomic dipole of the relevant transition ($2\gamma$ for the excitation energy), and the relaxation rate $\kappa$ of the cavity field amplitude ($2\kappa$ for the cavity field energy). Parameters in publications are typically given as sets of $\{g,\kappa,\gamma\}$ values. For an initially excited atom, the cooperativity $C$ in Eq.~(\ref{eq:coop}) determines the ratio of the emission rate into the cavity field mode with respect to the global free space emission rate by the enhancement factor \begin{equation}\label{eq:eta} \eta = \frac{2C}{1+2C}. \end{equation} For large ${\cal{F}}$ but still small $2C = (2\sigma/(\pi w_0^2))\cdot{\cal{F}}\ll 1$, the probability of photon emission into the cavity field is already strongly enhanced by the finesse factor ${\cal{F}}$ in comparison to free space. However, with $\eta \ll 1$ (Eq.~\ref{eq:eta}) the total decay rate of the atom experiences little modification, giving rise to the so-called \textit{bad cavity regime}, \begin{equation}\label{eq:crate} \gamma'=\gamma(1+2C). \end{equation} The realm of cavity QED has also been extended to so-called \textit{artificial atoms}, a term that describes quantum emitters with two- and three-level quantum systems and good coupling to photon fields, thus resembling simple atoms. Ranging from molecules to quantum dots and more, artificial atoms offer technical advantages such as solid state samples, see Sec.~\ref{section:emitterFFPCcoupling}.
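A short numerical sketch connects the two expressions for $C$, Eqs.~\ref{eq:Cprob} and \ref{eq:coop}. The finesse and waist below are assumed round numbers of typical FFPC scale, and the $(g,\kappa,\gamma)$ set is hypothetical, chosen only for illustration.

```python
import math

def cooperativity_geometric(wavelength, w0, F):
    """C from the photon-recycling picture, Eq. (Cprob):
    sigma = 3*lambda^2/(2*pi), C = sigma/(pi*w0^2) * F."""
    sigma = 3.0 * wavelength**2 / (2.0 * math.pi)
    return sigma / (math.pi * w0**2) * F

def cooperativity_rates(g, kappa, gamma):
    """C from the dynamic rates, Eq. (coop): C = g^2/(2*kappa*gamma)."""
    return g**2 / (2.0 * kappa * gamma)

# Assumed FFPC-scale numbers: lambda = 780 nm, w0 = 3.7 um, F = 5e4
C = cooperativity_geometric(780e-9, 3.7e-6, 5.0e4)   # C >> 1
eta = 2.0 * C / (1.0 + 2.0 * C)                      # Eq. (eta), close to 1

# Hypothetical rate set (g, kappa, gamma)/2pi in MHz:
C2 = cooperativity_rates(80.0, 25.0, 3.0)            # also C >> 1
```

Since $g$, $\kappa$ and $\gamma$ enter Eq.~\ref{eq:coop} only through ratios, the rates may be given in any common unit, e.g. MHz as above.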
\subsubsection{FFPCs for strong atom-field coupling} \label{subsection:strongcoupling} The most interesting regime (the so-called strong coupling regime) occurs for $C > 1$, when the atom-field coupling becomes so strong that the coupled systems undergo joint evolution. Coherent coupling of the two quantum oscillators, atoms and fields, indeed leads to a spectrum showing a splitting as a signature of the strongly coupled system, see Fig.~\ref{fig:split}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{Splitting.pdf} \caption{(a) The spectrum of the empty cavity is taken in reflection with a strong dip at resonance. (b,c) Reflection spectra for the cavity frequency set at different detunings from the atom transition frequency (dotted lines corresponding to the spectrum at resonance in (a), too). (d-f) Reflection spectra with parameters as in (a-c) but with a single atom resonantly coupled to the cavity. The spectrum of the atom-cavity system shows a splitting which indicates the strong coupling regime. The gap width at resonance is given by $2g$, the atom-photon coupling rate, also called the vacuum Rabi splitting~\cite{gallego2017thesis}.} \label{fig:split} \end{figure} To achieve this regime, the small mode volumes of FFPCs offer excellent conditions: The coupling strength $g$ of atoms, or artificial atoms, with the photon field is given by the product of the atomic transition dipole moment $d$ and the local cavity field strength (Eq.~\ref{eq:uvse}). It is calculated for the energy of a single cavity photon $\hbar\omega_\text{cav}$ from ${\cal{E}}_\text{cav}=\sqrt{2\hbar\omega_\text{cav}/\epsilon_0V_\text{mode}}$ per photon, resulting in \begin{equation}\label{eq:gcav} g = \frac{1}{\hbar} d\cdot {\cal{E}}_\text{cav} = d\sqrt{\frac{2\omega_\text{cav}}{\hbar\epsilon_0 V_\text{mode}}}\ .
\end{equation} For $2C \gg 1$, Eq.~\ref{eq:eta} now yields $\eta \to 1$; that is, emission into free space is almost switched off and the excitation energy is mostly deposited into the cavity field. Since we cannot change the free-space decay rate $\gamma$ of atoms, we can experimentally only control the ratio $g/\gamma$ via the field concentration caused by a small mode volume. The field relaxation rate $\kappa$, caused by the outcoupling from the resonator, however, is a function of the mirror transmissivity, which is determined by the mirror coatings. Hence we can technically also choose the ratio $\kappa/\gamma$ in order to control the dynamic properties of the coupled system of atoms and fields. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{DMPlot.pdf} \caption{A non-exhaustive survey of normalized parameter sets \{$g, \kappa, \gamma$\} and $C=g^2/2\kappa\gamma$ used in cavity QED experiments with fiber-based Fabry-Perot resonators (FFPCs, sorted from left to right:~\cite{Takahashi2020,brekenfeld2020quantum,steiner2013,pscherer2021single,gallego2018strong,colombe2007strong,pfister2016quantum,ballance2017,lien2016}) with respect to performance in atom-field coupling strength $g/\gamma$ vs. resonator-field fiber coupling rate $\kappa/\gamma$. For comparison, a small set of bulk cavity system parameters is shown, too, from~\cite{khuda2009,kalb2015,kuhn2002}, sorted from top to bottom.} \label{fig:kappvsg} \end{figure} In Fig.~\ref{fig:kappvsg} we compare atom-cavity systems prepared by experimenters for the strong coupling regime over the past decades with respect to their $g/\gamma$ and $\kappa/\gamma$ values for both bulk cavity experiments~\cite{khuda2009,kalb2015,kuhn2002} and FFPCs~\cite{Takahashi2020,brekenfeld2020quantum,steiner2013,pscherer2021single,gallego2018strong,colombe2007strong,pfister2016quantum,ballance2017,lien2016}.
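Eq.~(\ref{eq:gcav}) can be evaluated numerically. The following Python sketch uses exactly the prefactor convention written above (conventions differ by factors of 2 between references), with purely hypothetical order-of-magnitude values for the dipole moment and mode volume:

```python
import math

hbar = 1.054571817e-34   # J*s
eps0 = 8.8541878128e-12  # F/m
c = 2.99792458e8         # m/s

def coupling_rate(d, wavelength, mode_volume):
    """Atom-field coupling g = d*sqrt(2*omega/(hbar*eps0*V_mode)),
    following the convention of Eq. (gcav) in the text; returns rad/s."""
    omega = 2 * math.pi * c / wavelength
    return d * math.sqrt(2 * omega / (hbar * eps0 * mode_volume))

# Hypothetical inputs: a strong optical dipole, a small FFPC mode volume.
d = 2.0e-29          # C*m
wavelength = 780e-9  # m
V_mode = 1e-15       # m^3 (= 1000 micrometer^3)
g = coupling_rate(d, wavelength, V_mode)
print(f"g/2pi = {g / (2 * math.pi) / 1e6:.0f} MHz")
```

Shrinking $V_\text{mode}$, as FFPCs do, directly increases $g$ as $V_\text{mode}^{-1/2}$.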
Selected sets of the $(g, \kappa, \gamma)$ values used in different laboratories show a clear tendency to take advantage of FFPC properties especially in the fast cavity regime, i.e. with $\kappa/\gamma \gg 1$. In the fast cavity domain, the energy of an initially excited atom is radiated into the cavity field at the strongly enhanced rate according to Eq.~\ref{eq:crate}. The cavity field (i.e. the stored photons) is in turn rapidly coupled to the fiber port, too. This is experimentally advantageous since the photons leaving the cavity carry information about the quantum state of the atom-cavity system. Calling the emission of a photon from the FFPC a ``read'' process (since the information on the photon needed to be present in the cavity before its release), it is clear that the inverse ``write'' processes are possible as well, and hence FFPCs coupled to quantum emitters are prime candidates for quantum memories or certain quantum gates in future quantum networks~\cite{reiserer2010cqed}. While it is impossible to give an exhaustive survey of the applications of the different platforms of quantum emitters now coupled to FFPCs (atoms, ions, color centers, quantum dots, $\dots$), we indicate the origin of these research lines and selected current results. \subsubsection{Quantum emitters coupling to FFPC fields} \label{section:emitterFFPCcoupling} As outlined above, FFPCs are favorable components to enhance the interaction of light fields with quantum emitters. Conceptually, neutral atoms (and ions) are the simplest and best understood quantum light sources. With neutral atoms -- either transiting a cavity or trapped inside -- pioneering experimental progress in cavity QED was made, ranging from the microwave to the optical domain.
\paragraph{FFPCs for analyzing and preparing cold atom ensembles.} The rise of FFPCs for cavity QED was indeed triggered by experiments with a degenerate (ultracold) system of atoms, a Bose-Einstein condensate~\cite{Steinmetz2006,colombe2007strong}: Fiber optical channels were integrated with a so-called \textit{atom chip} for analyzing and controlling properties of the atomic ensemble. The experiment showed that a system of $N$ identical atoms further increases the coupling strength (Eq.~\ref{eq:gcav}) to $g_\text{N} \propto \sqrt{N}\cdot g$. Also, quantum projection measurements (see below) allowed the creation of entanglement within such ultracold atom ensembles~\cite{entangle2014haas}. \paragraph{FFPCs for controlling single and few atom dynamics.} With many atoms, both the atom ensemble and the cavity field resemble classical harmonic oscillators, since both of them exhibit the well-known linear energy spectrum. The spectrum of the combined system of course shows a splitting into two modes, but the global dynamic behaviour is dominated by what is well known from two coupled classical pendulums. This situation is very different with a single or few atoms: In the strong coupling case the atom is easily saturated by a field corresponding to a single photon or more. Atom-field coupling in this regime is highly non-linear: the coupling strength scales as $g_{n_\text{ph}}=\sqrt{n_\text{ph}}\,g$ with the number of photons $n_\text{ph}$ in the cavity. The quantum nature of this situation may be illustrated with the detection scheme for the quantum state of an atom coupled to the FFPC: from Fig.~\ref{fig:split} it is clear that an incoming light signal resonant with the empty cavity is reflected or transmitted depending on the state of the cavity-atom system: For an atom in an uncoupled state, the cavity looks empty and the light field is transmitted; for a coupled state, the signal is reflected.
The reflection signal thus discriminates with high efficiency an uncoupled from a coupled atomic state~\cite{qnd2011volz}, which amounts to a so-called quantum non-demolition measurement. The majority of cQED experiments with FFPCs have focused on coupling a single or few atoms (or artificial atoms, see below) to the cavity field~\cite{singleat2010gehr,gallego2018strong,brekenfeld2020quantum,macha2020nonadiabatic}. For applications, e.g. the creation of single photons on demand, the dynamics of the atom-cavity system must be externally controlled. This can be accomplished by using an (artificial) atom offering a third level which couples to the quantum states of the strongly interacting atom-cavity system. By driving the transitions involving this third quantum state, the dynamics of the strongly coupled atom-field evolution can be manipulated, and functional components for quantum technology such as single photon sources~\cite{gallego2018strong} and quantum memories~\cite{brekenfeld2020quantum} can be constructed. We finally note that trapped neutral atoms require laser cooling processes; here the fast cavity domain renders the method of cavity cooling~\cite{cavcool2013reis} inoperative and hence alternative schemes must be used~\cite{urunuela2020ground}. With respect to atom trapping, the cavity may also be used to provide a dipole trap field by engineering a resonance at a suitable second wavelength~\cite{Ferri2020Mapping}. \paragraph{Matching trapped ions with FFPCs.} It is natural to consider trapped ions for cQED experiments with FFPCs. However, the small scale of FFPCs and their dielectric nature make them subject to uncontrolled charging under vacuum conditions. Therefore, ion trap experiments with FFPCs use relatively long cavities (up to a millimeter), which increases the mode volume.
Successful experiments are now carried out based on Yb$^+$~\cite{steiner2013} and Ca$^+$ ions~\cite{Takahashi2020} at wavelengths of $\SI{370}{\nano\meter}$ and $\SI{854}{\nano\meter}$, respectively. Especially the short UV wavelength of Yb$^+$ adds to the technical challenges in maintaining high performance mirror coatings~\cite{schmitz2019ultraviolet}. At present, investigations are underway into numerous technical improvements, especially with regard to ion-coupled FFPCs: Unwanted stray fields are to be controlled by means of integrated electrodes (see Sec.~\ref{sect:functint}); a natural extension of FFPCs is integration with miniaturized ion traps~\cite{steiner2013,pfister2016quantum,Takahashi2020}; improved mirror fabrication for long cavities can mitigate problems with charging of dielectric surfaces~\cite{lee2019microelectromechanical}. \paragraph{FFPCs with color centers.} While atoms and ions are efficient and simple quantum emitters, the need for trapping devices based on laser radiation or radiofrequency fields also causes much technical overhead. Thus there are strong efforts to replace atoms by artificial atoms, which are realized in solid-state host materials and cannot run away, in order to simplify the systems towards applications. Prominent examples of artificial atoms -- quantum objects exhibiting atom-like energy structures with transitions in the optical domain -- include color center defects and rare-earth ions in transparent materials. With nitrogen vacancies in diamond (NV centers), one of the most widespread artificial atoms, FFPCs have been shown to make NV centers available at the quantum level, e.g. by demonstrating single photon light sources~\cite{albrecht2014}. A review focusing on the operation of color centers with FFPCs is found in~\cite{janitz2020cavity}. Rare-earth-ion doped crystals offer another type of artificial atom, well known from rare-earth-ion doped laser crystals.
They offer alternative wavelengths, and coupling with FFPCs has been demonstrated to modify their radiative properties -- another important step towards making single-photon light sources for future quantum networks at the telecom wavelength around $\SI{1550}{\nano\meter}$ available~\cite{casabone2018cavity}. \paragraph{FFPCs with molecules.} The high spectral selectivity of FFPCs also allows one to control the emission properties of molecules for selected transitions out of their complex level structure, and at the single emitter level~\cite{toninelli2010scanning}. The enhancement of molecule-light field interaction in an FFPC-based microcavity has been shown to realize nonlinear wave mixing processes at the single photon level~\cite{daqing2021,pscherer2021single}. \paragraph{FFPCs for semiconductor quantum emitters.} Yet another natural choice for artificial atoms is given by semiconductor nanostructures, where quantum wells and quantum dots exhibit atom-like energy structures which can also be engineered over a large range of wavelengths. In contrast to color centers, quantum dots hold the promise of being pumped by simple electric currents. Within the semiconductor world, the integration of all basic opto-electronic components into systems is routine. However, the realization of a resonant and strongly coupled quantum dot-cavity system based on quantum emitters, cavities, and waveguides remains challenging~\cite{senellart2021}. Hence the application of FFPCs offers attractive alternatives, e.g. for single photon generation~\cite{trivedi2020}, with respect to simplicity of fabrication and tunability. The feasibility of this approach has been shown in~\cite{miguel2013cavity,Besga2015}, where the substrate holding the quantum dots was directly integrated with distributed Bragg reflectors (DBRs). Finally, we mention that FFPCs may help to further enlarge the toolbox for processing photons with more exotic objects such as carbon nanotubes~\cite{jeantet2017exploiting}.
\paragraph{FFPCs for hybrid systems.} Another interesting application of FFPCs with cQED concepts concerns the distribution of photonic quantum information by means of optical fiber links. Future quantum networks could profit from components (single-photon sources, quantum memories, $\dots$) built from different physical platforms (``hybrid systems''), depending on their functional advantages. It is then of interest, e.g., to match different wavelengths as well as to control the shape of photons to allow efficient exchange between quantum dot single-photon sources and ion- or atom-based quantum memories or other systems~\cite{meyer2015,macha2020nonadiabatic}. \subsection{Cavity optomechanics in FFPCs} \label{sec:optomech} Cavity optomechanical experiments investigate the interaction of optical cavity modes with the motion of mechanical elements~\cite{Aspelmeyer2014}. These can be mechanical resonators inside the optical cavity or, in the simplest case, one of the optical mirrors that is suspended and able to vibrate. The displacement of the mechanical resonator is thereby coupled to the number of photons in the optical mode. On the one hand, this leads to a push of the mechanical resonator through the radiation pressure, and on the other hand to a detuning of the optical mode frequency $\nu$ by the mechanical resonator displacement $x$. In the simplest case the detuning $\Delta \nu$ is inversely proportional to the unperturbed length of the cavity $\ell$ as $\Delta \nu \approx \nu_0 \cdot x/\ell$~\cite{Aspelmeyer2014}. This inverse scaling with $\ell$ holds true for all optomechanical experiments where a mechanical mode locally interacts with a distributed optical cavity mode. The most common examples of this type are suspended mirrors~\cite{groblacher2009observation} or membranes in cavities~\cite{thompson2008strong}.
In other optomechanical setups, where the optical and mechanical modes are of comparable size and interact within their full mode volume, this scaling is typically replaced by the overlap of the two modes, as for example in the case of photoelastic coupling in optomechanical crystals~\cite{chan2012optimized}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{OMCschematics.pdf} \caption{Cavity optomechanical experiments can be realized with a broad variety of mechanical resonators inside FFPCs. The displacement of the mechanical resonator thereby modulates the optical cavity resonance frequency, e.g. by a position-dependent modulation of the effective refractive index of the optical cavity mode. (a) shows a schematic of a membrane-in-the-middle setup~\cite{thompson2008strong}. The optical field extends on both sides of the mechanical oscillator, which is clamped outside the mode volume of the optical mode. Examples for this geometry are~\cite{flowers2012fiber,rochau2021dynamical,rohse2020cavity,shkarin2014optically}. (b) depicts a setup using density waves in liquid helium as the mechanical resonance, as demonstrated in~\cite{kashkanova2017optomechanics,shkarin2019quantum}. The standing wave inside the liquid helium is, like the optical field, confined between the two mirror surfaces. (c) depicts an experiment using a nanowire as the mechanical resonator, as used in~\cite{fogliano2021mapping}.} \label{fig:OMCschematics} \end{figure} FFPCs have been used in optomechanical experiments because of their ability to realize miniaturized cavities, corresponding to small $\ell$, that at the same time still allow for a simple integration of mechanical elements thanks to the accessible cavity volume.
Most of the realizations using FFPCs can be attributed to the so-called membrane-in-the-middle (MIM) type experiments~\cite{flowers2012fiber,rochau2021dynamical,rohse2020cavity,shkarin2014optically} (see Fig.~\ref{fig:OMCschematics}~(a)), where a partially transmitting element divides the cavity into two sub-cavities. The small mode waist that can be realized in FFPCs also allows reducing the size of the mechanical resonator in these experiments. This can be used to increase the mechanical resonator frequency $\Omega$ and decrease its effective mass $m_\text{eff}$. Pushing towards higher mechanical frequencies allows entering the sideband-resolved regime ($\Omega>\kappa$), where the cavity can be used to, e.g., selectively enhance amplification or damping of the mechanical resonator. Smaller effective masses enable larger zero-point motions $x_\text{ZPM} = \sqrt{\hbar/(2m_\text{eff}\Omega)}$ and correspondingly a larger vacuum optomechanical coupling strength $g_0/2\pi = x_\text{ZPM}\cdot\partial\nu/\partial x$ that characterizes the interaction strength on the single-photon level. The reduction of the mechanical oscillator size has led to realizations using on-chip \chem{SiN}~\cite{rochau2021dynamical} and nanowire oscillators~\cite{fogliano2021mapping} (see Fig.~\ref{fig:OMCschematics}~(c)), with the latter being used to map out the optical mode, position-dependent coupling, and scattering. Apart from classical MIM-type experiments, other realizations in FFPCs have used sound waves of liquid helium filling the cavity volume as mechanical modes~\cite{kashkanova2017optomechanics,kashkanova2017superfluid,shkarin2019quantum} (see Fig.~\ref{fig:OMCschematics}~(b)). The distributed mechanical mode interacts with the optical mode through the modulation of the refractive index by the density variations in the acoustic standing wave pattern. An overview of the reached parameters in FFPC-based optomechanical experiments is shown in Fig.~\ref{fig:OMCcomparison}.
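The scaling arguments above can be made concrete in a small Python sketch. All parameter values are hypothetical order-of-magnitude inputs, and the frequency pull is estimated with the simplest relation $\partial\nu/\partial x \approx \nu_0/\ell$ from the preceding subsection:

```python
import math

hbar = 1.054571817e-34  # J*s

def x_zpm(m_eff, Omega):
    """Zero-point motion x_ZPM = sqrt(hbar / (2*m_eff*Omega))."""
    return math.sqrt(hbar / (2 * m_eff * Omega))

def g0(m_eff, Omega, dnu_dx):
    """Vacuum optomechanical coupling g0/2pi = x_ZPM * dnu/dx (in Hz)."""
    return x_zpm(m_eff, Omega) * dnu_dx

# Hypothetical membrane-in-the-middle parameters:
m_eff = 1e-12                # kg, effective mass
Omega = 2 * math.pi * 1e6    # rad/s, 1 MHz mechanical mode
nu0 = 2.8e14                 # Hz, optical frequency (~1064 nm)
ell = 100e-6                 # m, cavity length
dnu_dx = nu0 / ell           # simplest frequency-pull estimate

print(f"x_ZPM  = {x_zpm(m_eff, Omega):.2e} m")
print(f"g0/2pi = {g0(m_eff, Omega, dnu_dx):.2e} Hz")
```

The sketch makes the trade-off explicit: halving $m_\text{eff}$ or shortening $\ell$ both increase $g_0$, which is exactly what the small mode waists and short lengths of FFPCs enable.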
As for cavity optomechanical experiments in general, except those using cold atoms as mechanical oscillators, only moderate values of $g_0 /\kappa$ are reached. As a consequence, nonlinear quantum effects of the optomechanical interaction at the single-photon level have not yet been realized. However, FFPC-based systems still reach decent parameters due to their small cavity lengths, with correspondingly large vacuum coupling strengths, and the achievable high finesse. The single-photon cooperativity $C_0 = 4 g_0^2/(\kappa\Gamma)$ can still reach favorable values, as mechanical resonators with high quality factors can be realized. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{comparisonFigure.pdf} \caption{Comparison of optomechanical experiments using FFPCs. (a), (b) show the reached values of vacuum coupling strength $g_0$ and mechanical frequency $\Omega$ vs the cavity linewidth $\kappa$. The two sideband-resolved realizations [vi, vii] use density waves in liquid helium as the mechanical resonance~\cite{kashkanova2017optomechanics,shkarin2019quantum}. The conventional membrane realizations [i-iv] are~\cite{flowers2012fiber,rochau2021dynamical,rohse2020cavity,shkarin2014optically}, and the experiment using a nanowire as the mechanical resonator is~\cite{fogliano2021mapping}. (c) shows the performance of the different experiments in terms of the mechanical and optical loss rates $\Gamma$ and $\kappa$ in relation to $g_0$.} \label{fig:OMCcomparison} \end{figure} A benefit of FFPCs in these experiments, compared to even more miniaturized geometries like whispering gallery mode resonators~\cite{schliesser2010cavity} or optomechanical crystals~\cite{eichenfield2009optomechanical}, lies in their tolerance to large optical drive powers.
Whilst FFPCs, like macroscopic optical cavities, can make use of large intracavity photon numbers $n_\text{c}$ (up to Watts of optical power in high finesse cavities) to boost the linearized optomechanical coupling strength $g = \sqrt{n_\text{c}}\cdot g_0$, other platforms experience limitations caused by heating and thermal instabilities. As FFPCs can be integrated into on-chip platforms~\cite{lee2019microelectromechanical}, can be used to interface very miniaturized mechanical elements, and are able to reach large finesse together with strong optomechanical coupling strengths~\cite{rochau2021dynamical}, they continue to be a favorable platform for cavity optomechanical experiments, especially where large intracavity optical powers are required. \subsection{Cavity-enhanced sensing and other applications} \label{sec:sensing} Aside from their utilization as an interface for quantum emitters or mechanical oscillators, FFPCs have been used for a large variety of different sensing tasks. Scanning cavity microscopes~\cite{Mader2015} (see Fig.~\ref{fig:otherApps}~(a)) use a fiber mirror as a scanning probe above a reflective substrate. They can be used to map the surface~\cite{benedikter2015transverse,benedikter2019transverse} or particles located on it~\cite{Mader2015}. Measurements including multiple higher-order cavity modes are thereby used to increase the spatial resolution below the mode waist size of the fundamental mode. Another application of FFPCs as a cavity-enhanced sensor is trace gas sensing~\cite{Petrak2014} (see Fig.~\ref{fig:otherApps}~(d)). There, the high intracavity intensity along with the preferred emission into cavity modes is used to enhance the emission of Raman scattering from atmospheric gases. Fiber-based sensing of substances is however not limited to gaseous media, but can also be applied to liquid solutions in absorption-measurement-based~\cite{waechter2010chemical} or refractometer-based schemes~\cite{li2019fiber}.
As the resonance frequency of an FFPC is sensitive to external mechanical perturbations, it can also be used for their detection. Fiber cavity based force sensors~\cite{wagner2018direct} (see Fig.~\ref{fig:otherApps}~(e)) as well as fiber-based sensors for strain~\cite{jiang2001simple} and vibrations~\cite{garcia2018dual} have been demonstrated. Strong fields are a prerequisite for enhancing nonlinear optical effects. As such fields can be provided by optical cavities, FFPCs have also been employed to demonstrate and make use of optical nonlinearities. Examples include the generation of photon pairs~\cite{langerfeld2018correlated,ronchen2019correlated} (see Fig.~\ref{fig:otherApps}~(c)) or, in a slightly different cavity geometry, even the realization of a photonic flywheel which, using dissipative Kerr solitons, may lead to FFPC-based frequency comb devices~\cite{jia2020photonic} (see Fig.~\ref{fig:otherApps}~(b)). Finally, traditional interferometer applications like optical filters can also be realized in fiber-based platforms. Early developments in this field even date back far before the first realization of high finesse FFPCs~\cite{stone1989optical,chraplyvy1991optical}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{applications.pdf} \caption{Compilation of various applications and experiments using FFPCs for sensing or nonlinear devices. (a) Scanning cavity microscope, adapted from Ref.~\cite{Mader2015} (CC BY 4.0). (b) Dissipative Kerr solitons and frequency comb, adapted with permission from Ref.~\cite{jia2020photonic}. Copyrighted by the American Physical Society. (c) Photon pair generation in FFPCs using the optical nonlinearity of the mirror coating, adapted with permission from Ref.~\cite{langerfeld2018correlated}. Copyrighted by the American Physical Society. (d) Trace gas sensing via enhanced Raman scattering, adapted with permission from Ref.~\cite{Petrak2014}. Copyrighted by the American Physical Society.
(e) Force sensing using FFPCs, adapted with permission from Ref.~\cite{wagner2018direct}.} \label{fig:otherApps} \end{figure} \section{Conclusion and prospects} Our summary shows that, following the pioneering experiments~\cite{colombe2007strong,Steinmetz2006}, the improved understanding, characterization, and fabrication techniques developed for FFPCs with excellent and robust optical and mechanical properties have opened the route to manifold applications in quantum technology, spectroscopy, and beyond. The already wide breadth of current applications also points to a potentially even larger success in the future. FFPCs may continue to offer advantages over alternative schemes: FFPCs can overcome the low tunability characteristic of rigid high finesse cavities such as micro toroids, while the accessibility of the cavity field is unrivaled; also with respect to fully integrated photonic crystal cavities, the tunability and versatility of FFPCs offer advantages. We expect experiments and applications using FFPCs to further diversify and to demonstrate their flexibility in tackling challenges in different fields, such as platforms for controlled light-matter interaction, microscopy, sensing, or nonlinear devices for applications at extremely low light levels. This flexibility, together with their integration into fiber optics, is expected to also create economic impact besides their already tremendous scientific value. \vspace{6pt} \begin{acknowledgements} The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, SFB/TR185 \textit{OSCAR}) under Germany's Excellence Strategy – Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769, as well as by the Bundesministerium f\"ur Bildung und Forschung (BMBF), projects Q.Link.X and FaResQ. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} There are many modern money exchange systems, such as paper checks, credit/debit cards, automated clearing house (ACH) payments, bank transfers, or digital cash, which are owned and regulated by financial institutions. Nevertheless, in the evolving world of trade, the movement of money is still going through changes. The last decade witnessed the introduction of \textit{Bitcoin} \cite{nakamoto2008bitcoin}, a new paradigm-shifting innovation where the users control their own money without needing a trusted third party. In this model, the users govern the system by coming to a consensus for controlling the transfer and the ownership of the money. Following the success of Bitcoin, new cryptocurrencies that offer new capabilities were introduced based on the idea of consensus-based account management \cite{mukhopadhyay2016brief,tschorsch2016bitcoin}. In the following years, the initial success of cryptocurrencies was hindered by practical issues related to their daily use. Basically, Bitcoin was a very limited system in terms of scalability, and its wide usage in simple daily transactions was practically impossible due to long waiting times, disproportionately high transaction fees, and low throughput. Among many solutions, the \textit{payment channel} idea arose as a well-accepted one for solving the mentioned problems. The idea is based on establishing off-chain links between parties so that many of the transactions are not written to the blockchain each time. The payment channel idea later evolved towards the establishment of \textit{payment channel networks} (PCNs), where, among many participants and channels, the participants pay by using others as relays, essentially forming a connected network. This is in essence a \textit{Layer-2} network application running on top of a cryptocurrency, which covers \textit{Layer-1} services.
A perfect example of a PCN is the Lightning Network (LN) \cite{poon2016bitcoin}, which uses Bitcoin and reached many users in a very short amount of time. Raiden \cite{Raiden}, based on Ethereum, is another example of a successful PCN. The emergence of PCNs led to several research challenges. In particular, the security of the off-chain payments is very important, as users can lose money or liability can be denied. In addition, the efficiency of payment routing within a PCN with a large number of users needs to be tackled. Such efforts paved the way for introducing many new PCNs in addition to LN. These PCNs rely on various cryptocurrencies and carry several new features. As these newly proposed PCNs become more prominent, there will be heavy user and business involvement, which will raise issues regarding their privacy, just as with user privacy on the Internet. The difference is that, in many cases, Internet privacy can be regulated, but this will not be the case for PCNs, as their very idea is based on decentralization. For instance, a user will naturally want to stay anonymous to the rest of the network, while a business would like to keep its revenue private against its competitors. Therefore, in this paper, we investigate this emerging issue and provide an analysis of current PCNs along with their privacy implications. We first categorize the PCNs in the light of common network architectures and blockchain types. We then define user and business privacy within the context of PCNs and discuss possible attacks on the privacy of the participants. Specifically, we identify novel privacy risks specific to PCNs. Utilizing these attack scenarios, we then thoroughly survey and evaluate the existing PCNs in terms of their privacy capabilities based on certain metrics. This is a novel qualitative evaluation that enables a comparison of what each PCN offers in terms of its privacy features.
Finally, we offer potential future research issues that can be further investigated in the context of PCN privacy. Our work is not only the first to increase awareness regarding the privacy issues in the emerging realm of PCNs, but it will also help practitioners select the best PCN for their needs. The paper is organized as follows: Section \ref{sec:background} gives an introductory background. Next, Section \ref{sec:PCNCategorization} categorizes the PCNs in the light of common network architectures and blockchain types. In Section \ref{sec:PrivacyMetrics}, we define user and business privacy, discuss possible attacks on the privacy of the participants in PCNs, and present an evaluation of state-of-the-art solutions regarding what they offer in terms of privacy. Section \ref{sec:FutureResearch} offers directions for future research on privacy in PCNs, and Section \ref{sec:conclusion} concludes the paper. \section{Background} \label{sec:background} \subsection{Blockchain} \label{sec:Blockchain} Blockchain is the underlying technology of cryptocurrencies; it brings a new kind of distributed database: a public, transparent, persistent, and append-only ledger co-hosted by the participants. With various cryptographically verifiable methods, called \textit{Proof-of-X} (PoX), each participant in the network holds the power of moderation of the blockchain \cite{bano2019sok}. As an example, Bitcoin and Ethereum, which jointly hold 75\% of the total market capitalization in the cryptocurrency world, utilize the \textit{proof-of-work} (PoW) mechanism, where a participant has to find a block hash value smaller than a jointly agreed number. A block is an element of limited size that stores the transaction information. Each block holds the hash of the preceding block, which in the long run forms a chain of blocks called the blockchain. ``Who-owns-what'' information is embedded in the blockchain as transaction information.
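The hash-below-target principle of PoW can be sketched as a toy nonce search (Python; this illustrates only the core idea, not Bitcoin's actual block format, header fields, or difficulty encoding):

```python
import hashlib

def mine(block_data: bytes, target: int):
    """Toy proof-of-work: find a nonce such that the SHA-256 block hash,
    interpreted as an integer, is smaller than the jointly agreed target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

# Easy target (only the first 8 bits of the hash must be zero);
# real networks use far smaller targets, i.e. far more work.
target = 1 << 248
nonce, block_hash = mine(b"prev_hash|transactions", target)
print(nonce, block_hash)
```

Lowering the target exponentially increases the expected number of nonce trials, which is how the network tunes the block generation rate.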
Therefore, the cohort of independent participants turns blockchain into a liberated data/asset management technology free of trusted third parties. \subsection{Cryptocurrency} Although blockchain finds many application areas, its most commonly used application is cryptocurrencies. A \textit{cryptocurrency} is a cryptographically secure and verifiable currency that can be used to purchase goods and services. In this paper, we will use cryptocurrency and money interchangeably. Blockchain technology undoubtedly changed the way data can be transferred, stored, and represented. Nonetheless, reaching a consensus on the final state of a distributed ledger has drawbacks. The first drawback is long transaction confirmation times. For example, in Bitcoin, a block is generated about every 10 minutes. As a heuristic, Bitcoin users wait 6 blocks for the finality of a transaction, which yields almost 60 minutes. In Ethereum, the time between blocks is shorter, but users wait 30 consecutive blocks, which yields 10-15 minutes of waiting time. Note that, as a block is limited in size, not only is the throughput limited, but the total waiting time for the users will also be longer during times of congested transfer requests. Moreover, if a user is in a hurry for the approval of her/his transaction, she/he will need to pay larger fees to miners than what the competitors pay. This brings us to the second drawback of using blockchain for cryptocurrency. The miner nodes, which generate and approve blocks, collect fees from the users to include transactions in blocks. So when there is congestion, a payer either has to offer a higher fee or has to wait longer for a miner to pick her/his transaction request. \subsection{Smart Contracts} The ability to employ smart contracts is another feature that makes blockchain an unorthodox asset management technology.
Smart contracts are scripts or bytecodes that define how transactions will take place based on future events defined within the contract. Smart contracts can be utilized in conditional/unconditional peer-to-peer (P2P) transactions, voting, legal testaments, etc. As always, the duty of decision-making rests on the blockchain. Hence, the blockchain finalizes the transaction outputs when smart contracts are utilized, too. \section{PCNs and their Categorization} \label{sec:PCNCategorization} \subsection{Payment Channel Networks} Due to these scalability issues, researchers have long been in search of solutions to make cryptocurrencies scalable. Among the many offered solutions, the \textit{off-chain} payment channel idea has attracted the most interest. To establish such a channel, two parties agree on depositing some money in a multi-signature (2-of-2 multi-sig) wallet with the designated ownership of their shares. The multi-sig wallet is created by a smart contract that both parties sign. The smart contract, mediated by the blockchain, includes the participants' addresses, their shares in the wallet, and information on how the contract will be honored. Approval of the opening transaction in the blockchain initiates the channel. The idea is simple: the payer side gives ownership of some of his/her money to the other side by mutually updating the contract locally. To close the channel, the parties submit a ``closing transaction'' to the blockchain for it to honor the final state of the channel. Thus, each side receives its own share from the multi-sig wallet. Payment channels created among many parties make \textit{multi-hop payments} from a source to a destination through intermediary nodes possible. As shown in Fig. \ref{fig:PCN}, Alice-Charlie (A-C) and Charlie-Bob (C-B) have channels. Suppose A-C and C-B are initialized when \texttt{time} is \texttt{t}. Although Alice does not have a direct channel to Bob, she can still pay Bob via Charlie.
At \texttt{time} \texttt{t+x1}, Alice initiates a transfer of 10 units to Bob. The money is destined for Bob via Charlie. When Charlie honors this transaction in the C-B channel by giving 10 units to Bob, Alice gives 10 units of her share to Charlie in the A-C channel. When the transfers are over, the A-C and C-B channels get updated. When \texttt{time} is \texttt{t+x2}, Alice makes another transaction (20 units) to Bob, and the shares in the channels get updated once again. \begin{figure}[htb] \centering \includegraphics[width=.85\linewidth]{figs/PaymentScheme-TimeElapse.png} \caption{A Simple Multi-hop Payment. Alice can initiate a transfer to Bob utilizing channels between Alice-Charlie and Charlie-Bob.} \label{fig:PCN} \end{figure} The multi-hop payment concept enables the establishment of a network of payment channels among users, referred to as a PCN, as shown in Fig. \ref{fig:PCN-model}. Current PCNs vary in terms of the topologies they depend on and the layer-1 blockchain technology they utilize. We discuss this categorization next and then explain each of these PCNs in more detail in Section \ref{sec:PrivacyMetrics}.
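The balance bookkeeping of this example can be sketched as follows. The initial channel balances are assumed for illustration, and real PCNs make the hop-by-hop updates atomic with HTLCs rather than this naive loop:

```python
# Each side's share of a 2-of-2 wallet, keyed by (owner, counterparty).
# Initial balances are assumed values for the Alice -> Charlie -> Bob example.
balances = {
    ("Alice", "Charlie"): 40, ("Charlie", "Alice"): 10,
    ("Charlie", "Bob"): 50, ("Bob", "Charlie"): 5,
}

def pay(path, amount):
    """Shift `amount` along each hop of `path`, checking every hop first."""
    hops = list(zip(path, path[1:]))
    if any(balances[(u, v)] < amount for u, v in hops):
        raise ValueError("insufficient balance on some hop")
    for u, v in hops:
        balances[(u, v)] -= amount  # payer's share shrinks
        balances[(v, u)] += amount  # payee's share grows

pay(["Alice", "Charlie", "Bob"], 10)  # transfer at time t+x1
pay(["Alice", "Charlie", "Bob"], 20)  # transfer at time t+x2
# The channel totals never change; only the ownership split moves.
assert balances[("Alice", "Charlie")] == 10 and balances[("Bob", "Charlie")] == 35
```

Note that only the two local channel states change; nothing touches the blockchain until one of the channels is closed.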
\begin{figure}[htb] \centering \includegraphics[width=.85\linewidth]{figs/PaymentScheme-payment_network.png} \caption{ A PCN of end-users and relays acting as backbone.} \label{fig:PCN-model} \end{figure} \subsection{PCN Architectures} In this section, we categorize the types of \textit{network architectures} that can be used in PCNs. \subsubsection{Centralized Architecture} In this type of network, there is a central node, and users communicate with each other either over that central node or based on the rules received from the central node, as shown in Fig. \ref{fig:networkTypes}(a). From a governance point of view, if a single organization or company can solely decide on the connections, capacity changes, and flows in the network, then the architecture is considered centralized. \subsubsection{Distributed Architecture} In distributed networks, there is no central node.
As opposed to a centralized network, each user has the same connectivity, right to connect, and voice in the network. A sample architecture is shown in Fig. \ref{fig:networkTypes}(b). \subsubsection{Decentralized Architecture} This type of architecture is a combination of the previous two types, as shown in Fig. \ref{fig:networkTypes}(c). In this architecture, there is no single central node, but there are independent central nodes. When the child nodes are removed, the central nodes' connections look very much like a distributed architecture. However, when the view is concentrated around one of the central nodes, a centralized architecture is observed. \subsubsection{Federated Architecture} A federated architecture resembles a federation of states in the real world and arguably lies somewhere between centralized and decentralized networks. In a federated architecture, there are many central nodes that are connected to each other in a P2P fashion. The remaining (child) nodes communicate with each other strictly over these central nodes, making the whole effectively a federation of centralized architectures. \begin{figure}[htb] \centering \includegraphics[width=.85\linewidth]{figs/AllLayers-NetworkTypes.png} \caption{Network Types} \label{fig:networkTypes} \end{figure} \subsection{Types of Blockchain Networks} In this section, we categorize the existing PCNs based on the blockchain type they employ. There are mainly three types of blockchains employed by PCNs: \subsubsection{Public Blockchain} In a public blockchain, no binding contract or registration is needed to be a part of the network. Users can join or leave the network whenever they want. Consequently, such a PCN is open to anyone who would like to use it. \subsubsection{Permissioned Blockchain} A permissioned (i.e., private) blockchain lies at the opposite end of the spectrum, where the ledger is managed by a single company/organization.
Moreover, the roles of the nodes within the network are assigned by the central authority. Not everybody can participate in or access the resources of a permissioned blockchain. PCNs employing a permissioned blockchain are thus ``members-only''. \subsubsection{Consortium Blockchain} In contrast to a permissioned blockchain, a consortium blockchain is governed by more than one organization. From the centralization point of view, this approach seems more liberal, but its governance model still slides it toward the permissioned side. PCNs utilizing a consortium blockchain are similar to permissioned ones in terms of membership, but in this case members are approved by the consortium. \section{Privacy Issues in PCNs: Metrics and Evaluation } \label{sec:PrivacyMetrics} As PCNs started to emerge within the last few years, a lot of research has been devoted to making them efficient, robust, scalable, and secure. However, as some of these PCNs started to be deployed, they reached a large number of users (e.g., LN has over 10K users), which is expected to grow further. Such growth brings several privacy issues that are specific to PCNs. We argue that there is a need to identify and understand privacy risks in PCNs from both the users' and businesses' perspectives. Therefore, in this section, we first define these privacy metrics and explain possible privacy attacks in PCNs. We then summarize the existing PCNs and evaluate their privacy capabilities with respect to these metrics for the first time. \subsection{Privacy in PCNs} In its simplest form, \textit{data privacy} or \textit{information privacy} can be defined as the process that governs how storage, access, and disclosure of data take place. The PCN, in our case, needs to provide services ensuring that users' data will not be exposed without their authorization. However, user data travels through many other users within the PCN.
To address these issues, some PCN works aim to hide the sender ($u_s$) or receiver ($u_r$) identity (i.e., \textit{anonymity}), whereas others concentrate on strengthening the \textit{relationship anonymity} between sender and recipient. \subsection{Attack Model and Assumptions} There are two types of attackers considered in this paper. The first is an \textit{honest-but-curious} (HBC) attacker, who acts honestly while running the protocols but still collects information passively during operations. The second attacker of interest is the \textit{malicious} attacker, who controls more than one node in the network to deviate from the protocols. These attacker types and how they can be situated in the network are shown in Fig. \ref{fig:AttackersConnected} as follows: \yuvallak{1} The attacker is on the path of a payment. \yuvallak{2} The attacker is not on the path of a particular payment but can partially observe the changes in the network. \yuvallak{3} The attacker colludes with other nodes, for example, to perform packet timing analysis with sophisticated methods. \begin{figure}[htb] \centering \includegraphics[width=.6\linewidth]{figs/PaymentScheme-EveScenario.png} \caption{Attackers can appear in the network in different places. } \label{fig:AttackersConnected} \end{figure} Based on these assumptions, we consider the following potential attacks for compromising privacy in PCNs: \begin{itemize} \item \textbf{Attacks on Sender/Recipient Anonymity}: Sender/Recipient anonymity requires that the identity of the sender/recipient ($u_s$/$u_r$) should not be known to others during a payment. This protects the privacy of the sender/recipient so that nobody can track their shopping habits.
There may be cases where an adversary can successfully guess the identity of the sender/recipient as follows: For case \yuvallak{1}, the sender may have a single connection to the network whose next node is the attacker; hence, the attacker is sure that $u_s$ is the sender. For case \yuvallak{2}, the attacker may guess the sender/recipient by probing the changes in the channel balances. For case \yuvallak{3}, the attacker will learn the sender/recipient if it can carry out a payment timing analysis within the partial network formed by the colluding nodes. \item \textbf{Attack on Channel Balance Privacy.} To keep the investment power of a user/business private, the channel capacities should be kept private in PCNs. The investment amount in a channel would give hints about the financial situation of a user or his/her shopping preferences. Moreover, if the capacity changes in the channels are known, tracing them causes indirect privacy leakages about the senders/recipients. For instance, an attacker can initiate fake transaction requests and, after gathering responses from intermediary nodes, learn the channel capacities. \item \textbf{Relationship Anonymity.} In some cases, the identities of $u_s$ or $u_r$ may be known. This is particularly the case for retailers because they have to advertise their identities to receive payments. However, if an attacker can relate the payer to the payee, not only the spending habits of the sender but also the business model of the recipient will be learned. In such cases, the privacy of the trade can be preserved by hiding the relationship between the sender and recipient. Specifically, who-pays-to-whom information should be kept private. \item \textbf{Business Volume Privacy.} For a retailer, publicly disclosed revenue would reveal the trade secrets of its business, which must be protected by the PCN. In that sense, the privacy of every payment is important.
Such payment privacy can be attacked as follows: In a scenario where two or more nodes collude, the amount of a transaction can be learned by the attacker. In another scenario, if the recipient is connected to the network via a single channel through the attacker, the attacker can track all of the flows toward the recipient. \end{itemize} \subsection{State-of-the-art PCNs and their Privacy Evaluation} In this section, we briefly describe current studies that either present a complete PCN or propose revisions to existing ones, and then analyze their privacy capabilities based on our threat model. We provide a summary of the assessment of the current PCNs' categorizations and privacy features in Table \ref{tab:comparative_table}. \begin{table*} \centering \caption{Qualitative Evaluation of Privacy Features of Existing PCNs.} \label{tab:comparative_table} \begin{tabular}{|Z{14em}|Z{6em}|Z{3.5em}|Z{4em}|Z{4em}|Z{3.5em}|Z{4em}|Z{3.5em}|} \hline ~ & \rot{\pbox{5em}{Network Type}} & \rot{\pbox{5em}{Blockchain Type}} & \rot{\pbox{5em}{Sender Anonymity}} & \rot{\pbox{5em}{Recipient Anonymity}} & \rot{\pbox{5em}{Channel Balance Privacy}} & \rot{\pbox{5em}{Relationship Anonymity}} & \rot{\pbox{5em}{Business Volume Privacy}} \\ \hline \hline Lightning Network (HTLC) \cite{poon2016bitcoin} & Decentralized/ Distributed & Public & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} \\ \hline Raiden Network \cite{Raiden} & Decentralized/ Distributed & All & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} \\ \hline Spider \cite{sivaraman2018high} & Decentralized/ Centralized & All & \includegraphics[width=.9em]{figs/halfc.pdf}
& \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/emptyc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} \\ \hline SilentWhispers \cite{malavolta2017silentwhispers} & Decentralized/ Centralized & All & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} \\ \hline SpeedyMurmurs \cite{roos2017settling} & Decentralized/ Centralized & Public & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} \\ \hline PrivPay \cite{moreno2015privacy} & Decentralized/ Centralized & Permissioned & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} \\ \hline Bolt \cite{green2017bolt} & Centralized & Public & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} \\ \hline Erdin et al. 
\cite{erdin2020bitcoin} & Distributed/ Federated & All & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} \\ \hline Anonymous Multi-Hop Locks (AMHL) \cite{malavolta2019anonymous} & Decentralized/ Distributed & Public & \includegraphics[width=.9em]{figs/emptyc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/halfc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} & \includegraphics[width=.9em]{figs/fullc.pdf} \\ \hline \end{tabular}{} \\ \flushleft \hspace{10em}\includegraphics[width=.9em]{figs/halfc.pdf} : Partially satisfies OR cannot defend against all mentioned attacks. \\ \hspace{10em}\includegraphics[width=.9em]{figs/fullc.pdf} : Fully satisfies. \\ \hspace{10em}\includegraphics[width=.9em]{figs/emptyc.pdf}: Does not satisfy. \end{table*} \noindent \textbf{Lightning Network (LN)}: LN \cite{poon2016bitcoin} is the first deployed PCN, which utilizes Bitcoin. It started in 2017 and, as of June 2020, serves more than 12,000 nodes and 36,000 channels. Nodes in LN utilize \textit{``Hashed Time-Locked Contracts''} (HTLC) for multi-hop transfers. The directional capacities in the payment channels are not advertised, but the total capacity of a channel is known so that a sender can calculate a path. This provides partial channel balance privacy. The sender encrypts the path using the public keys of the intermediary nodes by utilizing \textit{``onion-routing''}, so that the intermediary nodes only know the addresses of the preceding and following nodes. None of the intermediary nodes can guess the origin or the destination of the message by looking at the network packet. \noindent \textbf{Raiden Network}: Shortly after LN, the Ethereum foundation announced the Raiden Network \cite{Raiden}.
Raiden is the equivalent of LN designed for transferring Ethereum ERC20 tokens and provides the same privacy features. Although Ethereum is the second largest cryptocurrency, that popularity is not reflected well in the Raiden Network. As of June 2020, Raiden serves 25 nodes and 54 channels. The advantage of Raiden over LN is that, thanks to tokenization, users can generate their own tokens to create a more flexible trading environment. \noindent \textbf{Spider Network}: The Spider network \cite{sivaraman2018routing} is a PCN that proposes applying the packet-switching-based routing idea seen in traditional networks (e.g., TCP/IP). However, it is known that in packet switching the source and the destination of the message have to be embedded in the network packet. The payment is split into many micro-payments so that the channel depletion problem is eliminated. The authors also aimed at achieving better-balanced channels. In this PCN, there are \textit{spider routers} with special functionalities that communicate with each other and know the capacities of the channels in the network. The sender sends the payment to a router. When the packet arrives at a router, it is queued up until the funds on candidate paths are sufficient to resume the transaction. The authors do not address privacy and plan to utilize onion-routing as future work. The micro-payments might follow separate paths, which would help keep the business volume private if the recipients were kept private. Additionally, the hijacking of a router would let an attacker learn everything in the network. \noindent \textbf{SilentWhispers}: SilentWhispers \cite{malavolta2017silentwhispers} utilizes landmark routing, where landmarks are at the center of the payments. In their attack model, either the attacker is not on the payment path or a landmark is HBC. Here, landmarks know the topology, but they do not know all of the channel balances.
When a sender wants to send money to a recipient, she/he communicates her/his intent to the landmarks. The landmarks then communicate with candidate nodes on the ``sender-to-landmark'' and ``landmark-to-recipient'' segments to form a payment path. Each node in the path discloses to the landmarks whether its channel balance is sufficient for the requested transfer amount. The landmarks then decide on the feasibility of the transaction by performing multi-party computation. In SilentWhispers, the sender and the receiver are kept private, but the landmarks know the sender-recipient pair. The payment amount is also private from the nodes that do not take part in the transaction. Moreover, the balances of the channels within the network are kept private. Although centralization is possible, the approach is decentralized and the landmarks are trusted parties. \noindent \textbf{SpeedyMurmurs}: SpeedyMurmurs \cite{roos2017settling} is a routing protocol, specifically an improvement for LN. In SpeedyMurmurs, there are well-known landmarks, as in SilentWhispers. The difference in this approach is that the nodes on a candidate path exchange their neighbors' information anonymously. So if a node is aware of a path closer to the recipient, it forwards the payment in that direction, called a ``shortcut path''. In a shortcut path, an intermediary node does not necessarily know the recipient but knows a neighbor close to the recipient. SpeedyMurmurs hides the identities of the sender and the recipient by generating anonymous addresses for them. Intermediary nodes also hide the identities of their neighbors by generating anonymous addresses. Although it may be complex, applying de-anonymization attacks on the network would effectively turn it into SilentWhispers: while the algorithm is a decentralized approach, an unfair role distribution may turn it into a centralized one. \noindent \textbf{PrivPay}: PrivPay \cite{moreno2015privacy} is a hardware-oriented version of SilentWhispers.
The calculations in the landmark are done in tamper-proof trusted hardware. Hence, the security and privacy of the network are directly related to the soundness of the trusted hardware, which may also bring centralization. In PrivPay, sender privacy is not considered. Receiver privacy and business volume privacy are achieved by misinformation: when an attacker constantly tries to query data from other nodes, the framework starts to produce probabilistic results. \noindent \textbf{Bolt:} Bolt \cite{green2017bolt} is a hub-based payment system. That is, there is only one intermediary node between sender and recipient. Bolt assumes \textit{zero-knowledge proof} based cryptocurrencies. It does not satisfy privacy in multi-hop payments; however, it satisfies very strong relationship anonymity if the intermediary node is honest. On the other hand, being dependent on a single node makes this approach a centralized one. \noindent \textbf{Permissioned Bitcoin PCN}: In PCNs, if the network topology is not ideal (e.g., a star topology), some of the nodes may learn about the users and payments. To this end, the authors in \cite{erdin2020bitcoin} propose a new topological design for a permissioned PCN such that channel depletion can be prevented. They present a real use case where a consortium of merchants creates a full P2P topology, and customers connect to this PCN through the merchants, which undertake the financial load of the network in order to earn money. The privacy of the users in the PCN is satisfied by LN-like mechanisms. The authors also investigate how the initial channel balances change, while the sender/receiver privacy and the relationship anonymity can be satisfied by enforcing at least 3 hops in a multi-hop payment. \noindent \textbf{Anonymous Multi-Hop Locks (AMHL)}: In the AMHL proposal \cite{malavolta2019anonymous}, the authors offer a new HTLC mechanism for PCNs.
On a payment path, the sender agrees to pay a service fee to each of the intermediaries for their service. However, if two of these intermediaries maliciously collude, they can eliminate honest users on the path and consequently steal their fees. To solve this, the authors introduce another communication phase in which the sender distributes a one-time key to the intermediary nodes. Although the HTLC mechanism is improved for the security of the users, the sender's privacy is not protected: each of the intermediaries learns the sender. However, relationship anonymity can still be satisfied. \section{Future Research Issues in PCNs} \label{sec:FutureResearch} Privacy in PCNs is an understudied topic, and there are many open issues that need to be addressed in future research. In this section, we summarize these issues: \noindent \textbf{Abuse of the PCN protocols.} Most PCNs rely on public cryptocurrencies whose protocol implementations are open. This freedom can be abused: by changing some parameters and algorithms in the design, an attacker can behave differently than expected. This can bring privacy leakages and censorship into the network. A topological reordering of the network can help solve this problem; if a sender gets suspicious about an intermediary node, it can look for alternatives instead of using that node. \noindent \textbf{Discovery of Colluding Nodes.} When nodes collude in a PCN, they can extract more information about the users. To prevent this, the protocols should be enriched to discover the colluding nodes, or the colluding nodes can be confused by adding redundancy to the protocols. \noindent \textbf{Policy Development.} Cryptocurrencies and the PCN idea are still in the early phases of their lives. Hence, policies and regulations for both the security and the privacy of the participants are highly needed in this domain.
This will also create a quantitative metric for researchers to measure the success of their proposals. \noindent \textbf{Impact of Scalability on Privacy.} One of the aims of introducing PCNs was making cryptocurrencies more scalable. For example, LN advises following the Barabasi-Albert scale-free network model while establishing new connections \cite{martinazzi2020evolving}. Thus, the final state of the network can impose centralization, which will have adverse effects on the privacy of the nodes in the network. \noindent \textbf{Integration of IoT with PCNs.} The use of IoT devices for payments is inevitable. Aside from the fact that most IoT devices are not powerful enough to run a full node, the security and privacy of the payments and the device identities within the IoT ecosystem need to be studied. These devices are anticipated to participate in the network through gateways. Revelation of device ownership would expose the real identity of the users to the public, which is a big threat to privacy. \noindent \textbf{Privacy in Permissioned PCNs.} While establishing a network of merchants in permissioned PCNs, the merchants have to disclose at least their expected trade volumes in order to establish a dependable network. This will, however, reveal the trade secrets of the merchants. To prevent this, zero-knowledge-proof-based multi-party computation can be explored. \section{Conclusion} \label{sec:conclusion} PCNs are a promising solution for making cryptocurrency-based payments scalable. The idea aims at fixing two major shortcomings of cryptocurrencies: long confirmation times and high transaction fees. There are many studies on the design of payment channels and PCNs to make the transfers secure and efficient. However, these studies do not mention the possible privacy leakages of their methods in case of wide adoption of the proposed ideas.
In this paper, we first categorized PCNs based on the type of blockchain being used and the topological behavior of the network. After clearly defining possible privacy leakages in a PCN, we compared and contrasted the state-of-the-art PCN approaches from the privacy point of view. \bibliographystyle{unsrt} \subsubsection{System Model} A distributed PCN is modeled as a weighted directed graph $G = (V, E)$, where $V$ represents the set of nodes and $E$ represents the set of links. A link can be a unidirectional channel, from transferor to transferee, as well as one direction of a bidirectional channel. Each link holds numerous attributes. First, each link $e \in E$ has a channel capacity, $c_e$, indicating the cumulative amount of value deposited into the payment channel by both parties. Second, each link $e \in E$ also possesses a current \textit{balance} denoted by $b_e$. For a unidirectional channel, expressed by a one-way link, the capacity $c_e$ is the maximum amount of payment that the sender can send to the receiver before the channel deadline, and the balance $b_e$ is the remaining amount of value that the sender can send. For a bidirectional channel, where two collateral links lie in opposite directions between two users, the two links possess the same capacity, equal to the sum of their balances, expressed as $b_{(u,v)} + b_{(v,u)} = c_{(u,v)} = c_{(v,u)}$. A link $e$ always satisfies $b_e \le c_e$. In general, we consider the set $E$ to include links with positive balances at any given time. If any link has a zero balance, it is temporarily removed from the network graph. Whenever a new deposit occurs in the opposite direction, the balance gets recharged, and the link becomes online again. A payment through a channel follows several steps to complete. First, each channel has to process arriving payments sequentially.
Second, it needs the forward processing time of the current hop; this time is necessary for the two parties to agree on the balance update. Third, due to the time lock in HTLC, whenever the channel receives the acknowledgment from the next hop, it releases the locked transferred value, which also requires a specified amount of waiting and backward processing time. For simplification, the transmission delay between parties is ignored, considering that it is merged into the forward and backward processing times. In the model, $d^1_e$ is used to indicate the forward wait time along with the forward processing time of a link $e$, and $d^2_e$ expresses the sum of the backward wait time and the backward processing time. Thus, the total time is $d_e = d^1_e + d^2_e$. Each link has an expiration time denoted by $\eta_e$, expressing the time when the particular channel becomes unavailable. A payment via $e$ is successful only if it settles before the channel expires. The proposed model assumes that every user has only local knowledge of all its incoming and outgoing links. This knowledge includes their capacities, expiration times, balances, and delays. The sender and the receiver share their addresses with each other; however, their locations in the network remain confidential. In general, every node may have an approximate estimate of the total network status based on locally collected information. However, due to network asynchrony and dynamics, the instantaneous balances and delays of remote nodes remain unknown. \subsubsection{Payment Model} In CoinExpress, any payment request is expressed by a quintuple $R = (s, t, a, st, dl)$, where $s$ is the sender, $t$ is the recipient, $a$ is the amount to be transferred, and $st$ and $dl$ are the start time and deadline of the payment, respectively. Let $\mathcal{P}$ be the set of all paths in the PCN.
A payment request $R$ is served by a payment plan consisting of a set of paths $P_R \subseteq \mathcal{P}$, where each $p \in P_R$ is an $(s,t)$-path in the network. Each path $p \in P_R$ is associated with a payment amount, indicated by $v_p$. For a payment plan $P_R$ to be successful, it needs to satisfy the following requirements: \begin{itemize} \item $P_R$ is \textit{feasible} if and only if for any link $e \in E$, where $P_R(e) \subseteq P_R$ is the set of paths through $e$, we have $$ \sum_{p \in P_{R}(e)} v_{p} \leq b_{e} $$ \item $P_R$ is \textit{available} if and only if for any $p \in P_R$ and $e \in p$, letting $p^{+}_{e} \subseteq p$ be the links including and after $e$ in $p$, we have $$ st+\sum_{e' \in p} d_{e'}^{1}+\sum_{e' \in p_{e}^{+}} d_{e'}^{2} \leq \eta_{e} $$ \item $P_R$ is \textit{timely} if and only if for any path $p \in P_R$, we have $$ st+\sum_{e \in p} d_{e} \leq dl $$ \item $P_R$ is \textit{fulfilling} if and only if $$ \sum_{p \in P_{R}} v_{p} \geq a $$ \end{itemize} In light of the conditions above, if a payment plan is \textit{feasible, available, timely, and fulfilling}, it can transfer the amount $a$ from sender $s$ to recipient $t$ within the deadline $dl$. Only then is it called a realizing payment plan for request $R$. The model aims at obtaining a realizing payment plan for each payment request while fulfilling as many goals as possible. The algorithm achieves much lower overhead with excellent acceptance and goodput performance compared to other algorithms that consider user timeliness constraints. \subsubsection{Model} Consider the network as an undirected graph $G = (V, E)$, where $V$ represents the set of nodes and $E \subseteq V^2$ indicates the set of opened channels.
For each channel $e \in E$, $Cap(e)$ represents the total capacity of the channel. The authors consider a binary symmetric function $Cap: V^2 \rightarrow [0, +\infty)$, where the capacity is 0 for any non-existent channel: $$\forall v_1, v_2 \in V~~ (v_1, v_2) \notin E \Leftrightarrow Cap(v_1, v_2) = 0$$ The (hop) distance between two nodes is the minimum number of LN channels connecting them: $$d_{node}(x, y) := \min\{n \in \mathrm{N} : \exists v_1, \ldots, v_{n+1} \in V,$$$$~~~x = v_1,\ y = v_{n+1},\ \forall i \in \{1, \ldots, n\}~~ (v_i, v_{i+1}) \in E\}$$ Similarly, the (hop) distance between a node $x \in V$ and a channel $e = (y, z) \in E$ is the minimum distance between the node and the channel's endpoints: $$d_{chan}(x, (y, z)) \equiv d_{chan}(x, e) := \min\{d_{node} (x, y), d_{node} (x, z)\}$$ The adjacent nodes $Adj(v)$ of a node $v \in V$ are the nodes at distance 1: $$Adj (v) = \{z \in V : d_{node} (v, z) = 1\}$$ In other words, the adjacent nodes are those with which $v$ has an opened payment channel. $Nodes(T)$ is the set of nodes connected by edges from an edge set $T \subseteq E$: $$Nodes (T ) := \{ A : (A, \cdot) \in T \vee (\cdot, A) \in T\}$$ Node addresses are used to ensure convergence of the routing algorithm. A node address is a node identifier that is unique across the whole network, together with a binary distance function defined between all pairs of addresses. Each address is a 256-bit integer computed as the SHA-256 hash of the node's identity key. The address distance between two nodes is the unsigned integer given by the bit-wise XOR of their addresses. \subsubsection{Routing Table} To generate the routing table, Flare proposes a further algorithm: route discovery manages the node's routing table. The routing table of a specific node $v \in V$ contains a set of long-term (static) information about the LN topology: $$v.RT := \{(a_i, b_i) : (a_i, b_i) \in E\}$$ with $E$ the collection of payment channels.
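The address construction just described is easy to make concrete. In this sketch (the identity keys are placeholder byte strings), an address is the SHA-256 digest of the identity key interpreted as a 256-bit integer, and the address distance is the bit-wise XOR:

```python
import hashlib

def node_address(identity_key: bytes):
    # 256-bit address: SHA-256 of the node's identity key, as an unsigned integer.
    return int.from_bytes(hashlib.sha256(identity_key).digest(), "big")

def address_distance(a, b):
    # Address distance: bit-wise XOR of the two addresses (unsigned integer).
    return a ^ b

def closest_in_address_space(target, candidates):
    # The node nearest to `target` in address space is the one
    # minimizing the XOR distance.
    return min(candidates, key=lambda c: address_distance(target, c))
```

The XOR metric is symmetric and zero only between identical addresses, which is what makes "nearest in address space" well defined.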
To maintain its routing table, each node keeps the following components: \textit{Neighborhood map:} the channels between all of the node's neighbors (by hop distance) within the neighbor radius $r_{nb} \in \mathrm{N}$. This ensures that nodes separated by no more than $2r_{nb}$ hops will have at least one route to each other. \textit{Beacon paths:} the channels that form routes to the $N_{bc}$ beacon nodes adjacent to the node in the address space. Since node addresses are allocated randomly, each node thus holds routes to nodes randomly scattered around the network. \textit{Cached payment routes:} paths the node has used earlier are saved in a cache of up to $N_{cache}$ channels forming paths to nodes with which the node previously completed payments. This saves time and lets nodes reuse working paths in the reactive stage of the routing algorithm. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Images/Flare.png} \caption{Flare Algorithm} \label{fig:flare_algo} \end{center} \end{figure} \subsubsection{Neighborhood Discovery} The neighborhood information is updated via several messages, namely \textbf{NEIGHBOR\textunderscore HELLO}, \textbf{NEIGHBOR\textunderscore UPD}, \textbf{NEIGHBOR\textunderscore RST}, and \textbf{NEIGHBOR\textunderscore ACK}. \textbf{NEIGHBOR\textunderscore UPD} holds incremental adjustments to the node's routing table since the most recent update message.
The \textbf{NEIGHBOR\textunderscore RST} message is transmitted when update information has probably been dropped. \textbf{NEIGHBOR\textunderscore ACK} is sent in response to \textbf{NEIGHBOR\textunderscore UPD} or \textbf{NEIGHBOR\textunderscore HELLO} to acknowledge that the node has properly received and processed the message. \textit{Beacon Discovery} enables each node to look beyond its neighborhood and expand its knowledge of the network. The beacons of node $u$, expressed as $u.bc \in V^{N_{bc}}$, are the nearest nodes in the address space: $$\forall v \in u.bc,\ \forall z \in V \setminus (\{u\} \cup u.bc):\ \rho (v, u) < \rho(z,u)$$ Beacon discovery uses the \textbf{BEACON\textunderscore REQ}, \textbf{BEACON\textunderscore ACK}, and \textbf{BEACON\textunderscore SET} messages. A \textbf{BEACON\textunderscore REQ} message informs a node that it has been chosen as a beacon candidate. A \textbf{BEACON\textunderscore ACK} message is the response confirming that the node accepts being a candidate. A \textbf{BEACON\textunderscore SET} message informs a specific node that it has been selected as a beacon. \subsubsection{Route Selection} For route selection, the algorithm uses two further messages, \textbf{TABLE\textunderscore REQ} and \textbf{TABLE\textunderscore RESP}. \textbf{TABLE\textunderscore REQ} requests the routing table of the recipient, and \textbf{TABLE\textunderscore RESP} is the reply to \textbf{TABLE\textunderscore REQ} containing the complete routing table of the responding node. \subsubsection{Ranking Candidate Routes} As soon as the routing algorithm finds a new route, the route is ranked based on its attributes and dynamic characteristics. For efficiency, searching for new routes and ranking the found ones are done at the same time.
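Since searching and ranking run concurrently, route selection can stop early at the first sufficiently good candidate. A minimal sketch (the scoring function and threshold are hypothetical stand-ins for Flare's actual ranking):

```python
import math

def rank_route(route, score_fn):
    # Rank a candidate route; an unusable route is ranked -infinity.
    # Here, a scorer may signal "unusable" by raising ValueError (a
    # convention of this sketch, not of Flare itself).
    try:
        return score_fn(route)
    except ValueError:
        return -math.inf

def select_route(candidate_stream, score_fn, r_good):
    """Rank candidates as they are discovered; stop at the first route
    whose ranking reaches the threshold r_good, else keep the best seen."""
    best, best_rank = None, -math.inf
    for route in candidate_stream:      # routes arrive while the search runs
        rr = rank_route(route, score_fn)
        if rr >= r_good:                # good enough: cut the search short
            return route
        if rr > best_rank:
            best, best_rank = route, rr
    return best                         # fall back to the best-ranked route
```

The early return is what shortens the overall duration of the routing algorithm.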
Every time a new candidate route $p$ is found, the sender node starts an asynchronous ranking method, which calculates the route ranking $rr(p) \in [-\infty, +\infty)$. The higher the ranking, the better the route; a ranking of $-\infty$ indicates an unusable route. The authors propose a threshold $r_{good}$ for well-ranked routes: if $rr(p) \ge r_{good}$, the route is selected as the final route, which reduces the duration of the routing algorithm. \subsubsection{Architecture} Flash is a distributed online routing system. To achieve a better performance-overhead tradeoff, Flash separates elephant and mice payments and uses different routing algorithms for them. For elephant payments, Flash first uses a modified max-flow algorithm to find paths with sufficient balance to meet the demand, and then splits the payment across these paths so as to minimize transaction fees by solving an optimization program. For mice payments, which can be satisfied easily, Flash uses a lightweight solution: whenever possible, it routes them through a small set of precomputed paths chosen at random, to reduce probing overhead.\par Flash relies on two properties of offchain networks. One is the locally available topology: even without channel balance information, the topology of the offchain network is quite stable, changing on the scale of hours or days. This is because opening or closing a payment channel requires on-chain transactions, which take at least tens of minutes to confirm, while a channel usually remains in the network after it is established. The other is atomic multi-path payment: Flash uses multi-path routing as much as possible and, similar to previous work, assumes that the atomicity of multi-path payments is guaranteed, which also improves network utilization. Mechanisms such as the Atomic Multi-path Payment (AMP) proposed for Lightning can achieve this.
Based on HTLC, AMP allows a payment to be split over multiple paths while ensuring that the recipient either receives all funds from the partial payments or receives nothing.\par \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Images/flash_elephant_algorithm.png} \caption{Modified Edmonds-Karp for elephant payment routing} \label{fig:flash_algorithm} \end{center} \end{figure} \subsubsection{Model} Flash has two components: a modified max-flow algorithm based on Edmonds-Karp used for elephant payments, and a lightweight solution for mice payments.\par Every node knows the network topology but has no channel capacity information. The sender $s$ uses the algorithm shown in Fig.~\ref{fig:flash_algorithm} to route an elephant payment. The probed capacities of channels are recorded in a capacity matrix $C$, and a matrix $C'$ records the unused capacities of channels, as in the Edmonds-Karp algorithm. As shown in rows 4 and 5 of the algorithm, $C$ and $C'$ are initialized to infinity. The loop aims to find $k$ paths. In row 7, Flash uses breadth-first search (BFS) to find a shortest path $p$ that still has available capacity and adds it to the path set $P$. The next step is to send probes along $p$ to learn the capacity of every channel on it and obtain its bottleneck capacity $c$; row 14 then sends $c$ units along path $p$. Based on the probing results $C_p$, the detected channel capacities and the remaining capacities are updated. When the loop finishes, it returns the path set $P$ and the capacity matrix $C$, under the condition that the max-flow $f$ over the found paths satisfies the payment demand $d$. The next step is path selection for elephant payments.
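A condensed sketch of this probing loop (the graph layout and the \texttt{probe} callback are hypothetical; the real system probes channels over the network rather than reading capacities from a local table):

```python
from collections import deque

def bfs_path(graph, resid, s, t):
    # Shortest s-t path (in hops) using only edges with positive residual capacity.
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in graph.get(u, []):
            if v not in parent and resid.get((u, v), 0) > 0:
                parent[v] = u
                q.append(v)
    if t not in parent:
        return None
    path, v = [], t
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return path[::-1]

def find_paths(graph, probe, s, t, demand, k):
    """Modified Edmonds-Karp sketch: capacities start as 'unknown' (infinity),
    are probed lazily along each BFS path, and the loop stops after k paths
    or once the probed flow covers the demand."""
    INF = float("inf")
    resid = {(u, v): INF for u in graph for v in graph[u]}
    paths, flow = [], 0
    while flow < demand and len(paths) < k:
        p = bfs_path(graph, resid, s, t)
        if p is None:
            break
        for e in p:                       # probe real capacities along p
            resid[e] = min(resid[e], probe(e))
        bottleneck = min(resid[e] for e in p)
        if bottleneck > 0:
            paths.append((p, bottleneck))
            flow += bottleneck
            for e in p:                   # "send" bottleneck units along p
                resid[e] -= bottleneck
    return paths, flow
```

The key difference from textbook Edmonds-Karp is that edge capacities are discovered on demand through probing instead of being known up front.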
Another important step is to use the set of paths with sufficient capacity obtained by the algorithm to minimize the transaction fees, based on the information collected during probing. The optimization below formalizes this. The algorithm provides the set of paths $P$ and the capacity matrix $C$. Each channel $(u,v)$ has a charging function $f_{u,v}$; routing a partial payment $r_p$ through $(u,v)$ incurs a fee $f_{u,v}(r_p)$, where $f$ is convex. The goal is to minimize the total fee under the constraints of meeting the payment demand $d$ and respecting the channel capacities: \begin{itemize} \item $$ \min \sum_{p \in P} \sum_{(u,v)} a_{u,v}^{p}\, f_{u,v}(r_p) $$ \item subject to $$ \sum_{p \in P} r_p = d, $$ \item $$ \sum_{p \in P} r_p\, a_{u,v}^{p} - \sum_{p \in P} r_p\, a_{v,u}^{p} \leq C(u,v), \quad \forall (u,v), $$ \end{itemize} where $a_{u,v}^{p}$ indicates whether path $p$ traverses channel $(u,v)$ in that direction. \par For mice-payment path finding, every node keeps a routing table that holds paths to its receivers. If a new receiver is not in the routing table, the node computes the top-$m$ shortest paths on the local topology $G$ and inserts them into the table; otherwise, Flash reuses the existing paths. Mice payments do not need a large routing table: the number $m$ of shortest paths used here is much smaller than the $k$ used for elephant payments, and sometimes the performance is even better. The routing table is refreshed whenever the local network topology $G$ is updated.\par For mice-payment path selection, the sender runs a trial-and-error loop over the $m$ shortest paths in its routing table. If sending the full payment over a randomly chosen path $p$ succeeds, the transaction ends. Otherwise, the sender probes $p$ to learn its effective capacity $c_p$ and sends a partial payment of volume $c_p$ along $p$.
The next step is updating the residual payment demand and continuing the loop. This guarantees low probing overhead, because Flash probes only when necessary and over at most $m$ paths; the use of multiple paths also improves the payment success rate.\par \subsubsection{Network} The protocols are designed around the structure of different payment networks, which have both similarities and dissimilarities. As discussed in Section~\ref{CPN}, numerous cryptopayment networks are already deployed in the real world and others have been proposed. The protocols are classified based on their corresponding network. \subsubsection{Switching} A payment can be sent as a single transaction or split into several smaller transactions. By analogy with traditional networking, if the payment is sent as a single transaction over a single route, we call the payment type circuit switching. When the payment is instead divided into several smaller transactions, each sent over a different path, the approach is analogous to packet switching. \subsubsection{Routing Algorithm} Many algorithms are used in the payment network models. Some protocols introduce their own or modified algorithms, while others use well-established, efficient algorithms as the backbone of the model. \subsubsection{Path} Transactions can be carried out over a single optimal path determined from the routing table, or over several distinct paths connecting the same sender and receiver. \subsubsection{Type of delivery} Based on the amount sent, delivery can be of two types. In atomic payment, the sender either sends the full amount or nothing at all; no fractional payment is accepted. Non-atomic payment, in contrast, allows sending any amount up to the desired one, and the system accepts such fractional payments.
Some protocols support only atomic payments, while others support both. \subsubsection{Privacy} Privacy is one of the most important aspects of any routing protocol: the identities of the sender and receiver, as well as the payment amount, must be kept private. Most of the algorithms do not consider privacy as an integral part of the design; instead, they leave it to other algorithms or to future work.\par Table~\ref{tab:pa} compares the routing protocols along these aspects. Fig.~3 shows the classification tree of the cryptopayment network protocols: the first split is on the switching category and the second on whether the model considers privacy. \subsubsection{Architecture} For routing, the major work is finding paths within epochs. The routing information must be recalculated repeatedly to account for the dynamic nature of the credit network, since credit links between users are constantly created, updated, and deleted by transactions. Under the assumption that users are loosely synchronized, time is divided into known periods. At the beginning of each period, a BFS tree and anti-tree are created, and users rely on this routing information for the whole period.\par Regarding the credit on a path, the core technical challenge in credit network design is to compute the available credit on a specific path, which is necessary for executing transactions. A simple solution is to let each user on the path privately send its link value to the corresponding landmark, so that the landmark can compute the minimum value on the path and notify the intended receiver. However, it is easy to see that this approach does not guarantee privacy against an honest-but-curious landmark, because the landmark learns the credit associated with every link.\par In SilentWhispers, a local approach in which the credit on the path is computed step by step by every user on the path cannot solve the privacy problem either.
Such a protocol leaks all intermediate values, since every user sends its value to the next user on the path and receives a lower value from the previous one. The basic idea of the approach is therefore to design a secure multi-party computation (SMPC) protocol to calculate the available credit on the path. To make the construction efficient, landmarks play the role of calculators, each receiving a share of the credit of every link from sender to receiver. A landmark can then compute the credit of the whole path by evaluating a series of minimum functions, without needing to learn any intermediate results or, of course, the credit of any link.\par The path is authenticated using a chain of digital signatures. The signature chain ensures that all shares come from users on an actual path from sender to receiver. On a given path, every user uses a long-term key pair to sign the verification keys of its predecessor and successor; as a result, a complete signature chain becomes a valid proof of the existence of the path from sender to receiver. Combining long-term and fresh keys improves this construction. First, each user signs a fresh verification key with its long-term key to bind the two together. The long-term verification key is revealed only to the counterparty of the credit link; thus the counterparty can verify the relationship between the fresh verification key and the user, while this relationship remains hidden from other users in the credit network. Second, on any given path, each user signs the fresh verification keys of its predecessor and successor with its fresh signing key to create the signature chain. For signature checking, a judge determines the correctness of the value claimed for a link; since the long-term key is tied to the real user's identity, the user is held accountable for its behavior. \par \subsubsection{Model} The model uses an ideal functionality $F_{CN}$ to capture the intended behavior of the system in terms of functionality, security, and privacy.
As the ideal functionality of the credit network, $F_{CN}$ locally maintains static information about nodes, links, and their credits. $F_{CN}$ also records all changes to the credits between nodes caused by successful transactions, and $val^t_{u,u'}$ denotes the credit between $u$ and $u'$ at time $t$. The functionalities of $F_{CN}$ include $F_{ROUT}$, $F_{PAY}$, $F_{TEST}$, $F_{CHGLINK}$, $F_{TESTLINK}$, and $F_{ACC}$. $F_{CN}$ periodically updates the routing information of the nodes in the network ($F_{ROUT}$), using $F_{NET}$ as an asynchronous communication means. Nodes can contact the ideal functionality to perform transactions ($F_{PAY}$), test the available credit ($F_{TEST}$), update the credit on a link ($F_{CHGLINK}$), test the credit on a link ($F_{TESTLINK}$), or resolve disputes about the credit on certain links ($F_{ACC}$). \par $F_{ROUT}$: $F_{ROUT}$ receives from a landmark LM two tuples of the form $(u_1, \ldots, u_m)$, the sets of neighbors of LM in the tree and anti-tree, respectively. It performs a BFS over the links to build the tree and anti-tree rooted at the landmark $ID_{LM}$. For every user $u$ in the set, $F_{ROUT}$ sends a message $(sid, ID_{LM}, h, u_p)$ through $F_{SMT}$, where $h$ is the number of hops separating $u$ from $ID_{LM}$ and $u_p$ is the parent node on the path. If $u$ replies $(\perp, sid)$, $F_{ROUT}$ goes back to the previous user, while a reply $(u', sid)$ indicates the next user $u'$ to be added to the set. The algorithm ends when the set is empty.\par $F_{PAY}$: $F_{PAY}$ receives a tuple $(S_{dr}, R_{vr}, T_{xid}, ID_{LM})$ from a sender $S_{dr}$, where $R_{vr}$, $T_{xid}$, and $ID_{LM}$ denote the receiver, the transaction identifier, and the landmark, respectively. $F_{PAY}$ sends $(T_{xid}, ID_{LM}, u)$ through $F_{SMT}$, where $u$ is the preceding user in the chain. $F_{PAY}$ computes the set of tuples $P = \{(ID_{LM}, v_{LM})\}$, where $v_{LM}$ is the credit available on the path from $S_{dr}$ to $R_{vr}$ through LM ($path_{LM}$).
$F_{PAY}$ sends $(P, T_{xid})$ to $S_{dr}$ through $F_{SMT}$. $S_{dr}$ either sends $(\perp, T_{xid})$ to $F_{PAY}$ or sends a set of tuples $(ID_{LM}, x_{LM}, T_{xid})$ to $F_{PAY}$ through $F_{SMT}$. $F_{PAY}$ sends $(x_{LM}, ID_{LM}, T_{xid})$ through $F_{SMT}$ to inform all nodes in $path_{LM}$ of the value $x_{LM}$. $F_{PAY}$ then sends $R_{vr}$ the tuple $(S_{dr}, R_{vr}, v, T_{xid})$ through $F_{SMT}$, where $v$ is the total amount transferred to $R_{vr}$. $R_{vr}$ replies with $(\perp, T_{xid})$ or with $(success, T_{xid})$.\par $F_{ACC}$: Nodes $u_0$ and $u_1$ contact $F_{ACC}$ through $F_{SMT}$ by sending the tuples $(val_0, u_0, u_0')$ and $(val_1, u_1, u_1')$, respectively. $F_{ACC}$ checks whether $u_0' = u_1$ and $u_1' = u_0$. Then, letting $t$ be the current time, it iterates down to $t = 0$: $F_{ACC}$ queries $F_{CN}$ to retrieve $val^t_{(u_0,u_1)}$. If $val^t_{(u_0,u_1)} = val_0$, $F_{ACC}$ sends the tuple $(0, val_0, val_1)$ to $u_0$ and $u_1$ through $F_{SMT}$; if $val^t_{(u_0,u_1)} = val_1$, it sends the tuple $(1, val_0, val_1)$ to $u_0$ and $u_1$ through $F_{SMT}$; otherwise, $F_{ACC}$ sets $t = t - 1$. If no match is found, $F_{ACC}$ returns $(\perp, val_0, val_1)$. \subsubsection{Architecture} SpeedyMurmurs considers both the available funds and the closeness of a neighbor to the destination when routing a payment, resulting in an efficient algorithm with flexible path selection. It utilizes an on-demand, efficient, and stable algorithm that responds to link changes when necessary while keeping the corresponding overhead low. It improves the handling of concurrent transactions by allowing nodes to proactively and precisely allocate the funds needed for a transaction, rather than completely barring concurrent transactions from using links or risking failure in the subsequent payment phase.
SpeedyMurmurs performs transactions about twice as fast as SilentWhispers, reducing transaction communication overhead by at least a factor of two while maintaining similar or higher efficiency. It reduces the overhead of managing link changes by two to three orders of magnitude, except in rare periods when bursty data sets correspond to sudden rapid growth. SpeedyMurmurs implements value privacy, i.e., the value of a transaction is kept hidden, as well as sender and receiver privacy, i.e., the identities of the two users are hidden from the adversary. \subsubsection{Model} A distributed PBT network $(G, w)$ is a directed graph $G = (V, E)$ with a weight function $w$ on the set of edges. The set of nodes $V$ corresponds to the members of the PBT network, and $u$ can transfer funds to $v$ if there is a link from $u$ to $v$. The set of outgoing neighbors of a node $v$ is $N_{out}(v) = \{u \in V : (v,u) \in E\}$, and the set of incoming neighbors of $v$ is $N_{in}(v) = \{u \in V : (u, v) \in E \}$. A path $p$ is a sequence of links $e_1 \ldots e_k$ with $e_i = (v_i^1,v_i^2)$ and $v_i^2 = v^1_{i+1}$ for $1 \le i \le k-1$. $L = \{l_1,\ldots,l_{|L|}\}$ denotes a set of nodes, called landmarks, that are well known to the other users in the PBT network, and $|L|$ is the size of $L$.\par The function $w$ gives the total funds that can be transferred between two nodes sharing a link; its specific implementation is abstracted away. For example, in the Bitcoin Lightning Network, $w: E \rightarrow \mathrm{R}$ describes the bitcoins that $u$ can transfer to $v$ over an open payment channel between them. The funds available along a path $e_1,\ldots,e_k$ are $\min_i w(e_i)$. The net balance of a node $v$ is $cnode(v)= \sum_{u \in N_{in}(v)} w(u,v)- \sum_{u \in N_{out}(v)} w(v,u)$. \par A PBT network comprises a tuple of algorithms (\textit{setRoutes, setCred, routePay}): \par \textit{setRoutes(L)}: Given the set $L = \{l_1, \ldots
, l_{|L|}\}$ of landmarks, \textit{setRoutes} initializes the routing information required by each node in the PBT network.\par \textit{setCred(c,u,v):} Given a value $c$ and nodes $u$ and $v$, \textit{setCred} sets $w(u,v) = c$; it may also change the routing information initially generated by \textit{setRoutes}. \par $((p_1,c_1),\ldots,(p_{|L|}, c_{|L|})) \leftarrow \textit{routePay}(c,u,v)$: Given a value $c$ and nodes $u$ and $v$, \textit{routePay} returns a set of tuples $(p_i, c_i)$ indicating that $c_i$ funds are routed along the path defined by $p_i$.\par \subsubsection{Architecture} Being a packet-switched payment channel network, Spider hosts transfer payments through the system as series of transactions, whose units never exceed the maximum transaction unit (MTU). Individual transaction units pass through the Spider routers as funds over the payment channels. Each host runs a transport layer that provides standard facilities for applications to send and receive payments on the network. For every transaction, the application supplies the destination address, the amount to be sent, a deadline, and the maximum allowable routing fee. The transport layer supports both atomic and non-atomic payments. For atomic payments, Spider uses Atomic Multi-Path Payments (AMP): AMP breaks a payment into several transactions and sends them over different paths, deriving the keys for all payment transaction units from a single base key. This ensures that the receiver cannot unlock any payment before receiving all transaction units. \par As in the Lightning Network, the authors also adopt onion routing to protect the privacy of user payments for any transaction unit. Queuing and waiting of transaction units at Spider routers could introduce unexpected delays for some payments.
Nevertheless, the routers can offer distinct levels of service by scheduling transaction units according to the payment specifications; for instance, they can prioritize payments by volume, deadline, or routing cost. \subsubsection{Model} Let the payment channel network be modeled as a graph $G(V, E)$, where $V$ is the set of routers and $E$ is the set of payment channels. For any pair of source and destination routers $i,j \in V$, let $d_{i,j} \ge 0$ denote the average rate at which transaction units are transported from $i$ to $j$. Let $c_e$ be the total amount of funds in channel $e$, for $e \in E$, and let $\Delta$ be the average latency experienced by transaction units due to various network delays. Finally, let $P_{i,j}$ denote the set of routes from $i$ to $j$ in $G$, for $i,j \in V$; only `trails', i.e., routes without repeated edges, are included in $P_{i,j}$. In the fluid model, payments are treated as flows from source to destination routers sustaining the demands $d_{i,j}$. A flow is expressed as a tuple of a path and a value, where the path is the selected route and the value is the payment carried by the flow; the routing is computed so that the average rate of money carried along the path matches the value of the flow. In the fluid model, maximizing throughput therefore amounts to finding flows of maximum total value, which can be expressed as the Linear Program (LP) given in Equation~1. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Images/spider_algo.png} \label{fig:spider_algo} \end{center} \end{figure} In the model, $x_p$ denotes the flow along route $p$ and $P = \bigcup_{i,j \in V} P_{i,j}$ is the collection of all paths. The second set of constraints captures the capacity limits of the payment channels, and the third set expresses the balance condition. The balance constraints are essential for the computed flows to be realizable.
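One plausible reading of these constraints can be sketched as a checker (the network layout is hypothetical and the conditions simplified: the total rate across a channel is bounded by $c_e/\Delta$, and the balance condition requires the two directions of a channel to carry equal rates):

```python
def check_flows(flows, channels, delta=1.0):
    """flows: list of (path, rate), where a path is a list of directed edges (u, v).
    channels: dict frozenset({u, v}) -> capacity c_e.
    Simplified Spider-style check: capacity limit c_e / delta on the total
    rate through each channel, plus equal rates in the two directions."""
    rate = {}                                  # (u, v) -> total directed rate
    for path, x in flows:
        for (u, v) in path:
            rate[(u, v)] = rate.get((u, v), 0.0) + x
    for ch, cap in channels.items():
        u, v = sorted(ch)
        fwd, bwd = rate.get((u, v), 0.0), rate.get((v, u), 0.0)
        if fwd + bwd > cap / delta:            # capacity constraint
            return False
        if abs(fwd - bwd) > 1e-9:              # balance constraint
            return False
    return True
```

A set of path flows passing both checks is sustainable in the fluid-model sense: no channel is overloaded and no channel drains in one direction.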
\section{Introduction} At the early stage of a non-central heavy-ion collision, a strong magnetic field perpendicular to the reaction plane is created. In asymmetric Cu+Au collisions, due to the difference in the number of spectators, not only the magnetic field but also a strong electric field would exist pointing along the reaction plane from the Au-nucleus to Cu-nucleus. The lifetime of the electric field might be short, of the order of a fraction of a fm/$c$. The quarks and antiquarks that have been already produced at this time would experience the Coulomb force, which results in a charge dependence of particle directed flow~\cite{hirono,voronyuk}. Thus, the measurement of the charge-dependent directed flow in Cu+Au collisions provides an opportunity to test different quark (charge) production scenarios, e.g. two-wave quark production~\cite{2wave1,2wave2}, and shed light on the (anti-)quark production mechanism in heavy-ion collisions. Understanding the time evolution of the quark densities in heavy-ion collisions is also very important for detailed theoretical predictions of the Chiral Magnetic Effect and Chiral Magnetic Wave, for which various experiments are actively searching. In these proceedings, the charge-dependent directed flow in Cu+Au collisions at $\sqrt{s_{_{NN}}}$= 200 GeV measured with the STAR detector is presented. Results of higher-order flow are also presented. \section{Analysis method} Azimuthal anisotropies were measured with the event plane method defined below: \begin{equation} v_{n} = \langle \cos n(\phi-\Psi_{n}) \rangle / {\rm Res}\{\Psi_{n}\}, \end{equation} where $\phi$ is azimuthal angle of particles and $\langle \rangle$ means average over all particles in the events of the same centrality bins. The $\Psi_{n}$ denotes n$^{\rm th}$-order event plane. The first-order event plane was reconstructed with the Zero Degree Calorimeter (ZDC). 
The ZDC measures spectator neutrons and thus would minimize non-flow effects such as those from the momentum conservation. For higher harmonics measurements, the event planes were reconstructed from charged tracks (0.15$<$$p_{T}$$<$2 GeV/$c$) reconstructed in the Time Projection Chamber (TPC) and the Endcap Electro-Magnetic Calorimeter (EEMC). In case of using the TPC, charged tracks were divided into two subevents (-1$<$$\eta$$<$-0.4 and 0.4$<$$\eta$$<$1) and $v_{n}$ of charged particles of interest was measured with an $\eta$-gap of 0.4 using the event plane method (e.g. particles of interest are taken from 0$<$$\eta$$<$1 when using the subevent from the backward angle). The results from both subevents are consistent and the average of two measurements was used as final results. The event plane resolution Res\{$\Psi_{n}$\} was estimated by three subevents method~\cite{TwoSub}. Systematic uncertainties were estimated by varying event z-vertex and track quality cuts. The effect of the event plane determination was also taken into account in the systematic uncertainty. For higher-order $v_{n}$, the scalar product method~\cite{SP} was also tested just as a cross-check. \section{Results} \footnotetext[2]{Note that the figures in these proceedings have been updated since the presentation to account for a software issue in the calculation of the event plane resolution. The final results on $v_{1}$ are quantitatively close to those presented at the conference and had no impact on the physics conclusions. The $v_{2}$ and $v_{3}$ in peripheral collisions become larger after this correction.} \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\linewidth,angle=0]{v1pt_dv1_cent_fixed2.pdf} \caption{$v_{1}^{even}$ of positive and negative particles as a function of $p_{T}$ in four centrality bins and the difference between both charges, $\Delta v_{1}$. The PHSD model calculations with and without the initial electric field (EF)~\cite{voronyuk} are compared. 
The model calculation of $\Delta v_{1}$ with the EF is scaled by 0.1. See the text for the definition of positive direction of $v_{1}$ ($\Psi_{1}$). This plot has been updated since the presentation\protect\footnotemark[2].} \label{fig:v1pt} \end{center} \end{figure} Figure~\ref{fig:v1pt} shows $v_{1}$ of positive ($h^{+}$) and negative ($h^{-}$) charged particles as a function of $p_{T}$ in four centrality bins\footnotemark[2], where $v_{1}$ is measured with respect to the spectator plane in Au-going side and the sign of the $\Psi_{1}^{\rm Au}$ is defined to be positive. In asymmetric collisions, the magnitude of $v_{1}$ is no longer symmetric over the pseudorapidity unlike symmetric collisions, therefore the even component of $v_{1}$ is measured in this analysis. The $v_{1}$ at $p_{T}$$<$1 GeV/$c$ has negative value and positive at the higher $p_{T}$, which means more low (high) $p_{T}$ particles are emitted to the direction of Cu (Au) spectator. Bottom panels of Fig.~\ref{fig:v1pt} show the difference between both charges, $\Delta v_{1}=v_{1}^{h^{+}}-v_{1}^{h^{-}}$. In 20-40\% centrality, the $\Delta v_{1}$ seems to be negative in $p_{T}$$<$2 GeV/$c$, which is qualitatively consistent with the expectation from the initial electric field (EF), i.e. more positively charged particles would move to the direction of the EF and negatively charged particles move to the opposite side. \begin{wrapfigure}[22]{r}[1mm]{0.44\linewidth} \begin{center} \includegraphics[width=\linewidth,angle=0]{v1eta_wideCent_fixed.pdf} \caption{$v_{1}$ of positive and negative particles and $\Delta v_{1}$ as a function of $\eta$ in 10-40\% centrality. 
This plot has been updated since the presentation\protect\footnotemark[2].} \label{fig:v1eta} \end{center} \end{wrapfigure} The parton-hadron-string-dynamics (PHSD) model calculations with and without the effect of the EF~\cite{voronyuk} are compared to the data, where the $\Delta v_{1}$ for the calculation including the effect of the EF is scaled by 0.1. The model assumes that all electric charges are affected by the EF, resulting in a large separation of $v_{1}$ between positive and negative charges as shown in the upper left panel of Fig.~\ref{fig:v1pt}. The measured $\Delta v_{1}$ is smaller than the model prediction, indicating that the number of electric charges present within the lifetime of the EF ($\sim$0.25 fm/$c$) is much smaller than the total number of quarks created in the collision. Figure~\ref{fig:v1eta} shows $v_{1}$ and $\Delta v_{1}$ as a function of $\eta$ in the 10-40\% centrality bin\footnotemark[2], where $p_{T}$ is integrated over 1$<$$p_{T}$$<$2~GeV/$c$ and the sign of $\Psi_{1}^{\rm Au}$ is defined to be negative (opposite to Fig.~\ref{fig:v1pt}). The $v_{1}$ charge separation is clearly seen in $|\eta|$$<$1 and $\Delta v_{1}$ increases with $\eta$, although the magnitude of $v_{1}$ also changes with $\eta$. Figure~\ref{fig:vn} shows $v_{2}$, $v_{3}$, and $v_{4}$ of positively charged particles as a function of $p_{T}$ using the event plane method and the scalar product method. The two methods are in good agreement. Calculations from an event-by-event viscous hydrodynamic model~\cite{bozek} are compared to the data for $v_{2}$ and $v_{3}$. The model results using $\eta/s$=0.08 and $\eta/s$=0.16 qualitatively agree with the data in the 0-5\% and 20-30\% centrality bins. The centrality dependence of $v_{n}$ is similar to the results in Au+Au collisions~\cite{v2star,v3star,vnphenix}. No significant difference between positively and negatively charged particles was observed for the higher-order flow harmonics.
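As a concrete illustration of the subevent event-plane procedure described above, the following toy Monte Carlo sketch (our own illustrative code, not STAR analysis software; the multiplicity, the input $v_{2}$, and a random split into two subevents standing in for the $\eta$-gap are all assumptions made for the example) generates events with a known $v_{2}$, reconstructs $\Psi_{2}$ from one subevent, and corrects the observed signal by the two-subevent resolution:

```python
import numpy as np

rng = np.random.default_rng(0)

def event_plane_angle(phi, n):
    # Q-vector estimate of the n-th order event-plane angle Psi_n.
    return np.arctan2(np.sum(np.sin(n * phi)), np.sum(np.cos(n * phi))) / n

def sample_event(v2_true, psi2_true, n_part):
    # Accept-reject sampling from dN/dphi proportional to 1 + 2 v2 cos 2(phi - Psi_2).
    phi = np.empty(0)
    while phi.size < n_part:
        cand = rng.uniform(-np.pi, np.pi, 4 * n_part)
        pdf = 1 + 2 * v2_true * np.cos(2 * (cand - psi2_true))
        keep = rng.uniform(0, 1 + 2 * v2_true, cand.size) < pdf
        phi = np.concatenate([phi, cand[keep]])
    return phi[:n_part]

v2_in, n_events, n_part = 0.08, 400, 600
obs, res_terms = [], []
for _ in range(n_events):
    phi = sample_event(v2_in, rng.uniform(-np.pi / 2, np.pi / 2), n_part)
    # Two subevents stand in for the eta-gap split used in the text.
    a, b = phi[: n_part // 2], phi[n_part // 2:]
    psi_a, psi_b = event_plane_angle(a, 2), event_plane_angle(b, 2)
    res_terms.append(np.cos(2 * (psi_a - psi_b)))
    # Particles of interest are taken from the opposite subevent, as in the text.
    obs.append(np.mean(np.cos(2 * (b - psi_a))))

# Two-subevent resolution of a single subevent plane:
# Res{Psi_2} = sqrt(<cos 2(Psi_2^a - Psi_2^b)>).
resolution = np.sqrt(np.mean(res_terms))
v2_meas = np.mean(obs) / resolution
```

The resolution applied here is that of a single subevent plane, which matches the construction above where the particles of interest come from the opposite subevent; with the illustrative parameters the corrected `v2_meas` recovers the input $v_{2}$ within statistical fluctuations.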
\begin{figure}[hbt] \begin{minipage}{0.56\hsize} \begin{center} \includegraphics[width=0.98\textwidth,angle=0,trim=0 0 0 0]{vn_EPvsSP_fixed2.pdf} \caption{$v_{2}$, $v_{3}$, and $v_{4}$ as a function of $p_{T}$ in the 0-5\%, 20-30\%, and 40-50\% centrality bins measured with the event plane method and the scalar product method. Calculations from the event-by-event viscous hydrodynamic model~\cite{bozek} are compared. This plot has been updated since the presentation\protect\footnotemark[2].} \label{fig:vn} \end{center} \end{minipage} \begin{minipage}{0.43\hsize} \begin{center} \includegraphics[width=0.96\textwidth,angle=0,trim=0 0 0 0]{pidV1_fixed.pdf} \caption{$v_{1}$ of $\pi^{\pm}$, $K^{\pm}$, and $p$+$\bar{p}$ as a function of $p_{T}$ in the 10-40\% centrality bin. This plot has been updated since the presentation\protect\footnotemark[2].} \label{fig:pidv1} \end{center} \end{minipage} \end{figure} Identified-particle $v_{n}$ were also measured by combining the TPC $dE/dx$ with timing information from the Time-Of-Flight (TOF) detector. The $v_{1}$ of charge-combined $\pi^{\pm}$, $K^{\pm}$, and $p+\bar{p}$ are presented in Fig.~\ref{fig:pidv1}, and the $v_{2}$ and $v_{3}$ of $\pi^{+}(\pi^{-})$, $K^{+}(K^{-})$, and $p(\bar{p})$ for different centrality bins are presented in Fig.~\ref{fig:pidvn}. The trends familiar from symmetric collisions, such as the mass ordering at low $p_{T}$ ($<$2 GeV/$c$) and the baryon-meson splitting at intermediate $p_{T}$, are also observed here. \begin{figure}[thb] \begin{center} \includegraphics[width=0.78\textwidth,angle=0,trim=0 0 0 0]{pidVn_fixed.pdf} \caption{$v_{2}$ and $v_{3}$ of $\pi$, $K$, and $p$($\bar{p}$) as a function of $p_{T}$ for different centrality bins. This plot has been updated since the presentation\protect\footnotemark[2].} \label{fig:pidvn} \end{center} \end{figure} \section{Conclusions} Charge-dependent anisotropic flow in Cu+Au collisions at $\sqrt{s_{_{NN}}}$= 200 GeV has been measured with the STAR detector.
A charge difference in $v_{1}$ is clearly observed, consistent with the effect of the initial electric field. The magnitude of $\Delta v_{1}$ is much smaller than the PHSD model predictions, likely indicating that only a small fraction of all final-state quarks are created while the electric field is strong. These results could shed light on the time evolution of quark production in heavy-ion collisions. Higher-order flow harmonics $v_{n}$ have also been presented; they exhibit trends similar to those observed in symmetric collisions. \bibliographystyle{elsarticle-num}
\section{Introduction} The goal of this work is to investigate the so-called electrostatics analogy in the analysis of nematic suspensions or colloids: these consist of small particles immersed in a nematic liquid crystal matrix. The presence of these particles and their alignment induces elastic strains in the nematic medium; the result is a complex strain-alignment coupling yielding novel, highly functional composite materials. Examples include dilute ferronematics, where the suspended particles are ferromagnetic inclusions; organizing carbon nanotubes using liquid crystals; ferroelectrics; and living liquid crystals, where the suspended particles are swimming bodies (e.g. flagellated bacteria). Further details on the numerous applications of such systems may be found in the review articles \cite{lavrentovich20,musevic19}. Mathematical studies of colloidal inclusions in nematics have tended to follow two different directions. Several papers have addressed homogenization of nematics with a dense array of colloids (see, e.g., \cite{BCG05,BK,CDGP, CZ20,CZ20design}), while others consider the presence of point or ring singularities induced by a single colloid particle (see, e.g., \cite{alamabronsardlamy16saturn,alamabronsardlamy17, ACS21, ABGL21, ACS22}). In this paper we adopt the setting of the second set of papers, but concentrate on the effect of the colloid geometry on the far-field behavior of the nematic rather than the local structure of singularities near the colloid surface. The electrostatics analogy is commonly used to describe colloidal suspensions in the case of a dilute concentration of particles. It originates in the work \cite{brocharddegennes70} by Brochard and de Gennes, and has been developed further by several authors in the physics literature (see \cite{musevic19} and references therein).
It relies on considering each single particle separately and postulating that: \begin{itemize} \item far away from the particle the distortion in nematic alignment can be viewed as a perturbation of uniform alignment and taken to solve the corresponding linearized equation -- the representation formula for solutions of that linearized equation then provides a specific asymptotic expansion, \item the first few coefficients of that asymptotic expansion are characterized by the properties (size, symmetries, etc.) of the particle. \end{itemize} Then one formally replaces the nonlinear effect of each colloid particle by some singular source terms (derivatives of Dirac masses) in the linearized equation, according to the terms in the asymptotic expansion, which are derivatives of the fundamental solution (see Remark \ref{r:eqn0}). In the one-constant approximation for the elastic energy of the nematic, this amounts to the equation satisfied by an electric potential in the presence of charged multipoles, hence the name ``electrostatics analogy''. This simplification, intuitively valid for dilute enough suspensions, allows for an explicit calculation of the energy of a given configuration in terms of the respective positions and properties of each particle, leading to the ultimate goal: computation of interparticle interactions. In this article we provide a few elements towards mathematically quantifying the electrostatics analogy. What seems to us the most challenging part is the second bullet-point above: relating the coefficients of the asymptotic expansion to the particle's properties. Indeed, various mathematical obstacles defy a straightforward calculation of an expansion of minimizers: for instance, minimizers may not be unique, and it is unknown whether the symmetry of the particle system imposes a corresponding symmetry on the minimizing nematic configuration. Nevertheless we do obtain some results in that direction for the leading-order term of the expansion. 
Specifically, we consider a single particle $G\subset{\mathbb R}^3$ (smooth and bounded) surrounded by nematic liquid crystal. A configuration of nematic alignment is represented by a director field $n\colon{\mathbb R}^3\setminus G\to \mathbb S^2$, and its energy (within the one-constant approximation) is given by \begin{align*} E(n)=\int_{{\mathbb R}^3\setminus G}|\nabla n|^2 + F_s(n_{\lfloor\partial G}), \end{align*} where $F_s\colon H^{1/2}(\partial G;\mathbb S^2)\to [0,\infty]$ can be a very general surface energy reflecting the particle's anchoring properties. Uniform alignment $n(x)\approx n_0\in\mathbb S^2$ in the far field $r=|x|\to\infty$ is imposed through the condition \begin{align*} \int_{{\mathbb R}^3\setminus G} \frac{|n-n_0|^2}{1+r^2} \lesssim \int_{{\mathbb R}^3\setminus G}|\nabla n|^2 <\infty. \end{align*} Equilibrium configurations satisfy the harmonic map equation \begin{align*} -\Delta n =|\nabla n|^2 n\qquad\text{in }{\mathbb R}^3\setminus G. \end{align*} Loosely speaking, we prove that: \begin{itemize} \item minimizing configurations have an asymptotic expansion determined by the linearized equation $\Delta n=0$, however one cannot discard non-harmonic corrections -- see Theorem~\ref{t:expansion}; \item generically, the leading-order $\mathcal O(1/r)$ term in that expansion is uniquely determined by the particle $G$ and the far-field uniform alignment $n_0$ -- see Theorem~\ref{t:torque}. \end{itemize} The first point is oblivious to the presence of the particle; it is a result about minimizing harmonic maps in an exterior domain. The second point is obtained by connecting the leading-order term to the variation of minimal energy induced by keeping the particle $G$ fixed and rotating the far-field alignment $n_0$. This is related to formal calculations in \cite{brocharddegennes70} for the torque exerted by the particle on the nematic (see Remark~\ref{r:BdG}).
As a consequence, we see that non-harmonic corrections are much smaller when the orientation of the particle is locally minimizing relative to variations in the prescribed far-field direction; see Corollary~\ref{c:eqn0}. Moreover, symmetry properties of the particle and of the surface energy induce corresponding symmetries in the far-field expansion at leading order (Corollary~\ref{c:sym}). Below we state our results in more detail. \subsection{Far-field expansion for harmonic maps} Our far-field expansion is a result about harmonic maps in an exterior domain, which by rescaling we may without loss of generality assume to contain $\mathbb R^3\setminus \overline B_1$. So we let $n_0\in\mathbb S^2$ and $n\colon {\mathbb R}^3\setminus \overline B_1\to\mathbb S^2$ be such that \begin{align*} \int_{{\mathbb R}^3\setminus \overline B_1} \frac{|n-n_0|^2}{r^2} \lesssim\int_{{\mathbb R}^3\setminus \overline B_1}|\nabla n|^2 <\infty, \end{align*} and $n$ is locally energy-minimizing, that is, \begin{align*} \int_{{\mathbb R}^3\setminus\overline B_1}|\nabla n|^2 \leq \int_{{\mathbb R}^3\setminus \overline B_1}|\nabla \tilde n|^2, \end{align*} for any $\mathbb S^2$-valued map $\tilde n$ which agrees with $n$ outside of a compact subset of $\mathbb R^3\setminus \overline B_1$. Our first main result is a far-field expansion for such minimizing maps. \begin{thm}\label{t:expansion} The minimizing map $n$ satisfies, as $r=|x|\to\infty$, \begin{align}\label{eq:expansion} n&= n_0 + n_{harm} +n_{corr} + \mathcal O\left(\frac{1}{r^4}\right),\\ n_{harm}& =\frac{1}{r}v_0 + \sum_{j=1}^3 p_j\partial_j\left(\frac 1r\right) + \sum_{k,\ell=1}^3 c_{k\ell}\partial_k\partial_\ell\left(\frac 1r\right),\quad v_0,p_j,c_{k\ell}\in{\mathbb R}^3,\nonumber\\ n_{corr}& = -\frac{|v_0|^2}{r^2}n_0 - \frac{|v_0|^2}{6r^3}v_0 -\frac{1}{3r} \sum_{j=1}^3 v_0\cdot p_j\,\partial_j\left(\frac 1r\right) \, n_0. \nonumber \end{align} Moreover the vectors $v_0$, $p_j$ ($j=1,2,3$) are orthogonal to $n_0$. 
\end{thm} The far-field expansion \eqref{eq:expansion} consists of a harmonic part $n_{harm}$ solving the linearized equation $\Delta n_{harm}=0$, and of a non-harmonic correction $n_{corr}$. Interestingly, if the coefficient $v_0$ of the leading-order term in $n_{harm}$ vanishes, then the non-harmonic correction vanishes and $n$ admits a harmonic expansion up to $\mathcal O(1/r^4)$. Higher-order non-harmonic corrections would not have that property. This is why we stop the expansion at this order, even though it will be clear from the proof that one can obtain an expansion at any order. \begin{rem}\label{r:general} The proof of Theorem~\ref{t:expansion} can be generalized to obtain far-field expansions for any manifold-valued map $u\colon {\mathbb R}^d\setminus \overline B_1\to\mathcal N\subset {\mathbb R}^k$ ($d\geq 3$) with given far-field value $u_0\in\mathcal N$ in the sense $\int r^{-2}|u-u_0|^2 <\infty$, minimizing an energy of the form $\int A(u)[\nabla u,\nabla u]$, where $A(u)$ is a positive definite bilinear form on ${\mathbb R}^{k\times n}$ depending continuously on $u$. Far-field asymptotic expansions for the linearized system $\nabla\cdot A(u_0)\nabla u=0$ are in terms of derivatives of the fundamental solution, as described e.g. in \cite{BGO}. Such a generalization is interesting in the context of nematic liquid crystals with unequal elastic constants that satisfy Frank's coercivity inequalities. \end{rem} We will obtain below, as corollaries of Theorem~\ref{t:torque}, various sufficient conditions ensuring that $v_0=0$, and so $n_{corr}=0$. For now, it is worth noting that $v_0$ vanishes for axisymmetric configurations. The map $n:\mathbb{R}^3 \setminus \overline{B}_1 \rightarrow \mathbb{S}^2$ is axisymmetric about $n_0$ if for any rotation $R$ of axis $n_0$ one has \begin{align*} n(Rx)=Rn(x)\qquad\forall x\in{\mathbb R}^3\setminus \overline B_1.
\end{align*} Using the far-field expansion \eqref{eq:expansion} in this identity implies $Rv_0=v_0$ for all rotations $R$ of axis $n_0$, and therefore $v_0=0$ since $v_0\cdot n_0=0$. \begin{cor}\label{c:symexp} If the minimizing map $n$ is axisymmetric about $n_0$, then $n=n_0+n_{harm}+\mathcal O(1/r^4)$ as $r=|x|\to\infty$, with $\Delta n_{harm}=0$. \end{cor} Corollary~\ref{c:symexp} is stated here for minimizing maps that are axisymmetric, but it is hard in general to prove that a minimizing map is symmetric. However, the proof of Theorem~\ref{t:expansion} can be reproduced for an axisymmetric map which is minimizing merely among axisymmetric configurations, and Corollary~\ref{c:symexp} is valid also in that case. \subsection{Characterization of the leading-order term} Next we take into account the presence of the particle, a smooth bounded open subset $G\subset{\mathbb R}^3$, and consider the energy \begin{align*} E(n)=\int_{{\mathbb R}^3\setminus G}|\nabla n|^2 + F_s(n_{\lfloor\partial G}), \end{align*} where $F_s \colon H^{1/2}(\partial G;\mathbb S^2)\to [0,\infty]$ is lower semicontinuous and $\lbrace F_s <\infty\rbrace\neq \emptyset$. This ensures that the minimizing problem \begin{align}\label{eq:hatEn0} \hat E(n_0)=\min\Big\lbrace E(n)\colon &n\in H^1_{loc}({\mathbb R}^3\setminus G;\mathbb S^2), \nonumber\\ & \int_{{\mathbb R}^3\setminus G}\frac{|n-n_0|^2}{1+r^2} +\int_{{\mathbb R}^3\setminus G}|\nabla n|^2<\infty\Big\rbrace, \end{align} does admit a minimizer. To check the existence of a minimizer in \eqref{eq:hatEn0}, note that a boundary map $n_b\in H^{1/2}(\partial G;\mathbb S^2)$ with finite surface energy $F_s(n_b)<\infty$ can be extended to a map $n\in H^1_{loc}({\mathbb R}^3\setminus G;\mathbb S^2)$ such that $n\equiv n_0$ outside of a compact set using e.g. \cite[Lemma~A.1]{hkl88} so the infimum is finite. 
Moreover the energy is coercive thanks to Hardy's inequality, and weakly lower semicontinuous as a sum of two weakly lower semicontinuous functions. Examples of admissible surface energies $F_s$ include \begin{align*} F_s(n)=\begin{cases} 0 & \text{ if }n=n_D,\\ +\infty & \text{ otherwise,} \end{cases} \end{align*} for some fixed map $n_D\in H^{1/2}(\partial G;\mathbb S^2)$, which corresponds to imposing Dirichlet boundary conditions $n =n_D$ on $\partial G$; or \begin{align*} F_s(n)=\int_{\partial G} g(n,x)\, d\mathcal H^2(x), \end{align*} for some measurable function $g\colon \mathbb S^2 \times \partial G\to [0,\infty)$ which is continuous with respect to $n$; for instance $g(n,x)=|n-n_D(x)|^2$, which relaxes Dirichlet boundary conditions (strong anchoring) to weak anchoring. Our second main result relates the vector $v_0$ appearing in the leading-order term of the expansion \eqref{eq:expansion} to the gradient of the function $\hat E$ at $n_0$. \begin{thm}\label{t:torque} The function $\hat E$ defined by \eqref{eq:hatEn0} is Lipschitz, and for a.e. $n_0\in\mathbb S^2$ we have \begin{align}\label{eq:torque} \nabla \hat E(n_0)=-8\pi v_0, \end{align} where $v_0=\lim_{r\to\infty} r(n-n_0)$ for any minimizing $n$ such that $\hat E(n_0)=E(n)$. Moreover $\hat E$ is semiconcave: for all $n_0,m_0\in\mathbb S^2$ and $v_0=\lim_{r\to\infty} r(n-n_0)$ for any minimizer $n$ achieving $\hat E(n_0)$, we have the one-sided inequality \begin{align*} \hat E(m_0)\leq \hat E(n_0)-8\pi v_0\cdot (m_0-n_0) + C |m_0-n_0|^2, \end{align*} for some constant $C=C(G,F_s)\geq 0$. \end{thm} \begin{rem}\label{r:BdG} Formula \eqref{eq:torque} relates $v_0$ to the torque applied by the particle $G$ on the nematic, in agreement with formal calculations in \cite{brocharddegennes70} for an axisymmetric particle.
These formal calculations can be made rigorous (and then they show that $\hat E$ is differentiable everywhere) if one knows that the minimization problem \eqref{eq:hatEn0} admits a unique minimizer $n$ which moreover depends smoothly on $n_0$. Such uniqueness and smoothness results seem very hard to obtain in general, and we use a somewhat different method to prove \eqref{eq:torque} and Theorem~\ref{t:torque}. \end{rem} Different minimizers for $\hat E(n_0)$ may a priori have different asymptotic expansions \eqref{eq:expansion}. However, a crucial nontrivial consequence of Theorem~\ref{t:torque} is that at any differentiability point $n_0$ of $\hat E$, the coefficient $v_0$ of the leading-order term is uniquely determined by $n_0$, even though \eqref{eq:hatEn0} may have several minimizers. We do not know whether $\hat E$ can have non-differentiable points, nor whether $v_0$ can be multivalued at such points. The semiconcavity inequality in Theorem~\ref{t:torque} implies that all possible values of $v_0$ are included in the subdifferential of $-\frac{1}{8\pi}\hat E$. It would be interesting to characterize the values of $v_0$ in terms of this subdifferential. One may pose an analogous question for $\mathbb{S}^1$-valued minimizers in exterior domains ${\mathbb R}^2\setminus G$ in the plane which approach a constant $n_0=e^{i\phi_0}$ at infinity. However the situation is completely different, because finite-energy configurations do not exist in general. One way around that issue is to relax the $\mathbb S^1$-valued constraint via a Ginzburg-Landau approximation. This approach is implemented in \cite{ABGS}, with the asymptotic value $n_0=e^{i\phi_0}$ left free. An interesting consequence of the semiconcavity of $\hat E$ is that it must be differentiable, with zero gradient, at any local minimum point.
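The step from semiconcavity to a zero gradient at a local minimum can be spelled out; the following is our own sketch of the argument, not quoted from the proofs:

```latex
% Sketch: at a local minimum n_0 of \hat E, any leading-order coefficient
% v_0 of a minimizer must vanish. Combining local minimality,
% \hat E(m_0) \geq \hat E(n_0), with the semiconcavity inequality of
% Theorem~\ref{t:torque} gives, for m_0 \in \mathbb S^2 near n_0,
\begin{align*}
8\pi\, v_0\cdot (m_0-n_0) \;\leq\; C\,|m_0-n_0|^2 .
\end{align*}
% Taking m_0 = (n_0+tw)/|n_0+tw| with w \in n_0^\perp, |w|=1, one has
% m_0-n_0 = tw + \mathcal O(t^2), so dividing by t and letting t\to 0^+
% yields v_0\cdot w \leq 0; replacing w by -w gives v_0\cdot w = 0 for all
% w \in n_0^\perp. Since v_0 \perp n_0, this forces v_0 = 0.
```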
\begin{cor}\label{c:eqn0} If $n_0\in\mathbb S^2$ is locally minimizing for $\hat E$, then $v_0=0$ and $n=n_{harm}+\mathcal O(1/r^4)$ as $r=|x|\to\infty$ with $\Delta n_{harm}=0$, for any minimizing $n$ such that $E(n)=\hat E(n_0)$. \end{cor} \begin{rem}\label{r:eqn0} In the physical system it is formally equivalent to rotate the far-field alignment $n_0$ or the particle $G$. Hence Corollary~\ref{c:eqn0} tells us that, when the particle is in a stable equilibrium position, all minimizing configurations $n$ have a far-field expansion which is harmonic up to $\mathcal O(1/r^4)$, and whose leading order is given by the harmonic term $\sum_j p_j\partial_j(1/r)$ for some vectors $p_j\in n_0^\perp$. Such a leading-order term corresponds to solutions of the equation \begin{align*} \Delta n =\frac{1}{4\pi}\sum_{j=1}^3 p_j\partial_j \delta\qquad\text{in }{\mathbb R}^3, \end{align*} where the singular source term can be interpreted as a dipole moment, as described e.g. in \cite{lubensky98}. \end{rem} Another remarkable consequence of Theorem~\ref{t:torque} concerns the important case where the particle $G$ possesses some rotational symmetry. As mentioned earlier, we may not necessarily infer the same symmetry for all minimizers, but we can make some strong geometrical conclusions concerning the vector $v_0$ in the expansion \eqref{eq:expansion} of minimizers. To make this precise, we define the symmetry group of the particle $G$, with its anchoring properties described by the surface energy $F_s$, as a subgroup of the orthogonal transformations $O(3)$ given by \begin{align*} \mathrm{Sym}(G)=\Big\lbrace R\in O(3)\colon & RG=G,\text{ and } \\ & F_s(Rn\circ R^{-1})=F_s(n)\;\forall n\in H^{1/2}(\partial G;\mathbb S^2)\Big\rbrace. \end{align*} For any symmetry-preserving transformation $R\in\mathrm{Sym}(G)$, the energy $E$ is conserved under the transformation $n\mapsto Rn\circ R^{-1}$, and therefore $\hat E(n_0)=\hat E(Rn_0)$.
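This invariance propagates to the leading-order coefficient of the expansion; a brief sketch (our own elaboration of the mechanism behind the symmetry statements):

```latex
% Sketch: equivariance of v_0 under symmetries of the particle.
% If n is a minimizer for \hat E(n_0) and R \in \mathrm{Sym}(G), then
% \tilde n = Rn\circ R^{-1} is a minimizer for \hat E(Rn_0), and since the
% leading term v_0/r of the expansion \eqref{eq:expansion} depends on x
% only through r = |x|,
\begin{align*}
v_0(Rn_0)\;=\;\lim_{r\to\infty} r\,\big(\tilde n - Rn_0\big)\;=\;R\, v_0(n_0).
\end{align*}
% At points of differentiability this reads
% \nabla\hat E(Rn_0) = R\,\nabla\hat E(n_0), consistent with \eqref{eq:torque}.
% When \mathrm{Sym}(G) contains all rotations about an axis \mathbf u,
% \hat E(n_0) depends only on the angle between n_0 and \mathbf u, so its
% gradient has no azimuthal component: v_0(n_0)\cdot(\mathbf u\times n_0)=0.
```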
\begin{cor}\label{c:sym} If the particle $G$ has an axis of symmetry $\mathbf u\in\mathbb S^2$, i.e. $\mathrm{Sym}(G)$ contains all rotations $R\in SO(3)^{\mathbf u}$ about axis $\mathbf u$, then for almost all $n_0\in\mathbb S^2$ we have \begin{align}\label{eq:cor1.9} v_0(n_0)\cdot (\mathbf u\times n_0)=0, \end{align} where $v_0(n_0)=\lim_{r\to\infty}r(n-n_0)$ for any minimizing map $n$ achieving $\hat E(n_0)$. If $\hat E$ is differentiable at $\mathbf u$ then $v_0(\mathbf u)=0$. If the particle is spherically symmetric, i.e. $\mathrm{Sym}(G)$ contains all rotations $SO(3)$, then $v_0(n_0)=0$ for all $n_0\in\mathbb S^2$. \end{cor} Note that since $v_0$ is orthogonal to $n_0$, if $\mathbf u$ and $n_0$ are not parallel, then the identity $v_0 \cdot (\mathbf u\times n_0)=0$ forces $v_0$ to belong to a fixed line determined by $n_0$ and $\mathbf u$. This link between symmetry properties of $G$ and of $v_0$ gives a rigorous justification to assertions in \cite[\S~II.1.a]{brocharddegennes70} where this is deduced from the assumption, false in general, that minimizers $n$ in \eqref{eq:hatEn0} are unique. \begin{rem} In the axisymmetric setting, Corollary~\ref{c:sym} leaves open the case when $\hat E$ is not differentiable at $n_0=\mathbf u$, the axis of symmetry: the $1/r$ asymptotic might be nonzero. If that situation occurs, that is, there is a minimizer $n$ with far-field alignment $\mathbf u$ but with $v_0\neq 0$, then all its axial rotations $R n\circ R^{-1}$ are minimizers for $\hat E(\mathbf u)$ too, with $1/r$ asymptotic term equal to $Rv_0$. The semiconcavity inequality \begin{align*} \hat E(n_0)\leq \hat E(\mathbf u)-8\pi Rv_0 \cdot (n_0-\mathbf u) + C |n_0-\mathbf u|^2, \end{align*} is then valid for all rotations $R$ of axis $\mathbf u$, and we deduce \begin{align*} \hat E(n_0)\leq \hat E(\mathbf u) -8\pi|n_0-\mathbf u| +C |n_0-\mathbf u|^2. 
\end{align*} Hence $\hat E$ has a local maximum at $\mathbf u$, and its graph near $\mathbf u$ looks locally like a cone. While none of the results above preclude this scenario in the axisymmetric setting, it is natural to ask the open question: can this situation really occur? \end{rem} \subsection{Plan of the article} In section~\ref{s:expansion} we prove Theorem~\ref{t:expansion} and in section~\ref{s:torque} we prove Theorem~\ref{t:torque}. In Appendix~\ref{a:decay} we provide proofs of some familiar (but not easily found) decay estimates for Poisson's equation for the reader's convenience. \section{Far-field expansion}\label{s:expansion} In this section we prove Theorem~\ref{t:expansion}. The minimizing map $n$ solves the harmonic map equation \begin{align}\label{eq:eulerlagrange} -\Delta n=|\nabla n|^2n\qquad\text{in }{\mathbb R}^3\setminus \overline B_1. \end{align} If the right-hand side decays like $\mathcal O(1/|x|^\gamma)$ for some $\gamma>3$, decay estimates for the Poisson equation (see Lemma~\ref{l:decaycorrecptwise}) enable one to start a harmonic expansion for $n$, and this process can then be iterated including relevant non-harmonic corrections. Hence the main new ingredient in the proof of Theorem~\ref{t:expansion} is to obtain an initial strong enough decay estimate on $|\nabla n|$. Note that, since $\int_{|x|\geq R}|\nabla n|^2\to 0$ as $R\to\infty$, small energy estimates for harmonic maps \cite{schoen84,schoenuhlenbeck82} ensure that $n$ is smooth outside of a finite ball. Specifically, given $x_0\in{\mathbb R}^3$, $\vert x_0\vert =R$, the small energy regularity estimate for harmonic maps \cite[Theorem~2.2]{schoen84} applied to $\hat n(\hat x)=n (x_0+(R/2)\hat x)$ implies the existence of $R_0\geq 1$ (depending on $n$) such that \begin{equation}\label{eq:smallregrescaled} \abs{x_0}=R\geq R_0\quad\Longrightarrow \quad \abs{\nabla n}^2(x_0)\lesssim R^{-3}\int_{\frac R2\leq\abs{x}\leq \frac {3R}{2}}\abs{\nabla n}^2. 
\end{equation} In particular we have the decay estimate $|\nabla n(x)|^2= o(1/|x|^3)$. At this point we would like to use decay estimates of Poisson's equation from Lemma~\ref{l:decaycorrecptwise} in an iterative process to generate the far-field expansion, but the decay given in \eqref{eq:smallregrescaled} is just not enough to start applying the Lemma. Consequently, we require a bit more algebraic decay $\mathcal O(1/R^\delta)$, for some $\delta>0$, of the integral $\int_{|x|\geq R}|\nabla n|^2$. This we obtain in Lemma~\ref{l:energyimprovement} and Step~1 of Theorem~\ref{t:expansion}'s proof, using the minimizing property of $n$ in order to compare the decay of that integral with the decay of the same integral for minimizers of the Dirichlet energy with values into the plane $T_{n_0}\mathbb S^2$, that is, solutions of the linearized equation $\Delta n=0$. First recall that for harmonic functions we have the following decay estimates. \begin{lem}\label{l:harmonicdecay} Let $u\colon {\mathbb R}^3\setminus \overline B_1\to{\mathbb R}$ satisfy $\int_{{\mathbb R}^3\setminus \overline B_1}\abs{\nabla u}^2<\infty$ and $\Delta u=0$ in ${\mathbb R}^3\setminus \overline B_1$. Then for all $R\geq 1$, $\hat u(\hat x)=u(R \hat x)$ satisfies \begin{equation*} \int_{\abs{\hat x}\geq 1}\abs{\nabla\hat u}^2 = \frac 1R \int_{\abs{x}\geq R}\abs{\nabla u}^2 \leq \frac{1}{R^2}\int_{\abs{x}\geq 1}\abs{\nabla u}^2. \end{equation*} \end{lem} \begin{proof} Since $u$ is harmonic and $\int_{{\mathbb R}^3\setminus\overline B_1}|\nabla u|^2<\infty$, its spherical harmonics expansion is of the form \begin{equation*} u(x) = u(r\omega)=u_0 + \sum_k \frac{a_k}{r^{\gamma_k}}\phi_k(\omega), \end{equation*} where we decompose $x \neq 0$ in polar coordinates as $x = r\omega, r = |x|, $ and $\omega = \frac{x}{|x|} \in \mathbb{S}^2,$ and $\lbrace \phi_k\rbrace_k$ is an $L^2(\mathbb S^2)$-orthonormal system of spherical harmonics and $\gamma_k > 0$. 
Then we compute \begin{align*} \int_{\abs{x}\geq R}\abs{\nabla u}^2 &= \int_{\abs{x}\geq R}\nabla\cdot(u\nabla u) =-\int_{\abs{x}=R} u \partial_r u \\ & = \sum_k \frac{\gamma_ka_k^2}{R^{2\gamma_k+1}} \leq \frac 1R \sum_k\gamma_ka_k^2 = \frac 1R \int_{\abs{x}\geq 1}\abs{\nabla u}^2. \end{align*} \end{proof} We obtain almost the same decay for our minimizing map $n$, via the following decay improvement result. The estimate obtained in Lemma~\ref{l:energyimprovement} will be needed in Step 1 of the proof of Theorem~\ref{t:expansion}. After the proof of the theorem we present a second proof of that step, replacing the estimate of Lemma~\ref{l:energyimprovement} by a different approach inspired by asymptotic expansions of minimal surfaces in \cite{schoen83}. That alternative proof is however slightly less general than the one in Lemma~\ref{l:energyimprovement}: for energies $\int A(u)[\nabla u,\nabla u]$ as in Remark~\ref{r:general} it would require the positive definite quadratic form $A$ to be a $C^1$ (rather than $C^0$) function of $u$. \begin{lem}\label{l:energyimprovement} For any $\alpha<2$, there exist $\delta>0$ and $R_1>1$ such that for any $n_0\in\mathbb S^2$ and any map $n\colon {\mathbb R}^3\setminus \overline B_1\to \mathbb S^2$ with $\int r^{-2}\abs{n-n_0}^2 \lesssim \int\abs{\nabla n}^2<\infty$, which is energy minimizing, i.e. $\int\abs{\nabla n}^2\leq\int\abs{\nabla\tilde n}^2$ for all $\mathbb S^2$-valued maps $\tilde n$ that agree with $n$ outside of a compact subset of $ {\mathbb R}^3\setminus \overline B_1$, we have \begin{equation*} \int_{\abs{x}\geq 1}\abs{\nabla n}^2\leq\delta^2\quad\Rightarrow\quad\frac 1{R_1} \int_{\abs{x}\geq R_1}\abs{\nabla n}^2\leq \frac{1}{R_1^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla n}^2. \end{equation*} \end{lem} \begin{proof}[Proof of Lemma~\ref{l:energyimprovement}] The proof follows quite closely the strategy in \cite[Proposition~1]{luckhaus88} (see also \cite[Theorem~2.4]{hkl86}). 
By rotational symmetry we may assume $n_0=(0,0,1)$. We fix $\alpha<2$. Since $T_{n_0}\mathbb S^2=n_0^\perp=\mathbb R^2\times\lbrace 0\rbrace$, thanks to Lemma~\ref{l:harmonicdecay} we may choose any $R_\star>1$ such that for any $T_{n_0}\mathbb S^2$-valued energy minimizing map $v$ in ${\mathbb R}^3\setminus \overline B_1$ with $\int\abs{v}^2\abs{x}^{-2}\lesssim\int\abs{\nabla v}^2<\infty$, \begin{equation}\label{eq:harmonicimprovement} \frac 1{R_\star} \int_{\abs{x}\geq R_\star}\abs{\nabla v}^2\leq \frac 14 \frac{1}{R_\star^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla v}^2. \end{equation} Then we fix $R_1=2R_\star$ and argue by contradiction, assuming Lemma~\ref{l:energyimprovement} to be false for this value of $R_1$. Hence there exist $\delta_j\to 0$ and minimizing $\mathbb S^2$-valued maps $n_j$ such that \begin{align*} &\int_{\abs{x}\geq 1}\frac{\abs{n_j-n_0}^2}{\abs{x}^2}\lesssim\int_{\abs{x}\geq 1}\abs{\nabla n_j}^2 =\delta_j^2\\ \text{and}\quad & \frac 1{R_1} \int_{\abs{x}\geq R_1}\abs{\nabla n_j}^2 > \frac{1}{R_1^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla n_j}^2. \end{align*} We set \begin{equation*} v_j:=\frac{n_j-n_0}{\delta_j}, \end{equation*} so that \begin{equation}\label{eq:propvj} \int_{\abs{x}\geq 1}\frac{\abs{v_j}^2}{\abs{x}^2}\lesssim\int_{\abs{x}\geq 1}\abs{\nabla v_j}^2 =1\quad\text{and}\quad \frac 1{R_1} \int_{\abs{x}\geq R_1}\abs{\nabla v_j}^2 > \frac{1}{R_1^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla v_j}^2. \end{equation} Up to a subsequence (that we do not relabel), there exists $v_\star\in H^1_{loc}({\mathbb R}^3\setminus \overline B_1;\mathbb R^3)$ such that $v_j\rightharpoonup v_\star$ weakly in $H^1_{loc}$, strongly in $L^2_{loc}$, and almost everywhere. Note that $v_\star(x)\in T_{n_0}\mathbb S^2$ for a.e. $x\in {\mathbb R}^3\setminus \overline B_1$, and that by lower semi-continuity, \begin{equation*} \int_{\abs{x}\geq 1}\frac{\abs{v_\star}^2}{\abs{x}^2}\lesssim\int_{\abs{x}\geq 1}\abs{\nabla v_\star}^2 \leq 1. 
\end{equation*} By Fubini's theorem we may moreover pick $r\in [1,2]$ such that \begin{equation*} \int_{\abs{x}=r}\abs{\nabla v_\star}^2\lesssim 1 \, \qquad\text{and}\quad \int_{\abs{x}=r}\abs{\nabla v_j}^2\lesssim 1. \end{equation*} By continuity of the trace operator and compactness of the embedding $H^{\frac 12}(\partial B_r)\subset L^2(\partial B_r)$ we have $\int_{\abs{x}=r}\abs{v_j-v_\star}^2\to 0$. We claim that $v_\star$ is a $T_{n_0}\mathbb S^2$-valued minimizing map in $\Omega_r=\lbrace \abs{x}>r\rbrace$. Let $v\in H_{loc}^1(\Omega_r; T_{n_0}\mathbb S^2)$ agree with $v_\star$ outside of a compact subset of $\Omega_r$. We will show that $\int\abs{\nabla v_\star}^2\leq\int\abs{\nabla v}^2$, thus proving the claim. Let \begin{align*} \tilde v_j & =\frac{\delta_j^{-\frac 12} v}{\max(\delta_j^{-\frac 12},\abs{v})},\qquad \tilde n_j = \pi_{\mathbb S^2}(n_0 + \delta_j \tilde v_j), \end{align*} so that \begin{align*} &\abs{\nabla \tilde n_j}^2\leq \delta_j^2\left(1+O(\delta_j^{\frac 12})\right)\abs{\nabla v}^2,\qquad \tilde v_j\to v\text{ in }H^1_{loc}(\overline \Omega_r;T_{n_0}\mathbb S^2). \end{align*} Since $v=v_\star$ on $\partial B_r$ and $\int_{\abs{x}=r}\abs{v_j-v_\star}^2\to 0$, we also have \begin{align*} \gamma_j^2:=\int_{\partial B_r}\abs{v_j-\tilde v_j}^2\to 0. \end{align*} Moreover, using that $\pi_{\mathbb S^2}$ is smooth in a small neighborhood of $n_0$ and $\tilde v_j\cdot n_0=0$, we obtain \begin{align*} \tilde n_j-n_j& =\pi_{\mathbb S^2}(n_0+\delta_j\tilde v_j)-n_0 -\delta_j v_j =\delta_j (\tilde v_j-v_j) +\mathcal O(\delta_j^2 |v|^2), \end{align*} so \begin{align*} \int_{\partial B_r}\abs{n_j-\tilde n_j}^2 \leq \delta_j^2 (\gamma_j^2 +c^2\delta_j^2), \end{align*} where $c>0$ is a constant depending on $v$. 
Luckhaus' extension lemma~\cite[Lemma~1]{luckhaus88} ensures, for any $\lambda\in (0,1)$, the existence of $\varphi_j\colon B_{(1+\lambda)r}\setminus B_r\to\mathbb R^3$ such that \begin{align*} \varphi_j=&n_j\text{ on }\partial B_r,\quad\varphi_j=\tilde n_j((1+\lambda)\cdot)\text{ on }\partial B_{(1+\lambda)r},\\ \int_{B_{(1+\lambda)r}\setminus B_r}\abs{\nabla\varphi_j}^2&\lesssim \lambda \int_{\partial B_r}\left(\abs{\nabla n_j}^2 + \abs{\nabla\tilde n_j}^2\right) + \lambda^{-1} \int_{\partial B_r}\abs{n_j-\tilde n_j}^2\\ &\lesssim \delta_j^2 \left( \lambda + \lambda^{-1}(\gamma_j^2+c^2\delta_j^2)\right),\\ \sup_{B_{(1+\lambda)r}\setminus B_r}\dist^2(\varphi_j,\mathbb S^2)&\lesssim \lambda^{-1}\left(\int_{\partial B_r}\left(\abs{\nabla n_j}^2 + \abs{\nabla\tilde n_j}^2\right)\right)^{\frac 12}\left( \int_{\partial B_r}\abs{n_j-\tilde n_j}^2\right)^{\frac 12}\\ &\quad +\lambda^{-2}\int_{\partial B_r}\abs{n_j-\tilde n_j}^2\\ &\lesssim \delta_j^2 \left(\lambda^{-1}\gamma_j + \lambda^{-2}\gamma_j^2\right) \end{align*} Choosing $\lambda=\lambda_j=\gamma_j+c\delta_j\to 0$, we may thus define $\psi_j =\pi_{\mathbb S^2}(\varphi_j)\colon B_{(1+\lambda_j)r}\setminus B_r\to\mathbb S^2$ satisfying \begin{align*} &\psi_j=n_j\text{ on }\partial B_r,\quad\psi_j=\tilde n_j((1+\lambda_j)\cdot)\text{ on }\partial B_{(1+\lambda_j)r},\\ \text{and }&\delta_j^{-2}\int_{B_{(1+\lambda_j)r}\setminus B_r}\abs{\nabla\psi_j}^2\to 0. \end{align*} Then we set \begin{equation*} \hat n_j(x) =\begin{cases} \psi_j(x) &\quad\text{for }r\leq \abs{x} \leq(1+\lambda_j)r,\\ \tilde n_j((1+\lambda_j)x) &\quad\text{for }\abs{x}\geq (1+\lambda_j)r. 
\end{cases} \end{equation*} Note that $\hat n_j$ agrees with $n_j$ on $\partial B_r$ and satisfies \begin{equation*} \int_{\abs{x}\geq 2}\frac{\abs{\hat n_j -n_0}^2}{\abs{x}^2} \lesssim \int_{\abs{x}\geq r}\frac{\abs{\tilde n_j -n_0}^2}{\abs{x}^2}\lesssim \delta_j^2 \int_{\abs{x}\geq r}\frac{\abs{\tilde v_j}^2}{\abs{x}^2}\lesssim \delta_j^2 \int_{\abs{x}\geq r}\frac{\abs{v}^2}{\abs{x}^2}<\infty, \end{equation*} since $v=v_\star$ outside of a compact set and $\int_{\abs{x}\geq 1}\abs{v_\star}^2\abs{x}^{-2}<\infty$. Therefore the $\hat n_j$ are admissible competitors, and the minimality of $n_j$ gives \begin{align*} \int_{\abs{x}\geq r}\abs{\nabla v_j}^2 & =\delta_j^{-2}\int_{\abs{x}\geq r}\abs{\nabla n_j}^2 \leq \delta_j^{-2}\int_{\abs{x}\geq r}\abs{\nabla \hat n_j}^2 \\ &\leq (1+o(1))\delta_j^{-2}\int_{\abs{x}\geq r}\abs{\nabla\tilde n_j}^2 + o(1)\\ &\leq (1+o(1))\int_{\abs{x}\geq r}\abs{\nabla v}^2 +o(1). \end{align*} By weak lower semi-continuity of the Dirichlet energy with respect to $H^1_{loc}$ convergence we infer \begin{equation*} \int_{\abs{x}\geq r}\abs{\nabla v_\star}^2\leq \liminf \int_{\abs{x}\geq r}\abs{\nabla v_j}^2\leq \int_{\abs{x}\geq r}\abs{\nabla v}^2, \end{equation*} so that $v_\star$ is a $T_{n_0}\mathbb S^2$-valued energy minimizing map in $\Omega_r$, and moreover applying the above to $v=v_\star$ we deduce that \begin{equation*} \int_{\abs{x}\geq r}\abs{\nabla v_j-\nabla v_\star}^2\to 0. \end{equation*} In particular, since $\int_{\abs{x}\geq 1}\abs{\nabla v_j}^2=1$, \eqref{eq:propvj} implies that $\int_{\abs{x}\geq R_1}\abs{\nabla v_\star}^2>0$.
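The strong $H^1$ convergence in the last display can be spelled out: taking $v=v_\star$ in the comparison inequality and combining it with lower semi-continuity gives

```latex
\[
\int_{\abs{x}\geq r}\abs{\nabla v_\star}^2
  \leq \liminf_j \int_{\abs{x}\geq r}\abs{\nabla v_j}^2
  \leq \limsup_j \int_{\abs{x}\geq r}\abs{\nabla v_j}^2
  \leq \int_{\abs{x}\geq r}\abs{\nabla v_\star}^2,
\]
```

so the Dirichlet norms converge; since moreover $\nabla v_j\rightharpoonup \nabla v_\star$ weakly in $L^2(\lbrace \abs{x}\geq r\rbrace)$, expanding $\int\abs{\nabla v_j-\nabla v_\star}^2=\int\abs{\nabla v_j}^2-2\int \nabla v_j:\nabla v_\star+\int\abs{\nabla v_\star}^2$ shows that this quantity tends to $0$.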
Moreover, recalling that $r\in [1,2]$ and taking $j\to\infty$ in \eqref{eq:propvj} we obtain \begin{equation*} \frac{1}{R_1}\int_{\abs{x}\geq R_1}\abs{\nabla v_\star}^2\geq \frac{1}{R_1^\alpha}\int_{\abs{x}\geq 2}\abs{\nabla v_\star}^2, \end{equation*} hence, for $\hat v_\star(\hat x)=v_\star(2\hat x)$, recalling that $R_1=2 R_\star$ and $\alpha < 2,$ we have \begin{equation*} \frac{1}{R_\star}\int_{\abs{x}\geq R_\star}\abs{\nabla\hat v_\star}^2\geq \frac{2^{1-\alpha}}{R_\star^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla \hat v_\star}^2\geq \frac{1}{2}\frac{1}{R_\star^\alpha}\int_{\abs{x}\geq 1}\abs{\nabla \hat v_\star}^2. \end{equation*} Since $\hat v_\star$ is a $T_{n_0}\mathbb S^2$-valued energy minimizing map in ${\mathbb R}^3\setminus \overline B_1$ and $\int_{\abs{x}\geq 1}\abs{\nabla \hat v_\star}^2>0$, this contradicts \eqref{eq:harmonicimprovement}. \end{proof} We will plug the initial decay provided by Lemma~\ref{l:energyimprovement} into the equilibrium equation \eqref{eq:eulerlagrange} in order to deduce the expansion \eqref{eq:expansion} (implying in particular \textit{a posteriori} that Lemma~\ref{l:energyimprovement} is also valid for $\alpha=2$). The main tool to obtain the expansion will be decay estimates for the Poisson equation. These estimates are familiar but not easily found in the form we require here, and so we have provided a proof in Appendix~\ref{a:decay}. With these preliminary lemmas, we are now ready to present the proof of Theorem~\ref{t:expansion}. \begin{proof}[Proof of Theorem \ref{t:expansion}] Let $R_1$ and $\delta$ be as in Lemma \ref{l:energyimprovement}.
\noindent\textbf{Step 1.} Picking $R_0>1$ (depending on $n$) such that $\frac{1}{R_0}\int_{\abs{x}\geq R_0}\abs{\nabla n}^2\leq \delta^2$ we may apply Lemma~\ref{l:energyimprovement} iteratively to $x\mapsto n(R_1^kR_0 x)$ for $k\geq 0$ and obtain \begin{equation*} \frac{1}{R_1^{k} R_0}\int_{\abs{x}\geq R_1^{k}R_0}\abs{\nabla n}^2\leq \frac{\delta^2}{(R_1^k)^\alpha}, \end{equation*} and therefore \begin{equation*} \frac{1}{R}\int_{\abs{x}\geq R}\abs{\nabla n}^2\leq \frac{C(n,\alpha)}{R^\alpha}\quad\forall R\geq R_0(n),\alpha<2. \end{equation*} Thanks to \eqref{eq:smallregrescaled} this implies \begin{equation*} \abs{\nabla n}\leq \frac{C(n,\sigma)}{r^{2-\sigma}}\qquad\text{for } r\geq R_0(n),\sigma>0. \end{equation*} Integrating this along radial rays yields $\abs{n-n_0}\leq C(n,\sigma)/r^{1-\sigma}$. Moreover, since $-\Delta n =\abs{\nabla n}^2n$ we have (redefining $\sigma$ appropriately) \begin{equation*} \abs{\Delta n}\leq \frac{C(n,\sigma)}{r^{4-\sigma}}\qquad\text{for } r\geq R_0(n),\sigma>0. \end{equation*} \textbf{Step 2.} Applying Lemma~\ref{l:decaycorrecptwise} to $f_1 =\Delta n = \Delta (n - n_0)$ we obtain the existence of $u_1\colon {\mathbb R}^3\setminus B_{R_0}\to{\mathbb R}^3$ such that $\Delta u_1 =\Delta (n - n_0)$ and \begin{equation} \label{e.u1rem} \frac{\abs{u_1}}{r} + \abs{\nabla u_1}\leq \frac{C(n,\sigma)}{r^{3-\sigma}}\qquad\text{for } r\geq R_0(n). \end{equation} The map $n-n_0-u_1$ is harmonic in ${\mathbb R}^3\setminus B_{R_0}$. Writing down its spherical harmonics expansion and, if necessary, modifying $u_1$ to include the part of the expansion that decays faster than $1/r$, we obtain the existence of $v_0\in{\mathbb R}^3$ such that \begin{equation} \label{e.nexp0} n=n_0 + \frac{1}{r}v_0 +u_1. \end{equation} This implies \begin{align*} 1=|n|^2 =1+ \frac{2}{r}v_0\cdot n_0 +\mathcal O\left(\frac 1{r^{2-\sigma}}\right), \end{align*} so we must have \begin{equation*} v_0\cdot n_0 =0.
\end{equation*} \noindent\textbf{Step 3.} With an eye toward obtaining the next term in the far-field expansion, we plug in \eqref{e.nexp0} into the harmonic maps PDE \eqref{eq:eulerlagrange}, and isolate terms that are higher order than $\mathcal O(\frac{1}{r^5})$ on the right hand side. Specifically, we have \begin{align*} 0=\Delta n +|\nabla n|^2n &=\Delta u_1 + \frac{1}{r^4}|v_0|^2n_0 +\mathcal O\left(\frac 1{r^{5-\sigma}}\right) \\ &=\Delta\left( u_1 + \frac{1}{r^2}\frac{|v_0|^2}{2}n_0\right) +\mathcal O\left(\frac 1{r^{5-\sigma}}\right), \end{align*} that is, \begin{align*} \Delta \left(u_1+\frac{1}{r^2}\frac{\abs{v_0}^2}{2}n_0 \right)&= f_2, \end{align*} where $f_2$ has decay rate given by $\abs{f_2}\leq C(n,\sigma)/r^{5-\sigma}$ for $r\geq R_0(n).$ By Lemma \ref{l:decaycorrecptwise}, we obtain $u_2\colon {\mathbb R}^3\setminus B_{R_0}\to{\mathbb R}^3$ such that $\Delta u_2 =f_2$ and \begin{equation*} \frac{\abs{u_2}}{r} + \abs{\nabla u_2}\leq \frac{C(n,\sigma)}{r^{4-\sigma}}\qquad\text{for } r\geq R_0(n). \end{equation*} The map $u_1+r^{-2}\abs{v_0}^2n_0/2 -u_2$ is harmonic in ${\mathbb R}^3\setminus B_{R_0}$, hence including the higher decay part of its spherical harmonics expansion into $u_2$ we deduce the existence of $P_1\in \mathbb R[X]^3$, a vector of homogeneous harmonic polynomials of degree 1 (i.e. linear forms) such that \begin{equation*} u_1 =-\frac{1}{r^2}\frac{\abs{v_0}^2}{2}n_0 + \frac{1}{r^3} P_1(x) +u_2, \end{equation*} i.e. \begin{equation} \label{e.nexp2} n=\left(1-\frac{\abs{v_0}^2}{2r^2}\right) n_0 +\frac 1r v_0 +\frac{1}{r^3} P_1(x) +u_2. \end{equation} Note that the unit norm constraint on $n$ implies $n_0\cdot P_1(x)=0$ for all $x$. 
Indeed, taking the norm square of \eqref{e.nexp2}, we find \begin{align*} 1 = |n|^2 = 1 + \frac{2 \, n_0 \cdot P_1(x)}{r^3} + \mathcal O\left(\frac{1}{r^{3-\sigma}}\right), \end{align*} which implies $n_0 \cdot P_1(x) \equiv 0.$ Writing $P_1(x)/r^3=\sum {p_j}\partial_j(1/r)$, we must have $p_j\cdot n_0=0$ for $j=1,2,3$. \noindent\textbf{Step 4.} As before, we plug \eqref{e.nexp2} back into the equation \eqref{eq:eulerlagrange} and isolate terms that are $\mathcal O(\frac{1}{r^6} )$ on the right hand side. We find \begin{align*} 0 =\Delta n+|\nabla n|^2n &=\Delta u_2 + \frac{1}{r^5}|v_0|^2v_0 +\frac{4}{r^6}(v_0\cdot P_1(x))\, n_0 + \mathcal O\left(\frac{1}{r^{6-\sigma}}\right) \\ &=\Delta \left( u_2 + \frac{1}{6r^3}|v_0|^2v_0 +\frac 1{3r^4}(v_0\cdot P_1(x))\, n_0 \right) + \mathcal O\left(\frac{1}{r^{6-\sigma}}\right) . \end{align*} Applying Lemma~\ref{l:decaycorrecptwise} and arguing as in Steps~2 and 3, we deduce the existence of $P_2\in {\mathbb R}[X]^3$, a vector of homogeneous harmonic polynomials of degree 2 (i.e. harmonic quadratic forms), such that we have the expansion \begin{align*} n&=\left(1-\frac{\abs{v_0}^2}{2r^2} \right) n_0 +\frac 1r v_0 +\frac{1}{r^3} P_1(x) -\frac{\abs{v_0}^2}{6r^3}v_0 -\frac{1}{3r^4} (v_0\cdot P_1)\, n_0 +\frac{1}{r^5} P_2(x) +u_3,\\ &\frac{\abs{u_3}}{r}+\abs{\nabla u_3}\leq \frac{C(n,\sigma)}{r^{5-\sigma}}\qquad\text{for }r\geq R_0. \end{align*} With one more iteration we realize that the decay $u_3=\mathcal O(1/r^{4-\sigma})$ improves to $u_3=\mathcal O(1/r^{4})$. Writing $P_1(x)/r^3=\sum {p_j}\partial_j(1/r)$ and $P_2(x)/r^5=\sum c_{k\ell}\partial_k\partial_\ell(1/r)$, the proof of Theorem~\ref{t:expansion} is complete. \end{proof} \begin{proof}[Alternative proof of Step~1] We present here another proof of Step~1, inspired by \cite[Proposition~3]{schoen83}.
The map $w=\partial_k n$ solves, for $r=|x|\geq R_0$, the system \begin{align}\label{eq:w} -\Delta w =2\nabla n:\nabla w\, n + |\nabla n|^2 w, \end{align} where for matrices $A,B$ we use the notation $A : B := \mathrm{tr}(A^TB)$ for their Frobenius inner product. Testing \eqref{eq:w} with $\eta^2w$ for some smooth cut-off function $\eta$ we obtain \begin{align*} \int\eta^2|\nabla w|^2 &\lesssim \int |\eta|\,|\nabla \eta| \, |w|\,|\nabla w| + \int \eta^2 |w| |\nabla n| |\nabla w| +\int \eta^2 |\nabla n|^2 |w|^2 \\ & \leq \frac 12 \int \eta^2 |\nabla w|^2 +C \int \left(|\nabla\eta|^2|w|^2 + \eta^2|\nabla n|^2 |w|^2\right). \end{align*} Absorbing the first term of the last line in the left-hand side, choosing $\mathbf 1_{R\leq |x|\leq 2R}\leq\eta \leq \mathbf 1_{R/2\leq |x|\leq 3R}$ with $|\nabla\eta|\lesssim 1/R$, and using $|w|^2\leq |\nabla n|^2\lesssim 1/r^{3}$ thanks to \eqref{eq:smallregrescaled}, we deduce \begin{align*} \int_{R\leq |x|\leq 2R}|\nabla w|^2 &\lesssim \frac{1}{R^2}, \end{align*} hence \begin{align}\label{e.decay1} \int_{|x|\geq R}|\nabla w|^2 \leq \sum_{k\geq 0}\int_{2^k R\leq |x|\leq 2^{k+1}R}|\nabla w|^2 \lesssim \sum_{k\geq 0}\frac{1}{2^{2k}R^2}\lesssim \frac{1}{R^2}. \end{align} Therefore, plugging \eqref{eq:smallregrescaled} and \eqref{e.decay1} into \eqref{eq:w}, we find that the right-hand side of \eqref{eq:w} has $\mathcal O(R^{-4})$ decay in an appropriate $L^2$ sense. To be precise, \begin{align*} -\Delta w =f,\qquad \left(\frac{1}{R^3}\int_{ |x|\geq R}|x|^2|f|^2\right)^{\frac 12}\lesssim \frac{1}{R^3}.
\end{align*} Applying Lemma~\ref{l:decay} with the choice $\gamma = 3 - \sigma/2$ for any small $\sigma > 0$, we deduce the existence of a map $u$ such that $-\Delta u=f$ and \begin{align*} \left(\frac{1}{R^3}\int_{|x|\geq R}\frac{|u|^2}{|x|^2}\right)^{\frac 12}\lesssim \frac{1}{R^{3-\sigma/2}}, \end{align*} which implies \begin{align*} \int_{|x|\geq R}|u|^2 \leq \sum_{k\geq 0} 2^{2k+2}R^2\int_{2^k R\leq |x|\leq 2^{k+1}R} \frac{|u|^2}{|x|^2} \lesssim \sum_{k\geq 0}\frac{1}{2^{(1-\sigma)k}}\frac{1}{R^{1-\sigma}}\lesssim \frac{1}{R^{1-\sigma}}, \end{align*} for any $\sigma>0$. Since $w-u$ is harmonic and square integrable at $\infty$, we have $w-u=\mathcal O(1/r^2)$ as $r\to\infty$, and we deduce from this and the above that \begin{align*} \int_{|x|\geq R}|w|^2 \lesssim\frac{1}{R^{1-\sigma}}. \end{align*} Recalling that $w=\partial_k n$, this implies, together with \eqref{eq:smallregrescaled}, that $|\nabla n|^2\lesssim 1/r^{4-\sigma}$, and the iteration starting in Step~2 of the proof of Theorem~\ref{t:expansion} can now be applied. \end{proof} \section{The leading-order term} \label{s:torque} In this section we prove Theorem~\ref{t:torque} and Corollary~\ref{c:sym}. \begin{proof}[Proof of Theorem~\ref{t:torque}] Without loss of generality assume $G\subset B_1$ and fix a $C^1$ function $\chi\colon \mathbb R^3\to [0,1]$ such that $\chi\equiv 0$ on $B_1$ and $\int_{|x|\geq 1} |x|^{-2} (\chi-1)^2\, dx\lesssim \int_{|x|\geq 1} |\nabla \chi|^2\, dx <\infty$. In what follows, for any $m_0 \in \mathbb{S}^2$, we denote by \begin{align*} H(\partial G,m_0) = \left\{ m\in H^1_{loc}({\mathbb R}^3\setminus G;\mathbb S^2)\colon \int_{\mathbb{R}^3\setminus G} \frac{|m-m_0|^2}{1+ r^2} + \int_{{\mathbb R}^3\setminus G} |\nabla m|^2 + F_s(m_{\lfloor \partial G})<\infty \right\}, \end{align*} the class of admissible competitors in the minimization problem \eqref{eq:hatEn0} defining $\hat E(m_0)$. \medskip {\bf Step 1:} The map $\hat E$ is Lipschitz.
\medskip Let $n_1,n_2$ be minimizers with far-field alignments $n_1^\infty,n_2^\infty$. Choose the frame such that $n_1^\infty=e_3$ and $n_2^\infty =R(\theta) e_3$, where $R(\theta)$ denotes the rotation of axis $e_1$ and angle $\theta$ satisfying $|n_1^\infty-n_2^\infty|\leq \theta \leq 2 |n_1^\infty-n_2^\infty|$. Consider now the map $\tilde n_1\in H(\partial G, n_1^\infty)$ given by \begin{align*} \tilde n_1(x) = R(-\chi(x)\theta)\, n_2(x). \end{align*} We have \begin{align*} |\nabla\tilde n_1|^2 &\leq \theta^2 |\nabla\chi|^2 + |\nabla n_2|^2 + 2 \theta \, |\nabla \chi|\,|\nabla n_2| \\ &\leq (1+\lambda^{-1})\theta^2 |\nabla\chi|^2 + (1+\lambda)|\nabla n_2|^2 , \end{align*} for any $\lambda>0$, hence \begin{align*} \hat E(n_1^\infty)\leq C(1+\lambda^{-1})|n_1^\infty -n_2^\infty|^2 + (1+\lambda) \hat E(n_2^\infty) . \end{align*} Applying this to $\lambda=1$ and a fixed $n_2^\infty$ we deduce in particular that $\hat E$ is bounded on $\mathbb S^2$. Moreover, choosing $\lambda=|n_1^\infty-n_2^\infty|$ we obtain \begin{align*} \hat E(n_1^\infty)-\hat E(n_2^\infty)\leq |n_1^\infty -n_2^\infty| \left( \hat E(n_2^\infty) + C + C |n_1^\infty -n_2^\infty|\right). \end{align*} Reversing the roles of $n_1,n_2$ and recalling that $\hat E$ is bounded on $\mathbb{S}^2$, we conclude that $\hat E$ is Lipschitz. \medskip {\bf Step 2:} At every differentiability point $n_0\in\mathbb S^2$ of $\hat E$ we have $\nabla \hat E(n_0)=-8\pi v_0$, where $v_0=\lim_{r\to\infty} r(n-n_0)\in T_{n_0}\mathbb S^2$ for any minimizer $n$ such that $E(n)=\hat E(n_0)$. \medskip Let $n_0\in\mathbb S^2$ be a differentiability point of $\hat E$. For any axis $e\in\mathbb S^2$ let $R(\theta)$ be the rotation of axis $e$ and angle $\theta$, and set $n_\theta^\infty=R(\theta)n_0$, so that \begin{align*} \hat E(n_\theta^\infty)-\hat E(n_0)=\theta\,\nabla \hat E(n_0)\cdot (R'(0)n_0) + o(\theta)\qquad\text{as }\theta\to 0.
\end{align*} Define $\tilde n\in H(\partial G, n_\theta^\infty)$ by $\tilde n=R(\chi\theta)n$, where $n$ is a minimizer such that $E(n)=\hat E(n_0)$. Using the equation satisfied by $n$ and the fact that $\tilde n=n$ on $\partial G$, for all $R>1$ we have \begin{align*} &E(\tilde n)-E(n)= \int_{B_R\setminus G} |\nabla \tilde n|^2-\int_{B_R\setminus G}|\nabla n|^2\\ &= \int_{B_R\setminus G} \left(2\nabla n\cdot \nabla(\tilde n-n) +|\nabla (\tilde n-n)|^2\right)\\ &=2\int_{\partial B_R}\partial_r n\cdot (\tilde n-n) +\int_{B_R\setminus G} \left(-2\Delta n \cdot (\tilde n-n) +|\nabla (\tilde n-n)|^2\right)\\ &=2\int_{\partial B_R}\partial_r n\cdot (\tilde n-n) + \int_{B_R\setminus G} 2|\nabla n|^2 n \cdot (\tilde n-n) + \int_{B_R\setminus G} |\nabla (\tilde n-n)|^2\\ &=2\int_{\partial B_R}\partial_r n\cdot (\tilde n-n) - \int_{B_R\setminus G} |\nabla n|^2 |\tilde n-n|^2 + \int_{B_R\setminus G} |\nabla (\tilde n-n)|^2. \end{align*} Using the asymptotic expansion of the minimizing map $n$ we have \begin{align*} 2\int_{\partial B_R}\partial_r n\cdot (\tilde n-n)=-8\pi v_0\cdot (n_\theta^\infty-n_0) + O(1/R)\quad\text{as } R\to\infty, \end{align*} where $v_0=\lim_{r\to\infty} r(n-n_0)\in T_{n_0}\mathbb S^2$. We deduce that \begin{align} \label{e.locsemconc} &\hat E(n_\theta^\infty)-\hat E(n_0) \nonumber\\ &\leq E(\tilde n)- E(n) \nonumber\\ &=-8\pi v_0 \cdot (n_\theta^\infty-n_0) - \int_{{\mathbb R}^3\setminus G} |\nabla n|^2 |\tilde n-n|^2 + \int_{{\mathbb R}^3\setminus G} |\nabla (\tilde n-n)|^2 \nonumber\\ & \leq -8\pi v_0 \cdot (n_\theta^\infty-n_0) + C\left(1+\int_{{\mathbb R}^3\setminus G}|\nabla n|^2 \right) \theta^2. \end{align} The last estimate follows from the explicit form of $\tilde n=R(\chi\theta)n$, and the constant $C$ depends only on the fixed cut-off function $\chi$.
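The constant $8\pi$ in \eqref{e.locsemconc} can be traced back to the expansion \eqref{eq:expansion}: on $\partial B_R$ one has $\partial_r n=-v_0/R^2+\mathcal O(R^{-3})$, while $\tilde n-n=(R(\theta)-\mathrm{Id})\,n=n_\theta^\infty-n_0+\mathcal O(1/R)$ in the region where $\chi\equiv 1$, so that

```latex
\[
2\int_{\partial B_R}\partial_r n\cdot(\tilde n-n)
  = 2\,\bigl(4\pi R^2\bigr)\Bigl(-\frac{v_0}{R^2}\Bigr)\cdot\bigl(n_\theta^\infty-n_0\bigr)
    + \mathcal O(1/R)
  = -8\pi\, v_0\cdot\bigl(n_\theta^\infty-n_0\bigr) + \mathcal O(1/R).
\]
```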
In particular we have \begin{align*} \hat E(n_\theta^\infty)-\hat E(n_0) \leq -8\pi\theta\, v_0\cdot (R'(0)n_0) + O(\theta^2), \end{align*} which implies, after dividing by $\theta>0$ and letting $\theta\to 0$, \begin{align*} (\nabla \hat E(n_0)+8\pi v_0)\cdot (R'(0)n_0)\leq 0. \end{align*} Since $R'(0)n_0$ can be any tangent vector in $T_{n_0}\mathbb S^2$ we infer that $\nabla \hat E(n_0)+8\pi v_0=0$. \\ \textbf{Step 3.} It remains to prove that $\hat E$ is semiconcave. This follows directly from the inequality \eqref{e.locsemconc} obtained in Step 2, as any $m_0\in\mathbb S^2$ can be written as $m_0=n_\theta^\infty$ for some $0\leq\theta\leq 2|m_0-n_0|$. This completes the proof of Theorem~\ref{t:torque}. \end{proof} \begin{proof}[Proof of Corollary \ref{c:sym}] Consider first the axisymmetric case $\mathrm{Sym}(G)\supset SO(3)^{\mathbf u}$. Then we have $\hat E(Rn_0)=\hat E(n_0)$ for any rotation $R$ of axis $\mathbf u$ and $n_0\in\mathbb S^2$. At a differentiability point $n_0$, differentiating this identity with respect to $R$ implies $\nabla \hat E(n_0)\cdot An_0=0$ for any antisymmetric matrix $A$ with $A\mathbf u=0$, i.e. $\nabla \hat E(n_0)\cdot (\mathbf u\times n_0)=0$. Recalling from Theorem~\ref{t:torque} that $\nabla \hat E(n_0)=-8\pi v_0$, we deduce $v_0\cdot (\mathbf u\times n_0)=0$. Moreover, if $\mathbf u$ is a differentiability point, then differentiating that same identity with respect to $n_0$ at $n_0=\mathbf u$ gives $R^{-1}\nabla \hat E(\mathbf u)=\nabla\hat E(\mathbf u)$ for any rotation $R$ of axis $\mathbf u$, hence $\nabla \hat E(\mathbf u)=0$ since $\nabla \hat E(\mathbf u)\in T_{\mathbf u}\mathbb S^2=\mathbf u^\perp$. So $v_0(\mathbf u)=0$. In the spherically symmetric case $\mathrm{Sym}(G)\supset SO(3)$ we have $\hat E(Rn_0)=\hat E(n_0)$ for all $R\in SO(3)$, hence $\hat E$ is constant, and $\nabla\hat E=0$ on $\mathbb S^2$. So $v_0(n_0)=0$ for all $n_0\in\mathbb S^2$. \end{proof} \section*{Acknowledgements} S.A. and L.B. were supported via an NSERC (Canada) Discovery Grant. X.L.
received support from ANR project ANR-18-CE40-0023. The work of R.V. was partially supported by a grant from the Simons Foundation (award \# 733694) and an AMS-Simons travel award.
\section{Introduction} \label{sec_intro} In modern industrial applications, data acquisition systems make it possible to collect massive amounts of data at high frequency. Several examples may be found in the current Industry 4.0 framework, which is reshaping the variety of signals and measurements that can be gathered during manufacturing processes. The focus in many of these applications is statistical process monitoring (SPM), whose main aim is to quickly detect unusual conditions in a process when special causes of variation act on it, i.e., when the process is out-of-control (OC). Conversely, when only common causes of variation are present, the process is said to be in-control (IC). In this context, the experimental measurements of the quality characteristic of interest are often characterized by complex and high dimensional formats that can be well represented as functional data or profiles \citep{ramsay2005functional,kokoszka2017introduction}. The simplest approach for monitoring one or multiple functional variables is based on the extraction of scalar features from each profile, e.g., the mean, followed by the application of classical SPM techniques for multivariate data \citep{montgomery2012statistical}. However, feature extraction is known to be problem-specific and arbitrary, and it risks compressing useful information. Thus, there is a growing interest in \textit{profile monitoring} \citep{noorossana2011statistical}, whose aim is to monitor a process when the quality characteristic is best characterized by one or multiple profiles. Some recent examples of profile monitoring applications can be found in \cite{menafoglio2018profile,capezza2020control,capezza2021functional_qrei,capezza2021functional_clustering,centofanti2020functional}. The main tools for SPM are control charts, which are implemented in two phases.
In Phase I, historical process data are used to set control chart limits to be used in Phase II, i.e., the actual monitoring phase, where observations falling outside the control limits are signaled as OC. In classical SPM applications the historical Phase I data are assumed to come from an IC process. However, this assumption is not always valid. As an example, consider the motivating real-world application, detailed in Section \ref{sec_real}, which concerns the SPM of a resistance spot welding (RSW) process in the assembly of automobiles. RSW is the most common technique employed in joining metal sheets during body-in-white assembly of automobiles \citep{zhou2014study}, mainly because of its adaptability for mass production \citep{martin}. Among on-line measurements of RSW process parameters, the so-called dynamic resistance curve (DRC) is recognized as the full technological signature of the metallurgical development of a spot weld \citep{dickinson,capezza2021functional_clustering} and, thus, can be used to characterize the quality of a finished sub-assembly. Figure \ref{fig_drc} shows 100 DRCs corresponding to 10 different spot welds, measured in $m\Omega$, that are acquired during the RSW process from the real-case study presented in Section \ref{sec_real}. Several outliers are clearly visible and should be taken into account to set up an effective SPM strategy. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{fig/fig1.pdf} \caption{Sample of 100 DRCs, measured in $m\Omega$, that are acquired during the RSW process from the real-case study in Section \ref{sec_real}. The different panels refer to the corresponding different spot welds, denoted with names from V1 to V10.} \label{fig_drc} \end{center} \end{figure} Indeed, control charts are very sensitive to the presence of outlying observations in Phase I, which can lead to inflated control limits and reduced power to detect process changes in Phase II.
To deal with outliers in the Phase I sample, SPM methods use two common alternatives, namely the diagnostic and the robust approaches \citep{kruger2012statistical,hubert2015multivariate}. The diagnostic approach is based on standard estimates after the removal of sample units identified as outliers, which translates into SPM methods where iterative re-estimation procedures are considered in Phase I. This approach can often be safely applied to eliminate the effect of a small number of very extreme observations. However, it will fail to detect more moderate outliers that are not always as easy to label correctly. On the contrary, the robust approach accepts all the data points and tries to find a robust estimator which reduces the impact of outliers on the final results \citep{maronna2019robust}. Several robust approaches for the SPM of a multivariate scalar quality characteristic have been proposed in the literature. \cite{alfaro2009comparison} show a comparison of robust alternatives to the classical Hotelling's control chart. It includes two alternative Hotelling's $T^2$-type control charts for individual observations, proposed by \cite{vargas2003robust} and \cite{jensen2007high}, which are based on the minimum volume ellipsoid and the minimum covariance determinant estimators \citep{rousseeuw1984least}, respectively. Moreover, the comparison includes the control chart based on the reweighted minimum covariance determinant (RMCD) estimators, proposed by \cite{chenouri2009multivariate}. More recently, \cite{cabana2021robust} propose an alternative robust Hotelling's $T^2$ procedure using the robust shrinkage reweighted estimator. Although \cite{kordestani2020monitoring} and \cite{moheghi2020phase} propose robust estimators for monitoring simple linear profiles, to the best of the authors' knowledge, a robust approach able to successfully capture the functional nature of a multivariate functional quality characteristic has not been devised in the SPM literature so far.
Beyond the SPM literature, several approaches have been proposed to deal with outlying functional observations. Several methods extend the classical linear combination type estimators (i.e., L-estimators) \citep{maronna2019robust} to the functional setting to robustly estimate the center of a functional distribution through trimming \citep{fraiman2001trimmed,cuesta2006impartial} and functional data depths \citep{cuevas2009depth,lopez2011half}. \cite{sinova2018m} introduce the notion of maximum likelihood type estimators (i.e., M-estimators) in the functional data setting. More recently, \cite{centofanti2021rofanova} propose a robust functional ANOVA method that reduces the weights of outlying functional data on the results of the analysis. Regarding functional principal component analysis (FPCA), robust approaches are classified by \cite{boente2021robust} in three groups, depending on the specific property of principal components on which they focus. Methods in the first group perform the eigenanalysis of a robust estimator of the scatter operator, such as the spherical principal components method of \cite{locantore1999robust} and the indirect approach of \cite{sawant2012functional}. The latter performs a robust PCA method, e.g., ROBPCA \citep{hubert2005robpca}, on the matrix of the basis coefficients corresponding to a basis expansion representation of the functional data. The second group includes projection-pursuit approaches \citep{hyndman2007robust}, which sequentially search for the directions that maximize a robust estimator of the spread of the data projections, whereas the third group is composed of methods that estimate the principal component spaces by minimizing a robust reconstruction error measure \citep{lee2013m}.
Finally, it is worth mentioning diagnostic approaches for functional outlier detection, which have been proposed for both univariate \citep{hyndman2010rainbow,arribas2014shape,febrero2008outlier} and multivariate functional data \citep{hubert2015multivariate,dai2018multivariate,aleman2022depthgram}. In the presence of many functional variables, the lack of robust approaches that deal with outliers is exacerbated by the curse of dimensionality. Traditional multivariate robust estimators assume a casewise contamination model for the data, which consists of a mixture of two distributions, where the majority of the cases are free of contamination and the minority mixture component describes an unspecified outlier-generating distribution. \cite{alqallaf2009propagation} show that these traditional estimators are affected by the problem of propagation of outliers. In situations where only a small number of cases are contaminated, the traditional robust approaches work well. However, under an independent contamination model such as cellwise outliers (i.e., contamination in each variable is independent of the other variables), when the dimensionality of the data is high, the fraction of perfectly observed cases can be rather small and the traditional robust estimators may fail. Moreover, \cite{agostinelli2015robust} point out that both types of data contamination, casewise and cellwise, may occur together. This problem has been addressed in the multivariate scalar setting by \cite{agostinelli2015robust}, who propose a two-step method. In the first step, a univariate filter is used to eliminate large cellwise outliers, i.e., to detect them and replace them with missing values; then, in the second step, a robust estimator specifically designed to deal with missing data is applied to the incomplete data.
\cite{leung2016robust} notice that the univariate filter does not handle moderate-size cellwise outliers well; therefore, they introduce for the first step a consistent bivariate filter to be used in combination with the univariate one. \cite{rousseeuw2018detecting} propose a method for detecting deviating data cells that takes the correlations between the variables into account, whereas \cite{tarr2016robust} devise a method for robust estimation of precision matrices under cellwise contamination. Other methods that consider cellwise outliers have been developed for regression and classification \citep{filzmoser2020cellwise,aerts2017cellwise}. To deal with multivariate functional outliers, in this paper we propose a new framework, referred to as robust multivariate functional control chart (RoMFCC), for SPM of multivariate functional data that is robust to both functional casewise and cellwise outliers. The latter corresponds to a contamination model where outliers arise in each variable independently of the other functional variables. Specifically, to deal with functional cellwise outliers, the proposed framework considers an extension of the filtering approach proposed by \cite{agostinelli2015robust} to univariate functional data and an imputation method inspired by the robust imputation technique of \cite{branden2009robust}. Moreover, it also considers a robust multivariate functional principal component analysis (RoMFPCA) based on the ROBPCA method \citep{hubert2005robpca}, and a profile monitoring strategy built on the Hotelling's $T^2$ and the squared prediction error ($SPE$) control charts \citep{noorossana2011statistical,grasso2016using,centofanti2020functional,capezza2020control,capezza2021functional_qrei,capezza2022funchartspaper}.
A Monte Carlo simulation study is performed to quantify the probability of signal (i.e., of detecting an OC observation) of the RoMFCC in identifying mean shifts in the functional variables in the presence of both casewise and cellwise outliers. Moreover, the proposed RoMFCC is compared with other control charts already available in the literature. Finally, the practical applicability of the proposed control chart is illustrated on the motivating real-case study in the automotive manufacturing industry. In particular, the RoMFCC is shown to adequately identify a drift in the manufacturing process due to electrode wear. The article is structured as follows. Section \ref{sec_method} introduces the proposed RoMFCC framework. In Section \ref{sec_sim}, a simulation study is presented where RoMFCC is compared to other popular competing methods. The real-case study in the automotive industry is presented in Section \ref{sec_real}. Section \ref{sec_conclusions} concludes the article. Supplementary materials for this article are available online. All computations and plots have been obtained using the programming language R \citep{r2021}. \section{The Robust Multivariate Functional Control Chart Framework} \label{sec_method} The proposed RoMFCC is a new general framework for SPM of multivariate functional data that is able to deal with both functional casewise and cellwise outliers. It relies on the following four main elements. \begin{enumerate}[label=(\Roman*)] \item \textit{Functional univariate filter}, which is used to identify functional cellwise outliers to be replaced by missing components. \item \textit{Robust functional data imputation}, where a robust imputation method is applied to the incomplete data to replace missing values. \item \textit{Casewise robust dimensionality reduction}, which reduces the infinite dimensionality of the multivariate functional data by being robust towards casewise outliers.
\item \textit{Monitoring strategy}, to appropriately monitor multivariate functional data. \end{enumerate} In what follows, we describe a specific implementation of the RoMFCC framework where (I) an extension of the filtering approach proposed by \cite{agostinelli2015robust}, referred to as functional univariate filter (FUF), is considered; (II) a robust functional data imputation method, referred to as RoFDI and based on the robust imputation technique of \cite{branden2009robust}, is used; (III) the RoMFPCA is considered as the casewise robust dimensionality reduction method. Finally, (IV) the multivariate functional data are monitored through the profile monitoring approach based on the simultaneous application of the Hotelling's $T^{2}$ and the squared prediction error ($SPE$) control charts. For ease of presentation, the RoMFPCA is presented in Section \ref{sec_RoMFPCA}. Then, Section \ref{sec_univfilter}, Section \ref{sec_dataimpu}, and Section \ref{sec_monstr} describe the FUF, the RoFDI method, and the monitoring strategy, respectively. Section \ref{sec_propmeth} details Phase I and Phase II of the proposed implementation of the RoMFCC framework, where the elements (I)-(IV) are put together. \subsection{Robust Multivariate Functional Principal Component Analysis} \label{sec_RoMFPCA} Let $\bm{X}=\left(X_1,\dots, X_p\right)^{T}$ be a random vector taking values in the Hilbert space $\mathbb{H}$ of $p$-dimensional vectors of functions whose components belong to $L^2(\mathcal{T})$, the Hilbert space of square-integrable functions defined on the compact set $\mathcal{T}\subset\mathbb{R}$. Accordingly, the inner product of two functions $f$ and $g$ in $L^{2}\left(\mathcal{T}\right)$ is $\langle f,g\rangle=\int_{\mathcal{T}}f\left(t\right)g\left(t\right)dt$, and the norm is $\lVert \cdot \rVert=\sqrt{\langle \cdot,\cdot\rangle}$.
The inner product of two function vectors $\mathbf{f}=\left(f_1,\dots,f_p\right)^{T}$ and $\mathbf{g}=\left(g_1,\dots,g_p\right)^{T}$ in $\mathbb{H}$ is $\langle \mathbf{f},\mathbf{g} \rangle _{\mathbb{H}}=\sum_{j=1}^{p}\langle f_j,g_j\rangle$ and the norm is $\lVert \cdot \rVert_{\mathbb{H}}=\sqrt{\langle \cdot,\cdot\rangle_{\mathbb{H}}}$. We assume that $\bm{X}$ has mean $\bm{\mu}=\left(\mu_1,\dots,\mu_p\right)^T$, $\mu_i(t)=\E(X_i(t))$, $t\in\mathcal{T}$, and covariance $\bm{G}=\lbrace G_{ij}\rbrace_{1\leq i,j \leq p}$, $G_{ij}(s,t)=\Cov(X_i(s),X_j(t))$, $s,t\in \mathcal{T}$. In what follows, to account for differences in degrees of variability and units of measurement among $X_1,\dots, X_p$, the transformation approach of \cite{chiou2014multivariate} is considered. Specifically, consider the vector of standardized variables $\bm{Z}=\left(Z_1,\dots,Z_p\right)^T$, $Z_i(t)=v_i(t)^{-1/2}(X_i(t)-\mu_i(t))$, with $v_i(t)=G_{ii}(t,t)$, $t\in\mathcal{T}$. Then, it follows from the multivariate Karhunen-Lo\`{e}ve theorem \citep{happ2018multivariate} that \begin{equation*} \bm{Z}(t)=\sum_{l=1}^{\infty} \xi_l\bm{\psi}_l(t),\quad t\in\mathcal{T}, \end{equation*} where $\xi_l=\langle \bm{\psi}_l, \bm{Z}\rangle_{\mathbb{H}} $ are random variables, called \textit{principal component scores} or simply \textit{scores}, such that $\E\left( \xi_l\right)=0$ and $\E\left(\xi_l \xi_m\right)=\lambda_{l}\delta_{lm}$, with $\delta_{lm}$ the Kronecker delta. The elements of the orthonormal set $\lbrace \bm{\psi}_l\rbrace $, $\bm{\psi}_l=\left(\psi_{l1},\dots,\psi_{lp}\right)^T$, with $\langle \bm{\psi}_l,\bm{\psi}_m\rangle_{\mathbb{H}}=\delta_{lm}$, are referred to as \textit{principal components}, and are the eigenfunctions of the covariance $\bm{C}$ of $\bm{Z}$ corresponding to the eigenvalues $\lambda_1\geq\lambda_2\geq \dots\geq 0$.
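As a concrete illustration of this standardization, the sketch below (in Python) standardizes $n$ curves sampled on a common grid; the pointwise median and normalized MAD are used here as simple robust stand-ins, not the specific estimators adopted by the proposed framework.

```python
import numpy as np

def robust_standardize(X):
    """Pointwise robust standardization of n curves sampled on a common grid.

    X: (n, T) array; row i holds a curve X_i evaluated on the grid.
    The pointwise median and the normalized MAD are simple robust stand-ins
    for mu_i(t) and v_i(t)**0.5: Z_i(t) = (X_i(t) - mu(t)) / v(t)**0.5.
    """
    mu_hat = np.median(X, axis=0)                 # robust location, per grid point
    mad = np.median(np.abs(X - mu_hat), axis=0)   # raw MAD, per grid point
    scale = 1.4826 * mad                          # consistency-corrected scale
    return (X - mu_hat) / scale

rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=2.0, size=(500, 20))
X[:5] += 50.0                                     # a few casewise outliers
Z = robust_standardize(X)
print(np.abs(np.median(Z, axis=0)).max() < 1e-10)   # True: curves are centred
```

Because the location and scale are robust, the five shifted curves do not distort the standardization of the remaining ones.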
Following the approach of \cite{ramsay2005functional}, the eigenfunctions and eigenvalues of the covariance $\bm{C}$ are estimated through a basis function expansion approach. Specifically, we assume that the functions $Z_j$ and a generic eigenfunction $\bm \psi$ of $\bm{C}$ with components $\psi_{j}$, for $j=1,\dots,p$, can be represented as \begin{equation} \label{eq_appcov} Z_j(t)\approx \sum_{k=1}^{K} c_{jk}\phi_{jk}(t),\quad \psi_{j}(t)\approx \sum_{k=1}^{K} b_{jk}\phi_{jk}(t), \quad t\in\mathcal{T}, \end{equation} where $\bm{\phi}_j=\left(\phi_{j1},\dots,\phi_{jK}\right)^T$, $\bm{c}_j=\left(c_{j1},\dots,c_{jK}\right)^T$ and $\bm{b}_j=\left(b_{j1},\dots,b_{jK}\right)^T$ are the basis functions and the coefficient vectors for the expansions of $Z_j$ and $\psi_j$, respectively. With these assumptions, standard multivariate functional principal component analysis \citep{ramsay2005functional,chiou2014multivariate} estimates the eigenfunctions and eigenvalues of the covariance $\bm{C}$ by performing standard multivariate principal component analysis on the random vector $\bm{W}^{1/2}\bm{c}$, where $\bm{c}=\left(\bm{c}_1^T,\dots,\bm{c}_p^T\right)^T$ and $\bm{W}$ is a block-diagonal matrix with diagonal blocks $\bm{W}_j$, $j=1,\dots,p$, whose entries are $w_{k_1 k_2} = \langle\phi_{jk_1},\phi_{jk_2}\rangle$, $k_1,k_2=1,\dots,K$. Then, the eigenvalues of $\bm{C}$ are estimated by those of the covariance matrix of $\bm{W}^{1/2}\bm{c}$, whereas the components $\psi_j$ of the generic eigenfunction, with corresponding eigenvalue $\lambda$, are estimated through Equation \eqref{eq_appcov} with $\bm{b}_j=\bm{W}_j^{-1/2}\bm{u}_j$, where $\bm{u}=\left(\bm{u}_1^T,\dots,\bm{u}_p^T\right)^T$ is the eigenvector of the covariance matrix of $\bm{W}^{1/2}\bm{c}$ corresponding to $\lambda$.
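The estimation procedure just described can be sketched numerically with toy stand-ins (the Gram matrix $\bm W$ and the coefficient sample below are random): principal component analysis is performed on $\bm W^{1/2}\bm c$, eigenfunction coefficients are recovered as $\bm b=\bm W^{-1/2}\bm u$, and the scores follow from the inner product in coefficient form, $\xi_{il}=\bm b_l^T\bm W\bm c_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, K, n = 2, 4, 300
A = rng.normal(size=(p * K, p * K))
W = A @ A.T + p * K * np.eye(p * K)          # block Gram matrix (SPD stand-in)
c = rng.normal(size=(n, p * K))              # stacked coefficient vectors c_i

# Symmetric square root of W via its eigendecomposition
w_val, w_vec = np.linalg.eigh(W)
W_half = w_vec @ np.diag(np.sqrt(w_val)) @ w_vec.T
W_half_inv = w_vec @ np.diag(1.0 / np.sqrt(w_val)) @ w_vec.T

S = np.cov((c @ W_half).T)                   # covariance of W^{1/2} c
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]               # eigenvalues in decreasing order
B = W_half_inv @ U                           # eigenfunction coefficient vectors b_l

# Retain L components explaining at least 90% of the total variability
L = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.90) + 1)
scores = (c - c.mean(axis=0)) @ W @ B[:, :L]  # xi_il = <psi_l, Z_i>_H
```

The columns of `B` are orthonormal in the $\bm W$-inner product ($\bm B^T\bm W\bm B=\bm I$), mirroring $\langle\bm\psi_l,\bm\psi_m\rangle_{\mathbb{H}}=\delta_{lm}$, and the sample variances of the score columns equal the corresponding eigenvalues.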
However, it is well known that standard multivariate principal component analysis is not robust to outliers \citep{maronna2019robust}, a weakness that carries over to functional principal component analysis and may lead to misleading results. Extending the approach of \cite{sawant2012functional} to multivariate functional data, the proposed RoMFPCA applies a robust principal component analysis alternative to the random vector $\bm{W}^{1/2}\bm{c}$. Specifically, we consider the ROBPCA approach \citep{hubert2005robpca}, a computationally efficient method explicitly conceived to produce high-breakdown estimates in high-dimensional data settings, which almost always arise in the functional context, and to handle a large percentage of contamination. Thus, given $n$ independent realizations $\bm{X}_i$ of $\bm{X}$, dimensionality reduction is achieved by approximating $\bm{X}_i$ through $\hat{\bm{X}}_i$, for $i=1,\dots,n$, as \begin{equation} \label{eq_appx} \hat{\bm{X}}_i(t)= \hat{\bm{\mu}}(t)+\hat{\bm{D}}(t)\sum_{l=1}^{L} \hat{\xi}_{il}\hat{\bm{\psi}}_l(t), \quad t\in\mathcal{T}, \end{equation} where $\hat{\bm{D}}$ is a diagonal matrix whose diagonal entries are robust estimates $\hat{v}_j^{1/2}$ of $v_j^{1/2}$, $\hat{\bm{\mu}}=\left(\hat{\mu}_1,\dots,\hat{\mu}_p\right)^T$ is a robust estimate of $\bm{\mu}$, $\hat{\bm{\psi}}_l$ are the first $L$ robustly estimated principal components, and $\hat{\xi}_{il}= \langle \hat{\bm{\psi}}_l, \hat{\bm{Z}}_i\rangle_{\mathbb{H}}$ are the estimated scores with robustly estimated variances $\hat{\lambda}_l$. The estimates $\hat{\bm{\psi}}_l$ and $\hat{\lambda}_l$ are obtained from the $n$ realizations of $\bm{Z}_i$, estimated by using $\hat{\mu}_j$ and $\hat{v}_j$. The robust estimates $\hat{\mu}_j$ and $\hat{v}_j$ are obtained through the scale-equivariant functional $M$-estimator and the functional normalized median absolute deviation estimator proposed by \cite{centofanti2021rofanova}.
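ROBPCA itself combines projection pursuit with robust (MCD-type) covariance estimation; reproducing it here would be lengthy, so the toy sketch below substitutes a much cruder iterative trimming scheme, purely to illustrate how a robust eigendecomposition resists the casewise outliers that bias classical PCA.

```python
import numpy as np

def trimmed_pca(Y, trim=0.25, n_iter=10):
    """Crude robust PCA: repeatedly keep the (1 - trim) fraction of rows with
    the smallest Mahalanobis-type distances, then eigendecompose their
    covariance.  A toy stand-in for ROBPCA, for illustration only."""
    keep = np.arange(len(Y))
    for _ in range(n_iter):
        sub = Y[keep]
        mu = sub.mean(axis=0)
        prec = np.linalg.pinv(np.cov(sub.T))
        d2 = np.einsum('ij,jk,ik->i', Y - mu, prec, Y - mu)
        keep = np.argsort(d2)[: int(len(Y) * (1.0 - trim))]
    lam, V = np.linalg.eigh(np.cov(Y[keep].T))
    return Y[keep].mean(axis=0), lam[::-1], V[:, ::-1]

rng = np.random.default_rng(2)
Y = rng.normal(size=(400, 5))
Y[:40] += 20.0                        # 10% casewise outliers
mu_rob, lam_rob, V_rob = trimmed_pca(Y)
# The robust centre stays near the origin, while the classical sample mean
# Y.mean(axis=0) is dragged towards the outliers.
```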
As in the multivariate setting, $L$ is generally chosen such that the retained principal components $\hat{\bm{\psi}}_1,\dots, \hat{\bm{\psi}}_L$ explain at least a given percentage of the total variability, usually in the range 70-90$\%$; more sophisticated methods could be used as well, see \cite{jolliffe2011principal} for further details. \subsection{Functional Univariate Filter} \label{sec_univfilter} To extend the filter of \cite{agostinelli2015robust} and \cite{leung2016robust} to univariate functional data, consider $n$ independent realizations $X_i$ of a random function $X$ with values in $L^2(\mathcal{T})$. The proposed FUF considers the functional distances $D_{i}^{fil}$, $i=1,\dots,n$, defined as \begin{equation} \label{eq_dfil} D_{i}^{fil}=\sum_{l=1}^{L^{fil}} \frac{(\hat{\xi}_{il}^{fil})^2}{\hat{\lambda}_{l}^{fil}}, \end{equation} where the estimated scores $\hat{\xi}_{il}^{fil}= \langle \hat{{\psi}}_{l}^{fil}, \hat{{Z}}_i\rangle$, the estimated eigenvalues $\hat{\lambda}_{l}^{fil}$, the estimated principal components $\hat{{\psi}}_{l}^{fil}$, and the estimated standardized observations $\hat{{Z}}_i$ of ${X}_{i}$ are obtained by applying, with $p=1$, the RoMFPCA described in Section \ref{sec_RoMFPCA} to the sample $X_i$, $i=1,\dots,n$. In this setting, the RoMFPCA is used to appropriately represent distances among the $X_i$'s rather than to perform dimensionality reduction; this means that $L^{fil}$ should be sufficiently large to capture a large percentage $\delta^{fil}$ of the total variability. Let $G_n$ be the empirical distribution of the $D_{i}^{fil}$, that is, \begin{equation*} G_n(x)=\frac{1}{n}\sum_{i=1}^n I(D_{i}^{fil}\leq x), \quad x\geq 0, \end{equation*} where $I$ is the indicator function. Then, functional observations $X_i$ are labeled as cellwise outliers by comparing $G_n(x)$ with $G(x)$, $ x \geq 0$, where $G$ is a reference distribution for $D_{i}^{fil}$.
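A minimal sketch of this comparison, together with the flagging rule based on the proportion $d_n$ defined in this subsection; to keep the example dependency-free it assumes $L^{fil}=2$, for which the chi-squared reference cdf has the closed form $1-e^{-x/2}$.

```python
import numpy as np

def fuf_flag(d, alpha=0.95):
    """Flag cellwise outliers from the distances d = (D_1, ..., D_n).
    Reference G = chi-squared with 2 degrees of freedom (closed-form cdf)."""
    n = len(d)
    G = lambda x: 1.0 - np.exp(-x / 2.0)
    eta = -2.0 * np.log(1.0 - alpha)           # eta = G^{-1}(alpha)
    d_sorted = np.sort(d)
    # The empirical cdf G_n equals (i - 1)/n just before the i-th order
    # statistic, so the supremum of G - G_n over x >= eta is attained
    # approaching such points from the left.
    gaps = G(d_sorted) - np.arange(n) / n
    gaps = gaps[d_sorted >= eta]
    dn = max(float(np.max(gaps, initial=0.0)), 0.0)
    n_flag = int(n * dn)                       # flag the [n d_n] largest distances
    return np.argsort(d)[n - n_flag:], dn

rng = np.random.default_rng(3)
d = rng.chisquare(2, size=1000)
d[:25] += 40.0                                 # inject 2.5% of extreme cells
flagged, dn = fuf_flag(d)
```

With a clean sample, `dn` stays close to zero, so almost nothing is flagged; with the injected cells above, roughly the injected proportion is flagged.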
Following \cite{leung2016robust}, we consider the chi-squared distribution with $L^{fil}$ degrees of freedom, i.e., $G=\chi^2_{L^{fil}}$. The proportion of flagged cellwise outliers is defined by \begin{equation*} d_n=\sup_{x\geq \eta}\lbrace G(x)-G_n(x)\rbrace^{+}, \end{equation*} where $\lbrace a\rbrace^{+}$ represents the positive part of $a$, and $\eta=G^{-1}(\alpha)$ is a large quantile of $G$. Following \cite{agostinelli2015robust}, in what follows we consider $\alpha=0.95$, as the aim is to detect extreme cellwise outliers, but other choices could be considered as well. Finally, we flag the $\left[nd_n\right]$ observations with the largest functional distances $D_{i}^{fil}$ as functional cellwise outliers (here $\left[a\right]$ is the largest integer less than or equal to $a$). From the arguments in \cite{agostinelli2015robust} and \cite{leung2016robust}, the FUF is consistent even when the actual distribution of $D_{i}^{fil}$ is unknown; that is, asymptotically the filter will not wrongly flag an observation as a cellwise outlier, provided that the tail of the chosen reference distribution $G$ is heavier than (or equal to) that of the actual unknown distribution. \subsection{Robust Functional Data Imputation} \label{sec_dataimpu} Consider $n$ independent realizations $\bm{X}_i = \left(X_{i1},\dots,X_{ip}\right)^T$ of a random vector of functions $\bm{X}$ as defined in Section \ref{sec_RoMFPCA}. This section considers the setting where $\bm{X}_i$, $i=1,\dots,n$, may present missing components, i.e., at least one of $X_{i1},\dots,X_{ip}$ is missing.
Thus, for each realization $\bm{X}_i$ we can identify the missing components $\bm{X}^m_{i}=\left(X_{im_{i1}},\dots,X_{im_{is_i}}\right)^T$ and the observed components $\bm{X}^o_{i}=\left(X_{io_{i1}},\dots,X_{io_{it_i}}\right)^T$, with $t_i=p-s_i$, where $\lbrace m_{ij}\rbrace$ and $\lbrace o_{ij}\rbrace$ are disjoint sets of indices, whose union coincides with the index set $ \lbrace 1,\dots,p \rbrace$, indicating which components of realization $i$ are missing or observed. Moreover, we assume that a set $S_c$ of $c$ realizations free of missing components is available. The proposed RoFDI method extends to the functional setting the robust imputation approach of \cite{branden2009robust}, which sequentially estimates the missing part of an incomplete observation such that the imputed observation has minimum distance from the space generated by the complete realizations. Analogously, starting from $S_c$, we propose that the observation $\bm{X}_{\underline{i}}\notin S_c$ with the smallest number $s_{\underline{i}}$ of missing components be considered first, and that its missing components be imputed by minimizing, with respect to $\bm{X}^m_{\underline{i}}$, \begin{equation} \label{eq_imp1} D(\bm{X}_{\underline{i}}^m)=\sum_{l=1}^{L^{imp}} \frac{(\hat{\xi}_{\underline{i}l}^{imp})^2}{\hat{\lambda}_{l}^{imp}}, \end{equation} where the estimated scores $\hat{\xi}_{\underline{i}l}^{imp}=\langle \hat{\bm{\psi}}_{l}^{imp}, \hat{\bm{Z}}_{\underline{i}}\rangle_{\mathbb{H}}$, eigenvalues $\hat{\lambda}_{l}^{imp}$, principal components $\hat{\bm{\psi}}_{l}^{imp}$, and standardized observations $\hat{\bm{Z}}_{\underline{i}}$ of $\bm{X}_{\underline{i}}$ are obtained by applying the RoMFPCA (Section \ref{sec_RoMFPCA}) to the observations in $S_c$, which are free of missing components.
Analogously to Section \ref{sec_univfilter}, the RoMFPCA is used to define the distance of $\bm{X}_{\underline{i}}$ from the space generated by the observations without missing components; thus, $L^{imp}$ should be sufficiently large to capture a large percentage $\delta^{imp}$ of the total variability. Because $\hat{\bm{Z}}_{\underline{i}}$ is the standardized version of $\bm{X}_{\underline{i}}$, we can identify the missing $\hat{\bm{Z}}^m_{\underline{i}}$ and observed $\hat{\bm{Z}}^o_{\underline{i}}$ components of $\hat{\bm{Z}}_{\underline{i}}$. Thus, the minimization problem in Equation \eqref{eq_imp1} can be equivalently solved with respect to $\hat{\bm{Z}}_{\underline{i}}^m$, and the resulting solution can be unstandardized to obtain the imputed components of $\bm{X}_{\underline{i}}^m$. Due to the approximations in Equation \eqref{eq_appcov}, $\hat{\bm{Z}}_{\underline{i}}$ is uniquely identified by the coefficient vectors $\bm{c}_{\underline{i}j}$, $j=1,\dots, p$, related to the basis expansions of its components. Let $\bm{c}^m_{\underline{i}j}$, $j=m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$, and $\bm{c}^o_{\underline{i}j}$, $j=o_{\underline{i}1},\dots,o_{\underline{i}t_{\underline{i}}}$, be the coefficient vectors corresponding to the missing and observed components of $\hat{\bm{Z}}_{\underline{i}}$, respectively, and define $ \bm{c}^m_{\underline{i}}=\left(\bm{c}_{\underline{i}m_{\underline{i}1}}^{mT},\dots,\bm{c}_{\underline{i}m_{\underline{i}s_{\underline{i}}}}^{mT}\right)^T$ and $ \bm{c}_{\underline{i}}^o=\left(\bm{c}_{\underline{i}o_{\underline{i}1}}^{oT},\dots,\bm{c}_{\underline{i}o_{\underline{i}t_{\underline{i}}}}^{oT}\right)^T$.
Moreover, let $\hat{\bm{b}}_{lj}$, $l=1,\dots, L^{imp}$, $j=1,\dots, p$, denote the coefficient vectors related to the basis expansions of the components of the estimated principal components $\hat{\bm{\psi}}_{l}^{imp}$, and let $\hat{\bm{B}}=\left(\hat{\bm{b}}_1,\dots,\hat{\bm{b}}_{L^{imp}}\right)$ be the matrix whose columns are $\hat{\bm{b}}_l=\left(\hat{\bm{b}}_{l1}^T,\dots,\hat{\bm{b}}_{lp}^T\right)^T$. Then, the solution of the minimization problem in Equation \eqref{eq_imp1} is \begin{equation} \label{eq_modim} \hat{\bm{c}}_{\underline{i}}^{m}=-\bm{C}_{mm}^{+}\bm{C}_{mo}\bm{c}_{\underline{i}}^o, \end{equation} where $\bm{A}^{+}$ denotes the Moore-Penrose inverse of the matrix $\bm{A}$, and $\bm{C}_{mm}$ and $\bm{C}_{mo}$ are the submatrices obtained from the rows $m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$ of \begin{equation*} \bm{C}=\bm{W}\hat{\bm{B}}\hat{\bm{\Lambda}}^{-1}\hat{\bm{B}}^T\bm{W} \end{equation*} by selecting the columns $m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$ and $o_{\underline{i}1},\dots,o_{\underline{i}t_{\underline{i}}}$, respectively, with $\bm{W}$ the block-diagonal matrix defined in Section \ref{sec_RoMFPCA} and $\hat{\bm{\Lambda}}$ the diagonal matrix whose diagonal entries are the estimated eigenvalues $\hat{\lambda}_{l}^{imp}$, $l=1,\dots,L^{imp}$.
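A numerical sketch of the solution in Equation \eqref{eq_modim} with small toy dimensions; $\bm W$, $\hat{\bm B}$ and $\hat{\bm\Lambda}$ are random or arbitrary stand-ins, and the objective is written as $\bm c^T\bm C\bm c$, so the first-order condition $\bm C_{mm}\hat{\bm c}^m+\bm C_{mo}\bm c^o=\bm 0$ can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(4)
q, L_imp = 8, 3                                   # total coefficients, retained PCs
A = rng.normal(size=(q, q))
W = A @ A.T + q * np.eye(q)                       # block Gram matrix (SPD stand-in)
B = np.linalg.qr(rng.normal(size=(q, L_imp)))[0]  # toy eigenfunction coefficients
Lam_inv = np.diag(1.0 / np.array([5.0, 2.0, 1.0]))  # inverse retained eigenvalues
C = W @ B @ Lam_inv @ B.T @ W                     # C = W B Lambda^{-1} B^T W

m = np.array([0, 1])                              # indices of missing coefficients
o = np.array([2, 3, 4, 5, 6, 7])                  # indices of observed coefficients
c_o = rng.normal(size=o.size)

C_mm = C[np.ix_(m, m)]
C_mo = C[np.ix_(m, o)]
c_m = -np.linalg.pinv(C_mm) @ C_mo @ c_o          # imputed missing coefficients

def D(cm):
    """Objective sum_l xi_l^2 / lambda_l, written as c^T C c."""
    c = np.empty(q)
    c[m], c[o] = cm, c_o
    return float(c @ C @ c)
# D(c_m) <= D(cm) for any other choice of the missing coefficients.
```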
Moreover, to address the correlation bias issue typical of deterministic imputation approaches \citep{little2019statistical,van2018flexible}, we propose to impute $\bm{c}_{\underline{i}}^m$ through $\bm{c}_{\underline{i}}^{m,imp}=\left(\bm{c}_{\underline{i}1}^{m,imp T},\dots,\bm{c}_{\underline{i}s_{\underline{i}}}^{m,imp T}\right)^T$ as follows \begin{equation} \label{eq_imperr} \bm{c}_{\underline{i}}^{m,imp}=\hat{\bm{c}}_{\underline{i}}^{m}+\bm{\varepsilon}_i, \end{equation} where $\bm{\varepsilon}_i$ is a multivariate normal random variable with zero mean and a residual covariance matrix robustly estimated from the residuals of the regression, through the model in Equation \eqref{eq_modim}, of the coefficient vectors of the missing components on those of the observed components for the observations in $S_c$. Thus, the proposed RoFDI approach is a stochastic imputation method \citep{little2019statistical,van2018flexible}. Then, the components of $\hat{\bm{Z}}_{\underline{i}}^m$ are imputed, for $j=1,\dots,s_{\underline{i}}$, as \begin{equation*} \hat{Z}_{\underline{i}j}^{m,imp}(t)=\left(\bm{c}_{\underline{i}j}^{m,imp}\right)^T\bm{\phi}_{j}(t),\quad t\in\mathcal{T}, \end{equation*} where $\bm{\phi}_{j}$ is the vector of basis functions corresponding to the $j$-th component of $\hat{\bm{Z}}$ (Section \ref{sec_RoMFPCA}), and the imputed missing components of $\bm{X}_{\underline{i}}$ are obtained by unstandardizing $\hat{\bm{Z}}_{\underline{i}}^m$. Once the missing components of $\bm{X}_{\underline{i}}$ are imputed, the whole observation is added to $S_c$, and the next observation outside $S_c$ with the smallest number of missing components is considered.
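The stochastic component of Equation \eqref{eq_imperr} is what makes RoFDI a stochastic imputation method; mechanically, repeated draws of $\bm\varepsilon_i$ produce differently imputed datasets (the deterministic imputation and the residual covariance below are hypothetical toy values).

```python
import numpy as np

rng = np.random.default_rng(5)
c_hat_m = np.array([0.8, -1.3])            # deterministic imputation (toy values)
Sigma_eps = np.array([[0.04, 0.01],        # residual covariance, assumed to have
                      [0.01, 0.09]])       # been robustly estimated beforehand
# Five draws of epsilon give five differently imputed coefficient vectors
imputations = [c_hat_m + rng.multivariate_normal(np.zeros(2), Sigma_eps)
               for _ in range(5)]
```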
Similarly to \cite{branden2009robust}, if the cardinality of $S_c$ at the first iteration is sufficiently large, we suggest not updating the RoMFPCA model each time a new imputed observation is added to $S_c$, to avoid an infeasible computational burden for the RoFDI; otherwise, the RoMFPCA model could be updated each time a given number of observations is added to $S_c$. Finally, to take into account the increased noise due to single imputation, the proposed RoFDI can easily be included in a multiple imputation framework \citep{van2018flexible,little2019statistical}; indeed, several differently imputed datasets may be obtained by repeatedly performing the RoFDI, owing to the presence of the stochastic component $\bm{\varepsilon}_i$ in Equation \eqref{eq_imperr}. \subsection{The Monitoring Strategy} \label{sec_monstr} Element (IV) of the proposed RoMFCC implementation relies on the consolidated monitoring strategy for a multivariate functional quality characteristic $\bm{X}$ based on the Hotelling's $ T^2 $ and $ SPE $ control charts. The former assesses the stability of $ \bm{X} $ in the finite-dimensional space spanned by the first principal components identified through the RoMFPCA (Section \ref{sec_RoMFPCA}), whereas the latter monitors changes along directions in the complement space. Specifically, the Hotelling's $ T^2 $ statistic for $\bm{X}$ is defined as \begin{equation*} T^2=\sum_{l=1}^{L^{mon}}\frac{(\xi_{l}^{mon})^2}{\lambda_{l}^{mon}}, \end{equation*} where $ \lambda_{l}^{mon} $ are the variances of the scores $ \xi_{l}^{mon}=\langle \bm{\psi}_{l}^{mon}, \bm{Z}\rangle_{\mathbb{H}}$, $\bm{Z}$ is the vector of standardized variables of $\bm{X}$, and $\bm{\psi}_{l}^{mon}$ are the corresponding principal components as defined in Section \ref{sec_RoMFPCA}. The number $L^{mon}$ is chosen such that the retained principal components explain at least a given percentage $\delta^{mon}$ of the total variability.
The statistic $ T^2$ is the standardized squared distance from the centre of the orthogonal projection of $\bm{Z}$ onto the principal component space spanned by $ \bm{\psi}_{1}^{mon},\dots,\bm{\psi}_{L^{mon}}^{mon}$. The distance between $ \bm{Z} $ and its orthogonal projection onto the principal component space is instead measured through the $ SPE $ statistic, defined as \begin{equation*} SPE=||\bm{Z}-\hat{\bm{Z}}||_{\mathbb{H}}^2, \end{equation*} where $ \hat{\bm{Z}}=\sum_{l=1}^{L^{mon}}\xi_{l}^{mon}\bm{\psi}_{l}^{mon}$. Under the assumption of multivariate normality of the $\xi_{l}^{mon}$, which is approximately true by the central limit theorem \citep{nomikos1995multivariate}, the control limit of the Hotelling's $ T^2 $ control chart can be obtained as the $ (1-\alpha^*) $ quantile of a chi-squared distribution with $L^{mon}$ degrees of freedom \citep{johnson2014applied}. The control limit for the $ SPE $ control chart can instead be computed by using the following equation \citep{jackson1979control} \begin{equation*} CL_{SPE,\alpha^*}=\theta_1\left[\frac{c_{\alpha^*}\sqrt{2\theta_2h_0^2}}{\theta_1}+1+\frac{\theta_2h_0(h_0-1)}{\theta_1^2}\right]^{1/h_0}, \end{equation*} where $c_{\alpha^*}$ is the normal quantile of order $ 1-\alpha^* $, $h_0=1-2\theta_1\theta_3/(3\theta_2^2)$, and $\theta_j=\sum_{l=L^{mon}+1}^{\infty}(\lambda_{l}^{mon})^j$, $j=1,2,3$. Note that, to control the family-wise error rate (FWER), $ \alpha^* $ should be chosen appropriately. We propose to use the \v{S}id\'ak correction $\alpha^{*}=1-\left(1-\alpha\right)^{1/2}$ \citep{vsidak1967rectangular}, where $\alpha$ is the overall type I error probability.
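The two statistics and their limits can be sketched as follows; to keep the chi-squared quantile in closed form the sketch assumes $L^{mon}=2$ (so the $T^2$ limit is $-2\log\alpha^*$), and the residual eigenvalues feeding Jackson's $SPE$ limit are arbitrary toy values.

```python
import numpy as np
from statistics import NormalDist

def t2_spe(scores, lam_mon, z_sq_norm):
    """T2 = sum_l xi_l^2 / lambda_l; for orthonormal principal components,
    SPE = ||Z - Z_hat||^2 reduces to ||Z||^2 - sum_l xi_l^2."""
    t2 = float(np.sum(scores ** 2 / lam_mon))
    spe = float(z_sq_norm - np.sum(scores ** 2))
    return t2, spe

def control_limits(lam_res, alpha=0.05):
    """Limits with the Sidak correction alpha* = 1 - (1 - alpha)^(1/2);
    the T2 limit assumes L_mon = 2, the SPE limit follows Jackson's formula."""
    a_star = 1.0 - (1.0 - alpha) ** 0.5
    cl_t2 = -2.0 * np.log(a_star)                    # chi^2_2 upper quantile
    th1, th2, th3 = (np.sum(np.asarray(lam_res) ** j) for j in (1, 2, 3))
    h0 = 1.0 - 2.0 * th1 * th3 / (3.0 * th2 ** 2)
    c = NormalDist().inv_cdf(1.0 - a_star)           # normal quantile of order 1-a*
    cl_spe = th1 * (c * np.sqrt(2.0 * th2 * h0 ** 2) / th1
                    + 1.0 + th2 * h0 * (h0 - 1.0) / th1 ** 2) ** (1.0 / h0)
    return cl_t2, float(cl_spe)

t2, spe = t2_spe(np.array([1.0, 0.5]), np.array([3.0, 1.5]), z_sq_norm=2.0)
cl_t2, cl_spe = control_limits([0.5, 0.3, 0.2, 0.1])
print(round(t2, 3), round(spe, 3))    # 0.5 0.75
```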
\subsection{The Proposed Method} \label{sec_propmeth} The proposed RoMFCC implementation collects all the elements introduced in the previous sections into a Phase II monitoring strategy in which a set of Phase I observations, possibly contaminated with both functional casewise and cellwise outliers, is used for the design of the control chart. Both phases are outlined in the scheme of Figure \ref{fi_diag} and detailed in the following sections. \begin{figure}[h] \caption{Scheme of the RoMFCC approach.} \label{fi_diag} \centering \includegraphics[width=\textwidth]{fig/diag.png} \end{figure} \subsubsection{Phase I} Let $ \bm{X}_i $, $ i=1,\dots,n $, be the Phase I random sample of the multivariate functional quality characteristic $\bm{X}$, used to characterize the normal operating conditions of the process. It can be contaminated with both functional casewise and cellwise outliers. In the \textit{filtering} step, functional cellwise outliers are identified through the FUF described in Section \ref{sec_univfilter} and then replaced by missing components. Note that, if cellwise outliers are identified in every component of a given observation, that observation is removed from the sample, because its imputation would not provide any additional information for the analysis. In the \textit{imputation} step, missing components are imputed through the RoFDI method presented in Section \ref{sec_dataimpu}. Once the imputed Phase I sample is obtained, it is used to estimate the RoMFPCA model and perform the \textit{dimensionality reduction} step as described in Section \ref{sec_RoMFPCA}. The Hotelling's $ T^2 $ and $ SPE $ statistics are then computed for each observation in the Phase I sample after the imputation step. Specifically, the values $ T^2_i $ and $ SPE_i $ of the statistics are computed as described in Section \ref{sec_monstr} by considering the estimated RoMFPCA model obtained in the dimensionality reduction step.
Finally, control limits for the Hotelling's $ T^2 $ and $ SPE $ control charts are obtained as described in Section \ref{sec_monstr}. Note that the parameters $\theta_j$ needed to estimate $CL_{SPE,\alpha^*}$ are approximated by truncating the summation at the maximum number of estimable principal components, which is finite for a sample of $n$ observations. If a multiple imputation strategy is employed by performing the imputation step several times, the multiple estimated RoMFPCA models could be combined by averaging the robustly estimated covariance functions, as suggested in \cite{van2007two}. Note that, when the sample size $n$ is small compared to the number of process variables, undesirable effects on the performance of the RoMFCC could arise \citep{ramaker2004effect,kruger2012statistical}. To reduce possible overfitting issues and, thus, increase the monitoring performance of the RoMFCC, a reference sample of Phase I observations, referred to as the \textit{tuning set}, could be considered, which is different from the one used in the previous steps, referred to as the \textit{training set}. Specifically, in line with \cite{kruger2012statistical}, Chapter 6.4, the tuning set is passed through the filtering and imputation steps and is then projected onto the RoMFPCA model, estimated through the training set observations, to robustly estimate the distribution of the resulting scores. Finally, the Hotelling's $ T^2 $ and $ SPE $ statistics and control chart limits are calculated by taking into account the estimated distribution of the tuning set scores. \subsubsection{Phase II} In the actual monitoring phase (Phase II), a new observation $ \bm{X}_{new} $ is projected onto the RoMFPCA model to compute the values of the $ T^2_{new} $ and $ SPE_{new} $ statistics according to the score distribution identified in Phase I. An alarm signal is issued if at least one between $ T^2_{new} $ and $ SPE_{new} $ violates the control limits.
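The Phase II decision rule therefore reduces to a comparison of each new pair of statistics against the two limits (the numeric limits below are arbitrary illustrative values).

```python
import numpy as np

def phase2_alarms(t2_vals, spe_vals, cl_t2, cl_spe):
    """Signal an alarm when at least one of the two monitoring statistics
    exceeds its control limit."""
    return (np.asarray(t2_vals) > cl_t2) | (np.asarray(spe_vals) > cl_spe)

alarms = phase2_alarms([1.2, 9.4, 0.7], [0.5, 0.2, 4.1], cl_t2=7.35, cl_spe=3.49)
print(alarms)    # [False  True  True]
```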
\section{Simulation Study} \label{sec_sim} The overall performance of the proposed RoMFCC is evaluated by means of an extensive Monte Carlo simulation study. The aim of this simulation is to assess the performance of the RoMFCC in identifying mean shifts of the multivariate functional quality characteristic when the Phase I sample is contaminated with both functional cellwise and casewise outliers. The data generation process (detailed in Supplementary Material A) is inspired by the real-case study in Section \ref{sec_real} and mimics typical behaviours of DRCs in a RSW process. Specifically, it considers a multivariate functional quality characteristic with $p=10$ components. Two main scenarios, characterized by different Phase I sample contamination, are considered. Specifically, the Phase I sample is contaminated by functional cellwise outliers in Scenario 1 and by functional casewise outliers in Scenario 2, with a contamination probability equal to 0.05 in both cases. In Supplementary Material B, additional results obtained by considering a contamination probability equal to 0.1 are shown. For each scenario, two contamination models, referred to as Out E and Out P, with three increasing contamination levels, referred to as C1, C2 and C3, are considered. The former mimics a splash weld (expulsion), caused by an excessive welding current, while the latter resembles a phase shift of the peak time caused by an increased electrode force \citep{xia2019online}. Moreover, we also consider a scenario, referred to as Scenario 0, representing settings where the Phase I sample is not contaminated. To generate the Phase II sample, two types of OC conditions, referred to as OC E and OC P, are considered, which are generated analogously to the two contamination models Out E and Out P, respectively, at 4 different severity levels $SL = \lbrace 1, 2, 3, 4 \rbrace$. The proposed RoMFCC implementation is compared with several natural competing approaches.
The first competitors are control charts for multivariate scalar data, built on the average value of each component of the multivariate functional data. Among them, we consider the \textit{multivariate classical} Hotelling's $T^2$ control chart, referred to as M; the \textit{multivariate iterative} variant, referred to as Miter, where outliers detected by the control chart in Phase I are iteratively removed until all data are assumed to be IC; and the \textit{multivariate robust} control chart proposed by \cite{chenouri2009multivariate}, referred to as MRo. We also consider two approaches recently appeared in the profile monitoring literature, i.e., the multivariate \textit{functional control chart}, referred to as MFCC, proposed by \cite{capezza2020control,capezza2022funchartspaper}, and the multivariate \textit{iterative functional control chart} variant, referred to as MFCCiter, where outliers detected by the control chart in Phase I are iteratively removed until all data are assumed to be IC. The RoMFCC is implemented as described in Section \ref{sec_method} with $\delta^{fil}=\delta^{imp}=0.999$ and $\delta^{mon}=0.7$; to take into account the increased noise due to single imputation, 5 differently imputed datasets are generated through the RoFDI. Since data are observed as noisy discrete values, each component of the generated quality characteristic observations is obtained through the approximation in Equation \eqref{eq_appcov} with $K=10$ cubic B-splines, estimated through the spline smoothing approach based on a roughness penalty on the second derivative \citep{ramsay2005functional}. For each scenario, contamination model, contamination level, OC condition and severity level, 50 simulation runs are performed. Each run considers a Phase I sample of 4000 observations, where, for MFCC, MFCCiter and RoMFCC, 1000 are used as training set and the remaining 3000 are used as tuning set. The Phase II sample is composed of 4000 i.i.d.
observations. The performance of the RoMFCC and of the competing methods is assessed by means of the true detection rate (TDR), i.e., the proportion of points outside the control limits while the process is OC, and the false alarm rate (FAR), i.e., the proportion of points outside the control limits while the process is IC. The FAR should be as close as possible to the overall type I error probability $\alpha$ considered to obtain the control limits, set equal to 0.05, whereas the TDR should be as close to one as possible. Figures \ref{fi_results_1}-\ref{fi_results_3} display, for Scenario 0, Scenario 1 and Scenario 2, respectively, as a function of the severity level $SL$, the mean FAR ($SL=0$) or TDR ($SL\neq0$) for each OC condition (OC E and OC P), contamination level (C1, C2 and C3) and contamination model (Out E and Out P). \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each OC condition (OC E and OC P) as a function of the severity level $SL$ in Scenario 0.} \label{fi_results_1} \centering \begin{tabular}{cc} \hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}\\ \includegraphics[width=.25\textwidth]{fig/sim_0_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_0_OC_P.pdf} \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 1.} \label{fi_results_2} \centering \hspace{-2.05cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC
P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 2.} \label{fi_results_3} \centering \hspace{-2.05cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ 
\textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} When the Phase I sample is not contaminated by outliers, Figure \ref{fi_results_1} shows that all the approaches that take into account the functional nature of the data, i.e., MFCC, MFCCiter and RoMFCC, achieve the same performance for both OC conditions. Although this setting should not be favourable to approaches specifically designed to deal with outliers, MFCCiter and RoMFCC perform on par with MFCC. The non-functional approaches, i.e., M, Miter and MRo, show worse performance than their functional counterparts, with no significant performance difference among them either. Figure \ref{fi_results_2} shows the results for Scenario 1, where the Phase I sample is contaminated by cellwise outliers. The proposed RoMFCC largely outperforms the competing methods for each contamination model, contamination level and OC condition. As expected, as the contamination level increases, the differences in performance between the RoMFCC and the competing methods increase as well.
Indeed, the performance of the RoMFCC is almost insensitive to both the contamination levels and the contamination models, unlike the competing methods, whose performance decreases as the contamination level increases and the contamination model changes. Moreover, the MFCCiter, which is representative of the baseline method in this setting, only slightly improves on the performance of the MFCC. This is probably due to the masking effect, which prevents the MFCCiter from iteratively identifying functional cellwise outliers in the Phase I sample and, thus, makes it equivalent to the MFCC. The performance of the non-functional methods is totally unsatisfactory because they are able neither to capture the functional nature of the data nor to successfully deal with functional cellwise outliers. Specifically, M is the worst method overall, closely followed by Miter and MRo. Furthermore, as the contamination level increases, the competing functional methods lose their favourable performance and tend to achieve performance comparable to the non-functional approaches. This shows that, if outliers are not properly dealt with, very poor performance can arise independently of the suitability of the methods considered. From Figure \ref{fi_results_3}, also in Scenario 2, the RoMFCC is clearly the best method. However, in this scenario the difference in performance between the RoMFCC and the competing methods is less pronounced than in Scenario 1. This is expected because the Phase I sample is now contaminated by functional casewise outliers, which is the only contamination type against which the competing methods are able to provide a certain degree of robustness. Note that the performance of the RoMFCC is almost unaffected by contamination in the Phase I sample also in this scenario, which proves the ability of the proposed method to deal with both functional cellwise and casewise outliers.
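The FAR and TDR metrics used throughout this section can be computed directly from the monitoring statistics and the control limits. The following minimal sketch uses illustrative chi-square monitoring statistics and an illustrative control limit (not values from the study) to show the computation:

```python
import numpy as np

def alarm_rate(stats, limit):
    """Proportion of monitoring statistics falling outside the control limit."""
    return np.mean(np.asarray(stats) > limit)

rng = np.random.default_rng(1)
limit = 7.81  # illustrative control limit (95th percentile of a chi-square with 3 df)

# IC statistics: the alarm rate estimates the FAR (target close to alpha = 0.05)
ic_stats = rng.chisquare(df=3, size=10_000)
far = alarm_rate(ic_stats, limit)

# OC statistics (shifted process): the alarm rate estimates the TDR
oc_stats = rng.noncentral_chisquare(df=3, nonc=10, size=10_000)
tdr = alarm_rate(oc_stats, limit)

print(f"FAR = {far:.3f}, TDR = {tdr:.3f}")
```

In the simulation study these rates are then averaged over replications for each combination of scenario, contamination level and severity level.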
\section{Real-Case Study} \label{sec_real} To demonstrate the potential of the proposed RoMFCC in practical situations, a real-case study in the automotive industry is presented henceforth. As introduced in Section \ref{sec_intro}, it addresses the issue of monitoring the quality of the RSW process, which guarantees the structural integrity and solidity of welded assemblies in each vehicle \citep{martin}. The RSW process \citep{zhang2011resistance} is an autogenous welding process in which two overlapping conventional galvanized steel sheets are joined together, without the use of any filler material. Joints are formed by applying pressure to the weld area from two opposite sides by means of two copper electrodes. The voltage applied to the electrodes generates a current flowing between them through the material. Because of the resistance offered by the metal, the current flow causes significant heat generation (Joule effect) that increases the metal temperature at the faying surfaces of the workpieces up to the melting point. Finally, due to the mechanical pressure of the electrodes, the molten metal of the joined metal sheets cools and solidifies, forming the so-called weld nugget \citep{raoelison}. To monitor the RSW process, the modern automotive Industry 4.0 framework allows the automatic acquisition of a large volume of process parameters. The DRC is considered the most important of these parameters to describe the quality of the RSW process. Further details on how the typical behaviour of a DRC is related to the physical and metallurgical development of a spot weld are provided by \cite{capezza2021functional_clustering}. Data analyzed in this study are courtesy of Centro Ricerche Fiat, are recorded at the Mirafiori Factory during lab tests, and are acquired during RSW processes made on the body of the Fiat 500BEV.
A body is characterized by a large number of spot welds with different characteristics, e.g., the thickness and material of the sheets to be joined together and the welding time. In this paper, we focus on monitoring a set of ten spot welds made on the body by one of the welding machines. Therefore, for each sub-assembly the multivariate functional quality characteristic is a vector of the ten DRCs, corresponding to the second welding pulse of the ten spot welds normalized on the time domain $ [0, 1]$, for a total number of assemblies equal to 1839. Moreover, resistance measurements were collected on a regular grid of points equally spaced by 1 ms. The RSW process quality is directly affected by electrode wear, since the increase in weld numbers leads to changed electrical, thermal and mechanical contact conditions at the electrode and sheet interfaces \citep{manladan2017review}. Thus, to take the wear issue into account, electrodes go through periodical renovations. In this setting, a paramount issue refers to the swift identification of DRC mean shifts caused by electrode wear, which could be considered as a criterion for electrode life termination and guide the electrode renovation strategy. In light of this, the 919 multivariate profiles corresponding to spot welds made immediately before electrode renewal are used to form the Phase I sample, whereas the remaining 920 observations are used in Phase II to evaluate the performance of the proposed chart. We expect that the mean shift of the Phase II DRCs caused by electrode wear should be effectively captured by the proposed control chart. The RoMFCC is implemented as in the simulation study in Section \ref{sec_sim}, with the training and tuning sets, each composed of 460 Phase I observations, randomly selected without replacement. As shown in Figure \ref{fig_drc} of Section \ref{sec_intro}, data are plausibly contaminated by several outliers.
This is further confirmed by Figure \ref{fig_boxplot}, which shows the boxplot of the functional distance square roots $\sqrt{D_{i,fil}}$ (Equation \eqref{eq_dfil}) obtained from the FUF applied on the training set. Some components clearly show the presence of functional cellwise outliers that are possibly arranged in groups, while other components seem less severely contaminated. \begin{figure} \begin{center} \includegraphics[width=.75\textwidth]{fig/boxplot.pdf} \caption{Boxplot of the functional distance square roots $\sqrt{D_{i,fil}}$ (Equation \eqref{eq_dfil}) obtained from the FUF applied on the training set.} \label{fig_boxplot} \end{center} \end{figure} Figure \ref{fig_ccreal} shows the application of the proposed RoMFCC. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/control_chart.pdf} \caption{Hotelling's $ T^2 $ and $ SPE $ control charts for the RoMFCC in the real-case study. The vertical line separates the monitoring statistics calculated for the tuning set, on the left, and the Phase II data set on the right, while the horizontal lines define the control limits.} \label{fig_ccreal} \end{figure} The vertical line separates the monitoring statistics calculated for the tuning set, on the left, and the Phase II data set on the right, while the horizontal lines define the control limits. Note that a significant number of tuning set observations are signaled as OC, highlighted in red in Figure \ref{fig_ccreal}. This is expected because these points may include functional casewise outliers not filtered out by the FUF. In the monitoring phase, many points are signaled as OC by the RoMFCC. In particular, the RoMFCC signals 72.3\% of the observations in the Phase II data set as OC. This shows that the proposed method is particularly sensitive to mean shifts caused by increased electrode wear. Finally, the proposed method is compared with the competing methods presented in the simulation study in Section \ref{sec_sim}.
Table \ref{tab_arlreal} shows the estimated TDR values $\widehat{TDR}$ on the Phase II sample for all the considered competing methods. Similarly to \cite{centofanti2020functional}, the uncertainty of $\widehat{TDR}$ is quantified through a bootstrap analysis \citep{efron1994introduction}. Table \ref{tab_arlreal} reports the mean $\overline{TDR}$ of the empirical bootstrap distribution of $\widehat{TDR}$, and the corresponding bootstrap 95\% confidence interval (CI) for each monitoring method. \begin{table}[] \centering \resizebox{0.4\textwidth}{!}{ \begin{tabular}{ccccc} \toprule & $\widehat{TDR}$ & $\overline{TDR}$ & CI\\ \midrule M & 0.336 & 0.335 & [0.305,0.368]\\ Miter & 0.462 & 0.461 & [0.428,0.496]\\ MRo & 0.513 & 0.512 & [0.481,0.547]\\ MFCC & 0.541 & 0.541 & [0.511,0.574]\\ MFCCiter & 0.632 & 0.632 & [0.595,0.664]\\ RoMFCC & 0.723 & 0.723 & [0.695,0.753]\\ \bottomrule \end{tabular} } \caption{Estimated TDR values $\widehat{TDR}$ on the Phase II sample, mean $\overline{TDR}$ of the empirical bootstrap distribution of $\widehat{TDR}$, and the corresponding bootstrap 95\% confidence interval (CI) for each monitoring method in the real-case study.} \label{tab_arlreal} \end{table} The bootstrap analysis shows that the RoMFCC outperforms the competing control charts; indeed, its bootstrap 95\% confidence interval lies strictly above those of all the other monitoring approaches considered. As in Section \ref{sec_sim}, the non-functional approaches, i.e., M, Miter and MRo, show worse performance than their functional counterparts, because they are not able to satisfactorily capture the functional nature of the data, and the robust approaches always improve on the non-robust ones. Therefore, the proposed RoMFCC stands out as the best method to promptly identify OC conditions in the RSW process caused by increased electrode wear when the Phase I sample is contaminated by functional outliers.
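The bootstrap quantification of the uncertainty of $\widehat{TDR}$ can be sketched as follows. The scheme shown here is a plain nonparametric bootstrap over the Phase II alarm indicators with a percentile confidence interval; the alarm data are simulated for illustration and the study's implementation may differ in details:

```python
import numpy as np

def bootstrap_tdr(alarms, n_boot=2000, seed=0):
    """Bootstrap mean and percentile 95% CI of the TDR from OC alarm indicators."""
    rng = np.random.default_rng(seed)
    alarms = np.asarray(alarms)
    n = alarms.size
    # resample the alarm indicators with replacement and recompute the TDR each time
    boot = np.array([alarms[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    return boot.mean(), np.percentile(boot, [2.5, 97.5])

rng = np.random.default_rng(42)
# illustrative Phase II alarm indicators: roughly 72.3% of 920 points signaled as OC
alarms = rng.random(920) < 0.723
mean_tdr, (lo, hi) = bootstrap_tdr(alarms)
print(f"TDR mean = {mean_tdr:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```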
\section{Conclusions} \label{sec_conclusions} In this paper, we propose a new robust framework for the statistical process monitoring of multivariate functional data, referred to as \textit{robust multivariate functional control charts} (RoMFCC). The RoMFCC is designed to assess the presence of assignable causes of variation while being robust to both functional casewise and cellwise outliers. The proposed method is suitable for those industrial processes where many functional variables are available and occasional outliers are produced, e.g., by anomalies in the data acquisition or by data collected during a fault in the process. Specifically, the RoMFCC framework is based on four main elements: a functional univariate filter to identify functional cellwise outliers, which are replaced by missing values; a robust functional data imputation of these missing values; a casewise robust dimensionality reduction based on ROBPCA; and a monitoring strategy based on the Hotelling's $T^2$ and $SPE$ control charts. These elements are combined in a Phase II monitoring strategy where a set of Phase I observations, which can be contaminated with both functional casewise and cellwise outliers, is used for the design of the control chart. To the best of the authors' knowledge, the RoMFCC framework is the first monitoring scheme that is able to monitor a multivariate functional quality characteristic while being robust to functional casewise and cellwise outliers. Indeed, methods already present in the literature either apply robust approaches to multivariate scalar features extracted from the profiles or use diagnostic approaches on the multivariate functional data to iteratively remove outliers. However, the former are not able to capture the functional nature of the data, while the latter are not able to deal with functional cellwise outliers.
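The flow of the four elements can be made concrete with the following skeleton. This is a conceptual sketch only: the functional univariate filter, the robust imputation, and ROBPCA are replaced by simple stand-ins (columnwise median/MAD cell screening, median imputation, and classical PCA via the SVD), so it illustrates the structure of the framework rather than the actual RoMFCC implementation:

```python
import numpy as np

def phase_one(X, n_comp=2, z_cut=3.5):
    """Design the chart from a (possibly contaminated) Phase I coefficient matrix X."""
    X = X.astype(float).copy()
    # 1) cellwise filter: flag cells far from the columnwise median (stand-in for the FUF)
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) * 1.4826
    X[np.abs(X - med) > z_cut * mad] = np.nan
    # 2) imputation: replace flagged cells (stand-in for the robust functional imputation)
    X = np.where(np.isnan(X), med, X)
    # 3) dimensionality reduction (classical PCA as a stand-in for ROBPCA)
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_comp].T                       # loadings
    lam = (s[:n_comp] ** 2) / (len(X) - 1)  # component variances
    return mu, P, lam

def t2_spe(x, mu, P, lam):
    """4) monitoring statistics: Hotelling's T^2 and SPE for a new observation x."""
    t = P.T @ (x - mu)
    resid = (x - mu) - P @ t
    return np.sum(t ** 2 / lam), np.sum(resid ** 2)

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 5))      # illustrative Phase I data
mu, P, lam = phase_one(X1)
t2, spe = t2_spe(rng.normal(size=5), mu, P, lam)
```

In the actual framework the matrix rows would be basis coefficients of the (imputed) multivariate profiles, and the control limits for both statistics would be estimated on a tuning set.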
The performance of the RoMFCC framework is assessed through an extensive Monte Carlo simulation study, where it is compared with several competing monitoring methods for multivariate scalar data and multivariate functional data. The ability of the proposed method to estimate the distribution of the data without removing observations, while being robust to both functional casewise and cellwise outliers, allows the RoMFCC to outperform the competitors in all the considered scenarios. Lastly, the practical applicability of the proposed method is illustrated through a motivating real-case study, which addresses the issue of monitoring the quality of a resistance spot welding process in the automotive industry. Also in this case, the RoMFCC shows better performance than the competitors in the identification of out-of-control conditions of the dynamic resistance curves. \section*{Supplementary Materials} The Supplementary Materials contain additional details about the data generation process in the simulation study (A), additional simulation results (B), as well as the R code to reproduce graphics and results for the competing methods in the simulation study. \section*{Acknowledgments} The present work was developed within the activities of the project ARS01\_00861 ``Integrated collaborative systems for smart factory - ICOSAF'' coordinated by CRF (Centro Ricerche Fiat Scpa - \texttt{www.crf.it}) and financially supported by MIUR (Ministero dell’Istruzione, dell’Università e della Ricerca). \bibliographystyle{apalike} \setlength{\bibsep}{5pt plus 0.3ex} {\small \section{Details on Data Generation in the Simulation Study} \label{sec_appB} The data generation process is inspired by the real-case study in Section 4 and mimics typical behaviours of DRCs in a RSW process. The data correlation structure is generated similarly to \cite{centofanti2020functional,chiou2014multivariate}.
The compact domain $\mathcal{T}$ is set, without loss of generality, equal to $\left[0,1\right]$ and the number of components $p$ is set equal to 10. The eigenfunction set $\lbrace \bm{\psi}_i\rbrace $ is generated by considering the correlation function $\bm{G}$ through the following steps. \begin{enumerate} \item Set the diagonal elements $G_{ll}$, $l=1,\dots,p$ of $\bm{G}$ as the \textit{Bessel} correlation function of the first kind \citep{abramowitz1964handbook}. The general form of the correlation function and the parameters used are listed in Table \ref{ta_corf}. Then, calculate the eigenvalues $\lbrace\eta_{lk}\rbrace$ and the corresponding eigenfunctions $\lbrace\vartheta_{lk}\rbrace$, $k=1,2,\dots$, of $G_{ll}$, $l=1,\dots,p$. \item Obtain the cross-correlation function $G_{lj}$, $l,j=1,\dots,p$ and $l\neq j$, by \begin{equation} G_{lj}\left(t_1,t_2\right)=\sum_{k=1}^{\infty}\frac{\eta_{k}}{1+|l-j|}\vartheta_{lk}\left(t_{1}\right)\vartheta_{jk}\left(t_{2}\right)\quad t_1,t_2\in\mathcal{T}. \end{equation} \item Calculate the eigenvalues $\lbrace\lambda_i\rbrace$ and the corresponding eigenfunctions $\lbrace \bm{\psi}_i\rbrace $ through the spectral decomposition of $\bm{G}=\lbrace G_{lj}\rbrace_{l,j=1,\dots,p}$, for $i=1,\dots,L^{*}$. \end{enumerate} \begin{table} \caption{Bessel correlation function and parameters for data generation in the simulation study.} \label{ta_corf} \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ccc} \toprule &$\rho$&$\nu$\\ \midrule $J_{\nu}\left(z\right)=\left(\frac{|z|/\rho}{2}\right)^{\nu}\sum_{j=0}^{\infty}\frac{\left(-\left(|z|/\rho\right)^{2}/4\right)^{j}}{j!\Gamma\left(\nu+j+1\right)}$&0.125&0\\[.35cm] \bottomrule \end{tabular}} \end{table} Further, $L^{*}$ is set equal to $10$. Let $\bm{Z}=\left(Z_1,\dots,Z_p\right)$ be defined as \begin{equation} \bm{Z}=\sum_{i=1}^{L^{*}}\xi_i\bm{\psi}_i,
\end{equation} with $\bm{\xi}_{L^{*}}=\left(\xi_1,\dots,\xi_{L^{*}}\right)^{T}$ generated by means of a multivariate normal distribution with covariance $\Cov\left(\bm{\xi}_{L^{*}}\right)=\bm{\Lambda}=\diag\left(\lambda_1,\dots,\lambda_{L^{*}}\right)$. Furthermore, let the mean process $m$ be \begin{multline} m(t)= 0.2074 + 0.3117\exp(-371.4t) +0.5284(1 - \exp(0.8217t))\\ -423.3\left[1 + \tanh(-26.15(t+0.1715)) \right]\quad t\in\mathcal{T}. \end{multline} Note that the mean function $m$ is generated to resemble a typical DRC through the phenomenological model for the RSW process presented in \cite{schwab2012improving}. Then, the contamination models $C_E$ and $C_P$, which mimic a splash weld (expulsion) and a phase shift of the peak time, respectively, are defined as \begin{equation} C_E(t)=\min\Big\lbrace 0, -2M_E(t-0.5)\Big\rbrace \quad t\in\mathcal{T}, \end{equation} and \begin{multline} C_P(t)= -m(t)-(M_P/20)t + 0.2074\\ + 0.3117\exp(-371.4h(t)) +0.5284(1 - \exp(0.8217h(t)))\\ -423.3\left[1 + \tanh(-26.15(h(t)+0.1715))\right] \quad t\in\mathcal{T}, \end{multline} where $h:\mathcal{T}\rightarrow\mathcal{T}$ transforms the temporal dimension $t$ as follows \begin{equation} h(t)= \begin{cases} t & \text{if } t\leq 0.05 \\ \frac{0.55-M_P}{0.55}(t-0.05)+0.05 & \text{if } 0.05< t\leq 0.6 \\ \frac{0.4+M_P}{0.4}t+1-\frac{0.4+M_P}{0.4} & \text{if } t> 0.6, \\ \end{cases} \end{equation} and $M_E$ and $M_P$ are contamination sizes.
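For concreteness, the mean function $m$, the time transformation $h$ (coded here in its continuous form, with $h(0.05)=0.05$ and $h(1)=1$), and the two contamination models can be implemented directly from the formulas above; a sketch, noting that $C_P(t)=m(h(t))-m(t)-(M_P/20)t$:

```python
import numpy as np

def m(t):
    """Mean function resembling a typical DRC (phenomenological RSW model)."""
    return (0.2074 + 0.3117 * np.exp(-371.4 * t)
            + 0.5284 * (1 - np.exp(0.8217 * t))
            - 423.3 * (1 + np.tanh(-26.15 * (t + 0.1715))))

def C_E(t, M_E):
    """Expulsion (splash weld) contamination model: nonpositive dip after t = 0.5."""
    return np.minimum(0.0, -2 * M_E * (t - 0.5))

def h(t, M_P):
    """Piecewise-linear time transformation shifting the peak; maps [0,1] to [0,1]."""
    a = (0.55 - M_P) / 0.55
    b = (0.4 + M_P) / 0.4
    return np.where(t <= 0.05, t,
           np.where(t <= 0.6, a * (t - 0.05) + 0.05, b * t + 1 - b))

def C_P(t, M_P):
    """Peak-time phase-shift contamination model."""
    return m(h(t, M_P)) - m(t) - (M_P / 20) * t

t = np.linspace(0, 1, 100)
```

For $M_P=0$ the transformation $h$ reduces to the identity and $C_P$ vanishes, as expected for the uncontaminated case.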
Then, $\bm{X}=\left(X_1,\dots,X_p\right)^T$ is obtained as follows \begin{equation} \label{eq_modgen} \bm{X}\left(t\right)= \bm{m}(t) +\bm{Z}\left(t\right)\sigma+\bm{\varepsilon}\left(t\right) +B_{CaE}\bm{C}_E(t)+B_{CaP}\bm{C}_P(t) \quad t\in \mathcal{T}, \end{equation} where $\bm{m}$ is a $p$-dimensional vector with components equal to $m$, $\sigma>0$, $\bm{\varepsilon}=\left(\varepsilon_1,\dots,\varepsilon_p\right)^T$, where the $\varepsilon_i$ are white noise functions such that, for each $ t \in \left[0,1\right] $, $ \varepsilon_i\left(t\right) $ are normal random variables with zero mean and standard deviation $ \sigma_e $, and $B_{CaE}$ and $B_{CaP}$ are two independent random variables following Bernoulli distributions with parameters $p_{CaE}$ and $p_{CaP}$, respectively. Moreover, $\bm{C}_E=\left(B_{1,CeE}C_E,\dots,B_{p,CeE}C_E\right)^T$ and $\bm{C}_P=\left(B_{1,CeP}C_P,\dots,B_{p,CeP}C_P\right)^T$, where $\lbrace B_{i,CeE}\rbrace$ and $\lbrace B_{i,CeP}\rbrace$ are two sets of independent random variables following Bernoulli distributions with parameters $p_{CeE}$ and $p_{CeP}$, respectively. Then, the Phase I and Phase II samples are generated through Equation \eqref{eq_modgen} by considering the parameters listed in Table \ref{ta_1} and Table \ref{ta_2}, respectively, with $\sigma_e=0.0025$ and $\sigma=0.01$. The parameter $\tilde{p}$ is the probability of contamination, which is set equal to 0.05 for the data analysed in Section 3 and to 0.1 for those in Supplementary Material B.
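Sampling from Equation \eqref{eq_modgen} can then be sketched as follows. For brevity, the smooth component $\bm{Z}$ is replaced by a generic smooth Gaussian process and the mean profile by a simple placeholder, so only the Bernoulli contamination structure (a casewise indicator with probability $p_{CaE}$ and cellwise indicators with probability $p_{CeE}$, for the expulsion model only) mirrors the text:

```python
import numpy as np

def sample_phase1(n, p=10, p_ca=1.0, p_ce=0.05, M_E=0.04,
                  sigma=0.01, sigma_e=0.0025, n_t=100, seed=0):
    """Sample n discretized p-variate profiles from a simplified version of the
    generative model: mean + smooth component + white noise + Bernoulli-activated
    expulsion contamination."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n_t)
    mean = 0.2 + 0.05 * np.sin(2 * np.pi * t)    # placeholder mean profile
    c_e = np.minimum(0.0, -2 * M_E * (t - 0.5))  # expulsion contamination model
    X = np.empty((n, p, n_t))
    for i in range(n):
        # smooth Gaussian component (placeholder for the Bessel-correlation
        # Karhunen-Loeve expansion of Z)
        z = np.cumsum(rng.normal(size=(p, n_t)), axis=1) / np.sqrt(n_t)
        B_ca = rng.random() < p_ca               # casewise activation indicator
        B_ce = rng.random(p) < p_ce              # cellwise indicators, one per component
        X[i] = (mean + sigma * z
                + rng.normal(scale=sigma_e, size=(p, n_t))
                + (B_ca * B_ce)[:, None] * c_e)
    return t, X

t, X = sample_phase1(n=50)
```

Observations are returned on a grid of 100 equally spaced time points, matching the discretization used in the simulation study.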
\begin{table} \caption{Parameters used to generate the Phase I sample in Scenario 1 and Scenario 2 of the simulation study.} \label{ta_1} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{cM{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}} \toprule &\multicolumn{4}{c}{Scenario 1}& \multicolumn{4}{c}{Scenario 2}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} &\multicolumn{2}{c}{Out E}&\multicolumn{2}{c}{Out P}&\multicolumn{2}{c}{Out E}&\multicolumn{2}{c}{Out P}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=p_{CaP}=1$,\\ $p_{CeE}=\tilde{p}$, $p_{CeP}=0$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=p_{CaP}=1$,\\ $p_{CeE}=0$, $p_{CeP}=\tilde{p}$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=\tilde{p}$, $p_{CaP}=0$,\\ $p_{CeE}=p_{CeP}=1$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=0$, $p_{CaP}=\tilde{p}$, \\$p_{CeE}=p_{CeP}=1$}}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} &$M_E$&$M_P$&$M_E$&$M_P$&$M_E$&$M_P$&$M_E$&$M_P$\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} C1&0.04 &0.00&0.00&0.40&0.02 &0.00&0.00 &0.20\\ C2&0.06 & 0.00&0.00&0.45&0.03 &0.00&0.00 &0.30\\ C3&0.08 & 0.00&0.00&0.50&0.04 &0.00&0.00 &0.40\\ \bottomrule \end{tabular}} \end{table} \begin{table} \caption{Parameters used to generate the Phase II sample for OC E and OC P and severity level $SL = \lbrace 0, 1, 2, 3, 4 \rbrace$ in the simulation study.} \label{ta_2} \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{cM{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}} \toprule &\multicolumn{2}{c}{OC E}&\multicolumn{2}{c}{OC P}\\ \cmidrule(lr){2-5} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=1$, $p_{CaP}=0$,\\ $p_{CeE}=1$, $p_{CeP}=0$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=0$, $p_{CaP}=1$,\\ $p_{CeE}=0$, $p_{CeP}=1$}} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} $SL$&$M_E$&$M_P$&$M_E$&$M_P$\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}
0&0.00 &0.00&0.00&0.00\\ 1&0.01 & 0.00&0.00&0.20\\ 2&0.02 & 0.00&0.00&0.27\\ 3&0.03 & 0.00&0.00&0.34\\ 4&0.04 & 0.00&0.00&0.40\\ \bottomrule \end{tabular}} \end{table} Moreover, in Scenario 0 of the simulation study, data are generated with $p_{CaE}=p_{CaP}=p_{CeE}=p_{CeP}=0$. Finally, the generated data are assumed to be discretely observed at 100 equally spaced time points over the domain $\left[0,1\right]$. \section{Additional Simulation Results} In this section, we present additional simulations to analyse the performance of the RoMFCC and the competing methods when data are generated as described in Supplementary Material A with probability of contamination $\tilde{p}$ equal to 0.1. The RoMFCC is implemented as described in Section 3. Figure \ref{fi_results_2_1} and Figure \ref{fi_results_3_1} show, for Scenario 1 and Scenario 2 respectively, the mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3) and OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination models Out E and Out P.
\begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 1 for $\tilde{p}=0.1$.} \label{fi_results_2_1} \centering \hspace{-2.5cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination 
model Out E and Out P in Scenario 2 for $\tilde{p}=0.1$.} \label{fi_results_3_1} \centering \hspace{-2.5cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} By comparing Figure \ref{fi_results_2_1} and Figure \ref{fi_results_3_1} with Figure 4 and Figure 5, the RoMFCC confirms itself as the best method in all the considered settings. Specifically, the proposed method, unlike the competing methods, is almost insensitive to the fact that a large fraction of the data in the Phase I sample is now composed of either cellwise or casewise outliers.
On the contrary, the competing methods are strongly affected by the different probability of contamination and show overall worse performance than in Section 3; thus, they are totally inappropriate for monitoring the multivariate functional quality characteristic in this setting. This further confirms what was shown in Section 3, i.e., that in this simulation study the RoMFCC performance is almost independent of the contamination of the Phase I sample. As in Section 3, also in this case, performance differences between the proposed and competing methods are less pronounced in Scenario 2 due to the less severe contamination produced by functional casewise outliers. However, the RoMFCC still outperforms all the competing methods and, thus, stands out as the best method. \bibliographystyle{chicago} {\small \section{Introduction} \label{sec_intro} In modern industrial applications, data acquisition systems allow the collection of massive amounts of data at high frequency. Several examples may be found in the current Industry 4.0 framework, which is reshaping the variety of signals and measurements that can be gathered during manufacturing processes. The focus in many of these applications is statistical process monitoring (SPM), whose main aim is to quickly detect unusual conditions in a process when special causes of variation act on it, i.e., when the process is out-of-control (OC). On the contrary, when only common causes are present, the process is said to be in-control (IC). In this context, the experimental measurements of the quality characteristic of interest are often characterized by complex and high-dimensional formats that can be well represented as functional data or profiles \citep{ramsay2005functional,kokoszka2017introduction}.
The simplest approach for monitoring one or multiple functional variables is based on the extraction of scalar features from each profile, e.g., the mean, followed by the application of classical SPM techniques for multivariate data \citep{montgomery2012statistical}. However, feature extraction is known to be problem-specific and arbitrary, and it risks compressing useful information. Thus, there is a growing interest in \textit{profile monitoring} \citep{noorossana2011statistical}, whose aim is to monitor a process when the quality characteristic is best characterized by one or multiple profiles. Some recent examples of profile monitoring applications can be found in \cite{menafoglio2018profile,capezza2020control,capezza2021functional_qrei,capezza2021functional_clustering,centofanti2020functional}. The main tools for SPM are control charts, which are implemented in two phases. In Phase I, historical process data are used to set the control chart limits to be used in Phase II, i.e., the actual monitoring phase, where observations falling outside the control limits are signaled as OC. In classical SPM applications the historical Phase I data are assumed to come from an IC process. However, this assumption is not always valid. As an example, let us consider the motivating real-world application, detailed in Section \ref{sec_real}, which concerns the SPM of a resistance spot welding (RSW) process in the assembly of automobiles. RSW is the most common technique employed in joining metal sheets during body-in-white assembly of automobiles \citep{zhou2014study}, mainly because of its adaptability for mass production \citep{martin}. Among on-line measurements of RSW process parameters, the so-called dynamic resistance curve (DRC) is recognized as the full technological signature of the metallurgical development of a spot weld \citep{dickinson,capezza2021functional_clustering} and, thus, can be used to characterize the quality of a finished sub-assembly.
Figure \ref{fig_drc} shows 100 DRCs corresponding to 10 different spot welds, measured in $m\Omega$, that are acquired during the RSW process from the real-case study presented in Section \ref{sec_real}. Several outliers, which should be taken into account to set up an effective SPM strategy, are clearly visible. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{fig/fig1.pdf} \caption{Sample of 100 DRCs, measured in $m\Omega$, that are acquired during the RSW process from the real-case study in Section \ref{sec_real}. The different panels refer to the corresponding different spot welds, denoted with names from V1 to V10.} \label{fig_drc} \end{center} \end{figure} Indeed, control charts are very sensitive to the presence of outlying observations in Phase I, which can lead to inflated control limits and reduced power to detect process changes in Phase II. To deal with outliers in the Phase I sample, SPM methods use two common alternatives, namely the diagnostic and the robust approaches \citep{kruger2012statistical,hubert2015multivariate}. The diagnostic approach is based on standard estimates computed after the removal of sample units identified as outliers, which translates into SPM methods where iterative re-estimation procedures are considered in Phase I. This approach can often be safely applied to eliminate the effect of a small number of very extreme observations. However, it will fail to detect more moderate outliers that are not always easy to label correctly. On the contrary, the robust approach accepts all the data points and tries to find a robust estimator that reduces the impact of outliers on the final results \citep{maronna2019robust}. Several robust approaches for the SPM of a multivariate scalar quality characteristic have been proposed in the literature. \cite{alfaro2009comparison} present a comparison of robust alternatives to the classical Hotelling's control chart.
It includes two alternative Hotelling's $T^2$-type control charts for individual observations, proposed by \cite{vargas2003robust} and \cite{jensen2007high}, which are based on the minimum volume ellipsoid and the minimum covariance determinant estimators \citep{rousseeuw1984least}, respectively. Moreover, the comparison includes the control chart based on the reweighted minimum covariance determinant (RMCD) estimators, proposed by \cite{chenouri2009multivariate}. More recently, \cite{cabana2021robust} propose an alternative robust Hotelling's $T^2$ procedure using the robust shrinkage reweighted estimator. Although \cite{kordestani2020monitoring} and \cite{moheghi2020phase} propose robust estimators for monitoring simple linear profiles, to the best of the authors' knowledge, a robust approach able to successfully capture the functional nature of a multivariate functional quality characteristic has not been devised in the SPM literature so far. Beyond the SPM literature, several works have been proposed to deal with outlying functional observations. Several methods extend the classical linear combination type estimators (i.e., L-estimators) \citep{maronna2019robust} to the functional setting to robustly estimate the center of a functional distribution through trimming \citep{fraiman2001trimmed,cuesta2006impartial} and functional data depths \citep{cuevas2009depth,lopez2011half}. \cite{sinova2018m} introduce the notion of maximum likelihood type estimators (i.e., M-estimators) in the functional data setting. More recently, \cite{centofanti2021rofanova} propose a robust functional ANOVA method that reduces the weights of outlying functional data on the results of the analysis. Regarding functional principal component analysis (FPCA), robust approaches are classified by \cite{boente2021robust} into three groups, depending on the specific property of principal components on which they focus.
Methods in the first group perform the eigenanalysis of a robust estimator of the scatter operator, such as the spherical principal components method of \cite{locantore1999robust} and the indirect approach of \cite{sawant2012functional}. The latter performs a robust PCA method, e.g., ROBPCA \citep{hubert2005robpca}, on the matrix of the basis coefficients corresponding to a basis expansion representation of the functional data. The second group includes projection-pursuit approaches \citep{hyndman2007robust}, which sequentially search for the directions that maximize a robust estimator of the spread of the data projections. The third group is composed of methods that estimate the principal component spaces by minimizing a robust reconstruction error measure \citep{lee2013m}. Finally, it is worth mentioning diagnostic approaches for functional outlier detection, which have been proposed for both univariate \citep{hyndman2010rainbow,arribas2014shape,febrero2008outlier} and multivariate functional data \citep{hubert2015multivariate,dai2018multivariate,aleman2022depthgram}. In the presence of many functional variables, the lack of robust approaches that deal with outliers is exacerbated by the curse of dimensionality. Traditional multivariate robust estimators assume a casewise contamination model for the data, which consists of a mixture of two distributions, where the majority of the cases are free of contamination and the minority mixture component describes an unspecified outlier generating distribution. \cite{alqallaf2009propagation} show that these traditional estimators are affected by the problem of propagation of outliers. In situations where only a small number of cases are contaminated, the traditional robust approaches work well.
However, under an independent contamination model such as cellwise outliers (i.e., contamination in each variable is independent of the other variables), when the dimensionality of the data is high, the fraction of perfectly observed cases can be rather small and the traditional robust estimators may fail. Moreover, \cite{agostinelli2015robust} point out that both types of data contamination, casewise and cellwise, may occur together. This problem has been addressed in the multivariate scalar setting by \cite{agostinelli2015robust}, who propose a two-step method. In the first step, a univariate filter is used to eliminate large cellwise outliers, i.e., detection and replacement by missing values; then, in the second step, a robust estimation procedure, specifically designed to deal with missing data, is applied to the incomplete data. \cite{leung2016robust} notice that the univariate filter does not handle moderate-size cellwise outliers well; therefore, they introduce for the first step a consistent bivariate filter to be used in combination with the univariate filter. \cite{rousseeuw2018detecting} propose a method for detecting deviating data cells that takes the correlations between the variables into account, whereas \cite{tarr2016robust} devise a method for robust estimation of precision matrices under cellwise contamination. Other methods that consider cellwise outliers have been developed for regression and classification \citep{filzmoser2020cellwise,aerts2017cellwise}. To deal with multivariate functional outliers, in this paper we propose a new framework, referred to as robust multivariate functional control chart (RoMFCC), for SPM of multivariate functional data that is robust to both functional casewise and cellwise outliers. The latter correspond to a contamination model where outliers arise in each variable independently of the other functional variables.
Specifically, to deal with functional cellwise outliers, the proposed framework considers an extension of the filtering approach proposed by \cite{agostinelli2015robust} to univariate functional data and an imputation method inspired by the robust imputation technique of \cite{branden2009robust}. Moreover, it also considers a robust multivariate functional principal component analysis (RoMFPCA) based on the ROBPCA method \citep{hubert2005robpca}, and a profile monitoring strategy built on the Hotelling's $T^2$ and the squared prediction error ($SPE$) control charts \citep{noorossana2011statistical,grasso2016using,centofanti2020functional,capezza2020control,capezza2021functional_qrei,capezza2022funchartspaper}. A Monte Carlo simulation study is performed to quantify the probability of signal (i.e., detecting an OC observation) of the RoMFCC in identifying mean shifts in the functional variables in the presence of both casewise and cellwise outliers. Moreover, the proposed RoMFCC is compared with other control charts already present in the literature. Finally, the practical applicability of the proposed control chart is illustrated on the motivating real-case study in the automotive manufacturing industry. In particular, the RoMFCC is shown to adequately identify a drift in the manufacturing process due to electrode wear. The article is structured as follows. Section \ref{sec_method} introduces the proposed RoMFCC framework. In Section \ref{sec_sim}, a simulation study is presented where the RoMFCC is compared to other popular competing methods. The real-case study in the automotive industry is presented in Section \ref{sec_real}. Section \ref{sec_conclusions} concludes the article. Supplementary materials for this article are available online. All computations and plots have been obtained using the programming language R \citep{r2021}.
\section{The Robust Multivariate Functional Control Chart Framework} \label{sec_method} The proposed RoMFCC is a new general framework for SPM of multivariate functional data that is able to deal with both functional casewise and cellwise outliers. It relies on the following four main elements. \begin{enumerate}[label=(\Roman*)] \item \textit{Functional univariate filter}, which is used to identify functional cellwise outliers to be replaced by missing components. \item \textit{Robust functional data imputation}, where a robust imputation method is applied to the incomplete data to replace missing values. \item \textit{Casewise robust dimensionality reduction}, which reduces the infinite dimensionality of the multivariate functional data by being robust towards casewise outliers. \item \textit{Monitoring strategy}, to appropriately monitor multivariate functional data. \end{enumerate} In what follows, we describe a specific implementation of the RoMFCC framework where (I) an extension of the filtering proposed by \cite{agostinelli2015robust}, referred to as functional univariate filter (FUF), is considered; (II) a robust functional data imputation method, referred to as RoFDI and based on the robust imputation technique of \cite{branden2009robust}, is used; (III) the RoMFPCA is considered as the casewise robust dimensionality reduction method. Finally, (IV) the multivariate functional data are monitored through the profile monitoring approach based on the simultaneous application of the Hotelling's $T^{2}$ and the squared prediction error ($SPE$) control charts. For ease of presentation, the RoMFPCA is presented in Section \ref{sec_RoMFPCA}. Then, Section \ref{sec_univfilter}, Section \ref{sec_dataimpu}, and Section \ref{sec_monstr} describe the FUF, the RoFDI method, and the monitoring strategy, respectively. Section \ref{sec_propmeth} details Phases I and II of the proposed implementation of the RoMFCC framework where the elements (I)-(IV) are put together.
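To fix ideas, the four elements above can be arranged in a minimal pipeline skeleton. The following Python sketch is purely illustrative (the computations in this article were carried out in R): all function names are hypothetical, and the filter, imputation, and dimensionality reduction bodies are placeholders (here a classical, non-robust PCA) standing in for the FUF, RoFDI, and RoMFPCA described in the following sections.

```python
import numpy as np

# Hypothetical skeleton of the four RoMFCC elements (I)-(IV). Names and
# bodies are illustrative placeholders, not the authors' implementation.

def functional_univariate_filter(X):
    """(I) Flag functional cellwise outliers and mark them as missing
    (placeholder: no cell is flagged)."""
    return X

def robust_functional_imputation(X):
    """(II) Robustly impute missing cells (placeholder: replace NaN by 0)."""
    return np.nan_to_num(X)

def robust_mfpca(X, n_components):
    """(III) Casewise robust dimensionality reduction (placeholder:
    classical PCA on the flattened curves)."""
    Xc = X.reshape(X.shape[0], -1)
    mu = Xc.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
    comps = Vt[:n_components]
    variances = (s[:n_components] ** 2) / (Xc.shape[0] - 1)
    return mu, comps, variances

def monitoring_statistics(x_new, mu, comps, variances):
    """(IV) Hotelling's T^2 and SPE statistics for one observation."""
    z = x_new.reshape(-1) - mu
    scores = comps @ z
    t2 = float(np.sum(scores ** 2 / variances))
    spe = float(np.sum((z - comps.T @ scores) ** 2))
    return t2, spe

# Toy run: n = 50 cases, p = 2 functional variables on a 30-point grid.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2, 30))
X = robust_functional_imputation(functional_univariate_filter(X))
mu, comps, variances = robust_mfpca(X, n_components=3)
t2, spe = monitoring_statistics(X[0], mu, comps, variances)
```

In the actual framework each placeholder is replaced by its robust counterpart; only the overall data flow is the point here.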
\subsection{Robust Multivariate Functional Principal Component Analysis} \label{sec_RoMFPCA} Let $\bm{X}=\left(X_1,\dots, X_p\right)^{T}$ be a random vector with realization in the Hilbert space $\mathbb{H}$ of $p$-dimensional vectors of functions defined on the compact set $\mathcal{T}\subset\mathbb{R}$ with realizations in $L^2(\mathcal{T})$, i.e., the Hilbert space of square-integrable functions defined on $\mathcal{T}$. Accordingly, the inner product of two functions $f$ and $g$ in $L^{2}\left(\mathcal{T}\right)$ is $\langle f,g\rangle=\int_{\mathcal{T}}f\left(t\right)g\left(t\right)dt$, and the norm is $\lVert \cdot \rVert=\sqrt{\langle \cdot,\cdot\rangle}$. The inner product of two function vectors $\mathbf{f}=\left(f_1,\dots,f_p\right)^{T}$ and $\mathbf{g}=\left(g_1,\dots,g_p\right)^{T}$ in $\mathbb{H}$ is $\langle \mathbf{f},\mathbf{g} \rangle _{\mathbb{H}}=\sum_{j=1}^{p}\langle f_j,g_j\rangle$ and the norm is $\lVert \cdot \rVert_{\mathbb{H}}=\sqrt{\langle \cdot,\cdot\rangle_{\mathbb{H}}}$. We assume that $\bm{X}$ has mean $\bm{\mu}=\left(\mu_1,\dots,\mu_p\right)^T$, $\mu_i(t)=\E(X_i(t))$, $t\in\mathcal{T}$ and covariance $\bm{G}=\lbrace G_{ij}\rbrace_{1\leq i,j \leq p}$, $G_{ij}(s,t)=\Cov(X_i(s),X_j(t))$, $s,t\in \mathcal{T}$. In what follows, to account for differences in degrees of variability and units of measurement among $X_1,\dots, X_p$, the transformation approach of \cite{chiou2014multivariate} is considered. Specifically, consider the vector of standardized variables $\bm{Z}=\left(Z_1,\dots,Z_p\right)^T$, $Z_i(t)=v_i(t)^{-1/2}(X_i(t)-\mu_i(t))$, with $v_i(t)=G_{ii}(t,t)$, $t\in\mathcal{T}$.
Then, from the multivariate Karhunen-Lo\`{e}ve theorem \citep{happ2018multivariate} it follows that \begin{equation*} \bm{Z}(t)=\sum_{l=1}^{\infty} \xi_l\bm{\psi}_l(t),\quad t\in\mathcal{T}, \end{equation*} where $\xi_l=\langle \bm{\psi}_l, \bm{Z}\rangle_{\mathbb{H}} $ are random variables, called \textit{principal component scores} or simply \textit{scores}, such that $\E\left( \xi_l\right)=0$ and $\E\left(\xi_l \xi_m\right)=\lambda_{l}\delta_{lm}$, with $\delta_{lm}$ the Kronecker delta. The elements of the orthonormal set $\lbrace \bm{\psi}_l\rbrace $, $\bm{\psi}_l=\left(\psi_{l1},\dots,\psi_{lp}\right)^T$, with $\langle \bm{\psi}_l,\bm{\psi}_m\rangle_{\mathbb{H}}=\delta_{lm}$, are referred to as \textit{principal components}, and are the eigenfunctions of the covariance $\bm{C}$ of $\bm{Z}$ corresponding to the eigenvalues $\lambda_1\geq\lambda_2\geq \dots\geq 0$. Following the approach of \cite{ramsay2005functional}, the eigenfunctions and eigenvalues of the covariance $\bm{C}$ are estimated through a basis function expansion approach. Specifically, we assume that the functions $Z_j$ and a generic eigenfunction $\bm \psi$ of $\bm{C}$ with components $\psi_{j}$, for $j=1,\dots,p$, can be represented as \begin{equation} \label{eq_appcov} Z_j(t)\approx \sum_{k=1}^{K} c_{jk}\phi_{jk}(t),\quad \psi_{j}(t)\approx \sum_{k=1}^{K} b_{jk}\phi_{jk}(t), \quad t\in\mathcal{T}, \end{equation} where $\bm{\phi}_j=\left(\phi_{j1},\dots,\phi_{jK}\right)^T$, $\bm{c}_j=\left(c_{j1},\dots,c_{jK}\right)^T$ and $\bm{b}_j=\left(b_{j1},\dots,b_{jK}\right)^T$ are the basis functions and coefficient vectors for the expansion of $Z_j$ and $\psi_j$, respectively.
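As a numerical illustration of the basis expansion in Equation \eqref{eq_appcov}, the sketch below evaluates a curve from its coefficient vector on a grid and approximates the $L^2$ inner product by the trapezoidal rule. A Fourier basis and an arbitrary coefficient vector are used only to keep the example self-contained; the real-case study uses cubic B-splines.

```python
import numpy as np

# Evaluate Z_j(t) ≈ Σ_k c_jk φ_jk(t) on a grid. A Fourier basis is a
# stand-in for the cubic B-splines used in the case study.

def fourier_basis(t, K):
    """First K basis functions on [0, 1]: constant, then sine/cosine pairs."""
    B = [np.ones_like(t)]
    for h in range(1, K // 2 + 1):
        B.append(np.sin(2 * np.pi * h * t))
        B.append(np.cos(2 * np.pi * h * t))
    return np.stack(B[:K])                  # shape (K, len(t))

K = 5
t = np.linspace(0.0, 1.0, 101)
c_j = np.array([0.5, 1.0, -0.3, 0.2, 0.1])  # illustrative coefficient vector c_j
Phi = fourier_basis(t, K)
Z_j = c_j @ Phi                             # curve values on the grid

# Grid approximation of the L2 inner product <Z_j, Z_j> = ∫ Z_j(t)^2 dt.
ip = np.trapz(Z_j * Z_j, t)
```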
With these assumptions, standard multivariate functional principal component analysis \citep{ramsay2005functional,chiou2014multivariate} estimates eigenfunctions and eigenvalues of the covariance $\bm{C}$ by performing standard multivariate principal component analysis on the random vector $\bm{W}^{1/2}\bm{c}$, where $\bm{c}=\left(\bm{c}_1^T,\dots,\bm{c}_p^T\right)^T$ and $\bm{W}$ is a block-diagonal matrix with diagonal blocks $\bm{W}_j$, $j=1,\dots,p$, whose entries are $w_{k_1 k_2} = \langle\phi_{k_1},\phi_{k_2}\rangle$, $k_1,k_2=1,\dots,K$. Then, the eigenvalues of $\bm{C}$ are estimated by those of the covariance matrix of $\bm{W}^{1/2}\bm{c}$, whereas the components $\psi_j$ of the generic eigenfunction, with corresponding eigenvalue $\lambda$, are estimated through Equation \eqref{eq_appcov} with $\bm{b}_j=\bm{W}^{-1/2}\bm{u}_j$, where $\bm{u}=\left(\bm{u}_1^T,\dots,\bm{u}_p^T\right)^T$ is the eigenvector of the covariance matrix of $\bm{W}^{1/2}\bm{c}$ corresponding to $\lambda$. However, it is well known that standard multivariate principal component analysis is not robust to outliers \citep{maronna2019robust}, and this lack of robustness carries over to functional principal component analysis, where it is likely to produce misleading results. Extending the approach of \cite{sawant2012functional} to multivariate functional data, the proposed RoMFPCA applies a robust principal component analysis alternative to the random vector $\bm{W}^{1/2}\bm{c}$. Specifically, we consider the ROBPCA approach \citep{hubert2005robpca}, which is a computationally efficient method explicitly conceived to produce estimates with high breakdown in high-dimensional data settings, which almost always arise in the functional context, and to handle a large percentage of contamination.
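The mechanics of the coefficient-based approach just described can be sketched in a few lines: principal component analysis is run on $\bm{W}^{1/2}\bm{c}$, and the eigenvectors are mapped back to basis coefficients via $\bm{b}=\bm{W}^{-1/2}\bm{u}$. For simplicity, the sketch uses a classical eigendecomposition and an orthonormal basis (so $\bm{W}$ is the identity); in the proposed RoMFPCA, the eigendecomposition step is replaced by ROBPCA.

```python
import numpy as np

# PCA on W^{1/2} c, mapping eigenvectors back to basis coefficients via
# b = W^{-1/2} u. A classical eigendecomposition stands in for ROBPCA,
# and the basis is orthonormal, so W is the identity.

rng = np.random.default_rng(1)
n, p, K = 200, 2, 4                      # cases, functional variables, basis size

W = np.eye(p * K)                        # block-diagonal Gram matrix
W_half = np.linalg.cholesky(W)           # W^{1/2} (identity here)

c = rng.normal(size=(n, p * K))          # stacked coefficient vectors c_i
Y = c @ W_half.T                         # realizations of W^{1/2} c

S = np.cov(Y, rowvar=False)              # a robust covariance in RoMFPCA
lams, U = np.linalg.eigh(S)
order = np.argsort(lams)[::-1]
lams, U = lams[order], U[:, order]       # estimated eigenvalues of C, descending

B = np.linalg.solve(W_half, U)           # columns: eigenfunction coefficients b
```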
Thus, given $n$ independent realizations $\bm{X}_i$ of $\bm{X}$, dimensionality reduction is achieved by approximating $\bm{X}_i$ through $\hat{\bm{X}}_i$, for $i=1,\dots,n$, as \begin{equation} \label{eq_appx} \hat{\bm{X}}_i(t)= \hat{\bm{\mu}}(t)+\hat{\bm{D}}(t)\sum_{l=1}^{L} \hat{\xi}_{il}\hat{\bm{\psi}}_l(t), \quad t\in\mathcal{T}, \end{equation} where $\hat{\bm{D}}$ is a diagonal matrix whose diagonal entries are robust estimates $\hat{v}_j^{1/2}$ of $v_j^{1/2}$, $\hat{\bm{\mu}}=\left(\hat{\mu}_1,\dots,\hat{\mu}_p\right)^T$ is a robust estimate of $\bm{\mu}$, $\hat{\bm{\psi}}_l$ are the first $L$ robustly estimated principal components and $\hat{\xi}_{il}= \langle \hat{\bm{\psi}}_l, \hat{\bm{Z}}_i\rangle_{\mathbb{H}}$ are the estimated scores with robustly estimated variances $\hat{\lambda}_l$. The estimates $\hat{\bm{\psi}}_l$ and $\hat{\lambda}_l$ are obtained through the $n$ realizations of $\bm{Z}_i$ estimated by using $\hat{\mu}_j$ and $\hat{v}_j$. The robust estimates $\hat{\mu}_j$ and $\hat{v}_j$ are obtained through the scale equivariant functional $M$-estimator and the functional normalized median absolute deviation estimator proposed by \cite{centofanti2021rofanova}. As in the multivariate setting, $L$ is generally chosen such that the retained principal components $\hat{\bm{\psi}}_1,\dots, \hat{\bm{\psi}}_L$ explain at least a given percentage of the total variability, which is usually in the range 70--90$\%$; however, more sophisticated methods could be used as well (see \cite{jolliffe2011principal} for further details). \subsection{Functional Univariate Filter} \label{sec_univfilter} To extend the filter of \cite{agostinelli2015robust} and \cite{leung2016robust} to univariate functional data, consider $n$ independent realizations $X_i$ of a random function $X$ with values in $L^2(\mathcal{T})$.
The proposed FUF considers the functional distances $D_{i}^{fil}$, $i=1,\dots,n$, defined as \begin{equation} \label{eq_dfil} D_{i}^{fil}=\sum_{l=1}^{L^{fil}} \frac{(\hat{\xi}_{il}^{fil})^2}{\hat{\lambda}_{l}^{fil}}, \end{equation} where the estimated scores $\hat{\xi}_{il}^{fil}= \langle \hat{{\psi}}_{l}^{fil}, \hat{{Z}}_i\rangle$, the estimated eigenvalues $\hat{\lambda}_{l}^{fil}$, the estimated principal components $\hat{{\psi}}_{j}^{fil}$, and the estimated standardized observations $\hat{{Z}}_i$ of ${X}_{i}$ are obtained by applying, with $p=1$, the RoMFPCA described in Section \ref{sec_RoMFPCA} to the sample $X_i$, $i=1,\dots,n$. In this setting, the RoMFPCA is used to appropriately represent distances among the $X_i$'s and not to perform dimensionality reduction; this means that $L^{fil}$ should be sufficiently large to capture a large percentage $\delta^{fil}$ of the total variability. Let $G_n$ be the empirical distribution of $D_{i}^{fil}$, that is \begin{equation*} G_n(x)=\frac{1}{n}\sum_{i=1}^n I(D_{i}^{fil}\leq x), \quad x\geq 0, \end{equation*} where $I$ is the indicator function. Then, functional observations $X_i$ are labeled as cellwise outliers by comparing $G_n(x)$ with $G(x)$, $ x \geq 0$, where $G$ is a reference distribution for $D_{i}^{fil}$. Following \cite{leung2016robust}, we consider the chi-squared distribution with $L^{fil}$ degrees of freedom, i.e., $G=\chi^2_{L^{fil}}$. The proportion of flagged cellwise outliers is defined by \begin{equation*} d_n=\sup_{x\geq \eta}\lbrace G(x)-G_n(x)\rbrace^{+}, \end{equation*} where $\lbrace a\rbrace^{+}$ represents the positive part of $a$, and $\eta=G^{-1}(\alpha)$ is a large quantile of $G$. Following \cite{agostinelli2015robust}, we consider $\alpha=0.95$, as the aim is to detect extreme cellwise outliers, but other choices could be considered as well.
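A dependency-free sketch of the filter, assuming the distances $D_i^{fil}$ have already been computed, is given below. The supremum defining $d_n$ is approximated on the grid of observed distances, and $L^{fil}$ is taken even so that the $\chi^2$ CDF has a closed form; both choices are for illustration only.

```python
import numpy as np
from math import exp, factorial

# Sketch of the FUF flagging rule. Distances D_i are compared with a
# chi-squared reference G; the supremum in d_n is approximated over the
# observed distances, and [n * d_n] cases with the largest distances are
# flagged. L_fil is even so that the chi-squared CDF has a closed form.

def chi2_cdf_even(x, k):
    """Chi-squared CDF for even degrees of freedom k."""
    m = k // 2
    return 1.0 - exp(-x / 2.0) * sum((x / 2.0) ** j / factorial(j) for j in range(m))

def fuf_flags(D, k, alpha=0.95):
    """Indices of observations flagged as functional cellwise outliers."""
    n = len(D)
    order = np.argsort(D)
    G = np.array([chi2_cdf_even(x, k) for x in D[order]])
    Gn_left = np.arange(n) / n            # empirical CDF just below each jump
    mask = G >= alpha                     # restrict to x >= eta = G^{-1}(alpha)
    diffs = np.clip(G[mask] - Gn_left[mask], 0.0, None)
    dn = float(np.max(diffs, initial=0.0))
    n_flag = int(n * dn)                  # [n d_n]
    return order[::-1][:n_flag]

rng = np.random.default_rng(2)
k = 4                                     # plays the role of L_fil
D = rng.chisquare(k, size=500)
D[:10] += 60.0                            # plant 10 extreme distances
flags = fuf_flags(D, k)                   # the planted cases rank first
```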
Finally, we flag $\left[nd_n\right]$ observations with the largest functional distances $D_{i}^{fil}$ as functional cellwise outliers (here $\left[a\right]$ is the largest integer less than or equal to $a$). From the arguments in \cite{agostinelli2015robust} and \cite{leung2016robust}, the FUF is consistent even when the actual distribution of $D_{i}^{fil}$ is unknown; that is, asymptotically the filter will not wrongly flag a cellwise outlier provided that the tail of the chosen reference distribution $G$ is heavier than (or equal to) that of the actual unknown distribution. \subsection{Robust Functional Data Imputation} \label{sec_dataimpu} Consider $n$ independent realizations $\bm{X}_i = \left(X_{i1},\dots,X_{ip}\right)^T$ of a random vector of functions $\bm{X}$ as defined in Section \ref{sec_RoMFPCA}. This section considers the setting where $\bm{X}_i$, $i=1,\dots,n$, may present missing components, i.e., at least one of $X_{i1},\dots,X_{ip}$ is missing. Thus, for each realization $\bm{X}_i$ we can identify the missing $\bm{X}^m_{i}=\left(X_{im_{i1}},\dots,X_{im_{is_i}}\right)^T$ and observed $\bm{X}^o_{i}=\left(X_{io_{i1}},\dots,X_{io_{it_i}}\right)^T$ components with $t_i=p-s_i$, and where $\lbrace m_{ij}\rbrace$ and $\lbrace o_{ij}\rbrace$ are disjoint sets of indices, whose union coincides with the set of indices $ \lbrace 1,\dots,p \rbrace$, that indicate which components of the realization $i$ are either observed or missing. Moreover, we assume that a set $S_c$ of $c$ realizations free of missing components is available. The proposed RoFDI method extends to the functional setting the robust imputation approach of \cite{branden2009robust}, which sequentially estimates the missing part of an incomplete observation such that the imputed observation has minimum distance from the space generated by the complete realizations.
Analogously, starting from $S_c$, we propose that the missing components of the observation $\bm{X}_{\underline{i}}\notin S_c$, which corresponds to the smallest $s_{\underline{i}}$, are sequentially imputed by minimizing, with respect to $\bm{X}^m_{\underline{i}}$, \begin{equation} \label{eq_imp1} D(\bm{X}_{\underline{i}}^m)=\sum_{l=1}^{L^{imp}} \frac{(\hat{\xi}_{\underline{i}l}^{imp})^2}{\hat{\lambda}_{l}^{imp}}, \end{equation} where the estimated scores $\hat{\xi}_{\underline{i}l}^{imp}=\langle \hat{\bm{\psi}}_{l}^{imp}, \hat{\bm{Z}}_{\underline{i}}\rangle_{\mathbb{H}}$, eigenvalues $\hat{\lambda}_{l}^{imp}$, principal components $\hat{\bm{\psi}}_{l}^{imp}$ and standardized observations $\hat{\bm{Z}}_{\underline{i}}$ of $\bm{X}_{\underline{i}}$ are obtained by applying the RoMFPCA (Section \ref{sec_RoMFPCA}) to the observations in $S_c$, which are free of missing data. Analogously to Section \ref{sec_univfilter}, the RoMFPCA is used to define the distance of $\bm{X}_{\underline{i}}$ from the space generated by the observations free of missing data; thus, $L^{imp}$ should be sufficiently large to capture a large percentage $\delta^{imp}$ of the total variability. Because $\hat{\bm{Z}}_{\underline{i}}$ is the standardized version of $\bm{X}_{\underline{i}}$, we can identify the missing $\hat{\bm{Z}}^m_{\underline{i}}$ and observed $\hat{\bm{Z}}^o_{\underline{i}}$ components of $\hat{\bm{Z}}_{\underline{i}}$. Thus, the minimization problem in Equation \eqref{eq_imp1} can be equivalently solved with respect to $\hat{\bm{Z}}_{\underline{i}}^m$, and the resulting solution can be unstandardized to obtain the imputed components of $\bm{X}_{\underline{i}}^m$. Due to the approximations in Equation \eqref{eq_appcov}, $\hat{\bm{Z}}_{\underline{i}}$ is uniquely identified by the coefficient vectors $\bm{c}_{\underline{i}j}$, $j=1,\dots, p$, related to the basis expansions of its components.
Let $\bm{c}^m_{\underline{i}j}$, $j=m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$ and $\bm{c}^o_{\underline{i}j}$, $j=o_{\underline{i}1},\dots,o_{\underline{i}t_{\underline{i}}}$ be the coefficient vectors corresponding to the missing and observed components of $\hat{\bm{Z}}_{\underline{i}}$, respectively, and $ \bm{c}^m_{\underline{i}}=\left(\bm{c}_{\underline{i}m_{\underline{i}1}}^{mT},\dots,\bm{c}_{\underline{i}m_{\underline{i}s_{\underline{i}}}}^{mT}\right)^T$ and $ \bm{c}_{\underline{i}}^o=\left(\bm{c}_{\underline{i}o_{\underline{i}1}}^{oT},\dots,\bm{c}_{\underline{i}o_{\underline{i}t_{\underline{i}}}}^{oT}\right)^T$. Moreover, denote by $\hat{\bm{b}}_{lj}$, $l=1,\dots, L^{imp}$, $j=1,\dots, p$, the coefficient vectors related to the basis expansions of the components of the estimated principal components $\hat{\bm{\psi}}_{l}^{imp}$, and by $\hat{\bm{B}}=\left(\hat{\bm{b}}_1,\dots,\hat{\bm{b}}_{L^{imp}}\right)$ the matrix whose columns are $\hat{\bm{b}}_l=\left(\hat{\bm{b}}_{l1}^T,\dots,\hat{\bm{b}}_{lp}^T\right)^T$. Then, the solution of the minimization problem in Equation \eqref{eq_imp1} is \begin{equation} \label{eq_modim} \hat{\bm{c}}_{\underline{i}}^{m}=-\bm{C}_{mm}^{+}\bm{C}_{mo}\bm{c}_{\underline{i}}^o, \end{equation} where $\bm{A}^{+}$ is the Moore-Penrose inverse of the matrix $\bm{A}$, and $\bm{C}_{mm}$ and $\bm{C}_{mo}$ are the matrices constructed by taking the columns $m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$ and $o_{\underline{i}1},\dots,o_{\underline{i}t_{\underline{i}}}$, respectively, of the matrix composed of the rows $m_{\underline{i}1},\dots,m_{\underline{i}s_{\underline{i}}}$ of \begin{equation*} \bm{C}=\bm{W}\hat{\bm{B}}\hat{\bm{\Lambda}}^{-1}\hat{\bm{B}}^T\bm{W}, \end{equation*} with $\bm{W}$ the block-diagonal matrix defined in Section \ref{sec_RoMFPCA}, and $\hat{\bm{\Lambda}}$ the diagonal matrix whose diagonal entries are the estimated eigenvalues $\hat{\lambda}_{l}^{imp}$, $l=1,\dots,L^{imp}$.
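The closed-form step in Equation \eqref{eq_modim} amounts to a few matrix operations. The sketch below uses illustrative dimensions and an orthonormal basis (so $\bm{W}$ is the identity); the stochastic noise term added later to avoid correlation bias is indicated only as a comment, since it requires the robustly estimated residual covariance.

```python
import numpy as np

# The imputation step c_m = -C_mm^+ C_mo c_o with C = W B Λ^{-1} B^T W.
# Dimensions are illustrative, and the basis is orthonormal (W identity).

rng = np.random.default_rng(3)
p, K, L = 3, 4, 5                        # variables, basis size, components
W = np.eye(p * K)

B_hat, _ = np.linalg.qr(rng.normal(size=(p * K, L)))   # coefficients of psi_l
Lam_inv = np.diag(1.0 / np.linspace(2.0, 0.5, L))      # Λ^{-1}

C = W @ B_hat @ Lam_inv @ B_hat.T @ W

# Suppose the first variable is missing: its K coefficients are rows/cols 0..K-1.
m_idx = np.arange(0, K)
o_idx = np.arange(K, p * K)

C_mm = C[np.ix_(m_idx, m_idx)]
C_mo = C[np.ix_(m_idx, o_idx)]
c_o = rng.normal(size=o_idx.size)        # observed coefficients

c_m_hat = -np.linalg.pinv(C_mm) @ C_mo @ c_o

# Stochastic imputation would then add multivariate normal noise with a
# robustly estimated residual covariance Sigma_res (not computed here):
# c_m_imp = c_m_hat + rng.multivariate_normal(np.zeros(K), Sigma_res)
```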
Moreover, to address the correlation bias issue typical of deterministic imputation approaches \citep{little2019statistical,van2018flexible}, we propose to impute $\bm{c}_{\underline{i}}^m$ through $\bm{c}_{\underline{i}}^{m,imp}=\left(\bm{c}_{\underline{i}1}^{m,imp T},\dots,\bm{c}_{\underline{i}s_{\underline{i}}}^{m,imp T}\right)^T$ as follows \begin{equation} \label{eq_imperr} \bm{c}_{\underline{i}}^{m,imp}=\hat{\bm{c}}_{\underline{i}}^{m}+\bm{\varepsilon}_i, \end{equation} where $\bm{\varepsilon}_i$ is a multivariate normal random variable with mean zero and a residual covariance matrix robustly estimated from the regression residuals of the coefficient vectors of the missing components on those of the observed components for the observations in $S_c$, through the model in Equation \eqref{eq_modim}. Thus, the proposed RoFDI approach is a stochastic imputation method \citep{little2019statistical,van2018flexible}. Then, the components of $\hat{\bm{Z}}_{\underline{i}}^m$ are imputed, for $j=1,\dots,s_{\underline{i}}$, as \begin{equation*} \hat{Z}_{\underline{i}j}^{m,imp}(t)=\left(\bm{c}_{\underline{i}j}^{m,imp}\right)^T\bm{\phi}_{j}(t),\quad t\in\mathcal{T}, \end{equation*} where $\bm{\phi}_{j}$ is the vector of basis functions corresponding to the $j$-th component of $\hat{\bm{Z}}$ (Section \ref{sec_RoMFPCA}), and the imputed missing components of $\bm{X}_{\underline{i}}$ are obtained by unstandardizing $\hat{\bm{Z}}_{\underline{i}}^m$. Once the missing components of $\bm{X}_{\underline{i}}$ are imputed, the whole observation is added to $S_c$ and the next observation, which does not belong to $S_c$ and corresponds to the smallest $s_{\underline{i}}$, is considered.
Similarly to \cite{branden2009robust}, if the cardinality of $S_c$ at the first iteration is sufficiently large, we suggest not updating the RoMFPCA model each time a new imputed observation is added to $S_c$, to avoid an infeasible time complexity of the RoFDI; otherwise, the RoMFPCA model could be updated each time a given number of observations is added to $S_c$. Finally, to take into account the increased noise due to single imputation, the proposed RoFDI can be easily included in a multiple imputation framework \citep{van2018flexible,little2019statistical}; indeed, differently imputed datasets may be obtained by performing the RoFDI several times, due to the presence of the stochastic component $\bm{\varepsilon}_i$ in Equation \eqref{eq_imperr}. \subsection{The Monitoring Strategy} \label{sec_monstr} Element (IV) of the proposed RoMFCC implementation relies on the well-established monitoring strategy for a multivariate functional quality characteristic $\bm{X}$ based on the Hotelling's $ T^2 $ and $ SPE $ control charts. The former assesses the stability of $ \bm{X} $ in the finite dimensional space spanned by the first principal components identified through the RoMFPCA (Section \ref{sec_RoMFPCA}), whereas the latter monitors changes along directions in the complement space. Specifically, the Hotelling's $ T^2 $ statistic for $\bm{X}$ is defined as \begin{equation*} T^2=\sum_{l=1}^{L^{mon}}\frac{(\xi_{l}^{mon})^2}{\lambda_{l}^{mon}}, \end{equation*} where $ \lambda_{l}^{mon} $ are the variances of the scores $ \xi_{l}^{mon}=\langle \bm{\psi}_{l}^{mon}, \bm{Z}\rangle_{\mathbb{H}}$, where $\bm{Z}$ is the vector of standardized variables of $\bm{X}$ and $\bm{\psi}_{l}^{mon}$ are the corresponding principal components as defined in Section \ref{sec_RoMFPCA}. The number $L^{mon}$ is chosen such that the retained principal components explain at least a given percentage $\delta^{mon}$ of the total variability.
The statistic $ T^2$ is the standardized squared distance from the centre of the orthogonal projection of $\bm{Z}$ onto the principal component space spanned by $ \bm{\psi}_{1}^{mon},\dots,\bm{\psi}_{L^{mon}}^{mon}$. The distance between $ \bm{Z} $ and its orthogonal projection onto the principal component space is measured through the $ SPE $ statistic, defined as \begin{equation*} SPE=||\bm{Z}-\hat{\bm{Z}}||_{\mathbb{H}}^2, \end{equation*} where $ \hat{\bm{Z}}=\sum_{l=1}^{L^{mon}}\xi_{l}^{mon}\bm{\psi}_{l}^{mon}$. Under the assumption of multivariate normality of the $\xi_{l}^{mon}$, which is approximately true by the central limit theorem \citep{nomikos1995multivariate}, the control limits of the Hotelling's $ T^2 $ control chart can be obtained by considering the $ (1-\alpha^*) $ quantile of a chi-squared distribution with $L^{mon}$ degrees of freedom \citep{johnson2014applied}. The control limit for the $ SPE $ control chart can be computed by using the following equation \citep{jackson1979control} \begin{equation*} CL_{SPE,\alpha^*}=\theta_1\left[\frac{c_{\alpha^*}\sqrt{2\theta_2h_0^2}}{\theta_1}+1+\frac{\theta_2h_0(h_0-1)}{\theta_1^2}\right]^{1/h_0}, \end{equation*} where $c_{\alpha^*}$ is the normal deviate corresponding to the upper $ (1-\alpha^*) $ quantile, $h_0=1-2\theta_1\theta_3/3\theta_2^2$, and $\theta_j=\sum_{l=L^{mon}+1}^{\infty}(\lambda_{l}^{mon})^j$, $j=1,2,3$. Note that, to control the family-wise error rate (FWER), $ \alpha^* $ should be chosen appropriately. We propose to use the \v{S}id\'ak correction $\alpha^{*}=1-\left(1-\alpha\right)^{1/2}$ \citep{vsidak1967rectangular}, where $\alpha$ is the overall type I error probability.
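The two control limits can be computed directly from the retained and discarded eigenvalues. In the sketch below, the $\chi^2$ quantile for the $T^2$ limit is obtained via the Wilson-Hilferty approximation so that the example needs no external statistical library; in practice an exact quantile function would be used, and the residual eigenvalues are illustrative.

```python
import numpy as np
from statistics import NormalDist

# Control limits for the T^2 and SPE charts with a Šidák-corrected level.
# The chi-squared quantile uses the Wilson-Hilferty approximation to keep
# the sketch dependency-free; residual eigenvalues are illustrative.

def sidak(alpha, n_charts=2):
    """Per-chart significance level controlling the FWER at alpha."""
    return 1.0 - (1.0 - alpha) ** (1.0 / n_charts)

def chi2_quantile_wh(q, k):
    """Approximate chi-squared quantile (Wilson-Hilferty)."""
    z = NormalDist().inv_cdf(q)
    return k * (1.0 - 2.0 / (9.0 * k) + z * np.sqrt(2.0 / (9.0 * k))) ** 3

def spe_limit(residual_eigenvalues, alpha_star):
    """Jackson-Mudholkar limit from the discarded eigenvalues."""
    th1, th2, th3 = (np.sum(residual_eigenvalues ** j) for j in (1, 2, 3))
    h0 = 1.0 - 2.0 * th1 * th3 / (3.0 * th2 ** 2)
    c = NormalDist().inv_cdf(1.0 - alpha_star)
    return th1 * (c * np.sqrt(2.0 * th2 * h0 ** 2) / th1
                  + 1.0 + th2 * h0 * (h0 - 1.0) / th1 ** 2) ** (1.0 / h0)

alpha = 0.05
a_star = sidak(alpha)                        # two charts: T^2 and SPE
L_mon = 3
t2_lim = chi2_quantile_wh(1.0 - a_star, L_mon)
lams_res = np.array([0.5, 0.3, 0.1, 0.05])   # lambda_{L+1}, lambda_{L+2}, ...
spe_lim = spe_limit(lams_res, a_star)
```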
\subsection{The Proposed Method} \label{sec_propmeth} The proposed RoMFCC implementation collects all the elements introduced in the previous sections for the Phase II monitoring strategy, where a set of Phase I observations, which can be contaminated with both functional casewise and cellwise outliers, is used for the design of the control chart. Both phases are outlined in the scheme of Figure \ref{fi_diag} and detailed in the following sections. \begin{figure}[h] \caption{Scheme of the RoMFCC approach.} \label{fi_diag} \centering \includegraphics[width=\textwidth]{fig/diag.png} \end{figure} \subsubsection{Phase I} Let $ \bm{X}_i $, $ i=1,\dots,n $, be the Phase I random sample of the multivariate functional quality characteristic $\bm{X}$, used to characterize the normal operating conditions of the process. It can be contaminated with both functional casewise and cellwise outliers. In the \textit{filtering} step, functional cellwise outliers are identified through the FUF described in Section \ref{sec_univfilter} and then replaced by missing components. Note that, if cellwise outliers are identified for each component of a given observation, then that observation is removed from the sample, because its imputation does not provide any additional information for the analysis. In the \textit{imputation} step, missing components are imputed through the RoFDI method presented in Section \ref{sec_dataimpu}. Once the imputed Phase I sample is obtained, it is used to estimate the RoMFPCA model and perform the \textit{dimensionality reduction} step as described in Section \ref{sec_RoMFPCA}. The Hotelling's $ T^2 $ and $ SPE $ statistics are then computed for each observation in the Phase I sample after the imputation step. Specifically, the values $ T^2_i $ and $ SPE_i $ of the statistics are computed as described in Section \ref{sec_monstr} by considering the estimated RoMFPCA model obtained in the dimensionality reduction step.
Finally, control limits for the Hotelling's $ T^2 $ and $ SPE $ control charts are obtained as described in Section \ref{sec_monstr}. Note that the parameters $\theta_j$ used to estimate $CL_{SPE,\alpha^*}$ are approximated by considering a finite summation up to the maximum number of estimable principal components, which is finite for a sample of $n$ observations. If a multiple imputation strategy is employed by performing the imputation step several times, then the multiple estimated RoMFPCA models could be combined by averaging the robustly estimated covariance functions, as suggested in \cite{van2007two}. Note that, when the sample size $n$ is small compared to the number of process variables, undesirable effects on the performance of the RoMFCC could arise \citep{ramaker2004effect,kruger2012statistical}. To reduce possible overfitting issues and, thus, increase the monitoring performance of the RoMFCC, a reference sample of Phase I observations, referred to as the \textit{tuning set}, could be considered, which is different from the one used in the previous steps, referred to as the \textit{training set}. Specifically, along the lines of \cite{kruger2012statistical}, Chapter 6.4, the tuning set is passed through the filtering and imputation steps and then projected on the RoMFPCA model, estimated through the training set observations, to robustly estimate the distribution of the resulting scores. Finally, the Hotelling's $ T^2 $ and $ SPE $ statistics and control chart limits are calculated by taking into account the estimated distribution of the tuning set scores. \subsubsection{Phase II} In the actual monitoring phase (Phase II), a new observation $ \bm{X}_{new} $ is projected on the RoMFPCA model to compute the values of the $ T^2_{new} $ and $ SPE_{new} $ statistics according to the score distribution identified in Phase I. An alarm signal is issued if at least one between $ T^2_{new} $ and $ SPE_{new} $ violates the control limits.
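The Phase I / Phase II logic can be condensed as follows: limits are set from the empirical distribution of the tuning-set statistics, and a new observation signals an alarm if at least one statistic exceeds its limit. The $T^2$ and $SPE$ values below are simulated placeholders; any computation of the two statistics could be plugged in.

```python
import numpy as np

# Phase I: empirical limits from tuning-set statistics; Phase II: signal
# if at least one statistic violates its limit. The statistic values are
# simulated placeholders.

rng = np.random.default_rng(4)
t2_phase1 = rng.chisquare(3, size=3000)       # tuning-set T^2 values
spe_phase1 = rng.chisquare(5, size=3000)      # tuning-set SPE values

alpha = 0.05
a_star = 1.0 - (1.0 - alpha) ** 0.5           # Šidák correction, two charts
t2_lim = float(np.quantile(t2_phase1, 1.0 - a_star))
spe_lim = float(np.quantile(spe_phase1, 1.0 - a_star))

def signals(t2_new, spe_new):
    """Phase II decision rule: alarm if either limit is violated."""
    return (t2_new > t2_lim) or (spe_new > spe_lim)

no_alarm = signals(1.0, 1.0)                  # both statistics small
alarm = signals(50.0, 1.0)                    # T^2 far beyond its limit
```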
\section{Simulation Study} \label{sec_sim} The overall performance of the proposed RoMFCC is evaluated by means of an extensive Monte Carlo simulation study. The aim of this simulation is to assess the performance of the RoMFCC in identifying mean shifts of the multivariate functional quality characteristic when the Phase I sample is contaminated with both functional cellwise and casewise outliers. The data generation process (detailed in Supplementary Material A) is inspired by the real-case study in Section \ref{sec_real} and mimics typical behaviours of DRCs in a RSW process. Specifically, it considers a multivariate functional quality characteristic with $p=10$ components. Two main scenarios are considered, characterized by different Phase I sample contamination. Specifically, the Phase I sample is contaminated by functional cellwise outliers in Scenario 1 and by functional casewise outliers in Scenario 2, with a contamination probability equal to 0.05 in both cases. Additional results obtained by considering a contamination probability equal to 0.1 are shown in Supplementary Material B. For each scenario, two contamination models, referred to as Out E and Out P, with three increasing contamination levels, referred to as C1, C2 and C3, are considered. The former mimics a splash weld (expulsion), caused by an excessive welding current, while the latter resembles a phase shift of the peak time caused by an increased electrode force \citep{xia2019online}. Moreover, we also consider a scenario, referred to as Scenario 0, representing settings where the Phase I sample is not contaminated. To generate the Phase II sample, two types of OC conditions, referred to as OC E and OC P, are considered, generated analogously to the two contamination models Out E and Out P, respectively, at 4 different severity levels $SL = \lbrace 1, 2, 3, 4 \rbrace$. The proposed RoMFCC implementation is compared with several natural competing approaches.
The first competing approaches are control charts for multivariate scalar data, built on the average value of each component of the multivariate functional data. Among them, we consider the \textit{multivariate classical} Hotelling's $T^2$ control chart, referred to as M; the \textit{multivariate iterative} variant, referred to as Miter, where outliers detected by the control chart in Phase I are iteratively removed until all data are assumed to be IC; and the \textit{multivariate robust} control chart proposed by \cite{chenouri2009multivariate}, referred to as MRo. We also consider two approaches that recently appeared in the profile monitoring literature, i.e., the \textit{multivariate functional control chart}, referred to as MFCC, proposed by \cite{capezza2020control,capezza2022funchartspaper}, and its \textit{iterative} variant, referred to as MFCCiter, where outliers detected by the control chart in Phase I are iteratively removed until all data are assumed to be IC. The RoMFCC is implemented as described in Section \ref{sec_method} with $\delta_{fil}=\delta_{imp}=0.999$ and $\delta_{mon}=0.7$; to take into account the increased noise due to single imputation, 5 differently imputed datasets are generated through RoFDI. Since data are observed as noisy discrete values, each component of the generated quality characteristic observations is obtained by considering the approximation in Equation \eqref{eq_appcov} with $K=10$ cubic B-splines estimated through the spline smoothing approach based on a roughness penalty on the second derivative \citep{ramsay2005functional}. For each scenario, contamination model, contamination level, OC condition and severity level, 50 simulation runs are performed. Each run considers a Phase I sample of 4000 observations, of which, for MFCC, MFCCiter and RoMFCC, 1000 are used as training set and the remaining 3000 as tuning set. The Phase II sample is composed of 4000 i.i.d.
observations. The performances of the RoMFCC and of the competing methods are assessed by means of the true detection rate (TDR), which is the proportion of points outside the control limits whilst the process is OC, and the false alarm rate (FAR), which is the proportion of points outside the control limits whilst the process is IC. The FAR should be as close as possible to the overall type I error probability $\alpha$ considered to obtain the control limits, set equal to 0.05, whereas the TDR should be as close to one as possible. Figures \ref{fi_results_1}-\ref{fi_results_3} display, for Scenario 0, Scenario 1 and Scenario 2, respectively, the mean FAR ($SL=0$) or TDR ($SL\neq0$) as a function of the severity level $SL$ for each OC condition (OC E and OC P), contamination level (C1, C2 and C3) and contamination model (Out E and Out P). \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each OC condition (OC E and OC P) as a function of the severity level $SL$ in Scenario 0.} \label{fi_results_1} \centering \begin{tabular}{cc} \hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}\\ \includegraphics[width=.25\textwidth]{fig/sim_0_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_0_OC_P.pdf} \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 1.} \label{fi_results_2} \centering \hspace{-2.05cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC
P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_cellwise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 2.} \label{fi_results_3} \centering \hspace{-2.05cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ 
\textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_05_casewise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} When the Phase I sample is not contaminated by outliers, Figure \ref{fi_results_1} shows that all the approaches that take into account the functional nature of the data, i.e., MFCC, MFCCiter and RoMFCC, achieve the same performance for both OC conditions. Although this setting should not be favourable to approaches specifically designed to deal with outliers, MFCCiter and RoMFCC perform on par with MFCC. The non-functional approaches, i.e., M, Miter and MRo, show worse performance than their functional counterparts, with no significant performance difference among them. Figure \ref{fi_results_2} shows the results for Scenario 1, where the Phase I sample is contaminated by cellwise outliers. The proposed RoMFCC largely outperforms the competing methods for each contamination model, contamination level and OC condition. As expected, as the contamination level increases, the differences in performance between the RoMFCC and the competing methods increase as well.
Indeed, the performance of the RoMFCC is almost insensitive to the contamination levels as well as to the contamination models, differently from the competing methods, whose performance decreases as the contamination level increases and the contamination model changes. Moreover, the MFCCiter, which is representative of the baseline method in this setting, only slightly improves on the performance of the MFCC. This is probably due to the masking effect that prevents the MFCCiter from iteratively identifying functional cellwise outliers in the Phase I sample and, thus, makes it equivalent to the MFCC. The performance of the non-functional methods is totally unsatisfactory because they are able neither to capture the functional nature of the data nor to successfully deal with functional cellwise outliers. Specifically, M is the worst method overall, closely followed by Miter and MRo. Furthermore, as the contamination level increases, the competing functional methods lose their favourable performance and tend to achieve performance comparable to the non-functional approaches. This shows that, if outliers are not properly dealt with, poor performance can arise independently of the suitability of the methods considered. From Figure \ref{fi_results_3}, also in Scenario 2 the RoMFCC is clearly the best method. However, in this scenario the difference in performance between the RoMFCC and the competing methods is less pronounced than in Scenario 1. This is expected because the Phase I sample is now contaminated by functional casewise outliers, which is the only contamination type against which the competing methods are able to provide a certain degree of robustness. Note that the performance of the RoMFCC is almost unaffected by contamination in the Phase I sample also in this scenario, which proves the ability of the proposed method to deal with both functional cellwise and casewise outliers.
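The FAR and TDR performance measures used throughout this section follow directly from the joint signalling rule (a point signals when at least one of $T^2$ and $SPE$ violates its limit); a minimal sketch, assuming precomputed statistics and limits, with invented helper names:

```python
import numpy as np

def far_tdr(t2, spe, cl_t2, cl_spe, is_oc):
    """FAR = share of signalling points among IC points,
    TDR = share of signalling points among OC points."""
    signal = (np.asarray(t2) > cl_t2) | (np.asarray(spe) > cl_spe)
    is_oc = np.asarray(is_oc, dtype=bool)
    far = signal[~is_oc].mean() if (~is_oc).any() else np.nan
    tdr = signal[is_oc].mean() if is_oc.any() else np.nan
    return far, tdr
```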
\section{Real-Case Study} \label{sec_real} To demonstrate the potential of the proposed RoMFCC in practical situations, a real-case study in the automotive industry is presented henceforth. As introduced in Section \ref{sec_intro}, it addresses the issue of monitoring the quality of the RSW process, which guarantees the structural integrity and solidity of welded assemblies in each vehicle \citep{martin}. The RSW process \citep{zhang2011resistance} is an autogenous welding process in which two overlapping conventional galvanized steel sheets are joined together, without the use of any filler material. Joints are formed by applying pressure to the weld area from two opposite sides by means of two copper electrodes. Voltage applied to the electrodes generates a current flowing between them through the material. As the electrical current flows, the resistance offered by the metal causes significant heat generation (Joule effect) that increases the metal temperature at the faying surfaces of the workpieces up to the melting point. Finally, due to the mechanical pressure of the electrodes, the molten metal of the joined metal sheets cools and solidifies, forming the so-called weld nugget \citep{raoelison}. To monitor the RSW process, the modern automotive Industry 4.0 framework allows the automatic acquisition of a large volume of process parameters. The DRC is considered the most important of these parameters to describe the quality of the RSW process. Further details on how the typical behaviour of a DRC is related to the physical and metallurgical development of a spot weld are provided by \cite{capezza2021functional_clustering}. Data analyzed in this study are courtesy of Centro Ricerche Fiat and were recorded at the Mirafiori Factory during lab tests of RSW processes performed on the body of the Fiat 500BEV.
A body is characterized by a large number of spot welds with different characteristics, e.g., the thickness and material of the sheets to be joined together and the welding time. In this paper, we focus on monitoring a set of ten spot welds made on the body by one of the welding machines. Therefore, for each sub-assembly the multivariate functional quality characteristic is a vector of the ten DRCs corresponding to the second welding pulse of the ten spot welds, normalized on the time domain $ [0, 1]$, for a total number of assemblies equal to 1839. Moreover, resistance measurements were collected on a regular grid of points equally spaced by 1 ms. The RSW process quality is directly affected by electrode wear, since the increase in weld numbers leads to changed electrical, thermal and mechanical contact conditions at the electrode and sheet interfaces \citep{manladan2017review}. Thus, to take the wear issue into account, electrodes go through periodical renovations. In this setting, a paramount issue is the swift identification of DRC mean shifts caused by electrode wear, which could be considered as a criterion for electrode life termination and guide the electrode renovation strategy. In the light of this, the 919 multivariate profiles corresponding to spot welds made immediately before electrode renewal are used to form the Phase I sample, whereas the remaining 920 observations are used in Phase II to evaluate the proposed chart performance. We expect the mean shift of the Phase II DRCs caused by electrode wear to be effectively captured by the proposed control chart. The RoMFCC is implemented as in the simulation study in Section \ref{sec_sim}, with the training and tuning sets, each composed of 460 Phase I observations, randomly selected without replacement. As shown in Figure \ref{fig_drc} of Section \ref{sec_intro}, data are plausibly contaminated by several outliers.
This is further confirmed by Figure \ref{fig_boxplot}, which shows the boxplot of the functional distance square roots $\sqrt{D_{i,fil}}$ (Equation \eqref{eq_dfil}) obtained from the FUF applied to the training set. Some components clearly show the presence of functional cellwise outliers, possibly arranged in groups, while other components seem less severely contaminated. \begin{figure} \begin{center} \includegraphics[width=.75\textwidth]{fig/boxplot.pdf} \caption{Boxplot of the functional distance square roots $\sqrt{D_{i,fil}}$ (Equation \eqref{eq_dfil}) obtained from the FUF applied to the training set.} \label{fig_boxplot} \end{center} \end{figure} Figure \ref{fig_ccreal} shows the application of the proposed RoMFCC. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/control_chart.pdf} \caption{Hotelling's $ T^2 $ and $ SPE $ control charts for the RoMFCC in the real-case study. The vertical line separates the monitoring statistics calculated for the tuning set, on the left, and the Phase II data set on the right, while the horizontal lines define the control limits.} \label{fig_ccreal} \end{figure} The vertical line separates the monitoring statistics calculated for the tuning set, on the left, from those of the Phase II data set, on the right, while the horizontal lines define the control limits. Note that a significant number of tuning set observations, highlighted in red in Figure \ref{fig_ccreal}, are signaled as OC. This is expected because these points may include functional casewise outliers not filtered out by the FUF. In the monitoring phase, many points are signaled as OC by the RoMFCC. In particular, the RoMFCC signals 72.3\% of the observations in the Phase II data set as OC. This shows that the proposed method is particularly sensitive to mean shifts caused by increased electrode wear. Finally, the proposed method is compared with the competing methods presented in the simulation study in Section \ref{sec_sim}.
Table \ref{tab_arlreal} shows the estimated TDR values $\widehat{TDR}$ on the Phase II sample for all the considered competing methods. Similarly to \cite{centofanti2020functional}, the uncertainty of $\widehat{TDR}$ is quantified through a bootstrap analysis \citep{efron1994introduction}. Table \ref{tab_arlreal} also reports the mean $\overline{TDR}$ of the empirical bootstrap distribution of $\widehat{TDR}$ and the corresponding bootstrap 95\% confidence interval (CI) for each monitoring method. \begin{table}[] \centering \resizebox{0.4\textwidth}{!}{ \begin{tabular}{ccccc} \toprule & $\widehat{TDR}$ & $\overline{TDR}$ & CI\\ \midrule M & 0.336 & 0.335 & [0.305,0.368]\\ Miter & 0.462 & 0.461 & [0.428,0.496]\\ MRo & 0.513 & 0.512 & [0.481,0.547]\\ MFCC & 0.541 & 0.541 & [0.511,0.574]\\ MFCCiter & 0.632 & 0.632 & [0.595,0.664]\\ RoMFCC & 0.723 & 0.723 & [0.695,0.753]\\ \bottomrule \end{tabular} } \caption{Estimated TDR values $\widehat{TDR}$ on the Phase II sample, mean $\overline{TDR}$ of the empirical bootstrap distribution of $\widehat{TDR}$, and the corresponding bootstrap 95\% confidence interval (CI) for each monitoring method in the real-case study.} \label{tab_arlreal} \end{table} The bootstrap analysis shows that the RoMFCC outperforms the competing control charts; indeed, its bootstrap 95\% confidence interval is strictly above those of all the other considered monitoring approaches. As in Section \ref{sec_sim}, the non-functional approaches, i.e., M, Miter and MRo, show worse performance than the functional counterparts because they are not able to satisfactorily capture the functional nature of the data, and robust approaches always improve on the non-robust ones. Therefore, the proposed RoMFCC stands out as the best method to promptly identify OC conditions in the RSW process caused by increased electrode wear when the Phase I sample is contaminated by functional outliers.
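A percentile bootstrap for the TDR in the spirit of the analysis above can be sketched as follows; this is an illustrative helper with invented names, operating on the binary signal indicators of the Phase II sample, and is not the exact procedure of \cite{centofanti2020functional}:

```python
import numpy as np

def tdr_bootstrap_ci(signals, n_boot=2000, alpha=0.05, seed=0):
    """Resample the Phase II signal indicators with replacement and
    return the point estimate, the bootstrap mean, and the percentile
    (1 - alpha) confidence interval for the TDR."""
    rng = np.random.default_rng(seed)
    signals = np.asarray(signals, dtype=float)
    n = len(signals)
    boot = np.array([signals[rng.integers(0, n, n)].mean()
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, alpha / 2), np.quantile(boot, 1 - alpha / 2)
    return signals.mean(), boot.mean(), (lo, hi)
```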
\section{Conclusions} \label{sec_conclusions} In this paper, we propose a new robust framework for the statistical process monitoring of multivariate functional data, referred to as \textit{robust multivariate functional control charts} (RoMFCC). The RoMFCC is designed to assess the presence of assignable causes of variation while being robust to both functional casewise and cellwise outliers. The proposed method is suitable for those industrial processes where many functional variables are available and occasional outliers are produced, e.g., by anomalies in the data acquisition or by data collected during a fault in the process. Specifically, the RoMFCC framework is based on four main elements, i.e., a functional univariate filter to identify functional cellwise outliers, which are then replaced by missing values; a robust functional data imputation of these missing values; a casewise robust dimensionality reduction based on ROBPCA; and a monitoring strategy based on the Hotelling's $T^2$ and $SPE$ control charts. These elements are combined in a Phase II monitoring strategy where a set of Phase I observations, which can be contaminated with both functional casewise and cellwise outliers, is used for the design of the control chart. To the best of the authors' knowledge, the RoMFCC framework is the first monitoring scheme able to monitor a multivariate functional quality characteristic while being robust to functional casewise and cellwise outliers. Indeed, methods already present in the literature either apply robust approaches to multivariate scalar features extracted from the profiles or use diagnostic approaches on the multivariate functional data to iteratively remove outliers. However, the former are not able to capture the functional nature of the data, while the latter are not able to deal with functional cellwise outliers.
The performance of the RoMFCC framework is assessed through an extensive Monte Carlo simulation study, where it is compared with several competing monitoring methods for multivariate scalar data and multivariate functional data. The ability of the proposed method to estimate the distribution of the data without removing observations, while being robust to both functional casewise and cellwise outliers, allows the RoMFCC to outperform the competitors in all the considered scenarios. Lastly, the practical applicability of the proposed method is illustrated through a motivating real-case study, which addresses the issue of monitoring the quality of a resistance spot-welding process in the automotive industry. Also in this case, the RoMFCC shows better performance than the competitors in the identification of out-of-control conditions of the dynamic resistance curves. \section*{Supplementary Materials} The Supplementary Materials contain additional details about the data generation process in the simulation study (A), additional simulation results (B), as well as the R code to reproduce the graphics and the results of the competing methods in the simulation study. \section*{Acknowledgments} The present work was developed within the activities of the project ARS01\_00861 ``Integrated collaborative systems for smart factory - ICOSAF'' coordinated by CRF (Centro Ricerche Fiat Scpa - \texttt{www.crf.it}) and financially supported by MIUR (Ministero dell’Istruzione, dell’Università e della Ricerca). \bibliographystyle{apalike} \setlength{\bibsep}{5pt plus 0.3ex} {\small \section{Details on Data Generation in the Simulation Study} \label{sec_appB} The data generation process is inspired by the real-case study in Section 4 and mimics typical behaviours of DRCs in a RSW process. The data correlation structure is generated similarly to \cite{centofanti2020functional,chiou2014multivariate}.
The compact domain $\mathcal{T}$ is set, without loss of generality, equal to $\left[0,1\right]$ and the number of components $p$ is set equal to 10. The eigenfunction set $\lbrace \bm{\psi}_i\rbrace $ is generated by considering the correlation function $\bm{G}$ through the following steps. \begin{enumerate} \item Set the diagonal elements $G_{ll}$, $l=1,\dots,p$, of $\bm{G}$ as the \textit{Bessel} correlation function of the first kind \citep{abramowitz1964handbook}. The general form of the correlation function and the parameters used are listed in Table \ref{ta_corf}. Then, calculate the eigenvalues $\lbrace\eta_{lk}^{X}\rbrace$ and the corresponding eigenfunctions $\lbrace\vartheta_{lk}\rbrace$, $k=1,2,\dots$, of $G_{ll}$, $l=1,\dots,p$. \item Obtain the cross-correlation function $G_{lj}$, $l,j=1,\dots,p$ and $l\neq j$, by \begin{equation} G_{lj}\left(t_1,t_2\right)=\sum_{k=1}^{\infty}\frac{\eta_{k}}{1+|l-j|}\vartheta_{lk}\left(t_{1}\right)\vartheta_{jk}\left(t_{2}\right)\quad t_1,t_2\in\mathcal{T}. \end{equation} \item Calculate the eigenvalues $\lbrace\lambda_i\rbrace$ and the corresponding eigenfunctions $\lbrace \bm{\psi}_i\rbrace $ through the spectral decomposition of $\bm{G}=\lbrace G_{lj}\rbrace_{l,j=1,\dots,p}$, for $i=1,\dots,L^{*}$. \end{enumerate} \begin{table} \caption{Bessel correlation function and parameter for data generation in the simulation study.} \label{ta_corf} \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ccc} \toprule &$\rho$&$\nu$\\ \midrule $J_{v}\left(z\right)=\binom{|z|/\rho}{2}^{\nu}\sum_{j=0}^{\infty}\frac{\left(-\left(|z|/\rho\right)^{2}/4\right)^{j}}{j!\Gamma\left(\nu+j+1\right)}$&0.125&0\\[.35cm] \bottomrule \end{tabular}} \end{table} Further, $L^{*}$ is set equal to $10$. Let $\bm{Z}=\left(Z_1,\dots,Z_p\right)$ be defined as \begin{equation} \bm{Z}=\sum_{i=1}^{L^{*}}\xi_i\bm{\psi}_i.
\end{equation} with $\bm{\xi}_{L^{*}}=\left(\xi_1,\dots,\xi_{L^{*}}\right)^{T}$ generated by means of a multivariate normal distribution with covariance $\Cov\left(\bm{\xi}_{L^{*}}\right)=\bm{\Lambda}=\diag\left(\lambda_1,\dots,\lambda_{L^{*}}\right)$. Furthermore, let the mean process $m$ be \begin{multline} m(t)= 0.2074 + 0.3117\exp(-371.4t) +0.5284(1 - \exp(0.8217t))\\ -423.3\left[1 + \tanh(-26.15(t+0.1715)) \right]\quad t\in\mathcal{T}. \end{multline} Note that the mean function $m$ is generated to resemble a typical DRC through the phenomenological model for the RSW process presented in \cite{schwab2012improving}. Then, the contamination models $C_E$ and $C_P$, which mimic a splash weld (expulsion) and a phase shift of the peak time, respectively, are defined as \begin{equation} C_E(t)=\min\Big\lbrace 0, -2M_E(t-0.5)\Big\rbrace \quad t\in\mathcal{T}, \end{equation} and \begin{multline} C_P(t)= -m(t)-(M_P/20)t + 0.2074\\ + 0.3117\exp(-371.4h(t)) +0.5284(1 - \exp(0.8217h(t)))\\ -423.3\left[1 + \tanh(-26.15(h(t)+0.1715))\right] \quad t\in\mathcal{T}, \end{multline} where $h:\mathcal{T}\rightarrow\mathcal{T}$ transforms the temporal dimension $t$ as follows \begin{equation} h(t)= \begin{cases} t & \text{if } t\leq 0.05 \\ \frac{0.55-M_P}{0.55}t-(1+\frac{0.55-M_P}{0.55})0.05 & \text{if } 0.05< t\leq 0.6 \\ \frac{0.4+M_P}{0.4}t+1-\frac{0.4+M_P}{0.4} & \text{if } t> 0.6, \\ \end{cases} \end{equation} and $M_E$ and $M_P$ are contamination sizes.
Then, $\bm{X}=\left(X_1,\dots,X_p\right)^T$ is obtained as follows \begin{equation} \label{eq_modgen} \bm{X}\left(t\right)= \bm{m}(t) +\bm{Z}\left(t\right)\sigma+\bm{\varepsilon}\left(t\right) +B_{CaE}\bm{C}_E(t)+B_{CaP}\bm{C}_P(t) \quad t\in \mathcal{T}, \end{equation} where $\bm{m}$ is a $p$-dimensional vector with components equal to $m$, $\sigma>0$, $\bm{\varepsilon}=\left(\varepsilon_1,\dots,\varepsilon_p\right)^T$, where the $\varepsilon_i$ are white noise functions such that, for each $ t \in \left[0,1\right] $, $ \varepsilon_i\left(t\right) $ are normal random variables with zero mean and standard deviation $ \sigma_e $, and $B_{CaE}$ and $B_{CaP}$ are two independent random variables following Bernoulli distributions with parameters $p_{CaE}$ and $p_{CaP}$, respectively. Moreover, $\bm{C}_E=\left(B_{1,CeE}C_E,\dots,B_{p,CeE}C_E\right)^T$ and $\bm{C}_P=\left(B_{1,CeP}C_P,\dots,B_{p,CeP}C_P\right)^T$, where $\lbrace B_{i,CeE}\rbrace$ and $\lbrace B_{i,CeP}\rbrace$ are two sets of independent random variables following Bernoulli distributions with parameters $p_{CeE}$ and $p_{CeP}$, respectively. Then, the Phase I and Phase II samples are generated through Equation \eqref{eq_modgen} by considering the parameters listed in Table \ref{ta_1} and Table \ref{ta_2}, respectively, with $\sigma_e=0.0025$ and $\sigma=0.01$. The parameter $\tilde{p}$ is the probability of contamination, which is set equal to 0.05 for the data analysed in Section 3 and to 0.1 for those in Supplementary Material B.
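The casewise/cellwise contamination mechanism of Equation \eqref{eq_modgen} can be sketched as follows: each observation draws one casewise Bernoulli indicator and each of its components an independent cellwise indicator, and a cell is shifted by the outlier curve only when both indicators are one (illustrative code with invented names; the outlier curve stands for $C_E$ or $C_P$ evaluated on the grid):

```python
import numpy as np

def contaminate(base, c_out, p_ca, p_ce, rng):
    """base: (n, p, m) clean sample; c_out: (m,) outlier curve.
    Returns the contaminated sample and the (n, p) mask of shifted cells."""
    n, p, m = base.shape
    b_ca = rng.random(n) < p_ca            # one casewise draw per observation
    b_ce = rng.random((n, p)) < p_ce       # one cellwise draw per cell
    mask = b_ca[:, None] & b_ce            # cell active only if both fire
    out = base + mask[:, :, None] * c_out  # broadcast shift over the grid
    return out, mask
```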
\begin{table} \caption{Parameters used to generate the Phase I sample in Scenario 1 and Scenario 2 of the simulation study.} \label{ta_1} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{cM{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}} \toprule &\multicolumn{4}{c}{Scenario 1}& \multicolumn{4}{c}{Scenario 2}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} &\multicolumn{2}{c}{Out E}&\multicolumn{2}{c}{Out P}&\multicolumn{2}{c}{Out E}&\multicolumn{2}{c}{Out P}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-9} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=p_{CaP}=1$,\\ $p_{CeE}=\tilde{p}$, $p_{CeP}=0$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=p_{CaP}=1$,\\ $p_{CeE}=0$, $p_{CeP}=\tilde{p}$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=\tilde{p}$, $p_{CaP}=0$,\\ $p_{CeE}=p_{CeP}=1$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=0$, $p_{CaP}=\tilde{p}$, \\$p_{CeE}=p_{CeP}=1$}}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} &$M_E$&$M_P$&$M_E$&$M_P$&$M_E$&$M_P$&$M_E$&$M_P$\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} C1&0.04 &0.00&0.00&0.40&0.02 &0.00&0.00 &0.20\\ C2&0.06 & 0.00&0.00&0.45&0.03 &0.00&0.00 &0.30\\ C3&0.08 & 0.00&0.00&0.50&0.04 &0.00&0.00 &0.40\\ \bottomrule \end{tabular}} \end{table} \begin{table} \caption{Parameters used to generate the Phase II sample for OC E and OC P and severity level $SL = \lbrace 0, 1, 2, 3, 4 \rbrace$ in the simulation study.} \label{ta_2} \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{cM{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}M{0.1\textwidth}} \toprule &\multicolumn{2}{c}{OC E}&\multicolumn{2}{c}{OC P}\\ \cmidrule(lr){2-5} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=1$, $p_{CaP}=0$,\\ $p_{CeE}=1$, $p_{CeP}=0$}} &\multicolumn{2}{c}{\specialcell{$p_{CaE}=0$, $p_{CaP}=1$,\\ $p_{CeE}=0$, $p_{CeP}=1$}} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} $SL$&$M_E$&$M_P$&$M_E$&$M_P$\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}
0&0.00 &0.00&0.00&0.00\\ 1&0.01 & 0.00&0.00&0.20\\ 2&0.02 & 0.00&0.00&0.27\\ 3&0.03 & 0.00&0.00&0.34\\ 4&0.04 & 0.00&0.00&0.40\\ \bottomrule \end{tabular}} \end{table} Moreover, in Scenario 0 of the simulation study, data are generated through $p_{CaE}=p_{CaP}=p_{CeE}=p_{CeP}=0$. Finally, the generated data are assumed to be discretely observed at 100 equally spaced time points over the domain $\left[0,1\right]$. \section{Additional Simulation Results} In this section, we present additional simulations to analyse the performance of the RoMFCC and the competing methods when data are generated as described in Supplementary Material A with probability of contamination $\tilde{p}$ equal to 0.1. The RoMFCC is implemented as described in Section 3. Figure \ref{fi_results_2_1} and Figure \ref{fi_results_3_1} show, for Scenario 1 and Scenario 2, respectively, the mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3) and OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination models Out E and Out P.
\begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination model Out E and Out P in Scenario 1 for $\tilde{p}=0.1$.} \label{fi_results_2_1} \centering \hspace{-2.5cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_cellwise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} \begin{figure}[h] \caption{Mean FAR ($ SL=0 $) or TDR ($ SL\neq 0 $) achieved by M, Miter, MRo, MFCC, MFCCiter and RoMFCC for each contamination level (C1, C2 and C3), OC condition (OC E and OC P) as a function of the severity level $SL$ with contamination 
model Out E and Out P in Scenario 2 for $\tilde{p}=0.1$.} \label{fi_results_3_1} \centering \hspace{-2.5cm} \begin{tabular}{cM{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}M{0.24\textwidth}} &\multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out E}}}& \multicolumn{2}{c}{\hspace{0.12cm} \textbf{\large{Out P}}}\\ &\hspace{0.6cm}\textbf{\footnotesize{OC E}}&\hspace{0.6cm}\textbf{\footnotesize{OC P}}&\hspace{0.5cm}\textbf{\footnotesize{OC E}}&\hspace{0.5cm}\textbf{\footnotesize{OC P}}\\ \textbf{\footnotesize{C1}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s1_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s1_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s1_OC_P.pdf}\\ \textbf{\footnotesize{C2}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s2_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s2_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s2_OC_P.pdf}\\ \textbf{\footnotesize{C3}}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_E_s3_OC_P.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s3_OC_E.pdf}&\includegraphics[width=.25\textwidth]{fig/sim_1_casewise_P_s3_OC_P.pdf}\\ \end{tabular} \vspace{-.5cm} \end{figure} A comparison of Figure \ref{fi_results_2_1} and Figure \ref{fi_results_3_1} with Figure 4 and Figure 5 confirms that the RoMFCC is the best method in all the considered settings. Specifically, unlike the competing methods, the proposed method is almost insensitive to the fact that a large fraction of the data in the Phase I sample is now composed of either cellwise or casewise outliers.
On the contrary, the competing methods are strongly affected by the different probability of contamination and show overall worse performance than in Section 3; they are thus inadequate for monitoring the multivariate functional quality characteristic in this setting. This further confirms what was shown in Section 3, namely that the RoMFCC performance is almost independent of the contamination of the Phase I sample. As in Section 3, performance differences between the proposed and competing methods are less pronounced in Scenario 2, due to the less severe contamination produced by functional casewise outliers. However, the RoMFCC still outperforms all the competing methods and thus stands out as the best one. \bibliographystyle{chicago} {\small
\section{Introduction} Let $\zeta(s)$ denote the Riemann zeta-function. Understanding the distribution of the zeros of $\zeta(s)$ is an important problem in number theory. In this paper, assuming the Riemann hypothesis (RH), we study the pair correlation function \begin{equation*}\label{N alpha} N(T,\beta) \, := \!\!\! \sum_{\substack{ 0< \gamma,\gamma' \le T \\ 0< \gamma'-\gamma \le \frac{2\pi \beta}{\log T} } } \!\!\!\!\! 1\,, \end{equation*} where the sum runs over two sets of nontrivial zeros $\rho=\frac{1}{2}+i\gamma$ and $\rho'=\frac{1}{2}+i\gamma'$ of $\zeta(s)$. Here and throughout the text, all sums involving the zeros of $\zeta(s)$ are counted with multiplicity. The pair correlation conjecture of H. L. Montgomery \cite{M1} asserts that \begin{equation}\label{PCC} N(T,\beta) \, \sim \, N(T) \int_0^\beta \left\{1 -\Big(\frac{\sin \pi x}{\pi x}\Big)^2 \right\} \,\text{\rm d}x \end{equation} for any fixed $\beta>0$ as $T\to \infty$, where $N(T)$ denotes the number of nontrivial zeros of $\zeta(s)$ with ordinates $\gamma$ satisfying $0<\gamma\le T$. It is known that \begin{equation}\label{N} N(T) \, :=\sum_{0<\gamma \le T} 1 \, \sim \, \frac{T \log T }{2\pi} \end{equation} as $T\to \infty$. Therefore, if we let $0 < \gamma_1 \le \gamma_2 \le \ldots $ denote the sequence of ordinates of nontrivial zeros of $\zeta(s)$ in the upper half-plane, it follows that the average size of $\gamma_{n+1}-\gamma_n$ is about $2\pi/\log \gamma_n$. Thus, the quantity $N(T,\beta)$ essentially counts the number of pairs $0<\gamma,\gamma'\leq T$ of (not necessarily consecutive) ordinates of nontrivial zeros of $\zeta(s)$ whose difference is less than or equal to $\beta$ times the average spacing. It is known that the function $N(T,\beta)$ is connected to the distribution of primes in short intervals; see \cite{GaMu,Gold2,GM}.
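For large $\beta$, the integral on the right-hand side of \eqref{PCC} expands as $\beta - \tfrac12 + \tfrac{1}{2\pi^2\beta} + O(\beta^{-2})$, since the tail of the squared sinc contributes $\tfrac12 - \tfrac{1}{2\pi^2\beta}$ up to oscillatory corrections. This is easy to confirm numerically (a quick sketch; the grid size is an arbitrary choice, and `np.sinc(x)` denotes $\sin(\pi x)/(\pi x)$):

```python
import numpy as np

# Integrate 1 - (sin(pi x)/(pi x))^2 over [0, beta] and compare with the
# large-beta expansion beta - 1/2 + 1/(2 pi^2 beta).
beta = 10.0
x = np.linspace(0.0, beta, 2_000_001)
density = 1.0 - np.sinc(x) ** 2          # np.sinc(x) = sin(pi x)/(pi x)
integral = float(np.sum((density[1:] + density[:-1]) * np.diff(x)) / 2)
expansion = beta - 0.5 + 1.0 / (2.0 * np.pi ** 2 * beta)
```

At $\beta = 10$ the oscillatory correction term vanishes (it carries a factor $\sin 2\pi\beta$), so the agreement is well below the $10^{-3}$ tolerance used in the check.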
\smallskip Montgomery's pair correlation conjecture is a special case of the more general conjecture that the normalized spacings between the ordinates of the nontrivial zeros of $\zeta(s)$ follow the GUE distribution from random matrix theory. In his original paper \cite{M1}, Montgomery gave some theoretical evidence for the pair correlation conjecture, and later, Odlyzko \cite{O} provided numerical evidence. Higher correlations of the zeros of $\zeta(s)$, and of the zeros of more general $L$-functions, were studied by Hejhal \cite{H} and by Rudnick and Sarnak \cite{RS2}. \smallskip If the asymptotic formula in \eqref{PCC} remains valid when $\beta=\beta(T) \to \infty$ (sufficiently slowly) as $T\to\infty$, one should expect \begin{equation}\label{PCC2} N(T,\beta) \sim N(T) \left\{ \beta-\frac{1}{2}+\frac{1}{2\pi^2 \beta} + O\left(\frac{1}{\beta^2}\right) \right\} \end{equation} as $T\to\infty$, where the implied constant is independent of $\beta$. Using techniques of Selberg, Fujii \cite{F2} proved the unconditional estimate \begin{equation}\label{Fujii} N(T,\beta) = N(T) \, \big\{ \beta + O(1) \big\} \end{equation} for $\beta =O(\log T)$. This improved upon an earlier result of Mueller (unpublished but announced in \cite{G}). \smallskip \subsection{Montgomery's formula and bounds for the pair correlation} For our purposes we define a class of {\it admissible functions} consisting of all $R \in L^1(\mathbb{R})$ whose Fourier transform \begin{equation*} \widehat{R}(t) = \int_{-\infty}^{\infty}e^{-2\pi i x t}\,R(x)\, \text{\rm d}x \end{equation*} is supported in $[-1,1]$. 
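As a concrete example of an admissible function, one may take the Fej\'er kernel $R(x) = (\sin \pi x/(\pi x))^2$, whose Fourier transform is the triangle function $\max(1-|t|,0)$, supported in $[-1,1]$. For this $R$ the integral $\int_{-\infty}^{\infty} R(x)\{1 - (\frac{\sin \pi x}{\pi x})^2\}\,\text{\rm d}x$ (the quantity $M(R)$ of \eqref{def-of-M}) equals $1 - \tfrac23 = \tfrac13$, since $\int R = 1$ and $\int R^2 = \tfrac23$. A numerical check of both the support claim and this value (an illustrative sketch; the truncation of the real line and the grid are arbitrary choices):

```python
import numpy as np

x = np.linspace(-500.0, 500.0, 2_000_001)
R = np.sinc(x) ** 2                      # Fejer kernel (sin(pi x)/(pi x))^2

def trapz(y):
    # trapezoidal rule on the fixed grid x
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def fourier(t):
    # hat R(t) = int R(x) e^{-2 pi i x t} dx; R is even, so this is real
    return trapz(R * np.cos(2.0 * np.pi * x * t))

inside, outside = fourier(0.5), fourier(1.5)   # triangle: 0.5 and 0
M_R = trapz(R * (1.0 - R))                     # int R (1 - sinc^2) = 1 - 2/3
```

The truncation of the integrals to $[-500,500]$ only costs $O(10^{-4})$ because the integrands decay like $x^{-2}$.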
By the Paley-Wiener theorem, this class of admissible functions is exactly the class of entire functions of exponential type\footnote{An entire function $g: \mathbb C \rightarrow \mathbb C$ has exponential type at most $2\pi\Delta$ if, for all $\epsilon > 0$, there exists a positive constant $C_\epsilon$ such that $|g(z)| \leq C_{\epsilon}\,e^{(2\pi\Delta + \epsilon)|z|}$ for all $z \in \mathbb{C}$.} at most $2\pi$ whose restriction to the real axis is integrable. An important tool in the study of the correlation of zeros of $\zeta(s)$ is {\it Montgomery's formula}\footnote{This is not Montgomery's original version of his formula. For a derivation of \eqref{Mont_formula}, see the appendix of \cite{G} or \S2.1 below.}, which asserts that, for an admissible function $R$, under RH, we have \begin{align}\label{Mont_formula} \begin{split} \lim_{T \rightarrow \infty}\frac{1}{N(T)} \sum_{0 < \gamma, \gamma' \leq T} & R\!\left( (\gamma' \! - \! \gamma) \tfrac{\log T}{2\pi}\right) w(\gamma' \!-\! \gamma)\\ & = \ R(0) +\int_{-\infty}^{\infty} R(x) \left\{1 - \left(\frac{\sin \pi x}{\pi x}\right)^2 \right\}\,\text{\rm d}x, \end{split} \end{align} where $w(x) = 4/(4+x^2)$ is a suitable weight function. \smallskip Following Gallagher's \cite{G} notation, for an admissible function $R$ we define \begin{equation}\label{def-of-M} M(R) := \int_{-\infty}^{\infty} R(x) \left\{1 - \left(\frac{\sin \pi x}{\pi x}\right)^2 \right\}\,\text{\rm d}x, \end{equation} and, for $\beta >0$, we write \begin{equation*} \mathcal{U}(\beta):=\limsup_{T\to \infty} \frac{N(T,\beta)}{N(T)} \quad \text{ and } \quad \mathcal{L}(\beta):=\liminf_{T\to \infty} \frac{N(T,\beta)}{N(T)}. \end{equation*} Let $R_{\beta}^{\pm}$ be a pair of admissible functions satisfying \begin{equation}\label{Intro_R_beta_pm} R_{\beta}^-(x) \leq \chi_{[-\beta,\beta]}(x) \leq R_{\beta}^+(x) \end{equation} for all $x \in \mathbb{R}$.
\new{Then, if we let \begin{equation*} N^*(T) = \sum_{0<\gamma \le T} m_\gamma, \end{equation*} where $m_\gamma$ denotes the multiplicity of a zero of $\zeta(s)$ with ordinate $\gamma$, we observe that } \begin{align}\label{Intro_char_beta_reduction} \begin{split} \frac{1}{N(T)} \sum_{0 < \gamma, \gamma' \leq T} & \!R_{\beta}^{+}\left( (\gamma' \! - \! \gamma) \tfrac{\log T}{2\pi}\right) w(\gamma' \!-\! \gamma) \\ & \geq R_{\beta}^{+}(0)\frac{N^*(T)}{N(T)} + \frac{2\, N(T, \beta)}{N(T)} + O_{\beta}\left(\frac{1}{(\log T)^2}\right) \end{split} \end{align} \new{and, similarly, that \begin{align}\label{Intro_char_beta_reduction2} \begin{split} \frac{1}{N(T)} \sum_{0 < \gamma, \gamma' \leq T} & \!R_{\beta}^{-}\left( (\gamma' \! - \! \gamma) \tfrac{\log T}{2\pi}\right) w(\gamma' \!-\! \gamma) \\ & \leq R_{\beta}^{-}(0)\frac{N^*(T)}{N(T)} + \frac{2\, N(T, \beta)}{N(T)} + O_{\beta}\left(\frac{1}{(\log T)^2}\right). \end{split} \end{align} Observing that $N(T) \le N^*(T)$ for all $T>0$ and combining the estimates in \eqref{Mont_formula}, \eqref{def-of-M}, \eqref{Intro_R_beta_pm}, \eqref{Intro_char_beta_reduction} and \eqref{Intro_char_beta_reduction2}, we arrive at the following result.} \begin{theorem}\label{Intro_thm1_Gallagher} Assume RH. For any $\beta >0$ we have \begin{equation}\label{Intro_thm1_eq1} \frac{1}{2} M(R_{\beta}^-) \leq \mathcal{L}(\beta) \leq \mathcal{U}(\beta) \leq \frac{1}{2} M(R_{\beta}^+), \end{equation} where the lower bound holds if we assume that almost all zeros of $\zeta(s)$ are simple in the sense that \begin{equation}\label{N star} \lim_{T \to \infty} \frac{N^*(T)}{N(T)} = 1. \end{equation} \end{theorem} This result is implicit in the work of Gallagher \cite{G}. The difficult problem here is to construct admissible majorants and minorants for $\chi_{[-\beta,\beta]}$ that optimize the values of $M(R_{\beta}^\pm)$ (and to actually compute these values). 
In \cite{G}, Gallagher considered the case $\beta \in \frac12\mathbb{N}$, for which a classical construction of Beurling and Selberg, described in \cite{V}, produces admissible majorants and minorants $r_{\beta}^{\pm}$ that optimize the $L^1(\mathbb{R})$-distance to $\chi_{[-\beta, \beta]}$ (but not necessarily the $L^1\big(\mathbb{R}, \big\{1 - \big(\frac{\sin \pi x}{\pi x}\big)^2 \big\}\text{\rm d}x\big)$-distance). When $\beta \in \frac12\mathbb{N}$, the Fourier transforms $\widehat{r}_{\beta}^{\pm}$ have simple explicit representations as finite series, which allowed Gallagher to compute the values of $M(r_{\beta}^{\pm})$ and to show that \begin{equation}\label{Intro_BS_quotas} \frac12 M(r_{\beta}^{\pm}) = \beta - \frac12 \pm \frac12 + \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right). \end{equation} In a second part of his paper \cite{G}, still in the case $\beta \in \frac12\mathbb{N}$, Gallagher solved the {\it two-delta problem} with respect to the pair correlation measure (i.e. to minimize $M(R)$ over the class of nonnegative admissible functions $R$ satisfying $R(\pm \beta) \geq 1$) and was able to quantify the error between his bounds in Theorem \ref{Intro_thm1_Gallagher} and the theoretical optimal bounds achievable by this method. \smallskip In this paper we extend Gallagher's work \cite{G}, providing a complete solution to this problem. The three main features are: \smallskip \noindent (i) We find an explicit representation for the reproducing kernel associated to the pair correlation measure, which allows us to use Hilbert space techniques to solve the two-delta problem in the general case $\beta >0$. \smallskip \noindent (ii) From the reproducing kernel, we find a suitable de Branges space of entire functions \cite{B} associated to the pair correlation measure.
We solve the more general extremal problem of majorizing and minorizing characteristic functions of intervals optimizing a given de Branges metric, which provides, in particular, the optimal values of $M(R_{\beta}^{\pm})$. It turns out that asymptotics in terms of $\beta$ as in \eqref{Intro_BS_quotas} are not easily obtainable for this family, since it involves nodes of interpolation that are roots of equations with algebraic and transcendental terms. This brings us to point (iii). \smallskip \noindent (iii) In order to obtain (non-extremal) bounds that can be easily stated in terms of $\beta$, we compute $M(r_{\beta}^{\pm})$, for the family of Beurling-Selberg functions $r_{\beta}^{\pm}$ in the general case $\beta >0$, and prove that Gallagher's asymptotic formula in \eqref{Intro_BS_quotas} continues to hold in this case. \smallskip We now describe in more detail each of these three parts of the paper. We start with the third part, which is slightly simpler to state. Similar extremal problems in harmonic analysis have appeared in connection to analytic number theory, in particular to the theory of the Riemann zeta-function. For some recent results of this sort, see \cite{CC, CCM, CS, GG}. \subsection{Explicit bounds via Beurling-Selberg majorants} Let \begin{equation}\label{Intro_def_H_0} H_0(z) = \left( \frac{\sin \pi z}{\pi}\right)^2 \left\{ \sum_{m=-\infty}^{\infty} \frac{\sgn(m)}{(z-m)^2} + \frac{2}{z}\right\} \end{equation} and \begin{equation}\label{Intro_def_H_1} H_1(z) = \left( \frac{\sin \pi z}{\pi z}\right)^2. \end{equation} For the functions $H^\pm$ defined by $H^{\pm}(z) = H_0(z) \pm H_1(z)$, Beurling \cite{V} showed that \begin{equation*} H^-(x) \leq \sgn(x) \leq H^+(x) \end{equation*} for all $x \in \mathbb{R}$, and that these are the unique extremal functions of exponential type $2\pi$ for $\sgn(x)$ (with respect to $L^1(\mathbb{R})$). Moreover, we have \begin{equation*} \int_{-\infty}^{\infty} \big\{ H^+(x) \!-\! 
\sgn(x) \big\}\,\text{\rm d}x = \int_{-\infty}^{\infty} \big\{\sgn(x) \!-\! H^-(x)\big\}\,\text{\rm d}x = 1. \end{equation*} For $\beta >0$, Selberg \cite{V} (see also \cite{SelVol2}) considered the functions \begin{align}\label{Intro_BS1} \begin{split} r_{\beta}^{+}(x):= \tfrac{1}{2} \big\{ H^{+}(x &+ \beta) + H^+(-x + \beta)\big\} \\ & \geq \tfrac{1}{2} \big\{ \sgn (x + \beta) + \sgn(-x + \beta)\big\} = \chi_{[-\beta, \beta]}(x) \end{split} \end{align} and \begin{align}\label{Intro_BS2} \begin{split} r_{\beta}^{-}(x):= \tfrac{1}{2} \big\{ H^{-}(x &+ \beta) + H^-(-x + \beta)\big\} \\ & \leq \tfrac{1}{2} \big\{ \sgn (x + \beta) + \sgn(-x + \beta)\big\} = \chi_{[-\beta, \beta]}(x). \end{split} \end{align} We remark that here and later, all the discontinuous functions we treat are normalized, i.e. at the discontinuity, the value of the function is the midpoint between the left-hand and right-hand limits. The functions $r_{\beta}^{\pm}$ have exponential type $2\pi$ and are bounded and integrable on $\mathbb{R}$. Therefore, they belong to $L^2(\mathbb{R})$ and the Paley-Wiener theorem implies that they have continuous Fourier transforms supported in $[-1,1]$. Throughout the text we reserve the notation $r_{\beta}^{\pm}$ for this particular family of functions. In Section \ref{Sec_BS_majorants} we prove the following result. \begin{theorem}\label{Intro_thm2_Gallagher_2} Let $\beta >0$ and $r_{\beta}^{\pm}$ be the pair of admissible functions defined by \eqref{Intro_BS1} and \eqref{Intro_BS2}. Then \begin{align} \frac12 M(r_{\beta}^{\pm}) & = \Big( \beta \pm \frac{1}{2} \Big) - \frac{1}{2\pi^2 \beta} + \frac{\sin 2\pi\beta }{4\pi^3\beta^2} - \frac{1}{4\pi^2} \sum_{\substack{n \in\mathbb{Z} }} \frac{\sgn(n^{\pm})}{(n\!-\!\beta)^2}\left( 2 + \frac{\sin 2\pi \beta}{\pi (n \!-\! \beta)}\right) \label{Intro_value_M_r}\\ & = \beta - \frac12 \pm \frac12 + \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right), \nonumber \end{align} where $\sgn(0^{\pm}) = \pm1$. 
\end{theorem} We note that the right-hand side of \eqref{Intro_value_M_r} is a continuous function of $\beta$. In Section \ref{Sec_BS_majorants} we also include a discussion on upper and lower bounds for $N(T,\beta)$, where the parameter $\beta$ is allowed to increase as a function of $T$. \begin{figure} \label{figure1} \includegraphics[scale=.44]{Pic1.pdf} \qquad \includegraphics[scale=.44]{Pic2.pdf} \caption{The above images illustrate the inequalities in Theorems \ref{Intro_thm1_Gallagher} and \ref{Intro_thm2_Gallagher_2}. Montgomery's conjecture for $\lim_{T \to \infty} N(T,\beta)/N(T)$ is plotted in black, while the functions $\beta \mapsto \frac12 M(r_{\beta}^{\pm}) $ are plotted in gray. } \end{figure} \subsection{The reproducing kernel for the pair correlation measure}\label{rkhs-pcm} \new{The following quantity gives a lower bound for the difference of the values in Theorem \ref{Intro_thm1_Gallagher}.} For $\beta >0$ we define \begin{equation}\label{Intro_Delta_beta} \varDelta(\beta) = \inf_{R \in \Omega_\beta} M(R), \end{equation} where the infimum is taken over the subclass $\Omega_\beta$ of nonnegative admissible functions $R$ such that $R(\pm \beta) \geq1$. If $R_{\beta}^{\pm}$ is a pair of admissible functions satisfying \eqref{Intro_R_beta_pm} then $R:= (R_{\beta}^+ - R_{\beta}^-) \in \Omega_\beta$ and \begin{equation*} M(R_{\beta}^+) - M(R_{\beta}^-) = M(R) \geq \varDelta(\beta). \end{equation*} Hence the gap between an upper bound for $\mathcal{U}(\beta)$ and a lower bound for $\mathcal{L}(\beta)$ in Theorem \ref{Intro_thm1_Gallagher} cannot be smaller than $\frac{1}{2}\varDelta(\beta)$. \smallskip In the case $\beta \in \frac12\mathbb{N}$, Gallagher \cite[Section 2]{G} used a variational argument to solve this two-delta problem and compute $\varDelta(\beta)$. This argument was previously used by Montgomery and Taylor \cite{M2} to solve the simpler one-delta problem in connection to bounds for the proportion of simple zeros of $\zeta(s)$. 
Gallagher's variational approach for the two-delta problem relies heavily on the fact that $\beta \in \frac12\mathbb{N}$ to establish orthogonality relations in some passages, thus making its extension to the general case $\beta >0$ a nontrivial task. Here we revisit this problem and solve it in the general case using a different technique, namely the theory of reproducing kernel Hilbert spaces. Proofs of the theorems in this section are given in Section \ref{HS_approach}. Let us write $$\text{\rm d}\mu(x) = \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\} \,\text{\rm d}x.$$ We denote by $\mathcal{B}_2(\pi,\mu)$ the class of entire functions $f$ of exponential type at most $\pi$ for which $$\int_{-\infty}^\infty |f(x)|^2 \, \text{\rm d}\mu(x) <\infty,$$ and we write $\mathcal{B}_2(\pi)$ if $\text{\rm d}\mu$ is replaced by the Lebesgue measure (i.e. $\mathcal{B}_2(\pi)$ is the classical Paley-Wiener space). Using the uncertainty principle for the Fourier transform, we show that $\mu$ and the Lebesgue measure define equivalent norms on the class of functions of exponential type at most $\pi$ for which either and hence both norms are finite. This implies, in particular, that $\mathcal{H} = \mathcal{B}_2(\pi,\mu)$ is a Hilbert space with norm given by $$\|f\|^2_{\mathcal{H}} = \int_{-\infty}^\infty |f(x)|^2 \, \text{\rm d}\mu(x).$$ For each $w \in \mathbb{C}$, the functional $f \mapsto f(w)$ is therefore continuous on $\mathcal{H}$ (since this holds for the Paley-Wiener space $\mathcal{B}_2(\pi)$). Hence, there exists a function $K(w,\cdot) \in \mathcal{H}$ such that $$f(w) = \langle f, K(w,\cdot) \rangle_{\mathcal{H}} = \int_{-\infty}^\infty f(x)\, \overline{K(w,x)}\, \text{\rm d}\mu(x)$$ for all $f \in \mathcal{H}$. This is the so-called {\it reproducing kernel} for the Hilbert space $\mathcal{H}$, and our first goal is to find an explicit representation for this kernel. 
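For orientation, the reproducing-kernel identity can be tested in the classical Paley-Wiener space $\mathcal{B}_2(\pi)$ with Lebesgue measure, where the kernel is known to be $K(w,z) = \sin \pi(z-w)/(\pi(z-w))$. The sketch below reproduces $f(w)$ for the test function $f(x) = \sin \pi x/(\pi x)$ (illustrative only; the object of this section is the analogous kernel for $\text{\rm d}\mu$, and the truncation and grid are arbitrary choices):

```python
import numpy as np

# Reproducing property in the classical Paley-Wiener space B^2(pi):
# f(w) = int f(x) K(w, x) dx with K(w, x) = sin(pi(x - w))/(pi(x - w)),
# valid for f of exponential type at most pi that are square-integrable.
w = 0.3
x = np.linspace(-500.0, 500.0, 2_000_001)
f = np.sinc(x)                       # f(x) = sin(pi x)/(pi x), a member of B^2(pi)
integrand = f * np.sinc(x - w)
reproduced = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2)
```

The truncated tail of the integrand decays like $x^{-2}$, so the quadrature recovers $f(w)$ to roughly $10^{-4}$.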
For $w\in\mathbb{C}$ (initially with $w \neq \pm 1/\pi\sqrt{2}$) define constants $c(w)$ and $d(w)$ by \begin{align}\label{Intro_HS_eq3} \begin{split} c(w) &= \frac{\cos(\pi w) - \pi w \sin(\pi w) }{(1-2\pi^2 w^2 ) \big(\cos\big(2^{-\frac12}\big) - 2^{-\frac12} \sin\big(2^{-\frac12}\big)\big)},\\ d(w) &= \frac{ 2\pi w \cos(\pi w) }{(1-2\pi^2 w^2 )\, 2^{\frac12} \cos\big(2^{-\frac12}\big) }, \end{split} \end{align} and functions $f(w,\cdot) ,g,h\in \mathcal{H}$ by \begin{align*} f(w,z) &= \frac{2\pi^2 w^2}{(2\pi^2 w^2 - 1)} \frac{\sin\pi(z-w)}{\pi(z-w)}, \\ g(z) &= \frac{2^{\frac12}\sin\big(2^{-\frac12}\big) \cos(\pi z) - 2\pi z \cos\big(2^{-\frac12}\big) \sin(\pi z)}{1- 2\pi^2 z^2}, \\ h(z) &= \frac{2\pi z \sin\big(2^{-\frac12}\big)\cos(\pi z) - 2^{\frac12} \cos\big(2^{-\frac12}\big) \sin(\pi z)}{1-2\pi^2 z^2}. \end{align*} \begin{theorem}\label{Intro_HS_Thm1_RP} For each $w\in \mathbb{C}$ we have \begin{align}\label{Intro_rk-rep} K(w,z) &= f(\overline{w},z) + c(\overline{w}) g(z) + d(\overline{w}) h(z). \end{align} \new{At the points $w = \pm 1/\pi\sqrt{2}$, this formula should be interpreted in terms of the appropriate limit.} \end{theorem} \smallskip We exploit the Hilbert space structure and the explicit formula for the reproducing kernel to give a complete solution to the two-delta problem with respect to the pair correlation measure. \begin{theorem}\label{Intro_HS_Thm5} Let $\beta >0$, let $\varDelta(\beta)$ be defined by \eqref{Intro_Delta_beta}, \new{and let $K$ be given by \eqref{Intro_rk-rep}.} Then \begin{align}\label{Intro_HS_Thm2_eq1} \begin{split} \varDelta(\beta) = \frac{2}{ K(\beta, \beta) + |K(\beta, -\beta)|}= 2 \left\{ 1 - \left|\frac{\sin2\pi \beta}{2\pi\beta}\right| \right\} + O\!\left(\frac{1}{\beta^2 }\right). \end{split} \end{align} The extremal functions {\rm(}i.e. functions that realize the infimum in \eqref{Intro_Delta_beta}{\rm )} are given by the following formulae. 
\begin{enumerate} \item[(i)]If $K(\beta, -\beta) = 0$, then \begin{equation*} R(z) = \frac{1}{K(\beta, \beta)^2} \big( c_1 K(\beta, z) + c_2 K(-\beta, z) \big) \big( \overline{c_1} \,K(\beta, z) + \overline{c_2} \,K(-\beta, z) \big), \end{equation*} where $c_1, c_2 \in \mathbb{C}$ with $|c_1| = |c_2| =1$. \smallskip \item[(ii)] If $K(\beta, -\beta) \neq 0$, then \begin{equation*} R(z) = \frac{\left(\frac{K(\beta, -\beta)}{|K(\beta, -\beta)|} K(\beta, z) + K(-\beta, z)\right)^2}{\big(K(\beta, \beta) + |K(\beta, -\beta)|\big)^2}. \end{equation*} \end{enumerate} \end{theorem} \smallskip In particular, the bounds given in Theorem \ref{Intro_thm2_Gallagher_2} are optimal up to order $O(\beta^{-2})$ when $\beta \in \frac12 \mathbb{N}$. The appearance of the term $|\frac{\sin2\pi \beta}{2\pi\beta}|$ on the right-hand side of \eqref{Intro_HS_Thm2_eq1} is not a coincidence, for this term already appears naturally in the work of Littmann \cite{Lit} on the Beurling-Selberg extremal problem for $\chi_{[-\beta, \beta]}(x)$. Using the same circle of ideas, one could explicitly compute the reproducing kernels associated to other measures that arise naturally in the study of families of $L$-functions, see \cite{ILS, KS}. \subsection{An extremal problem in de Branges spaces} \subsubsection{De Branges spaces} Let us briefly review the basic facts and terminology of de Branges' theory of Hilbert spaces of entire functions \cite[Chapters 1 and 2]{B}. A function $F$ analytic in the open upper half-plane $\mathbb{C}^{+} = \{z \in \mathbb{C};\ {\rm Im}\,(z) >0\}$ has {\it bounded type} if it can be written as the quotient of two functions that are analytic and bounded in $\mathbb{C}^{+}$. If $F$ has bounded type in $\mathbb{C}^{+}$, from its Nevanlinna factorization \cite[Theorems 9 and 10]{B} we have \begin{equation*} v(F) = \limsup_{y \to \infty} \, y^{-1}\log|F(iy)| <\infty. \end{equation*} The number $v(F)$ is called the {\it mean type} of $F$. 
If $F:\mathbb{C} \to \mathbb{C}$ is entire, we denote by $\tau(F)$ its exponential type, i.e. \begin{equation*} \tau(F) = \limsup_{|z|\to \infty} |z|^{-1}\log|F(z)|, \end{equation*} and we define $F^*:\mathbb{C} \to \mathbb{C}$ by $F^*(z) = \overline{F(\overline{z})}$. We say that $F$ is {\it real entire} if $F$ restricted to $\mathbb{R}$ is real-valued. \smallskip Let $E:\mathbb{C} \to \mathbb{C}$ be a {\it Hermite-Biehler} function, i.e. an entire function satisfying the basic inequality \begin{equation}\label{Intro_HB_cond} |E(\overline{z})| < |E(z)| \end{equation} for all $z \in \mathbb{C}^+$. The {\it de Branges space} $\H(E)$ is the space of entire functions $F:\mathbb{C} \to \mathbb{C}$ such that \begin{equation}\label{Intro_dB_inequality1} \|F\|_E^2 := \int_{-\infty}^\infty |F(x)|^{2} \, |E(x)|^{-2} \, \text{\rm d}x <\infty\,, \end{equation} and such that $F/E$ and $F^*/E$ have bounded type and nonpositive mean type in $\mathbb{C}^{+}$. The remarkable property about $\H(E)$ is that it is a reproducing kernel Hilbert space with inner product \begin{equation*} \langle F, G \rangle_{E} = \int_{-\infty}^\infty F(x) \, \overline{G(x)} \, |E(x)|^{-2} \, \text{\rm d}x. \end{equation*} The reproducing kernel (that we continue denoting by $K(w,\cdot)$) is given by (see \cite[Theorem 19]{B}) \begin{equation}\label{Intro_dB_rk} 2\pi i (\overline{w}-z)K(w,z) = E(z)E^*(\overline{w}) - E^*(z)E(\overline{w}). \end{equation} Associated to $E$, we consider a pair of real entire functions $A$ and $B$ such that $E(z) = A(z) -iB(z)$. These functions are given by \begin{equation*} A(z) := \frac12 \big\{E(z) + E^*(z)\big\} \ \ \ {\rm and} \ \ \ B(z) := \frac{i}{2}\big\{E(z) - E^*(z)\big\}, \end{equation*} and the reproducing kernel has the alternative representation \begin{equation*} \pi (z - \overline{w})K(w,z) = B(z)A(\overline{w}) - A(z)B(\overline{w}). 
\end{equation*} When $z = \overline{w}$ we have \begin{equation}\label{Intro_Def_K} \pi K(\overline{z}, z) = B'(z)A(z) - A'(z)B(z). \end{equation} For each $w \in \mathbb{C}$, the reproducing kernel property implies that \begin{align*} 0 \leq \|K(w, \cdot)\|_E^2 = \langle K(w, \cdot), K(w, \cdot) \rangle_E = K(w,w), \end{align*} and it is not hard to show (see \cite[Lemma 11]{HV}) that $K(w,w)=0$ if and only if $w \in\mathbb{R}$ and $E(w) = 0$ (in this case we have $F(w) =0$ for all $F \in \H(E)$). \smallskip For our purposes we consider the class of Hermite-Biehler functions $E$ satisfying the following properties: \begin{enumerate} \item[(P1)] $E$ has bounded type in $\mathbb{C}^{+}$; \item[(P2)] $E$ has no real zeros; \item[(P3)] $z \mapsto E(iz)$ is a real entire function; \item[(P4)] $A, B \notin \H(E)$. \end{enumerate} By a result of M. G. Krein (see \cite{K} or \cite[Lemmas 9 and 12]{HV}) we see that if $E$ satisfies (P1), then $E$ has exponential type and $\tau(E) = v(E)$. Moreover, the space $\H(E)$ consists of the entire functions $F$ of exponential type $\tau(F) \leq \tau(E)$ that satisfy \eqref{Intro_dB_inequality1}. \subsubsection{\new{De Branges space for the pair correlation measure}} We show that the Hilbert space $\mathcal{H}$ defined in Section \ref{rkhs-pcm} can be identified with a suitable de Branges space $\mathcal{H}(E)$, where $E$ is a Hermite-Biehler function satisfying properties (P1) - (P4). Define \begin{equation*} L(w,z) = 2\pi i (\overline{w} - z) K(w,z)\,, \end{equation*} where $K$ is given \new{by \eqref{Intro_rk-rep}}. It follows then that the entire function \begin{equation}\label{Intro_Def_E_special} E(z) = \frac{L(i,z)}{L(i,i)^{\frac12}} \end{equation} is a Hermite-Biehler function such that \begin{equation}\label{Intro_rep_kernel} L(w,z) = E(z) E^*(\overline{w}) - E^*(z)E(\overline{w}). \end{equation} For the convenience of the reader we include short proofs of these facts in Appendix A. 
This implies \cite[Theorem 23]{B} that the Hilbert space $\mathcal{H}$ is isometrically equal to the de Branges space $\mathcal{H}(E)$. In particular, the key identity \begin{equation}\label{Intro_key_id} \int_{-\infty}^{\infty} |f(x)|^2 \, |E(x)|^{-2}\,\text{\rm d}x = \int_{-\infty}^\infty |f(x)|^2 \, \text{\rm d}\mu(x) \end{equation} holds for any $f \in \mathcal{H}$. \smallskip We now verify (P1) - (P4). It is clear that $E(z)$ has exponential type $\pi$ and is bounded on $\mathbb{R}$. Therefore, by the converse of Krein's theorem (see \cite{K} or \cite[Lemma 9]{HV}), we have that $E$ has bounded type in $\mathbb{C}^+$, which shows (P1). If $E$ had a real zero $w$, we would have $F(w) = 0$ for all $F \in \H(E) = \H$. However, we have seen that $\H$ is equal (as a set) to the Paley-Wiener space, which is a contradiction. This proves (P2). \smallskip A direct computation using \eqref{Intro_Def_E_special} and Theorem \ref{Intro_HS_Thm1_RP} shows that $E(ix)$ is real when $x$ is real, which shows (P3). For real $x$ we have $A(x) = {\rm Re}\, (E(x))$ and $B(x) = - {\rm Im}\, (E(x))$. Since $c(-i), id(-i), g(x)$ and $h(x)$ are all real, a direct computation gives us \begin{align*} \begin{split} A(x) &= \frac{{\rm Re}\,(L(i,x))}{L(i,i)^{\frac12}}\\ & = \frac{1}{L(i,i)^{\frac12}} \frac{4 \pi^2}{(2\pi^2 +1)} \cos\pi x \left\{\sinh \pi + \frac{\tan\big(2^{-\frac12}\big)\,\cosh \pi}{\pi \sqrt{2}}\right\} + O(x^{-1}) \end{split} \end{align*} and \begin{align*} \begin{split} B(x) &= - \frac{{\rm Im}\,(L(i,x))}{L(i,i)^{\frac12}}\\ & = \frac{1}{L(i,i)^{\frac12}} \frac{4 \pi^2}{(2\pi^2 +1)} \sin\pi x \left\{\cosh \pi + \frac{(\cosh \pi + \pi \sinh \pi)\cos\big(2^{-\frac12}\big) }{2 \pi^2 \big(\!\cos\big(2^{-\frac12}\big) - 2^{-\frac12} \sin\big(2^{-\frac12}\big)\big)}\right\} + O(x^{-1}), \end{split} \end{align*} for large $x$. This shows that $A,B \notin L^2(\mathbb{R})$ and thus, by \eqref{Intro_key_id} and Lemma \ref{Lem_equiv_norms} below, $A,B \notin \H(E)$. 
This proves (P4). \subsubsection{The extremal problem} \new{We now return to the case of an arbitrary Hermite-Biehler function $E$ satisfying properties (P1) - (P4) above.} From now on we assume, without loss of generality, that $E(0) >0$ (note that this holds for the particular $E$ defined by \eqref{Intro_Def_E_special}). \new{Generalizing \eqref{def-of-M},} let us write \begin{equation*} M_{E}( R )= \int_{-\infty}^{\infty} R(x)\,|E(x)|^{-2}\,\text{\rm d}x. \end{equation*} For $\beta >0$ we define \begin{equation}\label{Intro_Def_Lambda_+} \varLambda_E^+(\beta) = \inf {M_E(R_{\beta}^+)}, \end{equation} and \begin{equation}\label{Intro_Def_Lambda_-} \varLambda_E^-(\beta) = \sup {M_E(R_{\beta}^-)}, \end{equation} where the infimum and the supremum are taken over the entire functions $R_{\beta}^{\pm}$ of exponential type at most $2\tau(E)$ such that \begin{equation}\label{char-ineq-E} R_{\beta}^-(x) \leq \chi_{[-\beta,\beta]}(x) \leq R_{\beta}^+(x) \end{equation} for all $x \in \mathbb{R}$. \smallskip In its simplest version, for the Paley-Wiener space (which corresponds to $E(z) = e^{-i\pi z}$), this is a classical problem in harmonic analysis with numerous applications to inequalities in number theory and signal processing. Its sharp solution was discovered by Beurling and Selberg \cite{V} when $\beta \in \tfrac12\mathbb{N}$, by Donoho and Logan \cite{DL} when $\beta < \frac12$, and recently by Littmann \cite{Lit} for the remaining cases\footnote{B. F. Logan announced the solution for the general case in the abstract ``Bandlimited functions bounded below over an interval", Notices Amer. Math. Soc., 24 (1977), pp. A331. His proof, however, has never been published.}. Here we provide a complete solution to this optimization problem with respect to a general de Branges metric $L^1(\mathbb{R}, |E(x)|^{-2}\,\text{\rm d}x)$. 
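In the Paley-Wiener case $E(z) = e^{-i\pi z}$ just mentioned, every object above is explicit: $A(z) = \cos \pi z$, $B(z) = \sin \pi z$, and the kernel \eqref{Intro_dB_rk} reduces to the sinc kernel. A small numerical confirmation (nothing here depends on the paper's particular $E$; the sample points are arbitrary):

```python
import numpy as np

pi = np.pi
E = lambda z: np.exp(-1j * pi * z)         # Paley-Wiener choice E(z) = e^{-i pi z}
Estar = lambda z: np.conj(E(np.conj(z)))   # E*(z) = conj(E(conj(z))) = e^{i pi z}

A = lambda z: (E(z) + Estar(z)) / 2        # should reduce to cos(pi z)
B = lambda z: 1j * (E(z) - Estar(z)) / 2   # should reduce to sin(pi z)

def K(w, z):
    # kernel from 2 pi i (wbar - z) K(w,z) = E(z)E*(wbar) - E*(z)E(wbar)
    wbar = np.conj(w)
    return (E(z) * Estar(wbar) - Estar(z) * E(wbar)) / (2j * pi * (wbar - z))
```

For real $w, z$ this $K(w,z)$ equals $\sin \pi(z-w)/(\pi(z-w))$, the zeros of $A$ (half-integers) and of $B$ (integers) interlace, and the Hermite-Biehler inequality \eqref{Intro_HB_cond} can be spot-checked at $z = i$.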
As in the Paley-Wiener case, there are three distinct qualitative regimes for the solution, and these depend on the roots of $A$ and $B$ (observe that if $E(z) = e^{-i\pi z}$, then $A(z) = \cos \pi z$ and $B(z) = \sin \pi z$, which have roots exactly at $\beta \in \tfrac12 \mathbb{N}$). Similar extremal problems in de Branges and Euclidean spaces were considered in \cite{CGon, CL2, HV, Ke, LS}. \smallskip Property (P3) implies that $A$ is even and $B$ is odd, and by the Hermite-Biehler condition, $A$ and $B$ have only real zeros. Moreover, these zeros are all simple. To see this, note that by \eqref{Intro_Def_K} any double zero $w$ of either $A$ or $B$ implies that $K(w,w)=0$, which would, in turn, imply that $E(w) =0$, in contradiction to (P2). It also follows from well-known properties of Hermite-Biehler functions (see for instance the discussion related to the phase function in \cite[Problem 48]{B} or \cite[Section 3]{HV}) that the zeros of $A$ and $B$ interlace. In our case we have $B(0) = 0$ and $A(0) >0$. If we label the nonnegative zeros of $B$ in order as $0 = b_0 < b_1 < b_2 < \ldots $ and the positive zeros of $A$ as $a_1 < a_2 < \ldots $, then we have \begin{equation*} 0 = b_0 < a_1 < b_1 < a_2 < b_2 < \ldots \end{equation*} For each $\beta > 0$ that is not a root of $A$ or $B$, we define an auxiliary Hermite-Biehler function $E_{\beta}(z)$. The corresponding companion functions $A_{\beta}(z)$ and $B_{\beta}(z)$ and the reproducing kernel $K_{\beta}(w,z)$ play an important role in the solution of our extremal problem. We divide this construction into two cases, depending on the sign of $A(\beta)B(\beta)$. Since $A(0) >0$ and $B(0)=0$, from \eqref{Intro_Def_K} we find that $B'(0) > 0$.
Then, \smallskip \noindent(i) if $b_k < \beta < a_{k+1}$, we set $\gamma_\beta:= \beta B(\beta)/A(\beta) >0;$ \smallskip \noindent(ii) if $a_k < \beta < b_k$, we set $\gamma_\beta:= -\beta A(\beta)/B(\beta) >0.$ \smallskip \noindent In either case we now define $E_\beta$ by \begin{equation}\label{Intro_Def_E_beta_2} E_{\beta}(z) = E(z)(\gamma_{\beta} - iz). \end{equation} \begin{theorem}\label{Intro_thm5_super} Let $E$ be a Hermite-Biehler function satisfying properties {\rm (P1) - (P4)}. Let $\beta >0$ and $\varLambda_E^{\pm}(\beta)$ be defined by \eqref{Intro_Def_Lambda_+} and \eqref{Intro_Def_Lambda_-}. \begin{enumerate} \item[(i)] If $\beta \in \{a_i\}$, then \begin{equation*} \varLambda_E^+(\beta) = \sum_{\stackrel{A(\xi) = 0}{|\xi| \leq \beta}} \frac{1}{K(\xi, \xi)} \ \ \ {\rm and} \ \ \ \varLambda_E^-(\beta) = \sum_{\stackrel{A(\xi) = 0}{|\xi| < \beta}} \frac{1}{K(\xi, \xi)}. \end{equation*} \item[(ii)] If $\beta \in \{b_i\}$, then \begin{equation*} \varLambda_E^+(\beta) = \sum_{\stackrel{B(\xi) = 0}{|\xi| \leq \beta}} \frac{1}{K(\xi, \xi)} \ \ \ {\rm and} \ \ \ \varLambda_E^-(\beta) = \sum_{\stackrel{B(\xi) = 0}{|\xi| < \beta}} \frac{1}{K(\xi, \xi)}. \end{equation*} \item[(iii)] If $b_k < \beta < a_{k+1}$, then \begin{equation*} \varLambda_E^+(\beta) = \sum_{\stackrel{A_{\beta}(\xi) = 0}{|\xi| \leq \beta}} \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}\ \ \ {\rm and} \ \ \ \varLambda_E^-(\beta) = \sum_{\stackrel{A_{\beta}(\xi) = 0}{|\xi| < \beta}} \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}. \end{equation*} \item[(iv)] If $a_k < \beta < b_{k}$, then \begin{equation*} \varLambda_E^+(\beta) = \sum_{\stackrel{B_{\beta}(\xi) = 0}{|\xi| \leq \beta}} \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}\ \ \ {\rm and} \ \ \ \varLambda_E^-(\beta) = \sum_{\stackrel{B_{\beta}(\xi) = 0}{|\xi| < \beta}} \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}. 
\end{equation*} \end{enumerate} In each of the cases above, there exists a pair of extremal functions \new{ $R_{\beta,E}^\pm$, i.e. functions for which \eqref{char-ineq-E} holds and the identities $M_E(R_{\beta,E}^\pm) = \Lambda_E^\pm(\beta)$ are valid. In particular, the values $M_E(R_{\beta,E}^\pm)$ are finite.} These extremal functions interpolate the characteristic function $\chi_{[-\beta,\beta]}$ at points $\xi$ given by {\rm (i)} $A(\xi) = 0$; {\rm (ii)} $B(\xi) = 0$; {\rm (iii)} $A_{\beta}(\xi) = 0$; {\rm (iv)} $B_{\beta}(\xi) = 0$, respectively. In the generic cases {\rm (iii)} and {\rm (iv)} such a pair of extremal functions is unique. \end{theorem} \noindent{\sc Remark.} In the above theorem, interpolating $\chi_{[-\beta,\beta]}$ at the endpoints $\xi = \pm \beta$ means taking the value $1$ for the majorant and the value $0$ for the minorant. \smallskip We observe that Theorem \ref{Intro_thm5_super} provides a complete solution to our original extremal problem related to the pair correlation measure. In fact, recall that $E$ defined by \eqref{Intro_Def_E_special} has exponential type $\pi$. Let $R_{\beta}^{\pm}$ be a pair of functions of exponential type at most $2\pi$ that verifies \eqref{Intro_R_beta_pm}. Since $R_{\beta}^+$ is nonnegative on $\mathbb{R}$, a classical result of Krein \cite[p. 154]{A} (alternatively, see \cite[Lemma 14]{CL2}) gives us the representation $R_{\beta}^+(z) = U(z)U^*(z)$, where $U$ is entire of exponential type at most $\pi$. By the identity \eqref{Intro_key_id} we have \begin{equation*} M(R_{\beta}^+) = \int_{-\infty}^{\infty} |U(x)|^2 \,\text{\rm d}\mu(x) = \int_{-\infty}^{\infty} |U(x)|^2 \, |E(x)|^{-2}\, \text{\rm d}x = M_E(R_{\beta}^+) \end{equation*} provided either, and hence both, of the values $M(R_{\beta}^+)$ or $M_E(R_{\beta}^+)$ is finite. 
To prove the analogous statement for $R_{\beta}^-$, we \new{write $R_\beta^-$ as a difference of nonnegative functions (on $\mathbb{R}$), } \begin{equation*} \new{R_{\beta}^-(z) =R_{\beta,E}^+(z) - \big(R_{\beta,E}^+(z) - R_{\beta}^-(z)\big) }, \end{equation*} and conclude that \begin{equation*} M(R_{\beta}^-) = M(R_{\beta,E}^+) - M(R_{\beta,E}^+ - R_{\beta}^-) = M_E(R_{\beta,E}^+) - M_E(R_{\beta,E}^+ - R_{\beta}^-) = M_E(R_{\beta}^-) \end{equation*} provided either, and hence both, of the values $M(R_{\beta}^-)$ or $M_E(R_{\beta}^-)$ is finite. \subsubsection{Connection to the two-delta problem} We may consider the two-delta problem in the general de Branges setting, i.e. for a Hermite-Biehler function $E$ satisfying properties {\rm (P1) - (P4)} we define \begin{equation}\label{two-Delta_E} \varDelta_E(\beta) = \inf_{R \in \Omega_{\beta,E}} M_E(R) \end{equation} where the infimum is taken over the subclass $\Omega_{\beta,E}$ of nonnegative functions $R$ of exponential type at most $2\tau(E)$ such that $R(\pm \beta) \geq1$. Since $\H(E)$ is a reproducing kernel Hilbert space, the solution for this problem is given by Theorem \ref{Intro_HS_Thm5} (the proof is identical, with $K$ being the reproducing kernel of the space $\H(E)$). If $R_{\beta,E}^\pm$ is a pair of extremal functions given by Theorem \ref{Intro_thm5_super}, we show in Section \ref{Sec_de_Branges_spaces} that their difference $R:= R_{\beta,E}^+ - R_{\beta,E}^-$ is an extremal function for the two-delta problem \eqref{two-Delta_E}, and in particular we obtain \begin{equation*} \varDelta_E(\beta) = \varLambda_E^+(\beta) - \varLambda_E^-(\beta). \end{equation*} From Theorem \ref{Intro_thm1_Gallagher} and Theorem \ref{Intro_HS_Thm5} we arrive at the following result. \begin{corollary}\label{Intro_Cor6} Assume RH and \eqref{N star}, and let $K(w,z)$ be defined by \eqref{Intro_rk-rep}. 
Then \begin{equation} \label{U minus L} \big\{\mc{U}(\beta) - \mathcal{L}(\beta)\big\} \leq \frac{1}{ K(\beta, \beta) + |K(\beta, -\beta)|}= 1 - \left|\frac{\sin2\pi \beta}{2\pi\beta}\right| + O\!\left(\frac{1}{\beta^2 }\right). \end{equation} \end{corollary} \subsection{Related results} Our lower bounds for $N(T,\beta)$ are nontrivial only if the left-hand side of the inequality in \eqref{Intro_thm1_eq1} is positive. It is natural to ask for bounds on the smallest value of $\beta$ for which $N(T,\beta)$ is positive. For instance, in the context of Theorem \ref{Intro_thm2_Gallagher_2}, a straightforward numerical calculation implies that $ \tfrac12M(r_{\beta}^-) >0$ if $\beta \ge 0.8163$ and hence, assuming RH and \eqref{N star}, we see that $N(T,0.8163) \gg N(T)$; this is illustrated in Figure 1. In Section \ref{sg}, using Montgomery's formula in a different manner, we improve this estimate. \begin{theorem}\label{small gaps} Assume RH and \eqref{N star}. Then $N(T,0.606894) \gg N(T).$ \end{theorem} As stated, this appears to be the best known result on small gaps that follows from Montgomery's formula. Theorem \ref{small gaps} gives a modest improvement of the previous results of Montgomery \cite{M1} and Goldston, Gonek, \"{O}zl\"{u}k and Snyder \cite{GGOS} who, under the same assumptions, had shown that $N(T,0.6695...) \gg N(T)$ and $N(T, 0.6072...) \gg N(T)$, respectively.\footnote{The result in \cite{M1} is stated with $0.68$ in place of $0.6695...$\,. As is pointed out in \cite{GGOS}, it is not difficult to modify Montgomery's argument to derive this sharper estimate. Moreover, it is shown in \cite{GGOS} that a result stronger than Theorem \ref{small gaps} holds assuming \eqref{N star} and the generalized Riemann hypothesis for Dirichlet $L$-functions.} Our proof differs somewhat from the proofs of these previous results since we actually use Montgomery's formula twice, choosing two different test functions.
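The numerical threshold $0.8163$ mentioned above can be reproduced from the explicit formula for $\tfrac12 M(r_{\beta}^-)$ in Theorem \ref{Intro_thm2_Gallagher_2} (with $\Delta = 1$ in the notation used later for $V_{\Delta}^{\pm}$). The following sketch is a sanity check only, not part of the proofs; the truncation level and function names are ours.

```python
import math

def V1_minus(beta, N=100000):
    # Truncation of the series V_1^-(beta) (case Delta = 1), where
    # sgn(0^-) = -1 and sin(2*pi*(beta - n)) = sin(2*pi*beta).
    s2b = math.sin(2 * math.pi * beta)
    total = 0.0
    for n in range(-N, N + 1):
        sgn = -1.0 if n == 0 else math.copysign(1.0, n)
        d = beta - n
        total += sgn * (2.0 - s2b / (math.pi * d)) / d ** 2
    return total / (4 * math.pi ** 2)

def half_M_minus(beta):
    # (1/2) M(r_beta^-) via the closed form of Theorem 2 with Delta = 1.
    return ((beta - 0.5)
            - (1 / (2 * math.pi ** 2 * beta)
               - math.sin(2 * math.pi * beta) / (4 * math.pi ** 3 * beta ** 2))
            - V1_minus(beta))
```

One finds that $\tfrac12 M(r_{\beta}^-)$ vanishes near $\beta = 0.8163$ and is positive for larger $\beta$, in agreement with the discussion above.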
\smallskip \begin{figure} \includegraphics[scale=.44]{Pic3.pdf} \qquad \includegraphics[scale=.44]{Pic4.pdf} \caption{The above images illustrate the upper bound for $\mc{U}(\beta) - \mathcal{L}(\beta)$ given in Corollary \ref{Intro_Cor6}. } \label{figure2} \end{figure} Theorem \ref{small gaps} implies that infinitely often the gap between the imaginary parts of consecutive nontrivial zeros of $\zeta(s)$ is less than the average spacing. Define the quantity \[ \mu = \liminf_{n \to \infty} \frac{(\gamma_{n+1}\!-\!\gamma_n) \log \gamma_n}{2\pi}. \] Since the average size of $\gamma_{n+1}-\gamma_n$ is $2\pi/\log \gamma_n$, we see that trivially $\mu \le 1$. Assuming RH, Theorem \ref{small gaps} implies that $\mu \le 0.606894$. To see why, note that if \eqref{N star} holds then the claimed inequality for $\mu$ follows from Theorem \ref{small gaps} since $\mu \le \beta$ if $N(T,\beta)\gg N(T)$. On the other hand, if \eqref{N star} does not hold, then there are infinitely many multiple zeros of $\zeta(s)$, implying that $\mu=0$. Hence, in either case, we have $\mu \le 0.606894$. \smallskip Due to the connection to the class number problem for imaginary quadratic fields \cite{CI,MW}, it is an interesting open problem to prove that $\mu < \frac{1}{2}$. By a different method, also assuming RH, Feng and Wu \cite{FW} have proved that $\mu \le 0.5154$. This improves previous estimates by a number of other authors \cite{BMN,CCG,MO}. It does not appear, however, that any of these results can be applied to prove nontrivial estimates for the function $N(T,\beta)$. \smallskip In Section \ref{Sec_Q_analogue}, we prove a result which is an analogue of Theorems \ref{Intro_thm1_Gallagher} and \ref{Intro_thm2_Gallagher_2} for the zeros of primitive Dirichlet $L$-functions in $q$-aspect. This requires the version of Montgomery's formula given in \cite{CLLR}, which was proved using a modification of the asymptotic large sieve of Conrey, Iwaniec and Soundararajan \cite{CIS1}.
In this case, the results in \cite{CLLR} allow us to use Beurling-Selberg majorants and minorants of $\chi_{[-\beta, \beta]}(x)$ with Fourier transforms supported in $(-2,2)$. This leads to stronger results, which are stated in Theorem \ref{q theorem}. \section{Bounds via Beurling-Selberg majorants}\label{Sec_BS_majorants} In this section we prove Theorem \ref{Intro_thm2_Gallagher_2}. Exploiting the fact that we have explicit expressions for the Beurling-Selberg functions $r_{\beta}^{\pm}$ and their Fourier transforms, we also prove a version of Theorem \ref{Intro_thm1_Gallagher} that allows $\beta$ to vary with $T$. \begin{theorem}\label{Gallagher} Assume RH. Then, for any $\beta = \beta(T)>0$ satisfying \begin{equation} \label{beta condition} \beta \, \left( \frac{\log\log T}{\log T} \right)^{\!1/2} \to 0 \quad \text{as } T\to \infty, \end{equation} we have \begin{equation}\label{precise1} \begin{split} \frac12 M(r_{\beta}^{-}) + \frac{1}{2}\left(1\! -\! \frac{N^*(T)}{N(T)}\right) +o(1) \, \leq \, \frac{N(T,\beta)}{N(T)} \, &\leq \, \frac12 M(r_{\beta}^{+}) + \frac{1}{2}\left(1\! -\! \frac{N^*(T)}{N(T)}\right) +o(1) \end{split} \end{equation} when $T$ is sufficiently large. \end{theorem} The condition on $\beta$ in \eqref{beta condition} arises from the size of the error term in \eqref{F alpha} below, and it may be possible to weaken this condition slightly. Since it is generally believed that the zeros of $\zeta(s)$ are all simple, we expect that $N^*(T) = N(T)$ for all $T>0$ and hence that \eqref{N star} should hold. Assuming RH, Montgomery \cite{M1} has shown that \begin{equation}\label{Intro_4/3_bound} N^*(T) \le \left( \frac{4}{3} +o(1) \right) N(T) \end{equation} as $T\to \infty$. Observing that $N^*(T)\ge N(T)$, and combining \eqref{precise1}, \eqref{Intro_4/3_bound}, and Theorem \ref{Intro_thm2_Gallagher_2}, we deduce the following corollary, which does not rely on the additional assumption in \eqref{N star}. \begin{corollary}\label{Mont} Assume RH.
Then, for any $\beta>0$ satisfying \eqref{beta condition}, we have \begin{equation*} \beta - \frac{7}{6} + \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right) +o(1) \leq \frac{N(T,\beta)}{N(T)} \leq \beta + \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right) + o(1) \end{equation*} when $T$ is sufficiently large. \end{corollary} \noindent{\sc Remark.} The lower bound in Corollary \ref{Mont} can be sharpened slightly using improved estimates for $N^*(T)$ obtained by Montgomery and Taylor \cite{M2} (see the remark after Corollary \ref{cor_Mont_Taylor} below) or by Cheer and Goldston \cite{CG} assuming RH, or by Goldston, Gonek, \"{O}zl\"{u}k and Snyder \cite{GGOS} assuming the generalized Riemann hypothesis for Dirichlet $L$-functions. \smallskip Our original proof of Corollary \ref{Mont} was a bit different and did not rely directly on Montgomery's formula. We briefly indicate the main ideas. Writing $N(T,\beta)$ as a double sum and using a more precise formula for $N(T)$, we can show that \begin{equation}\label{alt_proof} N(T,\beta) \, = \, N(T) \left\{ \beta \mp \frac{1}{2}\frac{N^*(T)}{N(T)} +o(1) \right\} \ \pm \sum_{0<\gamma \le T} \!\! S\Big( \gamma \!\pm\! \frac{2\pi\beta}{\log T} \Big) \end{equation} for $\beta=o(\log T)$. Here, if $t$ does not correspond to an ordinate of a zero of $\zeta(s)$, we define $S(t) = \frac{1}{\pi}\arg \zeta(\frac{1}{2}+it)$ and otherwise we let \[ S(t) = \frac{1}{2} \lim_{\varepsilon \to 0} \big\{ S(t\!+\!\varepsilon)+S(t\!-\!\varepsilon) \big\}. \] Using ideas from \cite{CCM}, we can replace the sum involving $S(t)$ on the right-hand side of \eqref{alt_proof} with a double sum over zeros involving the odd function $f(x) = \arctan(1/x)-x/(1+x^2)$. In \cite{CCM}, we construct majorants and minorants of exponential type $2\pi$ for $f(x)$ using the framework for the solution of the Beurling-Selberg extremal problem given in \cite{CL} for the truncated (and odd) Gaussian. 
This allows us to prove the upper and lower bounds for $N(T,\beta)$ in Corollary \ref{Mont} by using these majorants and minorants in the sum on the right-hand side of \eqref{alt_proof}, twice applying the explicit formula, and then carefully estimating the resulting sums and integrals. The fact that our original proof relied on two applications of the explicit formula suggests using Montgomery's formula instead, and we have chosen only to present this simpler proof here. \subsection{Montgomery's function $F(\alpha)$} In order to study the distribution of the differences of pairs of zeros of $\zeta(s)$, Montgomery \cite{M1} introduced the function \begin{equation}\label{Mont_function_sec2} F(\alpha) := F(\alpha,T) = \frac{2\pi}{T\log T} \sum_{0< \gamma,\gamma' \le T } T^{i\alpha(\gamma'-\gamma)}\,w(\gamma'\!-\!\gamma)\,, \end{equation} where $\alpha$ is real, $T\ge 2$, and $w(u)=4/(4+u^2)$. Note that $F(\alpha)$ is real and that $F(\alpha)=F(-\alpha)$. Moreover, since \[ \sum_{0< \gamma,\gamma' \le T } T^{i\alpha(\gamma'-\gamma)}\,w(\gamma'\!-\!\gamma) = 2\pi \int_{-\infty}^{\infty} e^{-4\pi |u|} \Bigg| \sum_{0<\gamma\le T} T^{i\alpha\gamma} e^{2\pi i \gamma u} \Bigg|^2 \text{\rm d}u\,, \] we see that $F(\alpha) \ge 0$ for $\alpha \in \mathbb{R}.$ Multiplying $F(\alpha)$ by a function $\widehat{R} \in L^1(\mathbb{R})$ and integrating, we derive the convolution formula \begin{equation} \label{convolution} \sum_{0< \gamma,\gamma' \le T } R\!\left( (\gamma'\!-\!\gamma)\frac{\log T}{2\pi} \right) w(\gamma'\!-\!\gamma) = \frac{T\log T}{2\pi} \int_{-\infty}^{\infty} \widehat{R}(\alpha) \, F(\alpha) \, \text{\rm d}\alpha. 
\end{equation} Assuming RH, refining the original work of Montgomery \cite{M1}, Goldston and Montgomery \cite[Lemma 8]{GM} proved that \begin{equation} \label{F alpha} F(\alpha) = \left( T^{-2|\alpha|} \log T + |\alpha| \right)\left(1 + O\left(\sqrt{\tfrac{\log\log T}{\log T}} \right) \right), \quad \text{as } T \to \infty, \end{equation} uniformly for $0\le |\alpha| \le 1$. Using this asymptotic formula for $F(\alpha)$ in the integral on the right-hand side of \eqref{convolution} allows for the evaluation of a large class of double sums over differences of zeros of $\zeta(s)$. \smallskip From \eqref{convolution}, \eqref{F alpha}, and Plancherel's theorem, one can deduce Montgomery's formula as stated in \eqref{Mont_formula}. Furthermore, Montgomery \cite{M1} conjectured that $F(\alpha) = 1 + o(1)$ for $|\alpha| >1$, uniformly for $\alpha$ in bounded intervals. Along with \eqref{F alpha}, this conjecture completely determines the behavior of $F(\alpha)$, and suggests that Montgomery's formula in \eqref{Mont_formula} continues to hold for any function $R(x)$ whose Fourier transform $\widehat{R}(\alpha)$ is compactly supported. Choosing $R(x)$ to approximate the characteristic function of an interval led Montgomery to make the pair correlation conjecture for the zeros of $\zeta(s)$ in \eqref{PCC}. \subsection{The Fourier transforms of $r_{\beta}^{\pm}$} Recall the entire functions $H_0(z)$ and $H_1(z)$ defined in \eqref{Intro_def_H_0} and \eqref{Intro_def_H_1}. The Fourier transform of $H_1$ is given by \begin{equation}\label{FTK} \widehat{H_1}(t) = \max\big(1 \!-\! |t|, 0\big) \end{equation} for $t \in \mathbb{R}$, while the Fourier transform of the integrable function $W(x) = H_0(x) - \sgn(x)$ is given by \cite[Theorems 6 and 7]{V} \begin{equation}\label{FTE} \widehat{W}(t) = \left\{ \begin{array}{ll} 0, & {\rm if} \ \ t=0,\\ (\pi i t)^{-1} \big\{(1 - |t|)(\pi t \cot \pi t - 1)\big\}, & { \rm if} \ \ 0 < |t| <1,\\ -(\pi i t)^{-1}, & {\rm if} \ \ |t| \geq 1. 
\end{array} \right. \end{equation} We can now compute the Fourier transforms of the functions $r_{\beta}^{\pm}$ defined in \eqref{Intro_BS1} and \eqref{Intro_BS2}, which, as we already noted, are continuous functions supported in $[-1,1]$. \begin{lemma} For $-1 \leq t \leq 1$ we have \begin{align}\label{FTr} \begin{split} \widehat{r}_{\beta}^{\pm}(t) & = i \,\sin 2\pi \beta t\,\, \widehat{W}(t) + \frac{\sin 2 \pi \beta t}{\pi t} \pm (1 - |t|) \cos 2\pi \beta t. \end{split} \end{align} \end{lemma} \begin{proof} Note that \begin{equation*} r_{\beta}^{\pm}(x) = \frac12 \big\{ (W(x+\beta) \pm H_1(x + \beta)) + (W(-x+\beta) \pm H_1(-x + \beta))\big\} + \chi_{[-\beta, \beta]}(x). \end{equation*} The result now follows from \eqref{FTK} and \eqref{FTE}. \end{proof} Observe from \eqref{FTr} that $\widehat{r}_{\beta}^{\pm}$ are Lipschitz functions, each with Lipschitz constant $C = O\big((1 + \beta)^2\big)$. \subsection{Proof of Theorem \ref{Gallagher}} \label{amf} For any admissible function $R$, Plancherel's theorem implies that \begin{equation} \label{Plancherel} M(R) = \widehat{R}(0) - \int_{-1}^1 \widehat{R}(t) \, \big(1\! -\! |t|\big)\,\text{\rm d}t. \end{equation} For simplicity, let $r_{\beta} = r_{\beta}^{\pm}$ denote either of our Beurling-Selberg functions. Then, by \eqref{N}, \eqref{convolution}, \eqref{F alpha}, \eqref{Plancherel}, and another application of Plancherel's theorem, we have \begin{align}\label{MontFor} \begin{split} \frac{1}{N(T)} \sum_{0< \gamma, \gamma' \leq T} & r_{\beta} \left((\gamma' \!-\! \gamma)\tfrac{\log T}{2\pi}\right)\,w(\gamma' \!-\! 
\gamma) \\ & = \int_{-1}^{1} \widehat{r}_{\beta}(t) \left( T^{-2|t|}\, \log T + |t|\right)\text{\rm d}t + O\!\left( (1\!+\!\beta) \sqrt{\tfrac{\log\log T}{\log T}} \right) \\ & = \int_{-\infty}^{\infty} \widehat{r}_{\beta}\left( \frac{u}{\log T}\right) e^{-2|u|} \,\text{\rm d}u + \int_{-1}^{1} \widehat{r}_{\beta}(t) \, |t|\, \text{\rm d}t + o(1) \\ &= \widehat{r}_{\beta}(0) + \int_{-1}^{1} \widehat{r}_{\beta}(t) \, \text{\rm d}t - \int_{-1}^{1} \widehat{r}_{\beta}(t) \, \big(1\!-\!|t|\big) \, \text{\rm d}t + o(1) \\ &= r_{\beta}(0) +M(r_\beta) + o(1). \end{split} \end{align} Here we have used the fact that $|\widehat{r}_{\beta}(t)| = O(1 + \beta)$ uniformly for all $t \in \mathbb{R}$, together with the assumption that $\beta$ satisfies \eqref{beta condition}, to establish the error term of $o(1)$ in \eqref{MontFor}. This error term relies, in part, on the bound (here using that $\widehat{r}_{\beta}$ has Lipschitz constant $C = O\big((1 + \beta)^2\big)$), \begin{align*} &\left| \int_{-\infty}^{\infty} \left\{ \widehat{r}_{\beta}\left( \frac{u}{\log T}\right) - \widehat{r}_{\beta}(0)\right\} e^{-2|u|} \,\text{\rm d}u \right| \leq \int_{-\infty}^{\infty} C \frac{|u|}{\log T} e^{-2|u|} \,\text{\rm d}u = O\!\left( \frac{(1\!+\!\beta)^2}{\log T} \right)= o(1). \end{align*} For the majorant $r_{\beta}^{+}$, noting that $1-\frac{u^2}{4} \le w(u) \le 1$, we have \begin{align}\label{maj11} \begin{split} \sum_{0< \gamma, \gamma' \leq T} & r_{\beta}^{+} \left((\gamma' \!-\! \gamma)\tfrac{\log T}{2\pi}\right)\,w(\gamma' \!-\! \gamma) \\ & \geq \, r_{\beta}^{+}(0) N^*(T) \, + \!\! \sum_{\substack{0< \gamma, \gamma' \leq T\\ \gamma\neq \gamma'}} \!\!\! \chi_{[-\beta,\beta]}\! \left((\gamma' \!-\! \gamma)\tfrac{\log T}{2\pi}\right) w(\gamma' \!-\!
\gamma) \\ &= \, r_{\beta}^{+}(0) N^*(T) \, +\, \left\{ 2 + O\!\left(\frac{(1\!+\!\beta)^2}{\log^2 T} \right) \right\} \, N(T,\beta) \\ &= \, r_{\beta}^{+}(0) N^*(T) \, + \, 2 N(T,\beta) \, + \, o(T\log T), \end{split} \end{align} where we have used \eqref{Fujii} and the assumption on $\beta$ in \eqref{beta condition} to estimate the error term. Using the inequalities $N^*(T) \ge N(T)$ and $r_\beta^+(0) \ge 1$, we conclude from \eqref{N}, \eqref{MontFor} and \eqref{maj11} that \begin{equation}\label{conclusion+} \frac{N(T,\beta)}{N(T)} \, \leq \, \frac{1}{2} \left\{ M(r_{\beta}^{+}) + r_\beta^+(0) \left( 1 \!-\! \frac{N^*(T)}{N(T)} \right) \right\} +o(1) \, \leq \, \frac{1}{2} \left\{ M(r_{\beta}^{+}) + \left( 1 \!-\! \frac{N^*(T)}{N(T)} \right) \right\} +o(1). \end{equation} Similarly, for the minorant $r_{\beta}^{-}$, we obtain \begin{align}\label{min11} \begin{split} \sum_{0< \gamma, \gamma' \leq T} & r_{\beta}^{-} \left((\gamma' \!-\! \gamma)\tfrac{\log T}{2\pi}\right)\,w(\gamma' \!-\! \gamma) \leq r_{\beta}^{-}(0) N^*(T) \, + \, 2 N(T,\beta) \, + \, o(T\log T)\,, \end{split} \end{align} for $\beta$ satisfying \eqref{beta condition}. In this case, since $r_\beta^-(0)\le 1$, we conclude from \eqref{N}, \eqref{MontFor} and \eqref{min11} that \begin{equation*} \frac{N(T,\beta)}{N(T)} \, \geq \, \frac{1}{2} \left\{ M(r_{\beta}^{-}) + r_\beta^-(0) \left( 1 \!-\! \frac{N^*(T)}{N(T)} \right) \right\} +o(1) \, \geq \, \frac{1}{2} \left\{ M(r_{\beta}^{-}) + \left( 1 \!-\! \frac{N^*(T)}{N(T)} \right) \right\} +o(1). \end{equation*} This concludes the proof of Theorem \ref{Gallagher}. \subsection{Proof of Theorem \ref{Intro_thm2_Gallagher_2}} \label{sec:evaluateMr} \subsubsection{Evaluation of $M(r_{\beta}^{\pm})$} We now calculate a slightly more general version of the quantity $M(r_{\beta}^{\pm})$, and specialize to the case of Theorem \ref{Intro_thm2_Gallagher_2} at the end of this subsection. 
In particular, we assume the validity of Montgomery's formula in \eqref{Mont_formula} for any integrable function $R$ with Fourier transform supported in $[-\Delta,\Delta]$ with $\Delta \geq 1$ (this stronger version is used later in the proof of Theorem \ref{q theorem}). The functions \begin{equation*} s_{\Delta, \beta}^{\pm}(x) = r_{\Delta \beta}^{\pm}(\Delta x) \end{equation*} are a majorant and a minorant of the characteristic function of the interval $[-\beta,\beta]$ of exponential type $2\pi \Delta$, and hence with Fourier transform supported in $[-\Delta,\Delta]$. We evaluate the quantity \begin{equation} \label{MDelta} \frac{1}{2} M\big(s_{\Delta, \beta}^{\pm}\big) = \frac{1}{2} \widehat{s}_{\Delta, \beta}^{\pm}(0) -\frac{1}{2} \int_{-1}^{1} \widehat{s}_{\Delta, \beta}^{\pm}(t) (1- |t|)\,\text{\rm d}t, \end{equation} and deduce Theorem \ref{Intro_thm2_Gallagher_2} from the case $\Delta=1$. \smallskip First observe that \begin{align*} \widehat{s}_{\Delta, \beta}^{\pm}(t) & = \frac{1}{\Delta} \, \widehat{r} _{\Delta \beta}^{\pm}\left(\frac{t}{\Delta}\right)\\ & = \frac{i}{\Delta}\,\sin 2\pi \beta t\, \widehat{W}\left(\frac{t}{\Delta}\right) + \frac{\sin 2 \pi \beta t}{\pi t} \pm \frac{(\Delta - |t|)}{\Delta^2} \cos 2\pi \beta t\\ & = \frac{\sin 2\pi \beta t}{\pi t} \left( 1 - \frac{|t|}{\Delta}\right)\left( \frac{\pi t}{\Delta} \cot \frac{\pi t }{\Delta} - 1\right) + \frac{\sin 2 \pi \beta t}{\pi t} \pm \frac{(\Delta - |t|)}{\Delta^2} \cos 2\pi \beta t\\ & = \frac{1}{\Delta^2} \big(\Delta - |t|\big) \sin 2 \pi \beta t \,\cot \frac{\pi t }{\Delta} + \frac{1}{\Delta} \frac{|t| \sin 2\pi \beta t }{\pi t} \pm \frac{(\Delta - |t|)}{\Delta^2} \cos 2\pi \beta t, \end{align*} and note that \begin{equation} \label{S0} \frac{1}{2} \widehat{s}_{\Delta, \beta}^{\pm}(0) = \beta \pm \frac{1}{2 \Delta}. 
\end{equation} Since $\widehat{s}_{\Delta, \beta}^{\pm}(t) $ is an even function, we have \begin{align*} \frac{1}{2} \int_{-1}^{1} \widehat{s}_{\Delta, \beta}^{\pm}(t) (1- |t|)\,\text{\rm d}t &= \frac{1}{\Delta^2} \int_0^1 (\Delta - t)(1-t)\sin 2 \pi \beta t \, \,\cot \frac{\pi t }{\Delta}\, \text{\rm d}t \\ & \ \ \ \ \ \ \ \ \ + \frac{1}{\Delta} \int_0^1 \frac{\sin 2\pi \beta t }{\pi} \, (1-t)\,\text{\rm d}t \pm \frac{1}{\Delta^2} \int_0^1 (\Delta - t)\,(1-t)\,\cos 2 \pi \beta t \, \text{\rm d}t\\ & := A + B \pm C, \end{align*} say. Integrating by parts, we find that \begin{align} B &= \frac{1}{\Delta}\left\{ \frac{1}{2 \pi^2 \beta} - \frac{\sin 2\pi \beta}{4\pi^3 \beta^2}\right\} \label{B} \end{align} and \begin{align} C &= \frac{1}{\Delta^2}\left\{ - \frac{(\Delta - 1)\cos 2\pi \beta}{(2 \pi \beta)^2} + \frac{(\Delta + 1)}{(2 \pi \beta)^2} - \frac{\sin 2 \pi \beta}{4 \pi^3 \beta^3}\right\}.\label{C} \end{align} In order to evaluate $A$, we make use of the identity \begin{equation*}\label{cot_identity} i \sum_{n= -N}^N \sgn(n)\, e^{-2\pi i nt} = \cot \pi t - \left( \frac{\cos \pi (2N+1) t}{\sin \pi t}\right), \end{equation*} which implies that \begin{align*} A & = \frac{1}{\Delta^2} \int_0^1 (\Delta - t) (1-t) \sin 2 \pi \beta t \,\left\{ i \sum_{n= -N}^N \sgn(n)\, e^{-2\pi i n\frac{t}{\Delta}}\right\} \,\text{\rm d}t \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{\Delta^2} \int_0^1 (\Delta - t) (1-t) \sin 2 \pi \beta t \, \left( \frac{\cos \pi (2N+1) \frac{t}{\Delta}}{\sin \pi \frac{t}{\Delta}}\right) \,\text{\rm d}t\\ & := A_N + D_N, \end{align*} say. The Riemann-Lebesgue lemma implies that $\displaystyle{\lim_{N\to \infty} D_N = 0}$, and thus it remains to evaluate $A_N$. 
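The cotangent partial-sum identity used above reduces, after collapsing the exponentials, to $2\sum_{n=1}^{N} \sin 2\pi n t = \cot \pi t - \cos(\pi(2N+1)t)/\sin \pi t$, which can be checked numerically (a sanity check only; the sample points below are arbitrary choices of ours):

```python
import math

def signed_exponential_sum(t, N):
    # i * sum_{|n| <= N} sgn(n) * exp(-2*pi*i*n*t), written as a real sine sum
    return 2.0 * sum(math.sin(2 * math.pi * n * t) for n in range(1, N + 1))

def cot_side(t, N):
    # cot(pi*t) - cos(pi*(2N+1)*t) / sin(pi*t), combined over a common denominator
    return (math.cos(math.pi * t)
            - math.cos(math.pi * (2 * N + 1) * t)) / math.sin(math.pi * t)
```

The two sides agree to machine precision for $t$ away from the integers.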
Interchanging summation and integration, we arrive at \begin{align*} A_N &= \frac{1}{\Delta^2}\sum_{n= -N}^N \sgn(n) \int_{0}^1 \left(\frac{e^{2\pi i \beta t} - e^{- 2\pi i \beta t}}{2}\right) e^{-2\pi i n \frac{t}{\Delta}}\, (\Delta - t)(1 - t) \,\text{\rm d}t \\ & = \frac{1}{4 \pi^2} \sum_{n= -N}^N \sgn(n) \left\{ - \frac{(\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) }{(\Delta \beta -n)^2} + \frac{(\Delta + 1)}{(\Delta \beta -n)^2} - \frac{\sin 2 \pi (\beta -\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)^3}\right\}. \end{align*} Therefore, letting $N\to \infty$, the above estimates imply that \begin{equation}\label{FinalA} A = \frac{1}{4 \pi^2} \sum_{n= -\infty}^\infty \sgn(n) \left\{ - \frac{(\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) }{(\Delta \beta -n)^2} + \frac{(\Delta + 1)}{(\Delta \beta -n)^2} - \frac{\sin 2 \pi (\beta -\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)^3}\right\}. \end{equation} \smallskip Combining the contributions from $A$ and $C$, we define the continuous functions $V^{\pm}: (0,\infty) \to \mathbb{R}$ by \begin{equation} \label{VDelta} V_{\Delta}^{\pm}(\beta) = \frac{1}{4 \pi^2} \sum_{n= -\infty}^\infty \frac{\sgn(n^{\pm})}{(\Delta \beta -n)^2} \left\{ - (\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) + (\Delta + 1) - \frac{\sin 2 \pi (\beta-\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)}\right\}, \end{equation} where $\sgn(0^{\pm}) = \pm1$. Then \eqref{MDelta}, \eqref{S0}, \eqref{B}, \eqref{C}, \eqref{FinalA} and \eqref{VDelta} imply that \begin{equation}\label{final_exp_s} \frac{1}{2} M\big(s_{\Delta, \beta}^{\pm}\big) = \left(\beta \pm \frac{1}{2 \Delta}\right) - \frac{1}{\Delta}\left\{ \frac{1}{2 \pi^2 \beta} - \frac{\sin 2\pi \beta}{4\pi^3 \beta^2}\right\} - V_{\Delta}^{\pm}(\beta). 
\end{equation} Specializing to the case $\Delta = 1,$ we obtain $$ \frac{1}{2} M\big(r_{\beta}^{\pm}\big) = \left(\beta \pm \frac{1}{2}\right) - \left\{ \frac{1}{2 \pi^2 \beta} - \frac{\sin 2\pi \beta}{4\pi^3 \beta^2}\right\} - V_{1}^{\pm}(\beta),$$ which is the explicit expression in Theorem \ref{Intro_thm2_Gallagher_2}. \subsubsection{Asymptotic evaluation} \label{sec:deduceThm1from2} By \eqref{VDelta} we have \begin{align} \label{eqn:asympVhaveG} \begin{split} V_{\Delta}^{\pm}(\beta) &= \frac{1}{4 \pi^2} \sum_{n= -\infty}^\infty \frac{1}{(\Delta \beta -n)^2} \left\{ - (\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) + (\Delta + 1) - \frac{\sin 2 \pi (\beta-\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)}\right\} \\ & \ \ \ \ \ \ \ \ \ \ \ - \frac{2}{4 \pi^2} \sum_{n<0 ({\rm or}\, \leq 0)} \frac{1}{(\Delta \beta -n)^2} \left\{ - (\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) + (\Delta + 1) - \frac{\sin 2 \pi (\beta-\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)}\right\} \\ &= \frac{1}{4 \pi^2} \sum_{n= -\infty}^\infty \frac{1}{(\Delta \beta -n)^2} \left\{ - (\Delta - 1)\cos 2\pi (\beta -\tfrac{n}{\Delta}) + (\Delta + 1) - \frac{\sin 2 \pi (\beta-\tfrac{n}{\Delta}) }{\frac{\pi}{\Delta} (\Delta \beta -n)}\right\} \\ & \ \ \ \ \ \ \ \ - \frac{(\Delta + 1)}{2 \pi^2 \beta \Delta} + O\!\left(\beta^{-2}\right) \\ &:= G_\Delta(\beta) - \frac{(\Delta + 1)}{2 \pi^2 \beta \Delta} + O\!\left(\beta^{-2}\right), \end{split} \end{align} say. Here we have used the estimate \[ \sum_{n\ge 0} \frac{(\Delta \!-\! 1) \cos 2\pi (\beta\!+\!\tfrac{n}{\Delta})}{(\Delta \beta + n)^2} = O\!\left(\beta^{-2}\right), \] which follows from summation by parts and the fact that \[ \sum_{n=0}^N (\Delta\!-\!1)\cos 2\pi (\beta\!+\!\tfrac{n}{\Delta}) = O(\Delta^2) \] uniformly in $N$.
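Before turning to the asymptotics, we note that the closed form just obtained for $\Delta = 1$ can be cross-checked against a direct numerical evaluation of $\tfrac12 M(r_{\beta}^{\pm})$ via \eqref{Plancherel} and the Fourier transforms \eqref{FTr}. The sketch below is a verification aid, not part of the argument; the truncation and quadrature parameters are ours.

```python
import math

def V1(beta, sign0, N=100000):
    # Truncated series V_1^{+-}(beta) (Delta = 1); sign0 = sgn(0^{+-}) = +-1
    s2b = math.sin(2 * math.pi * beta)
    total = 0.0
    for n in range(-N, N + 1):
        sgn = float(sign0) if n == 0 else math.copysign(1.0, n)
        d = beta - n
        total += sgn * (2.0 - s2b / (math.pi * d)) / d ** 2
    return total / (4 * math.pi ** 2)

def half_M_closed(beta, pm):
    # (1/2) M(r_beta^{+-}) from the closed form above with Delta = 1
    return ((beta + pm * 0.5)
            - (1 / (2 * math.pi ** 2 * beta)
               - math.sin(2 * math.pi * beta) / (4 * math.pi ** 3 * beta ** 2))
            - V1(beta, pm))

def half_M_quadrature(beta, pm, M=200000):
    # (1/2) M(r) = (1/2) rhat(0) - int_0^1 rhat(t)(1 - t) dt (midpoint rule),
    # where, for 0 < t < 1,
    # rhat(t) = (1-t) sin(2 pi beta t) cot(pi t) + sin(2 pi beta t)/pi
    #           +- (1-t) cos(2 pi beta t).
    h = 1.0 / M
    integral = 0.0
    for k in range(M):
        t = (k + 0.5) * h
        rhat = ((1 - t) * math.sin(2 * math.pi * beta * t)
                * math.cos(math.pi * t) / math.sin(math.pi * t)
                + math.sin(2 * math.pi * beta * t) / math.pi
                + pm * (1 - t) * math.cos(2 * math.pi * beta * t))
        integral += rhat * (1 - t) * h
    return (beta + pm * 0.5) - integral
```

For generic $\beta$ (away from the integers and half-integers, where individual series terms require the removable-singularity limit), the two evaluations agree closely.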
Notice that \begin{align*} G_{\Delta}(\beta) = \lim_{N \rightarrow \infty} \ \frac{1}{2\Delta^2}\sum_{n= -N}^N \int_{0}^1 \left(e^{2\pi i \big(\beta - \frac{n}{\Delta}\big)t} + e^{- 2\pi i \big(\beta - \frac{n}{\Delta}\big) t}\right) \, (\Delta - t)(1 - t) \,\text{\rm d}t. \end{align*} Since the series defining $G_\Delta(\beta)$ in \eqref{eqn:asympVhaveG} converges uniformly for $\beta$ in a compact set, Morera's theorem can be used to show that $G_{\Delta}(\beta)$ is an analytic function of $\beta$. Thus, we can differentiate $G_{\Delta}(\beta)$ with respect to $\beta$ term-by-term, and it follows from the Riemann-Lebesgue lemma that \begin{align*} G'_\Delta(\beta) &= \lim_{N \rightarrow \infty} \ \frac{1}{2\Delta^2}\sum_{n= -N}^N \int_{0}^1 2\pi i t \left(e^{2\pi i \big(\beta - \frac{n}{\Delta}\big)t} - e^{- 2\pi i \big(\beta - \frac{n}{\Delta}\big) t}\right) \, (\Delta - t)(1 - t) \,\text{\rm d}t \\ &= \lim_{N \rightarrow \infty} \ -\frac{1}{\Delta^2} \int_{0}^1 2\pi t \, (\Delta - t)(1 - t) \, \sin(2\pi \beta t)\, \frac{\sin (2\pi \left( N + \frac{1}{2}\right)\frac{ t}{\Delta})}{\sin \frac{\pi t}{\Delta}} \,\text{\rm d}t \\ &= 0. \end{align*} Therefore $G_{\Delta}(\beta)$ is a constant function in $\beta$ and, in order to determine its value, it suffices to evaluate $G_{\Delta}(0).$ Using the identities \cite[pp. 927--930]{MP} $$ \sum_{n = 1}^{\infty} \frac{\cos nx}{n^2} = \frac{1}{12} \left( 3x^2 -6\pi x +2\pi^2\right), \quad 0 \leq x \leq 2\pi,$$ and $$ \sum_{n = 1}^{\infty} \frac{\sin nx}{n^3} = \frac{1}{12} \left(x^3 - 3\pi x^2 + 2\pi^2x\right), \quad 0 \leq x \leq 2\pi,$$ it follows that \begin{align*} \label{eqn:gDelta0} G_\Delta(0) &= \frac{1}{2\Delta} - \frac{1}{6\Delta^2} + \frac{1}{2\pi^2}\sum_{n = 1}^\infty \left(-\frac{(\Delta - 1)\cos \frac{2\pi n}{\Delta}}{n^2} + \frac{(\Delta + 1)}{n^2} - \frac{\Delta \sin \frac{2\pi n}{\Delta}}{\pi n^3}\right) = \frac{1}{2}. 
\end{align*} Inserting this estimate into \eqref{eqn:asympVhaveG}, we derive that \begin{equation*}\label{eqn:asymptoticofV} V_{\Delta}^\pm(\beta) = \frac{1}{2} - \frac{(\Delta + 1)}{2\pi^2 \beta \Delta} + O\left( \beta^{-2}\right), \end{equation*} and therefore, from \eqref{final_exp_s}, \begin{equation}\label{Final_answer_M_s} \frac{1}{2} M\big(s_{\Delta, \beta}^{\pm}\big) = \left(\beta - \frac{1}{2} \pm \frac{1}{2 \Delta}\right) + \frac{1}{2\pi^2\beta} + O\left(\beta^{-2}\right). \end{equation} In particular, choosing $\Delta = 1,$ we deduce that \begin{equation*} \frac{1}{2} M\big(r_{\beta}^{\pm}\big) = \left(\beta - \frac{1}{2} \pm \frac{1}{2}\right) + \frac{1}{2\pi^2\beta} + O\left(\beta^{-2}\right), \end{equation*} and this concludes the proof of Theorem \ref{Intro_thm2_Gallagher_2}. \section{Reproducing kernel Hilbert spaces}\label{HS_approach} Our objective in this section is to prove Theorems \ref{Intro_HS_Thm1_RP} and \ref{Intro_HS_Thm5}. \subsection{Equivalence of norms via uncertainty} In order to establish the equivalence of the norms of $\mathcal{B}_2(\pi,\mu)$ and $\mathcal{B}_2(\pi)$ we shall make use of the classical uncertainty principle for the Fourier transform. The version we present here is due to Donoho and Stark \cite{DS}. \begin{lemma}{\rm (cf. \cite[Theorem 2]{DS})}\label{uncertainty} Let $T,W \subset \mathbb{R}$ be measurable sets and let $f \in L^2(\mathbb{R})$ with $\|f\|_2 =1$. Then $$|W|^{1/2}\,\cdot\,|T|^{1/2} \geq 1 - \|f \chi_{\mathbb{R}\setminus T}\|_2 - \|\widehat{f} \chi_{\mathbb{R}\setminus W}\|_2,$$ where $|W|$ denotes the Lebesgue measure of the set $W$. \end{lemma} \begin{lemma} \label{Lem_equiv_norms} Let $f$ be entire. Then $f\in \mathcal{B}_2(\pi)$ if and only if $f\in \mathcal{B}_2(\pi,\mu)$. Moreover, there exists $c>0$ independent of $f$ such that $$c\|f\|_{2} \le \|f\|_{L^2(\text{\rm d}\mu)} \le \|f\|_{2}$$ for all $f\in \mathcal{B}_2(\pi)$.
\end{lemma} \begin{proof} Since $\mu$ is absolutely continuous with respect to the Lebesgue measure, with density $1 - \left(\frac{\sin \pi x}{\pi x}\right)^2 \leq 1$, it is clear that \[ \|f\|_{L^2(\text{\rm d}\mu)} \le \|f\|_{2} \] for all $f\in \mathcal{B}_2(\pi)$, so in particular $\mathcal{B}_2(\pi)\subseteq \mathcal{B}_2(\pi,\mu)$. \smallskip Now let $f\in\mathcal{B}_2(\pi,\mu)$. Since $f$ is entire, it is in particular bounded in a neighborhood of the origin, while the density $1 - \left(\frac{\sin \pi x}{\pi x}\right)^2$ is bounded away from zero outside this neighborhood; hence $\|f\|_{2}<\infty$ and $f\in \mathcal{B}_2(\pi)$. It remains to show that there exists $c$, independent of $f$, with $c\|f\|_2\le \|f\|_{L^2(\text{\rm d}\mu)}$. We let $T = [-\tfrac18, \tfrac18]$, $W = [-\tfrac12, \tfrac12]$ and use Lemma \ref{uncertainty} (applied to $f/\|f\|_2$, noting that $\widehat{f}$ is supported in $W$) to get $$\|f \chi_{\mathbb{R}\setminus T}\|_2 \geq \frac{1}{2} \|f\|_2.$$ Let $0 < \eta<1$ be such that $$\eta^2\, \chi_{\mathbb{R}\setminus T}(x) \le \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\}.$$ Then $$\frac{\eta}{2}\, \|f\|_2 \le \eta \,\|f\chi_{\mathbb{R}\setminus T}\|_2 \le \|f\|_{L^2(\text{\rm d}\mu)}.$$ This completes the proof of the lemma. \end{proof} \subsection{Proof of Theorem \ref{Intro_HS_Thm1_RP}} We start by recording the expansions: \begin{align}\label{kw-pieces-rep} \begin{split} f(w,x) &= \frac{2\pi^2 w^2}{(2\pi^2 w^2 - 1)} \int_{-\frac12}^{\frac12} e^{2\pi i x t} \, e^{-2\pi i w t} \,\text{\rm d}t,\\ g(x) &= \int_{-\frac12}^{\frac12} e^{2\pi i x t} \cos\big(2^{\frac12} t\big) \,\text{\rm d}t,\\ h(x) &= -i \int_{-\frac12}^{\frac12}e^{2\pi i x t} \sin \big(2^{\frac12} t\big)\, \text{\rm d}t. \end{split} \end{align} Define \begin{align*} \kappa_w(x) &:= f(w,x) + c(w) g(x) + d(w) h(x),\\ \ell_w(x) &:= \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\}\kappa_w(x), \end{align*} and $$j_w(t) := \chi_{[-\frac12,\frac12]}(t) \left\{ \frac{2\pi^2 w^2}{(2\pi^2 w^2 - 1)} e^{-2\pi i w t} + c(w) \cos\big(2^{\frac12} t\big) - i\, d(w) \, \sin\big(2^{\frac12} t\big)\right\}.
$$ It follows from \eqref{kw-pieces-rep} that \begin{align}\label{HS_eq4} \begin{split} \ell_w(x) &=\left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\}\int_{-\infty}^\infty e^{2\pi i x t} \,j_w(t) \,\text{\rm d}t \\ &= \int_{-\infty}^\infty e^{2\pi i x t} \left\{ j_w(t) -\int_{-1}^1 (1-|u|) \,j_w(t-u) \,\text{\rm d}u \right\} \text{\rm d}t. \end{split} \end{align} Let $-\frac12 < t < \frac12$. The following identities hold for $a \in \mathbb{R}$: \begin{align*} \int_{t-\frac12}^{t+\frac12} \cos(a(t-u))\, \text{\rm d}u & = \frac2a \sin(a/2),\\ \int_{t-\frac12}^{t+\frac12} |u| \cos(a(t-u))\, \text{\rm d}u &= \frac{2}{a^2} \cos(a/2) -\frac{2}{a^2} \cos(at) +\frac1a \sin(a/2), \end{align*} and therefore \begin{align}\label{HS_cos-int} \begin{split} \cos(at) \,- &\int_{-1}^{1} (1-|u|)\, \chi_{[-\frac12,\frac12]}(t-u) \cos(a(t-u))\,\text{\rm d}u \\ &=-\frac1a \sin(a/2) + \frac{2}{a^2} \cos(a/2) + \left(1- \frac{2}{a^2}\right) \cos(at). \end{split} \end{align} Similarly, we have \begin{align}\label{HS_sin-int} \begin{split} \sin(at) \, - &\int_{-1}^{1} (1-|u|) \,\chi_{[-\frac12,\frac12]}(t-u) \,\sin(a(t-u))\,\text{\rm d}u\\ &=\frac{2t}{a}\cos(a/2) +\left(1-\frac{2}{a^2}\right) \sin(at). 
\end{split} \end{align} Letting $a=2^{\frac12}$ in \eqref{HS_cos-int} and \eqref{HS_sin-int} gives, for $|t| < \frac12$, the identities \begin{align}\label{HS_eq1} \begin{split} \cos\big(2^{\frac12} t\big) - \int_{-1}^1 (1-|u|)\,\chi_{[-\frac12,\frac12]}(t-u) \,\cos\big(2^{\frac12} (t-u)\big) \,\text{\rm d}u &= \cos\big(2^{-\frac12}\big) -2^{-\frac12} \sin\big(2^{-\frac12}\big),\\ \sin\big(2^{\frac12} t\big) - \int_{-1}^1 (1-|u|) \,\chi_{[-\frac12,\frac12]}(t-u) \,\sin\big(2^{\frac12} (t-u)\big) \,\text{\rm d}u &= 2^{\frac12} t \cos\big(2^{-\frac12}\big) , \end{split} \end{align} while the choice $a = -2\pi w $, for $|t|< \frac12$, gives \begin{align}\label{HS_eq2_0} \begin{split} \frac{2\pi^2 w^2}{(2\pi^2 w^2 - 1)} &\left(e^{-2\pi i w t} - \int_{-1}^1 (1-|u|)\,\chi_{[-\frac12,\frac12]}(t-u) \,e^{-2\pi i (t-u)w} \,\text{\rm d}u\right) \\ &= e^{-2\pi i w t} - \frac{(1-2\pi i w t) \cos(\pi w) -\pi w\sin(\pi w)}{1-2\pi^2 w^2 }. \end{split} \end{align} \smallskip We note that $\ell_w$ has exponential type at most $3\pi$. When inserting \eqref{Intro_HS_eq3}, \eqref{HS_eq1} and \eqref{HS_eq2_0} into \eqref{HS_eq4}, the linear functions (of the variable $t$) from \eqref{HS_eq1} multiplied by $c(w)$ and $d(w)$ eliminate, for $|t|< \frac12$, the linear function in \eqref{HS_eq2_0}. Hence we obtain $$\ell_w(x) = \int_{-3/2}^{3/2} e^{2\pi i x t} \left(e^{-2\pi i wt} + q_w(t)\right) \text{\rm d}t\,,$$ where $q_w(t) =0$ for $-\frac12<t<\frac12$. Therefore $$\ell_w(x) = \frac{\sin\pi(x-w)}{\pi(x-w)} + Q_w(x),$$ where $$\int_{-\infty}^\infty f(x) \,Q_w(x) \,\text{\rm d}x =0$$ for all $f\in \mathcal{B}_2(\pi,\mu)$. This implies that \begin{align*} \int_{-\infty}^\infty f(x) \, \kappa_w(x)\,\left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\}\text{\rm d}x = f(w) \end{align*} for all $f\in \mathcal{B}_2(\pi,\mu)$. Thus $\overline{\kappa_w}$ is a reproducing kernel, and since such a kernel is unique, it follows that $K(w,x) = \overline{\kappa_w(x)}$ as desired. 
This concludes the proof. \smallskip \noindent{\sc Remark.} The initial guess for the reproducing kernel was found in the following way. The starting point is the function $\ell_w$ introduced in the above proof. A Fourier transform leads to the identity \begin{equation}\label{HS_original_integral_equation} e^{-2\pi i t w} = \widehat{\kappa}_w(t) - \int_{-1}^1 (1-|u|) \,\widehat{\kappa}_w(t-u) \,\text{\rm d}u \end{equation} for $|t|<\frac12$, and two (formal) differentiations (using the fact that the second derivative of $(1-|u|)\chi_{[-1,1]}(u)$ is a linear combination of three Dirac deltas) lead to the equation $$-4\pi^2 w^2 e^{-2\pi i t w} = \widehat{\kappa}_w''(t) - \big(\widehat{\kappa}_w(t+1) + \widehat{\kappa}_w(t-1) - 2\widehat{\kappa}_w(t)\big).$$ If $\kappa_w$ has exponential type $\pi$, then for $|t|<\frac12$ this equation simplifies to $$ -4\pi^2 w^2 e^{-2\pi i t w} = \widehat{\kappa}_w''(t) + 2\widehat{\kappa}_w(t),$$ which can be solved explicitly. The original integral equation \eqref{HS_original_integral_equation} determines the two free parameters $c(w)$ and $d(w)$. \subsection{A geometric lemma} Before we proceed to the proof of Theorem \ref{Intro_HS_Thm5} we present a basic lemma\footnote{A similar version of this result was independently obtained by M. Kelly and J. D. Vaaler (personal communication).} on the geometry of Hilbert spaces. \begin{lemma}\label{HS_geometric_lemma} Let $H$ be a Hilbert space (over $\mathbb{C}$) with norm $\| \cdot \|$ and inner product $\langle \cdot, \cdot \rangle$. Let $v_1, v_2 \in H$ be two nonzero vectors (not necessarily distinct) such that $\|v_1\| = \|v_2\|$ and define $$\mc{J} = \big\{ x \in H; \ |\langle x, v_1\rangle | \geq 1 \ {\rm and} \ |\langle x, v_2\rangle | \geq 1\big\}.$$ Then \begin{equation}\label{HS_Lemma_eq1} \min_{x \in \mc{J}}\|x\| = \left(\frac{2}{\big(\|v_1\|^2 + |\langle v_1, v_2\rangle|\big)}\right)^{1/2}. 
\end{equation} The extremal vectors $y \in \mc{J}$ are given by: \begin{enumerate} \item[(i)] If $\langle v_1, v_2\rangle = 0$, then \begin{equation}\label{HS_def_x_theta_0} y = \left(\frac{2}{\big(\|v_1\|^2 + |\langle v_1, v_2\rangle|\big)}\right)^{1/2} \,\frac{(c_1v_1 + c_2v_2)}{\left\|v_1 + v_2\right\|}\,, \end{equation} where $c_1, c_2 \in \mathbb{C}$ with $|c_1| = |c_2| =1$. \smallskip \item[(ii)] If $\langle v_1, v_2\rangle \neq 0$, and we write $\langle v_1, v_2\rangle = \,e^{-i\alpha} \,|\langle v_1, v_2\rangle|$, then \begin{equation}\label{HS_def_x_theta} y = \left(\frac{2}{\big(\|v_1\|^2 + |\langle v_1, v_2\rangle|\big)}\right)^{1/2}\, \frac{c\,(e^{i\alpha } v_1 + v_2)}{\left\| e^{i\alpha} v_1 + v_2\right\|}\,, \end{equation} where $c \in \mathbb{C}$ with $|c| =1$. \end{enumerate} \end{lemma} \begin{proof} If $v_1$ and $v_2$ are linearly dependent the result is easy to verify, so we focus on the general case. The verification that each $y$ given by \eqref{HS_def_x_theta} belongs to $\mc{J}$ and has norm given by the right-hand side of \eqref{HS_Lemma_eq1} is straightforward. Now let $$\kappa := \min_{x \in \mc{J}}\|x\|\,,$$ and let $y \in \mc{J}$ be such that $\|y\| = \kappa$ (observe that such an extremal vector exists since we may restrict the search to the subspace $ {\rm span}\{v_1, v_2\}$). We consider $v_1' = e^{i\vartheta_1}v_1$ and $v_2' = e^{i\vartheta_2}v_2$ for appropriate choices of $\vartheta_1$ and $\vartheta_2$ such that \begin{equation}\label{HS_eq2} \langle y, v_1' \rangle \geq 1\ \ {\rm and} \ \ \langle y, v_2' \rangle \geq 1. \end{equation} Since $y \in {\rm span}\{v_1', v_2'\}$, we write $$y = a \,v_1' + b\,v_2'\,,$$ where $a,b \in \mathbb{C}$. The fact that $y$ satisfies \eqref{HS_eq2} implies that $$y' = \overline{b}\,v_1' + \overline{a}\,v_2'$$ also satisfies \eqref{HS_eq2} and thus belongs to $\mc{J}$. Therefore $z = (y + y')/2$ also satisfies \eqref{HS_eq2} and belongs to $\mc{J}$. 
If $y \neq y'$, then, noting that $\|y'\| = \|y\|$, the parallelogram law gives $$\|z\|^2 = \left\| \frac{y + y'}{2}\right\|^2 = \frac{1}{2}\big(\|y\|^2 + \|y'\|^2\big) - \left\| \frac{y - y'}{2}\right\|^2 < \|y\|^2 = \kappa^2,$$ a contradiction. Therefore $y = y'$ and, by the linear independence of $v_1'$ and $v_2'$, we have $b = \overline{a}$, so that \begin{equation}\label{HS_eq3_y_final} y = a \,v_1' + \overline{a}\,v_2'. \end{equation} Having reduced our considerations to a vector $y$ of the form \eqref{HS_eq3_y_final}, we see that the two conditions in \eqref{HS_eq2} are complex conjugates, and we may work with only one of them, say $\langle y, v_1' \rangle \geq 1$. Since $\|y\| = \kappa$ is minimal, we must have the equality $\langle y, v_1' \rangle = 1$. This translates to \begin{equation}\label{HS_key_a_eq1} a \|v_1'\|^2 + \overline{a} \, \langle v_2', v_1' \rangle = 1, \end{equation} and we find \begin{align*} \|y\|^2 & = \left(|a|^2\|v_1'\|^2 + \overline{a}^2 \langle v_2', v_1' \rangle\right) + \left(|a|^2\|v_2'\|^2 + a^2 \langle v_1', v_2' \rangle\right) = \overline{a} + a = 2\, {\rm Re}\,(a). \end{align*} By solving the system of equations \eqref{HS_key_a_eq1} in the variables ${\rm Re}\,(a)$ and ${\rm Im}\,(a)$ we arrive at \begin{align}\label{HS_real_part} \kappa^2 = \|y\|^2 \, = \, 2\, {\rm Re}\,(a) \, = \, 2 \frac{\|v_1'\|^2 - {\rm Re}\,(\langle v_1', v_2'\rangle)}{\|v_1'\|^4 - |\langle v_1', v_2'\rangle|^2} \, \geq \, 2 \frac{\|v_1\|^2 - |\langle v_1, v_2\rangle|}{\|v_1\|^4 - |\langle v_1, v_2\rangle|^2} \, = \, \frac{2}{\|v_1'\|^2 + |\langle v_1', v_2'\rangle|}, \end{align} and \begin{equation}\label{HS_im_part} {\rm Im}\,(a) = \frac{{\rm Im}\,(\langle v_1', v_2'\rangle)}{\|v_1'\|^4 - |\langle v_1', v_2'\rangle|^2}. \end{equation} We have equality in \eqref{HS_real_part} if and only if $\langle v_1', v_2'\rangle \geq 0$. If $\langle v_1', v_2'\rangle = 0$, then $\vartheta_1$ and $\vartheta_2$ are arbitrary and $a \geq 0$. This leads to the family in \eqref{HS_def_x_theta_0}.
If $\langle v_1', v_2'\rangle \neq 0$, then we must have $\vartheta_1 \equiv \vartheta_2 + \alpha \, \,({\rm mod} \,2\pi)$ and $a \geq 0$, which leads to the family in \eqref{HS_def_x_theta}. \end{proof} \subsection{Proof of Theorem \ref{Intro_HS_Thm5}} Let $R$ be a nonnegative admissible function such that $R(\pm \beta) \geq 1$. Since $R$ has exponential type at most $2\pi$, by Krein's decomposition \cite[p. 154]{A} we have $$R(z) = S(z)\,\overline{S(\overline{z})},$$ where $S$ is an entire function of exponential type at most $\pi$. On the real line we have $R(x) = |S(x)|^2$ and thus $S \in L^2(\mathbb{R})$. Therefore, the function $S$ belongs to the reproducing kernel Hilbert space $\mathcal{H} = \mathcal{B}_2(\pi,\mu)$. The hypotheses imply that $$1 \leq |S(\beta)| =\big|\langle S, K(\beta, \cdot)\rangle_{\mathcal{H}} \big|$$ and $$1 \leq |S(-\beta)| =\big|\langle S, K(-\beta, \cdot)\rangle_{\mathcal{H}} \big|.$$ We want to minimize the quantity $$\|S\|^2_{\mathcal{H}} = \int_{-\infty}^{\infty} |S(x)|^2 \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\} \,\text{\rm d}x.$$ By the reproducing kernel property and the symmetry of the pair correlation measure (alternatively, one can check directly by Theorem \ref{Intro_HS_Thm1_RP}), we have $$\|K(\beta, \cdot)\|^2_{\mathcal{H}} = K(\beta, \beta) = K(-\beta, -\beta) = \|K(-\beta, \cdot)\|^2_{\mathcal{H}}.$$ We are thus in position to use Lemma \ref{HS_geometric_lemma} to derive that \begin{align}\label{HS_asym_beta} \|S\|^2_{\mathcal{H}} \geq \frac{2}{K(\beta, \beta) + |\langle K(\beta, \cdot), K(-\beta, \cdot)\rangle_{\mathcal{H}}|} = \frac{2}{ K(\beta, \beta) + |K(\beta, -\beta)|}. \end{align} The cases of equality in \eqref{HS_asym_beta} follow from \eqref{HS_def_x_theta_0} and \eqref{HS_def_x_theta}. \smallskip It remains to verify the asymptotic behavior on the right-hand side of \eqref{Intro_HS_Thm2_eq1} as $\beta \to \infty$. 
From Theorem \ref{Intro_HS_Thm1_RP} we get $$K(\beta, \beta) = \frac{2\pi^2 {\beta}^2}{(2\pi^2 {\beta}^2 - 1)} + c(\beta) g(\beta) + d(\beta) h(\beta)$$ and \begin{align*} K(\beta,-\beta) & = \frac{2\pi^2 {\beta}^2}{(2\pi^2 {\beta}^2 - 1)} \frac{\sin2\pi \beta}{2\pi\beta}+ c(\beta) g(-\beta) + d(\beta) h(-\beta)\\ & = \frac{2\pi^2 {\beta}^2}{(2\pi^2 {\beta}^2 - 1)} \frac{\sin2\pi \beta}{2\pi\beta}+ c(\beta) g(\beta) - d(\beta) h(\beta). \end{align*} Therefore, if $K(\beta,-\beta) \geq 0$ we have \begin{align}\label{HS_Asym_1} K(\beta,\beta) + |K(\beta,-\beta)| = \frac{2\pi^2 {\beta}^2}{(2\pi^2 {\beta}^2 - 1)} \left(1 + \frac{\sin2\pi \beta}{2\pi\beta}\right) + 2 c(\beta) g(\beta), \end{align} and if $K(\beta,-\beta) \leq 0$ we have \begin{align}\label{HS_Asym_2} K(\beta,\beta) + |K(\beta,-\beta)|= \frac{2\pi^2 {\beta}^2}{(2\pi^2 {\beta}^2 - 1)} \left(1 - \frac{\sin2\pi \beta}{2\pi\beta}\right) + 2 d(\beta) h(\beta). \end{align} Observe that $c(\beta) g(\pm\beta) = O(\beta^{-2})$ and that $d(\beta) h(\pm\beta) = O(\beta^{-2})$. We then have two cases to consider, according to the size of the term $\frac{\sin 2\pi \beta}{2\pi\beta}$ relative to these errors. First, if for large $\beta$ we have $$ \left|\frac{\sin2\pi \beta}{2\pi\beta}\right| = O\left(\beta^{-2}\right),$$ then the asymptotic on the right-hand side of \eqref{Intro_HS_Thm2_eq1} is trivially true. Otherwise, $$\beta^{-2} = O\left(\left|\frac{\sin 2\pi \beta}{2\pi\beta}\right|\right),$$ and the term $\frac{\sin2\pi \beta}{2\pi\beta}$ dominates the $O(\beta^{-2})$ errors above. Hence $K(\beta, -\beta)$ will have the sign of $\frac{\sin2\pi \beta}{2\pi\beta}$, and we use \eqref{HS_Asym_1} and \eqref{HS_Asym_2} to get the desired asymptotic. This concludes the proof. \subsection{The one-delta problem} Our methods can also be used to recover the original result of Montgomery and Taylor \cite{M2} concerning the optimal majorant for the delta function with respect to the pair correlation measure. This problem was also solved, in a more general context, by Iwaniec, Luo and Sarnak \cite[Appendix A]{ILS}. \begin{corollary}[cf.
\cite{M2}]\label{cor_Mont_Taylor} Let $R$ be a nonnegative admissible function such that $R(0) \geq 1$. Then \begin{align}\label{eq_Mont_Taylor} \begin{split} M(R) & \geq \frac{1}{ K(0, 0)} = 2^{-\frac12} \,\cot\big(2^{-\frac12}\big) - \frac{1}{2} = 0.3274992 \ldots \end{split} \end{align} Equality in \eqref{eq_Mont_Taylor} is attained if and only if $$R(z) = \frac{1}{(1- 2\pi^2 z^2)^2}\left( \cos (\pi z) - 2^{\frac12}\pi z \cot \big(2^{-\frac12}\big) \sin(\pi z) \right)^2.$$ \end{corollary} \begin{proof} As in the proof of Theorem \ref{Intro_HS_Thm5} we may write $R(z) = S(z)\,\overline{S(\overline{z})}$, where $S \in \mathcal{H} = \mathcal{B}_2(\pi,\mu)$. Using the Cauchy-Schwarz inequality, we get \begin{align*} 1 & \leq |S(0)|^2 =\big|\langle S, K(0, \cdot)\rangle_{\mathcal{H}} \big|^2 \leq \|S\|^2_{\mathcal{H}} \,\|K(0, \cdot)\|^2_{\mathcal{H}} = \|S\|^2_{\mathcal{H}} \,K(0,0). \end{align*} Therefore, it follows that $$\int_{-\infty}^{\infty} R(x) \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\} \,\text{\rm d}x = \int_{-\infty}^{\infty} |S(x)|^2 \left\{ 1 - \left(\frac{\sin \pi x}{\pi x}\right)^2\right\} \,\text{\rm d}x = \|S\|^2_{\mathcal{H}} \geq \frac{1}{K(0,0)},$$ and equality holds if and only if $S(z) = c \,K(0, z)$, where $c$ is a complex constant of absolute value $K(0,0)^{-1}$.
Using the explicit representation for $K$ given by Theorem \ref{Intro_HS_Thm1_RP} we get \begin{align*} R(z) &= S(z)\,\overline{S(\overline{z})}= \frac{1}{K(0,0)^2} K(0, z)^2 \\ & = \frac{1}{K(0,0)^2}\left( \frac{1}{\big(\cos\big(2^{-\frac12}\big) - 2^{-\frac12} \sin\big(2^{-\frac12}\big)\big)} \frac{2^{\frac12}\sin\big(2^{-\frac12}\big) \cos(\pi z) - 2\pi z \cos\big(2^{-\frac12}\big) \sin(\pi z)}{(1- 2\pi^2 z^2)}\right)^2\\ & = \left( \frac{1}{2^{\frac12}\sin\big(2^{-\frac12}\big)} \frac{2^{\frac12}\sin\big(2^{-\frac12}\big) \cos(\pi z) - 2\pi z \cos\big(2^{-\frac12}\big) \sin(\pi z)}{(1- 2\pi^2 z^2)}\right)^2\\ & = \frac{1}{(1- 2\pi^2 z^2)^2}\left( \cos (\pi z) - 2^{\frac12}\pi z \cot \big(2^{-\frac12}\big) \sin(\pi z) \right)^2. \end{align*} \end{proof} \noindent {\sc Remark:} It follows from \eqref{Mont_formula}, \eqref{def-of-M}, and \eqref{eq_Mont_Taylor} that $$N^*(T) \leq \left(2^{-\frac12} \,\cot\big(2^{-\frac12}\big) + \frac{1}{2} + o(1) \right) N(T).$$ This inequality was previously proved by Montgomery and Taylor \cite{M2}, and can be used in the place of \eqref{Intro_4/3_bound} to give a slightly sharper version of our Corollary \ref{Mont}. \section{Interpolation and orthogonality in de Branges spaces}\label{Sec_de_Branges_spaces} In this section we prove Theorem \ref{Intro_thm5_super}. \new{Recall that $E$ is a Hermite-Biehler function that satisfies properties (P1) - (P4)}, and we assume without loss of generality that $E(0)>0$. \subsection{Preliminary lemmas} We start by proving the following result. \begin{lemma}\label{Sec15_Lem15} Let $\beta \notin \{a_k\} \cup \{b_k\}$ and consider the Hermite-Biehler function $E_{\beta}$ defined in \eqref{Intro_Def_E_beta_2}. \begin{enumerate} \item[(i)] The function $E_{\beta}$ satisfies properties {\rm (P1) - (P4)}. \smallskip \item[(ii)] If $a_k < \beta < b_{k}$, for $k \geq 1$, then $B_{\beta}(0) = B_{\beta}(\beta) = 0$. 
\smallskip \item[(iii)] If $b_k < \beta < a_{k+1}$, for $k \geq 0$, then $A_{\beta}(\beta) =0$. If $k \geq 1$, then there exists $\xi \in (0,\beta)$ such that $A_{\beta}(\xi) = 0$. \end{enumerate} \end{lemma} \begin{proof}[Proof of \textup{(i)}] Properties (P1), (P2) and (P3) are clear. A direct computation shows that \begin{equation}\label{Sec5_Def_A_beta} A_{\beta}(z) = \gamma_{\beta}A(z) - z B(z) \end{equation} and \begin{equation}\label{Sec5_Def_B_beta} B_{\beta}(z) = z A(z) + \gamma_{\beta}B(z). \end{equation} Suppose $A_{\beta} \in \mathcal{H}(E_{\beta})$. Then $A_{\beta}(x)\,|E_{\beta}(x)|^{-1} \in L^2(\mathbb{R})$. Observe that \begin{equation*} \frac{A_{\beta}(x)}{E_{\beta}(x)} = \gamma_{\beta} \frac{A(x)}{ (\gamma_{\beta}- ix)E(x)} - \frac{x}{(\gamma_{\beta} - ix)}\frac{B(x)}{E(x)} = -i \frac{B(x)}{E(x)} + O(x^{-1}), \end{equation*} for large $x$. This would imply that $B \in \H(E)$, a contradiction. In an analogous manner, we show that $B_{\beta} \notin \mathcal{H}(E_{\beta})$. This establishes (P4). \end{proof} \begin{proof}[Proof of \textup{(ii)}] Since $E_{\beta}$ satisfies (P3), the function $B_{\beta}$ is odd and thus $B_{\beta}(0) = 0$. The fact that $B_{\beta}(\beta) = 0$ follows from \eqref{Sec5_Def_B_beta} and the definition of $\gamma_{\beta}$. \end{proof} \begin{proof}[Proof of \textup{(iii)}] The fact that $A_{\beta}(\beta) = 0$ follows from \eqref{Sec5_Def_A_beta} and the definition of $\gamma_{\beta}$. Also, from \eqref{Sec5_Def_A_beta}, a number $\xi$ is a zero of $A_{\beta}$ if and only if \begin{equation}\label{Sec5_def_xi_sol} \xi = \gamma_{\beta} \frac{A(\xi)}{B(\xi)}. \end{equation} Since $B(x) >0$ in $(0,a_1]$ with $B(0) = 0$, and $A(x) >0$ in $[0, a_1)$ with $A(a_1) =0$, the function $x \mapsto A(x)/B(x)$ assumes every positive real value in the interval $(0,a_1)$. In particular, there exists $\xi \in (0,a_1)$ satisfying \eqref{Sec5_def_xi_sol}; since $k \geq 1$ we have $\beta > b_k > a_1$, and hence $\xi \in (0,\beta)$.
\end{proof} The importance of condition (P4) lies in the fact that the sets $\{K(\xi, \cdot); \ A(\xi) = 0\}$ and $\{K(\xi, \cdot); \ B(\xi) = 0\}$ are orthogonal bases for $\H(E)$ (see \cite[Theorem 22]{B}). Using this fact, we establish four suitable quadrature formulas below. These are the key elements to prove the optimality of our approximations. \begin{lemma}\label{Sec5_Lem16} Let $F$ be an entire function of exponential type at most $2\tau(E)$ such that $F(x) \geq 0$ for all $x \in \mathbb{R}$ and \begin{equation}\label{Sec5_cond_M_E_finite} M_E(F) = \int_{-\infty}^{\infty} F(x) \, |E(x)|^{-2}\,\text{\rm d}x < \infty. \end{equation} \begin{enumerate} \item[(i)] We have \begin{equation}\label{Sec5_Lem16_1} M_E(F) = \sum_{A(\xi) = 0} \frac{F(\xi)}{K(\xi, \xi)} = \sum_{B(\xi) = 0} \frac{F(\xi)}{K(\xi, \xi)}. \end{equation} \item[(ii)] If $\beta \notin \{a_k\} \cup \{b_k\}$, then we have \begin{equation}\label{Sec5_Lem16_2} M_E(F) = \sum_{A_{\beta}(\xi) = 0} F(\xi) \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)} = \sum_{B_{\beta}(\xi) = 0} F(\xi) \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}. \end{equation} \end{enumerate} \end{lemma} \begin{proof}[Proof of \textup{(i)}] By \cite[Lemma 14]{CL2} (this is the corresponding version of Krein's decomposition \cite[p. 154]{A} for the de Branges space $\H(E)$) we can write $F(z) = U(z)U^*(z)$ where $U \in \H(E)$. Part (i) then follows from the orthogonal basis given by \cite[Theorem 22]{B} \begin{align*} M_E(F) = \int_{-\infty}^{\infty} |U(x)|^2 \, |E(x)|^{-2}\,\text{\rm d}x = \sum_{A(\xi) = 0} \frac{|U(\xi)|^2}{K(\xi, \xi)} = \sum_{A(\xi) = 0} \frac{F(\xi)}{K(\xi, \xi)}. \end{align*} A similar representation holds at the zeros of $B$. \end{proof} \begin{proof}[Proof of \textup{(ii)}] We now consider $F_{\beta}(z):= F(z)(z^2 + \gamma_{\beta}^2)$. This is also an entire function of exponential type at most $2\tau(E)$ which is nonnegative on the real axis. 
Since $|E_{\beta}(x)|^2 = |E(x)|^2 (x^2 + \gamma_{\beta}^2)$, we see from \eqref{Sec5_cond_M_E_finite} that $M_E(F) = M_{E_{\beta}}(F_{\beta}) < \infty$. We then write $F_{\beta}(z) = U_{\beta}(z)U^*_{\beta}(z)$ with $U_{\beta} \in \H(E_{\beta})$ and thus, by \cite[Theorem 22]{B}, we have \begin{align*} M_E(F) = M_{E_{\beta}}(F_{\beta}) = \int_{-\infty}^{\infty} |U_{\beta}(x)|^2 \, |E_{\beta}(x)|^{-2}\,\text{\rm d}x = \sum_{A_{\beta}(\xi) = 0} \frac{|U_{\beta}(\xi)|^2}{K_{\beta}(\xi, \xi)} = \sum_{A_{\beta}(\xi) = 0} F(\xi) \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}. \end{align*} A similar representation holds at the zeros of $B_{\beta}$. This completes the proof of the lemma. \end{proof} \subsection{Proof of Theorem \ref{Intro_thm5_super}} This proof is divided into three distinct qualitative regimes. \subsubsection{Case 1: Assume $\beta \notin \{a_k\} \cup \{b_k\} \cup (0,a_1)$} Since $E_{\beta}$ is a Hermite-Biehler function of bounded type (property (P1)), the functions $A_{\beta}$ and $B_{\beta}$ belong to the Laguerre-P\'{o}lya class (this follows from \cite[Problem 34]{B} or \cite[Lemma 13]{CL2}), i.e. they are uniform limits on compact sets of polynomials with only real zeros. Moreover, from property (P3) we have that $A_{\beta}^2$ and $B_{\beta}^2$ are even functions. \smallskip We consider now the case $b_k < \beta < a_{k+1}$ ($k \geq 1$), in which the nodes of interpolation are the zeros of $A_{\beta}$ (the proof in the case $a_k < \beta < b_{k}$ proceeds along similar lines, using the zeros of $B_{\beta}$ as interpolation nodes). Since $A_{\beta}^2$ is an even Laguerre-P\'{o}lya function with $A_{\beta}^2(\beta) = 0$ and with at least two other zeros, counted with multiplicity, in the interval $[0,\beta)$ (by Lemma \ref{Sec15_Lem15} (iii)), the hypotheses of \cite[Theorem 3.14]{Lit} are fulfilled.
Hence there exists a pair of real entire functions $R_{\beta,E}^{\pm}$ such that \begin{equation*} R_{\beta,E}^-(x) \leq \chi_{[-\beta, \beta]}(x) \leq R_{\beta,E}^+(x) \end{equation*} for all $x \in \mathbb{R}$, with \begin{equation}\label{Sec5_int_1} R_{\beta,E}^{\pm}(\xi) = \chi_{[-\beta, \beta]}(\xi) \end{equation} for all $\xi \neq \pm\beta$ such that $A_{\beta}(\xi) =0$, and \begin{equation}\label{Sec5_int_2} R_{\beta,E}^{+}(\pm\beta)=1 \quad \text{and} \quad R_{\beta,E}^{-}(\pm\beta)=0. \end{equation} Moreover, the functions $R_{\beta,E}^{\pm}$ satisfy the estimate \begin{equation*} \big|R_{\beta,E}^{\pm}(z)\big| \ll \frac{|A_{\beta}^2(z)|}{1 + |{\rm Re}\, (z)|^4} \end{equation*} for all $z \in \mathbb{C}$. This shows, in particular, that the functions $R_{\beta,E}^{\pm}$ have exponential type at most $2\tau(E)$ and that $M_E(R_{\beta,E}^{\pm}) < \infty$. \smallskip We show next that these functions are extremal. First we consider the case of the majorant. Let $R_{\beta}^+$ be an entire function of exponential type at most $2\tau(E)$ such that \begin{equation*} R_{\beta}^+(x) \geq \chi_{[-\beta, \beta]}(x) \end{equation*} for all $x \in \mathbb{R}$. From \eqref{Sec5_Lem16_2} we obtain \begin{align*} M_E(R_{\beta}^+) = \sum_{A_{\beta}(\xi) = 0} R_{\beta}^+(\xi) \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)} \geq \sum_{\stackrel{A_{\beta}(\xi) = 0}{|\xi| \leq \beta}} \frac{\xi^2 + \gamma_{\beta}^2}{K_{\beta}(\xi, \xi)}, \end{align*} and, by \eqref{Sec5_int_1} and \eqref{Sec5_int_2}, we have equality if $R_{\beta}^+ = R_{\beta,E}^+$. The minorant case follows analogously by writing $R_{\beta}^-$ as a difference of two nonnegative functions, e.g. writing $R_{\beta}^- = R_{\beta,E}^+ - (R_{\beta,E}^+ - R_{\beta}^-)$, and applying \eqref{Sec5_Lem16_2} to each nonnegative function separately. \subsubsection{Case 2: Assume $\beta \in \{a_k\} \cup \{b_k\}$} This case follows from the work of Holt and Vaaler \cite[Theorem 15]{HV}. 
In that paper, they construct extremal majorants and minorants for $\sgn(x - \beta)$ that interpolate this function at the zeros of $A$ (if $\beta \in \{a_k\}$) or $B$ (if $\beta \in \{b_k\}$). Using the identity \begin{equation*} \tfrac{1}{2} \big\{ \sgn (x + \beta) + \sgn(-x + \beta)\big\} = \chi_{[-\beta, \beta]}(x), \end{equation*} the Holt-Vaaler construction gives us majorants and minorants for $\chi_{[-\beta, \beta]}$ that interpolate this function at the right nodes (the zeros of $A$ or $B$). The optimality now follows from \eqref{Sec5_Lem16_1} as in the previous case. We note that this is the analogue of the Beurling-Selberg construction for the Paley-Wiener case. \subsubsection{Case 3: Assume $\beta \in (0,a_1)$} \new{A special case of this result was shown in \cite[Lemma 10]{DL} using an explicit Fourier expansion and the Cauchy-Schwarz inequality. We prove first that the zero function is an optimal minorant.} We start by noticing, from \eqref{Sec5_Def_B_beta}, that the smallest positive zero of $B_\beta$ is greater than $a_1$, since both $A$ and $B$ are positive in $(0,a_1)$. Since the zeros of $A_\beta$ and $B_\beta$ are simple and interlace, and since $A_\beta(\beta)=0$, we conclude that $\beta$ is the only zero of $A_\beta$ in the interval $(0,a_1)$. Since the zero function interpolates $\chi_{[-\beta, \beta]}$ at all the zeros of $A$ (or $A_{\beta}$), it must be an extremal function by the quadrature formula \eqref{Sec5_Lem16_1} (or \eqref{Sec5_Lem16_2}). \smallskip \new{To find an extremal majorant, define the entire function $Q_\beta$ by} \begin{equation} Q_{\beta}(z) = C_{\beta}\frac{A_{\beta}(z)}{(\beta^2 - z^2)}, \end{equation} where $C_{\beta}$ is a constant chosen so that $Q_{\beta}(\pm\beta) = 1$. Note that $Q_{\beta}$ is an even Laguerre-P\'{o}lya function, with no zeros in $[-\beta, \beta]$.
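For concreteness, we record why the quotient defining $Q_{\beta}$ is entire and what the normalization forces (the explicit value of $C_{\beta}$ is not needed in what follows). Since $A_{\beta}$ is even, by property (P3) for $E_{\beta}$, and has simple zeros at $\pm\beta$, the singularities of $A_{\beta}(z)/(\beta^2 - z^2)$ at $z = \pm\beta$ are removable. Evaluating the limit as $z \to \pm\beta$ by L'H\^{o}pital's rule, the condition $Q_{\beta}(\pm\beta) = 1$ amounts to $$C_{\beta} = -\frac{2\beta}{A_{\beta}'(\beta)}.$$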
\new{We claim that $Q_{\beta}$ is monotone in the intervals $[-\beta,0]$ and $[0, \beta]$.} To see this, let $\pm x_{\beta}$ be the smallest zeros (in absolute value) of $Q_{\beta}$ (recall that these zeros are simple). We can then regard $Q_{\beta}$ as a uniform limit in $[-x_{\beta}, x_{\beta}]$ of even polynomials $P_k$ with only real and simple zeros. We can choose the smallest zeros (in absolute value) of $P_k$ to be $\pm x_{\beta}$. Therefore, $P_k'$ has only one simple zero in $[-x_{\beta}, x_{\beta}]$, which must be at the origin since $P_k$ is even. Moreover, by Rolle's theorem all zeros of $P_k'$ are real (and simple). Since $P_k'(x) \to Q_{\beta}'(x)$ uniformly in $[-x_{\beta}, x_{\beta}]$, the odd function $Q_{\beta}'$ has only one zero in the interval $[-x_{\beta}, x_{\beta}]$, which must be at the origin. \new{This implies that $|Q_{\beta}|$ is monotone increasing in $[-x_{\beta},0]$ and monotone decreasing in $[0, x_{\beta}]$. In particular, we have} $$R_{\beta,E}^+(x):=Q_{\beta}^2(x) \geq \chi_{[-\beta, \beta]}(x)$$ for all $x \in \mathbb{R}$, and this is our desired majorant of exponential type at most $2 \tau(E)$. The optimality now follows from \eqref{Sec5_Lem16_2} since $R_{\beta,E}^+ =Q_{\beta}^2$ interpolates $\chi_{[-\beta, \beta]}$ at the zeros of $A_{\beta}$. \subsubsection{Uniqueness and relation to the two-delta problem} We have constructed above extremal functions $ R_{\beta,E}^{\pm}$ that interpolate $\chi_{[-\beta, \beta]}$ at the nodes of a certain quadrature (either $A$, $B$, $A_{\beta}$ or $B_{\beta}$). The quadrature is chosen in such a way to have $\pm \beta$ as nodes of interpolation. At these points we have $R_{\beta,E}^+(\pm \beta) = 1$ and $R_{\beta,E}^-(\pm \beta) = 0$. Therefore, the difference $R := R_{\beta,E}^+ - R_{\beta,E}^-$ is a majorant of the two-delta function $\chi_{\{\pm\beta\}}$ that interpolates $\chi_{\{\pm\beta\}}$ at the nodes of the same quadrature. 
From Lemma \ref{Sec5_Lem16} this difference must be an extremal function for the two-delta problem \eqref{two-Delta_E}. In particular we obtain \begin{equation*} \varDelta_E(\beta) = \varLambda_E^+(\beta) - \varLambda_E^-(\beta). \end{equation*} From property (P3) we have that $A$ is even and $B$ is odd. Thus, for $\beta >0$ we have \begin{equation}\label{thm5_condition_K_beta} K(\beta, -\beta) = \frac{B(-\beta)A(\beta)- A(-\beta)B(\beta)}{-2\pi \beta} = \frac{A(\beta)B(\beta)}{\pi \beta}. \end{equation} In the generic cases (iii) and (iv) we have $\beta \notin \{a_k\} \cup \{b_k\}$, and \eqref{thm5_condition_K_beta} implies that $K(\beta, -\beta) \neq 0$. In this situation, from Theorem \ref{Intro_HS_Thm5}, the extremal solution of the two-delta problem is unique, and therefore the pair of extremal functions $ R_{\beta,E}^{\pm}$ must also be unique. This concludes the proof. \section{Small gaps between the zeros of $\zeta(s)$} \label{sg} \smallskip The goal of this section is to prove Theorem \ref{small gaps}. Our proof relies on the following estimate for Montgomery's function $F(\alpha) = F(\alpha, T)$ defined in \eqref{Mont_function_sec2}. \begin{lemma} \label{Goldston ineq} Assume RH and let $A>1$ be fixed. Then, as $T \to \infty$, we have \[ \int_1^\xi \big( \xi-\alpha \big) \, F(\alpha) \, \text{\rm d}\alpha \ \ge \ \frac{\xi^2}{2} - \xi + \frac{1}{3} + o(1) \] uniformly for $1\le \xi \le A$. \end{lemma} \begin{proof} This inequality is implicit in the work of Goldston \cite[Section 7]{Gold}, but we sketch a proof for completeness. We use \eqref{convolution}, \eqref{F alpha}, and the Fourier transform pair \[ R_\xi(x) = \left( \frac{\sin \pi \xi x}{\pi \xi x} \right)^{\!2} \quad \text{and} \quad \widehat{R}_\xi(\alpha) = \frac{1}{\xi^2} \max\big( \xi\!-\!|\alpha|,0 \big). 
\] Observe that \begin{equation} \label{step1} \begin{split} 1+o(1) &\le \frac{2\pi}{T\log T} \, N^*(T) \\ & \le \frac{2 \pi}{T \log T} \sum_{0<\gamma,\gamma' \le T} R_\xi\!\left( (\gamma'\!-\!\gamma) \frac{\log T}{2\pi} \right) w(\gamma'\!-\!\gamma) \\ &= \int_{-\xi}^\xi \widehat{R}_\xi (\alpha) F(\alpha) \, \text{\rm d}\alpha, \end{split} \end{equation} where the last step follows from the convolution formula in \eqref{convolution}. Using the fact that the integrand is even and applying \eqref{F alpha}, it follows that \begin{equation*} \begin{split} \int_{-\xi}^\xi \widehat{R}_\xi (\alpha) F(\alpha) \, \text{\rm d}\alpha & = \frac{1}{\xi} + \frac{2}{\xi^2} \int_0^1 \big( \xi\!-\!\alpha \big)\alpha \, \text{\rm d}\alpha + \frac{2}{\xi^2} \int_1^\xi \big( \xi\!-\!\alpha \big) F(\alpha) \, \text{\rm d}\alpha + o(1) \\ & = \frac{2}{\xi} - \frac{2}{3\xi^2} + \frac{2}{\xi^2} \int_1^\xi \big( \xi\!-\!\alpha \big) F(\alpha) \, \text{\rm d}\alpha + o(1) \end{split} \end{equation*} uniformly for $1\le \xi \le A$. Inserting this estimate into \eqref{step1} and rearranging terms, the lemma follows. \end{proof} \subsection{Proof of Theorem \ref{small gaps}} We modify an argument of Goldston, Gonek, \"{O}zl\"{u}k and Snyder in \cite{GGOS}, which relied on the Fourier transform pair \[ G(x) = \left( \frac{\sin \pi x}{\pi x} \right)^2 \left(\frac{1}{1\!-\!x^2} \right) \] and \[ \widehat{G}(\alpha) = \left\{ \begin{array}{cl} 1-|\alpha|+\frac{\sin 2 \pi|\alpha|}{2\pi}, & {\rm if} \ \ |\alpha|\le 1,\\ 0, & {\rm if} \ \ |\alpha|>1.\\ \end{array} \right. \] Note that $G(x)$ is a minorant for $\chi_{[-1,1]}$ with (nonnegative) Fourier transform supported in $[-1,1]$.
Therefore, $G(x/\beta)$ is a minorant for $\chi_{[-\beta,\beta]}$, and it follows from \eqref{convolution} that \begin{equation*} \begin{split} N^*(T) + 2N(T,\beta) \ &\ge \sum_{0<\gamma,\gamma' \le T} G\!\left( (\gamma'\!-\!\gamma) \frac{ \log T}{2\pi \beta} \right) w(\gamma'\!-\!\gamma) \\ & = \left( \frac{T \log T}{2\pi} \right) \int_{-1/\beta}^{1/\beta} \beta \widehat{G}(\beta \alpha) F(\alpha) \, \text{\rm d}\alpha. \end{split} \end{equation*} Using \eqref{F alpha}, the assumption in \eqref{N star}, and the fact that the integrand is even we have \begin{equation}\label{1th3} N(T,\beta) \ge \left( \frac{1}{2} + o(1) \right) \frac{T \log T}{2\pi} \left( \beta - 1 + 2\beta\int_0^1 \widehat{G}(\beta \alpha) \alpha \, \text{\rm d}\alpha + 2\beta\int_1^{1/\beta} \widehat{G}(\beta \alpha) F(\alpha) \, \text{\rm d}\alpha \right). \end{equation} Since $\widehat{G}(\alpha) \ge 0$ for all $\alpha$, Goldston, Gonek, \"{O}zl\"{u}k and Snyder observed that \[ N(T,\beta) \ge \left( \frac{1}{2} + o(1) \right) \frac{T \log T}{2\pi} \left( \beta - 1 + 2\beta\int_0^1 \widehat{G}(\beta \alpha) \alpha \, \text{\rm d}\alpha \right), \] and then used a numerical calculation to show that $N(T,0.607286) \gg N(T)$ under the assumptions of Theorem \ref{small gaps}. In order to improve their result, we use Lemma \ref{Goldston ineq} to derive a lower bound for the second integral on the right-hand side of \eqref{1th3}. \smallskip Following Goldston \cite{Gold}, we define the function \begin{equation*}\label{I beta} I(\xi) = \int_1^\xi (\xi\!-\!\alpha) F(\alpha) \, \text{\rm d}\alpha \end{equation*} and we observe that \[ I'(\xi) = \int_1^\xi F(\alpha) \, \text{\rm d}\alpha \quad \text{ and } \quad I''(\xi) = F(\xi). \] Note that Lemma \ref{Goldston ineq} provides a nontrivial lower bound for $I(\xi)$ as long as $\xi \ge 1 + 1/\sqrt{3}$. 
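The GGOS positivity computation is straightforward to reproduce. As an illustration (not part of the argument; NumPy assumed), one can evaluate their lower bound $\beta - 1 + 2\beta\int_0^1 \widehat{G}(\beta \alpha)\, \alpha \, \text{\rm d}\alpha$ with a midpoint rule and observe the sign change at $\beta \approx 0.607286$:

```python
import numpy as np

def ggos_lower_bound(beta, n=200_000):
    """beta - 1 + 2*beta*int_0^1 Ghat(beta*alpha)*alpha d(alpha), by the
    midpoint rule; for 0 < beta <= 1 we have beta*alpha <= 1, so
    Ghat(beta*alpha) = 1 - beta*alpha + sin(2*pi*beta*alpha)/(2*pi)."""
    a = (np.arange(n) + 0.5) / n
    ghat = 1 - beta * a + np.sin(2 * np.pi * beta * a) / (2 * np.pi)
    return beta - 1 + 2 * beta * np.sum(ghat * a) / n

# the sign change brackets the GGOS threshold beta ~ 0.607286
assert ggos_lower_bound(0.61) > 0
assert ggos_lower_bound(0.60) < 0
```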
Integrating by parts twice, it follows that \begin{equation}\label{2th3} \begin{split} \int_1^{1/\beta} \widehat{G}(\beta \alpha) F(\alpha) \, \text{\rm d}\alpha & = \int_1^{1/\beta} \widehat{G}(\beta \alpha) I''(\alpha) \, \text{\rm d}\alpha = \beta^2 \int_1^{1/\beta} \widehat{G}''(\beta \alpha) I(\alpha) \, \text{\rm d}\alpha. \end{split} \end{equation} By definition, for $\alpha \ge 0$, we have $\widehat{G}''(\beta \alpha) = -2\pi \sin(2\pi \beta \alpha)$ which is non-negative for $1\le \alpha \le 1/\beta$ if $1/2 \le \beta \le 1$. Therefore \eqref{2th3} and Lemma \ref{Goldston ineq} imply that \[ \int_1^{1/\beta} \widehat{G}(\beta \alpha) F(\alpha) \, \text{\rm d}\alpha \ge -2\pi \beta^2 \int_{1+1/\sqrt{3}}^{1/\beta} \sin(2\pi \beta \alpha) \left(\frac{\alpha^2}{2} \!-\! \alpha \!+\! \frac{1}{3} \!+\! o(1) \right) \text{\rm d}\alpha \] for $1/2 \le \beta \le 1$. Inserting this estimate into \eqref{1th3}, it follows that \[ N(T,\beta) \ge \left( \frac{1}{2} \!+\! o(1) \right) \frac{T \log T}{2\pi} \left( \beta \!-\! 1 \!+\! 2\beta\int_0^1 \widehat{G}(\beta \alpha) \alpha \, \text{\rm d}\alpha - 4\pi \beta^3\int_{1+1/\sqrt{3}}^{1/\beta} \sin(2\pi \beta \alpha) \left(\frac{\alpha^2}{2}\! - \! \alpha \! + \! \frac{1}{3} \right) \, \text{\rm d}\alpha \right). \] A straightforward numerical calculation shows that the right-hand side is positive if $\beta \ge 0.606894$. \section{$q$-analogues of Theorem \ref{Intro_thm1_Gallagher} and Theorem \ref{Intro_thm2_Gallagher_2} }\label{Sec_Q_analogue} As was suggested in Montgomery's original paper \cite{M1}, it is interesting to study the pair correlation of zeros of the family of Dirichlet $L$-functions in $q$-aspect. Montgomery had in mind improving the analogue of \eqref{F alpha} for this family of $L$-functions (see \cite{CLLR, Ozluk}), and so it is not surprising that the analogue of Theorems \ref{Intro_thm1_Gallagher} and \ref{Intro_thm2_Gallagher_2} can also be improved. 
In this section, we indicate such an improvement. In order to state this result, we need to introduce some notation. All sums over the zeros of Dirichlet $L$-functions are counted with multiplicity and $\varepsilon$ denotes an arbitrarily small positive constant that may vary from line to line. \smallskip Let $W$ be a smooth function, compactly supported in $(1,2)$. Let $\Phi$ be a function which is real and compactly supported in $(a,b)$ with $0< a< b$. As usual, define its Mellin transform $\widetilde{\Phi}$ by $$ \widetilde{\Phi}(s) = \int_0^\infty \Phi(x)\,x^{s-1} \> \text{\rm d}x. $$ Suppose that $\Phi(x) = \Phi(x^{-1})$ for all $x \in \mathbb{R}\setminus\{0\}$ , $\widetilde{\Phi}(it) \geq 0$ for all $t \in \mathbb{R}$, and that $\widetilde{\Phi}(it) \ll |t|^{-2}$. For example, we may choose \[ \widetilde{\Phi} (s) = \left( \frac{e^{s} \!-\! e^{-s} }{2 s} \right)^2 \] so that $ \widetilde{\Phi} ( i t) = (\sin t/ t)^2 \geq 0$ and the function \begin{align*} \Phi(x) & = \begin{cases} \frac12 - \frac14 \log x, & \text{ for } 1 \leq x \leq e^2, \\ \frac12+ \frac14 \log x, & \text{ for } e^{-2} \leq x \leq 1, \\ 0, & \text{ otherwise}, \end{cases} \end{align*} is real and compactly supported in $(a,b)$ for some $a,b > 0$. We define the $q$-analogue of $N(T, \beta)$ as \[ N_{\Phi}(Q, \beta) \, := \, \sum_{q} \frac{W(q/Q)}{\varphi(q)} {\sideset{}{^\star}\sum_{\chi \,(\text{mod }{q})}} \left\{ \sum_{\substack{\gamma_{\chi}, \gamma'_{\chi} \\ 0 < \gamma_{\chi} - \gamma'_{\chi} \leq \frac{2\pi \beta}{\log Q}}} \!\!\!\!\!\!\! \widetilde{\Phi} (i\gamma_{\chi})\widetilde{\Phi} (i\gamma'_{\chi}) \right\}. \] Here the superscript $\star$ indicates the sum is restricted to primitive characters $\chi \,(\text{mod }{q})$, and the inner sum on the right-hand side runs over two sets of nontrivial zeros of the Dirichlet $L$-function $L(s,\chi)$ with ordinates $\gamma_\chi$ and $\gamma'_\chi$, respectively. 
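The closed form claimed for the example pair $(\Phi, \widetilde{\Phi})$ above can be confirmed numerically. A quick sketch (illustrative only; NumPy assumed) evaluates the Mellin transform at $s = it$ after the substitution $x = e^u$:

```python
import numpy as np

def mellin_it(t, n=100_001):
    """Phi-tilde(it) = int_0^inf Phi(x) x^{it-1} dx for the example Phi.
    Substituting x = e^u gives int_{-2}^{2} (1/2 - |u|/4) e^{itu} du,
    which is real because the integrand is even in u."""
    u = np.linspace(-2.0, 2.0, n)
    f = (0.5 - 0.25 * np.abs(u)) * np.cos(t * u)
    du = u[1] - u[0]
    return du * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

# agreement with the claimed closed form (sin t / t)^2 >= 0
for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(mellin_it(t) - (np.sin(t) / t) ** 2) < 1e-6
```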
Similarly, we define the $q$-analogues of $N(T)$ and $N^*(T)$ by \[ N_{\Phi}(Q) := \sum_{q} \frac{W(q/Q)}{\varphi(q)}{\sideset{}{^\star}\sum_{\chi \,(\text{mod }{q})}} \sum_{\gamma_{\chi}} |\widetilde{\Phi} (i\gamma_{\chi})|^2 \] and \[ N_{\Phi}^*(Q) := \sum_{q} \frac{W(q/Q)}{\varphi(q)}{\sideset{}{^\star}\sum_{\chi \,(\text{mod }{q})}} \sum_{\gamma_{\chi}} |\widetilde{\Phi} (i\gamma_{\chi})|^2 \, m_{\gamma_\chi}, \] respectively. Here the superscript $\star$ is as above, and $m_{\gamma_\chi}$ denotes the multiplicity of a zero of $L(s,\chi)$ with ordinate $\gamma_\chi$. Since it is generally believed that the zeros of primitive Dirichlet $L$-functions are all simple, we expect that $N^*_\Phi(Q)=N_\Phi(Q)$ for all $Q>0$. Moreover, analogous to \eqref{PCC2}, we expect that \[ N_\Phi(Q,\beta) \sim N_{\Phi}(Q) \left\{ \beta-\frac{1}{2}+\frac{1}{2\pi^2 \beta} + O\Big(\frac{1}{\beta^2}\Big) \right\} \] as $\beta \to \infty$ sufficiently slowly (when $Q$ is large). In support of this, we prove the following stronger version of Theorems \ref{Intro_thm1_Gallagher} and \ref{Intro_thm2_Gallagher_2} for the zeros of primitive Dirichlet $L$-functions. \begin{theorem} \label{q theorem} Assume the generalized Riemann hypothesis for Dirichlet $L$-functions, and let $\varepsilon>0$ be arbitrary. Then, for any $\beta>0$, we have \[ \limsup_{Q\to\infty} \frac{N_\Phi(Q,\beta)}{N_\Phi(Q)} \le \beta - \frac 14 + \varepsilon + \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right). \] If, in addition, $N_{\Phi}^*(Q) \sim N_{\Phi}(Q)$ as $Q \to \infty$, then we also have \[ \liminf_{Q\to\infty} \frac{N_\Phi(Q,\beta)}{N_\Phi(Q)} \ge \beta - \frac 34 -\varepsilon+ \frac{1}{2 \pi^2 \beta} + O\!\left(\frac{1}{\beta^2 }\right). 
\] \end{theorem} Define the $q$-analogue of Montgomery's function $F(\alpha)$ by \[ F_{\Phi}(\alpha) := F_{\Phi}(\alpha,Q) = \frac{1}{N_\Phi (Q)} \sum_{q} \frac{W(q/Q)}{\varphi(q)} {\sideset{}{^\star}\sum_{\chi \,(\text{mod }{q})}} \left| \sum_{\gamma_\chi } \widetilde{\Phi} \left( i\gamma_\chi \right)Q^{i \alpha \gamma_\chi }\right|^2. \] Modifying the asymptotic large sieve technique in \cite{CIS1}, Chandee, Lee, Liu and Radziwi\l\l \ have evaluated $F_{\Phi} (\alpha)$ when $|\alpha| < 2.$ \begin{lemma}\label{thm:1} \textup{(cf. \ }\cite[Theorem 2]{CLLR}\textup{)} Assume the generalized Riemann hypothesis for Dirichlet $L$-functions. Then, for any $\varepsilon > 0$, the estimate \begin{align*} F_\Phi (\alpha) =& \, \big(1+o(1)\big) \left( f(\alpha) + \Phi\big(Q^{-|\alpha|} \big)^2 \log Q \left( \frac{1}{ 2 \pi } \int_{-\infty}^\infty \left| \widetilde{\Phi} ( it ) \right|^2 \text{\rm d}t \right)^{-1} \right) \\ & \quad + O\Big( \Phi\big( Q^{- |\alpha|} \big) \sqrt{ f(\alpha) \log Q } \Big) \end{align*} holds uniformly for $ |\alpha| \leq 2-\varepsilon$ as $ Q \to \infty$, where $\displaystyle{ f(\alpha ) := \begin{cases} | \alpha |, & \text{ for } | \alpha |\leq 1, \\ 1, & \text{ for } | \alpha | >1 . \end{cases} }$ \end{lemma} \subsection{Proof of Theorem \ref{q theorem}} The key ingredients are Lemma \ref{thm:1} and the convolution identity \begin{equation*} \label{plugin} \frac{1}{N_\Phi (Q)} \sum_{q} \frac{W(q/Q)}{\varphi(q)} \sideset{}{^\star}\sum_{\chi \, \text{(mod }{q})} \sum_{\gamma_\chi, \gamma'_\chi} R \bigg ( \frac{(\gamma_\chi \! -\! \gamma'_\chi) \log Q}{2\pi} \bigg ) \widetilde{\Phi} (i \gamma_\chi) \widetilde{\Phi} (i \gamma'_\chi) = \int_{-\infty}^{\infty} F_{\Phi}(\alpha) \, \widehat{R}(\alpha) \, \text{\rm d}\alpha. \end{equation*} To obtain the bounds for $N_\Phi(Q, \beta)$ in Theorem \ref{q theorem}, we again use the functions \[ s^\pm_{\Delta, \beta}(x) = r^\pm_{\Delta\beta}(\Delta x). 
\] As stated in \textsection \ref{sec:evaluateMr}, these functions are a majorant and a minorant of $\chi_{[-\beta,\beta]}$ of exponential type $2\pi \Delta$, and thus with Fourier transform supported in $[-\Delta,\Delta]$. Using arguments similar to those in \textsection \ref{amf}, for any fixed $\beta>0$, we deduce that \[ \limsup_{Q\to\infty} \frac{N_\Phi(Q,\beta)}{N_\Phi(Q)} \leq \frac{1}{2} M(s^+_{\Delta, \beta}) \] and, if $N_{\Phi}^*(Q) \sim N_{\Phi}(Q)$ as $Q \to \infty$, that \[ \liminf_{Q\to\infty} \frac{N_\Phi(Q,\beta)}{N_\Phi(Q)} \geq \frac{1}{2} M(s^-_{\Delta, \beta}). \] Theorem \ref{q theorem} follows from these estimates by using \eqref{Final_answer_M_s} with $\Delta=2-\varepsilon$. \section*{Appendix A}\label{App} Here we prove the following result that was left open in the introduction. \begin{proposition} The entire function $E(z)$ defined in \eqref{Intro_Def_E_special} satisfies properties \eqref{Intro_HB_cond} and \eqref{Intro_rep_kernel}. \end{proposition} \begin{proof} We start by observing that $K(w,z)$ defined by Theorem \ref{Intro_HS_Thm1_RP} verifies the following properties: \begin{enumerate} \item[(i)] $K(w,w) > 0$ for all $w \in \mathbb{C}$. In fact, if we had $K(w,w)=0$ for some $w \in \mathbb{C}$, this would imply that $f(w) = 0$ for every $f \in \H$. This is a contradiction. \smallskip \item[(ii)] $K(\overline{w},z) = \overline{K(w, \overline{z})}$ for all $w,z \in \mathbb{C}$. This is a direct verification. \smallskip \item[(iii)] $K(w,z) = \overline{K(z,w)}$. This follows from the reproducing kernel property: \begin{align*} K(w,z) = \langle K(w,\cdot), K(z, \cdot) \rangle_{\H} = \overline{\langle K(z,\cdot), K(w, \cdot) \rangle_{\H}} = \overline{K(z,w)}. \end{align*} \end{enumerate} Whenever $f \in \H$ has a nonreal zero $w$, the function $z\mapsto f(z)(z-\overline{w})/(z-w)$ belongs to $\H$ and has the same norm as $f$. 
From the first part of the proof of \cite[Theorem 23]{B} we have that $L(w,z) = 2\pi i (\overline{w} - z) K(w,z)$ satisfies the identity \begin{equation}\label{App_1} L(w,z) = \frac{L(\alpha, z)L(w, \alpha)}{L(\alpha, \alpha)} + \frac{L(\overline{\alpha},z) L(w, \overline{\alpha})}{L(\overline{\alpha}, \overline{\alpha})} \end{equation} for all nonreal $\alpha$ and $w,z \in \mathbb{C}$. From property (ii) above and the fact that $K(\overline{\alpha}, \overline{\alpha})$ is real, we get that \begin{equation}\label{App_2} L(\alpha, \alpha) = - L(\overline{\alpha}, \overline{\alpha}). \end{equation} Taking $w = z$ in \eqref{App_1}, and using \eqref{App_2} and properties (ii) and (iii) above, we get \begin{equation}\label{App_3} L(z,z) = \frac{|L(\alpha, z)|^2}{L(\alpha, \alpha)} - \frac{|L(\alpha, \overline{z})|^2}{L(\alpha, \alpha)}. \end{equation} If we consider any (fixed) $\alpha \in \mathbb{C}^{+}$, we have $L(\alpha, \alpha) >0$, and we can define the entire function (of the variable $z$) \begin{equation*} E(\alpha,z) := \frac{L(\alpha,z)}{L(\alpha, \alpha)^{\frac12}}. \end{equation*} From \eqref{App_3} and property (i), we find that \begin{equation*} |E(\alpha, z)|^2 - |E(\alpha,\overline{z})|^2 = L(z,z) = 4\pi\,{\rm Im}\,(z)\, K(z,z) >0 \end{equation*} for all $z \in \mathbb{C}^+$. This is the Hermite-Biehler property. The identity \begin{equation*} L(w,z) = E(\alpha, z) \overline{E(\alpha, w)} - \overline{E(\alpha, \overline{z})}E(\alpha, \overline{w}) \end{equation*} is equivalent to \eqref{App_1}. In our particular case, we simply choose $\alpha = i$. \end{proof} \section*{Acknowledgements.} \noindent EC acknowledges support from CNPq-Brazil grant $302809/2011-2$, and FAPERJ grant $E-26/103.010/2012$. MBM is supported in part by an AMS-Simons Travel Grant and the NSA Young Investigator Grant H98230-13-1-0217. We would like to thank IMPA -- Rio de Janeiro and CRM -- Montr\'{e}al for sponsoring research visits during the development of this project. 
\section{Introduction} Cyclotron frequency measurements of single particles and sparse clouds in Penning traps are commonly used in high precision ion mass measurements~\cite{Marshall1998,Bergstrom2002,Dilling2006} and in measurements of the proton~\cite{DiSciacca2012} and electron~\cite{Odom2006} magnetic moments. In the plasma regime, the cyclotron resonances of electron~\cite{Gould1991} and ion~\cite{Sarid1995} plasmas have also been studied extensively. Cyclotron resonances of ions (or electrons in a low magnetic field) typically occur at radio frequencies and are relatively easy to detect. Electron cyclotron frequencies, however, are often at high microwave frequencies and must be detected using alternative methods. In the single particle regime, microwave cyclotron frequencies are measured using methods that couple the cyclotron and axial motions~\cite{Dyck1981,Odom2006}, producing detectable shifts in the axial bounce frequency. Here we outline and demonstrate a novel detection method of the cyclotron resonance of an electron plasma at microwave frequencies. The key feature of our method is the use of the quadrupole, or breathing, mode oscillation of the electron plasma to detect excitation of the cyclotron motion. We focus on the use of this technique as a tool for in-situ characterization of the magnetic field and a microwave field in a Penning-Malmberg trap. While we demonstrate the technique for microwave frequencies, this method can in principle be applied to cyclotron resonances in the radio frequency range. Initially this work was motivated by the need for an \textit{in situ} measurement of the static magnetic field in the ALPHA (Antihydrogen Laser PHysics Apparatus) experiment at CERN (European Organization for Nuclear Research)~\cite{Friesen2013}. ALPHA and ATRAP (Antihydrogen TRAP, another CERN-based experiment) synthesize neutral antihydrogen atoms from their charged constituents, held in Penning-Malmberg traps. 
Low-energy antihydrogen atoms are then confined in magnetic potential wells (Ioffe-Pritchard type~\cite{Pritchard1983} magnetic minimum atom traps), thereby eliminating interactions with (and annihilations on) the walls of the surrounding apparatus~\cite{Andresen2010,Gabrielse2012}. Accurate \textit{in situ} determinations of these trapping fields will play an important role in future precision antihydrogen spectroscopy experiments. In this work, electron plasmas are confined along the common axis of the Penning-Malmberg and Ioffe-Pritchard traps and are used to probe the magnetic trapping fields in the vicinity of the minimum magnetic field. This is precisely the region of the trap that is of spectroscopic interest; energy intervals between hyperfine levels of the antihydrogen atom are field dependent, leading to the appearance of sharp extrema in transition frequencies as atoms pass through the field minimum~\cite{Ashkezari2012}. The methods presented here were used extensively in the recent demonstration of the first resonant electromagnetic interaction with antihydrogen~\cite{Amole2012}. We also discuss the use of an electron plasma as a microwave electric field probe. Knowledge of the microwave field is crucial for any microwave experiment in such a trap. Gaps between electrodes, changes in electrode radius, and reflections create an environment that supports a complex set of standing and travelling wave modes. The resulting microwave electric fields can vary drastically as a function of position and frequency and accurately simulating the mode structure presents a largely intractable problem. Using the magnitude of the cyclotron heating by a microwave field, however, we can estimate the amplitude of the co-rotating component of the electric field. Furthermore, by employing techniques analogous to magnetic resonance imaging, we can create a map of the co-rotating microwave electric field amplitude along the cylindrical Penning-Malmberg trap axis. 
\section{Method} For the remainder of this work we focus on the implementation of the techniques in a cylindrical Penning-Malmberg trap. In principle, the techniques could be adapted for implementation in a hyperbolic electrode Penning trap. We operate under the assumption that the cyclotron frequency of the electron plasma is equivalent to the single particle cyclotron frequency, $f_{\mathrm{c}} = qB/(2 \pi m)$, where $q$ is the electron charge, $B$ is the magnitude of the magnetic field, and $m$ is the electron mass. In general, a non-neutral plasma will have a set of cyclotron modes that are shifted away from the single particle frequency, an issue that we discuss in section~\ref{shifts}. In the measurements presented here, however, such frequency shifts are below the achieved resolution. When the cyclotron motion of electrons in a plasma is driven by a pulsed microwave field, the absorbed energy is redistributed through collisions, resulting in an increased plasma temperature. We measure this temperature change by non-destructively probing the plasma's quadrupole mode frequency. The quadrupole mode oscillation of a non-neutral plasma is just one of a set of electrostatic plasma modes~\cite{Dubin1991}. The frequencies of these modes are set by the plasma density, temperature, and aspect ratio $\alpha = L/2r$, where $L$ is the plasma length (major axis) and $r$ is the radius (semi-minor axis). For a plasma confined in a perfect quadratic potential produced by distant electrodes, these mode frequencies can be calculated analytically in the cold-fluid limit~\cite{Dubin1991} and have been used experimentally to make non-destructive measurements of plasma parameters~\cite{Tinkle1994,Amoretti2003,Speck2007}. The quadrupole mode holds particular interest here because the frequency is shifted with increasing temperature above the cold fluid limit.
An approximate treatment of non-zero temperatures has been proposed~\cite{Dubin1993} and shown to agree well with experiment~\cite{Amoretti2003,Tinkle1995}. For a change in plasma temperature by $\Delta T$, the corresponding change in frequency is given by \begin{equation} ({f_2}^{'})^2 - (f_2)^2 = 5\left(3 - \frac{\alpha^2}{2}\frac{f_p^2}{(f_2^c)^2}\frac{\partial^2 g(\alpha)}{\partial \alpha^2} \right)\frac{k_B\Delta T}{m\pi^2L^2}, \label{quad_shift}\end{equation} \noindent where $k_B$ is the Boltzmann constant, and $f_2$ and ${f_2}^{'}$ are the quadrupole frequencies before and after the heating pulse, respectively. The quadrupole frequency in the cold fluid limit is given by $f_2^{\mathrm{c}}$ and $g(\alpha) = 2Q_1[\alpha(\alpha^2 - 1)^{-1/2}]/(\alpha^2 - 1)$, where $Q_1$ is the first order Legendre function of the second kind. The plasma frequency $f_\mathrm{p}$ is given by $f_{\mathrm{p}}=(2\pi)^{-1}(nq^2/m\epsilon_0)^{1/2}$, where $n$ is the plasma number density, and $\epsilon_0$ is the permittivity of free space. The temperature dependence of $f_2$ can be used to realize a non-destructive plasma temperature diagnostic. We typically work in a regime where $\Delta f_2/f_2 \ll 1$, with $\Delta f_2 = f_2^{'} - f_2$, so we make the approximation $({f_2}^{'})^2 - (f_2)^2 \approx 2f_2 \Delta f_2$. The quadrupole frequency increase is therefore expected to be linear with respect to the plasma temperature and given by \begin{equation} \Delta f_2 \approx \beta \Delta T, \label{DW}\end{equation} where $\beta$ is \begin{equation} \beta = \frac{5}{2 f_2} \left(3 - \frac{\alpha^2}{2}\frac{f_p^2}{(f_2^c)^2}\frac{\partial^2 g(\alpha)}{\partial \alpha^2} \right)\frac{k_B}{m\pi^2L^2}. \label{beta}\end{equation} Calculation of $\beta$ from (\ref{beta}) assumes a plasma confined in a perfect harmonic potential and imperfections will shift $\beta$ in an unknown manner. 
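To get a feeling for the sizes involved, the quantities above can be evaluated directly. The sketch below (illustrative only; the parameter values are representative assumptions, and the measured quadrupole frequency is used as a stand-in for the cold-fluid value $f_2^{\mathrm{c}}$, which is an approximation) computes $f_{\mathrm{c}}$, $f_{\mathrm{p}}$, and $\beta$ for a plasma similar to those described later:

```python
import numpy as np

# Physical constants (SI, CODATA values)
q    = 1.602176634e-19    # elementary charge [C]
m    = 9.1093837015e-31   # electron mass [kg]
k_B  = 1.380649e-23       # Boltzmann constant [J/K]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]

def f_cyclotron(B):
    """Single-particle cyclotron frequency f_c = qB/(2 pi m)."""
    return q * B / (2 * np.pi * m)

def f_plasma(n):
    """Plasma frequency f_p = (1/2 pi) sqrt(n q^2 / (m eps0))."""
    return np.sqrt(n * q**2 / (m * eps0)) / (2 * np.pi)

def g(alpha):
    """g(alpha) = 2 Q_1(alpha/sqrt(alpha^2 - 1)) / (alpha^2 - 1), where
    Q_1(x) = (x/2) log((x+1)/(x-1)) - 1 is the first-order Legendre
    function of the second kind."""
    x = alpha / np.sqrt(alpha**2 - 1)
    Q1 = 0.5 * x * np.log((x + 1) / (x - 1)) - 1
    return 2 * Q1 / (alpha**2 - 1)

def d2g(alpha, h=1e-2):
    """Second derivative of g by central differences."""
    return (g(alpha + h) - 2 * g(alpha) + g(alpha - h)) / h**2

# Representative (assumed) parameters: B in T, n in m^-3, L in m, f2 in Hz
B, n, alpha, L, f2 = 1.0, 2e14, 16.0, 0.026, 26e6
fp = f_plasma(n)

# beta from (3); the measured f2 stands in for the cold-fluid f2^c
beta = (5 / (2 * f2)) * (3 - 0.5 * alpha**2 * (fp / f2)**2 * d2g(alpha)) \
       * k_B / (m * np.pi**2 * L**2)

print(f"f_c = {f_cyclotron(B) / 1e9:.1f} GHz")   # about 28 GHz at 1 T
print(f"f_p = {fp / 1e6:.0f} MHz")
print(f"beta^-1 = {1e3 / beta:.1f} K/kHz")
```

With these assumptions the ideal-trap estimate for $\beta^{-1}$ comes out at a few K/kHz, the same order as the measured calibrations, consistent with the caveat that trap imperfections shift $\beta$.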
In the experiments that follow, the electrode structure has not been optimized to produce a perfect harmonic potential. We instead experimentally determine $\beta$ as well as confirm the validity of (\ref{DW}) using an independent, destructive, temperature diagnostic (see section~\ref{quad_calib} for details). It is also important to note that the quadrupole mode frequency is used only to measure relative changes in plasma temperature and we do not attempt to infer absolute temperatures from the mode frequency. The cyclotron resonance frequency is determined by monitoring the quadrupole mode frequency while a series of excitation pulses are applied at frequencies that scan through the cyclotron resonance. Each excitation pulse will cause a jump in the quadrupole mode frequency, the amplitude of which should be maximized when the excitation frequency matches the cyclotron frequency. Between each excitation pulse, the plasma cools back to its equilibrium temperature via emission of cyclotron radiation. Because the quadrupole mode diagnostic is non-destructive, we can map out a full cyclotron lineshape using a single electron plasma. In section~\ref{cycfreq} we demonstrate this technique in two different magnetic field profiles. The first is the standard uniform solenoidal field of a Penning-Malmberg trap. In this field, we expect a peak in the plasma heating at the single particle cyclotron resonance with a linewidth set by the temperature of the plasma and any inhomogeneities in the magnetic field. We can also apply the cyclotron frequency measurements to measure the minimum magnetic field of a magnetic neutral atom trap such as that used for the trapping of antihydrogen in the ALPHA experiment. These methods can also be applied in a microwave electrometry mode by using the magnitude of plasma heating at the cyclotron frequency as a measure of the amplitude of the microwave electric field. 
We can estimate the amplitude of the co-rotating component of the electric field by treating the electron plasma as a collection of single particles precessing around the magnetic field at the single particle cyclotron frequency. Working from the single particle equations of motion for an electron: \begin{equation} m\frac{d\mathbf{v}}{dt} = q\mathbf{E} + q\mathbf{v}\times\mathbf{B}, \end{equation} we can find the average change in the transverse kinetic energy when a collection of electrons undergoing cyclotron motion is exposed to a near resonant transverse oscillating electric field. To simplify the equations, we define $\omega = qB/m$ and decompose the electric field into components that co-/counter-rotate with respect to the cyclotron motion; that is, $E_\pm(t) = E_x(t) \pm iE_y(t)$. We can then write $v_{\pm} = v_x(t) \pm iv_y(t)$ and the single particle equations of motion become \begin{equation} \frac{dv_{\pm}(t)}{dt} = \frac{q}{m} E_{\pm}(t) \mp i\omega v_{\pm}(t) \label{diff}. \end{equation} Assuming the heating pulses, and consequently $E_\pm(t)$, are non-zero only for a time short compared to damping and collisional timescales, the solution to (\ref{diff}) is \begin{equation} v_\pm(t) = \left[v_\pm(t_0)e^{\pm i\omega t_0} + \frac{q}{m}\int_{-\infty}^t e^{\pm i\omega t'}E_\pm(t')dt'\right]e^{\mp i\omega t}, \end{equation} where $E_\pm(t) = 0$ for $t < t_0$. The change in average transverse kinetic energy, $\langle\mathrm{KE_{\perp}}\rangle = m\langle v_+v_- \rangle/2$, caused by a pulse of microwaves is therefore \begin{equation} \Delta\langle\mathrm{KE_{\perp}}\rangle = \frac{q^2}{2m}\left|\int_{-\infty}^{\infty}E_+(t')e^{i\omega t'}dt'\right|^2 \label{short_soln},\end{equation} where $E_+ = E_x(t) + iE_y(t)$ is the co-rotating component of the microwave electric field. 
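Equation (\ref{short_soln}) can be checked by integrating (\ref{diff}) directly. The following sketch (illustrative only; scaled units with $q = m = 1$, NumPy assumed) drives the motion with a resonant co-rotating pulse of constant envelope, for which $\int E_+ e^{i\omega t}\,dt = E_0\tau$:

```python
import numpy as np

# Scaled units with q = m = 1. Resonant co-rotating drive:
# E_plus(t) = E0 * exp(-i*omega*t) for 0 <= t <= tau, zero otherwise.
omega, E0, tau = 2 * np.pi * 50, 1.0, 1.0

def E_plus(t):
    return E0 * np.exp(-1j * omega * t) if 0 <= t <= tau else 0.0

def rhs(t, v):
    # right-hand side of the equation of motion for the "+" component
    return E_plus(t) - 1j * omega * v

# fourth-order Runge-Kutta integration from v_plus(0) = 0
n = 100_000
v, t, dt = 0j, 0.0, tau / n
for _ in range(n):
    k1 = rhs(t, v)
    k2 = rhs(t + dt / 2, v + dt / 2 * k1)
    k3 = rhs(t + dt / 2, v + dt / 2 * k2)
    k4 = rhs(t + dt, v + dt * k3)
    v += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

# the closed form predicts Delta<KE_perp> = (1/2)|E0*tau|^2 in these units
ke_numeric = 0.5 * abs(v) ** 2
ke_formula = 0.5 * (E0 * tau) ** 2
assert abs(ke_numeric - ke_formula) < 1e-4
```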
Following the microwave pulse, collisions redistribute the kinetic energy among the three degrees of freedom, resulting in a temperature change of \begin{equation} \Delta T = \frac{2}{3k_B}\Delta \langle\mathrm{KE_{\perp}}\rangle. \label{DT}\end{equation} By measuring the temperature increase due to a pulse of microwave radiation, via the quadrupole mode frequency increase, the magnitude of the co-rotating microwave electric field can be calculated using (\ref{short_soln}) and (\ref{DT}). Microwave fields at different frequencies can be probed by adjusting the magnetic field to set the cyclotron resonance to the desired frequency. For convenience we will abbreviate the co-rotating microwave electric field as `CMEF' for the remainder of this work. The structure of the CMEF along the trap axis can also be probed in this manner. If allowed by the trap construction, the electron plasma can be moved to different axial positions, providing a map of the CMEF strength at a resolution set by the plasma length. For a given plasma length, this resolution can be improved by making a magnetic resonance imaging style scan of the plasma. A linear magnetic field gradient is applied across the length of the plasma, creating a position-dependent cyclotron frequency. Microwaves injected at a given frequency will only be resonant with a narrow slice of the plasma. The resulting plasma heating depends on the local CMEF over the narrow slice and the number of particles in resonance. If the static magnetic field is changed, without changing the gradient, a different slice of plasma will be moved into resonance. In a uniform microwave electric field, this would amount to a one-dimensional projection image of the plasma, with a plasma heating proportional to the number of particles in resonance at each step.
In the case of a highly variable electric field over the plasma length and an approximately uniform density spheroidal plasma, we can extract a map of the CMEF strength over the plasma length. \section{Apparatus} The measurements presented here were performed by the ALPHA antihydrogen experiment~\cite{Amole2014} located in the Antiproton Decelerator facility at CERN. The ALPHA Penning-Malmberg trap consists of 35 cylindrical electrodes whose axis is aligned with the axis of a 1 T superconducting solenoid. Both DC potentials and oscillating fields up to several tens of megahertz in frequency can be applied to the electrodes. The measurements were made in a region of the trap with an electrode wall radius of 22.5 mm. The electrodes are thermally connected to a liquid helium bath and cool to approximately 7.5 K. Surrounding the trap electrodes are three superconducting magnets that form the magnetic minimum neutral atom trap (see figure~\ref{setup}). A three dimensional magnetic minimum is created by two mirror coils, which produce an axially increasing field around the centre, and an octupole winding, creating a radially increasing field~\cite{Bertsche2006}. A smaller superconducting solenoid surrounds a portion of the Penning trap and is used in the capture of antiprotons from the Antiproton Decelerator. With the exception of section~\ref{Standingmap} this solenoid is not energized for any of the measurements presented here. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{figure1.eps} \caption{Sketch of the core of the ALPHA apparatus. A 1 T solenoid (not pictured) surrounds the components shown here with the exception of the microwave horn located 1.3 m from the centre of the trap. The quadrupole mode excitation signal is applied to an electrode at one end of the plasma and the subsequent ringing of the plasma is picked up from the centre electrode. 
The radial extent of the plasma has been exaggerated here for illustration purposes.} \label{setup} \end{figure} Electrons are emitted by an electron gun positioned on the Penning-Malmberg trap axis by a moveable vacuum manipulator, which also includes a micro-channel plate (MCP) and phosphor screen detector (collectively referred to as the `MCP detector' for convenience) and a microwave horn. Using the MCP detector, the plasma's integrated radial density profile can be measured destructively~\cite{Andresen2009}. The number of electrons in a trapped plasma can be measured by releasing the particles onto a Faraday cup and measuring the deposited charge. From the particle number, radial profile, and knowledge of the confining potentials, the full three-dimensional density distribution can be calculated numerically~\cite{Prasad1979}. The plasma radial distribution, and therefore the aspect ratio and the parameter $\beta$ (see (\ref{DW}),(\ref{beta})), can be manipulated by applying a `rotating-wall' electric field using segmented electrodes~\cite{Huang1997}. Plasma temperatures can also be measured using the MCP detector~\cite{Eggleston1992}. If the confining potential is slowly (with respect to the axial bounce frequency of approximately 15 MHz) reduced, the highest energy particles will escape the well first and their charge will be registered by the MCP detector. Typically, the confining well is reduced to zero over 20 ms, ensuring that particles of a given energy have time to escape before the confining potential changes significantly. Assuming the plasma is in local thermal equilibrium along the magnetic field lines, the velocity distribution of the first escaping particles will follow the tail of a Maxwell-Boltzmann distribution~\cite{Eggleston1992}. The plasma temperature can therefore be determined by an exponential fit to the number of particles released as a function of well depth. 
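A minimal sketch of this exponential fit on synthetic, noise-free data (all numbers hypothetical; NumPy assumed):

```python
import numpy as np

q, k_B = 1.602176634e-19, 1.380649e-23

# Synthetic, noise-free escape curve for an assumed temperature of 150 K:
# while only the Maxwell-Boltzmann tail escapes, the released charge grows
# as exp(-q U / (k_B T)) as the well depth U (in volts) is lowered.
T_true = 150.0
U = np.linspace(0.20, 0.17, 30)              # well depths sampled on the ramp
N = 1e10 * np.exp(-q * U / (k_B * T_true))   # particles released (arbitrary scale)

# Exponential fit: the slope of ln(N) against U is -q/(k_B T)
slope = np.polyfit(U, np.log(N), 1)[0]
T_fit = -q / (k_B * slope)
assert abs(T_fit - T_true) < 1e-6
```

On real data the fit would of course be restricted to the early, tail-dominated part of the release curve and would carry counting noise.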
Microwaves at frequencies between 26 and 30 GHz are generated by an Agilent 8257D synthesizer and are carried by coaxial cable down one of two potential paths: a high power path with a 4 W amplifier for resonant experiments with antihydrogen, and a low power path (no amplification) for the electron cyclotron resonance diagnostics discussed here. These two paths merge just before entering the trap vacuum via WR28 waveguide through a hermetically sealed quartz window. Finally, an internal length of waveguide brings the microwaves to a microwave horn that is aligned with the Penning trap axis. We measure the quadrupole mode frequency by first exciting the mode with a Gaussian-modulated radio-frequency (RF) pulse applied to an electrode at one end of the plasma (see figure \ref{setup}). The subsequent ring down of the plasma ($Q \approx 1000$) is picked up on the central electrode. The response signal is amplified, passed through a broad band-pass filter, then digitized. The quadrupole mode frequency is extracted from the digital signal using a Fast Fourier Transform (FFT) and a peak-finding routine. The drive pulses are typically 0.3 - 1.0 V, 1 $\mu$s in duration, and thus have a spectral width of approximately 1 MHz. We apply 5 pulses, each separated by 100 ms, and average the 5 response signals before performing the FFT. In this configuration the quadrupole mode frequency is probed every 1.2 s. All experiments in this paper utilize electron plasmas loaded in a roughly harmonic potential produced by five electrodes in the centre of the Penning trap. At 1 T the electron cyclotron frequency is approximately 28 GHz. The plasmas typically have a radius of 1 mm and are 20 - 40 mm in length, overlapping three electrodes. Plasma loads of $3\times10^6$ to $4\times10^7$ electrons were studied. The lower limit is set by our ability to distinguish the quadrupole mode signal from background noise.
These plasmas have densities between $5\times10^{13}$ and $5\times10^{14}$ m$^{-3}$ and typical temperatures of $\sim$150~K. At these densities and temperatures, collisions will bring the cyclotron motion into equilibrium with the motion parallel to the magnetic field at a rate of roughly $10^{5}$~s$^{-1}$~\cite{Glinsky1992}. The quadrupole mode frequency of these plasmas is typically 24 - 28 MHz. \section{Quadrupole mode calibration}\label{quad_calib} Before using the quadrupole mode frequency shift to measure the cyclotron frequency or estimate the microwave electric field, the linearity of (\ref{DW}) and the value of $\beta$ were determined experimentally. This was accomplished by continuously monitoring the quadrupole mode frequency of an electron plasma, while an RF noise drive, applied to a nearby electrode, heats the plasma. After the plasma reaches a new equilibrium, the temperature is destructively measured using the MCP detector. For different RF drive amplitudes, the final plasma temperatures and the corresponding quadrupole frequency shifts were determined. Figure~\ref{calib} shows the measured calibrations for three plasmas with the same number of electrons ($2\times10^7$) but different aspect ratios. The aspect ratios are determined from the numerically calculated self-consistent density distributions based on measured particle numbers and radial profiles. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{figure2.eps} \caption{Final plasma temperature plotted against the quadrupole frequency increase for three different plasma aspect ratios. The temperature measurement uncertainties are roughly 5 K and not visible on this plot. From a linear fit to the data $\beta^{-1}$ is calculated to be $6.4\pm0.5$~K/kHz for $\alpha = 21$, $5.3\pm0.4$~K/kHz for $\alpha=14$, and $4.6\pm0.2$ K/kHz for $\alpha=10$.} \label{calib} \end{figure} The frequency shift is seen to be linear with the change in temperature, as predicted. 
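The calibration itself reduces to a linear fit of the final temperature against the quadrupole frequency shift; a sketch with synthetic data constructed to match the $\alpha = 21$ slope quoted in the caption:

```python
import numpy as np

# Final plasma temperature (K) versus quadrupole frequency increase (kHz).
# Synthetic data built with the 6.4 K/kHz slope quoted for the alpha = 21 plasma.
df2_kHz = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
T_K = 150.0 + 6.4 * df2_kHz

# beta^{-1} is the slope of the linear fit of T against Delta f2
beta_inv, T_base = np.polyfit(df2_kHz, T_K, 1)
print(beta_inv, T_base)   # → 6.4 K/kHz slope, 150 K base temperature
```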
For the majority of the measurements that follow, we use a plasma consisting of $1.2\times10^7$ electrons, $\alpha = 16$, $L = 26$ mm, a base temperature of $\sim$150 K, and a measured $\Delta T$ vs $\Delta f_2$ calibration of $\beta^{-1} = 3.7\pm0.3$~K/kHz. Quadrupole mode frequency changes can be measured to around 5 kHz, enabling us to resolve temperature changes of roughly 20 K or greater with these plasma parameters. \section{Cyclotron frequency measurements}\label{cycfreq} The cyclotron frequency of an electron plasma is measured by repeatedly probing the quadrupole mode frequency while a series of microwave pulses are applied. Each microwave pulse is 4 $\mu$s long and at a different frequency, spanning a range that includes the cyclotron resonance. The resulting increase in the quadrupole mode frequency is then measured for each pulse. Between the pulses, the electrons will radiatively cool in the 1 T field with a characteristic cooling time of roughly 4 s. To ensure the plasma returns to thermal equilibrium, each pulse is separated by 15 - 35 s. \subsection{Uniform field}\label{Uniform} We first examine the case of a nominally uniform solenoid field at 1 T. A real-time readout of the quadrupole frequency during a cyclotron frequency measurement can be seen in figure~\ref{uniformfield}(a). The lineshape is constructed by plotting the quadrupole mode frequency shifts ($\Delta f_2$) against the microwave frequency (see figure \ref{uniformfield}(b)). The observed lineshape (without additional heating) is roughly Gaussian with a dip near the peak. Increasing the plasma temperature (via RF heating) broadens the overall lineshape but a very strong narrow peak emerges with broad side lobe-like features (figure~\ref{uniformfield}(b)). This peak does not appear to broaden as plasma temperature increases. 
At the end of the lineshape measurement, while still heating and probing the quadrupole mode frequency, we move the MCP detector into place to measure the plasma temperature. Similar datasets collected using a different cylindrical Penning-Malmberg trap show the same general features: a large, roughly central peak with broad side lobe-like features. The side lobes and the relative height of the central peak change significantly at different cyclotron frequencies. Interpretation of these lineshapes is complicated by the strong frequency and position dependence of the microwave field. The narrow central peak is particularly surprising as its full-width-at-half-maximum (FWHM) is on the order of 0.5 - 1 MHz. For comparison, if the microwaves are treated as a plane-wave propagating down the trap axis, a Doppler width of $\sim$10 MHz would be expected for an electron cloud at 150 K. The narrow width of the central peak may be due to a subset of the electrons if they are confined within a constant phase region of a standing wave structure~\cite{Kleppner1962} in the trap or an effect of the fact that the wavelength of the microwaves ($\sim$1 cm) is comparable to the radius ($\sim$ 0.1 cm) and length ($\sim$4 cm) of the plasma~\cite{Dicke1953}. \begin{figure} \includegraphics[]{figure3a.eps} \hspace{0.75cm} \includegraphics[]{figure3b.eps} \caption{(a) Real-time readout of the quadrupole mode frequency during a cyclotron resonance scan of an electron plasma. The sudden jumps in frequency are due to 4~$\mu$s microwave pulses near the cyclotron resonance frequency. The slowly decreasing baseline quadrupole frequency is likely due to the slow expansion of the plasma. (b) The measured cyclotron lineshapes at different plasma temperatures.} \label{uniformfield} \end{figure} While we do not have a complete understanding of the observed lineshapes, the position of the central peak scales well with the magnetic field strength as the current in the solenoid is increased.
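The Doppler comparison above is easily quantified; the following sketch (standard constants only) evaluates the plane-wave Doppler FWHM for the conditions quoted in the text:

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant (J/K)
M_E = 9.1093837015e-31  # electron mass (kg)
C = 2.99792458e8        # speed of light (m/s)

def doppler_fwhm(T, f_c):
    """First-order Doppler FWHM for a plane wave propagating along the trap axis."""
    return np.sqrt(8 * K_B * T * np.log(2) / (M_E * C**2)) * f_c

# 150 K electrons at a 28 GHz cyclotron frequency
print(doppler_fwhm(150.0, 28e9) / 1e6)   # → ~10.5 MHz, versus the 0.5 - 1 MHz peak observed
```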
Figure~\ref{lakeshorecalib} shows the peak frequency as a function of the magnetic field measured by an uncalibrated Hall probe placed off axis within the solenoid bore. A linear fit to the data results in a root-mean-square deviation of only 1~MHz. We conclude that the position of the central peak (where the cyclotron heating is maximized) corresponds to the cyclotron resonance frequency. We are able to measure this frequency to within 1~MHz in a uniform field, corresponding to a measurement of the magnetic field to 3.6 parts in $10^5$. Surprisingly, due to the nature of the observed lineshapes, we can identify the central peak frequency more precisely using hotter electron plasmas. \begin{figure} \begin{center} \includegraphics{figure4.eps} \end{center} \caption{Peak heating frequency as a function of the solenoid current and resulting magnetic field as measured by a Hall probe. The Hall probe is located within the bore of the Penning trap solenoid.} \label{lakeshorecalib} \end{figure} \subsection{Neutral atom trap field}\label{neutraltrap} One of the goals of the ALPHA collaboration is microwave spectroscopy of the hyperfine levels of antihydrogen's ground state. The highly inhomogeneous magnetic trapping fields, however, are detrimental for such a measurement. The inhomogeneity of the magnetic field and the strong field dependence of the hyperfine transition frequencies reduce the effective time a trapped antihydrogen atom will be in resonance with a microwave field at a fixed frequency. In order to maximize the probability of inducing transitions between the hyperfine levels, a precise measurement of the magnetic trap minimum (where the field is most uniform) is critical. The neutral trap is formed by the superposition of an axial mirror field and an octupole field. Over the extent of the plasma, the octupole field varies by less than 0.1~mT so we first focus on the cyclotron frequency in the mirror field alone. 
The magnetic field produced by the mirror coils is given by \begin{equation} B_z(z,r) = B_0 + a\left(z^2 - \frac{r^2}{2}\right),\label{mirfield} \end{equation} where $B_0$ is the magnetic field at $z=r=0$ and $a \approx 16$ $\mathrm{T/m^2}$ when the mirror coils are operated at the current used for antihydrogen trapping. The magnetic field is approximately uniform over the 1 mm plasma radius so the axial gradient of the field will dominate. The magnetic field is most homogeneous at the minimum and microwaves tuned to this frequency will be resonant with the largest portion of the plasma. As the frequency is increased above the minimum, the microwaves come into resonance with increasingly narrow slices of plasma symmetrically displaced along the trap axis from the minimum. The axial position of the resonance is plotted as a function of cyclotron frequency in figure~\ref{MirLineshape}(a). A simple model of the expected lineshape can be constructed from the axial magnetic field profile with thermal broadening. The lineshape due to the magnetic field alone is shown in figure~\ref{MirLineshape}(b) (solid blue trace). The thermal motion of the electrons parallel to the magnetic field will broaden this lineshape and introduce a systematic shift of the peak frequency away from the true minimum. This systematic shift arises from the convolution of a Gaussian with the lineshape function due to the field profile. For example, if the microwave field is a plane wave propagating along the trap axis the FWHM of the Gaussian is given by the standard Doppler width $\Delta f_{\mathrm{FWHM}} = (8k_BT\ln2/mc^2)^{1/2}f_{\mathrm{c}}$, where $c$ is the speed of light in a vacuum. With a plasma temperature of 150~K, this results in a shift of the peak frequency of 4~MHz above the true minimum resonance as illustrated in figure~\ref{MirLineshape}(b) (dot-dashed green trace). Without knowledge of the microwave mode structure, however, the true Doppler width is unknown. 
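The simple model described here can be sketched numerically (our own discretisation: uniform line density over an assumed 40 mm plasma length, the field profile of (\ref{mirfield}), and a Gaussian Doppler kernel for the axially propagating plane-wave case):

```python
import numpy as np

kB, me, c, q = 1.380649e-23, 9.1093837015e-31, 2.99792458e8, 1.602176634e-19
B0, a = 1.0, 16.0      # field minimum (T) and curvature (T/m^2), as in the text
L, T = 0.040, 150.0    # assumed plasma length (m) and temperature (K)

f0 = q * B0 / (2 * np.pi * me)                  # cyclotron frequency at the minimum
z = np.linspace(-L / 2, L / 2, 20001)           # uniform line density assumed
f_res = q * (B0 + a * z**2) / (2 * np.pi * me)  # resonant frequency of each slice

# Lineshape from the field profile alone: histogram of resonant frequencies
edges = np.linspace(f0 - 20e6, f0 + 60e6, 2001)
field_only, _ = np.histogram(f_res, bins=edges)
centres = 0.5 * (edges[:-1] + edges[1:])

# Thermal broadening: convolve with a Gaussian of the plane-wave Doppler width
fwhm = np.sqrt(8 * kB * T * np.log(2) / (me * c**2)) * f0
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
df = edges[1] - edges[0]
kernel = np.exp(-0.5 * (np.arange(-500, 501) * df / sigma) ** 2)
broadened = np.convolve(field_only, kernel, mode='same')

shift = centres[np.argmax(broadened)] - f0
print(shift / 1e6)   # positive shift of a few MHz (cf. the ~4 MHz quoted in the text)
```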
The uniform field linewidths are narrower than predicted for the axially propagating plane wave case, suggesting that the peak frequency is shifted by $<$ 4 MHz. The observed cyclotron lineshape in the mirror coil field is also shown in figure~\ref{MirLineshape}(b). An onset peak is observed as expected but the lineshape deviates from the simple model at higher frequencies. The distortion of the lineshape is a result of spatial and frequency dependent variations in the CMEF. In section~\ref{Standingmap} the spatial variation of the CMEF is measured and used to better model these lineshapes. \begin{figure} \begin{center} \includegraphics{figure5a_5b.eps} \end{center} \caption{(a) The axial positions of the cyclotron resonance as a function of the microwave frequency in the mirror field. (b) The measured cyclotron resonance lineshape (black triangles) and simple models of the expected lineshape in the inhomogeneous mirror coil magnetic field. The solid blue curve denotes the expected lineshape due to the magnetic field profile alone. The dashed and dot-dashed curves show the effect of thermal broadening of this lineshape with plasma temperatures of 25 K and 150 K, respectively. Here the microwave field has been taken to be a plane-wave propagating down the Penning trap axis.} \label{MirLineshape} \end{figure} While the full lineshape is significantly distorted, the low frequency onset we wish to characterize remains a prominent feature. We take the position of the onset peak maximum to be the minimum cyclotron resonance frequency. This frequency is plotted in figure~\ref{cyc_vs_curr}(a) as a function of the current in the mirror coils. As the mirror current is increased, the minimum magnetic field increases and the field profile changes from uniform across the plasma to highly non-uniform. Significant changes in the local CMEF strengths will occur over the range of cyclotron frequencies plotted in figure~\ref{cyc_vs_curr}(a).
The onset peak frequency is relatively stable against these fluctuations, however, with a root-mean-square deviation of 10~MHz obtained from a linear fit to the data. Measurements of the mirror field lineshapes at plasma temperatures between 150 K and 1000 K show a broadening of the onset peak but we do not observe any systematic shift of the peak frequency. This is likely due to the strong effect of the changing CMEF amplitude as a function of position and frequency. We conclude that the rms deviation of 10~MHz in figure~\ref{cyc_vs_curr}(a) reflects our uncertainty in determining the minimum cyclotron frequency. This corresponds to a relative magnetic field measurement of $\Delta B/B \approx 3.4\times10^{-4}$. We also characterized the contribution to the field by the octupole magnet. A perfect octupole field would have no axial component at the trap centre but the end turns in the octupole windings add a small amount. The octupole field is approximately uniform over the radius and length of the plasma so the lineshapes are effectively those of a uniform field. Figure~\ref{cyc_vs_curr}(b) shows the measured cyclotron frequency against the octupole current. At our nominal antihydrogen trapping current, the resonance is shifted by approximately 40~MHz. When the full neutral trap is energized one would expect that the minimum cyclotron resonance frequency would be a simple superposition of the octupole and mirror field resonances. Surprisingly, however, the minimum cyclotron resonance is found roughly 40~MHz below the expected value. The cause of this deviation is currently unknown but may be due to some interaction between the superconducting magnets in the ALPHA apparatus such as shielding or flux pinning effects. While there are no known plasma effects that could explain this discrepancy, we cannot rule out the possibility of a systematic offset of 40~MHz in the measurement of the minimum field in the full neutral trap.
\begin{figure} \begin{center} \includegraphics{figure6a.eps} \hspace{0.75cm} \includegraphics{figure6b.eps} \end{center} \caption{(a) Frequency of the minimum cyclotron resonance ($f_{\mathrm{min}}$) as a function of the current in mirror coil magnets. The octupole magnet is not energized during these measurements. (b) The cyclotron resonance as a function of the current in the octupole magnet. The mirror coil magnets are not energized for these measurements.} \label{cyc_vs_curr} \end{figure} \subsection{Cyclotron frequency shifts}\label{shifts} So far we have been working under the assumption that the observed cyclotron frequency of the electron plasma is equivalent to the single particle cyclotron frequency. In practice, however, a non-neutral plasma in a Penning trap can oscillate at a set of cyclotron modes that occur near the single particle frequency. These modes have been studied experimentally in electron~\cite{Gould1991} and magnesium ion plasmas~\cite{Sarid1995,Affolter2013} in a uniform magnetic field. The observed cyclotron modes have an angular dependence $\exp(i\ell\theta)$, where $\ell \ge 1$. Assuming a uniform density plasma out to a radius $r_{\mathrm{p}}$, these modes are shifted from the single particle cyclotron frequency by an amount~\cite{Gould1994} \begin{equation} \Delta f_{\mathrm{c},\ell} = \left[\ell - 1 - \left(\frac{r_{\mathrm{p}}}{r_{\mathrm{w}}}\right)^{2\ell}\right] f_{\mathrm{rot}}, \label{cycshift} \end{equation} where $f_{\mathrm{rot}}$ is the plasma rotation frequency and $r_{\mathrm{w}}$ is the inner radius of the electrodes. The $\ell = 1$ cyclotron mode is downshifted from the single particle cyclotron frequency by an amount equal to the diocotron frequency of the plasma: $\Delta f_{\mathrm{c},1} = -(r_{\mathrm{p}}/r_{\mathrm{w}})^2f_{\mathrm{rot}} = - f_{\mathrm{d}}$. 
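The shifts in (\ref{cycshift}) can be evaluated numerically for parameters representative of this work (the density, plasma radius, and wall radius used below are the values quoted in the surrounding text):

```python
import numpy as np

Q_E, EPS0 = 1.602176634e-19, 8.8541878128e-12
n, B = 9e13, 1.0            # density (m^-3) and magnetic field (T)
r_p, r_w = 1e-3, 22.5e-3    # plasma radius and electrode wall radius (m)

f_rot = n * Q_E / (4 * np.pi * EPS0 * B)   # plasma rotation frequency

def mode_shift(l):
    """Shift of the l-th cyclotron mode from the single-particle frequency."""
    return (l - 1 - (r_p / r_w) ** (2 * l)) * f_rot

print(f_rot / 1e3)      # → ~130 kHz
print(mode_shift(1))    # → ≈ -256 Hz: downshift by the (negligible) diocotron frequency
print(mode_shift(2))    # → ≈ +f_rot: the l = 2 mode is upshifted by ~130 kHz
```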
Assuming a square plasma profile with a uniform density of $n = 9 \times 10^{13}\ \mathrm{m}^{-3}$ out to $r_{\mathrm{p}} = 1$ mm, we can estimate the rotation frequency as $f_{\mathrm{rot}} = nq/(4\pi\epsilon_0B) = 130$ kHz. Because the plasma radius is small compared to the electrode radius ($r_{\mathrm{w}} = 22.5$ mm) the $\ell = 1$ cyclotron mode is downshifted by a negligible amount. The $\ell > 1$ modes, however, will be upshifted by integer multiples of 130 kHz, which is approaching the same order as the full spectral width of the 4 $\mu$s microwave pulses and the width of the observed central peaks (figure~\ref{uniformfield}(b)). We do not, however, observe any systematic shifts of the observed cyclotron lineshapes as a function of density between $n = 8\times10^{13}\ \mathrm{m}^{-3}$ and $n=2\times10^{14}\ \mathrm{m}^{-3}$. \section{Microwave electrometry}\label{fieldamp} In addition to using the microwave-electron interactions to measure the cyclotron frequency, we can also extract information about the microwave electric field. As previously discussed, the structure of a cylindrical Penning-Malmberg trap can give rise to large variations in microwave electric and magnetic field amplitudes as a function of frequency and position in the trap. For any microwave experiment in such an environment, including hyperfine spectroscopy of antihydrogen, \textit{in situ} diagnostics of the microwave field at different positions and frequencies can be extremely useful. \subsection{Electric field amplitude} Using the quadrupole mode frequency calibration, we can measure the change in temperature due to a microwave pulse and infer the CMEF amplitude from (\ref{short_soln}) and (\ref{DT}). To ensure we are in the short pulse limit, we inject 80 ns microwave pulses. 
The collisional rate at which the cyclotron motion of the electrons equilibrates with the motion parallel to the 1 T field at 150 K is approximately~\cite{Glinsky1992} $\Gamma \sim 10^{-9}n \ \mathrm{m^3s^{-1}}$. We use a plasma with a density of $n = 2\times10^{14} \ \mathrm{m}^{-3}$ giving an expected equilibration rate of $\Gamma \sim 2\times10^{5} \ \mathrm{s^{-1}}$. The Agilent 8257D synthesizer produces a stable frequency and phase over the duration of the microwave pulse so the spectral width is effectively set by the pulse length. The full spectral width of the 80 ns pulse is $2.5\times10^7$ Hz, two orders of magnitude larger than $\Gamma$, so collisional damping can be neglected and (\ref{short_soln}) is valid. We inject square microwave pulses where the transverse components of the electric field approximately take the form: \begin{eqnarray} E_x(t) &= E_{x,0}\cos(\omega_0t)[H(t+\tau/2) - H(t - \tau/2)], \\ E_y(t) &= E_{y,0}\cos(\omega_0t+\delta_y)[H(t+\tau/2) - H(t - \tau/2)], \end{eqnarray} where $H$ is the Heaviside step function and $\tau$ is the pulse duration. The co-rotating component of the electric field is given by $E_+(t) = E_x(t) + iE_y(t)$ and one finds \begin{equation} \int_{-\infty}^{\infty}E_+(t')e^{i\omega t'}dt' = \left(\frac{\sin[(\omega_0-\omega)\tau/2]}{ \omega_0 - \omega} + \frac{\sin[(\omega_0+\omega)\tau/2]}{\omega_0+\omega}\right)E_0, \end{equation} where $E_0 = E_{x,0} +iE_{y,0}e^{i\delta_y}$. Near resonance $\Delta \omega = \omega_0 - \omega \ll \omega_0 + \omega$ so to good approximation \begin{equation} \int_{-\infty}^{\infty}E_+(t')e^{i\omega t'}dt' = \frac{\tau}{2}\mathrm{sinc}\left(\frac{\Delta \omega \tau}{2}\right)E_0 \label{E+}, \end{equation} with a total error of order $1/(\omega_0 \tau)$. Inserting (\ref{E+}) into (\ref{short_soln}) we have \begin{equation} \Delta \langle \mathrm{KE_{\perp}} \rangle = \frac{q^2\tau^2}{8m} \mathrm{sinc}^2\left(\frac{\Delta \omega \tau}{2}\right)|E_0|^2. 
\label{DKE}\end{equation} Using (\ref{DW}), (\ref{DT}) and (\ref{DKE}) and solving for the amplitude of the CMEF at resonance ($\Delta \omega = 0$) yields \begin{equation} |E_0| = \frac{2\sqrt{3mk_B\Delta f_2/\beta}}{q\tau} \label{efield}. \end{equation} As an example we use a plasma of $1.2\times10^7$ electrons with a measured quadrupole frequency calibration of $\beta^{-1} = 3.7$ K/kHz. Microwaves are injected at a resonant frequency of 27.370 GHz in $80$ ns pulses. With a power of $9$ mW emitted from the microwave horn, using the low-power microwave transmission path, we measure a quadrupole mode shift of $\Delta f_2 = 100$ kHz, corresponding to a CMEF amplitude of $18.4$~Vm$^{-1}$. Approximating the microwaves as plane waves propagating down the trap axis, this corresponds to a power of $0.7$ mW; a loss of $11$ dB from the horn to the plasma. As the resonance frequency changes, large fluctuations in electric field amplitude are observed. For hyperfine spectroscopy of trapped antihydrogen it is useful to estimate the hyperfine transition rate expected. In this case, the transverse magnetic field component of the microwave field is the relevant quantity. Unfortunately, without knowledge of the field structure in the trap we cannot properly calculate the magnetic field amplitude from the measured electric field. We can make an order of magnitude estimate, however, by approximating the microwaves as plane-waves in free space. Based on the measured CMEF amplitude, the high power transmission path of the microwave system would produce a CMEF of roughly 100 Vm$^{-1}$. For a plane wave with this electric field, the co-rotating component of the magnetic field is $B = E/c \approx 0.33$~$\mu$T, where $c$ is the speed of light in a vacuum. Based on a simulation of the interaction of microwaves with trapped antihydrogen, a positron spin flip rate of approximately 1 s$^{-1}$ is expected, consistent with the observations in~\cite{Amole2012}. 
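The worked example above can be reproduced directly from (\ref{efield}) (standard constants; the shift, calibration, and pulse length are the values quoted in the text):

```python
import numpy as np

Q_E, M_E, K_B, C = 1.602176634e-19, 9.1093837015e-31, 1.380649e-23, 2.99792458e8

def cmef_amplitude(df2, beta_inv, tau):
    """Co-rotating microwave E-field amplitude from eq. (efield).
    df2 in Hz, beta_inv in K/Hz, tau in s."""
    return 2.0 * np.sqrt(3.0 * M_E * K_B * df2 * beta_inv) / (Q_E * tau)

# Values from the text: 100 kHz shift, beta^{-1} = 3.7 K/kHz, 80 ns pulse
E0 = cmef_amplitude(100e3, 3.7e-3, 80e-9)
print(E0)           # → ~18.4 V/m

# Plane-wave order-of-magnitude estimate for the high-power path (~100 V/m)
print(100.0 / C)    # → ~3.3e-7 T, i.e. ~0.33 uT co-rotating magnetic field
```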
While only an order of magnitude estimate, these measurements are very useful in a situation where the microwave field is otherwise unknown. \subsection{Microwave electric field maps}\label{Standingmap} By moving the plasma along the trap axis and repeating the electric field amplitude measurement described above, the axial dependence of the CMEF can be probed. The spatial resolution will be set by the plasma length as field variations on a smaller scale will be averaged out. For the typical plasma used in the current work, we can therefore sample changes in the CMEF over 2 - 4 cm in this manner. We can probe the electric field on a finer scale by keeping the plasma position fixed and applying a magnetic field gradient across the plasma (see figure~\ref{StandingWaveMap}(a)) such that only a small portion of the plasma is resonant at a given frequency. The gradient is produced by the fringe field of a superconducting solenoid at one end of the ALPHA trap (see figure~\ref{setup}) and is numerically modelled using TOSCA/OPERA3D~\cite{OPERA}. Microwaves are pulsed every 35 seconds at a fixed frequency, while the Penning trap solenoid is slowly swept (keeping the gradient fixed) to bring different parts of the plasma into resonance for each pulse. The resonant position of each pulse is determined by the modelled magnetic field gradient and the rate at which the solenoid is swept. A microwave pulse length of 4 $\mu$s with a full spectral width of 500 kHz is employed such that only a small portion of the plasma is excited with each pulse. With a uniform microwave electric field, this scan would be analogous to magnetic resonance imaging of the plasma. Here, however, the plasma is approximately a uniform density spheroid in a highly structured electric field. By scanning the resonance across the plasma and measuring the quadrupole frequency shifts, a relative map of the CMEF along the $z$-axis of the plasma can be generated. 
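The conversion of the measured shifts into a relative field map, including the spheroidal-shape and field-gradient corrections detailed below, can be sketched as follows (the sampled field structure and gradient value are illustrative assumptions, not measured data):

```python
import numpy as np

def relative_cmef_map(z, df2, dBdz, L):
    """Relative |E0| along z from quadrupole shifts: (Delta f2)^(1/2) with the
    spheroidal-shape and field-gradient corrections applied."""
    raw = np.sqrt(df2)                                 # (Delta f2)^(1/2) ∝ |E0|
    shape = 1.0 / np.sqrt(1.0 - (2.0 * z / L) ** 2)    # r(0)/r(z)
    grad = np.sqrt(dBdz / dBdz[np.argmin(np.abs(z))])  # (B'(z)/B'(0))^(1/2)
    return raw * shape * grad

# Illustrative check with a hypothetical standing-wave-like field structure
L = 0.040                                    # 40 mm plasma length
z = np.linspace(-0.015, 0.015, 31)           # stay inside |z| < L/2
E_true = 1.0 + 0.5 * np.cos(2 * np.pi * z / 0.01)
df2 = E_true**2 * (1.0 - (2 * z / L) ** 2)   # shifts suppressed off-centre
dBdz = np.full_like(z, 0.09)                 # uniform ~0.09 mT/mm gradient (T/m)
rel = relative_cmef_map(z, df2, dBdz, L)     # recovers the assumed E_true profile
```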
As a simple estimate we can assume a perfectly linear magnetic field gradient and a cylindrical plasma (uniform radius between $z=-L/2$ and $z=L/2$) such that the relative plasma heating only depends on the local CMEF amplitude. In reality, the plasma is better approximated by a spheroid and the fringe field of the solenoid does not produce a perfectly linear magnetic field gradient. These two effects change the volume of the plasma that is in resonance with a microwave pulse as a function of $z$, increasing or decreasing the observed quadrupole frequency shift. To account for the spheroidal shape of the plasma, we multiply by a correction factor $r(0)/r(z) = (1-(2z/L)^2)^{-1/2}$, where $L = 40$ mm for the plasma used in figure~\ref{StandingWaveMap}. As the slope of the fringe magnetic field reduces at high $z$, a larger portion of the plasma will be excited by each pulse, increasing the plasma heating. This can be corrected for by multiplying the observed response by a factor $(B'(z)/B'(0))^{1/2}$, where $B'(z) = dB/dz$. Both correction factors have been normalized to the response at the centre of the plasma. As an example, figure~\ref{StandingWaveMap}(b) plots $(\Delta f_2)^{1/2}$ (which is proportional to $|E_0|$) as a function of $z$ at a microwave frequency of 28.375 GHz with and without the corrections applied. The spheroidal shape correction has the greatest effect far from the middle of the plasma and breaks down when $|z| = L/2$. Better measurement of the CMEF strength at these points can be obtained by repositioning the plasma over the region of interest. \begin{figure} \centering \includegraphics{figure7.eps} \caption{(a) Magnetic field gradient across the electron plasma created by the fringe field of the catching solenoid (see figure~\ref{setup}). (b) The square root of the measured quadrupole mode frequency increase plotted as a function of axial position over the plasma for a microwave frequency of 28.375~GHz.
Assuming a perfectly linear magnetic field gradient and a cylindrical plasma, the measured $(\Delta f_2)^{1/2}$ (inverted blue triangles) are proportional to the CMEF amplitude. The red triangles are the relative CMEF amplitudes when the spheroidal plasma shape and deviations from a linear gradient are considered.} \label{StandingWaveMap} \end{figure} The spatial resolution of this mapping is set by the field gradient and the linewidth of the resonance. In the current example, the gradient used is approximately 0.09 mT/mm. Based on the FWHM of the observed uniform field lineshape at 140 K (see figure~\ref{uniformfield}), which is 0.2 mT in terms of magnetic field, we estimate that each pulse samples a slice of plasma approximately 2 mm long. \subsection{Modelling the mirror field lineshapes} With the measurements of the axial CMEF profile we can attempt to better model the cyclotron lineshapes observed in section~\ref{neutraltrap}. Starting with the simple lineshape model discussed in section~\ref{neutraltrap} we apply a frequency-dependent correction factor based on a measured CMEF map. In the mirror field, each microwave frequency is resonant with two slices of the plasma symmetric about $z = 0$ (see figure~\ref{MirLineshape}(a)). From the CMEF map we can estimate the relative field strengths at these two positions and therefore the distortion of the lineshape due to the spatially varying electric field. This model will not be completely accurate, however, as the CMEF is mapped at a fixed microwave frequency. As the frequency is changed during the cyclotron resonance lineshape measurement, the CMEF profile will change with it. Thermal broadening is included in the model by convolving a Gaussian with the lineshape based on the magnetic field profile alone.
Because we do not have enough information about the structure of the microwave field to accurately model the thermal broadening, a generic Gaussian given by $\exp{(-4\ln2f^2/{\Delta f}^2)}$ is used, where the width $\Delta f$ is a fit parameter. Using the CMEF map at 28.375 GHz shown in figure~\ref{StandingWaveMap}, a model for the cyclotron lineshape shown in figure~\ref{MirLineshape} (with a peak response frequency at 28.372 GHz) is generated (figure~\ref{ModelFits}(a)). The model matches the onset peak structure well and is an improvement over the simple model but still deviates from the measurements above a frequency of 28.380 GHz. This is likely the effect of the CMEF profile changing as a function of frequency. Figure~\ref{ModelFits}(b) shows a second example of a modelled lineshape with a peak response frequency at 28.270 GHz and using a CMEF map at 28.270 GHz. The improved agreement with the measured cyclotron resonance lineshapes provides an additional measure of confidence that the observed lineshapes are due to the magnetic field inhomogeneity, thermal broadening, and the spatial and frequency dependence of the CMEF amplitude. \begin{figure} \includegraphics[width=0.48\columnwidth]{figure8a.eps}\hspace{0.04\columnwidth}\includegraphics[width=0.48\columnwidth]{figure8b.eps} \caption{(a) The improved model (blue line) of the measured cyclotron lineshape shown in figure~\ref{MirLineshape} using a map of the CMEF at 28.375 GHz. The red dashed line shows the simple model for comparison. (b) A second example demonstrating the improved modelling (blue line) of the cyclotron resonance lineshape using a CMEF map at 28.270 GHz compared to the simple model (red dashed line).} \label{ModelFits} \end{figure} \section{Conclusion} While the uniform field lineshapes measured in section~\ref{Uniform} are not fully understood, we can identify the cyclotron resonance frequency to within 1 MHz. 
This is of the same order as the spectral width of the $4$ $\mu$s microwave pulses and may be improved with longer pulses. It is also comparable to the expected systematic shifts of the observed resonance away from the single particle cyclotron frequency due to the plasma rotation. If the resolution of the cyclotron frequency measurement is increased further, careful study of the frequency shifts will be necessary. When the magnetic field is non-uniform over the plasma length, the spatial dependence of the microwave electric field will distort the cyclotron lineshape significantly. In the neutral atom trap field, the uncertainty in the measurement of the minimum field due to these distortions can be reduced by flattening the field at the minimum. With more of the plasma resonant at the minimum cyclotron frequency, variations in CMEF strength will be averaged over a larger range, approaching the uniform field case, making identification of the onset peak easier. A new version of the ALPHA apparatus, currently under construction, includes three additional mirror coils that can act as compensation coils to flatten the field minimum while maintaining the magnetic trap depth. Eliminating the uncertainties resulting from the spatial dependence of the microwave field requires the inclusion of a microwave cavity. In addition to removing a large source of uncertainty, if the majority of the plasma is confined between nodes of a trapped resonator mode, the lineshape will be dominated by a Doppler-free peak at the cyclotron frequency~\cite{Kleppner1962}, greatly increasing the achievable resolution of the measurement. Designing a cavity that does not compromise the ability to store and manipulate charged plasmas presents a challenge but may be included in future upgrades to the ALPHA apparatus. In this work, we have described a novel method for the measurement of the cyclotron frequency of an electron plasma in a Penning-Malmberg trap.
This method is applied at microwave cyclotron frequencies as an \textit{in situ}, non-destructive, and spatially resolving measurement of the static magnetic field and microwave electric field strengths in the trap. In the ALPHA trap our measurement of the magnetic field had an accuracy of about 3.6 parts in $10^{5}$ for the nominally uniform magnetic field. In the magnetic neutral atom trap fields, the minimum was resolved to within about 3.4 parts in $10^4$, with a potential systematic offset of 1.4 parts in $10^3$ which cannot be ruled out at this time. An uncertainty of 3.4 parts in $10^4$ would translate to an inaccuracy of only 64 Hz (2.5 parts in $10^{14}$) in the 1S - 2S transition frequency, assuming a minimum magnetic field of 1 T~\cite{Cesar2001}. With hardware improvements and further study, the sensitivity of these measurements could approach a resolution limited by collisional scattering (roughly 1 part in $10^6$ for typical plasmas used here). Implementation of these techniques as a diagnostic tool requires an electron or ion plasma with a detectable quadrupole mode frequency and a method for excitation of the cyclotron motion. \section{Acknowledgements} This work was supported by CNPq, FINEP/RENAFAE (Brazil), ISF (Israel), FNU (Denmark), VR (Sweden), NSERC, NRC/TRIUMF, AITF, FQRNT (Canada), DOE, NSF (USA), EPSRC, the Royal Society and the Leverhulme Trust (UK). \section{References} \bibliographystyle{unsrt}
\section{Introduction\label{sec1:Introduction}} Triangular numbers $T_{t}=\frac{t\left(t+1\right)}{2}$ are among the figurate numbers and enjoy many properties; see, e.g., \cite{key-1,key-2} for relations and formulas. Triangular numbers $T_{\xi}$ that are multiples of another triangular number $T_{t}$ \begin{equation} T_{\xi}=kT_{t}\label{eq:1} \end{equation} are investigated. Only solutions for $k>1$ are considered, as the cases $k=0$ and $k=1$ yield respectively $\xi=0$ and $\xi=t,\forall t$. Accounts of previous attempts to characterize these triangular numbers multiple of other triangular numbers can be found in \cite{key-3,key-4,key-5,key-6,key-7,key-8,key-9}. Recently, Pletser showed \cite{key-9} that, for non-square integer values of $k$, there are infinitely many solutions that can be represented simply by recurrent relations of the four variables $t,\xi,T_{t}$ and $T_{\xi}$, involving a rank $r$ and parameters $\kappa$ and $\gamma$, which are respectively the sum and the product of the $\left(r-1\right)^{\text{th}}$ and the $r^{\text{th}}$ values of $t$. The rank $r$ is defined as the number of successive values of $t$ solutions of (\ref{eq:1}) such that their successive ratios are slowly decreasing without jumps. In this paper, we present a method based on the congruence properties of $\xi\left(\text{mod\,}\ensuremath{k}\right)$, searching for expressions of the remainders as functions of $k$ or of its factors. This approach accelerates the numerical search for the values of $t_{n}$ and $\xi_{n}$ that solve (\ref{eq:1}), as it eliminates values of $\xi$ that are known not to provide solutions to (\ref{eq:1}). The gain is typically on the order of $k/\upsilon$, where $\upsilon$ is the number of remainders, which is usually such that $\upsilon\ll k$.
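To make the search problem concrete, solutions of (\ref{eq:1}) can be found by a direct brute-force scan over $t$: $T_{\xi}=kT_{t}$ has an integer solution $\xi$ exactly when $8kT_{t}+1$ is a perfect square. The following Python sketch (our own helper names, not part of the original analysis) reproduces the first solution pairs for $k=2$ and $k=7$.

```python
from math import isqrt

def T(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

def solutions(k, t_max):
    """All pairs (t, xi) with T(xi) = k*T(t) and 0 <= t <= t_max."""
    out = []
    for t in range(t_max + 1):
        d = 8 * k * T(t) + 1          # discriminant of xi^2 + xi - 2*k*T(t) = 0
        s = isqrt(d)
        if s * s == d:                 # xi is an integer iff d is a perfect square
            out.append((t, (s - 1) // 2))
    return out

print(solutions(2, 100))  # [(0, 0), (2, 3), (14, 20), (84, 119)]
print(solutions(7, 100))  # [(0, 0), (2, 6), (5, 14), (39, 104), (87, 231)]
```

The restriction of $\xi$ to a few residue classes modulo $k$, developed below, prunes most candidate values in such a search.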
\section{Rank and Recurrent Equations\label{sec2:Rank-and-Recurrent} } Sequences of solutions of (\ref{eq:1}) are known for $k=2,3,5,6,7,8$ and are listed in the Online Encyclopedia of Integer Sequences (OEIS) \cite{key-10}, with references given in Table \ref{tab1:OEIS--references}. \begin{table} \caption{\label{tab1:OEIS--references}OEIS \cite{key-10} references of sequences of integer solutions of (\ref{eq:1}) for $k=2,3,5,6,7,8$} \centering{ \begin{tabular}{ccccccc} \hline $k$ & 2 & 3 & 5 & 6 & 7 & 8\tabularnewline \hline \hline $t$ & A053141 & A061278 & A077259 & A077288 & A077398 & A336623\tabularnewline \hline $\xi$ & A001652 & A001571 & A077262 & A077291 & A077401 & A336625\tabularnewline \hline $T_{t}$ & A075528 & A076139 & A077260 & A077289 & A077399 & A336624\tabularnewline \hline $T_{\xi}$ & A029549 & A076140 & A077261 & A077290 & A077400 & A336626\tabularnewline \hline \end{tabular} \end{table} Among all solutions, $t=0$ is always a first solution of (\ref{eq:1}) for all non-square integer values of $k$, yielding $\xi=0$. Let us consider the two cases of $k=2$ and $k=7$, yielding the successive solution pairs shown in Table \ref{tab2:Solutions-of-}. We also indicate the ratios $t_{n}/t_{n-1}$ for both cases and $t_{n}/t_{n-2}$ for $k=7$. It is seen that for $k=2$, the ratio $t_{n}/t_{n-1}$ varies between close values, from 7 down to 5.829, while for $k=7$, the ratio $t_{n}/t_{n-1}$ alternates between values 2.5 ... 2.216 and 7.8 ... 7.23, while the ratio $t_{n}/t_{n-2}$ decreases regularly from 19.5 to 16.023 (corresponding approximately to the product of the alternating values of the ratio $t_{n}/t_{n-1}$). We call rank $r$ the integer value such that $t_{n}/t_{n-r}$ is approximately constant or, better, decreases regularly without jumps (a more precise definition is given below). So, here, the case $k=2$ has rank $r=1$ and the case $k=7$ has rank $r=2$.
\begin{table} \caption{\label{tab2:Solutions-of-}Solutions of (\ref{eq:1}) for $k=2,7$} \centering{ \begin{tabular}{|c|rrl|rrll|} \hline {\small{}$n$} & \multicolumn{3}{c|}{{\small{}$k=2$}} & \multicolumn{4}{c|}{{\small{}$k=7$}}\tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & \multicolumn{1}{r|}{{\small{}$t_{n}$}} & \multicolumn{1}{r|}{{\small{}$\xi_{n}$}} & {\small{}$\frac{t_{n}}{t_{n-1}}$} & \multicolumn{1}{r|}{{\small{}$t_{n}$}} & \multicolumn{1}{r|}{{\small{}$\xi_{n}$}} & \multicolumn{1}{l|}{{\small{}$\frac{t_{n}}{t_{n-1}}$}} & {\small{}$\frac{t_{n}}{t_{n-2}}$}\tabularnewline \hline \hline {\small{}0} & {\small{}0} & {\small{}0} & & {\small{}0} & {\small{}0} & & \tabularnewline \hline {\small{}1} & {\small{}2} & {\small{}3} & {\small{}--} & {\small{}2} & 6 & {\small{}--} & {\small{}--}\tabularnewline \hline {\small{}2} & {\small{}14} & {\small{}20} & {\small{}7} & 5 & {\small{}14} & 2.5 & {\small{}--}\tabularnewline \hline {\small{}3} & {\small{}84} & {\small{}119} & {\small{}6} & {\small{}39} & {\small{}104} & 7.8 & 19.5\tabularnewline \hline {\small{}4} & {\small{}492} & {\small{}696} & {\small{}5.857} & {\small{}87} & {\small{}231} & 2.231 & 17.4\tabularnewline \hline {\small{}5} & {\small{}2870} & {\small{}4059} & {\small{}5.833} & {\small{}629} & {\small{}1665} & 7.230 & 16.128\tabularnewline \hline {\small{}6} & {\small{}16730} & {\small{}23660} & {\small{}5.829} & {\small{}1394} & {\small{}3689} & 2.216 & 16.023\tabularnewline \hline \end{tabular} \end{table} In \cite{key-9}, we showed that the rank $r$ is the index of $t_{r}$ and $\xi_{r}$ solutions of (\ref{eq:1}) such that \begin{equation} \kappa=t_{r}+t_{r-1}=\xi_{r}-\xi_{r-1}-1\label{eq:3.2} \end{equation} and that the ratio $t_{2r}/t_{r}$, corrected by the ratio $t_{r-1}/t_{r}$, is equal to a constant $2\kappa+3$ \begin{equation} \frac{t_{2r}-t_{r-1}}{t_{r}}=2\kappa+3\label{eq:3-0} \end{equation} For example, for $k=7$ and $r=2$, (\ref{eq:3.2})
and (\ref{eq:3-0}) yield respectively $\kappa=7$ and $2\kappa+3=17$. Four recurrent equations for $t_{n},\xi_{n},T_{t_{n}}$ and $T_{\xi_{n}}$ are given in \cite{key-9} for each non-square integer value of $k$ \begin{align} t_{n} & =2\left(\kappa+1\right)t_{n-r}-t_{n-2r}+\kappa\label{eq:3.3}\\ \xi_{n} & =2\left(\kappa+1\right)\xi_{n-r}-\xi_{n-2r}+\kappa\label{eq:3.3-1}\\ T_{t_{n}} & =\left(4\left(\kappa+1\right)^{2}-2\right)T_{t_{n-r}}-T_{t_{n-2r}}+\left(T_{\kappa}-\gamma\right)\label{eq:3.3-2}\\ T_{\xi_{n}} & =\left(4\left(\kappa+1\right)^{2}-2\right)T_{\xi_{n-r}}-T_{\xi_{n-2r}}+k\left(T_{\kappa}-\gamma\right)\label{eq:3.3-3} \end{align} where the coefficients are functions of two constants, namely the sum $\kappa=t_{r-1}+t_{r}$ and the product $\gamma=t_{r-1}t_{r}$ of the two sequential values $t_{r-1}$ and $t_{r}$. Note that the first three relations (\ref{eq:3.3}) to (\ref{eq:3.3-2}) are independent of the value of $k$. \section{Congruence of $\xi$ modulo $k$\label{sec3:Congruence-of-}} We use the following notations: for $A,B,C\in\mathbb{Z},B<C,C>1$, $A\equiv B\left(\text{mod\,}C\right)$ means that $\exists D\in\mathbb{Z}$ such that $A=DC+B$, where $B$ and $C$ are called respectively the remainder and the modulus. To search numerically for the values of $t_{n}$ and $\xi_{n}$ that solve (\ref{eq:1}), one can use the congruence properties of $\xi\left(\text{mod\,}\ensuremath{k}\right)$ given in the following propositions. In other words, we search in the following propositions for expressions of the remainders as functions of $k$ or of its factors.
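As a numerical sanity check (ours, not part of \cite{key-9}), the recurrences (\ref{eq:3.3})--(\ref{eq:3.3-3}) and the residues of $\xi_{i}$ modulo $k$ can both be verified on the $k=7$ data of Table \ref{tab2:Solutions-of-}, with $r=2$, $\kappa=t_{1}+t_{2}=7$ and $\gamma=t_{1}t_{2}=10$:

```python
def T(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

# k = 7 solutions from Table 2 (rank r = 2, kappa = t1 + t2, gamma = t1 * t2)
k, r, kappa, gamma = 7, 2, 7, 10
t  = [0, 2, 5, 39, 87, 629, 1394]
xi = [0, 6, 14, 104, 231, 1665, 3689]

c = 4 * (kappa + 1) ** 2 - 2
for n in range(2 * r, len(t)):
    assert t[n]  == 2 * (kappa + 1) * t[n - r]  - t[n - 2 * r]  + kappa
    assert xi[n] == 2 * (kappa + 1) * xi[n - r] - xi[n - 2 * r] + kappa
    assert T(t[n])  == c * T(t[n - r])  - T(t[n - 2 * r])  + (T(kappa) - gamma)
    assert T(xi[n]) == c * T(xi[n - r]) - T(xi[n - 2 * r]) + k * (T(kappa) - gamma)

# every xi is congruent to 0 or k-1 modulo k (k = 7 is prime)
assert all(x % k in (0, k - 1) for x in xi)
print("k = 7: recurrences and congruences verified")
```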
\begin{prop} \label{prop1:For-,-}For $\forall s,k\in\mathbb{Z}^{+}$, $k$ non-square, $\exists\xi,\mu,\upsilon,i,j\in\mathbb{Z}^{+}$, such that if $\xi_{i}$ are solutions of (\ref{eq:1}), then for $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with $1\leq j\leq\upsilon$, the number $\upsilon$ of remainders is always even, $\upsilon\equiv0\left(\text{mod\,}2\right)$, the remainders come in pairs whose sum is always equal to $\left(k-1\right)$, and the sum of all remainders is always equal to the product of $\left(k-1\right)$ and the number of remainder pairs, $\sum_{j=1}^{\upsilon}\mu_{j}=\left(k-1\right)\upsilon/2$. \end{prop} \begin{proof} Let $s,i,j,k,\xi,\mu,\upsilon,\alpha,\beta\in\mathbb{Z}^{+}$, $k$ non-square, and $\xi_{i}$ solutions of (\ref{eq:1}). Rewriting (\ref{eq:1}) as $T_{t_{i}}=T_{\xi_{i}}/k$, for $T_{t_{i}}$ to be integer, $k$ must divide exactly $T_{\xi_{i}}=\xi_{i}\left(\xi_{i}+1\right)/2$, i.e., among all possibilities, $k$ divides either $\xi_{i}$ or $\left(\xi_{i}+1\right)$, yielding two possible solutions $\xi_{i}\equiv0\left(\text{mod\,}k\right)$ or $\xi_{i}\equiv-1\left(\text{mod}\,k\right)$, i.e. $\upsilon=2$ and the set of $\mu_{j}$ includes $\left\{ 0,\left(k-1\right)\right\} $. This means that $\xi_{i}$ are always congruent to either $0$ or $\left(k-1\right)$ modulo $k$ for all non-square values of $k$. Furthermore, if some $\xi_{i}$ are congruent to $\alpha$ modulo $k$, then other $\xi_{i}$ are also congruent to $\beta$ modulo $k$ with $\beta=\left(k-\alpha-1\right)$. 
As $\xi_{i}\equiv\alpha\left(\text{mod}\,k\right)$, then $\xi_{i}\left(\xi_{i}+1\right)/2\equiv\left(\alpha\left(\alpha+1\right)/2\right)\left(\text{mod\,}k\right)$ and replacing $\alpha$ by $\alpha=\left(k-\beta-1\right)$ yields $\left(\alpha\left(\alpha+1\right)/2\right)=\left(\left(k-\beta-1\right)\left(k-\beta\right)/2\right)$, giving $\xi_{i}\left(\xi_{i}+1\right)/2\equiv\left(\left(k-\beta-1\right)\left(k-\beta\right)/2\right)\left(\text{mod\,}k\right)\equiv$ $\left(\beta\left(\beta+1\right)/2\right)\left(\text{mod\,}k\right)$. In this case, $\upsilon=4$ and the set of $\mu_{j}$ includes, but is not necessarily limited to, $\left\{ 0,\alpha,\left(k-\alpha-1\right),\left(k-1\right)\right\} $. \end{proof} Note that in some cases $\upsilon>4$, as for $k=66,70,78,105,...$, where $\upsilon=8$. However, in some other cases, $\upsilon=2$ only and the set of $\mu_{j}$ contains only $\left\{ 0,\left(k-1\right)\right\} $, as shown in the next proposition. In this proposition, several rules (R) are given that constrain the congruence characteristics of $\xi_{i}$. \begin{prop} \label{prop2:For-,-}For $\forall s,k,\alpha,n\in\mathbb{Z}^{+}$, $k$ non-square, $\alpha>1$, $\exists\xi,\mu,\upsilon,i\in\mathbb{Z}^{+}$, such that if $\xi_{i}$ are solutions of (\ref{eq:1}), then $\xi_{i}$ are always only congruent to $0$ and $\left(k-1\right)$ modulo $k$, and $\upsilon=2$ if either (R1) $k$ is prime, or (R2) $k=\alpha^{n}$ with $\alpha$ prime and $n$ odd, or (R3) $k=s^{2}+1$ with $s$ even, or (R4) $k=s^{\prime2}-1$ or (R5) $k=s^{\prime2}-2$ with $s^{\prime}$ odd. \end{prop} \begin{proof} Let $s,s^{\prime},k,\alpha>1,n,i,\xi,\mu,\upsilon\in\mathbb{Z}^{+}$, $k$ non-square, and $\xi_{i}$ solutions of (\ref{eq:1}).
(R1)+(R2): If $k$ is prime or if $k=\alpha^{n}$ (with $\alpha$ prime and $n$ odd as $k$ is non-square), then, in both cases, $k$ can only divide either $\xi_{i}$ or $\left(\xi_{i}+1\right)$, yielding the two congruences $\xi_{i}\equiv0\left(\text{mod\,}k\right)$ and $\xi_{i}\equiv-1\left(\text{mod\,}k\right)$. (R3): If $k=s^{2}+1$ with $s$ even, the rank $r$ is always $r=2$ \cite{key-11}, and the only two sets of solutions are \begin{align} \left(t_{1},\xi_{1}\right) & =\left(s\left(s-1\right),\left(s^{2}+1\right)\left(s-1\right)\right)\label{eq:2.8}\\ \left(t_{2},\xi_{2}\right) & =\left(s\left(s+1\right),\left(s^{2}+1\right)\left(s+1\right)-1\right)\label{eq:2.9} \end{align} as can be easily shown. For $t_{1}$, forming \begin{align*} kT_{t_{1}} & =\frac{1}{2}\left(s^{2}+1\right)\left(s\left(s-1\right)\right)\left(s\left(s-1\right)+1\right)\\ & =\frac{1}{2}\left[\left(s^{2}+1\right)\left(s-1\right)\right]\left[\left(s^{2}+1\right)\left(s-1\right)+1\right]=T_{\xi_{1}} \end{align*} which is the triangular number of $\xi_{1}$. One obtains similarly $\xi_{2}$ from $t_{2}$. These two relations (\ref{eq:2.8}) and (\ref{eq:2.9}) show respectively that $\xi_{1}$ is congruent to $0$ modulo $k$ and $\xi_{2}$ is congruent to $\left(k-1\right)$ modulo $k$. (R4) For $k=s^{\prime2}-1$ with $s^{\prime}$ odd, the rank $r=2$ \cite{key-11}, and the only two sets of solutions are \begin{align} \left(t_{1},\xi_{1}\right) & =\left(\left(s^{\prime}-1\right)s^{\prime}-1,\left(s^{\prime2}-1\right)\left(s^{\prime}-1\right)-1\right)\label{eq:2.13}\\ \left(t_{2},\xi_{2}\right) & =\left(\left(s^{\prime}-1\right)\left(s^{\prime}+2\right)+1,\left(s^{\prime2}-1\right)\left(s^{\prime}+1\right)\right)\label{eq:2.14} \end{align} as can be easily demonstrated as above. These two relations (\ref{eq:2.13}) and (\ref{eq:2.14}) show that $\xi_{1}$ and $\xi_{2}$ are congruent respectively to $\left(k-1\right)$ and $0$ modulo $k$. 
(R5) For $k=s^{\prime2}-2$ with $s^{\prime}$ odd, the rank $r=2$ \cite{key-11}, and the only two sets of solutions are \begin{align} \left(t_{1},\xi_{1}\right) & =\left(\frac{1}{2}\left(s^{\prime}-2\right)\left(s^{\prime}+1\right),\frac{1}{2}\left(s^{\prime2}-2\right)\left(s^{\prime}-1\right)-1\right)\label{eq:2.15}\\ \left(t_{2},\xi_{2}\right) & =\left(\frac{s^{\prime}}{2}\left(s^{\prime}+1\right)-1,\frac{1}{2}\left(s^{\prime2}-2\right)\left(s^{\prime}+1\right)\right)\label{eq.2.16} \end{align} as can easily be shown as above. These two relations (\ref{eq:2.15}) and (\ref{eq.2.16}) show that $\xi_{1}$ and $\xi_{2}$ are congruent respectively to $\left(k-1\right)$ and $0$ modulo $k$. \end{proof} There are other cases of interest, as shown in the next two propositions. \begin{prop} \label{prop3:For-,-,}For $\forall n\in\mathbb{Z}^{+}$, $\exists k,\xi,\mu<k,i,j\in\mathbb{Z}^{+}$, $k$ non-square, such that if $\xi_{i}$ are solutions of (\ref{eq:1}) with $\xi_{i}\equiv\mu_{j}\left(\text{mod}\,k\right)$, and (R6) if $k$ is twice a triangular number $k=n\left(n+1\right)=2T_{n}$, then the set of $\mu_{j}$ includes $\left\{ 0,n,\left(n^{2}-1\right),\left(k-1\right)\right\} $, with $1\leq j\leq\upsilon$. \end{prop} \begin{proof} Let $n,k,\xi,\mu<k,i,j\in\mathbb{Z}^{+}$, $k$ non-square, and $\xi_{i}$ solutions of (\ref{eq:1}). Let $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with $1\leq j\leq\upsilon$. As the ratio $\xi_{i}\left(\xi_{i}+1\right)/k$ must be integer, $\xi_{i}\left(\xi_{i}+1\right)\equiv0\left(\text{mod\,}k\right)$ or $\mu_{j}\left(\mu_{j}+1\right)\equiv0\left(\text{mod}\,n\left(n+1\right)\right)$, which is obviously satisfied if $\mu_{j}=n$ or $\mu_{j}=\left(n^{2}-1\right)$. \end{proof} Finally, this last proposition gives a general expression of the congruence $\xi_{i}\left(\text{mod\,}\ensuremath{k}\right)$ for most cases, to find the remainders $\mu_{j}$ other than $0$ and $\left(k-1\right)$.
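Propositions \ref{prop1:For-,-} and \ref{prop3:For-,-,} can be checked empirically by collecting the residues $\xi_{i}\left(\text{mod\,}k\right)$ of brute-force solutions of (\ref{eq:1}); the sketch below (our own code, with an arbitrary search bound $t\leq2000$) confirms, for $k=2T_{n}$ with $n=2,3$, the pairing of the remainders and the presence of $n$ and $\left(n^{2}-1\right)$ in the remainder set.

```python
from math import isqrt

def T(n):
    return n * (n + 1) // 2

def residues(k, t_max=2000):
    """Distinct values of xi mod k over solutions of T(xi) = k*T(t), 0 < t <= t_max."""
    mu = set()
    for t in range(1, t_max + 1):
        d = 8 * k * T(t) + 1              # xi = (sqrt(d) - 1) / 2 when d is a square
        s = isqrt(d)
        if s * s == d:
            mu.add(((s - 1) // 2) % k)
    return sorted(mu)

for n in (2, 3):                           # k = 2*T(n), i.e. k = 6 and k = 12
    k = n * (n + 1)
    mu = residues(k)
    assert {0, n, n * n - 1, k - 1} <= set(mu)    # Proposition 3 (rule R6)
    assert all((k - 1 - m) in mu for m in mu)     # pair sums equal k - 1 (Proposition 1)
    print(k, mu)   # 6 [0, 2, 3, 5] and 12 [0, 3, 8, 11]
```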
\begin{prop} \label{prop4:For-,-,}For $\forall n>1\in\mathbb{Z}^{+}$, $\exists k,f,\xi,\nu<n<k,\mu<k,m<n,i,j\in\mathbb{Z}^{+}$, $k$ non-square, let $\xi_{i}$ be solutions of (\ref{eq:1}) with $\xi_{i}\equiv\mu_{j}\left(\text{mod}\,k\right)$, let $f$ be a factor of $k$ such that $f=k/n$ with $f\equiv\nu\left(\text{mod\,}\ensuremath{n}\right)$ and $k\equiv\nu n\left(\text{mod}\,n^{2}\right)$, then the set of $\mu_{j}$ includes either $\left\{ 0,mf,\left(\left(n-m\right)f-1\right),\left(k-1\right)\right\} $ or $\left\{ 0,\left(mf-1\right),\left(n-m\right)f,\left(k-1\right)\right\} $, where $m$ is an integer multiplier of $f$ in the congruence relation and such that $m<n/2$ or $m<\left(n+1\right)/2$ for $n$ being even or odd respectively, and $1\leq j\leq\upsilon$. \end{prop} \begin{proof} Let $n>1,k,f,\xi,\mu<k,m<n,i,j<n<k\in\mathbb{Z}^{+}$, $k$ non-square, and $\xi_{i}$ a solution of (\ref{eq:1}). Let $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with $1\leq j\leq\upsilon$. As the ratio $\xi_{i}\left(\xi_{i}+1\right)/k$ must be integer, $\xi_{i}\left(\xi_{i}+1\right)\equiv0\left(\text{mod\,}k\right)$ or $\mu_{j}\left(\mu_{j}+1\right)\equiv0\left(\text{mod\,}fn\right)$. For a proper choice of the factor $f$ of $k$, let $\mu_{j}$ be a multiple of $f$, $\mu_{j}=mf$; then $m\left(mf+1\right)\equiv0\left(\text{mod\,}n\right)$. As $f\equiv\nu\left(\text{mod\,}\ensuremath{n}\right)$, one has \begin{equation} m\left(m\nu+1\right)\equiv0\left(\text{mod}\,n\right)\label{eq:120} \end{equation} Now let $\left(\mu_{j}+1\right)$ be a multiple of $f$, $\mu_{j}+1=mf$; then $m\left(mf-1\right)\equiv0\left(\text{mod\,}n\right)$ or \begin{equation} m\left(m\nu-1\right)\equiv0\left(\text{mod\,}n\right)\label{eq:121} \end{equation} An appropriate combination of integer parameters $m$ and $\nu$ guarantees that (\ref{eq:120}) and (\ref{eq:121}) are satisfied. Proposition 1 yields the other remainder value as $mf+\left(n-m\right)f-1=k-1$ and $\left(mf-1\right)+\left(n-m\right)f=k-1$.
\end{proof} The appropriate combinations of integer parameters $m$ and $\nu$ are given in Table \ref{tab3:Combination-of-parameters} for $2\leq n\leq12$. The sign $-$ in subscript corresponds to the remainder $\left(mf-1\right)$; the sign $/$ indicates an absence of combination. \begin{table} \caption{\label{tab3:Combination-of-parameters}Combination of parameters $m$ and $\nu$ for $2\protect\leq n\protect\leq12$} \centering{ \begin{tabular}{|c|c|ccccccccccc|} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13} \multicolumn{1}{c}{$m$} & & \multicolumn{11}{c|}{$\nu$}\tabularnewline \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13} \multicolumn{1}{c}{} & $\searrow$ & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{6} & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{9} & \multicolumn{1}{c|}{10} & 11\tabularnewline \hline \multirow{11}{*}{$n$} & 2 & 1\_ & & & & & & & & & & \tabularnewline \cline{2-4} \cline{3-4} \cline{4-4} & 3 & 1\_ & 1 & & & & & & & & & \tabularnewline \cline{2-5} \cline{3-5} \cline{4-5} \cline{5-5} & 4 & 1\_ & / & 1 & & & & & & & & \tabularnewline \cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6} & 5 & 1\_ & 2 & 2\_ & 1 & & & & & & & \tabularnewline \cline{2-7} \cline{3-7} \cline{4-7} \cline{5-7} \cline{6-7} \cline{7-7} & 6 & 1\_ & / & / & / & 1 & & & & & & \tabularnewline \cline{2-8} \cline{3-8} \cline{4-8} \cline{5-8} \cline{6-8} \cline{7-8} \cline{8-8} & 7 & 1\_ & 3 & 2 & 2\_ & 3\_ & 1 & & & & & \tabularnewline \cline{2-9} \cline{3-9} \cline{4-9} \cline{5-9} \cline{6-9} \cline{7-9} \cline{8-9} \cline{9-9} & 8 & 1\_ & / & 3\_ & / & 3 & / & 1 & & & & \tabularnewline \cline{2-10} \cline{3-10} \cline{4-10} \cline{5-10} \cline{6-10} 
\cline{7-10} \cline{8-10} \cline{9-10} \cline{10-10} & 9 & 1\_ & 4 & / & 2 & 2\_ & / & 4\_ & 1 & & & \tabularnewline \cline{2-11} \cline{3-11} \cline{4-11} \cline{5-11} \cline{6-11} \cline{7-11} \cline{8-11} \cline{9-11} \cline{10-11} \cline{11-11} & 10 & 1\_ & / & 3 & / & 5\_ & / & 3\_ & / & 1 & & \tabularnewline \cline{2-12} \cline{3-12} \cline{4-12} \cline{5-12} \cline{6-12} \cline{7-12} \cline{8-12} \cline{9-12} \cline{10-12} \cline{11-12} \cline{12-12} & 11 & 1\_ & 5 & 4\_ & 3\_ & 2 & 2\_ & 3 & 4 & 5\_ & 1 & \tabularnewline \cline{2-13} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13} & 12 & 1\_ & / & / & / & 3 & / & 4\_ & / & / & / & 1\tabularnewline \hline \end{tabular} \end{table} One deduces from Table \ref{tab3:Combination-of-parameters} the following simple rules: 1) $\forall n\in\mathbb{Z}^{+}$, only those values of $\nu$ that are co-prime with $n$ must be kept, all other combinations (indicated by $/$ in Table \ref{tab3:Combination-of-parameters}) must be discarded as they correspond to combinations with smaller values of $n$ and $\nu$; for $n$ even, this means that all even values of $\nu$ must be discarded. For example, $\nu=2$ and $n=4$ are not co-prime and their combination obviously corresponds to $\nu=1$ and $n=2$. 2) For $\nu=1$ and $\nu=n-1$, all values of $m$ are $m=1$ with respectively the remainders $\left(mf-1\right)$ and $mf$. 3) For $\forall n,i\in\mathbb{Z}^{+}$, $n$ odd, $2\leq i\leq\left(n-1\right)/2$, and for $\nu=\left(n-\left(2i-3\right)\right)/2$ and $\nu=\left(n+\left(2i-3\right)\right)/2$, all the values of $m$ are $m=i$. 4) For $\forall n\in\mathbb{Z}^{+}$, $n$ odd, and for $\nu=2$ and $\nu=n-2$, the remainders are respectively $mf$ and $\left(mf-1\right)$. 
5) For $\forall n,i\in\mathbb{Z}^{+}$, $n$ even, $2\leq i\leq n/2$, and for $\nu=\left(n-\left(2i-3\right)\right)/2$ and $\nu=\left(n+\left(2i-3\right)\right)/2$, all the values of $m$ are $m=i$. Expressions of $\mu_{i}$ are given in Table \ref{tab4:Expressions-of-} for $2\leq n\leq12$ (with codes E$n\nu$). For example, for $k\equiv12\nu\left(\text{mod}\,12^{2}\right)$ and $\nu=5$ (code E125), i.e. $k=60,204,348,...$, $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with the set of remainders $\mu_{j}$ including $\left\{ 0,mf,\left(\left(n-m\right)f-1\right),\left(k-1\right)\right\} $ with $m=3$ (see Table \ref{tab3:Combination-of-parameters}) and $f=k/12=5,17,29...$respectively. \begin{table} \caption{\label{tab4:Expressions-of-}Expressions of $\mu_{j}$ for $2\protect\leq n\protect\leq12$} \centering{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline $n$ & $\nu$ & $m$ & $k\equiv$ & $f$ & $\mu_{j}$ & Code\tabularnewline \hline \hline 2 & 1 & 1 & $2\left(\text{mod\,}4\right)$ & $k/2$ & $0,(k/2)-1,k/2,k-1$ & E21\tabularnewline \hline 3 & 1 & 1 & $3\left(\text{mod\,}9\right)$ & $k/3$ & $0,\left(k/3\right)-1,2k/3,k-1$ & E31\tabularnewline & 2 & 1 & $6\left(\text{mod}\,9\right)$ & & $0,k/3,\left(2k/3\right)-1,k-1$ & E32\tabularnewline \hline 4 & 1 & 1 & $4\left(\text{mod\,}16\right)$ & $k/4$ & $0,\left(k/4\right)-1,3k/4,k-1$ & E41\tabularnewline & 3 & 1 & $12\left(\text{mod\,}16\right)$ & & $0,k/4,\left(3k/4\right)-1,k-1$ & E43\tabularnewline \hline 5 & 1 & 1 & $5\left(\text{mod\,}25\right)$ & $k/5$ & $0,\left(k/5\right)-1,4k/5,k-1$ & E51\tabularnewline & 2 & 2 & $10\left(\text{mod\,}25\right)$ & & $0,2k/5,\left(3k/5\right)-1,k-1$ & E52\tabularnewline & 3 & 2 & $15\left(\text{mod\,}25\right)$ & & $0,\left(2k/5\right)-1,3k/5,k-1$ & E53\tabularnewline & 4 & 1 & $20\left(\text{mod\,}25\right)$ & & $0,k/5,\left(4k/5\right)-1,k-1$ & E54\tabularnewline \hline 6 & 1 & 1 & $6\left(\text{mod\,}36\right)$ & $k/6$ & $0,\left(k/6\right)-1,5k/6,k-1$ & E61\tabularnewline & 5 & 1 & 
$30\left(\text{mod\,}36\right)$ & & $0,k/6,\left(5k/6\right)-1,k-1$ & E65\tabularnewline \hline 7 & 1 & 1 & $7\left(\text{mod\,}49\right)$ & $k/7$ & $0,\left(k/7\right)-1,6k/7,k-1$ & E71\tabularnewline & 2 & 2 & $14\left(\text{mod\,}49\right)$ & & $0,3k/7,\left(4k/7\right)-1,k-1$ & E72\tabularnewline & 3 & 3 & $21\left(\text{mod\,}49\right)$ & & $0,2k/7,\left(5k/7\right)-1,k-1$ & E73\tabularnewline & 4 & 3 & $28\left(\text{mod\,}49\right)$ & & $0,\left(2k/7\right)-1,5k/7,k-1$ & E74\tabularnewline & 5 & 2 & $35\left(\text{mod\,}49\right)$ & & $0,\left(3k/7\right)-1,4k/7,k-1$ & E75\tabularnewline & 6 & 1 & $42\left(\text{mod\,}49\right)$ & & $0,k/7,\left(6k/7\right)-1,k-1$ & E76\tabularnewline \hline 8 & 1 & 1 & $8\left(\text{mod\,}64\right)$ & $k/8$ & $0,\left(k/8\right)-1,7k/8,k-1$ & E81\tabularnewline & 3 & 3 & $24\left(\text{mod\,}64\right)$ & & $0,\left(3k/8\right)-1,5k/8,k-1$ & E83\tabularnewline & 5 & 3 & $40\left(\text{mod\,}64\right)$ & & $0,3k/8,\left(5k/8\right)-1,k-1$ & E85\tabularnewline & 7 & 1 & $56\left(\text{mod\,}64\right)$ & & $0,k/8,\left(7k/8\right)-1,k-1$ & E87\tabularnewline \hline 9 & 1 & 1 & $9\left(\text{mod\,}81\right)$ & $k/9$ & $0,(k/9)-1,8k/9,k-1$ & E91\tabularnewline & 2 & 4 & $18\left(\text{mod\,}81\right)$ & & $0,4k/9,(5k/9)-1,k-1$ & E92\tabularnewline & 4 & 2 & $36\left(\text{mod\,}81\right)$ & & $0,2k/9,(7k/9)-1,k-1$ & E94\tabularnewline & 5 & 2 & $45\left(\text{mod\,}81\right)$ & & $0,(2k/9)-1,7k/9,k-1$ & E95\tabularnewline & 7 & 4 & $63\left(\text{mod\,}81\right)$ & & $0,(4k/9)-1,5k/9,k-1$ & E97\tabularnewline & 8 & 1 & $72\left(\text{mod\,}81\right)$ & & $0,k/9,(8k/9)-1,k-1$ & E98\tabularnewline \hline 10 & 1 & 1 & $10\left(\text{mod\,}100\right)$ & $k/10$ & $0,(k/10)-1,9k/10,k-1$ & E101\tabularnewline & 3 & 3 & $30\left(\text{mod\,}100\right)$ & & $0,3k/10,(7k/10)-1,k-1$ & E103\tabularnewline & 7 & 3 & $70\left(\text{mod\,}100\right)$ & & $0,(3k/10)-1,7k/10,k-1$ & E107\tabularnewline & 9 & 1 & $90\left(\text{mod}\,100\right)$ & 
& $0,k/10,(9k/10)-1,k-1$ & E109\tabularnewline \hline 11 & 1 & 1 & $11\left(\text{mod\,}121\right)$ & $k/11$ & $0,(k/11)-1,10k/11,k-1$ & E111\tabularnewline & 2 & 5 & $22\left(\text{mod\,}121\right)$ & & $0,5k/11,(6k/11)-1,k-1$ & E112\tabularnewline & 3 & 4 & $33\left(\text{mod\,}121\right)$ & & $0,(4k/11)-1,7k/11,k-1$ & E113\tabularnewline & 4 & 3 & $44\left(\text{mod\,}121\right)$ & & $0,(3k/11)-1,8k/11,k-1$ & E114\tabularnewline & 5 & 2 & $55\left(\text{mod\,}121\right)$ & & $0,2k/11,(9k/11)-1,k-1$ & E115\tabularnewline & 6 & 2 & $66\left(\text{mod\,}121\right)$ & & $0,(2k/11)-1,9k/11,k-1$ & E116\tabularnewline & 7 & 3 & $77\left(\text{mod\,}121\right)$ & & $0,3k/11,(8k/11)-1,k-1$ & E117\tabularnewline & 8 & 4 & $88\left(\text{mod\,}121\right)$ & & $0,4k/11,(7k/11)-1,k-1$ & E118\tabularnewline & 9 & 5 & $99\left(\text{mod\,}121\right)$ & & $0,(5k/11)-1,6k/11,k-1$ & E119\tabularnewline & 10 & 1 & $110\left(\text{mod\,}121\right)$ & & $0,k/11,(10k/11)-1,k-1$ & E1110\tabularnewline \hline 12 & 1 & 1 & $12\left(\text{mod\,}144\right)$ & $k/12$ & $0,(k/12)-1,11k/12,k-1$ & E121\tabularnewline & 5 & 3 & $60\left(\text{mod\,}144\right)$ & & $0,3k/12,(9k/12)-1,k-1$ & E125\tabularnewline & 7 & 4 & $84\left(\text{mod\,}144\right)$ & & $0,(4k/12)-1,8k/12,k-1$ & E127\tabularnewline & 11 & 1 & $132\left(\text{mod\,}144\right)$ & & $0,k/12,(11k/12)-1,k-1$ & E1211\tabularnewline \hline \end{tabular} \end{table} Values of the remainders $\mu_{j}$ are given in Table \ref{tab5:Values-of-} for $2\leq k\leq120$, with rule (R) and expression (E) codes as references. R and E codes separated by comas imply that all references apply simultaneously to the case; E codes separated by + mean that all expressions are applicable to the case; some expression references are sometimes missing. One observes that in two cases (for $k=74$ and 104), expressions could not be found (indicated by question marks). 
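The expressions of Table \ref{tab4:Expressions-of-} are purely arithmetic in $k$ and its factor $f=k/n$, so a candidate remainder set can be generated directly. A minimal sketch (the function name is ours), shown here for code E125 with $k=60$ ($n=12$, $m=3$) and code E21 with $k=46$ ($n=2$, $m=1$):

```python
def expression_remainders(k, n, m, minus_first):
    """Candidate remainder set from Proposition 4 for a factor f = k/n:
    {0, m*f, (n-m)*f - 1, k-1}  or, if minus_first, {0, m*f - 1, (n-m)*f, k-1}."""
    f = k // n
    if minus_first:
        return sorted({0, m * f - 1, (n - m) * f, k - 1})
    return sorted({0, m * f, (n - m) * f - 1, k - 1})

# code E125: k = 60 satisfies k ≡ 60 (mod 144), with n = 12, nu = 5, m = 3
print(expression_remainders(60, 12, 3, False))   # [0, 15, 44, 59]
# code E21: k = 46 satisfies k ≡ 2 (mod 4), with n = 2, nu = 1, m = 1
print(expression_remainders(46, 2, 1, True))     # [0, 22, 23, 45]
```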
\begin{table} \caption{\label{tab5:Values-of-}Values of $\mu_{j}$ for $2\protect\leq k\protect\leq120$} \centering{ \begin{tabular}{|c|c|c||c|c|c|} \hline $k$ & $\mu_{j}$ & References & $k$ & $\mu_{j}$ & References\tabularnewline \hline \hline 2 & 0,1 & R1,R6,E21 & 63 & 0,27,35,62 & E72,E97\tabularnewline \hline 3 & 0,2 & R1,E31 & 65 & 0,64 & R3\tabularnewline \hline 5 & 0,4 & R1,R3,E51 & 66 & 0,11,21,32,33,44,54,65 & E21+E31+E65+E116\tabularnewline \hline 6 & 0,2,3,5 & R6,E21,E32,E61 & 67 & 0,66 & R1\tabularnewline \hline 7 & 0,6 & R1,R5,E71 & 68 & 0,16,51,67 & E41\tabularnewline \hline 8 & 0,7 & R2,R4,E81 & 69 & 0,23,45,68 & E32\tabularnewline \hline 10 & 0,4,5,9 & E21,E52,E101 & 70 & 0,14,20,34,35,49,55,69 & E21+E54+E73+E107\tabularnewline \hline 11 & 0,10 & R1,E111 & 71 & 0,70 & R1\tabularnewline \hline 12 & 0,3,8,11 & R6,E31,E43,E121 & 72 & 0,8,63,71 & R6,E81,E98\tabularnewline \hline 13 & 0,12 & R1 & 73 & 0,72 & R1\tabularnewline \hline 14 & 0,6,7,13 & E21,E72 & 74 & 0,73 & ? \tabularnewline \hline 15 & 0,5,9,14 & E32,E53 & 75 & 0,24,50,74 & E31\tabularnewline \hline 17 & 0,16 & R1,R3 & 76 & 0,19,56,75 & E43\tabularnewline \hline 18 & 0,8,9,17 & E21,E92 & 77 & 0,21,55,76 & E74,E117\tabularnewline \hline 19 & 0,18 & R1 & 78 & 0,12,26,38,39,51,65,77 & E21+E32+E61\tabularnewline \hline 20 & 0,4,15,19 & R6,E41,E54 & 79 & 0,78 & R1,R5\tabularnewline \hline 21 & 0,6,14,20 & E31,E73 & 80 & 0,79 & R4\tabularnewline \hline 22 & 0,10,11,21 & E21,E112 & 82 & 0,40,41,81 & E21\tabularnewline \hline 23 & 0,22 & R1,R5 & 83 & 0,82 & R1\tabularnewline \hline 24 & 0,23 & R4 & 84 & 0,27,56,83 & E31,E127\tabularnewline \hline 26 & 0,12,13,25 & E21 & 85 & 0,34,50,84 & E52\tabularnewline \hline 27 & 0,26 & R2 & 86 & 0,42,43,85 & E21\tabularnewline \hline 28 & 0,7,20,27 & E43,E74 & 87 & 0,29,57,86 & E32\tabularnewline \hline 29 & 0,28 & R1 & 88 & 0,32,55,87 & E83,E118\tabularnewline \hline 30 & 0,5,24,29 & R6,E51,E65 & 89 & 0,88 & R1\tabularnewline \hline 31 & 0,30 & R1 & 90 & 
0,9,80,89 & R6,E91,E109\tabularnewline \hline 32 & 0,31 & R2 & 91 & 0,13,77,90 & E76\tabularnewline \hline 33 & 0,11,21,32 & E32,E113 & 92 & 0,23,68,91 & E43\tabularnewline \hline 34 & 0,16,17,33 & E21 & 93 & 0,30,62,92 & E31\tabularnewline \hline 35 & 0,14,20,34 & E52,E75 & 94 & 0,46,47,93 & E21\tabularnewline \hline 37 & 0,36 & R1,R3 & 95 & 0,19,75,94 & E54\tabularnewline \hline 38 & 0,18,19,37 & E21 & 96 & 0,32,63,95 & E32\tabularnewline \hline 39 & 0,12,26,38 & E31 & 97 & 0,96 & R1\tabularnewline \hline 40 & 0,15,24,39 & E53,E85 & 98 & 0,48,49,97 & E21\tabularnewline \hline 41 & 0,40 & R1 & 99 & 0,44,54,98 & E92,E119\tabularnewline \hline 42 & 0,6,35,41 & R6,E61,E76 & 101 & 0,100 & R1,R3\tabularnewline \hline 43 & 0,42 & R1 & 102 & 0,50,51,101 & E21\tabularnewline \hline 44 & 0,11,32,43 & E43,E114 & 103 & 0,102 & R1\tabularnewline \hline 45 & 0,9,35,44 & E54,E95 & 104 & 0,103 & ?\tabularnewline \hline 46 & 0,22,23,45 & E21 & 105 & 0,14,20,35,69,84,90,104 & E32+E51+E71\tabularnewline \hline 47 & 0,46 & R1,R5 & 106 & 0,52,53,105 & E21\tabularnewline \hline 48 & 0,47 & R4 & 107 & 0,106 & R1\tabularnewline \hline 50 & 0,24,25,49 & E21 & 108 & 0,27,80,107 & E43\tabularnewline \hline 51 & 0,17,33,50 & E32 & 109 & 0,108 & R1\tabularnewline \hline 52 & 0,12,39,51 & E41 & 110 & 0,10,99,109 & R6,E101,E1110\tabularnewline \hline 53 & 0,52 & R1 & 111 & 0,36,74,110 & E31\tabularnewline \hline 54 & 0,26,27,53 & E21 & 112 & 0,48,63,111 & E72\tabularnewline \hline 55 & 0,10,44,54 & E51,E115 & 113 & 0,112 & R1\tabularnewline \hline 56 & 0,7,48,55 & R6,E71,E87 & 114 & 0,56,57,113 & E21\tabularnewline \hline 57 & 0,18,38,56 & E31 & 115 & 0,45,69,114 & E53\tabularnewline \hline 58 & 0,28,29,57 & E21 & 116 & 0,28,87,115 & E41\tabularnewline \hline 59 & 0,58 & R1 & 117 & 0,26,90,116 & E94\tabularnewline \hline 60 & 0,15,44,59 & E43,E125 & 118 & 0,58,59,117 & E21\tabularnewline \hline 61 & 0,60 & R1 & 119 & 0,118 & R1,R5 \tabularnewline \hline 62 & 0,30,31,61 & E21 & 120 &
0,15,104,119 & E87\tabularnewline \hline \end{tabular} \end{table} Table \ref{tab5:Values-of-} gives the correct values of the remainder pairs in most cases. There are, however, some exceptions and some missing values. Among the exceptions to the values given in Table \ref{tab5:Values-of-}, for $n=2$, the remainder values for $k=30,42,74,90,110,\ldots$ are different from the theoretical ones in Table \ref{tab4:Expressions-of-}. Furthermore, for $k=66,70,78,105,...$, additional remainders exist. Expressions are missing for $k=74$ (E21) and 104 (E85). Finally, one also observes that for 16 cases, some Rules or Expressions supersede other Expressions (indicated by Ra > Exy or Exy > Ezt), as reported in Table \ref{tab6:Rules-and-Expressions}. For example, Rule 6 supersedes Expression 21 (R6 > E21) for $k=30,42,90,110$, i.e., $k=2T_{5},2T_{6},2T_{9},2T_{10},...$ and more generally for all $k=2T_{i}$ for $i\equiv1,2\left(\text{mod}4\right)$. \begin{table} \caption{\label{tab6:Rules-and-Expressions}Rules and Expressions superseding other Rules and Expressions} \centering{ \begin{tabular}{cl} \hline $k$ & \tabularnewline \hline \hline 24 & R4 > E32; R4 > E83\tabularnewline \hline 30 & R6 > E21; R6 > E31; R6 > E103; E51 > E103; E65 > E103\tabularnewline \hline 42 & R6 > E21; R6 > E32\tabularnewline \hline 48 & R4 > E31\tabularnewline \hline 56 & R6 > E43\tabularnewline \hline 60 & E43 > E32; E43 > E52\tabularnewline \hline 65 & R3 > E53\tabularnewline \hline 72 & R6 > E43\tabularnewline \hline 80 & R4 > E51\tabularnewline \hline 84 & E31 > E41; E31 > E75\tabularnewline \hline 90 & R6 > E21; R6 > E53\tabularnewline \hline 102 & E21 > E31; E21 > E65\tabularnewline \hline 110 & R6 > E21; R6 > E52\tabularnewline \hline 114 & E21 > E32; E21 > E61\tabularnewline \hline 119 & R1 > E73; R5 > E73\tabularnewline \hline 120 & E87 > R4; E87 > E31; E87 > E54\tabularnewline \hline \end{tabular} \end{table} Note that 11 of these 16 values of $k$ are multiples of 6; the
others are congruent to 2 (mod 6) and 5 (mod 6), in three and two cases respectively. One also notices that, generally, Ra and Exy supersede Ezt with $x<z$ and $t<y$, except for $k=60$ and $120$. \section{Conclusions\label{sec4:Conclusions}} We have shown that, for indices $\xi$ of triangular numbers that are multiples of other triangular numbers, the remainders in the congruence relations of $\xi$ modulo $k$ always come in pairs whose sums equal $\left(k-1\right)$; they always include 0 and $\left(k-1\right)$, and only 0 and $\left(k-1\right)$ if $k$ is prime, an odd power of a prime, an even square plus one, or an odd square minus one or minus two. If the multiplier $k$ is twice the triangular number of $n$, the set of remainders also includes $n$ and $\left(n^{2}-1\right)$, and if $k$ has integer factors, the set of remainders includes multiples of a factor following certain rules. Finally, algebraic expressions are found for the remainders as functions of $k$ and its factors. Several exceptions are also noted, and some rules and expressions supersede others. This approach makes it possible to eliminate, in numerical searches, those $\left(k-\upsilon\right)$ values of $\xi_{i}$ that are known not to provide solutions of (\ref{eq:1}), where $\upsilon$ is the (even) number of remainders. The gain is typically on the order of $k/\upsilon$, with $\upsilon\ll k$ for large values of $k$.
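As a concrete illustration of this pruning, the occurring residues can be collected by brute force and then used to skip candidate values of $\xi$. The following is a minimal Python sketch; the code and function names are ours, not part of the paper:

```python
import math

def T(m):
    """m-th triangular number."""
    return m * (m + 1) // 2

def occurring_residues(k, n_max=2000):
    """Brute-force the residues xi mod k over solutions of T(xi) = k * T(n)."""
    residues = set()
    for n in range(n_max):
        target = k * T(n)
        xi = (math.isqrt(1 + 8 * target) - 1) // 2   # invert xi*(xi+1)/2
        if T(xi) == target:
            residues.add(xi % k)
    return residues

# For prime k only 0 and k-1 occur, so a search over xi can skip
# (k - 2) out of every k candidates:
res = occurring_residues(3)
candidates = [xi for xi in range(30) if xi % 3 in res]
```

The residues found this way pair up to $k-1$, as stated in the conclusions, which is what makes the filter cheap to precompute and apply.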
\section{Introduction} \label{sec:intro} Sign language is usually the principal communication method among hearing-impaired people. Sign language recognition (SLR) aims to transcribe sign languages into glosses (basic lexical units in a sign language), which is an important technology to bridge the communication gap between the normal-hearing and hearing-impaired people. According to the number of glosses in a sign sentence, SLR can be categorized into (a) isolated SLR (ISLR), in which each sign sentence consists of only a single gloss, and (b) continuous SLR (CSLR), in which each sign sentence may consist of multiple glosses. ISLR can be seen as a simple classification task, which has become less popular in recent years. In this paper, we focus on CSLR, which is more practical than its isolated counterpart. In recent years, more and more CSLR models have been built using deep learning techniques because of their superior performance over traditional methods \cite{stmc, vac, sfl}. According to \cite{sfl}, the backbone of most deep-learning-based CSLR models is composed of three parts: a visual module, a sequential (contextual) module, and an alignment module. Within this framework, visual features are first extracted from sign videos by the visual module. After that, sequential and contextual information is modeled by the sequential module. Finally, due to the difference between the length of a sign video and that of its gloss label sequence, an alignment module is needed to align the sequential features with the gloss label sequence and yield its probability. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/intro.pdf} \caption{An overview of the CSLR backbone and the three proposed auxiliary tasks. First, our SAC encourages the visual module to focus on informative regions by leveraging pose keypoints heatmaps.
Second, our SEC aligns the visual and sequential features at the sentence level, which can enhance the representation power of both features simultaneously. SAC and SEC constitute our preliminary work \cite{zuo2022c2slr}, consistency-enhanced CSLR ($\text{C}^2$SLR). In this work, we extend $\text{C}^2$SLR by developing a novel signer removal module based on feature disentanglement for signer-independent CSLR.} \label{fig:intro} \end{figure} Usually, such CSLR backbones are trained with the connectionist temporal classification (CTC) \cite{ctc} loss. However, since CSLR datasets are usually small, only using the CTC loss may not train the backbones sufficiently \cite{iopt, dnf, cma, stmc, self-mutual, fcn, vac}. That is, the extracted features are not representative enough to be used to produce accurate recognition results. To relieve this issue, existing works can be roughly divided into two categories. First, \cite{dnf} proposes a stage optimization strategy to iteratively refine the extracted features with the help of pseudo labels, which is widely adopted in \cite{dilated, iopt, cma, stmc, self-mutual}. However, it introduces more hyper-parameters and is time-consuming since the model needs to adapt to a different objective in each new stage \cite{fcn}. As an alternative strategy, auxiliary learning can keep the whole model end-to-end trainable by just adding several auxiliary tasks \cite{fcn, vac}. In this work, three novel auxiliary tasks are proposed to help train CSLR backbones. Our first auxiliary task aims to enhance the visual module, which is important for feature extraction but sensitive to the insufficient training problem \cite{vac, dnf, stmc}. Since the information of sign languages is mainly conveyed by signers' facial expressions and hand movements \cite{stmc, koller2020quantitative, hu2021global}, signers' faces and hands are treated as informative regions.
Thus, to enrich the visual features, some CSLR models \cite{stmc, stmc_jour, papadimitriou20_interspeech} leverage an off-the-shelf pose detector \cite{cao2019openpose, sun2019deep} to locate the informative regions and then crop the feature maps to form a multi-stream architecture. However, this architecture introduces a large number of parameters since each stream processes its inputs independently, and the cropping operation may overlook the rich information in the pose keypoints heatmaps. As shown in Figure \ref{fig:intro}, by visualizing the heatmaps, we find that they can reflect the importance of different spatial positions, which is similar to the idea of spatial attention. Thus, as shown in Figure \ref{fig:framework}, we insert a lightweight spatial attention module into the visual module and enforce the spatial attention consistency (SAC) between the learned attention masks and pose keypoints heatmaps. In this way, the visual module can pay more attention to the informative regions. Only enhancing the visual module may not fully exploit the power of the backbone. According to \cite{vac, self-mutual}, better performance can be obtained by explicitly enforcing the consistency between the visual and sequential modules. VAC \cite{vac} adopts a knowledge distillation loss between the two modules by treating the visual and sequential modules as a student-teacher pair. With a similar idea, SMKD \cite{self-mutual} transfers knowledge by shared classifiers. Knowledge distillation can be treated as a kind of consistency since it is usually instantiated as the KL-divergence loss, a measurement of the distance between two probability distributions. Nevertheless, the above two methods share a common deficiency: they measure consistency at the frame level, \emph{i.e.}, each frame has its own probability distribution.
We think that it is inappropriate to enforce frame-level consistency since the sequential module is supposed to gather contextual information; otherwise, the sequential module may be dropped. Motivated by the fact that both the visual and sequential features represent the same sentence, we propose the second auxiliary task: enforcing the sentence embedding consistency (SEC) between them. As shown in Figure \ref{fig:framework}, we build a lightweight sentence embedding extractor that can be jointly trained with the backbone, and then minimize the distance between positive sentence embedding pairs while maximizing the distance between negative pairs. We name the CSLR model trained with SAC and SEC as consistency-enhanced CSLR ($\text{C}^2$SLR). According to our experimental results (Table \ref{tab:SD}), with a transformer-based backbone, $\text{C}^2$SLR can achieve satisfactory performance on signer-dependent datasets, in which all signers in the test set appear in the training set. However, as shown in Table \ref{tab:2014SI}, $\text{C}^2$SLR cannot outperform the state-of-the-art (SOTA) work on the more challenging but realistic signer-independent CSLR (SI-CSLR). Under the SI setting, since the signers in the test set are unseen during training, removing signer-specific information can make the model more robust to signer discrepancy. In this work, we further develop a signer removal module (SRM) based on the idea of feature disentanglement. More specifically, we first extract robust sentence-level signer embeddings with statistics pooling \cite{snyder2018x} to ``distill'' signer information, which is then dispelled from the backbone implicitly by a gradient reversal layer \cite{ganin2016domain}. Finally, the SRM is trained with a signer classification loss.
To the best of our knowledge, we are the first to develop a specific module for SI-CSLR\footnote{Some works \cite{dnf, cma} evaluate their methods on SI-CSLR datasets, but none of them propose any dedicated modules for the SI setting. \cite{yin2016iterative} proposes a metric learning method to deal with the SI situation, but it focuses on ISLR.}. In summary, our main contributions are: \begin{itemize} \item We propose to enforce the consistency between the learned attention masks and pose keypoints heatmaps to enable the visual module to focus on informative regions. \item We propose to align the visual and sequential features at the sentence level to enhance the representation power of both features simultaneously. \item We propose a signer removal module from the idea of feature disentanglement to implicitly remove signer information from the backbone for SI-CSLR. To the best of our knowledge, we are the first to focus on this challenging setting. \item Extensive experiments are conducted to validate the effectiveness of the three auxiliary tasks. More remarkably, with a transformer-based backbone, our model can achieve SOTA or competitive performance on five benchmarks, while the whole model is trained in an end-to-end manner. \end{itemize} This work is an extension to our 2022 CVPR paper, $\text{C}^2$SLR \cite{zuo2022c2slr}. More specifically, we make the following new contributions: \begin{itemize} \item Besides the investigation on signer-dependent continuous sign language recognition (SD-CSLR) in the CVPR paper, we propose in this paper an additional signer removal module (SRM) to tackle the more challenging signer-independent continuous sign language recognition (SI-CSLR) problem. More specifically, the SRM is designed to remove signer information from the backbone for SI-CSLR based on feature disentanglement. To the best of our knowledge, we are the first to propose a dedicated module to deal with SI-CSLR. 
\item We successfully adapt statistics pooling to SI-CSLR to extract robust sentence-level signer embeddings for the SRM. \item We conduct sufficient ablation studies to validate the effectiveness of the SRM, and the combination of $\text{C}^2$SLR and SRM can achieve SOTA performance on an SI-CSLR benchmark. \item We also report additional experimental results of $\text{C}^2$SLR on the latest large-scale Chinese sign language dataset, CSL-Daily \cite{zhou2021improving} with a vocabulary size of 2K and about 20K videos. \end{itemize} \section{Related Works} \subsection{Deep-learning-based CSLR} According to \cite{sfl}, most deep-learning-based CSLR backbones consist of a visual module (3D-CNNs \cite{iopt, csl-3} or 2D-CNNs \cite{stmc, vac, self-mutual}), a sequential module (1D-CNNs \cite{dense, fcn}, RNNs \cite{stmc, vac, self-mutual, iopt, cma}, or Transformer \cite{sfl, slt}), and an alignment module (CTC \cite{stmc, vac, self-mutual} or hidden Markov models \cite{cnn-lstm-hmm, deep-sign}). To relieve the insufficient training issue, \cite{dnf} proposes a stage optimization strategy to iteratively refine the extracted features by using pseudo labels, which is widely adopted in \cite{iopt, cma, stmc, self-mutual}. Based on it, \cite{iopt} leverages LSTM to build an auxiliary decoder. SMKD \cite{self-mutual} proposes a three-stage optimization strategy, which takes 100 epochs to train its model. In a more time-efficient way, VAC \cite{vac} proposes visual enhancement and visual alignment constraints over the frame-level probability distributions to enhance the visual module and to enforce the consistency between the visual and sequential modules, respectively, and the whole model is still end-to-end trainable. In this work, we enhance the visual module from a novel view of spatial attention consistency, and align the two modules at the sentence level to enforce their sentence embedding consistency. 
Most of the existing works on CSLR only focus on the signer-dependent setting in which all signers in the test set appear during training. Few works pay attention to the signer-independent (SI) setting, which is more realistic but challenging as all test signers are unseen in the training set. In this work, we further propose a signer removal module based on feature disentanglement to make the model robust to signer discrepancy. \subsection{Spatial Attention} The spatial attention mechanism enables models to focus on specific positions, and is widely adopted in many computer vision tasks, such as semantic segmentation \cite{fu2019dual}, object detection \cite{woo2018cbam, cao2019gcnet}, and image classification \cite{woo2018cbam, cao2019gcnet, hu2018gather, wang2017residual, linsley2018learning}. However, the spatial attention module may not be well-trained with a single task-specific loss function. Leveraging external information to guide the spatial attention module can be a solution to this issue. In \cite{chen2019motion}, the spatial attention module is guided by motion information for video captioning. \cite{pang2019mask} and \cite{li2020relation} propose mask and relation guidance for occluded pedestrian detection and person re-identification, respectively. Interestingly, GALA \cite{linsley2018learning} leverages click maps collected in a game to supervise the spatial attention module. In this work, we leverage pose keypoints heatmaps to guide the learning of the spatial attention module so that the visual module will focus on informative regions. \subsection{Sentence Embedding} Traditional methods \cite{palangi2016deep, liu2019cross} simply feed the word embedding sequence into RNNs, and take the last hidden state (two for bidirectional RNNs) as the sentence embedding. Recently, many powerful sentence embedding extractors \cite{reimers2019sentence, gao2021simcse, carlsson2020semantic} are built on BERT \cite{kenton2019bert}.
However, it is difficult to use these methods in our work because (1) they are too large to be co-trained along with the backbone; (2) they are pretrained on spoken languages, which are totally different from sign languages represented by videos. In this work, we build a lightweight sentence embedding extractor that can be jointly trained with the CSLR backbone. \subsection{Feature Disentanglement} \label{sec:disent} For SI-CSLR, each signer can be seen as a domain, and the key is to enable the model to generalize well to unseen domains, \emph{i.e.}, the test signers. As an effective approach for domain generalization, feature disentanglement aims to decompose features into domain-invariant and domain-specific parts \cite{wang2021generalizing}. Adversarial learning has been widely adopted for feature disentanglement by treating the feature extractor as the generator and the domain classifier as the discriminator \cite{xu2020investigating, cheng2021puregaze, liu2018exploring}. For example, \cite{xu2020investigating} removes bias, \emph{e.g.}, gender and race, for facial expression recognition by training a series of domain classifiers in an adversarial manner. Recently, \cite{cheng2021puregaze} proposes a self-adversarial framework to remove gaze-irrelevant factors to boost gaze estimation performance. Another frequently-used feature disentanglement method is to leverage the attention mechanism to highlight task-relevant features, with the residual features treated as task-irrelevant ones. For example, \cite{jin2020style} uses a channel attention module to remove style information for person re-identification, and \cite{huang2021age} uses both spatial and channel attention to remove age-related features for face recognition. However, adversarial learning is usually complicated as the generator and discriminator are trained iteratively, and the attention modules would introduce extra parameters.
In this work, we adopt the gradient reversal (GR) layer \cite{ganin2016domain} that reverses the gradient coming from the domain (signer) classification loss when the back-propagation process arrives at the feature extractor (CSLR backbone) while keeping the gradient of the domain classifier unchanged. It shares a similar idea with adversarial learning, but it is totally end-to-end and introduces no extra parameters compared to attention-based methods. Thus, we believe it can serve as a simple baseline for future research on SI-CSLR. \section{Our Proposed Method} \subsection{Framework Overview} \label{sec:overview} The blue arrows in Figure \ref{fig:framework} show the three components of the CSLR backbone: visual module, sequential module, and alignment module. Taking a sign video with $T$ RGB frames $\mathbf{x}=\{\mathbf{x}_t\}_{t=1}^T \in \mathbb{R}^{T\times H \times W \times 3}$ as input, the visual module, which simply consists of several 2D-CNN\footnote{We only consider visual modules based on 2D-CNNs since a recent survey \cite{survey} shows that 3D-CNNs cannot provide as precise gloss boundaries as 2D-CNNs, and lead to worse performance.} layers ($C_1, \dots, C_n$) followed by a global average pooling (GAP) layer, first extracts visual features $\mathbf{v}=\{\mathbf{v}_t\}_{t=1}^T \in \mathbb{R}^{T\times d}$. The sequential features $\mathbf{s}=\{\mathbf{s}_t\}_{t=1}^T \in \mathbb{R}^{T\times d}$ will be further extracted by the sequential module. Finally, the alignment module computes the probability of the gloss label sequence $p(\mathbf{y}|\mathbf{x})$ based on the widely-adopted CTC \cite{ctc}, where $\mathbf{y}=\{y_i\}_{i=1}^N$ and $N$ denotes the length of the gloss sequence. \subsection{Spatial Attention Consistency (SAC)} Signers' facial expressions and hand movements are two major clues of sign languages \cite{koller2020quantitative, stmc}. 
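To make the tensor shapes of the framework overview concrete, the following NumPy mock (our own stand-ins, not the paper's actual modules: a random projection replaces the 2D-CNN visual module and a causal moving average replaces the sequential module) traces a video through the three components:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W, d, V = 16, 32, 32, 8, 10            # V glosses + 1 CTC blank

x = rng.standard_normal((T, H, W, 3))        # input video: T RGB frames

# "Visual module": per-frame feature extraction + global average pooling.
proj = rng.standard_normal((H * W * 3, d))
v = x.reshape(T, -1) @ proj                  # visual features, T x d

# "Sequential module": a causal moving average as a contextual stand-in.
s = np.stack([v[max(0, t - 2):t + 1].mean(axis=0) for t in range(T)])

# Frame-level gloss probabilities consumed by the CTC alignment module.
cls = rng.standard_normal((d, V + 1))
logits = s @ cls
e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
p = e / e.sum(axis=1, keepdims=True)
```

Only the shapes are faithful here: $\mathbf{v}, \mathbf{s} \in \mathbb{R}^{T\times d}$ and the frame-level distribution over $|\mathcal{V}|+1$ symbols.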
Thus, it is reasonable to expect the visual module to focus on signers' faces and hands, \emph{i.e.}, informative regions (IRs). From this view, we insert a spatial attention module into the visual module and enforce the consistency between the learned attention masks and keypoints heatmaps. Since SAC is applied to all frames in the same way, we will omit the time steps in the formulation below. \begin{figure}[t] \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\textwidth]{figures/sac.pdf} \caption{} \label{fig:sac} \end{subfigure} \hfill \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/heatmap.pdf} \caption{} \label{fig:heatmap} \end{subfigure} \caption{(a) The architecture of our spatial attention module. ($J\times K\times C$: the size of the input feature maps; GAP: global average pooling; CMP: channel-wise max pooling.) (b) Two examples of the original and refined heatmaps.} \end{figure} \subsubsection{Spatial Attention Module} We build our spatial attention module based on CBAM \cite{woo2018cbam} due to its simplicity and effectiveness. As shown in Figure \ref{fig:sac}, we first pick the most informative channel via a channel-wise max pooling (CMP) operation: \begin{equation} \mathbf{M}_1 = f_{CMP}(\mathbf{F}) \in \mathbb{R}^{J\times K\times 1}, \label{eq:1} \end{equation} where $\mathbf{M}_1$ is the feature map squeezed by CMP, and $\mathbf{F}\in\mathbb{R}^{J\times K \times C}$ denotes the input feature maps. Besides CMP, CBAM also squeezes the feature maps with an average pooling operation along the channel dimension. However, we propose to dynamically weight the importance of each channel. As shown in Figure \ref{fig:sac}, we first conduct global average pooling (GAP) over $\mathbf{F}$ to gather global spatial information. Then the channel weights $\mathbf{E}\in (0,1)^{1\times 1 \times C}$ are simply generated by a channel-wise softmax layer.
By a weighted sum along the channel dimension, we can generate another squeezed feature map $\mathbf{M}_2$: \begin{equation} \mathbf{M}_2 = \mathbf{F} \oplus \mathbf{E} = \sum_{i=1}^C \mathbf{F}_i \cdot \mathbf{E}_i \in \mathbb{R}^{J\times K\times 1}. \end{equation} Finally, the spatial attention mask $\mathbf{M}$ is generated as: \begin{equation} \mathbf{M} = \sigma (f_{conv}(cat(\mathbf{M}_1, \mathbf{M}_2))) \in (0,1)^{J\times K}, \end{equation} where $\sigma(\cdot)$ is the sigmoid function, $f_{conv}(\cdot)$ is a 2D-CNN layer with a kernel size of 7\texttimes7, and $cat(\cdot,\cdot)$ is a channel-wise concatenation operation. The output feature maps are the element-wise product of $\mathbf{F}$ and $\mathbf{M}$ (with $\mathbf{M}$ broadcast along the channel dimension). In this way, important positions can be highlighted while trivial ones are suppressed. It should be noted that our channel weights are similar to the channel attention module in CBAM, but ours introduces no extra parameters and can even outperform the vanilla CBAM according to our ablation studies in Table \ref{tab:sac}. \subsubsection{Keypoints Heatmap Extractor} Simply training the spatial attention module with the backbone may lead to sub-optimal solutions. Given the prior knowledge that signers' faces and hands are informative regions (IRs), we guide the spatial attention module with keypoints heatmaps extracted by the pretrained HRNet \cite{sun2019deep, andriluka20142d}. Specifically, we first normalize the raw outputs of HRNet linearly to obtain the original heatmaps: \begin{equation} \mathbf{H}_o^i = \frac{f_H^i(\mathbf{I}) - \min {f_H^i(\mathbf{I})}} {\max {f_H^i(\mathbf{I})} - \min{f_H^i(\mathbf{I})}} \in [0,1]^{H\times W}, \end{equation} where $\mathbf{I}$ is the raw RGB frame, $f_H(\cdot)$ is the pretrained HRNet, and $i\in\{1,2,3\}$ denotes the face, left hand, and right hand, respectively.
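The attention-mask computation above can be sketched in NumPy as follows (our own code; the function names and toy kernel are assumptions, and the 7\texttimes7 fusion convolution is implemented naively for clarity):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_mask(F, conv_w, conv_b=0.0):
    """F: J x K x C feature maps; conv_w: 7 x 7 x 2 fusion-conv kernel."""
    J, K, _ = F.shape
    M1 = F.max(axis=2)                       # CMP: channel-wise max pooling
    E = softmax(F.mean(axis=(0, 1)))         # channel weights from GAP + softmax
    M2 = (F * E).sum(axis=2)                 # weighted sum over channels
    stacked = np.stack([M1, M2], axis=2)     # channel-wise concatenation
    padded = np.pad(stacked, ((3, 3), (3, 3), (0, 0)))
    out = np.empty((J, K))
    for a in range(J):                       # naive 'same' 7x7 convolution
        for b in range(K):
            out[a, b] = (padded[a:a + 7, b:b + 7] * conv_w).sum() + conv_b
    return sigmoid(out)                      # attention mask M in (0,1)^{J x K}
```

The mask is then multiplied element-wise with $\mathbf{F}$, so each spatial position is re-weighted by a value in $(0,1)$.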
\subsubsection{Post-processing} \label{sec:post-process} Although the original heatmaps can roughly highlight the positions of the IRs, they have some defects. As shown in Figure \ref{fig:heatmap}, some trivial regions, \emph{e.g.}, the top of the face heatmap in the first row and the middle part of the left hand heatmap in the second row, may receive high activation values. Besides, some highlighted regions, \emph{e.g.}, both of the face heatmaps in Figure \ref{fig:heatmap}, may not cover the IRs entirely. In addition, there is usually a mismatch between the fixed heatmap resolution of the pretrained HRNet and that of the spatial attention masks. Below we elaborate our heatmap post-processing module that addresses these issues. We first locate the center of each IR from the original heatmaps via a simple argmax operation: $(x_i, y_i) =\mathrm{argmax}\ \mathbf{H}_o^i$. To fit different resolutions of spatial attention masks, we normalize the center as $(\hat{x}_i, \hat{y}_i) = (\frac{x_i}{H-1}, \frac{y_i}{W-1})$. Suppose the spatial attention masks have a common resolution of $J \times K$, then a Gaussian-like refined keypoints heatmap is generated for each IR to reduce unwanted noise: \begin{equation} \label{equ:post} \mathbf{H}_r^i(a,b) = \exp{\left(-\frac{1}{2}\left(\frac{(a-\hat{c}_i^x)^2}{(J/\gamma_x)^2}+\frac{(b-\hat{c}_i^y)^2}{(K/\gamma_y)^2}\right)\right)}, \end{equation} where $0\leq a < J$, $0\leq b < K$. $(\hat{c}_i^x, \hat{c}_i^y) = (\hat{x}_i(J-1), \hat{y}_i(K-1))$, which denotes the transformed center for each IR under the resolution $J \times K$. $\gamma_x$ and $\gamma_y$ are two hyper-parameters to control the scale of the highlighted regions. In practice, we set $\gamma_x=\gamma_y$. Finally, we merge the three processed IR heatmaps into a single one: $\mathbf{H}_r = \underset{i}{\max}\ \mathbf{H}_r^i \in (0,1)^{J\times K}$.
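The refined heatmap of Eq. \ref{equ:post} can be generated as follows (a sketch in our own code, assuming $\gamma_x=\gamma_y=\gamma$ as in practice; the hypothetical heatmap names in the merge comment are ours):

```python
import numpy as np

def refined_heatmap(center_hat, J, K, gamma=3.0):
    """Gaussian-like refined keypoint heatmap for one informative region.
    center_hat: normalized IR center (x_hat, y_hat) in [0, 1]^2."""
    x_hat, y_hat = center_hat
    cx, cy = x_hat * (J - 1), y_hat * (K - 1)   # transformed center
    a = np.arange(J)[:, None]                   # row coordinates
    b = np.arange(K)[None, :]                   # column coordinates
    return np.exp(-0.5 * ((a - cx) ** 2 / (J / gamma) ** 2
                          + (b - cy) ** 2 / (K / gamma) ** 2))

# Merge face / left-hand / right-hand heatmaps by element-wise max:
# H_r = np.maximum.reduce([H_face, H_lh, H_rh])
```

The peak value 1 sits at the transformed center, and $\gamma$ controls how quickly the activation decays away from it.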
\subsubsection{SAC Loss} The spatial attention module is guided by the refined keypoints heatmaps via the SAC loss\footnote{For implementation, we further compute the average of $\mathcal{L}_{sac}$ over all time steps.}: \begin{equation} \mathcal{L}_{sac} = \frac{1}{J\times K} \| \mathbf{M}-\mathbf{H}_r \|_2^2. \end{equation} \subsection{Sentence Embedding Consistency (SEC)} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{figures/sec.pdf} \caption{The workflow of sentence embedding extraction. We omit LayerNorm \cite{layernorm} for simplicity.} \label{fig:sec} \end{figure} Some works \cite{vac, self-mutual} find that enforcing the consistency between the visual and sequential features can enhance their representation power, and lead to better performance. Different from \cite{vac, self-mutual} that measure their consistency at the frame level, we impose a sentence embedding consistency between them. \subsubsection{Sentence Embedding Extractor (SEE)} Within a sign video, each gloss consists of only a few frames. We believe a good SEE for sign languages should take local contexts into consideration. As shown in Figure \ref{fig:sec}, our SEE is built on QANet \cite{qanet}, which consists of a depth-wise temporal convolution network (TCN) layer and a transformer encoder layer. The depth-wise TCN first extracts local contextual information from the frame-level feature sequence, then the transformer encoder models global contexts by its inner self-attention module. Similar to the class token in BERT \cite{kenton2019bert}, we first prepend a learnable sentence embedding token, [SEN], to the sequential features $\mathbf{s}\in\mathbb{R}^{T\times d}$ defined in Section \ref{sec:overview}: \begin{equation} \mathbf{s}' = cat(\text{[SEN]}, \mathbf{s}) \in \mathbb{R}^{(T+1)\times d}. 
\end{equation} The input of the SEE is the summation of the feature sequence and the positional embeddings \cite{transformer}; \emph{i.e.}, $\mathbf{s}''=\mathbf{s}' + \mathbf{P}$, where $\mathbf{P}\in\mathbb{R}^{(T+1)\times d}$. Within the SEE, the depth-wise TCN \cite{wu2018pay} layer first models local contexts with a residual shortcut: $\mathbf{s}_l''=f_{TCN}(\mathbf{s}'')+\mathbf{s}''$. Then the transformer encoder layer gathers information from all time steps to get the sentence embedding: \begin{equation} \mathbf{E}_{sen}^s = f_{TF}(\mathbf{s}_l'') = \sum_{i=0}^T w_i \mathbf{s}_{l_i}'' \in \mathbb{R}^d, \end{equation} where $w_i$ are the weights learned by the self-attention module in the transformer encoder. We can also get the sentence embedding of the visual features, $\mathbf{E}_{sen}^v$, in the same way. \subsubsection{Negative Sampling} Directly minimizing the distance between $\mathbf{E}_{sen}^s$ and $\mathbf{E}_{sen}^v$ will result in trivial solutions. For example, if the parameters of SEE are all zeros, then the outputs of SEE will always be the same. A simple way to address this issue is to introduce negative samples. In this work, we follow the common practice \cite{schroff2015facenet, ye2019unsupervised, oord2018representation, hjelm2018learning} and sample another video from the mini-batch and take its sequential features as the negative sample. Note that most CSLR models \cite{vac, self-mutual, stmc, stmc_jour} are trained with a batch size of 2, and our negative sampling strategy will degenerate to swapping under this setting: \begin{equation} (neg(\mathbf{B}[0]), neg(\mathbf{B}[1])) = (\mathbf{B}[1], \mathbf{B}[0]), \end{equation} where $\mathbf{B} \in \mathbb{R}^{2\times T\times d}$ is a mini-batch of the sequential features, and $neg(\mathbf{B}[\cdot])$ denotes the corresponding negative sample.
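For a batch of two videos, the swapping strategy combined with the cosine-distance triplet objective used for the SEC loss can be sketched as follows (our own code and names, not the paper's implementation):

```python
import numpy as np

def cos_dist(x1, x2):
    """Cosine distance d(x1, x2) = 1 - cos(x1, x2)."""
    return 1.0 - (x1 @ x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def sec_loss(E_v, E_s, alpha=0.5):
    """E_v, E_s: 2 x d sentence embeddings of the visual / sequential
    features for the two videos in a mini-batch; alpha is the margin."""
    E_s_neg = E_s[::-1]                      # swap: negatives from the other video
    per_video = [max(cos_dist(E_v[i], E_s[i])
                     - cos_dist(E_v[i], E_s_neg[i]) + alpha, 0.0)
                 for i in range(2)]
    return sum(per_video) / 2
```

The loss is zero once each positive pair is closer than its swapped negative by at least the margin, which rules out the constant-output trivial solution.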
\subsubsection{SEC Loss} We implement the SEC loss as a triplet loss \cite{schroff2015facenet} and minimize the distances between the sentence embeddings computed from the visual and sequential features of the same sentence, while maximizing the distances between those from different sentences: \begin{equation} \begin{split} \mathcal{L}_{sec} = \max &\{d(\mathbf{E}_{sen}^v, \mathbf{E}_{sen}^s) - d(\mathbf{E}_{sen}^v, neg(\mathbf{E}_{sen}^{s}))+\alpha, 0\}, \end{split} \label{equ:sec} \end{equation} where $d(\mathbf{x}_1,\mathbf{x}_2)=1-\frac{\mathbf{x}_1 \cdot \mathbf{x}_2}{\|\mathbf{x}_1\|_2 \cdot \|\mathbf{x}_2\|_2}$; $\{\mathbf{E}_{sen}^v, \mathbf{E}_{sen}^s\}$ are sentence embeddings of visual and sequential features from the same sentence; $\{\mathbf{E}_{sen}^v, neg(\mathbf{E}_{sen}^{s})\}$ are sentence embeddings of visual and sequential features from different sentences, and we treat the sentence embedding of the sequential features from a different sentence as the negative sample $neg(\mathbf{E}_{sen}^{s})$; $\alpha$ is the margin. \subsection{Signer Removal Module (SRM)} \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/srm.pdf} \caption{The workflow of our signer removal module (SRM). We insert the SRM after the $m$-th CNN layer, $C_m$. The loss of $\text{C}^2$SLR, $\mathcal{L}_b$, which is a sum of the CTC, SAC, and SEC losses, is used to train the backbone parameters $\theta_b$. The signer classification loss $\mathcal{L}_{srm}$ is used to train the SRM parameters $\theta_s$ as usual, while the gradient coming from $\mathcal{L}_{srm}$ will be reversed for $\theta_b$. $\lambda$ is the loss weight for $\mathcal{L}_{srm}$.} \label{fig:srm} \end{figure} To remove signer information from CSLR backbones, we further develop a signer removal module (SRM) based on statistics pooling and gradient reversal as shown in Figure \ref{fig:srm}.
\subsubsection{Signer Embeddings} We first extract signer embeddings to ``distill'' signer information before dispelling it. A na\"ive method is simply feeding the frame-level features into an MLP and treating the outputs of the MLP as signer embeddings. In this work, motivated by the superior performance of x-vectors \cite{snyder2018x} in speaker recognition, we leverage statistics pooling to obtain more robust sentence-level signer embeddings. Specifically, we first feed the intermediate visual features $\mathbf{F} \in \mathbb{R}^{T\times J \times K \times C}$ into a global average pooling layer to squeeze the spatial dimension and obtain frame-level features\footnote{Here we reuse the notation $\mathbf{F}$ from Eq. \ref{eq:1}.} $\mathbf{F}_s \in \mathbb{R}^{T\times C}$. Then a statistics pooling (SP) layer is used to aggregate frame-level information: \begin{equation} \mathbf{F}_s^{SP} = cat(\mathbf{F}_s^{mean}, \mathbf{F}_s^{std}) \in \mathbb{R}^{2C}, \end{equation} where $\mathbf{F}_s^{mean} \in \mathbb{R}^C$ and $\mathbf{F}_s^{std} \in \mathbb{R}^C$ are the temporal mean and standard deviation of $\mathbf{F}_s$, respectively. In this way, $\mathbf{F}_s^{SP}$ is capable of capturing signer characteristics over the entire video instead of at the frame level. After that, a simple two-layer MLP with rectified linear unit (ReLU) activations is used to project the statistics into the signer embedding space: \begin{equation} \mathbf{E}_{sig} = ReLU(\mathbf{W}_2 ReLU(\mathbf{W}_1 \mathbf{F}_s^{SP}+\mathbf{b}_1)+\mathbf{b}_2) \in \mathbb{R}^{C}, \end{equation} where $\mathbf{W}_1 \in \mathbb{R}^{C\times 2C}, \mathbf{b}_1 \in \mathbb{R}^C, \mathbf{W}_2 \in \mathbb{R}^{C\times C}, \mathbf{b}_2 \in \mathbb{R}^C$ represent the two-layer MLP. Finally, the signer embeddings $\mathbf{E}_{sig}$ are fed into a classifier to yield signer probabilities $\mathbf{p}_{sig} \in (0,1)^{N_{sig}}$, where $N_{sig}$ denotes the number of signers.
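The signer-embedding pipeline above (GAP over space, statistics pooling over time, then a two-layer ReLU MLP) can be sketched as follows; the code and names are ours, for illustration only:

```python
import numpy as np

def signer_embedding(F, W1, b1, W2, b2):
    """F: T x J x K x C intermediate visual features.
    W1: C x 2C, b1: C, W2: C x C, b2: C (the two-layer MLP)."""
    F_s = F.mean(axis=(1, 2))                    # GAP -> T x C frame features
    stats = np.concatenate([F_s.mean(axis=0),    # temporal mean  (C)
                            F_s.std(axis=0)])    # temporal std   (C) -> 2C
    h = np.maximum(W1 @ stats + b1, 0.0)         # first ReLU layer
    return np.maximum(W2 @ h + b2, 0.0)          # signer embedding E_sig (C)
```

Because the temporal mean and standard deviation summarize the whole video, the resulting embedding is a sentence-level rather than frame-level descriptor of the signer.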
The SRM is trained with the signer classification loss, which is simply a cross-entropy loss: \begin{equation} \mathcal{L}_{srm} = -\log p_{sig}^i, \end{equation} where $i$ is the label of the signer. \subsubsection{Gradient Reversal} If the CSLR backbone is jointly trained with $\mathcal{L}_{srm}$, this becomes multi-task learning, which, however, cannot guarantee that the signer information is removed from the backbone. In this work, we treat each signer as a domain and formulate SI-CSLR as a domain generalization problem in which no test signers are seen during training. The gradient reversal layer was proposed in \cite{ganin2016domain} to address the domain generalization problem by learning features that are discriminative for the main classification task while invariant to the domain gap. More specifically, according to \cite{ganin2016domain}, denoting the parameters of the feature extractor, label predictor, and domain classifier as $\theta_f$, $\theta_y$, and $\theta_d$, respectively, the optimization of these parameters can be formulated as: \begin{equation} \label{equ:grad_rev} \begin{split} \theta_f &\leftarrow \text{optimizer}(\theta_f, \nabla_{\theta_f}\mathcal{L}_y, -\lambda \nabla_{\theta_f}\mathcal{L}_d, \eta), \\ \theta_y &\leftarrow \text{optimizer}(\theta_y, \nabla_{\theta_y}\mathcal{L}_y, \eta), \\ \theta_d &\leftarrow \text{optimizer}(\theta_d, \lambda \nabla_{\theta_d}\mathcal{L}_d, \eta), \end{split} \end{equation} where $\mathcal{L}_y$ and $\mathcal{L}_d$ are the main classification and domain classification losses, respectively, $\lambda$ is the loss weight for $\mathcal{L}_d$, and $\eta$ is the learning rate. We adapt Eq. \ref{equ:grad_rev} by instantiating $\mathcal{L}_y$ and $\mathcal{L}_d$ as the backbone training loss $\mathcal{L}_b$ and signer classification loss $\mathcal{L}_{srm}$, which are illustrated in Figure \ref{fig:srm}, respectively.
We also merge $\theta_f$ and $\theta_y$ into $\theta_b$ to denote the parameters of the backbone, and use $\theta_s$ to represent the parameters of the SRM. The new optimization process can be formulated as: \begin{equation} \begin{split} \theta_b &\leftarrow \text{optimizer}(\theta_b, \nabla_{\theta_b}\mathcal{L}_b, -\lambda \nabla_{\theta_b}\mathcal{L}_{srm}, \eta), \\ \theta_s &\leftarrow \text{optimizer}(\theta_s, \lambda \nabla_{\theta_s}\mathcal{L}_{srm}, \eta). \end{split} \end{equation} As a result, the SRM itself is trained with $\mathcal{L}_{srm}$ as usual, but the backbone is trained ``reversely'' so that the extracted features cannot discriminate signers, and the signer information is implicitly removed. We empirically validate the effectiveness of the SRM on two challenging SI-CSLR benchmarks, establishing a strong baseline for future work on SI-CSLR. \subsection{Alignment Module and Loss Function} We follow recent works \cite{self-mutual, vac, stmc, cma} and adopt a CTC-based alignment module. It yields a label for each frame, which may be a repeated label or a special blank symbol. CTC assumes that the model outputs at different time steps are conditionally independent given the input. Given an input sequence $\mathbf{x}$, the conditional probability of a label sequence $\boldsymbol{\phi}=\{\phi_i\}_{i=1}^T$, where $\phi_i \in \mathcal{V}\cup\{blank\}$ and $\mathcal{V}$ is the vocabulary of glosses, can be estimated by: \begin{equation} p(\boldsymbol{\phi}|\mathbf{x}) = \prod_{i=1}^T p(\phi_i|\mathbf{x}), \label{equ:ctc} \end{equation} where $p(\phi_i|\mathbf{x})$ denotes the frame-level gloss probabilities generated by a classifier.
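The alignment probability in Eq. \ref{equ:ctc} can be illustrated with a brute-force sketch: for a tiny example, enumerate all frame-level alignments, keep those that collapse to a given gloss sequence (merging repeats, then removing blanks), and sum their per-frame products. Practical CTC implementations use a dynamic-programming forward algorithm; this exponential-time version is for illustration only.

```python
from itertools import product

def collapse(phi, blank=0):
    """Merge consecutive repeats, then remove blanks."""
    out, prev = [], None
    for s in phi:
        if s != prev:
            out.append(s)
        prev = s
    return [s for s in out if s != blank]

def ctc_prob(probs, label, blank=0):
    """p(y|x) by brute force: sum of per-alignment products (Eq. ctc)
    over all alignments phi that collapse to `label`.

    probs[t][v] is the frame-level probability of symbol v at time t
    (blank included).  Exponential in T -- illustration only.
    """
    T, V = len(probs), len(probs[0])
    total = 0.0
    for phi in product(range(V), repeat=T):
        if collapse(phi, blank) == list(label):
            p = 1.0
            for t, s in enumerate(phi):
                p *= probs[t][s]
            total += p
    return total
```

For example, with $T=2$ and a vocabulary of one gloss plus blank, the alignments $(blank,1)$, $(1,blank)$, and $(1,1)$ all collapse to the single-gloss label, and their probabilities are summed.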
The final probability of the gloss label sequence is the summation over all feasible alignments: \begin{equation} p(\mathbf{y}|\mathbf{x}) = \sum_{\boldsymbol{\phi}\in\mathcal{G}^{-1}(\mathbf{y})} p(\boldsymbol{\phi}|\mathbf{x}), \end{equation} where $\mathcal{G}$ is a mapping function that removes repeats and blank symbols in $\boldsymbol{\phi}$, and $\mathcal{G}^{-1}$ is its inverse mapping. Then the CTC loss is defined as: \begin{equation} \mathcal{L}_{ctc} = -\log p(\mathbf{y}|\mathbf{x}). \end{equation} Finally, the overall loss function is a combination of the CTC, SAC, SEC, and signer classification losses: \begin{equation} \label{equ:overall_loss} \mathcal{L}=\underbrace{\mathcal{L}_{ctc}+\mathcal{L}_{sac}+\mathcal{L}_{sec}}_{\mathcal{L}_b}+\lambda\mathcal{L}_{srm}, \end{equation} where $\lambda=0$ for signer-dependent datasets, and $\lambda>0$ for signer-independent ones. \subsection{A Strong Sequential Module: Local Transformer} \label{sec:lt} \begin{figure}[t] \begin{subfigure}[t]{.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/lctr.pdf} \caption{Local Transformer (LT). We omit LayerNorm \cite{layernorm} for simplicity.} \label{fig:lctr} \end{subfigure} \hfill \begin{subfigure}[t]{.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/lcsa.pdf} \caption{Local Self-attention (LSA).} \label{fig:lcsa} \end{subfigure} \caption{We propose a strong sequential module, the local transformer. It is based on QANet \cite{qanet}, which validates the effectiveness of combining TCNs with self-attention. The difference is that we further leverage a Gaussian bias \cite{gau-1, gau-2} to introduce local contexts into the self-attention module, \emph{i.e.}, local self-attention. ($L$: number of LT layers, which is set to 2 by default; $RPE$: relative positional encoding \cite{rpe}; $D$: window size of the Gaussian bias.)} \end{figure} The sequential module is an important component of the CSLR backbone.
Most existing CSLR works adopt globally-guided architectures, \emph{e.g.}, BiLSTM \cite{iopt, cma} and the vanilla Transformer \cite{sfl, slt}, for sequence modeling due to their strong capability of capturing long-term temporal dependencies. However, within a sign video, each gloss is short, consisting of only a few frames. This can explain why a locally-guided architecture, such as TCNs, can also achieve excellent performance \cite{fcn}. In this subsection, we elaborate on a mixed architecture, the Local Transformer (LT), which leverages both global and local contexts for sequence modeling in CSLR. Figure \ref{fig:lctr} shows the architecture of the LT. Each LT layer consists of a depth-wise TCN layer, a local self-attention (LSA) layer, and a feed-forward network. Since the depth-wise TCN layer and the feed-forward network are the same as those used in \cite{qanet, transformer}, below we only give the formulation of the LSA. As shown in Figure \ref{fig:lcsa}, three linear layers first project the input feature sequence $\mathbf{A} \in \mathbb{R}^{T\times d}$ into queries $\mathbf{Q}\in \mathbb{R}^{T\times d}$, keys $\mathbf{K}\in \mathbb{R}^{T\times d}$, and values $\mathbf{V} \in \mathbb{R}^{T\times d}$, respectively. We then split $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ into $\{\mathbf{Q}^h\}_{h=1}^{N_h}, \{\mathbf{K}^h\}_{h=1}^{N_h}, \{\mathbf{V}^h\}_{h=1}^{N_h}$, respectively, for multi-head self-attention as in \cite{transformer}, where $\mathbf{Q}^h, \mathbf{K}^h, \mathbf{V}^h \in \mathbb{R}^{T\times d/{N_h}}$ and $N_h$ is the number of heads. The attention scores for each head can be obtained by the scaled dot-product attention as follows: \begin{equation} \mathbf{ATT} = \left\{\frac{(\mathbf{Q}^h)(\mathbf{K}^{h})^T}{\sqrt{d/N_h}}\right\}_{h=1}^{N_h} \in \mathbb{R}^{N_h\times T\times T}. \end{equation} The vanilla self-attention treats each position equally.
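A NumPy sketch of the multi-head attention above, already including the Gaussian bias $GB_{ij}=-(j-i)^2/(2\sigma^2)$ with $\sigma=D/2$ that the LSA adds to localize it (relative positional encodings and the surrounding TCN/FFN layers are omitted; all weights are random placeholders):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def local_self_attention(A, Wq, Wk, Wv, Wo, n_heads, D):
    """Multi-head scaled dot-product attention with a shared Gaussian bias.

    A : (T, d) input sequence; D is the Gaussian window size (sigma = D/2).
    """
    T, d = A.shape
    dh = d // n_heads
    Q, K, V = A @ Wq, A @ Wk, A @ Wv
    # Gaussian bias, shared across all heads: penalizes distant QK pairs.
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    GB = -((j - i) ** 2) / (2.0 * (D / 2.0) ** 2)
    outs = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        att = (Q[:, s] @ K[:, s].T) / np.sqrt(dh)   # (T, T) scores
        outs.append(softmax(att + GB) @ V[:, s])    # biased attention
    return np.concatenate(outs, axis=1) @ Wo        # (T, d)
```

Since the bias decays quadratically with the query-key distance, the softmax weights of far-apart positions are suppressed while nearby positions dominate, which is the intended local behavior.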
To emphasize local contexts, we adopt a Gaussian bias \cite{gau-1, gau-2} to weaken the interactions between distant query-key (QK) pairs. Given a QK pair ($\mathbf{q}_i^h, \mathbf{k}_j^h$), the Gaussian bias (GB) is defined as: \begin{equation} GB_{ij}^h = -\frac{(j-i)^2}{2\sigma^2}, \label{bias} \end{equation} where $\sigma=\frac{D}{2}$, and $D$ is the window size of the Gaussian bias \cite{gau-1}. Note that although we could assign a Gaussian bias with a different value of $D$ to each head, we find that a common Gaussian bias shared among all heads suffices to significantly boost the performance of the Transformer. The final attention weights for each value vector are obtained from a softmax layer, and the output of the LSA is: \begin{equation} \begin{cases} \quad \mathbf{O}^h = softmax(\mathbf{ATT}^h+\mathbf{GB}^h)\mathbf{V}^h \\ \quad \mathbf{O}^{LSA} = cat(\{\mathbf{O}^h\}_{h=1}^{N_h})\mathbf{W}^O \in \mathbb{R}^{T\times d} \ , \end{cases} \end{equation} where $\mathbf{W}^O \in \mathbb{R}^{d\times d}$ denotes the output linear layer. We intuitively set $D$ as the average ratio of frame length to gloss length: $D=\frac{1}{|tr|}\sum_{i=1}^{|tr|}\frac{T_i}{N_i}$, where $|tr|$ is the number of training samples, based on the idea that a good window size should reflect the average number of frames per gloss. More specifically, $D=6.3,15.8,5.0$ for the PHOENIX datasets, CSL, and CSL-Daily, respectively. \section{Experiments} \subsection{Datasets and Evaluation Metric} \subsubsection{Datasets} We evaluate our method on three signer-dependent datasets (PHOENIX-2014, PHOENIX-2014-T, and CSL-Daily) and two signer-independent datasets (PHOENIX-2014-SI and CSL). \textbf{PHOENIX-2014} \cite{2014} is a German CSLR dataset with a vocabulary size of 1081. There are 5672, 540, and 629 videos performed by 9 signers in the training, development (dev), and test set, respectively. \textbf{PHOENIX-2014-T} \cite{2014T} is an extension of PHOENIX-2014 with a vocabulary size of 1085.
There are 7096, 519, and 642 videos performed by 9 signers in the training, dev, and test set, respectively. \textbf{CSL-Daily} \cite{zhou2021improving} is the latest large-scale Chinese sign language dataset, consisting of 18401, 1077, and 1176 videos performed by 10 signers in the training, dev, and test set, respectively. Its vocabulary size is 2000. \textbf{PHOENIX-2014-SI} \cite{2014} is the signer-independent version of PHOENIX-2014, consisting of 4376, 111, and 180 videos in the training, dev, and test set, respectively. Eight signers are used for training, and the remaining one is held out for validation and testing. \textbf{CSL} \cite{iopt, csl-2, csl-3} is a Chinese CSLR dataset consisting of 4000 and 1000 videos in the training and test set, respectively, with a vocabulary size of 178. We follow \cite{fcn, vac} to conduct experiments on its signer-independent split, in which 40 signers only appear in training while the remaining 10 signers only appear in testing. Compared to some widely-adopted datasets in action recognition, \emph{e.g.}, Kinetics-600 \cite{k600} with $\sim$500K videos and Something-Something v2 \cite{sthsthv2} with $\sim$169K videos, the sizes of these sign language datasets are quite small. This can also explain why some specific training strategies, \emph{e.g.}, stage optimization and auxiliary training, were previously considered necessary for CSLR. \subsubsection{Evaluation Metric} We use word error rate (WER) to measure the dissimilarity between two sequences: \begin{equation} \text{WER} = \frac{\#\text{deletions} + \#\text{substitutions} + \#\text{insertions}}{\#\text{glosses in label}}. \end{equation} The official evaluation scripts provided by each dataset are used for measuring the WER. \subsection{Implementation Details} \subsubsection{Data Augmentation} We first resize RGB frames to $256\times 256$ and then crop them to $224\times 224$.
Stochastic frame dropping (SFD) \cite{sfl} with a dropping ratio of 50\% is adopted for the PHOENIX datasets. Since videos in CSL and CSL-Daily are much longer, we adopt a \textit{seg-and-drop} strategy that first segments each video into short clips consisting of only two frames, and then randomly drops one frame from each clip. In this way, the processed videos consist of only half of the original frames while retaining most of the information. After that, we further randomly drop 40\% of the frames from these processed videos using SFD. \subsubsection{Backbones and Hyper-parameters} We choose the following three representative backbones to validate the effectiveness of our method. \begin{itemize} \item VGG11+TCN+BiLSTM (VTB). It is widely adopted in some recent works \cite{stmc, vac}. VGG11 \cite{vggnet} is used as the visual module, and the sequential module is composed of a TCN and a BiLSTM to capture both local and global contexts. \item CNN+TCN (CT). This lightweight backbone only consists of a 9-layer 2D-CNN and a 3-layer TCN, as proposed in \cite{fcn}. \item VGG11+Local Transformer (VLT). The sequential module is a 2-layer local transformer encoder described in Section \ref{sec:lt}. \end{itemize} We set the number of output channels of the TCN layers in CT and VTB to 512 and the number of hidden units of the BiLSTM in VTB to $2\times 256$ to match the channel dimensions of the visual and sequential features. These modifications lead to WERs comparable to those reported in the original papers \cite{fcn, stmc}. We insert the spatial attention module after the 5th CNN layer as a trade-off between heatmap resolution and GPU memory limitations. In terms of post-processing, we set $\gamma_x=\gamma_y=14$ according to the experimental results in Section \ref{sec:gamma}. The kernel size of the depth-wise TCN layer in both our SEE and the VLT backbone is set to 5, which is the same as in \cite{qanet}. The margin $\alpha$ in Eq.
\ref{equ:sec} is set to 2, which is the maximum possible difference between the negative and positive distances under the cosine distance function. Regarding the signer removal module, we also put it after the 5th CNN layer, and the weight for $\mathcal{L}_{srm}$, $\lambda$, is set to 0.75 by default. \subsubsection{Training} Following recent works \cite{self-mutual, vac, stmc}, all models are trained with a batch size of 2. We adopt an Adam optimizer \cite{adam} with an initial learning rate of $1\times 10^{-4}$ and a weight decay factor of $1\times 10^{-4}$. We empirically find that $\mathcal{L}_{sec}$ decreases much faster than $\mathcal{L}_{ctc}$; thus we multiply the learning rate of the SEE by a factor of 0.1/0.01/0.1 for the three backbones, respectively, to match the training pace of the backbone and the SEE. As in \cite{slt}, we adopt a plateau scheduler for the learning rate: if the development WER does not decrease for 6 evaluation steps, the learning rate is decreased by a factor of 0.7. But since CSL does not have an official dev split, we decrease the learning rate after the 15th and 25th epochs and every 5 epochs after the 30th epoch. The maximum number of training epochs is set to 60. \subsubsection{Inference and Decoding} Following \cite{sfl}, to match the training condition, we evenly select every $\frac{1}{p_d}$-th frame to drop during inference, where $p_d$ is the dropping ratio. We adopt the beam search algorithm with a beam size of 10 for decoding. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/vis_sac.pdf} \caption{Visualization results for learned spatial attention masks with or without the guidance of $\mathcal{L}_{sac}$. We randomly select five samples ($s_1, \dots, s_5$) from the \textbf{test} set, and for each sample, we select one clear frame and one blurry frame.
It is clear that the guidance of $\mathcal{L}_{sac}$ can help the spatial attention module capture the informative regions (face and hands) more accurately.} \label{fig:vis_sac} \end{figure*} \subsection{Ablation Studies for $\text{C}^2$SLR} We first conduct ablation studies for $\text{C}^2$SLR on PHOENIX-2014, following previous works \cite{vac, stmc, cma, self-mutual}. \input{tables/sacsec} \subsubsection{Effectiveness of SAC and SEC} As shown in Table \ref{tab:sacsec}, both SAC and SEC generalize well to different backbones: the performance of all three backbones can be clearly improved. However, if the spatial attention module is inserted into the backbones without any guidance, \emph{i.e.}, $\text{SAC}^-$, the model performance is only slightly improved, which verifies the effectiveness of $\mathcal{L}_{sac}$. The effectiveness of SEC suggests that explicitly enforcing the consistency between the visual and sequential modules at the sentence level can strengthen the cross-module cooperation, which leads to the performance gain. The improvements due to SAC and SEC are complementary: using both of them obtains better results than using either one alone. Besides, since VLT performs the best among the three backbones, we use it as the default backbone for the following experiments. \subsubsection{Visualization Results for SAC} Figure \ref{fig:vis_sac} shows the visualization results of the learned spatial attention masks of SAC (with $\mathcal{L}_{sac}$) and $\text{SAC}^-$ (without $\mathcal{L}_{sac}$) for five test samples. It should be noted that since SAC is deactivated during testing, our comparison is fair. First, it is quite clear that the learned attention masks with the guidance of $\mathcal{L}_{sac}$ look much better.
Without the guidance of $\mathcal{L}_{sac}$, the attention masks are quite messy, with horizontal lines at the top and many highlights on trivial regions, \emph{e.g.}, the left shoulder of $s_2$, the hair of $s_1$ and $s_4$, and the waist of $s_3$ and $s_5$. This explains why $\text{SAC}^-$ can only slightly improve the performance of the backbones, as shown in Table \ref{tab:sacsec}. Second, our SAC is so robust that the IRs (face and hands) in blurry frames (right columns of $s_1$ to $s_5$) can still be captured precisely. Third, it is capable of dealing with different hand positions, \emph{e.g.}, both hands are lower than the face ($s_1, s_3$); one hand is near the face while the other one is not ($s_1, s_2, s_4$); and the hands overlap ($s_5$). \subsubsection{Channel Weights} Within our spatial attention module, each channel receives a weight to better measure its importance before the feature maps are squeezed. Removing the channel weights degenerates to the channel-wise average pooling in CBAM \cite{woo2018cbam} and achieves a WER of 21.3\%, a performance drop of 0.5\%, as shown in Table \ref{tab:sac}. Although our channel weights share a similar idea with the channel attention module of CBAM, which builds extra linear layers to generate the attention weights, no extra parameters are introduced in our spatial attention module. To further validate their effectiveness, after removing the channel weights, we conduct one more experiment by adding the channel attention module back as in CBAM; however, it only leads to a slight performance gain and cannot beat ours even with extra parameters. \input{tables/sac} \subsubsection{Heatmap Refinement} We discuss in Section \ref{sec:post-process} that the raw heatmaps of HRNet \cite{sun2019deep} contain too many defects, which may hinder the learning of the spatial attention module.
As shown in Table \ref{tab:sac}, the quality of the keypoint heatmaps can make a difference in model performance: directly using the original heatmaps without post-processing achieves a WER of 21.7\%, which reduces the performance of SAC by almost 1\%. \subsubsection{Effect of Each Informative Region} As shown in the last two rows of Table \ref{tab:sac}, removing either the face or the hands region harms the performance of SAC. The results validate that both the face and the hands of signers play a key role in conveying information, as also mentioned in \cite{stmc, koller2020quantitative}. \begin{figure}[!t] \centering \includegraphics[width=0.7\linewidth]{figures/gamma.pdf} \caption{Visualization results and performance comparison for different $\gamma_x, \gamma_y$ in Eq. \ref{equ:post}. Since in practice the height and width of the spatial attention masks are usually the same, we set $\gamma_x$ and $\gamma_y$ to the same value.} \label{fig:gamma} \end{figure} \subsubsection{Effect of the Hyper-parameters $\gamma_x, \gamma_y$ of Eq. \ref{equ:post}} \label{sec:gamma} We consider $\gamma_x$ and $\gamma_y$ to be two important hyper-parameters since they control the scale of the highlighted regions in the keypoint heatmaps. Thus, we conduct experiments to compare the performance for different $\gamma_x, \gamma_y$, as shown in Figure \ref{fig:gamma}. The model performance is worse when they are either too large (the highlights cannot cover the informative regions entirely) or too small (they cover too many trivial regions). When $\gamma_x=\gamma_y=14$, the model achieves the best performance. \input{tables/sec} \subsubsection{Sentence Embedding Extractor and Negative Sampling} Our sentence embedding extractor consists of a depth-wise TCN layer and a transformer encoder, aiming to model local and global contexts, respectively. As shown in Table \ref{tab:sec_ext}, local contexts are important for sentence embedding extraction, as dropping the TCN layer leads to worse performance.
We also compare our method with the common practice that concatenates the last two hidden states of a BiLSTM and treats the result as the sentence embedding. The fact that it underperforms the transformer-based extractors implies the strength of the self-attention mechanism for sentence embedding extraction. Table \ref{tab:sec_ext} also shows that negative sampling plays a key role in our SEC: training without negative sampling, that is, directly minimizing the sentence embedding distance between the visual and sequential features, is not effective. \subsubsection{Constraint Level} \label{sec:cons_lev} As shown in Table \ref{tab:sec_lev}, we implement some frame-level constraints to validate the effectiveness of our SEC. First, we replace the sentence embeddings, $\mathbf{v}_{se}$ and $\mathbf{s}_{se}$ in Eq. \ref{equ:sec}, by their corresponding frame-level features to minimize the positive distances while maximizing the negative distances at the frame level. However, this leads to a performance degradation of 0.7\% compared to our SEC. We further compare our SEC with VAC \cite{vac}, which is composed of two frame-level constraints: visual enhancement (VE) and visual alignment (VA). First, an extra classifier is appended to the visual module to yield frame-level probability distributions (the visual distribution). VE is implemented as a CTC loss computed between the visual distribution and the gloss label, which is the same as the one used for training the backbone. Second, VA is simply a KL-divergence loss, which aims to minimize the distance between the visual distribution and the original probability distribution ($p(\phi_i|\mathbf{x})$ in Eq. \ref{equ:ctc}). Table \ref{tab:sec_lev} shows that both VE and VA perform much worse than our SEC. The results suggest that our SEC is a more appropriate way to measure the consistency between the visual and sequential modules.
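Eq. \ref{equ:sec} itself is defined earlier in the paper; the sketch below is only a plausible reconstruction consistent with this section's description: a triplet-style loss with cosine distance and margin $\alpha=2$ (the cosine distance lies in $[0,2]$, hence the margin), where negatives are sentence embeddings from a different video. The exact pairing scheme is an assumption.

```python
import numpy as np

def cos_dist(a, b):
    """Cosine distance, bounded in [0, 2] -- the source of alpha = 2."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def sec_loss(v_se, s_se, v_neg, s_neg, alpha=2.0):
    """Triplet-style sketch of a sentence embedding consistency loss.

    Pulls the visual (v_se) and sequential (s_se) sentence embeddings of
    the SAME video together, while pushing each away from the embeddings
    (v_neg, s_neg) of a DIFFERENT video.
    """
    pos = cos_dist(v_se, s_se)                               # same video
    neg = 0.5 * (cos_dist(v_se, s_neg) + cos_dist(v_neg, s_se))
    return max(0.0, pos - neg + alpha)
```

Under this formulation the loss is zero only when positive pairs coincide (distance 0) and negative pairs are maximally far apart (distance 2), which matches the behavior reported for the video-gloss pair examples (diagonal distances near 0, off-diagonal distances up to 2.00).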
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/vgpair.pdf} \caption{Two examples of video-gloss pairs.} \label{fig:vgpair} \end{figure} \input{tables/vgpair_and_lambda} \subsubsection{Examples of Video-gloss Pairs} To verify whether $\mathcal{L}_{sec}$ can really separate positive and negative samples, we provide two examples of video-gloss pairs, denoted as $(v_1, l_1)$ and $(v_2, l_2)$, as shown in Figure \ref{fig:vgpair}. The sentence embedding distances between the visual and sequential features of $v_1$ and $v_2$ are shown in Table \ref{tab:vgpair}. It is clear that the distance between the two features of the same video (diagonal entries, positive pairs) is very small, whereas for different videos (off-diagonal entries, negative pairs) the distance is very large (the maximum value is 2.00). \subsection{Ablation Studies for the Signer Removal Module} We further conduct ablation studies for our signer removal module (SRM) on the challenging signer-independent dataset, PHOENIX-2014-SI. \subsubsection{Effect of the Hyper-parameter $\lambda$ of Eq. \ref{equ:overall_loss}} \label{sec:lambda} According to \cite{liu2018exploring}, the weight for the domain classification loss, \emph{i.e.}, our signer classification loss $\mathcal{L}_{srm}$, is an important hyper-parameter. We tune it from 0 to 1.5 with a step of 0.25, as shown in Table \ref{tab:lambda}. When $\lambda=0$, the model degenerates to $\text{C}^2$SLR and performs worse than the models with $\lambda>0$. The results suggest the importance of removing signer information for SI-CSLR. When $\lambda=0.75$, the model achieves the best performance, with a WER of 33.1\%/32.7\% on the dev and test set, respectively. \input{tables/srm} \subsubsection{Statistics Pooling and Gradient Reversal} We further conduct ablation studies for the two major components of our SRM: the statistics pooling (SP) and gradient reversal (GR) layers.
First, the use of the GR layer determines the type of learning method: feature disentanglement or multi-task learning. As shown in Table \ref{tab:srm}, it is clear that with the use of GR, models under the feature disentanglement setting significantly outperform those under the multi-task learning setting. The result implies that removing signer information is effective for SI-CSLR. However, we find that the model $\text{C}^2$SLR+SP can also outperform the baseline under the multi-task setting. We attribute this to the fact that multi-task learning can be seen as a kind of regularization \cite{zhang2021survey}, which endows the networks shared between the CSLR and signer classification branches with better generalization capability. Similar ideas also appear in works that jointly train a speech recognition model and a speaker recognition model \cite{liu2018speaker, pironkov2016speaker}. Finally, the effectiveness of SP validates that sentence-level signer embeddings are more robust than frame-level ones for signer classification, leading to better performance. \input{tables/srm_gap} \subsubsection{Effect of the SRM over Seen and Unseen Signers} We finally study the effect of the SRM over seen and unseen signers. We first build an extra test set consisting of only signers seen during training by removing videos performed by unseen signers from the official test set of PHOENIX-2014, and then retest ``$\text{C}^2$SLR'' and ``$\text{C}^2$SLR+SRM'' on this extra test set. As shown in Table \ref{tab:srm_gap}, with comparable performance on the seen signers, adding the SRM significantly narrows the performance gap between unseen and seen signers. The results suggest that our SRM can be especially helpful in the real-world situation where most test signers are unseen.
\subsection{Comparison with State-of-the-art Results} \input{tables/SD} \subsubsection{Signer-dependent} As shown in Table \ref{tab:SD}, we first evaluate our $\text{C}^2$SLR on three signer-dependent benchmarks: PHOENIX-2014, PHOENIX-2014-T, and CSL-Daily. Our $\text{C}^2$SLR follows the idea of auxiliary learning, which also appears in some existing works, \emph{e.g.}, FCN \cite{fcn} and VAC \cite{vac}. FCN proposes a gloss feature enhancement (GFE) module to introduce auxiliary supervision signals into the model training process. However, the GFE module relies heavily on pseudo labels (CTC decoded results), which may contain many errors. Our method relies only on pre-extracted heatmaps, which are quite accurate with the help of our post-processing algorithm, and the model's inherent consistency: the visual and sequential features represent the same sentence. These two properties enable our method to outperform FCN by more than 3\% on both PHOENIX-2014 and PHOENIX-2014-T. Recently, VAC proposed two auxiliary losses at the frame level, which are less appropriate and perform worse than ours according to the comparison in Section \ref{sec:cons_lev}. The SOTA work, STMC \cite{stmc}, adopts the complicated stage optimization strategy, which introduces extra hyper-parameters and requires manually deciding when to switch to a new stage. Our method is totally end-to-end trainable, and it can outperform STMC on both PHOENIX-2014 and PHOENIX-2014-T. To the best of our knowledge, this is the first time that an end-to-end method outperforms those using the stage optimization strategy. In terms of modality usage, our method uses the extra pose modality only during training, while only RGB videos are needed for inference. Thus, it is simpler for real applications compared to DNF \cite{dnf}, which is built on a two-stream architecture taking both RGB videos and optical flow as inputs.
Finally, the results reported on CSL-Daily may be more important due to its large vocabulary size. Our method can still achieve SOTA performance on this large-scale dataset, which also validates the generalization capability of our method over different sign languages. \input{tables/SI} \subsubsection{Signer-independent} As shown in Table \ref{tab:SI}, we further evaluate our SRM on the following two signer-independent benchmarks: PHOENIX-2014-SI and CSL. Although some works, \emph{e.g.}, DNF \cite{dnf} and CMA \cite{cma}, evaluate their methods on PHOENIX-2014-SI, none of them proposes a dedicated module to deal with the challenging SI setting. In this work, we develop a simple yet effective signer removal module (SRM) for SI-CSLR to make the model more robust to signer discrepancy. As shown in Table \ref{tab:2014SI}, our $\text{C}^2$SLR can already achieve competitive performance on PHOENIX-2014-SI, and the SRM can further improve the performance significantly. The result validates that feature disentanglement is an effective method to remove signer-relevant information, and we believe our SRM can serve as a strong baseline for future work on SI-CSLR. As shown in Table \ref{tab:csl}, our SRM leads to a relative performance gain of 24.4\% over the baseline $\text{C}^2$SLR on CSL\footnote{Although the SI setting itself is challenging, since the sentences in the CSL test set all appear in the training stage, the WER can be very low ($<1$\%).}. It is worth noting that the SOTA work, MSeqGraph \cite{tang2021graph}, uses three modalities, including RGB, pose, and depth. In contrast, our method only uses RGB and pose information for training, and only RGB frames are needed for inference. Thus, with performance comparable to the SOTA work, we believe our method is more applicable in real practice. \section{Conclusion} In this work, we enhance CSLR backbones by developing three auxiliary tasks.
First, we insert a keypoint-guided spatial attention module into the visual module to make it focus on informative regions. Second, we impose a sentence embedding consistency constraint between the visual and sequential features to enhance the representation power of both. We conduct thorough ablation studies to validate the effectiveness of the two consistency constraints both quantitatively and qualitatively. Finally, on top of the consistency-enhanced CSLR backbone, a signer removal module based on feature disentanglement is proposed for signer-independent CSLR. More remarkably, our model can achieve SOTA or competitive performance on five benchmarks, while the whole model is trained in an end-to-end manner.
\section{Introduction} Compressive sensing (CS) has attracted considerable recent attention in the field of signal and image processing. One of the key mathematical issues addressed in CS is how a sparse signal can be reconstructed by a decoding algorithm. An extreme case of CS can be cast as the problem of seeking the sparsest solution of an underdetermined linear system, i.e., $$\min\{\|x\|_0:~ \Phi x=b\},$$ where $\|x\|_0$ counts the number of nonzero components of $x$, $\Phi\in R^{m\times n}$ ($m<n$) is called a sensing matrix, and $b\in R^m$ is the vector of nonadaptive measurements. It is known that the reconstruction of a sparse signal from a reduced number of acquired measurements is possible when the sensing matrix $\Phi$ admits certain properties (see, e.g., \cite{DE2003,T2004,ET2005, ERT2006, ERTR2006, D2006, CDD2009, YZ2008,YBZ2013, FR2013}). Note that measurements must be quantized. Fine quantization provides more information on a signal, making the signal more likely to be exactly recovered. However, fine quantization imposes a huge burden on measurement systems, leading to slower sampling rates and increased costs for hardware systems (see, e.g., \cite{W99, LRRB05, SG09, B2010}). Moreover, quantization inevitably introduces error into measurements. This motivates one to consider sparse signal recovery through lower bits of measurements. An extreme quantization uses only one bit per measurement. As demonstrated in \cite{BB2008,B2009} and \cite{B2010}, it is possible, in some situations, to reconstruct a sparse signal within certain factors from 1-bit measurements, e.g., the sign of measurements. This motivates the recent development of CS with 1-bit measurements, called 1-bit compressive sensing (see, e.g., \cite{BB2008,B2009, GNR2010, L2011,LWYB2011,LB2012, PV20138}).
An ideal model for 1-bit CS is the $\ell_0$-minimization with sign constraints \begin{eqnarray}\label{1bitCS} \min\{\|x\|_0:~\textrm{sign}(\Phi x)=y\}, \end{eqnarray} where $\Phi\in R^{m\times n}$ is a sensing matrix and $y\in R^m$ is the vector of 1-bit measurements. Throughout the paper, we assume that $m<n.$ The sign function in (\ref{1bitCS}) is applied element-wise. Due to the NP-hardness of (\ref{1bitCS}), some relaxations of (\ref{1bitCS}) have been investigated in the literature. A common relaxation is replacing $\|x\|_0$ with $\|x\|_1$ and replacing the constraint of (\ref{1bitCS}) with the linear system \begin{equation}\label{YYY} Y \Phi x \geq 0, \end{equation} where $ Y=\textrm{diag} (y). $ In addition, an extra constraint, such as $\|x\|_2 =1$ or $\|\Phi x\|_1=m,$ is introduced into this relaxation model in order to exclude trivial solutions. The acquired 1-bit information alone is insufficient to exactly reconstruct a sparse signal. For instance, if $\textrm{sign} (\Phi x^*) = y$ where $y \in \{1, -1\}^m,$ then any small perturbation $x^*+ u$ also satisfies this equation, making the exact recovery of $x^*$ almost impossible by any decoding algorithm. While the sign information of measurements might not be enough to exactly reconstruct a signal, it might be adequate to recover the support or the sign of the signal. Thus 1-bit CS has still found applications in signal recovery \cite{BB2008,B2009, GNR2010, B2010, L2011}, image processing \cite{BAU2010,BU2013}, and matrix completion \cite{DPBW2014}. The 1-bit CS problem was first proposed and investigated by Boufounos and Baraniuk \cite{BB2008}. Since 2008, numerous algorithms have been developed in this direction, including greedy algorithms (see, e.g., \cite{B2009,GNR2010,KBAU2012,YYO2012, JLBB2013,GNJN2013, BBR2013}) and convex and nonconvex programming algorithms (see, e.g., \cite{BB2008,LWYB2011,MPD2012, PV20131,PV20138, SS2013, ALPV2014}).
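A small NumPy sketch makes the magnitude ambiguity concrete (the Gaussian sensing matrix, dimensions, and sparsity level are arbitrary illustrative choices): the true signal satisfies the relaxation (\ref{YYY}), but so does any positive scaling of it, since 1-bit measurements carry no magnitude information.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 60, 3                      # m < n, k-sparse signal

# k-sparse target signal x*
x_star = np.zeros(n)
x_star[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n))            # Gaussian sensing matrix
y = np.sign(Phi @ x_star)                # 1-bit measurements
Y = np.diag(y)

# The true signal satisfies the linear relaxation Y Phi x >= 0 ...
assert np.all(Y @ Phi @ x_star >= 0)

# ... but so does any positive scaling of it: this is why a
# trivial-solution excluder such as ||x||_2 = 1 must be added.
for c in (0.1, 5.0):
    assert np.all(np.sign(Phi @ (c * x_star)) == y)
```

With a continuous Gaussian matrix, the entries of $\Phi x^*$ are nonzero almost surely, so $y\in\{1,-1\}^m$ here; the case of zero measurements is exactly the situation discussed later in this introduction.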
To find a polynomial-time solver for the 1-bit CS problems, a linear programming model based on (\ref{YYY}) has been formulated, and certain stability results for reconstruction have been shown in \cite{PV20138} as well. In the classic CS setting, it is well known that when a sensing matrix admits certain properties such as mutual coherence \cite{DE2003, BED2009}, the null space property (NSP) \cite{CDD2009,YZ2008}, the restricted isometry property (RIP) \cite{ET2005} or the range space property (RSP) of $\Phi^T$ \cite{YBZ2013}, the signals with low sparsity levels can be exactly recovered by basis pursuit and other algorithms. This motivates one to investigate whether similar recovery theories can also be established for 1-bit CS problems. In \cite{JLBB2013}, the binary iterative hard thresholding (BIHT) algorithm for 1-bit CS problems is discussed and the so-called binary $\varepsilon$-stable embedding (B$\varepsilon$SE) condition is introduced. The B$\varepsilon$SE can be seen as an extension of the RIP. However, at the current stage, the theoretical analysis for the guaranteed performance of 1-bit CS algorithms is far from complete, in contrast to classic CS. Recovery conditions in terms of the properties of $\Phi$ and/or $y$ are still under development. The fundamental assumption in 1-bit CS is that any solution $x$ generated by an algorithm should be consistent with the acquired 1-bit measurements in the sense that \begin{equation} \label{consistency} \textrm{sign}(\Phi x) = y = \textrm{sign}(\Phi x^*), \end{equation} where $x^*$ is the targeted signal. Clearly, it is very difficult to directly solve a problem with such a constraint if it does not have a tractable reformulation. From a computational point of view, an ideal relaxation or reformulation of the sign constraint is a linear system.
The current algorithms and theories for 1-bit CS (e.g., \cite{BB2008, B2010,PV20138, SS2013}) have been developed largely based on the system (\ref{YYY}), which is a linear relaxation of (\ref{consistency}). In Section II of this paper, we show that the existing relaxation based on (\ref{YYY}) is not equivalent to the original 1-bit CS model. In fact, a vector satisfying (\ref{YYY}) together with a trivial-solution excluder, such as $\|x\|_2=1$ or $\|\Phi x\|_1=m,$ may not be consistent with the acquired 1-bit measurements $y. $ Some necessary conditions must be imposed on the matrix in order to ensure that the solution of a decoding algorithm based on (\ref{YYY}) is consistent with $y.$ These necessary conditions have been overlooked in the literature (see the discussion in Section II for details). Many existing discussions of 1-bit CS do not distinguish between zero and positive measurements. Both are mapped to 1 (or $-1$) by a nonstandard sign function. In Section II, we point out that it is beneficial to allow $y$ to admit zero components and to treat zero and nonzero measurements separately from both practical and mathematical points of view. Failing to distinguish between zero and nonzero measurements may introduce ambiguity when sensing vectors are nearly orthogonal to the signal. Such ambiguity might prevent one from acquiring the correct sign of the measurements due to signal noise or computational errors. This motivates us to pursue a new direction to establish a recovery theory for 1-bit CS. Our study is markedly different from existing ones in several aspects. (a) The vector of acquired sign measurements $y$ is allowed to admit zero components. When $y$ does not contain zero components, our model immediately reduces to the existing 1-bit CS model. (b) We introduce a truly equivalent reformulation of the 1-bit CS model (\ref{1bitCS}). The model (\ref{1bitCS}) is reformulated equivalently as an $\ell_0$-minimization problem with linear constraints.
Replacing $\|x\|_0$ with $\|x\|_1$ leads naturally to a new linear-program-based decoding method, referred to as the 1-bit basis pursuit. Different from existing formulations, the new reformulation ensures that the solution of the 1-bit basis pursuit is always consistent with the acquired 1-bit measurements $y.$ (c) The sign recovery theory developed in this paper is built from the perspective of the restricted range space property (RRSP) of transposed sensing matrices. In classic CS, it has been shown in \cite{YBZ2013} that any $k$-sparse signal can be exactly recovered by basis pursuit if and only if the transposed sensing matrix admits the so-called range space property (RSP) of order $k. $ This property is equivalent to the well-known NSP of order $k$ in the sense that both are necessary and sufficient conditions for the uniform recovery of $k$-sparse signals. The new reformulation of the 1-bit CS model proposed in this paper makes it possible to develop an analogous recovery guarantee for the sign of sparse signals with the 1-bit basis pursuit. This development naturally yields the concept of the restricted range space property (RRSP), which gives rise to some necessary and sufficient conditions for the nonuniform and uniform recovery of the sign of sparse signals from 1-bit measurements. The main results of the paper can be summarized as follows: \begin{itemize} \item \emph{(Theorem \ref{Main36}, nonuniform)} If the 1-bit basis pursuit can exactly recover the sign of $k$-sparse signals consistent with 1-bit measurements $y , $ then $\Phi$ must admit the N-RRSP of order $k $ with respect to $y $ (see Definition \ref{NRRSP-y}).
\item \emph{(Theorem \ref{Main38}, nonuniform)} If $\Phi$ admits the S-RRSP of order $k $ with respect to $y$ (see Definition \ref{SRRSP-y}), then from 1-bit measurements, the 1-bit basis pursuit can exactly recover the sign of $k$-sparse signals which are the sparsest vectors consistent with $y.$ \item \emph{(Theorem \ref{Uniform-1}, uniform)} If the 1-bit basis pursuit can exactly recover the sign of all $k$-sparse signals from 1-bit measurements, then $ \Phi $ must admit the so-called N-RRSP of order $k $ (see Definition \ref{N-RRSP}). \item \emph{(Theorem \ref{Main2}, uniform)} If the matrix admits the S-RRSP of order $k$ (see Definition \ref{S-RRSP}), then from 1-bit measurements, the 1-bit basis pursuit can exactly recover the sign of all $k$-sparse signals which are the sparsest vectors consistent with 1-bit measurements. \end{itemize} The above-mentioned definitions and theorems are given in Sections III and IV. Central to the proof of these results is Theorem 3.2, which provides a full characterization of the uniqueness of solutions to the 1-bit basis pursuit and thus provides a foundation for developing recovery conditions. This paper is organized as follows. We provide motivations for a new reformulation of the 1-bit CS model in Section II. Based on the reformulation, nonuniform sign recovery conditions for the 1-bit basis pursuit are developed in Section III, and uniform sign recovery conditions are developed in Section IV. The proof of Theorem 3.2 is given in Section V. We use the following notation in the paper. Let $R_+^n$ be the set of nonnegative vectors in $R^n. $ The vector $x\in R^n_+$ is also written as $x\geq 0.$ Given a set $S$, $|S|$ denotes the cardinality of $S$.
For $x \in R^n $ and $S\subseteq \{1, \dots, n\},$ let $x_{S}\in R^{|S|}$ denote the subvector of $x$ obtained by deleting those components $x_i$ with $i\notin S,$ and let $\textrm{supp}(x)=\{i:x_i\neq 0\}$ denote the support of $x.$ The $\ell_0$-norm $\|x\|_0 $ counts the number of nonzero components of $x$, and the $\ell_1$-norm of $x$ is defined as $\|x\|_1=\sum_{i=1}^{n}|x_i|$. For a matrix $\Phi \in R^{m\times n},$ we use $\Phi^T$ to denote the transpose of $\Phi,$ ${\cal N} (\Phi) =\{x: \Phi x=0\}$ the null space of $\Phi,$ ${\cal R}( \Phi^T) = \{ \Phi^T u: u\in R^m\}$ the range space of $\Phi^T,$ $\Phi_{J,n}$ the submatrix of $\Phi $ formed by deleting the rows of $\Phi$ which are not indexed by $J,$ and $\Phi_{m,J}$ the submatrix of $\Phi $ formed by deleting the columns of $\Phi$ which are not indexed by $J$. Throughout, $e$ denotes the vector of ones of suitable dimension, i.e., $ e= (1,\dots, 1)^T. $ \section{Reformulation of 1-bit compressive sensing} In this section, we point out that, for a given matrix, existing 1-bit CS algorithms based on the relaxation (\ref{YYY}) cannot guarantee that the found solution is consistent with the acquired 1-bit measurements $y$ unless the matrix satisfies certain conditions. This motivates us to propose a new reformulation of the 1-bit CS problem so that the resulting algorithm automatically ensures that its solution is consistent with the 1-bit measurements. \subsection{Consistency conditions for existing 1-bit CS methods} The standard sign function is defined as $\textrm{sign} (t) =1 $ if $ t >0,$ $\textrm{sign} (t)=-1$ if $ t <0, $ and $\textrm{sign} (t) =0$ otherwise. In the 1-bit CS literature, many researchers do not distinguish between zero and positive values of measurements and thus define $\textrm{sign} (t) =1$ for $t\geq 0$ and $\textrm{sign} (t)=-1 $ otherwise. The function $\textrm{sign}(\cdot) $ defined this way is referred to as a nonstandard sign function in this paper.
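The two sign conventions just defined can be placed side by side in a short Python sketch (illustrative only); the difference matters only at zero, but that difference is the source of the issues discussed below:

```python
# Standard vs nonstandard sign function, as used in the 1-bit CS literature.

def sign_std(t):
    """Standard sign: +1, -1, or 0."""
    return (t > 0) - (t < 0)

def sign_ns(t):
    """Nonstandard sign: maps t >= 0 to 1, so a zero measurement is
    indistinguishable from a positive one."""
    return 1 if t >= 0 else -1

print([sign_std(t) for t in (3, 0, -2)])   # [1, 0, -1]
print([sign_ns(t)  for t in (3, 0, -2)])   # [1, 1, -1]
```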
We now point out that, regardless of whether a standard or a nonstandard sign function is used, \emph{the equation $ y = \textrm{sign} (\Phi x) $ is generally not equivalent to the system (\ref{YYY}), even if a trivial-solution excluder such as $\|x\|_2=1$ or $\|\Phi x\|_1 =m $ is used, unless certain necessary assumptions are made on $\Phi.$} First, since $ y = \textrm{sign} (\Phi x) $ implies $ Y \Phi x \geq 0 $ (this fact was observed in \cite{BB2008}), the following statement is obvious: \vskip 0.05in \begin{Lem} \label{L21} If $\Phi \in R^{m\times n}$ and $y\in \{1, -1\}^m$ or $y\in \{1, 0, -1\}^m,$ then $\{x: \textrm{sign}(\Phi x)= y \} \subseteq \{x: Y\Phi x\geq 0\}.$ \end{Lem} \vskip 0.05in Without a further assumption on $\Phi,$ however, the system (\ref{YYY}) does not imply $ \textrm{sign} ( \Phi x) =y $ even if some trivial solutions of (\ref{YYY}) are excluded by adding a widely used trivial-solution excluder, such as $\|x\|_2=1$ or $\|\Phi x\|_1 =m,$ to the system. In fact, for any given $y$ with $ J_-= \{i: y_i =-1\} \not=\emptyset,$ we see that all vectors $0\not= \widetilde{x} \in {\cal N}(\Phi ) $ (or more generally, $\widetilde{x} \not =0$ satisfying $ \Phi_{J_-, n} \widetilde{x} =0 $ and $ \Phi_{J_+, n} \widetilde{x} \geq 0 $) satisfy $Y \Phi \widetilde{x} \geq 0, $ but for these vectors, $ \textrm{sign} ( \Phi \widetilde{x}) \not = y$ regardless of whether $\textrm{sign} (\cdot)$ is standard or nonstandard. The trivial-solution excluder $\|x\|_2 =1 $ (e.g., \cite{BB2008}) cannot exclude vectors satisfying $0\not= \widetilde{x} \in {\cal N}(\Phi ) $ from the set $\{x: Y \Phi x\geq 0\}.$ The excluder $\|\Phi x\|_1=m$ (e.g., \cite{PV20138,SS2013}) cannot exclude $\widetilde{x}$ satisfying $ \Phi_{J_-,n} \widetilde{x} = 0$ and $0\not=\Phi_{J_+, n} \widetilde{x} \geq 0 $ from $\{x: Y \Phi x\geq 0\}.
$ This implies that the solutions of some existing 1-bit CS algorithms such as \begin{eqnarray} & \min \{\|x\|_1: & Y \Phi x \geq 0 , ~\|x\|_2 =1\}, \label{Gc} \\ & \min\{\|x\|_1: & Y \Phi x \geq 0 , ~\| \Phi x\|_1 = m\} \label{Gd} \end{eqnarray} may not be consistent with the acquired 1-bit measurements. For example, let \begin{equation} \label{EXAMPLE} \Phi = \left[ \begin{array}{cccc} 2 & -1 & 0 & 2 \\ -1 & 1 & 1 & 0 \\ \end{array} \right], ~ y= \left[ \begin{array}{c} 1 \\ -1 \\ \end{array} \right] . \end{equation} Clearly, for any scalar $\alpha >0, $ $ \widetilde{x} (\alpha) = (\alpha,\alpha,0,0)^T \in \left\{x: Y\Phi x\geq 0\right\},$ but $\widetilde{x} (\alpha) \not \in \left\{x: y=\textrm{sign}(\Phi x)\right\} $ regardless of whether a standard or a nonstandard sign function is used, and regardless of which of the above-mentioned trivial-solution excluders is used. Moreover, there exists a positive number $\alpha^*$ such that $\widetilde{x}(\alpha^*) =(\alpha^*, \alpha^*, 0, 0)^T $ is an optimal solution to (\ref{Gc}) or (\ref{Gd}). But this optimal solution is not consistent with $y.$ The above discussion indicates that when $J_- \not = \emptyset,$ $x=0$ and $x\in {\cal N}(\Phi)$ are not contained in the set $ \{x: \textrm{sign}(\Phi x) =y\}.$ In this case, we see from Lemma \ref{L21} that \begin{equation} \label{C1} \{x: \textrm{sign}(\Phi x) =y\} \subseteq \{x: Y\Phi x\geq 0, x\not =0 \}, \end{equation} \begin{equation}\label{C2} \{x: \textrm{sign}(\Phi x) =y\} \subseteq \{x: Y\Phi x\geq 0, \Phi x \not =0 \}. \end{equation} We now find a condition to ensure the reverse of the above inclusions. \vskip 0.05in \begin{Lem} \label{L22} Let $\textrm{sign} (\cdot)$ be the nonstandard sign function. Let $\Phi\in R^{m\times n}$ and $y\in \{1, -1\}^m $ with $J_- =\{i: y_i=-1\} \not =\emptyset$ be given.
Then \begin{equation} \label{EEEE} \{x: Y\Phi x \geq 0, x\not =0\} \subseteq \{x: \textrm{sign}(\Phi x) = y \} \end{equation} if and only if {\small \begin{equation} \label{L222a} \left[\bigcup_{i \in J_-} {\cal N}(\Phi_{i, n}) \right] \cap \left\{d: \Phi_{J_+,n}d \geq 0, \Phi_{J_-,n}d \leq 0 \right \} = \{0\} \end{equation} } where $J_+= \{i: y_i=1\}.$ \end{Lem} \vskip 0.05in \emph{Proof.} Let $x$ be an arbitrary vector in the set $\{ x: Y \Phi x\geq 0, x \not = 0\}. $ Note that $y \in \{1, -1\}^m. $ So $Y \Phi x\geq 0$ together with $x\not =0$ is equivalent to \begin{equation} \label{333} \Phi_{J_+,n} x \geq 0, ~ \Phi_{J_-,n} x \leq 0 , ~ x \not = 0. \end{equation} Under the condition (\ref{L222a}), we see that for any $x$ satisfying (\ref{333}), it must hold that $x\notin \bigcup_{i \in J_-} {\cal N}(\Phi_{i,n})$ which implies that $ \Phi_{i,n} x\not=0$ for all $ i\in J_-.$ Thus under (\ref{L222a}), the system (\ref{333}) becomes $ \Phi_{J_+,n} x \geq 0, \Phi_{J_-,n} x <0, x \not = 0 $ which, by the definition of the nonstandard sign function, implies that $ \textrm{sign} (\Phi x) =y. $ Thus (\ref{EEEE}) holds. We now assume that the condition (\ref{L222a}) does not hold. Then there exists a vector $d^*\not=0$ satisfying that {\small \begin{equation} \label{ddd} d^*\in \left[\bigcup_{i \in J_-} {\cal N}(\Phi_{i,n}) \right]\cap \left\{d: \Phi_{J_+,n}d \geq 0, \Phi_{J_-,n}d \leq 0 \right \} . \end{equation}} The fact $ d^* \in \left\{d: \Phi_{J_+,n}d \geq 0, \Phi_{J_-,n}d \leq 0 \right \} $ implies that $d^* \in \{x: Y \Phi x \geq 0, x \not = 0 \},$ and $ 0\not= d^* \in \bigcup_{i \in J_-} {\cal N}(\Phi_{i,n}) $ implies that there is $ i \in J_- $ such that $\Phi_{i,n} d^* = 0. $ By the definition of nonstandard sign function, this implies that $ \textrm{sign} (\Phi_{i,n} d^*)=1 \not = y_i $ (since $y_i =-1$ for $i\in J_-$). So $d^* \notin \{x: \textrm{sign}(\Phi x) =y \}, $ and thus (\ref{EEEE}) does not hold. 
The above proof shows that (\ref{EEEE}) and (\ref{L222a}) are equivalent. ~ $\Box$ \vskip 0.05in Replacing $x\not=0$ with $\Phi x \not=0$ and using the same argument as above yields the next statement. \vskip 0.05in \begin{Lem} \label{L22a} Under the same conditions of Lemma \ref{L22}, the following statement holds: $\{x: Y\Phi x \geq 0, \Phi x \not =0 \} \subseteq \{x: \textrm{sign}(\Phi x)=y \} $ if and only if {\small \begin{equation} \label{L222} \left[\bigcup_{i \in J_-} {\cal N}(\Phi_{i,n}) \right] \cap \left\{d: \Phi_{J_+,n}d \geq 0, \Phi_{J_-,n}d \leq 0, \Phi d \not =0 \right \} = \emptyset, \end{equation}} where $\emptyset$ denotes the empty set. \end{Lem} \vskip 0.05in Therefore, we have the following result. \vskip 0.05in \begin{Thm} \label{Thm24} Let $\textrm{sign} (\cdot)$ be the nonstandard sign function, and let $\Phi\in R^{m\times n} $ and $y\in \{1, -1\}^m $ be given. \begin{enumerate} \item[(i)] If $J_- =\emptyset,$ then $ \{x: \textrm{sign}(\Phi x)= y \} = \{x: Y\Phi x \geq 0 \} . $ \item[(ii)] If $J_- \not =\emptyset,$ then $ \{x: \textrm{sign}(\Phi x) = y \} = \{x: Y\Phi x \geq 0, x\not =0\} $ if and only if (\ref{L222a}) holds. \item[(iii)] If $J_- \not =\emptyset,$ then $ \{x: \textrm{sign}(\Phi x)= y \} = \{x: Y\Phi x \geq 0, \Phi x \not =0 \} $ if and only if (\ref{L222}) holds. \end{enumerate} \end{Thm} The result (i) above is obvious. Results (ii) and (iii) follow by combining (\ref{C1}), (\ref{C2}) and Lemmas \ref{L22} and \ref{L22a}. It is easy to verify that the example (\ref{EXAMPLE}) satisfies neither (\ref{L222a}) nor (\ref{L222}). We now consider the standard sign function. In this case, for $y=0$, the set $\{x: Y \Phi x\geq 0\}= R^n $ and $ \{x: 0=\textrm{sign} (\Phi x)\} = \{x: \Phi x=0 \} = {\cal N} (\Phi) \not = R^n$ provided that $\Phi\not=0;$ for $y\not =0$, we see that $ {\cal N}(\Phi) \subseteq \{x: Y \Phi x\geq 0\} $ but any vector in $ {\cal N}(\Phi) $ fails to satisfy the equation $\textrm{sign}(\Phi x) =y.
$ Thus we have the following observation: \vskip 0.05in \begin{Lem} \label {LL33} For the standard sign function and any nonzero $ \Phi \in R^{m\times n}, $ we have $ \{x: Y\Phi x \geq 0\} \not= \{x: \textrm{sign}(\Phi x) = y \}.$ \end{Lem} \vskip 0.05in In general, the set $\{x: Y\Phi x \geq 0\}$ can be significantly larger than $\{x: \textrm{sign}(\Phi x) = y \}.$ In what follows, we only focus on the nontrivial case $y\not =0. $ For a given $0\not =y\in \{1, -1, 0\}^m$, when $J_0 =\{i: y_i=0\} \not= \emptyset,$ the vectors in $ {\cal N} (\Phi)$ and the vectors $ x $ satisfying $ \Phi_{J_0,n} x\not =0$ do not satisfy the constraint $\textrm{sign}(\Phi x) =y. $ These vectors must be excluded from $\{x: Y\Phi x\geq 0\} $ in order to get a tighter relaxation of the sign equation. In other words, only vectors satisfying $\Phi x\not=0 $ and $ \Phi_{J_0, n} x=0$, i.e., $ x\in {\cal N} (\Phi_{J_0, n})\backslash {\cal N}( \Phi),$ should be considered. (Note that $ {\cal N} (\Phi) \subseteq {\cal N} (\Phi_{J_0,n} )$ due to the fact that $\Phi_{J_0,n}$ is a submatrix of $\Phi.$) Thus we have the following result. \vskip 0.05in \begin{Thm} \label{L23} Let $\Phi\in R^{m\times n} $ and $0\not= y\in \{1, 0, -1\}^m$ be given. For the standard sign function, the following statements hold: (i) $\{x: y=\textrm{sign}(\Phi x)\} \subseteq \{x: Y\Phi x \geq 0, \Phi_{J_0,n} x=0, \Phi x \not =0 \} . $ (ii) $ \{x: Y\Phi x \geq 0,\Phi_{J_0,n} x=0, \Phi x \not =0 \} \subseteq \{x: \textrm{sign}(\Phi x) = y \} $ if and only if {\small \begin{eqnarray} \label{stand-2} \left[\bigcup_{i \in J_+\cup J_-} {\cal N}(\Phi_{i,n}) \right] & \bigcap & \{d: \Phi_{J_+,n}d \geq 0, \Phi_{J_-,n}d \leq 0, \nonumber \\ & & \Phi_{J_0,n} d=0, \Phi d \not =0 \} =\emptyset. \end{eqnarray}} \end{Thm} \emph{Proof.} The statement (i) follows from Lemma \ref{L21} and the discussion before Theorem \ref{L23}. We now prove the statement (ii).
First we assume that (\ref{stand-2}) holds, and let $\hat{x}$ be an arbitrary vector in the set $ \{x: Y\Phi x \geq 0, \Phi_{J_0,n} x=0, \Phi x \not =0 \} . $ Then \begin{equation}\label{LLEE} \Phi_{J_+,n} \hat{x} \geq 0, ~ \Phi_{J_-,n} \hat{x} \leq 0, ~ \Phi_{J_0,n} \hat{x} =0, ~ \Phi \hat{x}\not=0. \end{equation} As $y\not=0$, the set $J_+\cup J_-\not=\emptyset.$ It follows from (\ref{stand-2}) and (\ref{LLEE}) that $ \hat{x} \notin \bigcup_{i \in J_+\cup J_-} {\cal N}(\Phi_{i,n}), $ which implies that the inequalities $\Phi_{J_+,n} \hat{x} \geq 0$ and $ \Phi_{J_-,n} \hat{x} \leq 0$ in (\ref{LLEE}) must hold strictly, i.e., $ \Phi_{J_+,n} \hat{x} > 0, ~\Phi_{J_-,n} \hat{x} < 0, ~\Phi_{J_0,n} \hat{x} =0, ~ \Phi \hat{x}\not=0 ,$ and hence $ \textrm{sign} (\Phi \hat{x}) =y .$ So \begin{equation}\label{LLCC} \{x: Y\Phi x \geq 0, \Phi_{J_0,n} x=0, \Phi x \not =0 \} \subseteq \{x: \textrm{sign}(\Phi x) =y \}. \end{equation} We now further prove that if (\ref{stand-2}) does not hold, then (\ref{LLCC}) does not hold. Indeed, assume that (\ref{stand-2}) is not satisfied. Then there exists a vector $\hat{d} $ satisfying $$ \Phi_{J_+,n} \hat{d} \geq 0, ~ \Phi_{J_-,n} \hat{d} \leq 0, ~ \Phi_{J_0,n} \hat{d} =0, ~ \Phi \hat{d} \not=0 $$ and $$\hat{d} \in \bigcup_{i \in J_+\cup J_-} {\cal N}(\Phi_{i,n}). $$ This implies that $\hat{d} \in \{x: Y \Phi x \geq 0, \Phi_{J_0,n} x=0, \Phi x \not =0\} $ and that there exists $ i\in J_+ \cup J_- $ such that $\Phi_{i,n} \hat{d} = 0 . $ Thus $\textrm{sign} (\Phi_{i,n} \hat{d}) =0 \not= y_i$ where $y_i= 1 $ or $ -1 $ (since $ i\in J_+ \cup J_-$). Hence (\ref{LLCC}) does not hold. $\Box $ \vskip 0.05in Therefore, under the conditions of Theorem \ref{L23}, the set $\{x: \textrm{sign}(\Phi x) =y\} $ coincides with $ \{x: Y\Phi x \geq 0, \Phi_{J_0,n} x=0, \Phi x \not =0 \}$ if and only if condition (\ref{stand-2}) holds.
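As a concrete check of the preceding discussion, the counterexample (\ref{EXAMPLE}) can be verified numerically: for any $\alpha>0$, the vector $\widetilde{x}(\alpha)=(\alpha,\alpha,0,0)^T$ satisfies $Y\Phi x\geq 0$ yet fails the sign equation under either sign convention. A minimal Python verification, taking $\alpha=1/2$:

```python
# Numerical check of the counterexample: x = (alpha, alpha, 0, 0) satisfies
# Y Phi x >= 0, but sign(Phi x) differs from y under both the standard and
# the nonstandard sign function.

Phi = [[2, -1, 0, 2],
       [-1, 1, 1, 0]]
y = [1, -1]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def sign_std(t):          # standard sign: sign(0) = 0
    return (t > 0) - (t < 0)

def sign_ns(t):           # nonstandard sign: t >= 0 is mapped to 1
    return 1 if t >= 0 else -1

alpha = 0.5
x = [alpha, alpha, 0, 0]
Px = matvec(Phi, x)                                     # equals (alpha, 0)
feasible = all(yi * t >= 0 for yi, t in zip(y, Px))     # Y Phi x >= 0 holds
print(feasible)                                         # True
print([sign_std(t) for t in Px] == y)                   # False
print([sign_ns(t) for t in Px] == y)                    # False
```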
Recall that the 1-bit CS problem (\cite{BB2008,B2009, PV20138}) can be cast as the $\ell_0$-minimization problem (\ref{1bitCS}), which admits the relaxations \begin{eqnarray} & \min \{\|x\|_0: & Y \Phi x \geq 0 , ~\|x\|_2 =1\}, \label{Ga}\\ & \min \{\|x\|_0: & Y \Phi x \geq 0 , ~\|\Phi x\|_1 = m\}, \label{Gb} \end{eqnarray} where $m$ is not essential and can be replaced with any positive constant. Replacing $\|x\|_0$ by $\|x\|_1$ immediately leads to (\ref{Gc}) and (\ref{Gd}), which are linear programming models. To guarantee that problems (\ref{Ga}) and (\ref{Gb}) are equivalent to (\ref{1bitCS}) and that problems (\ref{Gc}) and (\ref{Gd}) are equivalent to the problem \begin{equation} \label{G4} \min \{\|x\|_1: ~\textrm{sign}(\Phi x)=y\}, \end{equation} as shown in Theorems \ref{Thm24} and \ref{L23}, the conditions (\ref{L222a}), (\ref{L222}) or (\ref{stand-2}), depending on the definition of the sign function, must be imposed on the matrix. These conditions have been overlooked in the literature. If (\ref{L222a}), (\ref{L222}) or (\ref{stand-2}) is not satisfied, the feasible sets of (\ref{Ga}), (\ref{Gb}), (\ref{Gc}) and (\ref{Gd}) are larger than those of (\ref{1bitCS}) and (\ref{G4}), and thus their solutions might not satisfy the sign equation $\textrm{sign}(\Phi x) =y. $ In other words, the signal constructed by algorithms for solving (\ref{Ga}), (\ref{Gb}), (\ref{Gc}) and (\ref{Gd}) might be inconsistent with the acquired 1-bit measurements. \subsection{Allowing zero in sign measurements $y$} The 1-bit CS model with a nonstandard sign function does not cause any inconvenience or difficulty when all components of $|\Phi x^*|$ are relatively large, in which case $\textrm{sign} (\Phi x^*)$ is stable in the sense that any small perturbation of $ \Phi x^*$ does not affect its sign.
However, when $ |\Phi x^*| $ admits very small components (this does happen in some situations, as we point out later), the nonstandard sign function might introduce certain ambiguity into the 1-bit CS model, since $\Phi x^*>0$, $ \Phi x^*=0 $ and $0\not= \Phi x^* \geq 0 $ yield the same measurements $y= (1,1, \dots , 1)^T.$ Once $y$ is acquired, the information concerning which of the above cases yields $y$ is lost in 1-bit CS models. In this situation, through sign information only, it might be difficult to reconstruct the information of the targeted signal regardless of which 1-bit CS algorithm is used. When $| \Phi_{i,n} x^*|$ is very small, errors or noise can affect the reliability of the measurements $y.$ The reliability of $y$ is vital since the unknown signal is expected to be partially or fully reconstructed from $y.$ Suppose that $x^*$ is the signal to recover. We consider a sensing matrix $\Phi\in R^{m\times n}$ whose rows are uniformly drawn from the surface of the $n$-dimensional unit ball $\{u\in R^n: \|u\|_2=1\}.$ Note that for any small $\epsilon>0, $ with positive probability, a drawn vector lies in the region of the unit surface $$\{u \in R^n: \|u\|_2=1, |u^Tx^*| \leq \epsilon \}. $$ A sensing row vector $ \Phi_{i,n} $ drawn in this region yields a very small product $ \Phi_{i,n} x^* \approx 0,$ at which $\textrm{sign} (\Phi_{i,n} x^*) $ becomes sensitive or uncertain in the sense that any small error in measuring $\Phi_{i,n} x^*$ can totally flip its sign, leading to the opposite of the correct sign measurement.
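The instability just described can be reproduced on a toy instance (all numbers hypothetical): a unit-norm sensing row nearly orthogonal to the signal yields a product on the order of $10^{-10}$, and a measurement error of order $10^{-9}$ flips the acquired bit:

```python
# A sensing row nearly orthogonal to the signal: the noiseless product is
# approximately 6e-10, so an error of 2e-9 reverses the acquired sign bit.

def sign_ns(t):                 # nonstandard sign: t >= 0 is mapped to 1
    return 1 if t >= 0 else -1

phi_row = [0.8, -0.6]           # unit-norm sensing vector Phi_{i,n}
x_star  = [0.6, 0.8 - 1e-9]     # nearly orthogonal to phi_row

t = sum(a * b for a, b in zip(phi_row, x_star))   # approx 6e-10
print(sign_ns(t))               # 1 from the noiseless measurement
print(sign_ns(t - 2e-9))        # -1: a tiny error yields the opposite bit
```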
In this situation, not only might the acquired information $y_i$ be unreliable for recovering the sign of a signal, but the measured value $y_i= 1$ or $-1$ also fails to reflect the fact $\Phi_{i,n} x^* \approx 0,$ which indicates that $x^*$ is nearly orthogonal to the known sensing vector $\Phi_{i, n}.$ The information $\Phi_{i,n} x^* \approx 0$ is particularly useful in helping locate the position of the unknown vector $x^*. $ If only 1 or $-1$ is used as the sign of $\Phi_{i,n} x^*, $ however, the information $\Phi_{i,n} x^* \approx 0 $ is completely lost in the 1-bit CS model. Allowing $y_i=0$ in this case can correctly reflect the relation between $\Phi_{i,n}$ and $x^*$ when they are nearly orthogonal. Taking into account the small magnitude of $|\Phi_{i,n} x^*|$ and allowing $y$ to admit zero components provides a practical means to avoid the aforementioned ambiguity of sign measurements resulting from the nonstandard sign function. By using the standard sign function to distinguish the three different cases $\Phi x^* >0,$ $ \Phi x^* =0,$ and $ 0\not= \Phi x^* \geq 0,$ the resulting sign measurements $y$ carry more information about the signal, which might increase the chance of recovering the sign of the signal. Thus we consider the 1-bit CS model with the standard sign function in this paper. In fact, the standard sign function was already used by some authors (e.g., \cite{PV20138}), but their discussions are based on the linear relaxation (\ref{YYY}). \subsection{Reformulation of 1-bit CS model} From the above discussions, the system (\ref{YYY}) is generally a loose relaxation of the sign constraint of (\ref{1bitCS}). The 1-bit CS algorithms based on this relaxation might generate a solution inconsistent with the 1-bit measurements if the sensing matrix does not satisfy the conditions specified in Theorems \ref{Thm24} and \ref{L23}.
We now introduce a new reformulation of the 1-bit CS model, which can ensure that the solution of our 1-bit CS algorithm is always consistent with the acquired 1-bit measurements. In the remainder of the paper, we focus on the 1-bit CS problem with standard sign function. For a given $y\in \{-1, 1, 0\}^m,$ we use $J_+,$ $ J_- $ and $J_0$ to denote the indices of positive, negative, and zero components of $y,$ respectively, i.e., {\small \begin{equation} \label {JJJJ} J_+ =\{i: y_i=1\}, J_- =\{i:~y_i=-1\}, J_0=\{i:~y_i=0\}. \end{equation} } Since these indices are determined by $y,$ we also write them as $J_+ (y), J_-(y)$ and $J_0 (y)$ when necessary. By using (\ref{JJJJ}), the constraint $\textrm{sign}(\Phi x)=y$ can be written as {\small \begin{equation} \label{sign-const} \textrm{sign} (\Phi_{J_+,n}x) = e_{J_+}, \textrm{sign} (\Phi_{J_-,n}x ) = - e_{J_-}, \Phi_{J_0,n}x=0. \end{equation} } Thus the model (\ref{1bitCS}) with $y\in \{-1, 1, 0\}^m$ can be stated as \begin{equation} \label{New-1bit} \begin{array}{cl} \min & \|x\|_0 \\ \textrm{s.t.} & \textrm{sign} (\Phi_{J_+,n}x) = e_{J_+}, \textrm{sign} (\Phi_{J_-,n}x ) = - e_{J_-}, \\ & \Phi_{J_0,n}x=0. \end{array} \end{equation} Consider the system in $u\in R^n $ \begin{equation}\label{var-system} \Phi_{J_+,n} u \geq e_{J_+}, ~ \Phi_{J_-,n} u \leq- e_{J_-}, ~ \Phi_{J_0,n} u =0. \end{equation} Clearly, if $x$ satisfies (\ref{sign-const}), then there exists a positive number $\alpha >0 $ such that $u=\alpha x$ satisfies the system (\ref{var-system}); conversely, if $u $ satisfies the system (\ref{var-system}), then $x=u$ satisfies the system (\ref{sign-const}). Note that $\|x\|_0= \|\alpha x\|_0$ for any $ \alpha \not= 0.$ Thus (\ref{New-1bit}) can be reformulated as the $\ell_0$-minimization problem \begin{eqnarray}\label{l0LP} \begin{array}{cl} \min & \|x\|_0 \\ \textrm{s.t.} & \Phi_{J_+,n}x\geq e_{J_+}, ~ \Phi_{J_-,n}x\leq- e_{J_-}, ~ \Phi_{J_0,n}x=0. 
\end{array} \end{eqnarray} From the relation of (\ref{sign-const}) and (\ref{var-system}), we immediately have the following observation. \vskip 0.05in \begin{Propn} \label {Prop21} If $x^*$ is an optimal solution to the 1-bit CS model (\ref{New-1bit}), then there exists a positive number $\alpha>0$ such that $\alpha x^*$ is an optimal solution to the $\ell_0$-problem (\ref{l0LP}); conversely, if $x^*$ is an optimal solution to the $\ell_0$-problem (\ref{l0LP}), then $x^*$ must be an optimal solution to (\ref{New-1bit}). \end{Propn} \vskip 0.05in As a result, to study the 1-bit CS model (\ref{New-1bit}), it is sufficient to investigate the model (\ref{l0LP}). This makes it possible to use the CS methodology to study the 1-bit CS problem (\ref{New-1bit}). Motivated by (\ref{l0LP}), we consider the $\ell_1$-minimization \begin{eqnarray}\label{1bit-basis} \begin{array}{cl} \min & \|x\|_1 \\ \textrm{s.t.} & \Phi_{J_+,n}x\geq e_{J_+}, ~\Phi_{J_-,n}x\leq- e_{J_-}, ~\Phi_{J_0,n}x=0, \end{array} \end{eqnarray} which can be seen as a natural decoding method for the 1-bit CS problems. In this paper, the problem (\ref{1bit-basis}) is referred to as the 1-bit basis pursuit. It is worth stressing that the optimal solution of (\ref{1bit-basis}) is always consistent with $y$ as indicated by Proposition \ref{Prop21}. More importantly, the later analysis indicates that our reformulation makes it possible to develop a sign recovery theory for sparse signals from 1-bit measurements. For the convenience of analysis, we define the sets $\mathcal{A}(\cdot),$ $\widetilde{\mathcal{A}}_+(\cdot) $ and $ \widetilde{\mathcal{A}}_-(\cdot)$ which are used frequently in this paper. Let $x^*\in R^n $ satisfy the constraints of (\ref{1bit-basis}). 
At $x^*,$ let \begin{equation} \label{AAA1} \mathcal{A}(x^*)=\{i: ~(\Phi x^*)_i= 1\}\cup\{i: ~(\Phi x^*)_i=-1 \}, \end{equation} \begin{equation} \label{AAA2} \tilde{\mathcal{A}}_+(x^*)=J_+\setminus \mathcal{A}(x^*), ~\tilde{\mathcal{A}}_-(x^*) =J_-\setminus \mathcal{A}(x^*). \end{equation} Clearly, $\mathcal{A}(x^*)$ is the index set of active constraints among the inequality constraints of (\ref{1bit-basis}), $ \tilde{\mathcal{A}}_+(x^*)$ is the index set of inactive constraints in the first group of inequalities of (\ref{1bit-basis}) (i.e., $ \Phi_{J_+, n} x^* \geq e_{J_+}$), and $\tilde{\mathcal{A}}_-(x^*)$ is the index set of inactive constraints in the second group of inequalities of (\ref{1bit-basis}) (i.e., $\Phi_{J_-, n} x^* \leq - e_{J_-}$). Thus we see that \begin{eqnarray*} & & (\Phi x^*)_i= 1 \textrm{ for } i \in\mathcal{A}(x^*)\cap J_+, \\ & & (\Phi x^*)_i> 1 \textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*),\\ & & (\Phi x^*)_i=-1 \textrm{ for } i\in\mathcal{A}(x^*)\cap J_-, \\ & & (\Phi x^*)_i<- 1 \textrm{ for } i\in \tilde{\mathcal{A}}_-(x^*). \end{eqnarray*} We also need symbols $\pi (\cdot)$ and $\varrho (\cdot)$ defined as follows. Denote the elements in $J_+ $ by $i_k \in \{1, ..., m\}, k=1, \dots, p, $ i.e., $J_+=\{i_1, i_2, \dots,i_p\}$ where $p=|J_+|.$ Without loss of generality, we let the elements be sorted in ascending order $i_1< i_2< \cdots <i_p.$ Then we define the bijective mapping $\pi: J_+ \to \{1, \dots, p\}$ as \begin{equation}\label{pi} \pi( i_k) =k \textrm{ for all } k=1, \dots, p. \end{equation} Similarly, let $J_-=\{j_1, j_2, \dots,j_q\},$ where $q=|J_-|,$ $j_k \in \{1, \dots, m\}$ for $k=1, \dots, q$ and $j_1< j_2< \cdots < j_q.$ We define the bijective mapping $\varrho: J_- \to \{1, \dots, q\}$ as \begin{equation} \label{varrho} \varrho(j_k) =k ~\textrm{ for all } k=1, \dots, q. 
\end{equation} By introducing variables $\alpha\in R^{|J_+|}_+$ and $\beta\in R^{|J_-|}_+$, the problem (\ref{1bit-basis}) can be written as \begin{eqnarray}\label{8888} & \min & \|x\|_1, \nonumber \\ & \textrm{ s.t.}& \Phi_{J_+,n} x -\alpha= e_{J_+}, \nonumber\\ & & \Phi_{J_-,n} x +\beta=- e_{J_-}, \\ & & \Phi_{J_0,n} x=0, \nonumber \\ & & \alpha\geq0,~\beta\geq0.\nonumber \end{eqnarray} Note that for any optimal solution $(x^*, \alpha^*, \beta^*) $ of (\ref{8888}), we have $\alpha^*=\Phi_{J_+,n} x^* - e_{J_+} $ and $ \beta^* =- e_{J_-}- \Phi_{J_-,n} x^*. $ Using (\ref{AAA1})--(\ref{varrho}), we immediately have the following observation. \vskip 0.05in \begin{Lem} \label{LemlepsLP1} (i) For any optimal solution $(x^*, \alpha^*, \beta^*)$ to the problem (\ref{8888}), we have \begin{equation} \label{Lemma31} \left\{\begin{array}{ll} \alpha^*_{\pi(i)}=0, & \text{for } i\in \mathcal{A}(x^*)\cap J_+, \\ \alpha^*_{\pi(i)}= (\Phi x^*)_i -1 >0, & \text{for } i\in \tilde{\mathcal{A}}_+(x^*), \\ \beta^*_{\varrho(i)} =0, & \text{for } i\in \mathcal{A}(x^*)\cap J_-, \\ \beta^*_{\varrho(i)}=-1 -(\Phi x^*)_i>0, & \text{for } i\in \tilde{\mathcal{A}}_-(x^*). \end{array} \right. \end{equation} (ii) $x^*$ is the unique optimal solution to the 1-bit basis pursuit (\ref{1bit-basis}) if and only if $ (x^*,\alpha^*,\beta^*)$ is the unique optimal solution to the problem (\ref{8888}), where $(\alpha^*, \beta^*)$ is determined by (\ref{Lemma31}). \end{Lem} \subsection{Recovery criteria} When $y=\textrm{sign} (\Phi x^*) \in \{1, -1\}^m,$ any sufficiently small perturbation $x^*+ u $ is also consistent with $y. $ When $y\in \{1, -1, 0\}^m ,$ any sufficiently small perturbation $x^*+ u $ with $ u\in {\cal N} (\Phi_{J_0, n}) $ is also consistent with $y.$ Thus a 1-bit CS problem generally has infinitely many solutions, and the sparsest solution of a sign equation is also not unique in general.
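The non-uniqueness discussed above can be checked on a tiny exact-arithmetic instance (hypothetical data): perturbing $x^*$ along the null space of $\Phi_{J_0,n}$ by a small vector leaves the acquired measurements $y$ unchanged:

```python
from fractions import Fraction as F

# For u in N(Phi_{J_0,n}) with u small enough, the perturbed signal x* + u
# is consistent with the same 1-bit measurements y as x* itself.

def sign(t):
    return (t > 0) - (t < 0)

def measure(Phi, x):
    return [sign(sum(a * b for a, b in zip(row, x))) for row in Phi]

Phi = [[1, 0, 2], [0, -1, 0], [1, 1, -1], [1, -1, 1]]
x_star = [F(2), F(5), F(3)]
y = measure(Phi, x_star)                # [1, -1, 1, 0], so J_0 = {3}

# u lies in the null space of the row Phi_{J_0,n} = (1, -1, 1) and is small:
u = [F(1, 100), F(1, 100), F(0)]
assert sum(a * b for a, b in zip(Phi[3], u)) == 0

x_pert = [a + b for a, b in zip(x_star, u)]
print(measure(Phi, x_pert) == y)        # True: both signals yield the same y
```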
Since the amplitude of signals is not available, the recovery criterion in 1-bit CS scenarios may be sign recovery, support recovery or others, depending on the application. The exact sign recovery of a signal means that the solution $\widetilde{x}$ found by an algorithm satisfies $$\textrm{sign} (\widetilde{x})= \textrm{sign}(x^*). $$ Support recovery, i.e., finding a solution $\widetilde{x}$ satisfying $ \textrm{supp} (\widetilde{x})= \textrm{supp}(x^*),$ is a relaxed version of sign recovery. It is worth mentioning that the following criterion $$ \left\| \frac{x}{\|x\|_2} - \frac{x^*}{\|x^*\|_2} \right\|_2 \leq \varepsilon $$ has been widely used in the 1-bit CS literature, where $\varepsilon>0 $ is a certain small number. In the remainder of the paper, we work toward developing necessary and sufficient conditions for the exact recovery of the sign of sparse signals from 1-bit measurements. \section{Nonuniform sign recovery} We assume that the measurements $y=\textrm{sign} (\Phi x^*) $ are available. From this information, we use the 1-bit basis pursuit (\ref{1bit-basis}) to recover the sign of $x^*.$ We ask when the optimal solution of (\ref{1bit-basis}) has the same sign as $x^*.$ The recovery of the sign of an individual sparse signal is referred to as nonuniform sign recovery. In this section, we develop certain necessary and sufficient conditions for nonuniform sign recovery from the perspective of the range space property of a transposed sensing matrix. Assume that $y\in \{1, -1, 0\}^m$ is given and $(J_+, J_-, J_0)$ is specified as (\ref{JJJJ}). We first introduce the concept of the RRSP.
\vskip 0.05in \begin{Def} [RRSP of $\Phi^T$ at $x^*$] \label{DefRRSP} Let $x^*\in R^n$ satisfy $y=\textrm{sign} (\Phi x^*).$ We say that $\Phi^T$ satisfies the restricted range space property (RRSP) at $x^*$ if there exist vectors $\eta\in \mathcal{R}(\Phi^T)$ and $w\in \mathcal{F}(x^*)$ such that $\eta=\Phi^T w$ and $$\eta_i=1 \text{ for } x^*_i>0, \eta_i=-1 \text{ for } x^*_i<0, |\eta_i|<1 \text{ for } x^*_i=0,$$ where $ \mathcal{F}(x^*) $ is the set defined as \begin{eqnarray} \label{FFFF} & \mathcal{F}(x^*) & = \{w \in R^m: w_i > 0\textrm{ for } i\in {\cal A}(x^*) \cap J_+, \nonumber \\ & & ~~~~ w_i < 0 \textrm{ for } i\in {\cal A}(x^*) \cap J_-, \\ & & ~~~~ w_i=0 \textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*) \cup \tilde{\mathcal{A}}_-(x^*)\}. \nonumber \end{eqnarray} \end{Def} The RRSP of $\Phi^T$ at $x^*$ is a natural condition for the uniqueness of optimal solutions to the 1-bit basis pursuit (\ref{1bit-basis}), as shown by the following theorem. \vskip 0.05in \begin{Thm}[Necessary and sufficient condition]\label{Ness-Suff} $x^*$ is the unique optimal solution to the 1-bit basis pursuit (\ref{1bit-basis}) if and only if the RRSP of $\Phi^T $ at $x^*$ holds and the matrix \begin{eqnarray}\label{FRPmatrix} H (x^*) =\left[ \begin{array}{cc} \Phi_{\mathcal{A}(x^*)\bigcap J_+, S_+} & \Phi_{\mathcal{A}(x^*)\bigcap J_+, S_-} \\ \Phi_{\mathcal{A}(x^*)\bigcap J_-, S_+} & \Phi_{\mathcal{A}(x^*)\bigcap J_-, S_-} \\ \Phi_{J_0,S_+} & \Phi_{J_0,S_-} \end{array} \right] \end{eqnarray} has a full-column rank, where $S_+=\{i: x^*_i>0\} $ and $ S_-=\{i: x^*_i<0\}.$ \end{Thm} \vskip 0.05in The proof of Theorem \ref{Ness-Suff}, which requires some fundamental facts about linear programs, is given in Section V. The uniqueness of solutions to a decoding method like (\ref{1bit-basis}) is an important property required in signal reconstruction.
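Whether the RRSP of Definition \ref{DefRRSP} holds at a given pair $(\Phi, x^*)$ can be probed numerically by a feasibility LP in $w.$ The sketch below is our own illustration: the margin \texttt{eps} that replaces the strict inequalities makes the check sufficient rather than exact, and all names and tolerances are hypothetical choices.

```python
# Illustrative feasibility check for the RRSP of Phi^T at a given x*.
# The strict inequalities of the definition are replaced by a margin eps,
# so this is only a sufficient numerical test (an assumption of this sketch).
import numpy as np
from scipy.optimize import linprog

def rrsp_holds(Phi, x_star, eps=1e-6, tol=1e-9):
    m, n = Phi.shape
    z = Phi @ x_star
    Sp, Sm = x_star > 0, x_star < 0
    off = ~(Sp | Sm)                              # indices with x*_i = 0
    act_p = np.isclose(z, 1.0)                    # rows in A(x*) ∩ J+
    act_m = np.isclose(z, -1.0)                   # rows in A(x*) ∩ J-
    inact = (z > 1.0 + tol) | (z < -1.0 - tol)    # inactive rows: w_i = 0
    I = np.eye(m)
    # Equalities: (Phi^T w)_i = 1 on S+, = -1 on S-, and w_i = 0 on inactive rows.
    A_eq = np.vstack([Phi[:, Sp].T, Phi[:, Sm].T, I[inact]])
    b_eq = np.concatenate([np.ones(Sp.sum()), -np.ones(Sm.sum()),
                           np.zeros(inact.sum())])
    # Inequalities: |(Phi^T w)_i| <= 1-eps off the support,
    # w_i >= eps on A ∩ J+, and w_i <= -eps on A ∩ J-.
    A_ub = np.vstack([Phi[:, off].T, -Phi[:, off].T, -I[act_p], I[act_m]])
    b_ub = np.concatenate([np.full(2 * off.sum(), 1.0 - eps),
                           np.full(act_p.sum() + act_m.sum(), -eps)])
    if A_ub.shape[0] == 0:
        A_ub, b_ub = None, None
    res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * m, method="highs")
    return bool(res.success)
```

For instance, with $\Phi = [2; -3]$ and $n=1,$ the point $x^*=0.5$ (which activates the first constraint) passes the check via $w=(0.5, 0),$ while $x^*=1$ (no active constraints, so $w$ must vanish on all rows) fails it.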
As indicated in \cite{F2004, P07, FR2013, YBZ2013}, the uniqueness conditions often lead to certain criteria for the nonuniform and uniform recovery of sparse signals. Later, we will see that Theorem \ref{Ness-Suff}, together with the matrix properties N-RRSP and S-RRSP of order $k$ that will be introduced in this and next sections, provides a fundamental basis to develop a sign recovery theory for sparse signals from 1-bit measurements. Let us begin with the following lemma. \vskip 0.05in \begin{Lem}\label{Lemma41} Let $x^*$ be a sparsest solution of the $\ell_0$-problem (\ref{l0LP}) and let $S_+ $ and $ S_- $ be defined as in Theorem \ref{Ness-Suff}. Then \begin{equation}\label{HHTT} \widetilde{H}(x^*) = \left[\begin{array}{cc} \Phi_{\mathcal{A}(x^*)\cap J_+,S_+} & \Phi_{\mathcal{A}(x^*)\cap J_+,S_- } \\ \Phi_{\mathcal{A}(x^*)\cap J_-,S_+} & \Phi_{\mathcal{A}(x^*)\cap J_-,S_- } \\ \Phi_{J_0,S_+} & \Phi_{J_0,S_-} \\ \Phi_{\tilde{\mathcal{A}}_+(x^*),S_+} & \Phi_{\tilde{\mathcal{A}}_+(x^*),S_- } \\ \Phi_{\tilde{\mathcal{A}}_-(x^*),S_+} & \Phi_{\tilde{\mathcal{A}}_-(x^*),S_- } \end{array}\right] \end{equation} has a full-column rank. Furthermore, at any sparsest solution $x^*$ of (\ref{l0LP}), which admits the maximum cardinality $ |{\cal A}(x^*)| =\max\{|{\cal A}(x)|: x\in F^*\}, $ where $F^*$ is the set of optimal solutions of (\ref{l0LP}), $ H(x^*) $ given by (\ref {FRPmatrix}) has a full-column rank. \end{Lem} \vskip 0.05in {\it Proof.} Note that $x^*$ is a sparsest solution to the system \begin{equation} \label{sys0} \Phi_{J_+,n} x^* \geq e_{J_+}, ~ \Phi_{J_-,n} x^* \leq - e_{J_-}, ~ \Phi_{J_0,n} x^*=0. \end{equation} Including $\alpha^*$ and $\beta^*,$ given by (\ref{Lemma31}), into (\ref{sys0}) leads to {\small \begin{equation}\label{sys4} \Phi_{J_+,n}x^*-\alpha^*= e_{J_+}, ~\Phi_{J_-,n}x^*+\beta^*=- e_{J_-}, ~ \Phi_{J_0,n}x^*=0. 
\end{equation} } Eliminating the zero components of $x^* $ from (\ref{sys4}) leads to \begin{eqnarray}\label{sys5} \left\{\begin{array}{l} \Phi_{ J_+,S_+} x^*_{S_+}+\Phi_{ J_+,S_- }x^*_{S_-} -\alpha^* = e_{ J_+},\\ \Phi_{ J_-,S_+} x^*_{S_+}+\Phi_{ J_-,S_- }x^*_{S_-} +\beta^* =- e_{ J_-},\\ \Phi_{J_0,S_+} x^*_{S_+}+\Phi_{J_0,S_-} x^*_{S_-}=0. \end{array} \right. \end{eqnarray} Since $x^*$ is a sparsest solution of (\ref{l0LP}), the coefficient matrix $$ \widehat{ H } =\left[\begin{array}{cc} \Phi_{J_+,S_+} & \Phi_{ J_+,S_- } \\ \Phi_{J_-,S_+} & \Phi_{ J_-,S_- } \\ \Phi_{J_0,S_+} & \Phi_{J_0,S_-} \end{array}\right] $$ must have a full-column rank: otherwise, at least one column of $ \widehat{ H } $ could be linearly represented by its other columns, and hence the system (\ref{sys5}), which is equivalent to (\ref{sys0}), would have a solution sparser than $x^*. $ From (\ref{AAA1}) and (\ref{AAA2}), we see that {\small \begin{equation} \label{XXXc} J_+ = (\mathcal{A}(x^*)\cap J_+)\cup \widetilde{\mathcal{A}}_+(x^*), ~ J_- = (\mathcal{A}(x^*)\cap J_-) \cup\widetilde{\mathcal{A}}_-(x^*). \end{equation} } Performing row permutations on $ \widehat{ H }, $ if necessary, yields $\widetilde{H}(x^*)$ given as (\ref{HHTT}). Since row permutations do not affect the column rank of $ \widehat{H} ,$ $ \widetilde{H}(x^*)$ must have a full-column rank. We now show that $H(x^*)$ has a full-column rank if ${\cal A}(x^*)$ admits the maximum cardinality in the sense that $ |{\cal A}(x^*)| =\max\{|{\cal A} (x)|: x \in F^*\}, $ where $F^*$ is the set of optimal solutions of (\ref{l0LP}). We prove this by contradiction. Assume that the columns of $H(x^*)$ are linearly dependent.
Then there is a nonzero vector $ d= (u,v) \in R^{|S_+|}\times R^{|S_-|} $ such that $$H(x^*) d= H(x^*) \left[ \begin{array}{c} u \\ v \\ \end{array} \right] =0.$$ Since $d \not =0$ and $\widetilde{H}(x^*),$ given by (\ref{HHTT}), has a full-column rank, we see that \begin{equation} \label{HHdd} \left[\begin{array}{cc} \Phi_{\tilde{\mathcal{A}}_+(x^*),S_+} & \Phi_{\tilde{\mathcal{A}}_+(x^*),S_- } \\ \Phi_{\tilde{\mathcal{A}}_-(x^*),S_+} & \Phi_{\tilde{\mathcal{A}}_-(x^*),S_- } \end{array}\right] \left[ \begin{array}{c} u \\ v \\ \end{array} \right] \not =0. \end{equation} Let $x(\lambda) $ be the vector with components $ x(\lambda)_{S_+} = x^*_{S_+} + \lambda u,$ $ ~x(\lambda)_{S_-} = x^*_{S_-} + \lambda v $ and $ x(\lambda)_i=0 \textrm{ for all }i \notin S_+\cup S_-,$ where $\lambda\in R. $ Clearly, we have $\textrm{supp}(x(\lambda)) \subseteq \textrm{supp} (x^*)$ for any $\lambda\in R. $ By (\ref{Lemma31}) and (\ref{XXXc}), the system (\ref{sys5}) is equivalent to \begin{equation}\label{system1} \left\{\begin{array}{l} \Phi_{\mathcal{A}(x^*)\cap J_+,S_+} x^*_{S_+}+\Phi_{\mathcal{A}(x^*)\cap J_+,S_- }x^*_{S_-} = e_{\mathcal{A}(x^*)\cap J_+},\\ \Phi_{\mathcal{A}(x^*)\cap J_-,S_+} x^*_{S_+}+\Phi_{\mathcal{A}(x^*)\cap J_-,S_- }x^*_{S_-} =- e_{\mathcal{A}(x^*)\cap J_-},\\ \Phi_{J_0,S_+} x^*_{S_+}+\Phi_{J_0,S_-} x^*_{S_-}=0,\\ \Phi_{\tilde{\mathcal{A}}_+(x^*),S_+} x^*_{S_+}+\Phi_{\tilde{\mathcal{A}}_+(x^*),S_- }x^*_{S_-} > e_{\tilde{\mathcal{A}}_+(x^*)},\\ \Phi_{\tilde{\mathcal{A}}_-(x^*),S_+} x^*_{S_+}+\Phi_{\tilde{\mathcal{A}}_-(x^*),S_- }x^*_{S_-}< - e_{\tilde{\mathcal{A}}_-(x^*)}, \end{array} \right. 
\end{equation} From the above system and the definition of $x(\lambda),$ we see that for any sufficiently small $|\lambda| \not=0, $ the vector $( x(\lambda)_{S_+},$ $ x(\lambda)_{S_-})$ satisfies the system {\small \begin{eqnarray} & H(x^*) \left[ \begin{array}{c} x(\lambda)_{S_+} \\ x(\lambda)_{S_-} \end{array} \right] = \left[ \begin{array}{c} e_{\mathcal{A}(x^*)\cap J_+} \\ -e_{\mathcal{A}(x^*)\cap J_-} \\ 0 \\ \end{array} \right], & \label {H1} \\ & \left[\Phi_{\widetilde{\mathcal{A}}_+(x^*),S_+}, \Phi_{\widetilde{\mathcal{A}}_+(x^*),S_-}\right] \left[ \begin{array}{c} x(\lambda)_{S_+} \\ x(\lambda)_{S_-} \end{array} \right] > e_{\widetilde{\mathcal{A}}_+(x^*)}, ~~& \label {H2} \\ & \left[\Phi_{\widetilde{\mathcal{A}}_- (x^*),S_+}, \Phi_{\widetilde{\mathcal{A}}_- (x^*),S_-}\right] \left[ \begin{array}{c} x(\lambda)_{S_+} \\ x(\lambda)_{S_-} \end{array} \right] <- e_{\widetilde{\mathcal{A}}_-(x^*)}. ~~ & \label{H3} \end{eqnarray} } Equality (\ref{H1}) actually holds for any $\lambda \in R. $ Starting from $\lambda =0,$ we continuously increase the value of $|\lambda|$. In this process, if one of the components of the vector $( x(\lambda)_{S_+}, x(\lambda)_{S_-})$ satisfying (\ref{H1})--(\ref{H3}) becomes zero, then a sparser solution than $x^*$ is found, leading to a contradiction. Thus without loss of generality, we assume that $ \textrm{supp} (x(\lambda)) = \textrm{supp}(x^*)$ is maintained when $|\lambda|$ is continuously increased. It follows from (\ref{HHdd}) that there exists $\lambda^* \not=0 $ such that $(x (\lambda^*)_{S_+}, x( \lambda^*)_{S_-})$ satisfies (\ref{H1})--(\ref{H3}) and at this vector, one of the inactive constraints in (\ref{H2}) and (\ref{H3}) becomes active. Therefore $|{\cal A} (x(\lambda^*))| > |{\cal A} (x^*)|.$ This contradicts the fact that $|{\cal A} (x^*)|$ is maximal among the sparsest solutions. Thus we conclude that $H(x^*)$ must have a full-column rank.
~~ $ \Box $ \vskip 0.05in From Lemma \ref{Lemma41}, we see that the full-rank property of (\ref {FRPmatrix}) can be guaranteed if $x^*$ is a sparsest solution consistent with 1-bit measurements and $ | \mathcal{A} (x^*)|$ is maximal. Thus by Theorem \ref{Ness-Suff}, the central condition for $x^*$ to be the unique optimal solution to (\ref{1bit-basis}) is the RRSP described in Definition \ref{DefRRSP}. From the above discussions, we obtain the following connection between 1-bit CS and 1-bit basis pursuit. \vskip 0.05in \begin{Thm}\label{Thm-equ} (i) Suppose that $x^*$ is an optimal solution to the $\ell_0$-problem (\ref{l0LP}) with maximal $| \mathcal{A} (x^*)|.$ Then $x^* $ is the unique optimal solution to (\ref{1bit-basis}) if and only if the RRSP of $\Phi^T$ at $x^* $ holds. (ii) Suppose that $x^*$ is an optimal solution to the problem (\ref{New-1bit}) or (\ref{l0LP}). Then the sign of $x^*$ coincides with the sign of the unique solution of (\ref{1bit-basis}) if and only if there exists a weight $z\in R^n $ satisfying $z_i> 0$ for $i\in \textrm{supp} (x^*)$ and $z_i =0$ for $i\notin \textrm{supp} (x^*)$ such that $Zx^*,$ where $Z=\textrm{diag} (z),$ is feasible to (\ref{1bit-basis}) and $H(Zx^*)$ has a full-column rank and the RRSP of $\Phi^T$ at $Zx^*$ holds. \end{Thm} \vskip 0.05in \emph{Proof. } Result (i) follows directly from Lemma \ref{Lemma41} and Theorem \ref{Ness-Suff}. We now prove result (ii). If the sign of $x^*$ coincides with the sign of the unique optimal solution $\widetilde{x}$ of (\ref{1bit-basis}), then $\widetilde{x}$ can be written as $ \widetilde{x}= Zx^*$ for a certain weight satisfying that $z_i> 0$ for $i\in \textrm{supp} (x^*)$ and $z_i =0$ for $i\notin \textrm{supp} (x^*).$ It follows from Theorem \ref{Ness-Suff} that $H(Zx^*)$ has a full-column rank and the RRSP of $\Phi^T$ at $Zx^*$ holds. 
Conversely, if there exists a weight $z \in R^n $ satisfying $z_i> 0$ for $i\in \textrm{supp} (x^*)$ and $z_i =0$ for $i\notin \textrm{supp} (x^*)$ such that $ \widetilde{x} = Zx^*,$ where $Z=\textrm{diag} (z),$ is feasible to (\ref{1bit-basis}) and $H(Zx^*)$ has a full-column rank and the RRSP of $\Phi^T$ at $Zx^*$ holds, then by Theorem \ref{Ness-Suff} again $\widetilde{x}=Zx^*$ is the unique optimal solution to (\ref{1bit-basis}). Clearly, by the definition of $Z,$ we have $\textrm{sign} (\widetilde{x} ) =\textrm{sign} (Zx^*)=\textrm{sign} (x^*). $ ~$\Box$ \vskip 0.05in The above result provides some insight into the nonuniform recovery of the sign of an individual sparse signal via the 1-bit measurements and 1-bit basis pursuit. This result indicates that central to the sign recovery of $x^*$ is the RRSP of $\Phi^T$ at $x^*.$ However, this property is defined at $x^*,$ which is unknown in advance. Thus we need to further strengthen this concept in order to develop certain recovery conditions independent of the specific signal $x^*.$ To this purpose, we introduce the notion of \emph{N- and S-RRSP of order $k$ with respect to 1-bit measurements,} which turns out to be a necessary condition and a sufficient condition, respectively, for the nonuniform sign recovery. 
For given measurements $y\in \{1, -1, 0\}^m, $ let $P(y) $ denote the set of all possible partitions of the support of signals consistent with $y$: $$P(y)= \{ (S_+ (x), S_- (x)): y=\textrm{sign} (\Phi x) \}$$ where $S_+(x)= \{i: x_i >0\} $ and $S_-(x)= \{i: x_i < 0\} .$ \vskip 0.05in \begin{Def}[N-RRSP of order $k$ with respect to $y$]\label{NRRSP-y} The matrix $\Phi^T$ is said to satisfy the necessary restricted range space property (N-RRSP) of order $k$ with respect to $y$ if there exist a pair $(S_+, S_-) \in P(y) $ with $|S_+ \cup S_-|\leq k$ and a pair $(T_1, T_2)$ with $T_1 \subseteq J_+$, $T_2 \subseteq J_-,$ $ T_1\cup T_2 \not =J_+\cup J_- $ and $ \left[\begin{array}{c} \Phi_{J_+\backslash T_1,S} \\ \Phi_{J_-\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right], $ where $S =S_+ \cup S_-,$ having a full-column rank such that there is a vector $\eta \in {\cal R} (\Phi^T) $ satisfying the following properties: \begin{enumerate} \item[(i)] $ \eta_i=1 \text{ for }i\in S_+, ~ \eta_i=-1 \text{ for }i\in S_-, ~ |\eta_i|<1 \text{ otherwise}; $ \item[(ii)] $\eta=\Phi^T w$ for some $w\in \mathcal{F}(T_1,T_2), $ where \begin{eqnarray}\label{F4} & \mathcal{F}(T_1,T_2) = \{w\in R^m : & w_{J_+ \backslash T_1} > 0, w_{J_- \backslash T_2}< 0, \nonumber \\ & & w_ {T_1 \cup T_2}=0 \}. \end{eqnarray} \end{enumerate} \end{Def} The above matrix property turns out to be a necessary condition for the nonuniform recovery of the sign of a $k$-sparse signal, as shown by the next theorem. \vskip 0.05in \begin{Thm}\label{Main36} Let $x^*$ be an unknown $k$-sparse signal (i.e., $\|x^*\|_0\leq k$) and assume that the measurements $y=\textrm{sign}(\Phi x^*) $ are known. If the 1-bit basis pursuit (\ref{1bit-basis}) admits a unique optimal solution $\widetilde{x} $ satisfying $ \textrm{sign} (\widetilde{x}) =\textrm{sign}(x^*)$ (i.e., the sign of $x^*$ can be exactly recovered by (\ref{1bit-basis})), then $\Phi^T$ has the N-RRSP of order $k$ with respect to $y$. 
\end{Thm} \vskip 0.05in \emph{Proof.} Suppose that the measurements $y =\textrm{sign} (\Phi x^*)$ are given, where $x^*$ is an unknown $k$-sparse signal. By the definition of $P(y),$ we see that $ (S_+(x^*), S_- (x^*)) \in P(y). $ Denote by $S=S_+(x^*)\cup S_-(x^*).$ Suppose that (\ref{1bit-basis}) has a unique optimal solution $\widetilde{x} $ satisfying $ \textrm{sign} (\widetilde{x}) =\textrm{sign}(x^*), $ which implies that $ (S_+(\widetilde{x}), S_-(\widetilde{x})) = (S_+(x^*), S_-(x^*)). $ By Theorem \ref{Ness-Suff}, the uniqueness of $ \widetilde{x} $ implies that the RRSP of $\Phi^T$ at $\widetilde{x} $ holds and $H(\widetilde{x}) $ has a full-column rank. Let \begin{equation} \label{T1T2} T_1= \widetilde{{\cal A}}_+(\widetilde{x})=J_+\setminus {\cal A}(\widetilde{x}) , T_2= \widetilde{{\cal A}}_-(\widetilde{x}) = J_-\setminus {\cal A}(\widetilde{x}) . \end{equation} Note that at any optimal solution of (\ref{1bit-basis}), at least one of the inequality constraints of (\ref{1bit-basis}) must be active. Thus $ {\cal A}(\widetilde{x}) \not =\emptyset,$ which implies that $ T_1\cup T_2 \not =J_+\cup J_-. $ We also note that $ J_+\setminus T_1 =J_+\cap{\cal A}(\widetilde{x}) $ and $ J_-\backslash T_2 = J_- \cap {\cal A}(\widetilde{x}).$ Hence the matrix $ \left[\begin{array}{c} \Phi_{J_+\backslash T_1,S} \\ \Phi_{J_-\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right],$ coinciding with $H(\widetilde{x}), $ has a full-column rank. The RRSP of $\Phi^T$ at $\widetilde{x}$ implies that properties (i) and (ii) of Definition \ref{NRRSP-y} are satisfied with $(S_+,S_-)=(S_+(\widetilde{x}), S_-(\widetilde{x})) = (S_+(x^*), S_-(x^*))$ and $(T_1, T_2)$ being given as (\ref{T1T2}). This implies that the N-RRSP of order $k$ with respect to $y$ must hold. 
~ $\Box$ \vskip 0.05in By slightly enhancing the N-RRSP property, i.e., by varying the choices of $(S_+, S_-)$ and $(T_1, T_2)$, we obtain the next property, which turns out to be a sufficient condition for the exact recovery of the sign of a $k$-sparse signal. \vskip 0.05in \begin{Def}[S-RRSP of order $k$ with respect to $y$]\label{SRRSP-y} The matrix $\Phi^T$ is said to satisfy the sufficient restricted range space property (S-RRSP) of order $k$ with respect to $y$ if for any $(S_+ , S_-) \in P(y) $ with $|S_+ \cup S_- |\leq k$, there exists a pair $(T_1, T_2)$ such that $T_1 \subseteq J_+$, $T_2 \subseteq J_- , $ $ T_1\cup T_2 \not =J_+\cup J_- $ and $ \left[\begin{array}{c} \Phi_{J_+\backslash T_1,S} \\ \Phi_{J_-\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right], $ where $S =S_+ \cup S_-,$ has a full-column rank, and for any such pair $(T_1, T_2),$ there is a vector $\eta \in {\cal R}(\Phi^T)$ satisfying the following properties: \begin{enumerate} \item [(i)] $ \eta_i=1 \text{ for }i\in S_+, ~ \eta_i=-1 \text{ for }i\in S_-, ~ |\eta_i|<1 \text{ otherwise}; $ \item[(ii)] $\eta=\Phi^T w$ for some $w\in \mathcal{F}(T_1,T_2) $ defined by (\ref{F4}). \end{enumerate} \end{Def} \vskip 0.05in Note that when $\left[\begin{array}{c} \Phi_{J_+\backslash T_1,S} \\ \Phi_{J_-\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right]$ has a full-column rank, so does $\Phi_{m, S}.$ Thus we have the next lemma. \vskip 0.05in \begin{Lem}\label{KcolumnRSP} If $\Phi^T$ satisfies the S-RRSP of order $k$ with respect to $y, $ then for any $(S_+, S_-)\in P(y) $ with $|S_+\cup S_-|\leq k,$ $\Phi_{m, S} $ must have a full-column rank, where $S=S_+\cup S_-.$ \end{Lem} \vskip 0.05in For a given $y,$ the equation $y=\textrm{sign}(\Phi x)$ might possess infinitely many solutions.
We now prove that if $x^*$ is a sparsest solution to this equation, then its sign can be exactly recovered by (\ref{1bit-basis}) if $\Phi^T$ has the S-RRSP of order $k$ with respect to $y.$ \vskip 0.05in \begin{Thm}\label{Main38} Let measurements $y\in \{-1, 1,0\}^m$ be given and assume that $\Phi^T$ has the S-RRSP of order $k$ with respect to $y$. Then the 1-bit basis pursuit (\ref{1bit-basis}) admits a unique optimal solution $x' $ satisfying $ \textrm{supp} (x') \subseteq \textrm{supp}(x^*) $ for any $k$-sparse signal $x^*$ consistent with the measurements $ y,$ i.e., $y= \textrm{sign}(\Phi x^*). $ Furthermore, if $x^*$ is a sparsest signal consistent with $y,$ then $ \textrm{sign} (x') =\textrm{sign}(x^*),$ and thus the sign of $x^*$ can be exactly recovered by (\ref{1bit-basis}). \end{Thm} \vskip 0.05in {\it Proof.} Let $x^*$ be a $k$-sparse signal consistent with $y$, i.e., $\textrm{sign}(\Phi x^*) =y. $ Denote by $S_+=\{i: x^*_i>0\}$, ~ $S_-=\{i: x^*_i<0\}$ and $S=\textrm{supp}(x^*)= S_+\cup S_-.$ Clearly, $(S_+, S_-) \in P(y)$ and $|S_+\cup S_-|\leq k.$ Consistency implies that $ (\Phi x^*)_i >0\textrm{ for all } i\in J_+, (\Phi x^*)_i < 0\textrm{ for all } i\in J_- $ and $ (\Phi x^*)_i = 0\textrm{ for all } i\in J_0.$ This implies that there is a scalar $\alpha>0$ such that $ \alpha (\Phi x^*)_i \geq 1 \textrm{ for all } i\in J_+ $ and $ \alpha (\Phi x^*)_i \leq -1 \textrm{ for all } i\in J_-. $ Thus $\alpha x^*$ is feasible to (\ref{1bit-basis}), i.e., \begin{eqnarray} & &\Phi_{J_+,n} (\alpha x^*) \geq e_{J_+},\label{eqn13a}\\ & &\Phi_{J_-,n} (\alpha x^*) \leq - e_{J_-},\label{eqn14a}\\ & &\Phi_{J_0,n} (\alpha x^*) =0. \label{eqn15a} \end{eqnarray} We see that $\alpha \geq \frac{1}{ (\Phi x^*)_i}\textrm{ for }i\in J_+ $ and $ \alpha \geq \frac{1}{-(\Phi x^*)_i}\textrm{ for }i\in J_-. $ Let $\alpha^* $ be the smallest $\alpha$ satisfying these inequalities, i.e. 
{\small $$ \alpha^* = \max \left\{ \max_{i\in J_+} \frac{1}{ (\Phi x^*)_i}, \max_{i\in J_-} \frac{1}{-(\Phi x^*)_i} \right\} =\max_{i\in J_+\cup J_-} \frac{1}{ |(\Phi x^*)_i|}.$$ } By the choice of $\alpha^*$, at $\alpha^* x^* $ one of the inequalities in (\ref{eqn13a}) and (\ref{eqn14a}) becomes an equality. Let $T'_0$ and $T''_0$ be the set of indices for active constraints in (\ref{eqn13a}) and (\ref{eqn14a}), i.e., {\small $$ T'_0 = \left\{i \in J_+ : \Phi (\alpha^* x^*) _i =1 \right\}, T''_0 = \left\{i \in J_- : \Phi (\alpha^* x^*) _i =-1 \right\} $$ } If the null space $ {\cal N} (\left[ \begin{array}{c} \Phi_{ T'_0, S} \\ \Phi_{ T''_0, S} \\ \Phi_{J_0, S} \\ \end{array} \right] ) \not=\{0\},$ then let $d \not=0 $ be a vector in this null space. It follows from Lemma \ref{KcolumnRSP} that $\Phi_{m,S}$ has a full-column rank. This implies that \begin{equation}\label{HD} \left[ \begin{array}{c} \Phi_{J_+\backslash T'_0, S} \\ \Phi_{ J_-\backslash T''_0, S} \end{array} \right] d \not =0.\end{equation} Consider the vector $x(\lambda)$ with components $ x(\lambda)_S = \alpha^* x^*_S + \lambda d$ and $x(\lambda)_i =0 $ for $i\notin S,$ where $\lambda \in R.$ By the choice of $d$, we see that $ \textrm{supp}( x(\lambda))\subseteq \textrm{supp}(x^*) $ for any $\lambda \in R.$ For all sufficiently small $ |\lambda|, $ the vector $ x(\lambda) $ is feasible to the problem (\ref{1bit-basis}) and the active constraints at $\alpha^* x^* $ in (\ref{eqn13a}) and (\ref{eqn14a}) are still active at $ x(\lambda) $ and the inactive constraints at $\alpha^* x^* $ are still inactive at $ x(\lambda). 
$ Due to (\ref{HD}), as $|\lambda|$ increases continuously from zero, there exists $ \lambda^* \not =0 $ such that $x(\lambda^*) $ is still feasible to (\ref{1bit-basis}) and one of the above-mentioned inactive constraints becomes active at $x(\lambda^*).$ Let $x' = x(\lambda^*)$ and $$T' = \left\{i \in J_+ : (\Phi x' )_i =1 \right\}, T'' = \left\{i \in J_- : (\Phi x')_i =-1 \right\}. $$ By the construction of $x',$ we see that $ T'_0 \subseteq T' $ and $ T''_0 \subseteq T''.$ So we obtain an augmented set of active constraints at $x'.$ Now replace the role of $\alpha^* x^*$ by $ x'$ and repeat the above process. If $ {\cal N} (\left[ \begin{array}{c} \Phi_{ T', S} \\ \Phi_{ T'', S} \\ \Phi_{J_0, S} \\ \end{array} \right]) \not=\{0\},$ pick a vector $d'\not =0 $ from this null space. Since $\Phi_{m, S}$ has a full-column rank, we must have that $ \left[ \begin{array}{c} \Phi_{J_+\backslash T', S} \\ \Phi_{ J_-\backslash T'', S} \end{array} \right]d' \not =0.$ So we can continue to update the components of $x'$ by setting $x'_S \leftarrow x'_S+\lambda' d'$ and keeping $x'_i =0$ for $i\notin S,$ where $\lambda'$ is chosen such that $x'_S+\lambda' d'$ is still feasible to (\ref{1bit-basis}) and one of the inactive constraints at the current point $x'$ becomes active at $x'_S+\lambda' d'.$ Thus the index sets $T'$ and $T''$ for active constraints are further augmented. Since $\Phi_{m, S}$ has a full-column rank, after repeating the above process a finite number of times, we stop at a point, still denoted by $x',$ at which ${\cal N} (\left[ \begin{array}{c} \Phi_{T', S} \\ \Phi_{ T'', S} \\ \Phi_{J_0, S} \\ \end{array} \right]) =\{0\},$ i.e., $\left[ \begin{array}{c} \Phi_{T', S} \\ \Phi_{ T'', S} \\ \Phi_{J_0, S} \\ \end{array} \right] $ has a full-column rank. Note that $\textrm{supp} (x') \subseteq \textrm{supp}(x^*) $ is always maintained in the above process.
Define the sets \begin{equation}\label{TT00} T_1 = \widetilde{ {\cal A}}_+ (x'), ~ T_2 = \widetilde{ {\cal A}}_- (x') . \end{equation} Thus $T_1 \subseteq J_+$ and $T_2\subseteq J_-.$ By the construction of $x',$ we see that ${\cal A}(x') \not =\emptyset.$ Thus $(T_1, T_2)$ given by (\ref{TT00}) satisfies that $T_1\cup T_2 \not = J_+ \cup J_-.$ We now further prove that $x'$ must be the unique optimal solution to the 1-bit basis pursuit (\ref{1bit-basis}). By Theorem \ref{Ness-Suff}, it is sufficient to prove that $\Phi^T$ has the RRSP at $x'$ and the matrix $$H(x') = \left[ \begin{array}{cc} \Phi_{\mathcal{A}(x')\cap J_+, S'_+} & \Phi_{\mathcal{A}(x')\cap J_+, S'_-} \\ \Phi_{\mathcal{A}(x')\cap J_-, S'_+} & \Phi_{\mathcal{A}(x')\cap J_-, S'_-} \\ \Phi_{J_0,S'_+} & \Phi_{J_0,S'_-} \end{array} \right] $$ has a full-column rank, where $S'_+=\{i: x'_i >0\}$ and $S'_- =\{i: x'_i<0\}.$ Indeed, let $S'_+, S'_-, T_1 $ and $T_2$ be defined as above. Since $x'$ is consistent with $y$ and satisfies that $\textrm{supp} (x') \subseteq \textrm{supp}(x^*), $ we see that $(S'_+, S'_-) \in P(y) $ satisfying $S'= S'_+\cup S'_- \subseteq S. $ Since $\left[ \begin{array}{c} \Phi_{T', S} \\ \Phi_{ T'', S} \\ \Phi_{J_0, S} \\ \end{array} \right] $ has a full-column rank, $ \left[ \begin{array}{c} \Phi_{T', S'} \\ \Phi_{ T'', S'} \\ \Phi_{J_0, S'} \\ \end{array} \right] $ must have a full-column rank. Note that {\small \begin{equation} \label{Union} T'= J_+ \setminus T_1 ={\cal A} (x')\cap J_+ , ~~ T''= J_- \setminus T_2 ={\cal A} (x') \cap J_- . \end{equation} } Thus $H(x') =\left[\begin{array}{c} \Phi_{J_+\backslash T_1,S'} \\ \Phi_{J_-\backslash T_2,S'} \\ \Phi_{J_0,S'} \end{array}\right]$ has a full-column rank. 
Since $\Phi^T$ has the S-RRSP of order $k$ with respect to $y, $ there exist vectors $\eta\in \mathcal{R}(\Phi^T)$ and $w\in \mathcal{F}(T_1,T_2)$ satisfying $\eta=\Phi^T w$ and $ \eta_i=1 $ for $ i\in S'_+,$ $ \eta_i=-1 $ for $ i\in S'_-,$ and $|\eta_i|<1 $ otherwise. The set $ \mathcal{F}(T_1,T_2) $ is defined by (\ref{F4}). From (\ref{TT00}), we see that the condition $w_{T_1\cup T_2}=0 $ in (\ref{F4}) coincides with the condition $w_i=0\textrm{ for } i\in{\tilde{\mathcal{A}}_+(x')} \cup {\tilde{\mathcal{A}}_-(x')}.$ This, together with (\ref{Union}), implies that $\mathcal{F}(T_1,T_2)$ coincides with ${\cal F} (x')$ defined by (\ref{FFFF}). Thus the RRSP of $\Phi^T$ at $x' $ holds (see Definition \ref{DefRRSP}). This, together with the full-column-rank property of $H(x') ,$ implies that $x'$ is the unique optimal solution to (\ref{1bit-basis}). Furthermore, suppose that $x^* $ is a $k$-sparse signal and $x^*$ is a sparsest signal consistent with $y.$ Since $x'$ is also consistent with $y$, it follows from $\textrm{supp} (x') \subseteq \textrm{supp} (x^*)$ that $\textrm{supp} (x') = \textrm{supp} (x^*).$ So $x'$ is also a sparsest vector consistent with $y.$ From the aforementioned construction process of $x',$ it is not difficult to see that the updating scheme $x'_S \leftarrow x_S'+\lambda ' d'$ does not change the sign of nonzero components of the vectors. In fact, when we vary the parameter $\lambda $ in $ x_S'+\lambda d'$ to determine the critical value $\lambda '$ which yields new active constraints, this value $\lambda '$ still ensures that the new vector $x_S'+\lambda ' d'$ is feasible to (\ref{1bit-basis}).
If a nonzero component of $x_S'+\lambda ' d',$ say the $i$th component, has a different sign from the corresponding nonzero component of $x'_S,$ then by continuity and by convexity of the feasible set of (\ref{1bit-basis}), there is a suitable $\lambda $ lying between zero and $\lambda' $ such that the $i$th component of $x_S'+\lambda d'$ is equal to zero. Thus $x_S'+\lambda d'$ is sparser than $x^*.$ Since $x_S'+\lambda d'$ is also feasible to (\ref{1bit-basis}), it is consistent with $y.$ This is a contradiction as $x^*$ is a sparsest signal consistent with $y.$ Therefore, we must have $\textrm{sign}(x') =\textrm{sign} (x^*).$ ~ $\Box$ \section{Uniform sign recovery} \vskip 0.05in Theorems \ref{Main36} and \ref{Main38} provide some conditions for the nonuniform recovery of the sign of an individual $k$-sparse signal. In this section, we develop some necessary and sufficient conditions for the uniform recovery of the sign of all $k$-sparse signals through a sensing matrix $\Phi.$ Let us first define $$ Y^k= \{y: y=\textrm{sign} (\Phi x), x\in R^n, \|x\|_0 \leq k\}.$$ For any two disjoint subsets $S_1, S_2 \subseteq \{1, \dots, n\}$ satisfying $|S_1\cup S_2|\leq k,$ there exists a $k$-sparse signal $x$ such that $S_1 =S_+(x)$ and $ S_2=S_- (x).$ Thus any such disjoint subsets $(S_1, S_2)$ must be in the set $P(y)$ for some $y\in Y^k.$ We now introduce the notion of the N-RRSP of order $k,$ which turns out to be a necessary condition for uniform sign recovery.
\vskip 0.05in \begin{Def} [N-RRSP of order $k$] \label{N-RRSP} The matrix $\Phi^T$ is said to satisfy the necessary restricted range space property (N-RRSP) of order $k$ if for any disjoint subsets $S_+, S_- $ of $\{1,\dots, n\}$ with $|S|\leq k,$ where $S=S_+\cup S_-,$ there exist $y\in Y^k$ and $(T_1, T_2) $ such that $(S_+, S_-)\in P(y), $ $T_1 \subseteq J_+ (y)$, $T_2 \subseteq J_-(y), $ $T_1 \cup T_2 \not = J_+(y) \cup J_-(y) $ and {\small $ \left[\begin{array}{c} \Phi_{J_+(y)\backslash T_1,S} \\ \Phi_{J_-(y)\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right] $ } has a full-column rank, and there is a vector $\eta \in {\cal R}(\Phi^T)$ satisfying the following properties: \begin{enumerate} \item [(i)] $ \eta_i=1 \text{ for }i\in S_+, ~ \eta_i=-1 \text{ for }i\in S_-, ~ |\eta_i|<1 \text{ otherwise}; $ \item[(ii)] $\eta=\Phi^T w$ for some $w\in \mathcal{F}(T_1,T_2)$ defined by (\ref{F4}) \end{enumerate} \end{Def} \vskip 0.05in The N-RRSP of order $k$ is a necessary condition for the uniform recovery of the sign of all $k$-sparse signals via 1-bit measurements and basis pursuit. \vskip 0.05in \begin{Thm}\label{Uniform-1} Let $\Phi \in R^{m\times n} $ be a given matrix and assume that for any $k$-sparse signal $x^*$, the sign measurements $\textrm{sign} (\Phi x^*)$ can be acquired. If the sign of any $k$-sparse signal $x^*$ can be exactly recovered by the 1-bit basis pursuit (\ref{1bit-basis}) with $ J_+= \{i: \textrm{sign}(\Phi x^*)_i=1\}, $ $ J_-=\{i:\textrm{sign}(\Phi x^*)_i=-1\} $ and $ J_0 =\{i: \textrm{sign}(\Phi x^*)_i= 0\} $ in the sense that (\ref{1bit-basis}) admits a unique optimal solution $\widetilde{x}$ satisfying $\textrm{sign} (\widetilde{x})=\textrm{sign} (x^*), $ then $\Phi^T $ must admit the N-RRSP of order $k.$ \end{Thm} \emph{Proof.} Let $x^*$ be an arbitrary $k$-sparse signal with $S_+= \{i: x^*_i>0\} ,$ $ S_-=\{i: x^*_i <0\}$ and $S=S_+\cup S_-.$ Clearly, $|S| \leq k.$ Let $y= \textrm{sign}(\Phi x^*)$ be the acquired measurements. 
Assume that $\widetilde{x}$ is the unique optimal solution to (\ref{1bit-basis}) and $\textrm{sign} (\widetilde{x})= \textrm{sign} (x^*) .$ Then we see that $y\in Y^k$, $ (S_+, S_-) \in P(y),$ and \begin{equation} \label{set-11}(S_+(\widetilde{x}), S_-(\widetilde{x})) = (S_+, S_-).\end{equation} It follows from Theorem \ref{Ness-Suff} that the uniqueness of $\widetilde{x}$ implies that the matrix $H(\widetilde{x}) $ admits a full-column rank and there exists a vector $\eta \in {\cal R}(\Phi^T)$ such that \begin{itemize} \item[(a)] $\eta_i=1$ for $ i \in S_+(\widetilde{x})$, $\eta_i=-1$ for $ i\in S_-(\widetilde{x}),$ and $|\eta_i|<1$ otherwise; \item[(b)] $\eta=\Phi^T w$ for some $w\in \mathcal{F} (\widetilde{x}) $ given as \begin{eqnarray*} & \mathcal{F}(\widetilde{x}) & = \{w \in R^m: w_i > 0\textrm{ for } i\in {\cal A}(\widetilde{x}) \cap J_+(y) , \nonumber \\ & & ~~~~ w_i < 0 \textrm{ for } i\in {\cal A}(\widetilde{x}) \cap J_- (y), \\ & & ~~~~ w_i=0 \textrm{ for } i\in \tilde{\mathcal{A}}_+(\widetilde{x}) \cup \tilde{\mathcal{A}}_-(\widetilde{x})\}. \end{eqnarray*} \end{itemize} Let $T_1= \widetilde{{\cal A}}_+(\widetilde{x}) \subseteq J_+ (y) $ and $ T_2 = \widetilde{{\cal A}}_-(\widetilde{x}) \subseteq J_- (y). $ Since $\widetilde{x}$ is an optimal solution to (\ref{1bit-basis}), we must have that $ {\cal A} (\widetilde{x}) \not = \emptyset, $ which implies that $T_1 \cup T_2 \not = J_+(y) \cup J_-(y).$ Clearly, \begin{equation} \label{set-22} {\cal A}(\widetilde{x}) \cap J_+ (y) = J_+ (y) \backslash T_1, ~ \mathcal{A}(\widetilde{x}) \cap J_-(y) = J_- (y) \backslash T_2. \end{equation} Therefore, the full-column-rank property of $H(\widetilde{x})$ implies that {\small $ \left[\begin{array}{c} \Phi_{J_+(y)\backslash T_1,S} \\ \Phi_{J_-(y)\backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right] $ } has a full-column rank. 
By (\ref{set-11}) and (\ref{set-22}), the above properties (a) and (b) coincide with the properties (i) and (ii) described in Definition \ref{N-RRSP}. Considering all possible $k$-sparse signals $x^*,$ which yield all possible disjoint subsets $S_+, S_-$ of $\{1,\dots, n\}$ satisfying $|S_+\cup S_-|\leq k,$ we conclude that $\Phi^T$ admits the N-RRSP of order $k.$ ~$\Box $ \vskip 0.05in It should be pointed out that for random matrices $\Phi,$ with probability 1 the optimal solution to the linear program (\ref{1bit-basis}) is unique. In fact, non-uniqueness of optimal solutions occurs only if the optimal face of the feasible set (which is a polyhedron) is parallel to the objective hyperplane, and the probability of this event is zero. This means that the uniqueness assumption on the optimal solution of (\ref{1bit-basis}) is very mild and holds almost surely. Thus when the sensing matrix $\Phi$ is randomly generated according to a probability distribution, with probability 1 the RRSP of $\Phi^T$ at the optimal solution $\widetilde{x}$ holds and the associated matrix $H(\widetilde{x})$ has a full-column rank. The N-RRSP of order $k$ is defined under such a mild assumption. Theorem \ref{Uniform-1} indicates that the N-RRSP of order $k$ is a necessary requirement for the uniform recovery of the sign of all $k$-sparse signals from 1-bit measurements via the linear program (\ref{1bit-basis}). Using a linear program as the decoding method necessarily yields a certain range space property like the RRSP, since such a property follows directly from the fundamental optimality conditions of linear programs. From the study in this paper, we conclude that if the sign of $k$-sparse signals can be exactly recovered from 1-bit measurements with a linear programming decoding method, then $\Phi^T$ must satisfy the N-RRSP of order $k$ or a variant of it.
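To make the decoding model concrete, the 1-bit basis pursuit (\ref{1bit-basis}) can be prototyped as an ordinary linear program. The sketch below is an illustration only, not part of the paper's theory: it assumes the constraint form $\Phi_{J_+,n}x \geq e_{J_+},$ $\Phi_{J_-,n}x \leq -e_{J_-},$ $\Phi_{J_0,n}x = 0$ implicit in the reformulation (\ref{9999}), and it uses SciPy's \texttt{linprog} together with the standard splitting $|x_i|\leq t_i$ of the $\ell_1$ objective.

```python
import numpy as np
from scipy.optimize import linprog


def one_bit_basis_pursuit(Phi, y):
    """Sketch of the 1-bit basis pursuit decoder:
        min ||x||_1  s.t.  Phi_{J+} x >= e,  Phi_{J-} x <= -e,  Phi_{J0} x = 0,
    where (J+, J-, J0) is the sign partition induced by y = sign(Phi x*).
    Variables are z = [x, t] with the standard splitting |x_i| <= t_i."""
    m, n = Phi.shape
    Jp, Jm, J0 = y > 0, y < 0, y == 0
    I = np.eye(n)
    c = np.r_[np.zeros(n), np.ones(n)]  # minimize e^T t
    A_ub = np.vstack([
        np.hstack([I, -I]),                               #  x - t <= 0
        np.hstack([-I, -I]),                              # -x - t <= 0
        np.hstack([-Phi[Jp], np.zeros((Jp.sum(), n))]),   # Phi_{J+} x >= e
        np.hstack([Phi[Jm], np.zeros((Jm.sum(), n))]),    # Phi_{J-} x <= -e
    ])
    b_ub = np.r_[np.zeros(2 * n), -np.ones(Jp.sum() + Jm.sum())]
    A_eq = np.hstack([Phi[J0], np.zeros((J0.sum(), n))]) if J0.any() else None
    b_eq = np.zeros(J0.sum()) if J0.any() else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    # Feasibility holds since a sufficiently large multiple of x* is feasible.
    assert res.status == 0
    return res.x[:n]
```

The sketch only guarantees that the returned minimizer satisfies the sign constraints; whether $\textrm{sign}(\widetilde{x})=\textrm{sign}(x^*)$ holds is exactly what the N-RRSP and S-RRSP conditions govern.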
At the moment, it is not clear whether this necessary condition is also sufficient for exact sign recovery in the 1-bit CS setting. In classic CS, a sensing matrix is required to admit a general positioning property in order to achieve the uniform recovery of $k$-sparse signals. This property is reflected in such concepts as the RIP, NSP and RSP. Similarly, in order to achieve the uniform recovery of the sign of $k$-sparse signals in the 1-bit CS setting, the matrix should admit a certain general positioning property as well. Since the N-RRSP is a necessary property for uniform sign recovery, a sufficient sign recovery condition can be developed by slightly enhancing this necessary property, i.e., by considering all possible sign measurements $y\in Y^k$ together with the pairs $(T_1 , T_2)$ described in Definition \ref{N-RRSP}. This naturally leads to the next definition. \vskip 0.05in \begin{Def} [S-RRSP of order $k$] \label{S-RRSP} The matrix $\Phi^T$ is said to satisfy the sufficient restricted range space property (S-RRSP) of order $k$ if for any disjoint subsets $S_+, S_-$ of $\{1, \dots, n\} $ with $|S|\leq k,$ where $S=S_+\cup S_-,$ and for any $y\in Y^k$ such that $(S_+, S_-)\in P(y) $, there exist $T_1$ and $T_2$ such that $T_1 \subseteq J_+ (y),$ $T_2 \subseteq J_-(y), $ $ T_1 \cup T_2 \not = J_+(y) \cup J_-(y) $ and {\small $ \left[\begin{array}{c} \Phi_{J_+(y)\backslash T_1,S} \\ \Phi_{J_- (y) \backslash T_2,S} \\ \Phi_{J_0,S} \end{array}\right] $ } has a full-column rank, and for any such pair $(T_1, T_2), $ there is a vector $\eta \in {\cal R}(\Phi^T)$ satisfying the following properties: \begin{enumerate} \item [(i)] $ \eta_i=1 \text{ for }i\in S_+, ~ \eta_i=-1 \text{ for }i\in S_-, ~ |\eta_i|<1 \text{ otherwise}; $ \item[(ii)] $\eta=\Phi^T w$ for some $w\in \mathcal{F}(T_1,T_2) $ defined by (\ref{F4}). \end{enumerate} \end{Def} \vskip 0.05in The above concept, which takes into account all possible vectors $y,$ is stronger than Definition \ref{SRRSP-y}.
If a matrix has the S-RRSP of order $k,$ it must have the S-RRSP of order $k$ with respect to any individual vector $y \in Y^k.$ The S-RRSP of order $k$ makes it possible to recover the sign of all $k$-sparse signals from 1-bit measurements with (\ref{1bit-basis}), as shown in the next theorem. \vskip 0.05in \begin{Thm}\label{Main2} Suppose that $\Phi^T$ has the S-RRSP of order $ k $ and that for any $k$-sparse signal $x^*,$ the sign measurements $ \textrm{sign}(\Phi x^*)$ can be acquired. Then the 1-bit basis pursuit (\ref{1bit-basis}) with $ J_+= \{i: \textrm{sign}(\Phi x^*)_i=1\}, $ $ J_-=\{i:\textrm{sign}(\Phi x^*)_i=-1\} $ and $ J_0 =\{i: \textrm{sign}(\Phi x^*)_i= 0\} $ has a unique optimal solution $\widetilde{x} $ satisfying that $\textrm{supp} (\widetilde{x}) \subseteq \textrm{supp}(x^*). $ Furthermore, for any $k$-sparse signal $x^*$ which is a sparsest signal satisfying \begin{equation} \label{m-system}\textrm{sign}(\Phi x)= \textrm{sign} (\Phi x^*),\end{equation} the sign of $x^*$ can be exactly recovered by (\ref{1bit-basis}), i.e., the unique optimal solution $ \widetilde{x} $ of (\ref{1bit-basis}) satisfies that $\textrm{sign} (\widetilde{x})=\textrm{sign} (x^*).$ \end{Thm} \vskip 0.05in \emph{Proof.} Let $x^*$ be an arbitrary $k$-sparse signal, and let measurements $y=\textrm{sign} (\Phi x^*)$ be taken, which determines a partition $(J_+, J_-, J_0)$ of $\{1, \dots, m\}$ as (\ref{JJJJ}). 
Since $\Phi^T$ has the S-RRSP of order $k$, it has the S-RRSP of order $k$ with respect to this vector $y.$ By Theorem \ref{Main38}, the problem (\ref{1bit-basis}) has a unique optimal solution, denoted by $\widetilde{x},$ which satisfies $\textrm{supp}(\widetilde{x}) \subseteq \textrm{supp} (x^*).$ Furthermore, if $x^*$ is a sparsest signal satisfying the system (\ref{m-system}), then by Theorem \ref{Main38} again, we must have $\textrm{sign}(\widetilde{x}) = \textrm{sign} (x^*),$ and hence the sign of $x^*$ can be exactly recovered by (\ref{1bit-basis}). ~ $\Box$ \vskip 0.05in The above theorem indicates that, under the S-RRSP of order $k,$ if $x^*$ is a sparsest solution to (\ref{m-system}), then the sign of $x^*$ can be exactly recovered by (\ref{1bit-basis}). If $x^*$ is not a sparsest solution to (\ref{m-system}), then at least part of the support of $x^*$ can be exactly recovered by (\ref{1bit-basis}) in the sense that $\textrm{supp} (\widetilde{x}) \subseteq \textrm{supp} (x^*),$ where $\widetilde{x}$ is the optimal solution to (\ref{1bit-basis}). The study in this paper indicates that the models (\ref{l0LP}) and (\ref{1bit-basis}) make it possible to establish a sign recovery theory for $k$-sparse signals from 1-bit measurements. It is worth noting that these models also make it possible to extend reweighted $\ell_1$-algorithms (e.g., \cite{CWB08, ZL2012, SS2013, ZK2014}) to 1-bit CS problems. The RIP and NSP recovery conditions are widely assumed in classic CS scenarios. Recent studies have shown that it is NP-hard to compute the RIP and NSP constants of a given matrix (\cite{TP14, BDMS13}). The RSP recovery condition introduced in \cite{YBZ2013} is equivalent to the NSP, since both are necessary and sufficient conditions for the uniform recovery of all $k$-sparse signals.
The NSP characterizes the uniform recovery from the perspective of the null space of a sensing matrix, while the RSP characterizes the uniform recovery from its orthogonal space, i.e., the range space of the transposed sensing matrix. So it is also difficult to certify the RSP of a given matrix. Clearly, the N-RRSP and S-RRSP are more complex than the standard RSP, and thus they are hard to certify as well. Note that the existence of a matrix with the RSP follows directly from the fact that any matrix with the RIP of order $2k$ or the NSP of order $2k$ must admit the RSP of order $k$ (see \cite{YBZ2013}). In the 1-bit CS setting, however, the analogous theory is still under development. The existence analysis for S-RRSP matrices has not yet been properly addressed at the current stage. \section{Proof of Theorem \ref{Ness-Suff}} We now prove Theorem \ref{Ness-Suff}, which provides a complete characterization of the uniqueness of solutions to the 1-bit basis pursuit (\ref{1bit-basis}). We start by developing necessary conditions. \subsection{Necessary condition (I): Range space property} By introducing $u, v, t\in R^n_+,$ where $t$ satisfies $|x_i|\leq t_i$ for $i=1, \dots,n,$ we can write (\ref{8888}) as the linear program \begin{eqnarray}\label{9999} & \min & e^T t \nonumber \\ & \textrm{s.t.}& x +u= t, ~ -x+v= t, ~\Phi_{J_+,n} x -\alpha= e_{J_+}, \nonumber \\ & & \Phi_{J_-,n} x +\beta=- e_{J_-}, ~\Phi_{J_0,n} x=0, \\ & & (t,~u,~v,~\alpha,~\beta)\geq 0. \nonumber \end{eqnarray} Clearly, we have the following statement. \vskip 0.05in \begin{Lem}\label{LemLP2LP3} (i) For any optimal solution $(x^*,$ $t^*,u^*,$ $v^*,\alpha^*,$ $\beta^*)$ of (\ref{9999}), we have that $ t^*=|x^*|, $ $ u^*= |x^*|-x^* , $ $ v^* =|x^*|+x^* $ and $(\alpha^*, \beta^*)$ is given by (\ref{Lemma31}).
(ii) $x^*$ is the unique optimal solution to (\ref{1bit-basis}) if and only if $(x,t,u,v,\alpha,\beta)=(x^*,|x^*|,|x^*|-x^*,|x^*|+x^*,\alpha^*,\beta^*)$ is the unique optimal solution to (\ref{9999}), where $(\alpha^*, \beta^*)$ is given by (\ref{Lemma31}). \end{Lem} \vskip 0.05in Any linear program can be written in the form $\min\{c^Tz: Az=b, z\geq 0\},$ to which the Lagrangian dual problem is given by $\max\{b^T y: A^T y\leq c\}$ (see, e.g., \cite{D63}). So it is straightforward to verify that the dual problem of (\ref{9999}) is given as {\small \begin{align} &\textrm{(DLP)}& \max~ &~ e_{J_+}^T h_3- e_{J_-}^T h_4 \nonumber & \\ & & \textrm{s.t.}~~ & h_1-h_2+(\Phi_{J_+,n})^Th_3+(\Phi_{J_-,n})^Th_4 \nonumber \\ & & & +(\Phi_{J_0,n})^Th_5 = 0,& \nonumber \\ & & &-h_1-h_2\leq e,& \label{cc1} \\ & & &h_1\leq0,& \label{cc2} \\ & & &h_2\leq0,& \label{cc3}\\ & & &-h_3\leq0,& \label{cc4} \\ & & & h_4\leq0.& \label{cc5} \end{align} } The problem (DLP) is always feasible in the sense that there exists a point, for instance $(h_1, \dots, h_5)= (0, \dots, 0),$ that satisfies all constraints. Furthermore, let $ s^{(1)},\dots, s^{(5)}$ be the nonnegative slack variables associated with the constraints (\ref{cc1}) through (\ref{cc5}), respectively. Then (DLP) can also be written as {\small \begin{eqnarray} & \max~ &~ e_{J_+}^T h_3- e_{J_-}^T h_4 \nonumber \\ & \textrm{s.t.}~~ & h_1-h_2+(\Phi_{J_+,n})^Th_3+(\Phi_{J_-,n})^Th_4 \nonumber \\ & & +(\Phi_{J_0,n})^Th_5 = 0, \label{cc0} \\ & & s^{(1)}-h_1-h_2= e,\label{slack1}\\ & & s^{(2)}+h_1=0,\label{slack2}\\ & & s^{(3)}+h_2=0,\label{slack3}\\ & & s^{(4)}-h_3=0,\label{slack4}\\ & & s^{(5)}+h_4=0,\label{slack5}\\ & & s^{(1)}, \dots , s^{(5)} \geq 0. \nonumber \end{eqnarray}} We now prove that if $x^*$ is the unique optimal solution to (\ref{1bit-basis}), then the range space $\mathcal{R}(\Phi^T)$ must satisfy certain properties.
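As an aside, the derivation of (DLP) can be checked numerically. The following sketch (an illustration under the stated constraint forms, not part of the paper) solves the $\ell_1$ problem in its equivalent $(x,t)$ form and solves (DLP) on a random instance with $J_0=\emptyset$ (so the block $h_5$ is absent); strong duality requires the two optimal values to coincide. The dimensions and seed are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 25, 8
Phi = rng.standard_normal((m, n))
x_star = np.zeros(n)
x_star[[1, 4]] = [2.0, -1.5]
y = np.sign(Phi @ x_star)
Jp, Jm = np.flatnonzero(y > 0), np.flatnonzero(y < 0)  # J0 is empty almost surely
p, q = len(Jp), len(Jm)
I = np.eye(n)

# Primal: min ||x||_1 s.t. Phi_{J+} x >= e, Phi_{J-} x <= -e; variables z = [x, t]
A_ub = np.block([
    [I, -I],                          #  x - t <= 0
    [-I, -I],                         # -x - t <= 0
    [-Phi[Jp], np.zeros((p, n))],     # Phi_{J+} x >= e
    [Phi[Jm], np.zeros((q, n))],      # Phi_{J-} x <= -e
])
b_ub = np.r_[np.zeros(2 * n), -np.ones(p + q)]
c = np.r_[np.zeros(n), np.ones(n)]
primal = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(None, None)] * n + [(0, None)] * n, method="highs")

# Dual (DLP): variables h = [h1, h2, h3, h4]; the h5 block is absent since J0 = {}
A_eq = np.hstack([I, -I, Phi[Jp].T, Phi[Jm].T])  # h1 - h2 + Phi^T h3 + Phi^T h4 = 0
b_eq = np.zeros(n)
A_ub_d = np.hstack([-I, -I, np.zeros((n, p + q))])  # -h1 - h2 <= e
b_ub_d = np.ones(n)
c_d = np.r_[np.zeros(2 * n), -np.ones(p), np.ones(q)]  # maximize e^T h3 - e^T h4
bounds_d = [(None, 0)] * (2 * n) + [(0, None)] * p + [(None, 0)] * q
dual = linprog(c_d, A_ub=A_ub_d, b_ub=b_ub_d, A_eq=A_eq, b_eq=b_eq,
               bounds=bounds_d, method="highs")

# Strong duality: the optimal values coincide (dual.fun carries a minus sign)
gap = primal.fun - (-dual.fun)
```

On such an instance both linear programs are solvable and the duality gap vanishes up to solver tolerance, consistent with (DLP) being the dual of (\ref{9999}).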
\vskip 0.05in \begin{Lem} \label{RSPThm} If $x^*$ is the unique optimal solution to (\ref{1bit-basis}), then there exist vectors $h_1,h_2 \in R^n$ and $ w\in R^m$ satisfying {\small \begin{eqnarray} \label{RRSP-CON} \left\{ {\small \begin{array}{ll} h_2-h_1=\Phi^Tw, & \\ (h_1)_i=-1,~(h_2)_i=0 & \text{for } x_i^*>0, \\ (h_1)_i=0,~(h_2)_i=-1 & \text{for } x_i^*<0,\\ (h_1)_i,(h_2)_i<0,(h_1+h_2)_i>-1 & \small \text{for } x^*_i=0,\\ \begin{array} {cl} w_i> 0 & \textrm{ for }i\in \mathcal{A}(x^*)\cap J_+, \\ w_i< 0 & \textrm{ for }i\in \mathcal{A}(x^*)\cap J_-, \\ w_i=0 & \textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*) \cup \tilde{\mathcal{A}}_-(x^*). \end{array} & \\ \end{array} } \right. \end{eqnarray} } \end{Lem} \emph{Proof. } Assume that $x^*$ is the unique optimal solution to (\ref{1bit-basis}). By Lemma \ref{LemLP2LP3}, \begin{eqnarray} \label{sol**} (x,t,u,v,\alpha,\beta)=(x^*,|x^*|,|x^*|-x^*,|x^*|+x^*,\alpha^*,\beta^*) \end{eqnarray} is the unique optimal solution to (\ref{9999}), where $(\alpha^*, \beta^*) $ is given by (\ref{Lemma31}). By the strict complementarity theory of linear programs (see, e.g., Goldman and Tucker \cite{GT56}) , there exists a solution $(h_1, \dots, h_5)$ of (DLP) such that the associated vectors $ s^{(1)},\dots, s^{(5)} $ determined by (\ref{slack1})--(\ref{slack5}) and the vectors $(t, u,v, \alpha, \beta) $ given by (\ref{sol**}) are strictly complementary, i.e., these vectors satisfy the conditions \begin{equation} \label{strict1} t^Ts^{(1)}=u^Ts^{(2)}= v^Ts^{(3)}=\alpha^Ts^{(4)}= \beta^Ts^{(5)}=0 \end{equation} and \begin{equation}\label{strict2} \left\{\begin{array}{l} t+s^{(1)}>0, ~ u+s^{(2)}>0, ~ v+s^{(3)}>0, \\ \alpha+s^{(4)}>0, ~\beta+s^{(5)}>0. \end{array} \right. 
\end{equation} For the above-mentioned solution $(h_1, \dots, h_5)$ of (DLP), let $w\in R^m $ be the vector defined by $w_{J_+} = h_3, w_{J_-}=h_4,$ and $ w_{J_0}=h_5.$ Then it follows from (\ref{cc0}) that \begin{equation}\label{RangeCond} h_2-h_1= (\Phi_{J_+,n})^Th_3+(\Phi_{J_-,n})^Th_4+(\Phi_{J_0,n})^Th_5 =\Phi^T w. \end{equation} From (\ref{sol**}), we see that the solution of (\ref{9999}) satisfies the following properties: \begin{eqnarray*} \begin{array}{cccl} & t_i=x_i^*>0, ~ u_i=0, ~ v_i=2x^*_i>0 &\text{for } x_i^*>0,\\ & t_i=|x_i^*|>0, ~ u_i=2|x_i^*|>0, ~ v_i=0 &\text{for } x_i^*<0,\\ & t_i=0, ~ u_i=0, ~ v_i=0 &\text{for } x^*_i=0. \end{array} \end{eqnarray*} Thus, from (\ref{strict1}) and (\ref{strict2}), it follows that \begin{eqnarray*} \begin{array}{llll} s^{(1)}_i=0,& s^{(2)}_i>0,& s^{(3)}_i=0 &\text{for } x_i^*>0,\\ s^{(1)}_i=0,& s^{(2)}_i=0,& s^{(3)}_i>0 &\text{for } x_i^*<0,\\ s^{(1)}_i>0,& s^{(2)}_i>0,& s^{(3)}_i>0 &\text{for } x^*_i=0. \end{array} \end{eqnarray*} From (\ref{slack1}), (\ref{slack2}) and (\ref{slack3}), the above relations imply that \begin{eqnarray*} \begin{array}{llll} (h_1+h_2)_i=-1,& (h_1)_i<0,&(h_2)_i=0 &\text{ for } x_i^*>0,\\ (h_1+h_2)_i=-1,& (h_1)_i=0,&(h_2)_i<0 &\text{ for } x_i^*<0,\\ (h_1+h_2)_i>-1,& (h_1)_i<0,&(h_2)_i<0 &\text{ for } x^*_i=0. \end{array} \end{eqnarray*} From (\ref{slack4}) and (\ref{slack5}), we see that $ s^{(4)}=h_3\geq 0 $ and $ s^{(5)}=-h_4\geq 0. $ Let $\pi(\cdot)$ and $\varrho (\cdot) $ be defined as (\ref{pi}) and (\ref{varrho}), respectively. 
It follows from (\ref{Lemma31}), (\ref{strict1}) and (\ref{strict2}) that \begin{eqnarray*} (h_3)_{\pi(i)} & = & s^{(4)}_{\pi(i)} >0 \textrm{ for } i\in \mathcal{A}(x^*)\cap J_+, \\ (h_3)_{\pi(i)} & = & s^{(4)}_{\pi(i)}=0 \textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*), \\ (-h_4)_{\varrho(i)} & = & s^{(5)}_{\varrho(i)} >0 \textrm{ for } i\in \mathcal{A}(x^*)\cap J_-, \\ (-h_4)_{\varrho(i)} & = & s^{(5)}_{\varrho(i)} =0 \textrm{ for } i\in \tilde{\mathcal{A}}_-(x^*). \end{eqnarray*} By the definition of $w$ (i.e., $w_{J_+} = h_3 ,$ $ w_{J_-}=h_4 $ and $w_{J_0}=h_5$), the above conditions imply that {\small \begin{eqnarray*} w_i & = & (h_3)_{\pi(i)} >0 \textrm{ for } i\in \mathcal{A}(x^*)\cap J_+, \\ w_i & = & (h_3)_{\pi(i)}=0 \textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*), \\ w_i & = & (h_4)_{\varrho(i)} <0 \textrm{ for } i\in \mathcal{A}(x^*)\cap J_-, \\ w_i & = & (h_4)_{\varrho(i)} =0 \textrm{ for } i\in \tilde{\mathcal{A}}_-(x^*). \end{eqnarray*} } Thus, $h_1,h_2 $ and $ w $ satisfy (\ref{RangeCond}) and the properties: {\small \begin{align*} \begin{array}{cl} (h_1)_i=-1,~(h_2)_i=0 & \text{ for } x_i^*>0, \\ (h_1)_i=0,~(h_2)_i=-1 & \text{ for }~x_i^*<0,\\ (h_1)_i,(h_2)_i<0, ~(h_1+h_2)_i>-1 &\text{ for }~x^*_i=0,\\ w_i>0&\textrm{ for }i\in \mathcal{A}(x^*)\cap J_+,\\ w_i=0&\textrm{ for } i\in \tilde{\mathcal{A}}_+(x^*),\\ w_i<0&\textrm{ for }i\in \mathcal{A}(x^*)\cap J_-,\\ w_i=0&\textrm{ for }i\in \tilde{\mathcal{A}}_-(x^*). \end{array} \end{align*} } Therefore, condition (\ref{RRSP-CON}) is a necessary condition for $x^*$ to be the unique optimal solution to (\ref{1bit-basis}). ~ $ \Box $ \vskip 0.05in It should be pointed out that the uniqueness of $x^*$ implies that $x^*$ is the strictly complementary solution. This leads to the condition (\ref{RRSP-CON}) in which all inequalities hold strictly. 
If $x^*$ is not the unique optimal solution of (\ref{1bit-basis}), then $x^*$ is not necessarily a strictly complementary solution, and thus (\ref{RRSP-CON}) does not necessarily hold. We now present an equivalent statement for (\ref{RRSP-CON}) as follows. \vskip 0.05in \begin{Lem}\label{Lemyeta} Let $x^*\in R^n$ be a given vector satisfying the constraints of (\ref{1bit-basis}). There exist vectors $h_1,h_2$ and $w$ satisfying (\ref{RRSP-CON}) if and only if there exists a vector $\eta\in\mathcal{R}(\Phi^T) $ satisfying the following two conditions: \begin{itemize} \item[(i)] $\eta_i=1$ for $x^*_i>0$, $\eta_i=-1$ for $x^*_i<0,$ and $|\eta_i|<1$ for $x^*_i=0$; \item[(ii)] $\eta=\Phi^T w$ for some $w\in \mathcal{F} (x^*) $ defined as (\ref{FFFF}). \end{itemize} \end{Lem} It is straightforward to verify this lemma, so its proof is omitted here. In view of Definition \ref{DefRRSP}, combining Lemmas \ref{RSPThm} and \ref{Lemyeta} yields the following result. \vskip 0.05in \begin{Cor} \label{Cor54} If $x^*$ is the unique optimal solution to (\ref{1bit-basis}), then the RRSP of $\Phi^T$ at $x^*$ holds. \end{Cor} \vskip 0.05in The RRSP at $x^*$ is not sufficient to ensure the uniqueness of $x^*.$ We need to develop another necessary condition (called the full-column-rank property). \subsection{Necessary condition (II): Full column rank} Assume that $x^*$ is the unique optimal solution to (\ref{1bit-basis}). As before, denote $ S_+=\{i: x^*_i>0\}$ and $ S_-=\{i: x^*_i<0\}. $ We have the following lemma. \vskip 0.05in \begin{Lem}\label{Lemma37} If $x^*$ is the unique optimal solution to (\ref{1bit-basis}), then $H(x^*),$ defined by (\ref{FRPmatrix}), has a full-column rank. \end{Lem} \vskip 0.05in {\it Proof.} Assume, on the contrary, that $H(x^*)$ has linearly dependent columns. Then there exists a vector $d= \left[ \begin{array}{c} u \\ v \\ \end{array} \right] \neq 0, $ where $ u\in R^{|S_+|}$ and $ v\in R^{|S_-|},$ such that $ H(x^*) d= 0.
$ Since $x^*$ is the unique optimal solution to (\ref{1bit-basis}), there exist nonnegative $\alpha^*$ and $\beta^*,$ determined by (\ref{Lemma31}), such that $(x^*,\alpha^*,\beta^*)$ is the unique optimal solution to (\ref{8888}) with the least objective value $\|x^*\|_1$. Note that $(x^*, \alpha^*, \beta^*)$ satisfies $$ \Phi_{J_+, n} x^* -\alpha^* = e_{J_+}, ~ \Phi_{J_-, n} x^* +\beta^* = - e_{J_-}, ~ \Phi_{J_0, n} x^* = 0. $$ Similar to the proof of Lemma \ref{Lemma41}, eliminating the zero components of $x^*$, $\alpha^*$ and $\beta^*$ from the above system yields the same system as (\ref{system1}). Similarly, we define $x(\lambda) \in R^n$ as $ x(\lambda)_{S_+}= x^*_{S_+}+\lambda u, $ $ x(\lambda)_{S_-}= x^*_{S_-} + \lambda v, $ and $ x(\lambda)_i=0 \textrm{ for }i \notin S_+ \cup S_-. $ We see that for all sufficiently small $|\lambda|,$ $(x(\lambda)_{S_+}, x(\lambda)_{S_-})$ satisfies the conditions (\ref{H1})--(\ref{H3}). In other words, there exists a small number $\delta >0$ such that for any $\lambda\neq 0$ with $ |\lambda|\in (0, \delta), $ the vector $ x(\lambda) $ is feasible to (\ref{1bit-basis}). In particular, choose $\lambda^* \not=0 $ such that $| \lambda^* |\in (0,\delta),$ $x^*_{S_+}+\lambda^* u >0,$ $x^*_{S_-} + \lambda^* v<0 $ and \begin{equation} \label{ll} \lambda^*(e_{S_+}^Tu-e_{S_-}^Tv)\leq 0. \end{equation} Then we see that $ x(\lambda^*) \not= x^* $ since $\lambda^* \not=0 $ and $(u,v) \not =0. $ Moreover, we have \begin{eqnarray*} \| x(\lambda^* )\|_1&=&e_{S_+}^T (x^*_{S_+}+\lambda^* u)-e_{S_-}^T (x^*_{S_-}+\lambda^* v)\\ &=&e_{S_+}^T x^*_{S_+}-e_{S_-}^Tx^*_{S_-}+\lambda^* e_{S_+}^T u -\lambda^* e_{S_-}^Tv\\ &=&\|x^*\|_1+\lambda^* (e_{S_+}^Tu -e_{S_-}^Tv) \\ & \leq & \|x^*\|_1, \end{eqnarray*} where the inequality follows from (\ref{ll}). As $\|x^*\|_1$ is the least objective value of (\ref{1bit-basis}), it follows that $x(\lambda^*)$ is also an optimal solution to this problem, contradicting the uniqueness of $x^*.
$ Hence, $H(x^*)$ must have a full-column rank. ~ $ \Box $ Combining Corollary \ref{Cor54} and Lemma \ref{Lemma37} yields the desired necessary conditions. \vskip 0.05in \begin{Thm}\label{necc} If $x^*$ is the unique optimal solution to (\ref{1bit-basis}), then $ H (x^*), $ given by (\ref{FRPmatrix}), has a full-column rank and the RRSP of $\Phi^T$ at $x^*$ holds. \end{Thm} \subsection{Sufficient conditions} We now prove that the converse of Theorem \ref{necc} is also valid, i.e., the RRSP of $\Phi^T$ at $x^*$ combined with the full-column-rank property of $H(x^*)$ is a sufficient condition for the uniqueness of $x^*. $ We start with a property of (DLP). \vskip 0.05in \begin{Lem}\label{LemDualOpt} Suppose that $x^*$ satisfies the constraints of (\ref{1bit-basis}). If the vector $(h_1,h_2,w)\in R^n \times R^n \times R^m $ satisfies that {\small \begin{equation} \label {Lemma310} \left\{ \begin{array}{ll} (h_1)_i=-1,~(h_2)_i=0 & \text{ for }x^*_i>0,\\ (h_1)_i=0,~(h_2)_i=-1 & \text { for }x^*_i<0, \\ (h_1)_i<0, ~(h_2)_i<0, ~(h_1+h_2)_i>-1 & \text{ for }x^*_i=0, \\ h_2-h_1 = \Phi^T w,\\ w_{J_+} \geq 0,\\ w_{J_-}\leq 0, \\ w_i=0\textrm{ for }~i\in\tilde{\mathcal{A}}_+(x^*) \cup \tilde{\mathcal{A}}_-(x^*), \end{array}\right. \end{equation} } then the vector $(h_1,h_2,h_3,h_4,h_5),$ with $h_3=w_{J_+}, h_4=w_{J_-}$ and $ h_5=w_{J_0},$ is an optimal solution to (DLP) and $x^*$ is an optimal solution to (\ref{1bit-basis}). \end{Lem} \vskip 0.05in This lemma follows directly from the optimality theory of linear programs by verifying that the dual optimal value at $(h_1,h_2,h_3,h_4,h_5)$ is equal to $\|x^*\|_1.$ The proof is omitted. We now prove the desired sufficient condition for the uniqueness of optimal solutions of (\ref{1bit-basis}). \vskip 0.05in \begin{Thm}\label{suff} Let $x^*$ satisfy the constraints of the problem (\ref{1bit-basis}). 
If the RRSP of $\Phi^T$ at $x^*$ holds and $H(x^*),$ defined by (\ref{FRPmatrix}), has a full-column rank, then $x^*$ is the unique optimal solution to (\ref{1bit-basis}). \end{Thm} \vskip 0.05in {\it Proof.} By the assumption of the theorem, the RRSP of $\Phi^T$ at $x^*$ holds. Then by Lemma \ref{Lemyeta}, there exists a vector $(h_1,h_2,w)\in R^n\times R^n\times R^m$ satisfying (\ref{RRSP-CON}), which implies that condition (\ref{Lemma310}) holds. As $x^*$ is feasible to (\ref{1bit-basis}), by Lemma \ref{LemDualOpt}, $(h_1,h_2,h_3,h_4,h_5)$ with $h_3=w_{J_+}, h_4=w_{J_-}$ and $ h_5=w_{J_0}$ is an optimal solution to (DLP). At this solution, let the slack vectors $s^{(1)},\dots,s^{(5)}$ be given as (\ref{slack1})--(\ref{slack5}). Also, from Lemma \ref{LemDualOpt}, $x^*$ is an optimal solution to (\ref{1bit-basis}). Thus by Lemma \ref{LemLP2LP3}, $(x,t,u,v,\alpha,\beta)= (x^*,|x^*|,|x^*|-x^*,|x^*|+x^*,\alpha^*,\beta^*), $ where $(\alpha^*, \beta^*) $ is given by (\ref{Lemma31}), is an optimal solution to (\ref{9999}). We now further show that $x^*$ is the unique solution to (\ref{1bit-basis}). The vector $(x^*, \alpha^*, \beta^*)$ satisfies the system $ \Phi_{J_+, n} x^* -\alpha^* = e_{J_+}, \Phi_{J_-, n} x^* +\beta^* = - e_{J_-} $ and $ \Phi_{J_0, n} x^* = 0. $ As shown in the proof of Lemma \ref{Lemma37}, removing the zero components of $ (x^*, \alpha^*, \beta^*) $ from the above system yields \begin{equation}\label{SL1} H(x^*) \left[ \begin{array}{c} x^*_{S_+} \\ x^*_{S_-} \end{array} \right] = \left[ \begin{array}{c} e_{\mathcal{A}(x^*)\cap J_+} \\ - e_{\mathcal{A}(x^*)\cap J_-} \\ 0 \end{array} \right]. \end{equation} Let $(\widetilde{x}, \widetilde{t},\widetilde{u},\widetilde{v},\widetilde{\alpha},\widetilde{\beta}) $ be an arbitrary optimal solution to (\ref{9999}). By Lemma \ref{LemLP2LP3}, it must hold that $ \widetilde{t} = |\widetilde{x}| ,\widetilde{u} =|\widetilde{x}|- \widetilde{x} $ and $ \widetilde{v} = |\widetilde{x}|+ \widetilde{x}. 
$ By the complementary slackness property of linear programs (see, e.g., \cite{GT56, D63}), the nonnegative vectors $ (\widetilde{t},\widetilde{u},\widetilde{v},\widetilde{\alpha},\widetilde{\beta}) $ and $ (s^{(1)}, \dots, s^{(5)}) $ are complementary, i.e., \begin{equation} \label{comp} \widetilde{t}^Ts^{(1)}=\widetilde{u}^Ts^{(2)}= \widetilde{v}^Ts^{(3)} =\widetilde{\alpha}^Ts^{(4)}= \widetilde{\beta}^Ts^{(5)}=0. \end{equation} As $(h_1,h_2,w)$ satisfies (\ref{RRSP-CON}), the vector $(h_1, h_2)$ satisfies $ (h_1)_i=-1<0 $ for $ x_i^*>0, $ $ (h_2)_i=-1<0 $ for $ x_i^*<0, $ and $ (h_1+h_2)_i>-1, $ $ (h_1)_i<0$ and $ (h_2)_i<0 $ for $ x^*_i=0. $ By the choice of $(h_1,h_2)$ and $(s^{(1)}, \dots, s^{(5)})$, we see that the following components of the slack variables are positive: \begin{eqnarray*} \begin{array}{cl} s^{(1)}_i=1+(h_1+h_2)_i>0 & \text{ for } x^*_i=0,\\ s^{(4)}_{\pi(i)}=(h_3)_{\pi(i)}= w_i>0 &\textrm{ for } i\in {\cal A} (x^*)\cap J_+ ,\\ s^{(5)}_{\varrho(i)}=-(h_4)_{\varrho(i)}= -w_i>0 & \textrm{ for } i\in {\cal A}(x^*)\cap J_- . \end{array} \end{eqnarray*} These conditions, together with (\ref{comp}), imply that {\small \begin{eqnarray}\label{eqn9} \left\{\begin{array}{cl} \widetilde{t}_i=0 & \text{ for } x^*_i=0, \\ \widetilde{\alpha}_{\pi(i)}= 0 &\textrm{ for }i\in {\cal A} (x^*)\cap J_+ ,\\ \widetilde{\beta}_{\varrho(i)}= 0 &\textrm{ for }i\in {\cal A} (x^*)\cap J_- . \end{array} \right. \end{eqnarray} } We still use the symbols $S_+ = \{i: x^*_i >0\} $ and $ S_-= \{i: x^*_i<0\}.$ Since $\widetilde{t}= |\widetilde{x}|,$ the first relation in (\ref{eqn9}) implies that $ \widetilde{x}_i= 0$ for all $i \notin S_+\cup S_-. $ Note that $$ \Phi_{J_+, n} \widetilde{x} -\widetilde{\alpha} = e_{J_+}, ~ \Phi_{J_-, n} \widetilde{x} +\widetilde{\beta} = - e_{J_-}, ~ \Phi_{J_0, n} \widetilde{x} = 0.
$$ Since $\widetilde{x}_i =0$ for all $ i \notin S_+\cup S_-, $ by (\ref{XXXc}) and (\ref{eqn9}), it follows from the above system that \begin{equation} \label{SL2} H(x^*) \left[ \begin{array}{c} \widetilde{x}_{S_+} \\ \widetilde{x}_{S_-} \end{array} \right] = \left[ \begin{array}{c} e_{\mathcal{A}(x^*)\cap J_+} \\ - e_{\mathcal{A}(x^*)\cap J_-} \\ 0 \end{array} \right]. \end{equation} By the assumption of the theorem, the matrix $H(x^*) $ has a full-column rank. Thus it follows from (\ref{SL1}) and (\ref{SL2}) that $ \widetilde{x}_{S_+}=x^*_{S_+}$ and $ \widetilde{x} _{S_-}= x^*_{S_-}, $ which, together with the fact that $ \widetilde{x}_i =0$ for all $i\notin S_+\cup S_-,$ implies that $ \widetilde{x} =x^*.$ Since $(\widetilde{x}, \widetilde{t},\widetilde{u},\widetilde{v},\widetilde{\alpha},\widetilde{\beta}) $ is an arbitrary optimal solution to (\ref{9999}), $(x,t,u,v,\alpha,\beta)= (x^*,|x^*|,|x^*|-x^*,|x^*|+x^*,\alpha^*,\beta^*) $ is the unique optimal solution to (\ref{9999}), and hence (by Lemma \ref{LemLP2LP3}) $x^*$ is the unique optimal solution to (\ref{1bit-basis}). ~ $ \Box $ \vskip 0.05in Combining Theorems \ref{necc} and \ref{suff} yields Theorem \ref{Ness-Suff}. \section{Conclusions} In contrast to classic compressive sensing, 1-bit measurements are robust to small perturbations of a signal. The purpose of this paper is to show that the exact recovery of the sign of a sparse signal from 1-bit measurements is possible. We have proposed a new reformulation for the 1-bit CS problem. This reformulation makes it possible to extend the analytical tools of classic CS to 1-bit CS in order to develop an analogous theory and decoding algorithms for 1-bit CS problems. Based on the fundamental Theorem \ref{Ness-Suff}, we have introduced the so-called restricted range space property (RRSP) of a sensing matrix.
This property has been used to establish a connection between sensing matrices and the sign recovery of sparse signals from 1-bit measurements. For nonuniform sign recovery, we have shown that if the transposed sensing matrix admits the so-called S-RRSP of order $k$ with respect to 1-bit measurements, acquired from an individual $k$-sparse signal, then the sign of the signal can be exactly recovered by the proposed 1-bit basis pursuit. For uniform sign recovery, we have shown that the sign of any $k$-sparse signal, which is the sparsest signal consistent with the acquired 1-bit measurements, can be exactly recovered with 1-bit basis pursuit when the transposed sensing matrix admits the so-called S-RRSP of order $k.$
\section{Introduction} \label{sec:introduction} The antiferromagnetic (AFM) Heisenberg model (AHM) has been extensively investigated in the recent decades as a prototype for strongly correlated electronic behavior \cite{Auerbach98,Fazekas99}. Special attention has been reserved for lattices and clusters with frustrated connectivity, which in combination with low spatial dimensionality and strong quantum fluctuations can lead to unexpected magnetic behavior \cite{Anderson73,Diep05,Schnack10}. This includes phases without conventional order, such as the spin-liquid phase, non-magnetic excitations inside the singlet-triplet gap, and magnetization plateaux and discontinuities in the response to an external magnetic field. In the case of lattices frustration manifests itself in the form of magnetization plateaus and discontinuities \cite{Honecker04,Nishimoto13,Liu14,Schulenburg02,Nakano13}. Finite clusters can also exhibit magnetization discontinuities. A class of molecules associated with magnetic frustration when the AHM is considered for spins sitting on their vertices are the fullerene molecules. These are hollow carbon molecules that come in the form of closed cages \cite{Fowler95}, with structures that can possess high spatial symmetry. They are made of 12 pentagons and a number of hexagons which varies with the numbers of vertices $n$ as $\frac{n}{2}-10$. Frustration originates in the pentagons and decreases on the average with $n$. The polygons that make up the molecules share their edges, while each vertex is three-fold coordinated. Maybe the most representative member of the class is C$_{60}$, which has the shape of a truncated icosahedron and the spatial symmetry of the largest point symmetry group, the icosahedral group $I_h$. C$_{60}$ was found to superconduct when doped with alkali metals \cite{Hebard91}, and lies in the intermediate $U$ regime of the Hubbard model \cite{Chakravarty91,Stollhoff91}. 
It was shown that in the large $U$ limit of the Hubbard model, the AHM, the particular connectivity of C$_{60}$ leads to a discontinuity of the magnetization as a function of an external magnetic field in the classical ground state \cite{Coffey92}. This is particularly appealing, as the model lacks any magnetic anisotropy. A classical ground state magnetization discontinuity was also found for the dodecahedron, which is the smallest member of the fullerene class and has 20 vertices and also $I_{h}$ spatial symmetry. The investigation was then extended more generally to molecules of $I_h$ symmetry. First, it was shown that the dodecahedron has in fact a total of three ground state magnetization discontinuities in a field at the classical level, and one and two ground state discontinuities respectively at the full quantum limit of individual spin magnitude $s=\frac{1}{2}$ and 1 \cite{NPK05,NPK07}. It was also shown that the total number of classical ground state magnetization discontinuities for the truncated icosahedron is in fact not one but two, and this is a general feature of fullerene molecules of $I_h$ symmetry. It was established that another general feature of the $I_h$ fullerenes is the high-field discontinuity for $s=\frac{1}{2}$. For relatively small fullerene clusters of different symmetry only pronounced magnetization plateaus were found for $s=1/2$ \cite{NPK09}. In addition the icosahedron, which is not a member of the fullerene family but is the smallest cluster with $I_h$ symmetry, has a classical magnetization discontinuity in its lowest energy configuration which persists for lower values of $s$ \cite{Schroeder05,NPK15}. It must also be noted that the on-site repulsion has been found to be stronger for C$_{20}$ than C$_{60}$ in numerical calculations \cite{Lin07,Lin08}, providing further support for the validity of the AHM as a very good approximation of the Hubbard model for the dodecahedron. 
Work closely related to the above has also been published in the literature \cite{Jimenez14,Hucht11,Strecka15,Sahoo12,Sahoo12-1,Nakano14}. While C$_{60}$ spontaneously forms in condensation or cluster annealing processes \cite{Kroto87}, this was not the case for C$_{20}$, which was eventually produced in the gas phase by Prinzbach {\it et al.} \cite{Prinzbach00}. C$_{20}$ has been synthesized in the solid phase by Wang {\it et al.}, who produced a hexagonal closed-packed crystal \cite{Wang01}, while Iqbal {\it et al.} synthesized an fcc lattice with C$_{20}$ molecules interconnected by two additional carbon atoms per unit cell in the interstitial tetrahedral sites \cite{Iqbal03}. On the other hand, quasi one-dimensional structures have been realized experimentally only a few times for C$_{60}$ molecules, due to their highly anisotropic configuration. These included peapods, in which C$_{60}$ molecules were introduced into carbon nanotubes, one-dimensional C$_{60}$ structures aligned along step edges on vicinal surfaces of metal single crystals, and chains of C$_{60}$ on self-assembled molecular layers \cite{Smith98,Zhang07,Tamai04,Zeng01}. Most recently, C$_{60}$ molecules were arranged in chain structures two to three molecules wide on rippled graphene \cite{Chen15}. In addition, as few as two C$_{60}$ molecules have been considered to link and form a dumbbell structure \cite{Manaa09}. The formation of $I_h$-fullerene lattice structures poses the question of the influence of intermolecular interactions on isolated molecule properties. Considering interactions again at the level of the AHM, it is of interest to determine if the appealing ground state magnetic features of a single dodecahedron survive in a lattice-type setting, and if the addition of intermolecular magnetic exchange introduces extra features in the magnetization. This is the main question undertaken in this paper. 
More specifically, the case of two dodecahedra is investigated, with intramolecular interactions exactly as in the isolated dodecahedron case, while the two dodecahedra are connected along one of their faces with a varying exchange interaction. The properties of the ground state of the whole cluster are mapped as a function of the intermolecular exchange constant and an external magnetic field. For weak intermolecular coupling the response is mainly determined by the isolated dodecahedra, while for strong coupling by the dimer-type interaction between spins belonging to different dodecahedra. For relatively small intermolecular coupling the three classical discontinuities of the lowest energy configuration of the isolated dodecahedron survive. Simultaneously a new low-field magnetization discontinuity appears, which relates to the AFM coupling of the two molecules and its competition with the magnetic field. This discontinuity survives up to the dimer limit. For stronger coupling even more discontinuities appear, producing a rich structure for the classical magnetic response of the two dodecahedra system. For a specific range of interdodecahedral coupling the total number of ground state discontinuities goes up to six. The spins associated with the interdodecahedral interaction do not necessarily increase their projection along the field as the latter increases, due to their unfrustrated AFM interaction. Finally, one of the discontinuities becomes a susceptibility discontinuity close to the dimer limit. In the $s=\frac{1}{2}$ case the isolated dodecahedron discontinuity generates two ground state discontinuities for the coupled dodecahedra. In addition, a third discontinuity appears due to the interdodecahedral coupling. All three discontinuities persist for smaller values of the interdodecahedral coupling and appear one right after the other. 
This generates a ground state magnetic response which for a considerable range of magnetic fields is associated with magnetization discontinuities of the total spin along the $z$ axis equal to $\Delta S^z=2$, instead of the typical $\Delta S^z=1$ for a quantum spin system. The plan of this paper is as follows: In Sec. \ref{sec:model} the AHM for the system of the two dodecahedra is introduced, and in Sec. \ref{sec:classicalspins} its lowest energy configuration is calculated for classical spins. Section \ref{sec:spinsonehalf} considers the $s=\frac{1}{2}$ case, with perturbation theory for weak and Lanczos diagonalization for arbitrary values of the interdodecahedral coupling. Finally, Sec. \ref{sec:conclusions} presents the conclusions. \section{Model} \label{sec:model} The AHM for two linked dodecahedra (Fig. \ref{fig:twododecahedra}) is: \begin{eqnarray} H & = & J ( \sum_{<ij>} \vec{s}_i \cdot \vec{s}_j + \sum_{<20+i,20+j>} \vec{s}_{20+i} \cdot \vec{s}_{20+j} ) \nonumber \\ & & + J' \sum_{i=1}^5 \vec{s}_i \cdot \vec{s}_{20+i} - h \sum_{i=1}^{N} s_i^z \label{eqn:Hamiltonian} \end{eqnarray} The total number of spins for the two dodecahedra is $N=40$, with the first dodecahedron containing spins 1 to 20 and the second 21 to 40. The first two sums run over nearest-neighbor spins within the same dodecahedron, with $i$ and $j$ running from 1 to 20. The $J'$ term connects the spins on the faces of the two dodecahedra that are taken to be directly opposite each other, with $i$ an index counting the spins on these two faces. The magnetic field $h$ is taken to be directed along the $z$ axis. The system interpolates between two independent dodecahedra for $J'=0$, and five independent dimers for $J=0$. The ratio $\alpha \equiv \frac{J'}{J+J'}$ is defined, which correspondingly varies between 0 and 1. \section{Classical Spins} \label{sec:classicalspins} First the spins of Hamiltonian (\ref{eqn:Hamiltonian}) are taken to be classical \cite{NPK15-1}. 
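As an illustration of how the ground state search works numerically, the classical energy of Hamiltonian (\ref{eqn:Hamiltonian}) for unit spins can be evaluated from a bond list. The sketch below (Python; the function names and the bare bond list standing in for the full dodecahedral connectivity are illustrative choices, not code from this work) checks the dimer limit $J=0$, where the five antiparallel $J'$ pairs give $E=-5J'$ at $h=0$, together with the limiting values of $\alpha$:

```python
def alpha(J, Jp):
    """Coupling ratio alpha = J' / (J + J') defined in the main text."""
    return Jp / (J + Jp)

def classical_energy(spins, intra_bonds, inter_pairs, J, Jp, h):
    """Classical energy of Hamiltonian (1) for unit spins.

    spins:       dict site -> (sx, sy, sz) unit vector
    intra_bonds: nearest-neighbor pairs within the two dodecahedra
    inter_pairs: the five (i, 20+i) pairs coupled by J'
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    e = J * sum(dot(spins[i], spins[j]) for i, j in intra_bonds)
    e += Jp * sum(dot(spins[i], spins[j]) for i, j in inter_pairs)
    e -= h * sum(s[2] for s in spins.values())
    return e

# Dimer limit J = 0: five independent antiparallel J' pairs, E = -5 J'.
inter = [(i, 20 + i) for i in range(1, 6)]
spins = {}
for i, j in inter:
    spins[i] = (0.0, 0.0, 1.0)   # spin up in the first dodecahedron
    spins[j] = (0.0, 0.0, -1.0)  # antiparallel partner in the second
print(classical_energy(spins, [], inter, J=0.0, Jp=1.0, h=0.0))  # -5.0
print(alpha(0.0, 1.0), alpha(1.0, 1.0))  # 1.0 0.5
```

A full minimization over the $2N$ spherical angles with the complete bond list of the two dodecahedra reproduces the configurations discussed below.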
When $J'=0$ the ground state magnetization of each isolated dodecahedron has three discontinuities in the field, occurring when $\frac{h}{h_{sat}^{dod}}=0.26350$, 0.26983, and 0.73428, with $h_{sat}^{dod}=(3+\sqrt{5})J$ the saturation field of an isolated dodecahedron \cite{NPK07}. The symmetry of the ground state configuration does not necessarily increase with the magnetic field. In zero field nearest-neighbor spins are not antiparallel due to frustration, and the nearest-neighbor correlation in each dodecahedron equals $-\frac{\sqrt{5}}{3}$. On the other hand, if the $J'$ bonds between different dodecahedra were considered alone, their spins would be antiparallel in the lowest energy state; consequently the non-frustrated dimer bonds should be less susceptible to an external field in comparison with the frustrated intradodecahedral bonds. For finite $J'$ and zero field the relative spin orientations in each dodecahedron in the ground state do not change with respect to the noninteracting case, while spins connected via $J'$ bonds align themselves in an antiparallel fashion and the energy is $-20 \sqrt{5}J - 5J'$. Once the magnetic field is switched on, the competition among the intradodecahedral exchange, interdodecahedral exchange, and magnetic field energies determines the lowest energy configuration. The magnetization discontinuities associated with an isolated dodecahedron survive the interdodecahedral coupling, while new ones emerge. The location of all the lowest energy configuration discontinuities with respect to $\alpha$ and $\frac{h}{h_{sat}}$ is shown in Fig. \ref{fig:configurationdiagram} ($h_{sat}$ is the saturation field of the two dodecahedra, which is a function of $\frac{J'}{J}$). 
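The zero-field energy just quoted follows from bond counting: each dodecahedron has 30 nearest-neighbor bonds with correlation $-\frac{\sqrt{5}}{3}$, and the five $J'$ bonds are antiparallel. A minimal numerical check (Python; the couplings are arbitrary illustrative values):

```python
import math

J, Jp = 1.0, 0.3           # arbitrary illustrative couplings
bonds_per_dodecahedron = 30
corr = -math.sqrt(5) / 3   # zero-field nearest-neighbor correlation

E_intra = 2 * bonds_per_dodecahedron * corr * J  # two dodecahedra
E_inter = 5 * (-1.0) * Jp                        # five antiparallel J' bonds
E = E_intra + E_inter
# Agrees with -20*sqrt(5)*J - 5*J' up to floating-point rounding.
print(abs(E - (-20 * math.sqrt(5) * J - 5 * Jp)) < 1e-12)  # True
```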
Apart from $\alpha$ close to 1, where the second discontinuity with respect to the field strength becomes the sole susceptibility discontinuity, there are never fewer than four magnetization discontinuities, showing that the introduction of the interaction between the dodecahedra enriches the magnetic response for any coupling strength. The maximum number of magnetization discontinuities occurs for $\alpha \sim \frac{3}{4}$ and is equal to six. The discontinuities occur mostly for decreasing $\frac{h}{h_{sat}}$ with increasing $\alpha$, until they eventually disappear in the dimer limit $\alpha=1$. The inaccessible magnetizations per spin which fall between the edges of each discontinuity are plotted in Fig. \ref{fig:discontinuitiesminmax-a}. The corresponding magnitudes of the magnetization change per spin are shown in Fig. \ref{fig:discontinuitieswidth}. The width of the inaccessible magnetization range is not necessarily monotonic with $\alpha$. Once $J'$ becomes non-zero, apart from the three discontinuities of $J'=0$ a new one appears for small magnetic fields, where the two dodecahedra are still connected approximately in an antiparallel fashion via the $J'$ bonds. This discontinuity increases its strength monotonically with $J'$, reaching $\Delta M \sim 0.8$ close to the dimer limit (Figs. \ref{fig:discontinuitiesminmax-a} and \ref{fig:discontinuitieswidth}). The magnetization curve for $\frac{J'}{J}=\frac{1}{7}$ ($\alpha=\frac{1}{8}$) is shown in detail in Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=1over7}. For small fields the spins associated with the non-frustrated bonds on average do not respond as strongly as the rest, maintaining a smaller total projection along the $z$ axis and having interdodecahedral correlations only weakly deviating from $-1$ (lower right inset of Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=1over7}). 
The rest of the nearest-neighbor correlations deviate more strongly from their zero field value, and this deviation increases with $h$. For $\frac{h}{h_{sat}}=0.095350$ the magnetization discontinuity originating in the interdodecahedral coupling appears. The lowest energy configuration right after the discontinuity is similar to the lower-field ground state configuration of an isolated dodecahedron \cite{NPKLuban}. Even though the total magnetization of the two dodecahedra increases right after the low-field discontinuity, the net magnetization of the spins connected via $J'$ bonds changes direction and points away from the field. These spins now share a common polar angle which starts out bigger than $\frac{\pi}{2}$, and then monotonically decreases with the field. The five $J'$ correlations are equal and become more antiferromagnetic with increasing field (lower right inset of Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=1over7}). For a specific value of the field the common polar angle becomes equal to $\frac{\pi}{2}$, and then the $J'$ bonds connect antiparallel spins which have zero net magnetization. If the field is further increased the polar angle of the $J'$ spins becomes less than $\frac{\pi}{2}$ and they start to deviate from being antiparallel, while their net magnetization is now non-zero and points towards the field. Apart from the low-field discontinuity, the net magnetization of the $J'$ spins decreases also at the second and last ones (Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=1over7} and its upper left inset). The three higher-field discontinuities in Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=1over7} are directly related to the ones of an isolated dodecahedron \cite{NPK07}, only renormalized by the interdodecahedral interaction. One of them, the middle one, is the strongest among all discontinuities, and for small $J'$ it is associated with a magnetization change $\Delta M \sim 1.5$ (Figs. 
\ref{fig:discontinuitiesminmax-a} and \ref{fig:discontinuitieswidth}). When $\frac{J'}{J}=0.84139$ ($\alpha=0.45693$) the third discontinuity splits in two (Fig. \ref{fig:configurationdiagram}), with the two resulting discontinuities having smaller magnitude (Figs. \ref{fig:discontinuitiesminmax-a} and \ref{fig:discontinuitieswidth}). The net magnetization of the $J'$ spins is shown in Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=3over2J1=1J2=2.82}(a) for $\frac{J'}{J}=\frac{3}{2}$ ($\alpha=\frac{3}{5}$). The two discontinuities result in a stepwise increase of the magnetization of the $J'$ spins, as seen in the right part of the figure. The rest of the spins also increase their net magnetization stepwise. The lower of these two discontinuities disappears for $\frac{J'}{J}=2.0800$ ($\alpha=0.67532$) (Fig. \ref{fig:configurationdiagram}). For $\frac{J'}{J}=2.78174$ ($\alpha=0.73557$) and right above the second discontinuity two new discontinuities emerge (Fig. \ref{fig:configurationdiagram} and inset), bringing the total number to its maximum of six. The net magnetization of the $J'$ spins is plotted in Fig. \ref{fig:magnetizationpartsinterexchangeJ1=1J2=3over2J1=1J2=2.82}(b) for the case $\frac{J'}{J}=2.82$ ($\alpha=0.73822$). Here the net magnetization of the right- and left-dodecahedron $J'$ spins is not the same after the first new discontinuity, as seen in the right part of the figure, and the net magnetization of the $J'$ spins decreases when this difference appears. The top two discontinuities merge for $\frac{J'}{J}=3.2552$ ($\alpha=0.76499$) (Fig. \ref{fig:configurationdiagram}), while for $\frac{J'}{J}=3.99890$ ($\alpha=0.79996$) the second and the third discontinuity merge. Finally, for $\frac{J'}{J} \sim 17.1$ ($\alpha \sim 0.945$) the second discontinuity changes from a magnetization to a susceptibility discontinuity. The spin directions of the lowest energy configuration change discontinuously, like the magnetization, as the gaps are crossed with increasing field. 
Below the first magnetization discontinuity the spin configuration is highly asymmetric, with each spin having its own polar angle. Low symmetry is a general feature of the configurations of Fig. \ref{fig:configurationdiagram}, as the spins at best have their own polar angle value within an individual dodecahedron, with each of these polar angle values shared only by another spin in the other dodecahedron. The most symmetric of these configurations is when the spins mounted at exactly the same location in the two dodecahedra share the polar angle, and their azimuthal angles differ by $\pi$. The most notable exception to these cases is the lowest energy configuration right after the low-field magnetization jump, which is indicated with (blue) up triangles in Fig. \ref{fig:configurationdiagram}, and which is also the last configuration just before saturation, occurring for fields higher than the ones depicted with (green) diamonds. This configuration is shown in Fig. \ref{fig:twododecahedraquantum}. Similarly to the low-field ground state configuration of an isolated dodecahedron \cite{NPKLuban}, there are four distinct polar angles for the spins, with each one corresponding to a different circle type. Lines of the same type represent equal nearest-neighbor correlations. All the azimuthal angles are integer multiples of $\frac{\pi}{5}$, while successive azimuthal angles within the same polar angle group differ by $\frac{4\pi}{5}$. Along the central line defined by spins 1, 6, 11, and 18, nearest neighbors differ by $\pi$ in the azimuthal plane. Spins symmetrically placed with respect to this line have azimuthal angles adding up to $2\pi$. The polar angles are the same for spins placed at exactly the same locations in the two dodecahedra, while their azimuthal angles differ by $\pi$. In the lowest energy configuration before the (orange) x's in Fig. 
\ref{fig:configurationdiagram} the polar angles are different in the two dodecahedra, and there are 12 distinct polar angles for each one of them. \section{$s=\frac{1}{2}$} \label{sec:spinsonehalf} For $s=\frac{1}{2}$ an isolated dodecahedron has a discontinuity in the ground state magnetization, where the sector with total $z$ spin $S_{dod}^z=5$, five spin flips away from saturation, never includes the ground state in a field \cite{NPK05}. This results from the energy difference of the $S_{dod}^z=5$ and 6 lowest energy states being smaller than the one of the $S_{dod}^z=4$ and 5 lowest energy states. The discontinuity carries over to the case of the two linked dodecahedra: their lowest energy wavefunction for a specific $S^z$ when $J'=0$ is the product of the individual dodecahedra ground state wavefunctions for specific $S_{dod}^z$'s that minimize the energy and have spins adding up to $S^z$, with the total energy equaling the sum of the corresponding individual dodecahedra energies. As a result the isolated dodecahedron discontinuity generates two discontinuities for the coupled dodecahedra, where the lowest energy levels with $S^z=9$ and $S^z=11$ are never the ground states in a field. When $J'$ becomes non-zero, the discontinuities will survive at least for weak coupling. The ground state magnetization curve of Hamiltonian (\ref{eqn:Hamiltonian}) is calculated for the whole $S^z$ range for weak $J'$ with perturbation theory. The magnetization is also calculated with Lanczos diagonalization for arbitrary $J'$; however, due to computational requirements it cannot be determined for lower $S^z$ in this case. \subsection{Perturbation Theory} \label{subsec:perturbationtheory} For small $J'$ the lowest energies are calculated for every $S^z$ sector within first order perturbation theory. The unperturbed wavefunction ($J'=0$) is the product of the lowest energy wavefunctions of the two dodecahedra according to the $S_{dod}^z$ sector they belong to (see Table V of Ref. 
\cite{NPK05}). When $S_{dod}^z$ is away from the single dodecahedron discontinuity for both dodecahedra, the zeroth order wavefunction for even $S^z$ is the product of the lowest energy state with $S_{dod}^z=\frac{S^z}{2}$ for each dodecahedron: \begin{eqnarray} \vert \Psi_0^{i*d+j}(S^z) \rangle_{J'=0} = \vert \Phi_0^i(\frac{S^z}{2}) \rangle \vert \Phi_0^j(\frac{S^z}{2}) \rangle \label{eqn:pertheoryevenwavefunction} \end{eqnarray} The indices $i$, $j=1,\dots,d$ count the degeneracy $d$ of the single dodecahedron wavefunction, therefore the unperturbed wavefunction of the two dodecahedra is in principle also degenerate. For odd $S^z$ the combining single dodecahedron lowest energy states have $S_{dod}^z=\frac{S^z-1}{2}$ and $\frac{S^z+1}{2}$. Here, apart from the degeneracy originating in the degeneracy of the single dodecahedron lowest energy states, an extra factor of two comes about from the two distinct $S_{dod}^z$ values to be accommodated on the two dodecahedra. The only exception to the general pattern is $S^z=10$, where the participating single dodecahedron lowest energy states do not have the same $S_{dod}^z=5$ as in the other even cases, but due to the single dodecahedron discontinuity they have $S_{dod}^z=4$ and 6. Thus in general degenerate perturbation theory is required, unless $S^z$ is even and different from 10 and the lowest single dodecahedron energy level for $\frac{S^z}{2}$ is singly degenerate. The perturbative term is scaled with $J'$ in Hamiltonian (\ref{eqn:Hamiltonian}), and first order degenerate perturbation theory produces the following matrix, shown here for the even $S^z$ case of Eq. 
(\ref{eqn:pertheoryevenwavefunction}): \begin{eqnarray} & & H_1 (i*d+j,k*d+l) = \nonumber \\ & & \textrm{ }_{J'=0}\langle \Psi_0^{i*d+j}(S^z) \vert \sum_{m=1}^5 \vec{s}_m \cdot \vec{s}_{20+m} \vert \Psi_0^{k*d+l}(S^z) \rangle_{J'=0} \nonumber \\ \end{eqnarray} The perturbative term includes combinations of raising and lowering operators, as well as diagonal terms. For even $S^z \neq 10$ only diagonal terms can generate non-zero energy contributions in first order perturbation theory, irrespective of the degeneracy of the single dodecahedron state $\vert \Phi_0^i(\frac{S^z}{2}) \rangle$. For odd $S^z$ and $S^z=10$ combinations of raising and lowering operators also contribute. The first order perturbation theory correction for the energy $E_1$ is listed for the different $S^z$ sectors in Table \ref{table:firstordercorrection}. According to what has already been mentioned in this Subsection about the $S_{dod}^z$ sectors that combine to form the unperturbed wavefunction, away from the discontinuity the energy difference between levels $S^z$, $S^z-1$ and $S^z-1$, $S^z-2$ is the same for $J'=0$ when $S^z$ is even. Consequently what determines the relative value of successive energy differences between adjacent $S^z$ sectors for weak $J'$ are the perturbative energy corrections listed in Table \ref{table:firstordercorrection}. If the relative energy difference between three successive $S^z$ sectors in Table \ref{table:firstordercorrection} when starting from an even $S^z$ increases for decreasing $S^z$, then a new magnetization discontinuity appears. This is the case for $S^z=14$, and the magnetization in a field switches from $S^z=14$ directly to $S^z=12$ at least for small $J'$. This discontinuity does not relate to the one of the isolated dodecahedron but originates in the interdodecahedral coupling. It is then concluded that at least for weak $J'$ and between $S^z=8$ and 14 the magnetization changes with an external field in steps of $\Delta S^z=2$. 
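The criterion invoked above can be phrased generally: a sector $S^z$ never hosts the ground state in a field when its energy difference to the sector above is smaller than its difference to the sector below, since the field that makes $S^z$ preferable over $S^z-1$ already favors $S^z+1$. A sketch of this convexity test (Python; the energy sequence is a toy example, not the computed spectrum of the two dodecahedra):

```python
def skipped_sectors(E):
    """Return the S^z sectors that are never the ground state in a field.

    E[sz] is the lowest energy in sector sz. Sector sz is skipped when
    E[sz+1] - E[sz] < E[sz] - E[sz-1]: the Zeeman term -h*Sz makes the
    level crossing into sz+1 occur at a lower field than the crossing
    into sz (a local convexity violation of the E(Sz) sequence).
    """
    return [sz for sz in range(1, len(E) - 1)
            if E[sz + 1] - E[sz] < E[sz] - E[sz - 1]]

# Toy energy sequence with a convexity violation at sz = 2: the
# magnetization jumps from sz = 1 directly to sz = 3.
print(skipped_sectors([0.0, 1.0, 2.5, 3.0, 5.0]))  # [2]
```

Applied to the perturbed energies of Table \ref{table:firstordercorrection}, this test identifies the skipped sectors quoted in the text.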
This shows that for two linked dodecahedra the magnetization can be changed in a controlled way in steps of either $\Delta S^z=1$ or 2 by adjusting the range of an external magnetic field. \subsection{Lanczos Diagonalization} \label{subsec:lanczosdiagonalization} The magnetization response of Hamiltonian (\ref{eqn:Hamiltonian}) can be calculated for $J'$ of arbitrary strength with Lanczos diagonalization, taking into account the $D_{5h}$ spatial symmetry of the Hamiltonian \cite{NPK05,NPK07,NPK09,NPK15,NPK04}. In this way the Hamiltonian is block-diagonalized according to the irreducible representations of its symmetry group. This results in eigenstates well-defined according to symmetry, as well as a Hamiltonian divided in smaller subblocks that are easier to diagonalize. In contrast with the perturbation theory calculation of Sec. \ref{subsec:perturbationtheory} here there is no restriction on the strength of $J'$, however the calculation is limited to higher values of $S^z$ due to computational requirements. Fig. \ref{fig:magnetizationquantum} shows the ground state magnetization curve for four different $J'$ values. When $J'$ is small three successive discontinuities are expected according to Sec. \ref{subsec:perturbationtheory}, with the subsectors $S^z=9$, 11 and 13 never including the lowest energy state in a magnetic field. Two discontinuities are highlighted with (red) arrows in Fig. \ref{fig:magnetizationquantum}(a) where $J'=\frac{J}{50}$, while the one that corresponds to the lowest $S^z$ is not mapped out due to the computational requirements to find the lowest energy state for $S^z=8$. When $J'=\frac{J}{5}$ (Fig. \ref{fig:magnetizationquantum}(b)) the interdodecahedral coupling is not weak any more, and the highest $S^z$ discontinuity has disappeared. For even higher $J'=\frac{3J}{5}$ (Fig. \ref{fig:magnetizationquantum}(c)) the dodecahedra feel each other's influence more strongly, and as a result the intermediate discontinuity disappears as well. 
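The core of the Lanczos scheme is the iterative construction of a tridiagonal matrix whose extremal eigenvalues converge to those of the full Hamiltonian. A minimal sketch follows (Python/NumPy; reorthogonalization and the symmetry-adapted $S^z$-sector basis used in the actual calculation are omitted), verified on a two-spin $s=\frac{1}{2}$ Heisenberg dimer, whose singlet ground state energy is $-\frac{3}{4}J'$:

```python
import numpy as np

def lanczos_lowest(H, k=30, seed=0):
    """Lowest eigenvalue of a real symmetric matrix H from k Lanczos steps.

    Builds the tridiagonal matrix T of Lanczos coefficients and
    diagonalizes it; loss of orthogonality is ignored in this sketch.
    """
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    alphas, betas, beta = [], [], 0.0
    steps = min(k, n)
    for it in range(steps):
        w = H @ v - beta * v_prev
        a = v @ w
        alphas.append(a)
        w -= a * v
        beta = np.linalg.norm(w)
        if beta < 1e-12 or it == steps - 1:
            break  # invariant subspace exhausted or step budget reached
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]

# Check on the dimer H = J' s1.s2 for s = 1/2 (singlet at -3 J'/4).
Jp = 1.0
Sz = 0.5 * np.diag([1.0, -1.0])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
H = Jp * (np.kron(Sz, Sz) + 0.5 * (np.kron(Sp, Sp.T) + np.kron(Sp.T, Sp)))
print(lanczos_lowest(H))  # approximately -0.75
```

For the 40-spin problem the same iteration acts on sparse matrix-vector products within each symmetry block, which is what keeps the calculation tractable.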
Fig. \ref{fig:magnetizationquantum}(d) shows the magnetization for $J'=J$, where all couplings are equal and even magnetization plateaus are absent for the $S^z$ range of the weaker $J'$ discontinuities. Fig. \ref{fig:correlationsquantum} shows the distinct ground state expectation values of the nearest-neighbor correlation functions $<\vec{s}_i \cdot \vec{s}_j>$ for $J'=\frac{J}{50}$ and $\frac{3J}{5}$. There are in principle six unique such correlations, with one for each of the rings that respectively contain spins 1 to 5, 6 to 15, and 16 to 20 (Fig. \ref{fig:twododecahedraquantum}), and two more for nearest-neighbor correlations between spins that belong to different rings. The last unique correlation is between spins belonging to different dodecahedra. For $J'=\frac{J}{50}$ the single-dodecahedron character is preserved, at least for the high $S^z$ shown, where only correlations within the two dodecahedra are antiferromagnetic. The interdodecahedral correlation (represented by $<\vec{s}_1 \cdot \vec{s}_{21}>$ in Fig. \ref{fig:correlationsquantum}) is ferromagnetic for all the $S^z$ presented. The situation is different when $J'=\frac{3J}{5}$. Now the interdodecahedral correlation acquires an AFM character, which is the strongest along with its neighboring correlation $<\vec{s}_1 \cdot \vec{s}_2>$. Fig. \ref{fig:spinzquantum} shows the distinct ground state expectation values of the projections of the individual spins along the $z$ axis $<s_i^z>$ for $J'=\frac{J}{50}$ and $\frac{3J}{5}$. There are in principle four unique such projections, with spins 1 to 5 having a common one (Fig. \ref{fig:twododecahedraquantum}), and spins 16 to 20 another one. Also, every second one of spins 6 to 15 shares the same value of $<s_i^z>$. Spins of the central and the outer pentagon have lower values for $J'=\frac{J}{50}$, which agrees with their stronger intradodecahedral AFM correlations of Fig. \ref{fig:correlationsquantum}(a). 
For $J'=\frac{3J}{5}$ the central pentagon has even lower $<s_i^z>$, which now corresponds to the strongest AFM correlations being between spins in this pentagon, and between spins in this pentagon and their counterparts in the corresponding pentagon of the other dodecahedron connected via the $J'$ bonds, as shown in Fig. \ref{fig:correlationsquantum}(b). \section{Conclusions} \label{sec:conclusions} The ground state magnetic response of two coupled dodecahedra was investigated within the framework of the AHM. The classical magnetization discontinuities of an isolated dodecahedron were found to be renormalized by the interdodecahedral coupling, while new ones emerge. For a specific range of the coupling the total number of discontinuities goes up to six. At the full quantum limit $s=\frac{1}{2}$ the isolated dodecahedron magnetization discontinuity gives rise to two neighboring discontinuities, with a third one appearing adjacent to these two. The two dodecahedra system has a magnetic response which for a significant range of the field is associated with magnetization steps with $\Delta S^z=2$, which is twice as strong as the usual magnetization difference between adjacent $S^z$ sectors for a quantum spin system. This shows that the magnetization change can be controlled by adjusting the range of an external magnetic field. The frustrated nature of the dodecahedron results in unexpected ground state magnetization discontinuities in a field when the AHM is considered on it. Usually such discontinuities are associated with magnetic anisotropy, but in this case they are allowed by the special connectivity of the dodecahedron. The formation of a two dodecahedra molecule with the introduction of unfrustrated coupling between one of their faces enriches the ground state magnetic response. 
Along these lines, it is of interest to extend this investigation to more than two dodecahedra linked together to form chain-type or even more complicated structures, and to calculate the magnetic response while the individual cluster frustration and the coupling between clusters compete in the presence of an external magnetic field.
\section{Introduction} Early observations of the disk-halo interface hinted at the presence of a handful of HI clouds connected to the Galactic disk \citep{1964Prata, 1971Simonson, 1984Lockman}. With higher resolution data it became evident that there is a population of discrete HI clouds within this transition zone of the inner Galaxy, having sizes $\sim 30$~pc, masses $\sim 50$~$M_\odot$, and kinematics dominated by Galactic rotation \citep{2002Lockman}. The presence of these clouds is a widespread phenomenon; they have also been detected in the disk \citep{2006Stil} and outer Galaxy \citep{2006Stanimirovic, 2010Dedes}. These clouds may constitute a significant fraction of the HI layer, and may be detectable in external galaxies once resolution requirements can be met. Possible origin scenarios include galactic fountains (\citealt{1976Shapiro,1980Bregman,1990Houck}; \citealt*{2008Spitoni}), superbubbles \citep{2006McClure-Griffiths}, and interstellar turbulence \citep{2005Audit}. An analysis of the HI disk-halo cloud population in the fourth quadrant of Galactic longitude (QIV) was performed by \citet{2008Ford}. To study the variation of cloud properties and distributions with location in the Galaxy, \citeauthor*{2010Ford} (\citeyear{2010Ford}; hereafter FLMG) analyzed a region in the first quadrant (QI) that is mirror-symmetric about longitude zero to the QIV region, and performed an in-depth comparison of the uniformly selected QI and QIV samples. In this proceeding we summarize some of the main results from that QI--QIV analysis. For an extended and complete discussion of these results we refer the reader to FLMG. \section{Observations} Disk-halo clouds were detected using data from the Galactic All-Sky Survey \citep[GASS;][]{2009McClure-Griffiths}, which were taken with the 21~cm Multibeam receiver at the Parkes Radio Telescope. 
GASS is a Nyquist-sampled survey of Milky Way HI emission ($-400 \leq V_{\mathrm{LSR}} \leq +500$~km~s$^{-1}$), covering the entire sky south of $\delta \leq 1^\circ$. The spatial resolution of GASS is $16\arcmin$, spectral resolution is $0.82$~km~s$^{-1}$, and sensitivity is $57$~mK. The data analyzed here are from an early release of the survey which has not been corrected for stray radiation. The QI and QIV regions are mirror-symmetric about the Sun--Galactic centre line, where the QI region spans $16.9^\circ \leq \ell \leq 35.3^\circ$ and the QIV region spans $324.7^\circ \le \ell \le 343.1^\circ$. Both are restricted to $|b| \lesssim 20^\circ$. \section{The Tangent Point Sample} Tangent points within the inner Galaxy occur where a line of sight passes closest to the Galactic centre. This is where the maximum permitted velocity occurs assuming pure Galactic rotation, and this velocity is the terminal velocity, $V_{\mathrm{t}}$. Random motions may push a cloud's $V_{\mathrm{LSR}}$ beyond $V_{\mathrm{t}}$, and the difference between the cloud's $V_{\mathrm{LSR}}$ and $V_{\mathrm{t}}$ is defined as the deviation velocity, $V_{\mathrm{dev}}\equiv V_{\mathrm{LSR}}-V_{\mathrm{t}}$. We define the tangent point sample of clouds as those with $V_{\mathrm{dev}} \gtrsim 0$~km~s$^{-1}$ in QI and $V_{\mathrm{dev}} \lesssim 0$~km~s$^{-1}$ in QIV, which results in a sample of clouds whose distances and hence physical properties can be determined. We define $V_{\mathrm{t}}$ based on observational determinations by McClure-Griffiths \& Dickey (in preparation), \citet{1985Clemens}, \citet{2007McClure-Griffiths}, and \citet{2006Luna}. \section{Properties of Individual Disk-Halo Clouds} We detect 255 disk-halo HI clouds at tangent points within QI, but only 81 within QIV. 
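The tangent point selection described above amounts to a simple cut on the deviation velocity. A sketch of the selection (Python; the function names and the toy velocity values are illustrative, not the GASS catalog):

```python
def deviation_velocity(v_lsr, v_t):
    """V_dev = V_LSR - V_t, both in km/s."""
    return v_lsr - v_t

def tangent_point_sample(clouds, quadrant):
    """Select tangent point clouds: V_dev >= 0 in QI, V_dev <= 0 in QIV.

    clouds: list of (V_LSR, V_t) pairs in km/s.  In QIV both velocities
    are negative, so clouds beyond the terminal velocity have V_dev <= 0.
    """
    if quadrant == "QI":
        return [c for c in clouds if deviation_velocity(*c) >= 0.0]
    if quadrant == "QIV":
        return [c for c in clouds if deviation_velocity(*c) <= 0.0]
    raise ValueError("quadrant must be 'QI' or 'QIV'")

# Toy QI clouds: only those at or beyond the terminal velocity survive.
clouds = [(120.0, 110.0), (95.0, 110.0), (112.0, 110.0)]
print(tangent_point_sample(clouds, "QI"))  # [(120.0, 110.0), (112.0, 110.0)]
```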
A summary of the properties of the clouds within both regions is presented in Table~\ref{tab:p2comptable}, where $T_{\mathrm{pk}}$ is peak brightness temperature, $\Delta v$ is FWHM of the velocity profile, $N_{\mathrm{HI}}$ is HI column density, $r$ is radius, $M_{\mathrm{HI}}$ is HI mass, and $|z|$ is vertical distance from the midplane. While there are more than three times as many clouds detected in QI than in QIV, the properties of both samples are quite similar, suggesting that the clouds in both quadrants belong to the same population of clouds. \begin{table}[!ht] \caption{Median Properties of Tangent Point Disk-Halo Clouds \label{tab:p2comptable}} \smallskip \begin{center} {\small \begin{tabular}{lcccccc} \tableline \tableline \noalign{\smallskip} & $T_{\mathrm{pk}}$ & $\Delta v$ & $N_{\mathrm{HI}}$ & $r$ & $M_{\mathrm{HI}}$ & $|z|$\\ & [K] & [km~s$^{-1}$] & [cm$^{-2}$] & [pc] & [$M_{\odot}$] & [pc]\\ \noalign{\smallskip} \tableline \noalign{\smallskip} QI & $0.5$ & $10.6$ & $1.0\times 10^{19}$ & $28$ & $700$ & $660$\\ QIV & $0.5$ & $10.6$ & $1.0\times 10^{19}$ & $32$ & $630$ & $560$\\ \noalign{\smallskip} \tableline \end{tabular} } \end{center} \end{table} \section{Properties of the Disk-Halo Cloud Population} \subsection{Cloud--Cloud Velocity Dispersion} Random motions, characterized by a cloud--cloud velocity dispersion ($\sigma_{cc}$), can increase the $V_{\mathrm{LSR}}$ of a cloud that is located near a tangent point to values beyond $V_{\mathrm{t}}$. We can determine the magnitude of these random motions based on the deviation velocity distribution of the tangent point cloud samples. The observed $V_{\mathrm{dev}}$ distributions are presented in Figure \ref{fig:GASS_Q14_vdev}. The steep decline towards larger $V_{\mathrm{dev}}$ shows that the clouds' motions are governed by Galactic rotation. The distributions have a K-S probability of $76\%$ of being drawn from the same population, suggesting the QI and QIV clouds have identical $\sigma_{cc}$. 
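The two-sample K-S comparison used here reduces to the maximum distance between the empirical cumulative distributions of the two $V_{\mathrm{dev}}$ samples. A self-contained sketch of the statistic (Python; the Gaussian samples are illustrative stand-ins for the observed distributions, and converting the statistic into the quoted probability would additionally require the K-S distribution, e.g. via \texttt{scipy.stats.ks\_2samp}):

```python
import random

def ks_statistic(a, b):
    """Two-sample K-S statistic: max |CDF_a(x) - CDF_b(x)| over the data."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Samples of the QI and QIV sizes drawn from the same sigma_cc = 16 km/s
# Gaussian yield a small statistic, i.e. consistency with a single
# parent population (illustrative, not the GASS data).
random.seed(1)
q1 = [random.gauss(0.0, 16.0) for _ in range(255)]
q4 = [random.gauss(0.0, 16.0) for _ in range(81)]
print(round(ks_statistic(q1, q4), 3))
```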
Both populations have distributions consistent with those of simulated populations that have a random velocity component derived from a Gaussian with dispersion $\sigma_{cc}=16$~km~s$^{-1}$. \begin{figure} \plotone{Ford_Alyson_f1.eps} \caption{Distribution of observed $V_{\mathrm{dev}}$ for QI clouds (solid line), observed $-V_{\mathrm{dev}}$ for QIV clouds (dashed line), and simulated $V_{\mathrm{dev}}$ for a population of clouds with $\sigma_{cc}=16$~km~s$^{-1}$ (dash-dotted line). The QI and QIV distributions are consistent with the $\sigma_{cc}=16$~km~s$^{-1}$ distribution.} \label{fig:GASS_Q14_vdev} \end{figure} \subsection{Vertical Distribution} More clouds are detected at most heights within QI than QIV, as can be seen from their observed vertical distributions (Figure~\ref{fig:zhistcompobs}). The decrease in the number of clouds at low $|z|$ is due to confusion, as it is increasingly difficult to detect clouds at lower $|z|$ and lower $|V_{\mathrm{dev}}|$. The vertical distribution of clouds in QI is best represented by an exponential with $h=800$~pc, while in QIV the scale height is only $h=400$~pc. These values are not in agreement, and as $\sigma_{cc}$ is similar in both quadrants, the scale height of the clouds is not linked to the cloud--cloud velocity dispersion; even if $\sigma_{cc}=\sigma_{z}$, this would only propel clouds to $h<100$~pc within the mass model of \citet{2007Kalberla}. \begin{figure} \plotone{Ford_Alyson_f2.eps} \caption{Distribution of observed $|z|$ for the QI (solid line) and QIV (dashed line) clouds. There are significantly more clouds at most $|z|$ in QI than QIV and the scale height is twice as large.} \label{fig:zhistcompobs} \end{figure} \subsection{Longitude Distribution} The observed longitude distributions of clouds in the QI and QIV regions are shown in Figure~\ref{fig:radsurfdens}. 
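The exponential scale-height fit used above can be sketched as follows; the binned counts are synthetic (generated from $h=800$~pc) and the fitting method (log-linear least squares) is our illustrative choice, not necessarily the one used for the survey data.

```python
# Fit N(z) ~ A * exp(-|z|/h) by least squares on log N. The binned counts
# below are synthetic, generated from h = 800 pc, purely to show the method.
import math

def fit_scale_height(z_mid, counts):
    """Log-linear least-squares fit of counts = A * exp(-z/h); returns h in pc."""
    ys = [math.log(c) for c in counts]
    n = len(z_mid)
    mean_x = sum(z_mid) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(z_mid, ys))
             / sum((x - mean_x) ** 2 for x in z_mid))
    return -1.0 / slope

z_mid = [500.0, 700.0, 900.0, 1100.0, 1300.0]   # bin centres in pc
counts = [60.0 * math.exp(-z / 800.0) for z in z_mid]
print(round(fit_scale_height(z_mid, counts)))   # recovers 800 (pc)
```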
Not only are there more clouds at all $|\ell|$ in the QI region, but they are more uniformly distributed compared to the QIV distribution, which peaks at $\ell\sim 25^\circ$. The radial distribution is directly related to the longitude distribution; $R\equiv R_0\sin\ell$ at the tangent point, where $R_0$ is the radius of the solar circle. The radial distribution of the QI sample is also more uniform compared to QIV, where instead the distribution peaks and declines rapidly at $R>4.2$~kpc. \section{Implications for the Disk-Halo Clouds} While the cloud--cloud velocity dispersions in both quadrants are consistent, the number of clouds detected, along with the vertical and radial distributions, differ dramatically. We have searched for possible systematic effects that might account for the asymmetry but have found none of any significance. These asymmetries imply that the clouds are linked to Galactic structure and events occurring in the disk. We have overlaid the QI and QIV regions on what we believe to be the most appropriate representation of the large-scale Galactic structure to date, which includes recent results from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; \citealt{2003Benjamin}) on the location of the bar and spiral arms (\citealt{2005Benjamin}; see Figure~\ref{fig:GLIMPSE_image}). The solid lines enclose the QI and QIV regions, with a line-of-sight extent equivalent to a $\sim1\sigma_{cc}$ volume around the tangent point. A striking asymmetry within the Galaxy is immediately apparent: the QIV region contains only a minor arm, while the QI region contains the merging of the near-end of the bar and a major spiral arm. \begin{figure} \plotone{Ford_Alyson_f3.eps} \caption{Observed longitude distributions of the QI (solid line) and QIV (dashed line) regions. More clouds are seen in QI at all $\ell$ than in QIV.
The distribution is more uniform in QI while it is peaked around $\ell\sim25^\circ$ in QIV.} \label{fig:radsurfdens} \end{figure} \begin{figure} \plotone{Ford_Alyson_f4.eps} \caption{Figure from FLMG showing an artist's conception of the Milky Way. The solid lines enclose the QI and QIV regions, where QI covers the region where the near-end of the Galactic bar merges with a major spiral arm and QIV covers only a minor arm. The artist's conception image is from NASA/JPL-Caltech/R. Hurt (SSC-Caltech).} \label{fig:GLIMPSE_image} \end{figure} The distribution of disk-halo clouds mirrors the spiral structure of the Milky Way, suggesting a correlation with star formation. The spatial relation between extraplanar gas and star formation activity in some external galaxies further supports this hypothesis \citep{2005Barbieri}. More clouds occur in a region of the Galaxy including the near-end of the bar and a major spiral arm, while there are fewer clouds in a region including only a minor arm. The number of clouds likely correlates with the level of star formation activity, and the different scale heights may represent different evolutionary stages or varying levels of star formation. The clouds are likely formed by gas that is lifted by superbubbles and stellar feedback, and some may result from fragmenting supershells. Supershells have much larger timescales than star-forming regions ($\sim 20$--$30$~Myr versus $\sim0.1$~Myr; \citealt{2004deAvillez, 2006McClure-Griffiths, 2007Prescott}), and so clouds may not be near regions of current star formation. If clouds are lifted by events within the disk, large cloud--cloud velocity dispersions are not required to produce the derived scale heights of the disk-halo clouds. Disk-halo clouds are abundant within the Milky Way, and it is likely that many clouds would be detected at other longitudes corresponding to spiral features. \acknowledgements We thank B.~Saxton for his help creating Figure \ref{fig:GLIMPSE_image}.
HAF thanks the National Radio Astronomy Observatory for support under its Graduate Student Internship Program. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation. The Parkes Radio Telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
\section{Introduction} The notion of homomorphism-homogeneity was introduced by Cameron and Ne\v set\v ril\xspace in \cite{CameronNesetril:2006} as a generalization of ultrahomogeneity. A relational structure $M$ is homomorphism-homogeneous if every homomorphism between finite induced substructures is a restriction of an endomorphism of $M$. Not long afterwards, Lockett and Truss \cite{LockettTruss:2014} introduced finer distinctions in the class of homomorphism-homogeneous $L$-structures, characterized by the type of homomorphism between finite induced substructures of $M$ and the type of endomorphism to which such homomorphisms can be extended. In total, they introduced 18 \emph{morphism-extension classes}, partially ordered by inclusion. For countable structures, the partial order is presented in Figure \ref{fig:ctblestrs}. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{ExtClasses2.pdf} \caption{Morphism-extension classes of countable structures, ordered by $\subseteq$.} \label{fig:ctblestrs} \end{figure} As usual, we call a relational structure $M$ XY-homogeneous if every X-morphism between finite induced substructures extends to a Y-morphism $M\to M$, where $\mathrm{X\in\{I,M,H\}}$ and $\mathrm{Y\in\{H,I,A,E,B,M\}}$.
The meaning of these symbols is as follows: \begin{itemize} \item[$\ast$]{H: homomorphism.} \item[$\ast$]{M: monomorphism, an injective homomorphism.} \item[$\ast$]{I: isomorphism; an isomorphism $M\to M$ is also called a self-embedding.} \item[$\ast$]{A: automorphism, a surjective isomorphism.} \item[$\ast$]{E: epimorphism, a surjective homomorphism.} \item[$\ast$]{B: bimorphism, a surjective monomorphism.} \end{itemize} The partial order of morphism-extension classes depends on the type of structures that one considers (graphs, partial orders, directed graphs, etc.), and so for each type of relational structure we can ask for its partial order of morphism-extension classes or, more ambitiously, for a full classification of homomorphism-homogeneous structures, by which we mean a countable list of structures in each homomorphism-homogeneity class, up to some suitable equivalence. The ideal equivalence relation for the classification of any given morphism-extension class is given by the uniqueness conditions in the \Fraisse theorem of the class, but such a classification is not always possible. For example, limits in the classical \Fraisse theorem are unique up to isomorphism, but there are uncountably many such structures, and so the classification of ultrahomogeneous directed graphs by Cherlin \cite{cherlin1998classification} contains an uncountable class. Since all morphism-extension classes of graphs except IA and HI contain uncountably many isomorphism types of graphs, we will always find some class with uncountably many pairwise non-isomorphic elements. In the case of MB-homogeneous graphs, it is known that there exist uncountably many classes up to B-equivalence (this is the uniqueness condition in MB) in the bimorphism-equivalence class of the Rado graph. We proved in \cite{aranda2019independence} that, with the exception of four ultrahomogeneous graphs, every MB-homogeneous graph is bimorphism-equivalent to the Rado graph.
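On finite graphs, the morphism types listed above can be checked mechanically. The sketch below (helper names are ours, not standard notation) exhibits a monomorphism that is not an embedding, i.e. one that maps a nonedge to an edge:

```python
# Checking morphism types on finite graphs, represented as (vertices, edges)
# with undirected edges stored as frozensets. Helper names are illustrative.

def is_homomorphism(f, g, h):
    """f: dict V(g) -> V(h); every edge of g must land on an edge of h."""
    _, edges_g = g
    _, edges_h = h
    for e in edges_g:
        u, v = tuple(e)
        if frozenset({f[u], f[v]}) not in edges_h:
            return False
    return True

def is_monomorphism(f, g, h):
    """An injective homomorphism."""
    return is_homomorphism(f, g, h) and len(set(f.values())) == len(f)

def is_embedding(f, g, h):
    """Injective and edge-reflecting: an isomorphism onto the induced image."""
    verts_g, edges_g = g
    _, edges_h = h
    return is_monomorphism(f, g, h) and all(
        (frozenset({f[u], f[v]}) in edges_h) == (frozenset({u, v}) in edges_g)
        for u in verts_g for v in verts_g if u != v)

# The path P3 mapped vertexwise into the triangle K3: a monomorphism but not
# an embedding, since the nonedge {0, 2} of the path lands on an edge.
p3 = ({0, 1, 2}, {frozenset({0, 1}), frozenset({1, 2})})
k3 = ({0, 1, 2}, {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
f = {0: 0, 1: 1, 2: 2}
print(is_monomorphism(f, p3, k3), is_embedding(f, p3, k3))  # True False
```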
So far a full classification of homomorphism-homogeneous graphs has eluded us. The present paper is part of an effort to produce such a classification. At this point, we know the partial order of morphism-extension classes of countable graphs and countable connected graphs (Figure \ref{fig:extclassesgraphs}). We also know all the graphs in HI, IA (the Lachlan-Woodrow theorem, \cite{LachlanWoodrow:1980}), MI, and MB (see \cite{aranda2019independence}). In this paper, we extend this partial classification to include IB-homogeneous graphs up to bi\-morphism-equiva\-lence (Theorem \ref{thm:ibclass}). \begin{figure}[h!] \centering \includegraphics[scale=0.7]{ExtClassesGr.pdf} \caption{Morphism-extension classes of connected countable graphs (left) and countable graphs (right), ordered by $\subseteq$.} \label{fig:extclassesgraphs} \end{figure} \section{Represented Morphisms}\label{sec:represented} The purpose of this section is to introduce the notion of a \emph{represented morphism} and rephrase the definition of IB-homogeneity in terms of sets of represented monomorphisms. We will need some notation and conventions. Recall that the age of a relational $L$-structure $M$ is the class $\age{M}$ of all finite $L$-structures that embed into $M$. In this work, we will think of the age of $M$ as containing only one representative from each isomorphism type of $L$-structure embeddable into $M$. There is no real loss in this approach, and it has the salubrious effect of transforming all statements about the proper class $\age{M}$ into statements about a countable set. \begin{notation}~ \begin{enumerate} \item{We will use $A\sqsubset M$ to indicate that $A$ is a finite subset of $M$.
We identify a finite subset of $M$ with the induced substructure on it.} \item{The edge relation in a graph will be denoted by $\sim$.} \item{We will denote the restriction of a function $F$ to a subset $X$ of its domain by $F_X$ instead of the more usual $F\restriction{X}$.} \item{The left inverse of an injective function $g$ will be denoted by $\overline{g}$. We reserve the notation $g^{-1}$ for two-sided inverses.} \item{We will use Gothic letters for the elements of $\age{M}$. If $A\sqsubset M$, then $\isotp{A}$ is the unique element of $\age{M}$ isomorphic to $A$.} \item{$\bim{G}$ is the bimorphism monoid of $G$.} \end{enumerate} \end{notation} Throughout the paper, we will use the phrase \emph{local morphism} (or homomorphism, monomorphism, etc.) to refer to a morphism whose domain is a finite induced substructure of the ambient $L$-structure. When compared with other morphism-extension classes, the six classes marked in bold in Figure \ref{fig:ctblestrs} have a mismatch between the type of local homomorphism and the type of endomorphism. To understand what we mean by ``mismatch,'' note that all the morphism-extension classes are defined by a promise of the form ``every local X-morphism is a restriction of a global Y-morphism.'' All restrictions of an endomorphism are homomorphisms; likewise, the restrictions of a monomorphism or a bimorphism will be monomorphisms. The mismatch lies in the fact that when XY$\in\{\mathrm{IH, IM, IE, MH, IB, ME}\}$, the restrictions of an endomorphism of type Y define a larger class of local homomorphisms than X. It is for this reason that Coleman's approach in \cite{Coleman:2018} did not yield a \Fraisse theorem for these six classes. What the mismatch tells us is that we should not be looking at X-morphisms exclusively, but accept a larger class. We call these morphisms \emph{represented morphisms}, and define them formally below.
We think of elements of $\age{M}$ and morphisms between them as archetypes for finite induced substructures of $M$ and local morphisms in $M$. We reflect this idea in our notation by using Gothic typeface for elements of the age and functions between them. \begin{definition} Let $M$ be a relational $L$-structure, $\isotp{A,B}\in\age{M}$, and $e_A\colon\isotp{A}\to M, e_B\colon\isotp{B}\to M$ be embeddings with images $A,B$ respectively. \begin{enumerate} \item{A monomorphism $\isotp{f}\colon\isotp{A}\to\isotp{B}$ is \emph{manifested by} $f\colon A\to B$ \emph{over} $e_A,e_B$, if $f\circ e_A=e_B\circ\isotp{f}$; in this case, we will also say that $f$ is a \emph{manifestation} of $\isotp{f}$ over $e_A,e_B$. Thus, any local monomorphism is a manifestation of some monomorphism between elements of the age over a pair of embeddings.} \item{A monomorphism $\isotp{f}\colon\isotp{A}\to\isotp{B}$ is \emph{represented in} $\bim{M}$ \emph{over} $e_A,e_B$ if there exists a bimorphism $F\in\bim{M}$ such that $\isotp{f}$ is manifested by $F_A\colon A\to B$. We will say that $F$ represents $\isotp{f}$.} \item{Given relational $L$-structures $A$ and $B$, we use $\repr{Mon}{A,B}{}{}$ to denote the set of all mono\-morphisms $A\to B$; similarly, $\repr{Emb}{A,B}{}{}$ denotes all embeddings $A\to B$.} \item{We use $\repr{Mon}{e,e'}{M}{Bi}$ to denote the set $$\{f\in\repr{Mon}{\isotp{A,B}}{}{}:\exists F\in\bim{M}(F_{\img{e}}\circ e=e'\circ f)\},$$ that is, the set of monomorphisms represented in $\bim{M}$ over $e,e'$.} \end{enumerate} \end{definition} We can use these notions to give an alternative definition of IB-homogeneity. \begin{proposition} A relational structure $M$ is IB-homogeneous if and only if for all $\isotp{A}\in\age{M}$, and all embeddings $e,e'\colon\isotp{A}\to M$, $\aut{\isotp{A}}\subseteq\repr{Mon}{e,e'}{M}{Bi}$. \end{proposition} \begin{proof} Suppose $M$ is IB-homogeneous, and let $\isotp{s}\colon\isotp{A}\to\isotp{A}$ be an automorphism.
Take any embeddings $e,e'\colon\isotp{A}\to M$, and let $A,A'$ be their images. Then $i\coloneqq e'\circ\isotp{s}\circ\overline e\colon A\to A'$ is a local isomorphism, which by IB-homogeneity is a restriction of some bimorphism $I$. It follows that $I_A\circ e=e'\circ\isotp{s}$, and so $\isotp{s}$ is represented in the bimorphism monoid of $M$ by $I$. Now suppose that the condition from the statement is satisfied, and let $i\colon A\to A'$ be a local isomorphism. Choose embeddings $e\colon\isotp{A}\to A, e'\colon\isotp{A}\to A'$. Now $\isotp{s}\coloneqq\overline{e'}\circ i\circ e$ is an automorphism of $\isotp{A}$. By hypothesis $\isotp{s}$ is represented in $\bim{M}$, which means that $i$ is a restriction of a bimorphism of $M$. \end{proof} \begin{proposition}\label{prop:manifestations} Suppose $M$ is an IB-homogeneous relational structure and $\isotp{A,B}\in\age{M}$. Let $i,i'\colon\isotp{A}\to M$ and $e,e'\colon\isotp{B}\to M$ be embeddings. Then $\isotp{f}\in\repr{Mon}{i,e}{M}{Bi}$ iff $\isotp{f}\in\repr{Mon}{i',e'}{M}{Bi}$. \end{proposition} \begin{proof} Let $A,B$ be the images of $i,e$ and $A',B'$ be the images of $i',e'$. Suppose $\isotp{f}\in\repr{Mon}{i,e}{M}{Bi}$. Then there exists $F\in\bim{M}$ such that $F_A\circ i=e\circ\isotp{f}$. Since $i,i',e,e'$ are embeddings, there exist isomorphisms $j\colon A'\to A$ and $k\colon B\to B'$ which moreover satisfy $j\circ i'=i$ and $k\circ e=e'$. By IB-homogeneity, $j$ and $k$ are restrictions of bimorphisms $J,K$. We claim that $K\circ F\circ J$ represents $\isotp{f}$ over $i',e'$. To see this, note that $G\coloneqq(K\circ F\circ J)_{A'}=k\circ F_A\circ j$. Thus, \[ \begin{split} G\circ i'&=(K\circ F\circ J)_{A'}\circ i'=k\circ F_A\circ(j\circ i')=k\circ(F_A\circ i)=\\ &=(k\circ e)\circ\isotp{f}=e'\circ\isotp{f}, \end{split} \] and $\isotp{f}$ is represented over $i',e'$. The same proof works in the other direction as well.
\end{proof} In an IB-homogeneous structure $M$, the sets $\repr{Mon}{e,e'}{M}{Bi}$ do not depend on the embeddings $e,e'$, but only on the isomorphism type of their domain. In other words, if a monomorphism $\isotp{f}$ is represented in the bimorphism monoid of an IB-homogeneous structure, then \emph{any manifestation of $\isotp{f}$ in $M$ is a restriction of a bimorphism of} $M$. \begin{notation} We will write $\repr{Mon}{\isotp{A,B}}{M}{Bi}$ instead of $\repr{Mon}{e_A,e_B}{M}{Bi}$ (where $e_A\colon\isotp{A}\to M, e_B\colon\isotp{B}\to M$ are embeddings) when $M$ is IB-homogeneous. \end{notation} \iffalse Since the morphisms of $\age{G}$ are total functions, we need a small workaround to speak about restrictions. We will say that $\isotp{f'}\colon\isotp{A'}\to\isotp{B'}$ is a restriction of $\isotp{f}\colon\isotp{A}\to\isotp{B}$ if there exist embeddings $e_0\colon\isotp{A'}\to A$ and $e_1\colon\isotp{B'}\to B$ such that $e_1\circ\isotp{f'}=\isotp{f}\circ e_0$. It is clear from the definition of IB-homogeneity that any restriction of a represented morphism is also represented, but we include the following proof for completeness and to ensure that all definitions are playing well together. \begin{proposition} Suppose $M$ is an IB-homogeneous relational structure and $\isotp{A,B}\in\age{M}$. If $\isotp{f}\in\repr{Mon}{\isotp{A,B}}{M}{Bi}$ and $\isotp{f'}\colon\isotp{A'}\to\isotp{B'}$ is a restriction of $\isotp{f}$, then $\isotp{f'}\in\repr{Mon}{\isotp{A',B'}}{M}{Bi}$. \end{proposition} \begin{proof} Let $i\colon\isotp{A'}\to\isotp{A}$ and $j\colon\isotp{B'}\to\isotp{B}$ be embeddings such that $\isotp{f}\circ i=j\circ\isotp{f'}$. Since $M$ is IB-homogeneous, $\isotp{f}\in\repr{Mon}{\isotp{A,B}}{M}{Bi}$ tells us that for any embeddings $e_A\colon\isotp{A}\to A\sqsubset M, e_B\colon\isotp{B}\to B\sqsubset M$ there exists $F\in\bim{M}$ such that $F_A\circ e_A=e_B\circ\isotp{f}$.
Choose embeddings $e_{B'}\colon\isotp{B'}\to M, e_{A'}\colon\isotp{A'}\to M$ such that $e_B\circ j=e_{B'}$ and $e_A\circ i=e_{A'}$. Clearly, $F$ represents $\isotp{f'}$ over $e_{A'},e_{B'}$ (see DIAGRAM BELOW). \end{proof} \fi \section{IB-homogeneous graphs}\label{sec:graphs} In this section we will prove that any IB-homogeneous graph is either ultrahomogeneous or MB-homogeneous. It follows from the Lachlan-Woodrow theorem \cite{LachlanWoodrow:1980} and the classification of MB-homogeneous graphs in \cite{aranda2019independence} that all IB-homogeneous graphs are known up to bimorphism-equivalence. The complement of a graph $G=(V,E)$ is $\overline G=(V,V^2\setminus(D\cup E))$, where $D=\{(g,g):g\in G\}$. Observe that we assume $G$ and $\overline G$ have the same vertex set. \begin{definition} Let $G,H$ be graphs. A function $f\colon G\to H$ is an \emph{antibimorphism} if it is bijective and $u\not\sim v$ in $G$ implies $f(u)\not\sim f(v)$. \end{definition} \begin{observation}\label{obs:anti} Let $G,H$ be graphs and suppose that $F\colon G\to H$ is a bijective function. The following are equivalent: \begin{enumerate} \item{$F$ is a bimorphism $G\to H$,} \item{$F$ is an antibimorphism $\overline G\to\overline H$,} \item{$F^{-1}$ is a bimorphism $\overline H\to\overline G$,} \item{$F^{-1}$ is an antibimorphism $H\to G$.} \end{enumerate} \end{observation} \begin{proof} It suffices to show $1\Rightarrow 2\Rightarrow 3$ because $(F^{-1})^{-1}=F$ and $\overline{\overline{G}}=G$. \begin{description} \item[(1$\Rightarrow$ 2)]{Suppose that $F$ is a bimorphism, so it preserves edges. Now $u\not\sim v$ in $\overline G$ iff $u\sim v$ in $G$, and $u\sim v$ in $G$ implies $F(u)\sim F(v)$ in $H$, or equivalently $F(u)\not\sim F(v)$ in $\overline H$.} \item[(2$\Rightarrow$ 3)]{Suppose $u\sim v$ in $\overline H$.
Since $F$ preserves nonedges, $F^{-1}(u)\not\sim F^{-1}(v)$ is not possible, so we must have $F^{-1}(u)\sim F^{-1}(v)$.} \end{description} \end{proof} \begin{remark}\label{rmk:whengish} Let $\bip{G}$ be the set of inverses of bimorphisms of $G$, that is, the antibimorphisms of $G$. When $G=H$, we obtain from Observation \ref{obs:anti} that the following four conditions are equivalent: \begin{multicols}{2} \begin{enumerate} \item{$F\in\bim{G}$,} \item{$F\in\bip{\overline{G}}$,} \item{$F^{-1}\in\bim{\overline{G}}$,} \item{$F^{-1}\in\bip{G}$.} \end{enumerate} \end{multicols} \end{remark} The easy proposition below will be in the background for most of the paper. \begin{proposition}\label{prop:leftinv} If $M$ is an IB-homogeneous structure, then the left inverse of every finite represented monomorphism can be extended to an antibimorphism. \end{proposition} \begin{proof} If $\overline{g}\colon Y\to X$ is the left inverse of $g\colon X\to Y$, then by IB-homogeneity $g$ is a restriction of some bimorphism $G\colon M\to M$, and $G^{-1}$ is an extension of $\overline{g}$. \end{proof} \begin{lemma}\label{prop:complements} If $G$ is an IB-homogeneous graph, then so is $\overline G$. \end{lemma} \begin{proof} If $f\colon\overline X\to\overline Y$ is a local isomorphism in $\overline G$, then so is $f^{-1}\colon Y\to X$, where we think of $X$ and $Y$ as embedded in $G$. By IB-homogeneity, $f^{-1}$ is represented in $G$, so it is a restriction of a bimorphism $F$ of $G$. The rest follows from Remark \ref{rmk:whengish}, as $F^{-1}$ is a bimorphism of $\overline G$ that extends $f$. \end{proof} Recall that in an ambient graph $G$ we call a vertex $v$ a \emph{cone} over $X\subset G$ if $v\sim x$ for all $x\in X$. Similarly, we call $v$ a \emph{co-cone} over $X$ if $v\notin X$ and $v\not\sim x$ for all $x\in X$. \begin{notation} The monomorphism mapping a nonedge to an edge will be denoted by $m$.
\end{notation} \begin{observation}\label{obs:ultrahom} If $G$ is an IB-homogeneous graph that does not represent $m$, then $G$ is ultrahomogeneous. \end{observation} \begin{proof} Every isomorphism between finite substructures extends to a bimorphism, which cannot map any nonedge to an edge, as in that case $m$ would be represented in $\bim{G}$. It follows that all bimorphisms of $G$ are automorphisms, and by IB-homogeneity every local isomorphism is a restriction of an automorphism of $G$. \end{proof} \begin{proposition}\label{prop:comprepm} If $G$ is IB-homogeneous and represents $m$, then $\overline G$ also represents $m$. \end{proposition} \begin{proof} Let $M$ be any bimorphism that represents $m$ in $\bim{G}$. Then by Remark \ref{rmk:whengish}, the same permutation of vertices is an antibimorphism of $\overline G$ mapping an edge to a nonedge, and so $M^{-1}$ is a bimorphism of $\overline G$ that represents $m$. \end{proof} \begin{lemma}\label{lem:largecliques} Let $G$ be an IB-homogeneous graph that represents $m$. Then for every $X\sqsubset G$ there exist $F,M\in\bim{G}$ such that the preimage of $X$ under $M$ is an independent set and the image of $X$ under $F$ is a clique. In particular, $G$ embeds arbitrarily large cliques and independent sets.\end{lemma} \begin{proof} Suppose that $G$ is IB-homogeneous and represents $m$, and let $X\sqsubset G$ be any finite subset of $G$. We will show that $G$ embeds a clique of size $|X|$. If $X$ is a clique, then we are done. Otherwise, there is a nonedge $x\not\sim y$ in $X$. Let $u\sim v$ be any edge. Since $m$ is represented in $\bim{G}$, the map $x\mapsto u, y\mapsto v$ is a restriction of a bimorphism $F_0$, by Proposition \ref{prop:manifestations}. The image of $X$ under $F_0$ is a set of size $|X|$ with strictly more edges than $X$. Iterating this procedure and composing the bimorphisms from each step, we obtain a bimorphism $F$ that maps $X$ to a complete graph on $|X|$ vertices.
We can now use the partial result from the preceding paragraph. By Lemma \ref{prop:complements}, $\overline{G}$ is IB-homogeneous, so there is a bimorphism $M\in\bim{\overline{G}}$ that maps the graph induced on $X$ to a clique. The inverse of $M$ is a bimorphism of $G$ by Remark \ref{rmk:whengish}, and it maps an independent set of size $|X|$ to $X$. \end{proof} \begin{definition}~ \begin{enumerate} \item{The \emph{independence number} of a graph is $$\alpha(G)=\sup\{|X|\colon X\subset G\text{ is an independent set}\}$$ when that number is finite, and $\infty$ otherwise.} \item{The \emph{star number} of a graph $G$ is $$\oo{G}=\sup\{n:K_{1,n}\in\age{G}\}$$ when that number is finite, and $\infty$ otherwise.} \item{A graph $G$ has Property {\normalfont(}$\bigtriangleup${\normalfont )}\xspace if every finite $X\subset G$ has a cone.} \item{A graph $G$ has Property {\normalfont(}$\therefore${\normalfont)}\xspace if $\overline G$ has {\normalfont(}$\bigtriangleup${\normalfont )}\xspace.} \end{enumerate} \end{definition} \begin{fact}[Proposition 3.6 in \cite{ColemanEvansGray:2019}]\label{fact:coleman} If $G$ satisfies {\normalfont(}$\bigtriangleup${\normalfont )}\xspace and {\normalfont(}$\therefore${\normalfont)}\xspace, then $G$ is MB-homogeneous. \end{fact} Since MB-homogeneous graphs are HH-homogeneous, we obtain the following fact as a special case of Corollary 21 from \cite{aranda2019independence}. \begin{fact}\label{fact:AH} If $G$ is an MB-homogeneous graph with infinite independence number, then $G$ has {\normalfont(}$\bigtriangleup${\normalfont )}\xspace. \end{fact} \begin{lemma}\label{lem:infcodep} If $G$ is an IB-homogeneous graph with infinite star number that represents $m$, then $G$ is MB-homogeneous. \end{lemma} \begin{proof} We know from Lemma \ref{lem:largecliques} that any IB-homogeneous graph $G$ that represents $m$ embeds arbitrarily large independent sets.
As the proof of Lemma \ref{lem:largecliques} shows, in any such graph there is always a bimorphism mapping an independent set of size $k$ onto any given subset of size $k$. Thus, if the star number of $G$ is $\infty$, then every finite independent set has a cone. By Lemma \ref{lem:largecliques}, every finite induced subgraph is the image of an independent set under a bimorphism, and so every $A\sqsubset G$ has a cone. This proves that $G$ satisfies {\normalfont(}$\bigtriangleup${\normalfont )}\xspace. Now, the complement of $G$ also represents $m$ (Proposition \ref{prop:comprepm}), and therefore the same argument proves that $\overline{G}$ satisfies {\normalfont(}$\bigtriangleup${\normalfont )}\xspace, or, equivalently, $G$ satisfies {\normalfont(}$\therefore${\normalfont)}\xspace. Fact \ref{fact:coleman} now tells us that $G$ is MB-homogeneous. \end{proof} \begin{lemma}\label{lem:ctrg} Let $G$ be an IB-homogeneous graph with finite star number that represents $m$. Then $G$ is MB-homogeneous. \end{lemma} \begin{proof} By Lemma \ref{lem:largecliques}, any IB-homogeneous graph that represents $m$ embeds arbitrarily large independent sets. This implies that $G$ satisfies {\normalfont(}$\bigtriangleup${\normalfont )}\xspace, by Fact \ref{fact:AH}. Now we prove that $G$ also satisfies {\normalfont(}$\therefore${\normalfont)}\xspace. \begin{claim}\label{claim:infind} There are no finite $\subseteq$-maximal independent sets in $G$. \end{claim} \begin{proof} Suppose that $A$ is a finite maximal independent subset of $G$. By Lemma \ref{lem:largecliques}, there exists a strictly larger finite independent set $B$ embedded in $G$. Let $A'$ be any subset of $B$ of size $|A|$. Then a bijection $A\to A'$ is an isomorphism, and thus there exists a bimorphism $F$ that extends it. Consider any $b\in B\setminus A'$; its preimage under $F$ is a co-cone over $A$, contradicting the maximality of $A$ as an independent subset.
\end{proof} Let $I$ be a maximal independent set in $G$, so for any vertex $v\in G\setminus I$, the set $N(v)\cap I$ is nonempty. Since the star number is finite, we know that for all $v\in G$, $|N(v)\cap I|\leq\oo{G}$. Therefore, for any finite $X\subset G$ we have $$\left|\bigcup\left\{N(x)\cap I:x\in X\setminus I\right\}\right|+|X\cap I|\leq\oo{G}\cdot|X|.$$ By Claim \ref{claim:infind}, $I$ is infinite, and so any element of $I\setminus\left(\bigcup\left\{N(x)\cap I:x\in X\setminus I\right\}\cup(X\cap I)\right)$ is a co-cone over $X$. It follows that $G$ satisfies {\normalfont(}$\therefore${\normalfont)}\xspace, so by Fact \ref{fact:coleman}, $G$ is MB-homogeneous. \end{proof} \begin{theorem}\label{thm:ibclass} If $G$ is IB-homogeneous, then $G$ is MB-homogeneous or ultrahomogeneous. \end{theorem} \begin{proof} If $G$ represents $m$, then $G$ is MB-homogeneous by Lemmas \ref{lem:infcodep} and \ref{lem:ctrg}. Otherwise, $G$ is ultrahomogeneous by Observation \ref{obs:ultrahom}. \end{proof} It is not true for general relational structures that IB is the union of IA and MB. All the examples that we know where IB is not IA$\cup$MB have the property that there is a partition of the language $L=L_0\cup L_1$ such that the $L_0$-reduct of $M$ is ultrahomogeneous and the $L_1$-reduct is MB-homogeneous. This motivates the following two questions. \begin{question} Is it true that for all languages $L$ containing only one relation, every countable IB-homogeneous $L$-structure is ultrahomogeneous or MB-homogeneous? If not, then what is the least arity such that a single-relation language has countable IB-homogeneous structures that are not ultrahomogeneous or MB-homogeneous? \end{question} \begin{question} Is it true that if $M$ is an IB-homogeneous $L$-structure, then there exists a partition of $L$ into $L_1,L_2$ such that the $L_1$-reduct of $M$ is ultrahomogeneous and the $L_2$-reduct of $M$ is MB-homogeneous?
\end{question} \section{Acknowledgements} I thank Jan Hubi\v{c}ka and David Bradley-Williams, who patiently listened to rants about preliminary versions of this work. Research funded by the ERC under the European Union's Horizon 2020 Research and Innovation Programme (grant agreement No. 681988, CSP-Infinity).
\section{Introduction} The cold dark matter (CDM) model of structure formation has proven a very successful paradigm for understanding the large-scale structure of the Universe. However, as far as galaxy formation is concerned, a number of important issues still remain. A long-standing problem is that CDM models in general predict too many low-mass dark matter haloes. The mass function of dark matter haloes, $n(M)$, scales with halo mass roughly as $n(M)\propto M^{-2}$ at the low-mass end. This is in strong contrast with the observed luminosity function of galaxies, $\Phi (L)$, which has a rather shallow shape at the faint end, with $\Phi(L) \propto L^{-1}$. To reconcile these observations with the CDM paradigm, the efficiency of star formation must be a strongly nonlinear function of halo mass (e.g. Kauffmann, White \& Guiderdoni 1993; Benson \etal 2003; Yang \etal 2003; van den Bosch \etal 2003b). One of the biggest challenges in galaxy formation is to understand the physical origin of this strongly nonlinear relationship. Another important but related challenge is to understand the baryonic mass fraction as a function of halo mass. The baryonic mass in dark matter haloes may be roughly divided into three components: stars, cold gas, and hot gas. In the most naive picture one would expect that each halo has a baryonic mass fraction that is close to the universal value of $\sim 17$ percent\footnote{Corresponding to a $\Lambda$CDM concordance cosmology with $\Omega_B h^2 = 0.024$ and $\Omega_m h^2=0.14$ (Spergel \etal 2003)}. In this naive picture one can achieve a low ratio of stellar mass to halo mass by keeping a relatively large fraction of the baryonic mass hot, either by preventing the gas from cooling or by providing a heating source that turns cold gas into hot gas.
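The universal baryon fraction quoted above follows directly from the cosmological parameters in the footnote:

```python
# Universal baryon fraction for the quoted concordance cosmology:
# f_b = (Omega_B h^2) / (Omega_m h^2), with the Spergel et al. (2003) values.
omega_b_h2 = 0.024
omega_m_h2 = 0.14
f_b = omega_b_h2 / omega_m_h2
print(f"f_b = {f_b:.3f}")  # ~0.17, i.e. ~17 per cent
```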
Alternatively, one may achieve a low star formation efficiency in low-mass haloes by making the total baryonic mass fraction lower in lower mass haloes, which can be achieved either by blowing baryons out of haloes or by preventing baryons from ever becoming bound to haloes in the first place. At present it is unclear which of these scenarios dominates. To a large extent this ignorance owes to the poor observational constraints on the baryonic inventory as a function of halo mass. Historically, most observational studies of galaxies have focussed on the stellar light. Although this has given us a fairly detailed picture of the relation between stellar mass (or light) and halo mass (e.g., Yang \etal 2003, 2005; van den Bosch \etal 2003b; Tinker \etal 2004), less information is available regarding the hot and cold gas components. Hot, tenuous gas in low mass haloes is notoriously difficult to detect, making our knowledge of the relation between halo mass and hot gas mass quite limited. Based on X-ray observations of a few relatively massive spiral galaxies, the gas mass in a hot halo component appears to be small (e.g. Benson et al. 2000). For cold gas the situation has improved substantially in recent years, owing to the completion of relatively large, blind 21-cm surveys (e.g. Schneider, Spitzak \& Rosenberg 1998; Kraan-Korteweg \etal 1999; Rosenberg \& Schneider 2002; Zwaan \etal 2003, 2005). Using these surveys, it is now possible to obtain important constraints on galaxy formation (see Section \ref{sec:coldgas}). When modelling galaxy formation, the process most often considered to suppress star formation in low mass haloes is feedback from supernova explosions. As shown by Dekel \& Silk (1986) and White \& Frenk (1992), the total amount of energy released by supernovae can be significantly larger than the binding energy of the gas in low mass haloes.
Therefore, as long as a sufficiently large fraction of this energy can be converted into kinetic energy (often termed the `feedback efficiency'), one can in principle expel large amounts of baryonic material from low mass haloes, thus reducing the efficiency of star formation. Indeed, semi-analytical models for galaxy formation that include a simple model for this feedback process are able to reproduce the observed slope of the faint-end luminosity function in the standard $\Lambda$CDM model, if the feedback efficiency is taken to be sufficiently high (e.g., Benson \etal 2003; Kang \etal 2005). An important question, however, is whether such high efficiencies are realistic. For example, detailed hydrodynamical simulations by Mac-Low \& Ferrara (1999) and Strickland \& Stevens (2000) show that supernova feedback is far less efficient in expelling mass than commonly assumed because the onset of Rayleigh-Taylor instabilities severely limits the mass loading efficiency of galactic winds. This prompted investigations into alternative mechanisms to lower the star formation efficiency in low mass haloes. Another possibility is that photoionization heating by the UV background may prevent gas from cooling into low-mass haloes (e.g., Efstathiou 1992; Thoul \& Weinberg 1996) by increasing its temperature. Numerical simulations have shown that this effect is only efficient in dark matter haloes with $M\la 10^{10}h^{-1}\msun$ (e.g. Quinn et al. 1996, Gnedin 2000; Hoeft \etal 2005), since the gas is only heated to $\sim 10^4$ to $10^5$ K. Although this might be sufficient to explain the relatively low abundance of satellite galaxies in Milky Way sized haloes (Bullock, Kravtsov \& Weinberg 2000; Tully \etal 2002), it is insufficient to explain the faint-end slope of the galaxy luminosity function. 
However, if a different mechanism were to heat the intergalactic medium (hereafter IGM) to even higher temperatures (and thus higher entropies), the same mechanism could affect more massive haloes as well. Such a process is often referred to as `preheating'. Along these lines, Mo \& Mao (2002) considered galaxy formation in an IGM that was preheated to high entropy by vigorous energy feedback associated with the formation of stars in old ellipticals and bulges and with active galactic nuclei (AGN) activity at redshifts of 2 to 3. They showed that such a mechanism can produce the entropy excess observed today in low-mass clusters of galaxies without destroying the bulk of the Ly-$\alpha$ forest. In addition, it would affect the formation of galaxies in low mass haloes whose virial temperature is lower than that of the preheated IGM. Numerical simulations show that such preheating may indeed significantly lower the gas mass fraction in low mass haloes (e.g., van den Bosch, Abel \& Hernquist 2003a; Lu et al. in preparation). In this paper we investigate an alternative mechanism for creating a preheated IGM. Rather than relying on star formation or AGN, we consider the possibility that the collapse of pancakes (also called sheets) and filaments heats the gas in these structures and that the low mass haloes within them form in a preheated medium. Although the standard picture of hierarchical formation is one in which more massive structures form later, gravitational tidal fields suppress the formation of low-mass haloes, while promoting the formation of pancakes. Consequently, the formation of massive pancakes will precede that of low-mass dark matter haloes, which form within them. In this paper we show that the shock heating associated with pancake collapse at $z \lta 2$ can heat the associated gas to sufficiently high entropy that the subsequent gas accretion into the low mass haloes that form within these pancakes is strongly affected. 
We demonstrate that the impact of this previrialization is strong enough to explain both the faint-end of the galaxy luminosity function and the low mass end of the galaxy HI mass function, without having to rely on unrealistically high efficiencies for supernova feedback. The outline of the paper is as follows. In \S~\ref{sec:coldgas}, we use current observational results of the HI gas mass function to constrain star formation and feedback in low-mass haloes. We show that these observations are difficult to reconcile with the conventional feedback model, but that a consistent model can easily be found if low-mass haloes are embedded in a preheated medium. In \S~\ref{sec:preheating} we describe how shocks associated with the formation of pancakes can preheat the gas around low-mass haloes. In \S~\ref{sec:GF} we discuss our results in light of existing numerical simulations and discuss the impact of our results on the properties of galaxies and the intergalactic medium. We summarize our results in \S~\ref{sec:concl}. \section{The Formation of Disk Galaxies} \label{sec:coldgas} \subsection{Observational Constraints} The models discussed below focus on galaxies that form at the centers of relatively low mass haloes with $M \lta 10^{12} h^{-1} \Msun$. To constrain these models, we first derive the stellar mass function of central galaxies using the conditional luminosity function (CLF), which expresses how many galaxies of luminosity $L$ reside, on average, in a halo of mass $M$. Using both the galaxy luminosity function and the luminosity dependence of the correlation length of the galaxy-galaxy correlation function obtained from the 2-degree Field Galaxy Redshift Survey (2dFGRS, Colless \etal 2001), Yang \etal (2003) and van den Bosch \etal (2003b) were able to put tight constraints on the CLF (see also Yang \etal 2005 and van den Bosch \etal 2005). 
As shown in Yang \etal (2003), the CLF also allows one to compute the average relation between halo mass and the luminosity of the central galaxy (assumed to be the brightest galaxy in the halo). We have determined this relation using the fiducial CLF model considered in van den Bosch \etal (2005; Model 6 listed in their Table~1). To obtain a stellar mass function for the central galaxies, we convert the 2dFGRS $b_J$-band luminosity into a stellar mass using a stellar mass-to-light ratio $M_{\rm star}/L = 4.0 \, (L/L^\star)^{0.3} \, (M/L)_\odot$ for $L \leq L^\star$ and $M_{\rm star}/L=4.0 (M/L)_\odot$ for $L > L^\star$, which matches the mean relation between stellar mass and blue-band luminosity obtained by Kauffmann \etal (2003). The resulting stellar mass function of central galaxies is shown in Fig.~\ref{fig:MF}(a) as the short-dashed curve. For comparison, we also plot the stellar mass functions of {\it all} galaxies obtained by Bell \etal (2003) (dotted curve) and Panter, Heavens \& Jimenez (2004) (long-dashed curve). Given the large uncertainties involved (see Bell \etal 2003 for a detailed discussion), these stellar mass functions are in remarkably good agreement with each other, particularly for the low mass galaxies that are the focus of this study. The good agreement suggests that most low mass galaxies are indeed central galaxies in small haloes, i.e. satellite galaxies do not dominate the stellar mass function (see also Cooray \& Milosavljevi\'c 2005). In addition to the stellar mass function, we also constrain our models using the HI mass function of galaxies. With large, blind 21-cm surveys that have recently been completed, this HI-mass function has now been estimated quite accurately over a relatively large range of masses (see Zwaan \etal 2005 and references therein). In Fig.\,\ref{fig:MF}(b), we show the recent results obtained by Zwaan \etal (2005) and Rosenberg \& Schneider (2002). 
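The piecewise mass-to-light conversion used above can be written compactly as follows; this is our own illustrative transcription (the function name and sample luminosity are ours), not the authors' code:

```python
def stellar_mass_to_light_ratio(L, L_star=1.0):
    """M_star/L in solar units, as quoted in the text (matching the mean
    relation of Kauffmann et al. 2003):
    4.0 * (L/L*)^0.3 for L <= L*, and a constant 4.0 for L > L*."""
    if L <= L_star:
        return 4.0 * (L / L_star) ** 0.3
    return 4.0

# Converting an illustrative b_J-band luminosity to a stellar mass:
L = 0.1                                        # in units of L*
m_star = stellar_mass_to_light_ratio(L) * L    # in units of L* (M/L)_sun
```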
Both HI-mass functions are well fit by a Schechter (1976) function with a power-law slope at the low mass end of about $-1.3 \pm 0.1$. Note that this is slightly steeper than the power-law slope at the low mass end of the stellar mass function, which is about $-1.16$ (Panter et al. 2004). Since galaxy formation is a process that involves both stars and cold gas, a combination of observational constraints on the luminosity function and the HI-mass function provides important constraints on star formation and feedback. In fact, as we show below, the HI-mass function constraints are more generic, and it is only by including them that we are able to argue against the standard supernova feedback model. \begin{figure*} \centerline{\psfig{figure=figure1.ps,width=\hdsize}} \caption{ (a) The stellar mass functions predicted by the standard model, the model with heating by the UV background, and the preheating model, as labelled. The curve labelled `SN' is the prediction of a model in which cold gas is heated by supernova explosions (see text for details). In addition, we show the observational results of Bell \etal (2003; dotted curve) and Panter \etal (2004; long-dashed curve), as well as the stellar mass function of {\it central} galaxies in dark matter haloes derived from the CLF as described in the text (short-dashed curve). (b) The HI-mass functions predicted by the same three models compared to the observed HI mass functions of Rosenberg \& Schneider (2002; RS, open circles) and Zwaan \etal (2005; solid dots).} \label{fig:MF} \end{figure*} \subsection{The standard model} \label{ssec:standard} In the standard picture of galaxy formation (White \& Rees 1978) it is assumed that gas cooling conserves specific angular momentum. As a result, the baryons cool to form a centrifugally supported disk galaxy (Fall \& Efstathiou 1980). In what follows we investigate the mass functions of the cold gas and stars of disk galaxies that form within this picture.
We make the simplifying assumption that each dark matter halo forms a single disk galaxy. Clearly this is a severe oversimplification since it is known that haloes, especially more massive ones, can contain more than one galaxy and not every galaxy is a disk galaxy. However, we are only interested in the properties of low mass haloes, which to good approximation contain a single, dominant disk galaxy. In particular, the studies of van den Bosch \etal (2003b), Yang \etal (2005), and Weinmann \etal (2005) show that in haloes with $M < 10^{12} h^{-1} \Msun$, the mass range considered here, the fraction of late-type galaxies is larger than 60 percent (see also Berlind \etal 2003). Thus, our simplified model will overpredict the number density of disk galaxies in low mass haloes, but not by more than a factor of two. Furthermore, we will conclude below that the main problem with the standard model is one of gas cooling, a problem that will exist independent of galaxy type. To model the detailed structure of individual disk galaxies we use the model of Mo, Mao \& White (1998, hereafter MMW), which matches a wide variety of properties of disk galaxies. Specifically, this model assumes that (i) the baryons have the same specific angular momentum as the dark matter, (ii) they conserve their specific angular momentum when they cool, (iii) they form an exponential disk, and (iv) the halo responds to the gas cooling by adiabatically contracting. Assumptions (i) and (iv) are supported by numerical simulations (van den Bosch \etal 2002; Jesseit, Naab \& Burkert 2002), while assumption (ii) is required to obtain disks of the right size. Finally, assumption (iii) is equivalent to assuming a particular distribution of specific angular momentum in the proto-galaxy. 
Haloes are modelled as NFW spheres (Navarro, Frenk \& White 1997) with a concentration that depends on halo mass following Bullock \etal (2001a), and a halo spin parameter, $\lambda$, that is drawn from a lognormal distribution with a median of $\sim 0.04$ and a dispersion of $\sim 0.5$ (e.g., Warren \etal 1992; Cole \& Lacey 1996; Bullock \etal 2001b). The one free parameter in this model is the disk mass fraction $m_d$, defined as the disk mass divided by the total virial mass. Since radiative cooling is very efficient in low mass haloes with $M<10^{12}h^{-1}\msun$, we start our investigation by naively setting $m_d$ equal to the universal baryon fraction, i.e., $m_d=0.17$. In a seminal paper, Kennicutt (1989) showed that star formation is abruptly suppressed below a critical surface density. This critical density is close to that given by Toomre's stability criterion \begin{equation} \label{eq:Q} \Sigma_{\rm crit}(R) = {\sigma_{\rm gas} \kappa(R) \over \pi G Q_{\rm crit}}\,, \end{equation} where $\kappa(R)$ is the epicyclic frequency, $\sigma_{\rm gas}$ is the velocity dispersion of the cold gas and $Q_{\rm crit}\sim 1$ (Toomre 1964). This critical density determines the fraction of the gas that can form stars (Quirk 1972). Given the surface density of the disk, $\Sigma_{\rm disk}$, obtained using the MMW model described above, the radius $R_{\rm SF}$ where the density of the disk equals $\Sigma_{\rm crit}$ can be calculated. Following van den Bosch (2000) we assume that the disk mass inside this radius with surface density $\Sigma_{\rm disk} > \Sigma_{\rm crit}$ turns into stars, i.e. \begin{equation} \label{mstar} M_{\rm star} = 2 \pi\int_{0}^{R_{\rm SF}} \left[\Sigma_{\rm disk}(R) - \Sigma_{\rm crit}(R)\right] R\, {\rm d}R \end{equation} Kennicutt (1989) shows that $\sigma_{\rm gas} = 6 \kms$ and $Q_{\rm crit} \sim 1.5$ yields values of $R_{\rm SF}$ that correspond roughly to the radii where star formation is truncated. 
However, Hunter et al. (1998) show that in low surface brightness galaxies $Q_{\rm crit} \sim 0.75$. Therefore, to be conservative, we adopt $\sigma_{\rm gas} = 6 \kms$ and $Q_{\rm crit} \sim 0.75$. The assumption that {\it all} the gas with $\Sigma_{\rm disk} > \Sigma_{\rm crit}$ has formed stars is consistent with both observations (Kennicutt 1989; Martin \& Kennicutt 2001; Wong \& Blitz 2002; Zasov \& Smirnova 2005) and with predictions based on the typical star formation rate in disk galaxies (Kennicutt 1998) and their formation times (see van den Bosch 2001). We compute the gas mass of each model galaxy using $M_{\rm gas} = \left(M_{\rm disk} - M_{\rm star}\right)$ where $M_{\rm disk} = m_d M$. In other words, we assume that the gas surface density $\Sigma_{\rm gas} = \Sigma_{\rm crit}$ inside $R_{\rm SF}$ and $\Sigma_{\rm gas} = \Sigma_{\rm disk}$ outside $R_{\rm SF}$. Finally, again to be conservative, we assume that the molecular hydrogen fraction is 1/2 (e.g. Keres et al. 2003; Boselli et al. 2002) so that the final HI mass of each galaxy is $M_{\rm HI} = 0.71 M_{\rm gas}/2$, where the factor of $0.71$ takes into account the contribution of helium and other heavier elements. Using the halo mass function given by the $\Lambda$CDM concordance cosmology, and assuming that each halo hosts a single disk galaxy whose properties follow from $M$, $m_d$, and $\lambda$ as described above, we obtain the HI and stellar mass functions shown as the solid curves labeled `standard' in Figure~\ref{fig:MF}. Since we are only interested in low-mass haloes, and since our model does not include any processes that may affect gas assembly in massive haloes, we artificially truncate the halo mass function at $M = 5 \times 10^{12} h^{-1} \Msun$, which explains the abrupt turnover of the model's stellar mass function at high masses.
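To make the threshold prescription of equations (\ref{eq:Q}) and (\ref{mstar}) concrete, the following sketch partitions an exponential disk into stars and cold gas. It is a strong simplification of the full MMW model: we assume a flat rotation curve, so that $\kappa(R)=\sqrt{2}\,V_c/R$, and the parameter values are purely illustrative (they are ours, not taken from the paper):

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def star_and_gas_mass(M_disk, R_d, V_c, sigma_gas=6.0, Q_crit=0.75):
    """Split an exponential disk (mass M_disk [Msun], scale length R_d [kpc])
    into stars and cold gas using the Toomre threshold, eqs. (1)-(2).
    Simplifying assumption (ours): a flat rotation curve with circular
    velocity V_c [km/s], so that kappa(R) = sqrt(2) * V_c / R."""
    R = np.linspace(1e-3, 20.0 * R_d, 20000)          # radii in kpc
    dR = R[1] - R[0]
    Sigma_0 = M_disk / (2.0 * np.pi * R_d ** 2)
    Sigma_disk = Sigma_0 * np.exp(-R / R_d)           # Msun / kpc^2
    kappa = np.sqrt(2.0) * V_c / R                    # (km/s) / kpc
    Sigma_crit = sigma_gas * kappa / (np.pi * G * Q_crit)   # eq. (1)
    # Only gas above the threshold turns into stars, eq. (2):
    excess = np.where(Sigma_disk > Sigma_crit, Sigma_disk - Sigma_crit, 0.0)
    M_star = np.sum(excess * 2.0 * np.pi * R) * dR
    return M_star, M_disk - M_star

M_star, M_gas = star_and_gas_mass(M_disk=5e9, R_d=3.0, V_c=100.0)
M_HI = 0.71 * M_gas / 2.0   # helium correction and an H2 fraction of 1/2
```

Note that in this flat-rotation-curve approximation $\Sigma_{\rm crit}\propto 1/R$ diverges at the centre, so the star-forming region is an annulus rather than everything inside $R_{\rm SF}$; the `np.where` guard restricts the integral to where the disk actually exceeds the threshold.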
Not surprisingly, our naive model severely overpredicts the stellar mass function, yielding an abundance of systems with $M_{\rm star} \simeq 10^8 h^{-2} \Msun$ that is two orders of magnitude too large. In addition, the model predicts an HI mass function at $M_{\rm HI} \lta 10^{8}h^{-2}\msun$ that is more than 10 times higher than the data. Even if we reduce the number density of dark matter haloes by a factor of 2, to account for the possibility that some isolated haloes may not host disk galaxies, the discrepancy remains a factor of five. Note that one might try to lower the stellar masses in our model by increasing $\Sigma_{\rm crit}$, but this leads to an increase of the HI masses, which are already too large. Similarly, a decrease of $\Sigma_{\rm crit}$ may improve the fit of the HI mass function, but at the expense of worsening the fit to the stellar mass function. The failure of the standard model to simultaneously fit the HI mass function and the stellar mass function is robust to the details of star formation. Fitting both mass functions simultaneously requires either a modification of the cosmological parameters or additional physics to lower $m_d$. In what follows, we consider both these possibilities separately. \subsection{Cosmological Parameters} One of the main reasons that the predicted HI mass function is very steep at the low-mass end is that the halo mass function predicted by the standard $\Lambda$CDM model is also very steep at the low-mass end. Hence, one way to alleviate the discrepancy between the model and observations is to change the cosmological parameters such that the low-mass slope of the halo mass function becomes shallower. Unfortunately, the steep halo mass function is a very generic property of all CDM models. In particular, the slope at the low-mass end is almost independent of cosmological parameters, including the cosmic density parameter and the amplitude of the primordial perturbations. 
The only way to change the slope of the mass function at the low-mass end is to assume that the effective power index of the primordial density perturbation spectrum is significantly lower than the scale invariant value. However, such models are not favored by the power spectrum derived directly from the temperature fluctuations of the cosmic microwave background and the Lyman-$\alpha$ forest (e.g., Croft \etal 1999; Seljak \etal 2003), and are difficult, if not impossible, to reconcile with inflation. Another possibility is that the universe is dominated by warm dark matter (WDM) instead of CDM, so that the power spectrum on small scales is suppressed by free-streaming damping of the WDM particles. Here again observations of the Lyman-$\alpha$ forest provide a stringent constraint on the particle mass (Narayanan \etal 2000). With particle masses in the allowed range, the WDM model yields a halo mass function that is virtually indistinguishable from that of the CDM models in the halo mass range considered here. In other words, within the parameter space allowed by the data, modifications of the cosmological parameters do not have any significant impact on the results of the standard model presented above. \subsection{Supernova Feedback} Thus far we have only considered models in which the disk mass fraction is equal to the universal baryon fraction, i.e. where $m_d=0.17$. We now consider physical mechanisms that can lower $m_d$, and investigate whether this allows a simultaneous match of the HI and stellar mass functions. We first consider what has become the standard mechanism, namely feedback by supernovae. In this model each halo acquires a baryonic mass fraction that is equal to the universal value (0.17) but $m_d$ is reduced since supernovae inject large amounts of energy into the cold gas, causing it to be ejected from the galaxy. In semi-analytical models of galaxy formation, two schemes have been proposed to model this supernova feedback.
In the `retention' model considered by Kauffmann \etal (1999), Cole \etal (2000), Springel \etal (2001) and Kang \etal (2005), among others, part of the cold gas in a galaxy disk is assumed to be heated by supernovae to the halo virial temperature and is added to the hot halo gas for cooling in the future. Since radiative cooling is effective in galaxy haloes, the feedback efficiency must be high enough to keep a large amount of the gas in the hot phase (Benson \etal 2003). However, if there is a critical surface density below which star formation ceases, no galaxy disk is expected to have a surface density below this critical density, because otherwise the feedback from star formation would be insufficient to keep most of the gas hot. We plot an example of such a model in Figure~\ref{fig:MF}. In this model we make the standard assumption that the rate at which cold gas is heated by supernova explosions is proportional to the star formation rate, ${\dot M}_{\rm reheat}=\beta {\dot M}_\star$, where $\beta=(V_{\rm hot}/V_c)^2$, with $V_c$ the circular velocity of the host halo, and $V_{\rm hot}$ an adjustable parameter (e.g. Benson et al. 2003). If the heated gas is not able to cool, the mass of the gas that can form stars will be $1/(1+\beta)$ times $m_d M-M_{\rm gas}$, where $M_{\rm gas}$ is the mass of the gas that remains in the disk. The curve labelled `SN' in Figure \ref{fig:MF} is the result corresponding to $V_{\rm hot}=280\kms$. Although the stellar mass function is reduced significantly in this model, the predicted slope at the low-mass end is much too steep. Furthermore, the HI-mass function is unchanged from the standard no-feedback model and so it is still unable to match the observed HI-mass function. In addition, this model predicts fairly extensive haloes of hot, X-ray emitting gas, which is inconsistent with observations (Benson \etal 2000).
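The strong mass dependence of this retention-model bookkeeping is easy to see numerically; the following is our own schematic of the $\beta$ scaling described above, with illustrative circular velocities:

```python
def star_forming_fraction(V_c, V_hot=280.0):
    """With reheating rate Mdot_reheat = beta * Mdot_star and
    beta = (V_hot / V_c)^2, a fraction 1/(1 + beta) of the available
    gas ends up in stars; the rest is kept hot by supernovae."""
    beta = (V_hot / V_c) ** 2
    return 1.0 / (1.0 + beta)

# Low-mass haloes are hit hardest for V_hot = 280 km/s:
f_dwarf = star_forming_fraction(70.0)    # beta = 16  -> ~6 percent in stars
f_bright = star_forming_fraction(200.0)  # beta = 1.96 -> ~34 percent in stars
```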
An alternative feedback model, considered by Kauffmann \etal (1999) and Somerville \& Primack (1999), is the `ejection' model in which the reheated gas is assumed to be ejected from the current host halo. If the initial velocity of the ejected gas is not much larger than the escape velocity of the host halo, the gas will be recaptured at a later epoch as the halo grows more massive by accreting new material from its surroundings. Then, as in the retention model, the baryon fraction in the more massive halo will be similar to the universal value, except that a (typically small) delay time is added. Consequently, this model also cannot produce disks with gas surface densities below the critical density, which again results in a severe overproduction of low HI mass systems. Only if the supernova explosion energy heats the cold gas to an energy much greater than the binding energy of the halo can the wind escape the halo forever and potentially reduce the number of low HI mass systems. However, there are several problems with this scenario. First, as shown by Martin (1999) and Heckman \etal (2000), the observed mass outflow rate in starburst galaxies, which are extreme systems, is about twice the star formation rate. This implies that one can never achieve a disk mass fraction that is lower than about 1/3 the universal baryon fraction, which is not nearly enough to match the HI observations. Second, the numerical simulations of Mac-Low \& Ferrara (1999) and Strickland \& Stevens (2000) have shown that the mass loss rates in quiescent disk galaxies are much lower than those observed in starburst galaxies. Third, as shown in Benson \etal (2003), the feedback efficiencies that are required to permanently eject the gas are completely unphysical.
Finally, as shown in van den Bosch (2002), even if one ignores all these problems and simply ejects the gas forever, the presence of a star formation threshold density still assures that the final surface density of the gas is similar to the critical surface density. As is evident from Figure 5 in that paper, supernova feedback that is modelled with permanent ejection has a drastic impact on the stellar masses but leaves the gas mass basically unchanged. Again, this owes to the fact that gas ejection requires supernovae, which in turn require star formation, which requires a gas surface density that exceeds the critical density. Note that although supernova feedback may temporarily deplete the gas surface density below the critical value, the ongoing cooling of new and previously expelled material will continue to increase the cold gas surface density until it exceeds $\Sigma_{\rm crit}$, initiating a new episode of star formation and its associated feedback. As shown in van den Bosch (2002), this results in a population of disk galaxies whose cold gas surface densities are, in a statistical sense, similar to $\Sigma_{\rm crit}$. In summary, although supernova feedback may be tuned to yield a good match to the low-mass end of the stellar mass function, it has three fundamental problems: (i) the efficiency needed seems unphysically high compared to what is achieved in detailed numerical simulations, (ii) unless the hot gas is expelled from the halo indefinitely it predicts haloes of hot, X-ray emitting gas which are inconsistent with observations, and (iii) as demonstrated here, if one takes account of a star formation threshold density, which is strongly supported by both theory and observations, it overpredicts the abundance of systems with low HI masses by almost an order of magnitude. In short, the problem of matching the observed HI mass functions is one of preventing gas from entering the galaxy in the first place.
Standard feedback schemes fail because even if all the gas is temporarily removed, e.g. by a massive supernova outflow, the halo will just reaccrete more gas. This is also why our arguments, although framed around disk galaxies, should hold for all galaxies. It is, therefore, important to seek other solutions that are physically more plausible. \subsection{Reionization and Preheating} In the models considered above we assume that the IGM accreted by the dark matter haloes is cold, allowing all of the gas originally associated with the halo to be accreted eventually. However, if the gas in the IGM is preheated to a specific entropy that is comparable to or larger than that generated by the accretion shocks associated with the formation of the haloes, not all of this gas will be accreted into the halo (e.g. Mo \& Mao 2002; Oh \& Benson 2003; van den Bosch \etal 2003a). In that case, some disks may start with a gas surface density already below the critical density, making their HI gas masses smaller. Note that this circumvents the problem with the supernova feedback scenario that requires star formation and its inherent high gas surface densities. Let us first consider photoionization heating by the UV background. After reionization, the UV background can heat the IGM to a temperature of roughly $20,000$ K. As first pointed out by Blumenthal \etal (1984), such heating of the IGM can affect gas accretion into dark matter haloes of low masses (see also Rees 1986; Efstathiou 1992; Quinn et al. 1996; Thoul \& Weinberg 1996). Recent simulations (e.g., Gnedin 2000; Hoeft \etal 2005) follow the detailed evolution of the UV background and show that at the present time the fraction of gas that can be accreted into a dark matter halo of mass $M$ can be written approximately as \begin{equation}\label{fB:UV} m_{\rm gas}={f_{B}\over (1+M_c/M)^\alpha}\,, \end{equation} where $f_B$ is the universal baryon fraction.
Following Hoeft \etal (2005), we consider a model with $M_c=1.7\times 10^{9} h^{-1}\msun$ and $\alpha=3$. Using the same disk formation model as described in Section~\ref{ssec:standard}, and assuming that disks with surface densities below $\Sigma_{\rm crit}$ do not form any stars, we can predict the cold gas and stellar masses. The resulting stellar and HI-mass functions are shown as the solid lines labelled `UV' in Figure~\ref{fig:MF}. Although preheating by the UV background clearly reduces both the stellar and HI-mass functions at the low-mass end, the strength of the effect fails to reconcile the models with the data (see also Benson \etal 2002). However, the fact that the predicted cold gas mass is significantly reduced in haloes with masses below the characteristic mass scale $M_c$ motivated us to consider a model in which the IGM around low-mass haloes is preheated to a temperature that is significantly higher than $20,000$ K. A higher temperature corresponds to a larger $M_c$. As shown in Lu \& Mo (2005, in preparation), the mass fraction of baryons that are accreted by dark matter haloes in a strongly preheated medium is well described by equation~(\ref{fB:UV}) with $\alpha \sim 1$. Therefore, as a test of this idea we adopt $\alpha=1$ and choose $M_c = 5\times 10^{11}h^{-1}\msun$, which corresponds to an initial specific entropy for the preheated IGM of $s\equiv T/n_e^{2/3}\sim 10\,{\rm keV\,cm^2}$ (where $n_e$ is the number density of electrons). To determine the relationship between $M_c$ and $s$ we assume that $T$ is the virial temperature of a halo with mass $M_c$ at the present time and that $n_e$ is the mean density of electrons within the halo. In \S\ref{sec:preheating}, we propose a model that explains how the IGM is preheated to such a level.
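The difference between the two parameter choices is easily quantified by evaluating equation (\ref{fB:UV}) directly; the halo mass below is an illustrative example of our own choosing:

```python
def accreted_gas_fraction(M, M_c, alpha, f_B=0.17):
    """Eq. (3): m_gas = f_B / (1 + M_c / M)^alpha, the fraction of the
    halo mass accreted as gas from a (pre)heated IGM."""
    return f_B / (1.0 + M_c / M) ** alpha

M = 1e11  # an example halo mass in h^-1 Msun
m_uv = accreted_gas_fraction(M, M_c=1.7e9, alpha=3)   # UV-background heating
m_pre = accreted_gas_fraction(M, M_c=5e11, alpha=1)   # strong preheating
# UV heating barely affects this halo (m_gas ~ 0.95 f_B), whereas strong
# preheating suppresses its accreted gas fraction to f_B / 6.
```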
Here we examine how this preheating affects both the HI-mass function and the stellar mass function. The solid curves labelled `preheat' in Figure~\ref{fig:MF} show the stellar and HI-mass functions predicted by this model, using the same disk formation model described above. Contrary to the standard model and the reionization model, this preheating model agrees with the data fairly well for the low mass haloes that concern us here. We underpredict the HI mass function at higher masses, but recall that, in order to be conservative, we tried to make the predicted HI masses as small as possible. For example, we took the small value $Q_{\rm crit}=0.75$, which is appropriate for dwarf galaxies. If we had instead taken $Q_{\rm crit}\sim 1.5$, which is appropriate for larger galaxies, the HI masses would increase and match the observations better. \begin{figure} \centerline{\psfig{figure=figure2.ps,width=\hssize}} \caption{The cold gas mass fraction, defined as the ratio between the mass of cold gas and the total mass of stars and cold gas, as a function of stellar mass predicted by the preheating model (crosses). The thick solid line shows the observed mean relation given by McGaugh \& de Blok (1997).} \label{fig:fg} \end{figure} Unlike the supernova feedback model, preheating can simultaneously match the HI and stellar mass functions. However, this does not guarantee that the model also predicts the correct ratio of cold gas mass to stellar mass in individual galaxies. To test this, we compute for each model galaxy the cold gas mass fraction, $f_{\rm gas} \equiv M_{\rm gas}/(M_{\rm gas} + M_{\rm star})$. Figure~\ref{fig:fg} plots $f_{\rm gas}$ as a function of the stellar mass. The scatter in the model predictions results from the scatter in the halo spin parameters, and is comparable to the observed scatter (McGaugh \& de Blok 1997).
The preheating model predicts that the gas mass fraction decreases with stellar mass, in qualitative agreement with the observations (McGaugh \& de Blok 1997; Garnett 2002). As already demonstrated in van den Bosch (2000), this success is a direct consequence of implementing a critical surface density for star formation. Note that the model predicts gas fractions that are slightly lower than those observed. However, given the uncertainties involved, both in the data and in the model, and given the relatively large amount of scatter, we do not consider this a serious shortcoming. As a final test of the preheating model we consider gas metallicities. The higher gas mass fractions in smaller haloes imply that the metallicity of the cold gas must be lower in lower mass systems, which is consistent with observations (Garnett 2002). However, observations also show that the {\it effective} metal yield decreases with galaxy luminosity (Garnett 2002; Tremonti \etal 2004), suggesting that some metals generated by stars must have been ejected from the galaxies and that the ejected fraction is larger for fainter galaxies. This may seem problematic for the preheating model considered here, since we require no outflows to match the HI and the stellar mass functions. However, this does not exclude the possibility that significant amounts of {\it metals} have been ejected from low-mass galaxies. In fact, as the numerical simulations of Mac-Low \& Ferrara (1999) demonstrate, supernova feedback in quiescent disk galaxies is far more efficient at ejecting metals than mass. The reason for this is that the metals are largely produced by the supernovae themselves, so they are part of the hot bubbles of tenuous gas that make up the galactic winds. When these bubbles rupture owing to Rayleigh-Taylor instabilities, this strongly metal-enriched material, which has relatively little mass, is blown away from the disk by its own pressure.
Thus, although clearly more work is needed to test this in detail, we believe that the observed effective metal yields are not at odds with the preheating model. \section{Preheating by gravitational pancaking} \label{sec:preheating} \begin{figure*} \centerline{\psfig{figure=figure3.ps,width=\hdsize}} \caption{The mass [panel (a)], overdensity [panel (b)], gas temperature [panel (c)], and gas specific entropy [panel (d)] of pancakes around low-mass haloes at the time of first axis collapse. Thick solid curves assume that perturbations around low-mass haloes have $e$ values equal to the most probable values corresponding to the mass scale in consideration, while thick dashed curves show the results in which $e$ is assumed to be a constant, i.e. independent of pancake mass. The thin lines in panel (c) show the loci of $t_{\rm cool} = t_{\rm H}$ for the three indicated values of the overdensity.} \label{fig:entropy} \end{figure*} In the previous section we have shown that a preheating model, in which the gas surrounding present-day low mass haloes is preheated to a specific entropy of $s\sim 10\,{\rm keV\,\cm^2}$, can simultaneously match the low mass ends of both the HI-mass function and the stellar mass function. Here we propose a physical process that can cause such preheating. We base our proposed model on the following considerations. In the basic picture of structure formation in CDM cosmogonies, as the universe expands, larger and larger objects collapse owing to gravitational instability. The collapse is generically aspherical (Zel'dovich 1970), first forming sheet-like pancakes (first axis collapse), followed by filamentary structures (second axis collapse), and eventually virialized dark matter haloes (third axis collapse). Thus, according to the ellipsoidal collapse model, the formation of a virialized halo requires the collapse of all three axes (e.g. Bond \& Myers 1996; Sheth, Mo \& Tormen 2001).
However, owing to the large scale tidal field, the density threshold for the formation of a low-mass halo, i.e. one with a mass below the characteristic mass, $M_*$, defined as the mass at which the rms fluctuation is equal to unity, can be much higher than that in the spherical collapse model. This delays the formation of low mass haloes relative to the prediction of the spherical collapse model. Conversely, the tidal field accelerates the collapse of the first (shortest) axis and hence the formation of a pancake can require a density threshold much lower than that for spherical collapse. Consequently, many of today's low mass haloes, i.e. those with $M \ll M_*(z=0)\sim 10^{13}h^{-1}\msun$, formed in pancakes of larger mass, which formed before the haloes themselves. In the process of pancake formation, the gas associated with the pancake is shock heated. If the temperature of the shocked gas is sufficiently high, and if the gas is not able to cool in a Hubble time, the haloes embedded in the pancakes will have to accrete their gas from a preheated medium, which as we showed in the previous section may have important implications for the formation of galaxies within those haloes. To see if this process of ``gravitational pancaking'' (or previrialization, Peebles 1990) can generate a preheated medium with the required specific entropy, we need to examine the properties of the pancakes within which present-day low mass haloes formed, and to understand how the gas associated with these pancakes was shock heated. To study the first problem it is important to realize that the bias parameters of haloes with $M\la 0.1 M_*$ have similar values (Mo \& White 1996; 2002; Jing 1999; Sheth \& Tormen 1999; Sheth, Mo \& Tormen 2001; Seljak \& Warren 2004; Tinker \etal 2004). This means that all such haloes are, in a statistical sense, embedded within similar large-scale environments. 
At $z=0$, $M_\star\sim 10^{13} h^{-1}\msun$, and so all haloes with $M\la 10^{12} h^{-1}\msun$ are embedded within similar environments. According to the ellipsoidal collapse model, the collapse of a region on some mass scale $M$ in the cosmic density field is specified by $\delta$, the average overdensity of the region in consideration, and by $e$ and $p$, which express the ellipticity and prolateness of the tidal shear field in the neighborhood of that region. According to Sheth \etal (2001), the density threshold for the formation of a virialized halo is given by solving \begin{equation} {\delta_{\rm ec}(e,p)\over\delta_{\rm sc}} = 1 + \beta\left[5(e^2\pm p^2) {\delta_{\rm ec}^2(e,p) \over \delta_{\rm sc}^2} \right]^\gamma\,, \end{equation} where $\beta=0.47$, $\gamma=0.615$, and $\delta_{\rm sc}$ is the critical overdensity for spherical collapse. For a Gaussian density field, the joint distribution of $e$ and $p$ for a given $\delta$ is \begin{equation} g(e,p\vert\delta) = {1125\over\sqrt{10\pi}} e\left(e^2-p^2\right) \left({\delta\over\sigma}\right)^5 e^{-5\delta^2(3e^2+p^2)/2\sigma^2}\,, \end{equation} where $\sigma$ is the rms fluctuation of the density field on the mass scale in consideration (Doroshkevich 1970). For all $e$, this distribution peaks at $p=0$, and the maximum occurs at \begin{equation} \label{e_m} e_{\rm max}(p=0\vert\delta) = {\sigma \over \sqrt{5} \delta}\,. \end{equation} Thus, the most probable value of $e$ is related to the mass scale through $\sigma$. Using this relation, one can obtain a relation between the density threshold for collapse and the halo mass: \begin{equation} \delta_{\rm ec}(M,z) = \delta_{\rm sc}(z) \left\{1+0.47 \left[{\sigma^2\over \delta_{\rm sc}^2(z)}\right]^{0.615} \right\}\, \end{equation} (Sheth \etal 2001). The ellipsoidal collapse model can also be used to determine the density threshold for the formation of pancakes.
Based on similar considerations, one can obtain a corresponding relation between the collapse density threshold and the mass of the pancake: \begin{equation} \delta_{\rm ec}(M, z) = \delta_{\rm sc}(z) \left\{1-0.56 \left[{\sigma^2\over \delta_{\rm sc}^2(z)}\right]^{0.55} \right\}\, \end{equation} (Shen, Abel, Mo \& Sheth, in preparation). As one can see, for low peaks (i.e. $\delta_{\rm sc}/\sigma\ll 1$), the two thresholds can be very different, while for high peaks they are similar. Thus, the effect of pancaking on subsequent halo formation is more important for lower peaks, i.e. for lower mass haloes with later formation times. Given the above properties of the collapse thresholds, we are able to address the following question: for low-mass haloes identified at the present time, what is the nature of the pancakes within which their progenitor haloes were embedded at an earlier time? To quantify the formation of pancakes around a given halo rigorously, one needs to calculate the conditional probability distribution for the overdensity of density perturbations on various scales around the halo and the corresponding tidal shear fields. It is beyond the scope of this paper to present such a detailed analysis here. Instead, we use the cross-correlation between haloes and the linear density field to determine the average linear overdensity around dark matter haloes on different scales. We then use this overdensity to characterize, at earlier times, the mean environment of the progenitors of haloes of a given present-day mass.
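The two collapse barriers quoted above are simple enough to evaluate directly. The sketch below does so with $e$ and $p$ fixed at their most probable values, as in the equations; the Einstein-de Sitter value $\delta_{\rm sc}=1.686$ is assumed purely for illustration.

```python
# Sketch of the two ellipsoidal-collapse barriers: third-axis (halo) collapse
# (Sheth, Mo & Tormen 2001) and first-axis (pancake) collapse, with e and p at
# their most probable values. delta_sc = 1.686 (EdS) is an assumed fiducial value.

DELTA_SC = 1.686

def delta_halo(sigma, delta_sc=DELTA_SC):
    # barrier for the formation of a virialized halo
    return delta_sc * (1.0 + 0.47 * (sigma**2 / delta_sc**2)**0.615)

def delta_pancake(sigma, delta_sc=DELTA_SC):
    # barrier for first-axis (pancake) collapse
    return delta_sc * (1.0 - 0.56 * (sigma**2 / delta_sc**2)**0.55)

# High peaks (sigma << delta_sc): the two barriers converge; low peaks
# (sigma comparable to delta_sc): pancake formation precedes halo formation.
for sigma in (0.2, 1.0, 2.0):
    print(sigma, delta_halo(sigma), delta_pancake(sigma))
```

The widening gap between the two barriers at large $\sigma$ is the quantitative content of the statement that pancaking matters most for low peaks, i.e. for low-mass haloes with late formation times.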
According to the halo bias model (Mo \& White 1996), the average linear overdensity within a radius $r$ of a halo of mass $M$ can be written as \begin{equation} \label{bias} {\overline \delta}_l(r) = b(M) {\overline\xi}_{\rm m}(r)\,, \end{equation} where $b(M)$ is the bias parameter for haloes of mass $M$, and ${\overline\xi}_{\rm m}(r)$ is the average two-point correlation function of the linear density field, \begin{equation} \label{averxi} {\overline\xi}_{\rm m}(r) \equiv {3\over r^3} \int_0^r \xi_{\rm m}(r') r'^2 {\rm d}r'\,, \end{equation} where $\xi_{\rm m}(r)$ is the two-point correlation function. The radius $r$ corresponds to a mass scale $M_r= (4\pi r^3/3) {\overline\rho}_0$, where ${\overline\rho}_0$ is the mean density of the universe at $z=0$. As we mentioned above, the bias parameter for present day haloes with $M \la 10^{12} h^{-1}\msun$ is independent of halo mass, with $b\sim 0.65$ (Sheth \etal 2001; Jing 1999; Seljak \& Warren 2004; Tinker \etal 2004). We adopt this value to calculate ${\overline\delta}_l$. It is then straightforward to calculate the linear overdensity ${\overline \delta}_l$ as a function of $r$ and the corresponding mass scale $M_r$. If this overdensity reaches the overdensity threshold for pancake formation at a given redshift, a pancake of mass $M_r$ will form. Since the overdensity threshold depends on the values of $p$ and $e$, in principle one has to calculate the joint distribution function of $e$ and $p$ for the appropriate mass and overdensity, under the condition that the region contains a halo of mass $M$ at the present time. As an approximation, we assume that both $e$ and $p$ take their most probable values on the mass scale in question, so that $p=0$ and $e=e_{\rm max}$. We expect this approximation to be valid for $M_r\gg M$, where the correlation between the properties of the halo and its environment becomes weak (e.g. Bardeen et al. 1986).
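For illustration, the volume average in equation~(\ref{averxi}) and the mean overdensity of equation~(\ref{bias}) can be evaluated numerically. The power-law $\xi_{\rm m}(r)=(r/r_0)^{-\gamma}$ and the numbers $\gamma=1.8$, $r_0=5\,h^{-1}$Mpc used below are illustrative assumptions, not the linear correlation function actually used in the text; the power law is convenient because it admits the analytic check $\overline\xi = [3/(3-\gamma)](r/r_0)^{-\gamma}$.

```python
# Numerical volume-averaged correlation function (eq. averxi) and mean linear
# overdensity around a halo of bias b (eq. bias). The power-law xi and the
# parameter values are illustrative assumptions only.

def xibar(r, xi, n=20000):
    """(3/r^3) * Integral_0^r xi(r') r'^2 dr', midpoint rule."""
    dr = r / n
    total = 0.0
    for i in range(n):
        rp = (i + 0.5) * dr
        total += xi(rp) * rp * rp * dr
    return 3.0 * total / r**3

def mean_overdensity(r, b, xi):
    # eq. (bias): average linear overdensity within radius r of a halo with bias b
    return b * xibar(r, xi)

gamma, r0 = 1.8, 5.0                 # slope and correlation length [h^-1 Mpc]; assumed
xi = lambda r: (r / r0)**(-gamma)

print(mean_overdensity(10.0, 0.65, xi))  # b = 0.65 as adopted in the text
```

The midpoint rule is used because the integrand $\xi(r')r'^2 \propto r'^{2-\gamma}$ is integrable but not differentiable at the origin for this power law.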
For a given ${\overline\delta}_l$, $M_r$, $p$ and $e$, we use the ellipsoidal collapse model described in Bond \& Myers (1996) to follow the collapse of a uniform ellipsoid embedded in an expanding background along all three axes, taking into account both the ellipsoid's self-gravity and the external tidal field. Following Bond \& Myers, we assume that each axis freezes out at a constant fraction of its initial radius, so that the mean overdensity of the collapsed object is just the same as that in the spherical collapse model (see Bond \& Myers 1996 for details). The solid curve in Fig.~\ref{fig:entropy}(a) shows the mass of the pancake that forms around present day low-mass haloes, as a function of $z$. Remember that at earlier times these haloes have smaller masses. Also, owing to the roughly constant bias parameter, these results are almost independent of halo mass for $M \lta 10^{12} h^{-1} \Msun$. As one can see, the pancake mass decreases with redshift, because the overdensity threshold for collapse is higher at higher $z$. At $z\sim 2$, the pancake mass is about $10^{12.5}h^{-1}\msun$. The solid curve in Fig.~\ref{fig:entropy}(b) shows the overdensity of the pancake at the time of formation. This overdensity increases with $z$, and is about 10 for $z=1$ - $2$. The ellipsoidal collapse model also determines the velocity along the first axis at the time of pancake formation. The gas associated with the pancake will be shocked. If we assume an adiabatic equation of state and that the shock is strong, all the kinetic energy is transformed into internal energy. We can calculate the post-shock gas temperature as \begin{equation} T={3\mu m_{\rm p} V_1^2\over 16 k_B}\,, \end{equation} where $k_B$ is Boltzmann's constant, $m_{\rm p}$ is the proton mass, $\mu$ is the mean molecular weight, and $V_1$ is the velocity along the first axis at the time of shell crossing. We plot this temperature, as a function of $z$, as the thick solid curve in Fig.~\ref{fig:entropy}(c).
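The strong-shock temperature is easy to evaluate; a minimal sketch, with the mean molecular weight entering in mass units ($\mu m_{\rm p}$, $\mu=0.6$ dimensionless as adopted below). The example shock velocity of $150\,\kms$ is an illustrative assumption, not a value quoted in the text.

```python
# Post-shock temperature for a strong adiabatic shock, T = 3 mu m_p V1^2 / (16 k_B),
# with mu the dimensionless mean molecular weight (0.6 for a fully ionized
# H/He plasma). The 150 km/s example velocity is an illustrative assumption.

K_B = 1.380649e-23    # Boltzmann constant [J/K]
M_P = 1.67262192e-27  # proton mass [kg]

def postshock_temperature(v1_kms, mu=0.6):
    v1 = v1_kms * 1.0e3  # km/s -> m/s
    return 3.0 * mu * M_P * v1**2 / (16.0 * K_B)

print(postshock_temperature(150.0))  # a few times 10^5 K
```

Since $T \propto V_1^2$ and $V_1$ grows with the pancake mass, the declining pancake mass at higher $z$ directly produces the declining temperature shown in Fig.~\ref{fig:entropy}(c).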
The temperature decreases with increasing redshift because the mass of the pancake, $M_p$, is smaller at higher $z$ and $V_1 \sim H_0 R_p \propto M_p^{1/3}$ where $R_p$ is the Lagrangian radius of the pancake. At $z\sim 2$, the temperature is about $10^{5.5}{\rm K}$. Assuming that the gas overdensity is the same as that of the dark matter, we can estimate the specific entropy generated in the shock as \begin{eqnarray} s = {T\over n_e^{2/3}} & = & 17 \times \left({\Omega_{\rm B,0}h^2\over 0.024}\right)^{-2/3} \left({T\over 10^{5.5}{\rm K}}\right)\nonumber\\ & & \times \left({1+\delta\over 10}\right)^{-2/3} \left({1+z\over 3}\right)^{-2} {\rm keV}\cm^2\,, \end{eqnarray} where we have taken $\mu=0.6$, valid for a completely ionized medium dominated by hydrogen and helium. As shown by the solid curve in Fig.\,\ref{fig:entropy}(d), $s$ increases rapidly with decreasing redshift, mainly owing to the decreasing gas density. At $z \sim 2$ the resulting entropy is $s \sim 15\,{\rm keV}\cm^{2}$. For $z \lta 2.5$, pancake formation results in a preheated IGM with $s \gta 10\,{\rm keV}\cm^{2}$, which corresponds to the value adopted in the preheating model discussed in the previous section. The results presented here are based on the assumption that both $e$ and $p$ (which specify the local tidal field) take their most probable values. In reality, the tidal field around a point in a Gaussian density field must be coherent over a finite scale. Since the low-mass haloes at $z=0$ are low peaks, typically with large values of $e$, it is possible that the value of $e$ for the region around such a halo is larger than the most probable value. Without going into detailed calculations that include correlations of the tidal field on different scales, we consider an extreme case in which we set $e=0.45$ for all $M_r$. This value for $e$ is approximately equal to that for a peak with $\delta/\sigma=1$.
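The entropy scaling in the eqnarray above can be wrapped in a short numerical sketch; the normalization and exponents are taken directly from that equation, with the fiducial values $T=10^{5.5}$K, $1+\delta=10$, $z=2$.

```python
# Specific-entropy scaling s = T / n_e^(2/3), normalized so that T = 10^5.5 K,
# 1 + delta = 10, z = 2 and Omega_B h^2 = 0.024 give s = 17 keV cm^2,
# as in the equation above.

def specific_entropy_kev_cm2(T, delta, z, omega_b_h2=0.024):
    return (17.0
            * (omega_b_h2 / 0.024)**(-2.0 / 3.0)
            * (T / 10**5.5)
            * ((1.0 + delta) / 10.0)**(-2.0 / 3.0)
            * ((1.0 + z) / 3.0)**(-2.0))

# s rises towards lower z, mainly because the gas density drops as (1+z)^3.
for z in (4.0, 3.0, 2.0, 1.0):
    print(z, specific_entropy_kev_cm2(10**5.5, 9.0, z))
```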
The thick dashed curves in the four panels of Fig.~\ref{fig:entropy} show the results for this extreme model. Because the assumed value of $e$ is larger than the most probable value, a given mass pancake forms earlier and correspondingly the overdensity of the pancake is lower. However, for $z\la 3$, the change in $T$ is less than $50\%$ and the change of $s$ is less than a factor of two. The preheating by gravitational pancaking is expected to have important consequences for the subsequent accretion of gas into dark matter haloes only if the heated gas cannot cool efficiently. The cooling time of the heated gas can be written as \begin{eqnarray} t_{\rm cool} & \sim & 6.3 \, \Lambda_{-23}^{-1} \, \left({\Omega_{\rm B,0} h^2\over 0.024}\right)^{-1} \left({T\over 10^{5.5}{\rm K}}\right) \nonumber\\ & & \times \left( {1+\delta\over 10} \right)^{-1} \left({1+z\over 3}\right)^{-3} \,{\rm Gyr}\,, \end{eqnarray} where $\Lambda_{-23}$ is the cooling function in units of $10^{-23} \erg \sec^{-1} \cm^3$. This time should be compared with the Hubble time,\footnote{Our notation is such that ${\cal H}(z)=1$ for an Einstein-de Sitter Universe, which is also valid to good approximation for the $\Lambda$CDM concordance cosmology at $z \gta 1$.} \begin{equation} t_{\rm H} = 5.0 \left({h\over 0.7}\right)^{-1} \left({\Omega_0\over 0.3}\right)^{-1/2} \left({1+z\over 3}\right)^{-3/2} {\cal H}(z)\,{\rm Gyr}\,, \end{equation} where ${\cal H}(z)=\Omega_0^{1/2} (1+z)^{3/2} H_0/H(z)$ and $H(z)$ is Hubble's constant at redshift $z$. In Fig.~\ref{fig:entropy}(c), the three thin curves show the loci of $t_{\rm cool}=t_{\rm H}$ in the $T$-$z$ plane for $\delta=5$, 10, and 20, respectively. Here we have used the cooling function of Sutherland \& Dopita (1993) with a metallicity of $0.01 Z_\odot$. Effective radiative cooling only occurs below these curves. 
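The competition between $t_{\rm cool}$ and $t_{\rm H}$ can be checked numerically from the two scaling relations above. The sketch assumes $\Lambda_{-23}=1$, ${\cal H}(z)\approx 1$, $h=0.7$, $\Omega_0=0.3$, and, for simplicity, holds $T$ fixed at $10^{5.5}$K at all redshifts (in the text $T$ is lower at higher $z$, which only makes high-$z$ cooling faster still).

```python
# Cooling time vs. Hubble time, from the two scaling relations above.
# Assumptions: Lambda_23 = 1, calH(z) ~ 1, h = 0.7, Omega_0 = 0.3, and
# T held at 10^5.5 K at all z for simplicity.

def t_cool_gyr(T, delta, z, lam23=1.0, omega_b_h2=0.024):
    return (6.3 / lam23 / (omega_b_h2 / 0.024)
            * (T / 10**5.5)
            / ((1.0 + delta) / 10.0)
            * ((1.0 + z) / 3.0)**(-3.0))

def t_hubble_gyr(z, h=0.7, omega0=0.3):
    return 5.0 * (h / 0.7)**(-1.0) * (omega0 / 0.3)**(-0.5) * ((1.0 + z) / 3.0)**(-1.5)

# At z ~ 2 the shocked pancake gas cannot cool within a Hubble time,
# while at z ~ 4 cooling wins: the transition described in the text.
for z in (4.0, 2.0):
    print(z, t_cool_gyr(10**5.5, 9.0, z), t_hubble_gyr(z))
```

Because $t_{\rm cool}\propto (1+z)^{-3}$ falls faster than $t_{\rm H}\propto (1+z)^{-3/2}$, cooling inevitably wins at sufficiently high redshift, which is why the preheating only becomes effective at $z\la 2$--$2.5$.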
Comparing these curves with those showing the temperature of the gas in pancakes, we see that heating by gravitational pancaking at $z\la 2$ can have a significant impact on the subsequent accretion of gas into haloes that will be low mass today. At higher redshifts, however, the cooling proceeds sufficiently fast so that the IGM within the pancake can cool back to its original temperature by the time the low mass haloes within the pancake form. Once again, recall that the specific entropy generated by gravitational pancaking at $z\sim 2$ is very similar to what one needs to suppress the cold gas fraction in low-mass haloes (see \S\ref{sec:coldgas}). Thus we conclude that the preheating model discussed in the previous section, which is extremely successful at explaining both the stellar and HI mass functions, has a natural origin. One does not need to invoke any star formation or AGN activity; rather, the IGM is preheated to the required specific entropy by the same process that explains the formation of the dark matter haloes themselves. \section{Discussion} \label{sec:GF} \subsection{Comparison with numerical simulations} \label{ssec:simulations} In the previous sections we have argued that preheating by gravitational pancaking causes the shapes of the HI and stellar mass functions at the low mass ends to agree with observations. This raises the question of why this effect has not already been seen in existing hydrodynamical, cosmological simulations. The short answer is that no previous simulation had the necessary resolution. Studying the effects discussed above places two different resolution constraints on simulations. First, they must be able to resolve the shocks that occur in the forming pancakes, and second, they must resolve the small galaxies that form within them. The typical pancake thickness is $\sim 200 h^{-1}\kpc$ in comoving units, with an overdensity of about 10.
In a Smoothed Particle Hydrodynamics (SPH) code, resolving a shock requires $4h_{\rm s}$ (Hernquist \& Katz 1989), where $h_{\rm s}$ is the SPH smoothing radius, usually chosen so that there are 32 particles within a sphere of radius $2h_{\rm s}$. Since the pancake has a shock on both sides, the absolute minimum resolution has to be at least $8h_{\rm s}$ across the pancake thickness, i.e. $h_{\rm s} \la 25 h^{-1}\kpc$. Even a shock-capturing Eulerian grid or AMR code requires at least 4 grid cells across the pancake width to resolve the post-shock structure. The typical galaxy that needs to be affected has a dark halo with a circular velocity of $\sim 50 \kms$, which corresponds to a halo mass of $\sim 10^{10} \Msun$ and a virial radius of $\sim 50 h^{-1} \kpc$. To marginally resolve such haloes requires at least 100 dark matter particles and hence a particle mass for the dark matter of less than $10^8 \Msun$. In addition, one must have at least 2 spatial resolution elements within the virial radius and hence the spatial resolution must be at least $25 h^{-1} \kpc$. In an Eulerian code both these spatial resolution requirements are identical, but in a Lagrangian code like SPH, where the spatial resolution is variable, the halo requirement is actually $(200/10)^{1/3}=2.7$ times easier to satisfy owing to the higher overdensity. The above resolutions have not been achieved in any published numerical simulation. The highest resolution simulation to date at $z=0$ (Keres \etal 2005) has $128^3$ gas and dark matter particles in a periodic, cubic volume of $22.22 h^{-1}$ comoving Mpc on a side. At an overdensity of 10 this simulation has $h_{\rm s} = 81 h^{-1}$ kpc, more than 3 times too large to properly resolve the pancake thickness. In addition, the dark matter particle mass of $9\times 10^8\Msun$ is almost an order of magnitude too large. Furthermore, the volume is too small to evolve the simulation down to $z=0$ and to sample enough different environments.
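The particle-count requirement implied by these constraints can be sketched as a back-of-the-envelope check: demanding $h_{\rm s}\la 25\,h^{-1}\kpc$ (32 neighbours within $2h_{\rm s}$) in gas at overdensity $\sim 10$, in a $(100\,h^{-1}{\rm Mpc})^3$ box, fixes the required mean particle density.

```python
import math

# Back-of-the-envelope check: SPH particles per dimension needed in a
# (100 Mpc/h)^3 box so that the smoothing length h_s stays below 25 kpc/h in
# gas at overdensity ~10, with 32 neighbours within 2 h_s as assumed in the text.

def particles_per_dimension(box_mpc=100.0, h_s_kpc=25.0, n_neigh=32, overdensity=10.0):
    r = 2.0 * h_s_kpc / 1000.0                        # neighbour-sphere radius [Mpc/h]
    n_local = n_neigh / (4.0 / 3.0 * math.pi * r**3)  # particle density in the pancake
    n_mean = n_local / overdensity                    # corresponding mean density
    return (n_mean * box_mpc**3) ** (1.0 / 3.0)

print(particles_per_dimension())  # roughly 1.8e3 per dimension
```

The result is close to the $1860^3$ particle count quoted below for a uniform $100\,h^{-1}$Mpc box.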
Springel \& Hernquist (2003) have an SPH simulation in a large enough volume, $100 h^{-1}$ Mpc on a side with $324^3$ gas and dark matter particles. However, both the spatial and mass resolution are worse than in the case of Keres \etal (2005), $143 h^{-1}$ kpc and $2\times 10^9 \msun$, respectively. Their latest unpublished simulation has $486^3$ particles but still has worse spatial resolution ($96 h^{-1}$ kpc) than Keres \etal (2005). In addition, the particle mass is $6\times 10^8 \Msun$, which is still six times too large. The Eulerian simulation of Kang \etal (2005b) has $1024^3$ cells in the same size volume as Springel \& Hernquist (2003) for a grid cell size of $97 h^{-1}$ kpc, again worse than Keres \etal (2005). Finally, Nagamine \etal (2001) have $768^3$ cells within a volume of $25 h^{-1}$ comoving Mpc on a side with a spatial resolution of $75 h^{-1}$ kpc. To perform a cosmological SPH simulation in a uniform periodic volume large enough to evolve robustly to $z=0$, i.e. $100 h^{-1}$ Mpc on a side, and marginally resolve preheating, as outlined above, would require $1860^3$ dark matter and gas particles. Such a large simulation is well beyond the reach of the current generation of computers and codes. However, one can reach such high resolutions in a large volume by using a ``zoom-in'' strategy, in which the resolution is only high in the region of interest (Katz \& White 1993). Here, one starts with a large volume simulated at moderate resolution, identifies the object whose formation one wishes to study, traces the particles that end up in or near this object back to the initial conditions, replaces the particles in this Lagrangian region with a finer grid of less massive particles, adds the small scale waves that can be resolved by this finer grid to the initial fluctuation spectrum, and re-runs the simulation.
The particle density away from the region of interest is reduced by sparse sampling the original particle grid in a series of nested zones, always keeping the particle density high enough to maintain an accurate representation of tidal forces. This approach is similar to the one used by other groups (e.g. Navarro \& Steinmetz 1997, 2000; Steinmetz \& Navarro 1999; Navarro \etal 1995; Sommer-Larsen \etal 1999; Robertson \etal 2004; Governato \etal 2004) in their simulations of individual galaxies, but here one needs to focus on present day low mass galaxies that form within pancakes at higher redshift. To test our predictions regarding the preheating by gravitational pancaking we plan to carry out investigations along this line in the near future. \subsection{Implications for galaxy formation and the IGM} According to the results presented above, the assembly of gas into galaxy-sized haloes is expected to proceed in two different modes with a transition at $z\sim 2$. At $z>2$, the accreted intergalactic gas is cold. Since radiative cooling is efficient in galaxy haloes, gas assembly into galaxies is expected to be rapid and to be dominated by clumps of cold gas. Combined with the fact that the formation of galaxy-sized dark matter haloes at $z\ga 2$ is dominated by major mergers (e.g. Li \etal 2005), this suggests that during this period gas can collapse into haloes quickly to form starbursts and perhaps also feed active galactic nuclei (e.g. Baugh \etal 2005). Galactic feedback associated with such systems may drive strong winds into the IGM, contaminating the IGM with metals. Note that strong feedback in this phase is required to reduce the star formation efficiency in high-$z$ galaxies; otherwise too much gas would already turn into stars at high $z$. This phase of star formation may be what is observed as Lyman-break galaxies and sub-mm sources at $z\ga 2$, and may be responsible for the formation of elliptical galaxies and the bulges of spiral galaxies.
At $z\la 2$, however, the situation is quite different. Since the medium in which galaxy-sized haloes form is already heated by previrialization and since radiative cooling is no longer efficient, the accretion is expected to be dominated by hot, diffuse gas. Such gentle accretion of gas might be favorable for the formation of the quiescent disks of spiral galaxies. Because the accreted gas is diffuse rather than in cold clumps, it can better retain its angular momentum as it settles into a rotationally supported disk. In addition, since the baryonic mass fraction that forms the disk is significantly smaller than the universal baryon fraction, the disk is less likely to become violently unstable. Both effects may help alleviate the angular momentum problem found in some numerical simulations, i.e. disks that form in CDM haloes have too little angular momentum and are too concentrated (Navarro \& Steinmetz 1997). Depending on the halo formation history, the bulge-to-disk ratio will vary from system to system. For haloes that have assembled large amounts of mass before preheating, the galaxies that form within them should contain significant bulges, while in haloes that form after preheating the galaxies should be dominated by a disk component. Since haloes that form later are less concentrated, this model naturally explains why later-type galaxies usually have more slowly rising rotation curves. An extreme example would be low-surface brightness (LSB) galaxies. By definition, LSB galaxies are disks in which the star formation efficiency is low. These galaxies are also gas rich, have high specific angular momenta, and show slowly rising rotation curves. The last two properties are best explained if LSBs are hosted by haloes that have formed only in the recent past, since such haloes have low concentrations (e.g. Bullock \etal 2001; Wechsler \etal 2002; Zhao \etal 2003a,b), and high spin parameters (e.g. 
Maller, Dekel \& Somerville 2002; D'Onghia \& Burkert 2004; Hetznecker \& Burkert 2005). However, there is one problem with such a link between LSBs and newly formed dark matter haloes. Since the formation of such haloes involves major mergers, these systems are expected to produce strong starbursts rather than low-surface brightness disks. This problem does not exist in our model since these haloes are expected to form in a preheated medium and gas accretion into such haloes will be smooth. It will be interesting to see if our model can predict the right number of LSB galaxies and explain the existence of extreme systems like Malin I. Our model also opens a new avenue to understand the evolution of the galaxy population with redshift. As we argued above, star formation at $z>2$ is expected to be dominated by starbursts associated with major mergers of gas rich systems, while star formation at $z<2$ is expected to occur mostly in quiescent disks. This has important implications for understanding the star formation history of the universe and for understanding the evolution of the galaxy population in general. For example, our model implies a characteristic redshift, $z\sim 2$, where both the star formation history and galaxy population make a transition from a starburst-dominated mode to a more quiescent mode. Furthermore, if AGNs are driven by gas-rich major mergers, a transition at $z\sim 2$ is also expected in the AGN population. There are some hints of such transitions in the observed star formation history (e.g. Blain \etal 1999), and in the observed number density of AGNs (e.g. Shaver \etal 1996). Recent observations of damped Lyman alpha systems also suggest a change in behavior at $z\sim 2$ both in the cold gas content and in the number density of such systems (see Rao 2005). The preheated medium we envision here is closely related to the warm-hot intergalactic medium (WHIGM) under intensive study in recent years.
Hydrodynamical simulations show that between 30 and 40 percent of all baryons reside in this WHIGM, which is produced by shocks associated with gravitational collapse of pancakes and filaments (e.g. Cen \& Ostriker 1999; Dave \etal 2001; Kang \etal 2005b). These results are consistent with observational estimates based on the study of UV absorption lines in the spectra of low-redshift QSOs (see Tripp \etal 2004 for a review). In our model, the low-temperature component of this WHIGM, i.e., that with temperatures of a few times $10^5$K, is associated with pancakes of relatively low mass ($\sim 5 \times 10^{12} h^{-1} \Msun$), within which late-type galaxies form. As we discussed in \S\ref{ssec:simulations}, current simulations are still unable to make accurate predictions regarding this particular component of the WHIGM. Future simulations of higher resolution, however, may shed light on the relation between the properties of galaxies and those of the IGM in their immediate surroundings. Such a relationship may prove pivotal in observational hunts for the missing baryons, as the spatial distribution of galaxies and their properties can serve as guideposts in the search for the WHIGM. \section{Conclusions} \label{sec:concl} Understanding the shallow faint-end slope of the galaxy luminosity function, or equivalently the stellar mass function, is a well known problem in galaxy formation. In the standard model it is often assumed that supernova feedback keeps large fractions of the baryonic material hot, thus suppressing the amount of star formation. Although the efficiency of this process might be tuned so that one fits the faint-end slope of the galaxy luminosity function, it has a number of problems. First, the required efficiencies are extremely high and are inconsistent with detailed numerical simulations (e.g., Mac-Low \& Ferrara 1999).
Second, unless the hot gas is somehow expelled from the dark matter halo forever, which requires even higher feedback efficiencies, this model predicts hot haloes of X-ray emitting gas around disk galaxies, which is inconsistent with observations (e.g., Benson \etal 2000). In this paper we have demonstrated that this model has an additional shortcoming. Using recently obtained HI mass functions we show that the supernova feedback model predicts HI masses that are too high. The reason is that supernova feedback requires star formation, which in turn requires high surface densities of cold gas. The latter is a consequence of the existence of a star formation threshold density, which has strong support from both theory (e.g., Quirk 1972; Silk 2001) and observations (e.g., Kennicutt 1989; Martin \& Kennicutt 2001). We therefore argue that simultaneously matching the shallow, low-mass slopes of the stellar and HI mass functions requires an alternative mechanism, one that does not directly depend on star formation. We demonstrate that a mechanism that can preheat the IGM to a specific gas entropy of $\sim 10\,{\rm keV\,cm^{2}}$ can fit both the observed stellar mass function and the HI mass function. We also show that gravitational instability of the cosmic density field can be the source of this preheating. Our idea is fairly simple: low mass haloes form within larger-scale overdensities that have already undergone collapse along their first axis. This `pancake' formation causes the associated gas to be shock-heated, and provided that the gas cooling rate is sufficiently low, the low mass haloes embedded within these pancakes subsequently form in a preheated medium. Using the ellipsoidal collapse model, we demonstrate that the progenitors of present-day haloes with masses $M\la 10^{12}h^{-1}\msun$ were embedded in pancakes of masses $\sim 5\times 10^{12}h^{-1}\msun$ at $z\sim 2$.
The formation of such pancakes can heat the gas associated with them to a temperature of $\sim 5\times 10^5{\rm K}$ and compress it to an overdensity of $\sim 10$. This gas has a cooling time longer than the age of the Universe and a specific entropy of about $15\,{\rm keV\,cm^{2}}$, which is the amount needed to explain the observed stellar and HI mass functions. Our results demonstrate that heating associated with previrialization may also help solve a number of outstanding problems in galaxy formation within a CDM universe. However, detailed, high-resolution numerical simulations will be required to test our predictions in detail. Such simulations will help us understand how the formation of a pancake heats the gas initially associated with it and whether the amount of heating follows our analytic results. For example, our calculation indicates that shock heating will dominate over cooling in typical pancakes at $z \la 2$. However, such calculations assume that the gas is smoothly distributed. Structures at scales smaller than the pancake, whether pancakes, filaments or haloes, could lead to density inhomogeneities, and these could promote extra cooling and move the transition to lower redshifts. A similar effect occurs when one calculates the transition mass from cooling dominated to infall dominated accretion in galaxy formation (White \& Frenk 1992) and compares it with actual simulations (Keres \etal 2005). Such simulations will also help us understand how this heating affects subsequent gas accretion and cooling into the dark haloes that form in the pancake, and hopefully allow us to derive actual stellar and HI mass functions at the small mass end. We have argued that no cosmological, hydrodynamical simulation to date has the required spatial and/or mass resolution to study the pancake preheating proposed here.
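The entropy quoted above can be checked at the order-of-magnitude level with a few lines. The present-day mean baryon density and the use of the total baryon number density in $S = kT/n^{2/3}$ are our own illustrative assumptions, not values taken from this paper; the temperature, redshift and overdensity are the ones quoted in the text.

```python
# Rough order-of-magnitude check of the quoted pancake entropy.
# Illustrative assumptions (ours, not the paper's): present-day mean
# baryon number density n0, and S = kT / n^(2/3) with the total baryon
# density in place of the electron density.
K_BOLTZ_KEV = 8.617e-8   # Boltzmann constant [keV/K]

T = 5.0e5                # pancake gas temperature [K], from the text
z = 2.0                  # formation redshift, from the text
overdensity = 10.0       # gas overdensity, from the text
n0 = 2.5e-7              # assumed mean baryon density today [cm^-3]

n = n0 * (1.0 + z) ** 3 * overdensity       # gas density at z ~ 2 [cm^-3]
S = K_BOLTZ_KEV * T / n ** (2.0 / 3.0)      # specific entropy [keV cm^2]

# Should land in the ~10 keV cm^2 range invoked in the text.
assert 5.0 < S < 50.0
```

Within the (large) uncertainty of these assumptions, the result lands in the $\sim 10$--$30\,{\rm keV\,cm^{2}}$ range, consistent with the values quoted above.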
To achieve the required numerical resolution, we suggest resimulating, at high resolution, a number of pancakes (and filaments) with masses of the order of $5 \times 10^{12} h^{-1} \Msun$, identified from large cosmological, hydrodynamical simulations. Thus far this `zoom-in' strategy has mainly been used to study clusters and galaxies. If our estimates are correct, it will be extremely interesting to apply this technique to study pancakes and their enclosed, low mass haloes. \section*{Acknowledgements} We thank Jessica Rosenberg and Martin Zwaan for providing us with their HI mass functions. FvdB is grateful to Aaron Dutton, Justin Read and Greg Stinson for valuable discussions. NSK was supported by NSF AST-0205969, NASA NAGS-13308, and NASA NNG04GK68G.
\section{Introduction} The interest in Analysis and Geometry in Metric Spaces has grown considerably in recent decades: a major effort has been devoted to the development of analytical tools for the study of geometric problems, and sub-Riemannian Geometry has provided a particularly fruitful setting for these investigations. The present paper aims at giving a contribution in this direction by providing some geometric integration formulae, namely: an {\it area formula} for submanifolds with (intrinsic) $C^1$ regularity, and a {\it coarea formula} for slicing such submanifolds into level sets of maps with (intrinsic) $C^1$ regularity. We will work in the setting of a {\it Carnot group} $\G$, i.e., a connected, simply connected and nilpotent Lie group with stratified Lie algebra. We refer to Section~\ref{sec12101536} for precise definitions; here, we only recall that Carnot groups have a distinguished role in sub-Riemannian Geometry, as they provide the infinitesimal models (tangent spaces) of sub-Riemannian manifolds, see e.g.~\cite{Bellaiche}. As usual, a Carnot group is endowed with a distance $\rho$ that is left-invariant and 1-homogeneous with respect to the group dilations. Our main objects of investigation are {\it $C^1_H$ submanifolds}, which are introduced as (noncritical) level sets of functions with intrinsic $C^1$ regularity: let us briefly introduce the relevant definitions, which are more precisely stated in Section~\ref{sec:preliminaries}. Given an open set $\Omega\subset\G$ and another Carnot\footnote{One could more generally assume that $\G'$ is only graded, see Remark~\ref{rem:rem2.3}.} group $\G'$, a map $f:\Omega\to\G'$ is said to be {\it of class $C^1_H$} if it is differentiable \`a la P.~Pansu~\cite{Pansu} at all $p\in\Omega$ and the differential $\DH f_p:\G\to\G'$ is continuous in $p$.
Let us mention that the $C^1_H$ regularity of $f$ is equivalent to its {\it strict} Pansu differentiability (see Proposition~\ref{prop:C1Hiffstrict}): such a notion is introduced in Section~\ref{sec:Pansudifferentiability} and turns out to be useful for simplifying several arguments. Given a Carnot group $\G'$, a set $\Sigma\subset\G$ is a $C^1_H(\G;\G')$-submanifold if it is locally a level set of a map $f:\G\to\G'$ of class $C^1_H$ such that, at all points $p$, $\DH f_p$ is surjective and $\ker \DH f_p$ splits $\G$. We say that a normal homogeneous subgroup $\mathbb{W}<\G$ {\it splits} $\G$ if there exists another homogeneous subgroup $\mathbb{V}<\G$, which is {\it complementary} to $\mathbb{W}$, i.e., such that $\mathbb{V}\cap \mathbb{W} = \{0\}$ and $\G= \mathbb{W}\mathbb{V}$. Observe that $\mathbb{V}$ is necessarily isomorphic to $\G'$, see Remark~\ref{rem:V=G'}. We will also say that $p$ is {\it split-regular} for $f$ if $\DH f_p$ is surjective and $\ker \DH f_p$ splits $\G$. In Sections~\ref{sec09181614} and~\ref{sec:C1H_and_rectifiable} we prove that an Implicit Function Theorem holds for a $C^1_H$ submanifold $\Sigma$; namely, $\Sigma$ is (locally) an {\it intrinsic graph}, i.e., there exist complementary homogeneous subgroups $\mathbb{W},\mathbb{V}$ of $\G$ and a function $\phi:A\to\mathbb{V}$ defined on an open subset $A\subset\mathbb{W}$ such that $\Sigma$ coincides with the intrinsic graph $\{w\phi(w):w\in A\}$ of $\phi$. The function $\phi$ is of class $C^1_{\mathbb{W},\mathbb{V}}$ (see Definition~\ref{def:C1WV}) and it turns out to be {\it intrinsic Lipschitz continuous} according to the theory developed in recent years by B.~Franchi, R.~Serapioni and F.~Serra~Cassano, see e.g.~\cite{FrSeSe2006intrinsic,FSSCJGA2011,FranchiSerapioni2016Intrinsic}. 
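For orientation, it may help to keep in mind the commutative model case, where all of the above notions reduce to classical ones; the following summary is only an illustration, not a statement used in the paper.

```latex
% Commutative model case (illustration only): G = R^n, G' = R^m with the
% trivial stratification V_1 = R^n. Pansu differentiability reduces to
% classical differentiability, and every surjective differential splits:
\[
\mathbb{W}:=\ker Df_p\cong\R^{n-m},\qquad \mathbb{V}\cong\R^m,\qquad
\R^n=\mathbb{W}\oplus\mathbb{V},
\]
% and the classical Implicit Function Theorem writes the level set,
% locally near p, as an (intrinsic) graph over W:
\[
\{f=0\}\cap U=\{w+\phi(w):w\in A\subset\mathbb{W}\}
\qquad\text{for some }\phi\in C^1(A;\mathbb{V}).
\]
```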
We point out that both the Implicit Function Theorem and the intrinsic Lipschitz continuity of $\phi$ also follow from~\cite[Theorem 1.4]{MR3123745}: the proofs we provide in Sections~\ref{sec09181614}--\ref{sec:C1H_and_rectifiable}, however, seem shorter than those in~\cite{MR3123745} and allow for some finer results we need, see e.g.~Lemmas~\ref{lem09122015} and~\ref{lem09151054}. For related results, see~\cite{ArenaSerapioni,CittiManfredini,FSSCCAG,FSSCAdvMath,VSNS}. Our first main result is an area formula for intrinsic graphs of class $C^1_{\mathbb{W},\mathbb{V}}$ (hence, in particular, for $C^1_H$ submanifolds), where complementary subgroups $\mathbb{W}<\G$ and $\mathbb{V}<\G$ are fixed with $\mathbb{W}$ normal. Throughout the paper we denote by $\psi^d$ either the spherical or the Hausdorff measure of dimension $d$ in $\G$. \begin{Theorem}[Area formula]\label{prop05161504} Let $\G$ be a Carnot group and let $\G=\mathbb{W}\mathbb{V}$ be a splitting. Let $A\subset\mathbb{W}$ be an open set, $\phi\in C^1_{\mathbb{W},\mathbb{V}}(A)$ and let $\Sigma:=\{w\phi(w):w\in A\}$ be the intrinsic graph of $\phi$; let $d$ be the homogeneous dimension of $\mathbb{W}$. Then, for all Borel functions $h:\Sigma\to[0,+\infty)$, \begin{equation}\label{eq09071257} \int_\Sigma h \dd \psi^{d} = \int_A h(w\phi(w)) \calA(T_{w\phi(w)}^H\Sigma) \dd \psi^d (w) . \end{equation} \end{Theorem} The function $\calA(\,\cdot\,)$ appearing in~\eqref{eq09071257} is continuous and it is called {\it area factor}: it is defined in Lemma~\ref{lem:areafactor} and it depends only on ($\mathbb{W},\mathbb{V}$ and) the {\it homogeneous tangent space} $T^H_p\Sigma$ at points $p\in\Sigma$.
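To fix ideas, in the Euclidean model $\G=\R^n$, $\mathbb{W}=\R^{n-1}\times\{0\}$, $\mathbb{V}=\{0\}\times\R$, formula~\eqref{eq09071257} reduces to the classical area formula for graphs; the following is only meant to illustrate the role played by $\calA$.

```latex
% Euclidean model (illustration): for a C^1 graph Sigma = {(w, phi(w))},
% the area factor is the classical area element, which indeed depends
% only on the tangent plane at the point:
\[
\int_\Sigma h \dd\calH^{n-1}
=\int_A h(w,\phi(w))\,\sqrt{1+|\nabla\phi(w)|^{2}}\;\dd\calH^{n-1}(w),
\qquad\text{i.e.}\quad \calA=\sqrt{1+|\nabla\phi|^{2}}.
\]
```

Note that, since the measures $\calH^{n-1}$ on both sides carry the same (non-standard) normalization, the identity is unaffected by our convention of omitting normalization constants.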
The definition of area factor in Lemma~\ref{lem:areafactor} is only implicit, but of course we expect it can be made more explicit in terms of suitable derivatives of the map $\phi$: to the best of our knowledge, this program has been completed only in Heisenberg groups, see e.g.~\cite{AmbSerVit2006Intrinsic,ArenaSerapioni,Corni,CorniMagnani,FSSCAdvMath}. A relevant tool in the proof of Theorem~\ref{prop05161504} is a differentiation theorem for measures (Proposition~\ref{propFedDensity}) which is based on the so-called {\it Federer density}~\eqref{eq:defFedererdensity}: the importance of this notion was pointed out only recently by V.~Magnani, see~\cite{MagnaniEdinburgh,MagnaniNewDifferentiation,MagnaniSomeRemarks} and~\cite{FSSCcentered}. Observe that the validity of a (currently unavailable) Rademacher-type Theorem for intrinsic Lipschitz graphs would likely allow one to extend Theorem~\ref{prop05161504} to the case of intrinsic Lipschitz $\phi$. A first interesting consequence of Theorem~\ref{prop05161504} is the following Corollary~\ref{cor:SvsH}, which is reminiscent of the well-known equality between Hausdorff and spherical Hausdorff measures on $C^1$ submanifolds (and, more generally, on rectifiable subsets) of $\R^n$. We refer to Definitions~\ref{def:rectifiable} and~\ref{def:apprtangent} for the notions of {\it countably $(\G;\G')$-rectifiable} set $R\subset\G$ and of {\it approximate tangent space} $T^H R$. Such sets have Hausdorff dimension $Q-m$, where $Q$ and $m$ denote, respectively, the homogeneous dimensions of $\G$, $\G'$; we write $\cal H^{Q-m},\cal S^{Q-m}$, respectively, for Hausdorff and spherical Hausdorff measures.
We denote by $\scr T_{\G,\G'}$ the space of possible tangent subgroups to $(\G;\G')$-rectifiable sets\footnote{Equivalently, $\scr T_{\G,\G'}$ is the space of normal subgroups $\mathbb{P}<\G$ for which there exist a complementary subgroup in $\G$ and a surjective homogeneous morphism $L:\G\to\G'$ such that $\mathbb{P}=\ker L$.} and, by abuse of notation, we write $T^HR$ for the map $R\ni p\mapsto T^H_p R\in\scr T_{\G,\G'}$. \begin{Corollary}\label{cor:SvsH} Let $\G,\G'$ be Carnot groups of homogeneous dimensions $Q$, $m$, respectively. Then, there exists a continuous function $\textfrak a:\scr T_{\G,\G'}\to [1,2^{Q-m}]$ such that, for every countably $(\G;\G')$-rectifiable set $R\subset\G$, \begin{equation}\label{eq:AAAAA} \cal S^{Q-m} \hel R= \textfrak a(T^H R)\cal H^{Q-m}\hel R\,. \end{equation} Moreover, if $\G$ is a Heisenberg group $\mathbb{H}^n$ with a rotationally invariant distance $\rho$ and $\G'=\R$, then the function $\textfrak a$ is constant, i.e., there exists $C\in[1,2^{2n+1}]$ such that \[ \text{$\cal S^{2n+1} \hel R= C\cal H^{2n+1}\hel R\qquad \forall\:(\mathbb{H}^n,\R)$-rectifiable set $R\subset\mathbb{H}^n$.} \] \end{Corollary} Heisenberg groups and rotationally invariant distances are defined in Section~\ref{sec12101536} by condition \eqref{eq:seba1}, while Corollary~\ref{cor:SvsH} is proved in Section~\ref{sec:area}. To the best of our knowledge, this result is new even in the first Heisenberg group $\mathbb{H}^1$, see also~\cite[page 359]{MagnaniSomeRemarks}. Corollary~\ref{cor:SvsH} is deeply connected to the {\it isodiametric problem}, see Remark~\ref{rem:isodiametric}. Not unrelated to Corollary~\ref{cor:SvsH} is another interesting consequence of Theorem~\ref{prop05161504}, namely, the existence of the {\it density} of Hausdorff and spherical measures on rectifiable sets.
In Corollary~\ref{cor:densityexistence} we indeed prove that, if $R\subset\G$ is $(\G;\G')$-rectifiable, then the limit \[ \textfrak d(p):=\lim_{r\to 0^+}\frac{\psi^{Q-m}(R\cap \Ball(p,r))}{r^{Q-m}} \] exists for $\psi^{Q-m}$-a.e.~$p\in R$, where $\Ball(p,r)$ is the open ball of center $p$ and radius $r$ for the distance of $\G$. Actually, $\textfrak d(p)$ depends only on $T^H_pR$, in a continuous way. When $\G$ is the Heisenberg group $\mathbb{H}^n$ endowed with a rotationally invariant distance, $\G'=\R^m$ for some $1\le m\le n$, and $\psi$ is the spherical measure, then $\textfrak d$ is constant, see Corollary~\ref{cor:densityconstantinHeis}. The area formula is a key tool also in the proof of our second main result, the coarea formula in Theorem~\ref{thm:coarea} below. The classical coarea formula was first proved in the seminal paper~\cite{FedererCurvatureMeasures} and it is one of the milestones of Geometric Measure Theory. Sub-Riemannian coarea formulae have been obtained in \cite{MagnaniPublMat,MagnaniMathNacr,MagnaniCEJM,MagnaniNonHorizontal, zbMATH06358560, zbMATH06235931}, assuming classical (Euclidean) regularity on the slicing function $u$, and in \cite{MagnaniAreaCoarea,MagnaniStepanovTrevisan,MontiVittoneHeight}, assuming intrinsic regularity but only in the setting of the Heisenberg group. Here we try to work in the utmost generality: we consider a $C^1_H$ submanifold $\Sigma\subset\G$, seen as the level set of a $C^1_H$ map $f$ with values in a homogeneous group $\mathbb{M}$, and we {\it slice} it into level sets of a map $u$ with values into another homogeneous group $\mathbb{L}$. We assume for the sake of generality (see below) that $\mathbb{L},\mathbb{M}$ are complementary subgroups of a larger homogeneous group $\mathbb{K}=\mathbb{L}\mathbb{M}$; we also denote by $Q,m,\ell$ the homogeneous dimensions of $\G,\mathbb{M},\mathbb{L}$, respectively. 
\begin{Theorem}[Coarea formula]\label{thm:coarea} Let $\G,\mathbb{L},\mathbb{M}$ be Carnot groups and let $\Omega\subset\G$ be open. Fix $f\in C^1_H(\Omega;\mathbb{M})$ and assume that all points in $\Omega$ are split-regular for $f$, so that $\Sigma:=\{p\in\Omega:f(p)=0\}$ is a $C^1_H$ submanifold. Consider a function $u:\Omega\to\mathbb{L}$ such that $uf\in C^1_H(\Omega;\mathbb{K})$ and assume that \begin{equation}\label{eq:technicalassumption} \text{for $\psi^{Q-m}$-a.e.~$p\in\Sigma$,}\quad \left\{ \begin{array}{l} \text{either $\DH (uf)_p|_{T^H_p\Sigma}$ is not surjective on $\mathbb{L}$,}\\ \text{or $p$ is split-regular for $uf$.} \end{array} \right. \end{equation} Then, for every Borel function $h:\Sigma\to[0,+\infty)$ the equality \begin{equation}\label{eq:coarea2} \int_\Sigma h(p)\,\calC (T^H_p\Sigma,\DH (uf)_p) \, \dd\psi^{Q-m} (p)=\!\int_\mathbb{L} \int_{\Sigma\cap u^{-1}(s)} h(p)\dd\psi^{Q-m-\ell}(p)\:\dd\psi^\ell(s) \end{equation} holds. \end{Theorem} In~\eqref{eq:coarea2}, the symbol $\calC (T^H_p\Sigma,\DH (uf)_p)$ denotes the {\it coarea factor}: let us stress that it depends only on the restriction of $u$ to $\Sigma$ and that it does not depend on the choice of $f$ outside of $\Sigma$, see Remark~\ref{rem:olyonu}. The $\psi^\ell$-measurability of the function $\mathbb{L}\ni s\mapsto\int_{\Sigma\cap u^{-1}(s)} h\,\dd\psi^{Q-m-\ell}$ is part of the statement. The assumption $uf\in C^1_H(\Omega;\mathbb{K})$ becomes more transparent when $\mathbb{K}=\mathbb{L}\times\mathbb{M}$ is a direct product (roughly speaking, when $\mathbb{L},\mathbb{M}$ are ``unrelated'' groups): in this case, it is in fact equivalent to the $C^1_H$ regularity of $u$. Moreover, since $T^H_p\Sigma=\ker \DH f_p$, the equality $\DH (uf)_p|_{T^H_p\Sigma}=\DH u_p|_{T^H_p\Sigma}$ holds. Finally, the statement of Theorem~\ref{thm:coarea} can at the same time be simplified, stated in a more natural way, and generalized to rectifiable sets, as follows.
\begin{Corollary}\label{cor:coareaPRODUCT} Let $\G, \mathbb{L},\mathbb{M}$ be Carnot groups, let $\Omega\subset\G$ be an open set and let $R\subset\Omega$ be $(\G;\mathbb{M})$-rectifiable; assume that $u\in C^1_H(\Omega;\mathbb{L})$ is such that \begin{equation}\label{eq:technicalassumptionPRODUCT} \text{for $\psi^{Q-m}$-a.e.~$p\in R$,}\quad \left\{ \begin{array}{l} \text{either $\DH u_p|_{T^H_pR}$ is not surjective on $\mathbb{L}$,}\\ \text{or $T^H_pR\cap\ker\DH u_p$ splits $\G$.} \end{array} \right. \end{equation} Then, for every Borel function $h:\Omega\to[0,+\infty)$ the equality \begin{equation}\nonumber \int_R h(p)\,\calC (T^H_p R,\DH u_p) \, \dd\psi^{Q-m} (p)=\int_\mathbb{L} \int_{R\cap u^{-1}(s)} h(p)\dd\psi^{Q-m-\ell}(p)\:\dd\psi^\ell(s) \end{equation} holds. \end{Corollary} \begin{Remark} Let us stress that assumptions~\eqref{eq:technicalassumption} and~\eqref{eq:technicalassumptionPRODUCT} cannot be easily relaxed: given a map $u\in C^1_H(\Omega;\R^2)$ defined on an open subset $\Omega$ of the first Heisenberg group $\mathbb{H}^1\equiv\R^3$, the validity of a coarea formula of the type \[ \int_\Omega \calC(\DH u_p)\dd\psi^4(p) = \int_{\R^2}\psi^{2}(\Omega\cap u^{-1}(s))\dd\mathscr L^2(s) \] is indeed a challenging open problem as soon as $\DH u_p$ is surjective, see e.g.~\cite{Kozhevnikov,LeonardiMagnani,MagnaniStepanovTrevisan}. In our notation, this situation corresponds to $\mathbb{M}=\{0\}$ and $\mathbb{L}=\R^2$. Since the kernel of any homogeneous surjective morphism $\mathbb{H}^1\to\R^2$ is the center of $\mathbb{H}^1$, which does not admit any complementary subgroup, no point can be split-regular for $u$. Therefore, if~\eqref{eq:technicalassumptionPRODUCT} holds, then $\calC(\DH u_p)=0$ by Proposition~\ref{prop:coarea-factor}, and thus both sides of the coarea formula are null. In particular,~\eqref{eq:technicalassumptionPRODUCT} implies that for $\cal L^2$-a.e.~$s\in\R^2$, $\psi^2(\Omega\cap u^{-1}(s))=0$.
However, a coarea formula was proved for $u:\mathbb{H}^n\to \R^{2n}$, assuming $u$ to be of class $C^{1,\alpha}_H$, see~\cite[Theorem~6.2.5]{Kozhevnikov} and also \cite[Theorem~8.2]{MagnaniStepanovTrevisan}. \end{Remark} \begin{Remark} The following weak version of Sard's Theorem holds: under the assumptions and notation of Theorem~\ref{thm:coarea}, we have \begin{equation}\label{eq:sard1} \psi^{Q-m-\ell}(\{p\in\Sigma:\DH (uf)_p(T^H_p\Sigma)\varsubsetneq\mathbb{L}\}\cap u^{-1}(s))=0\quad \text{for $\psi^\ell$-a.e.~$s\in\mathbb{L}$}. \end{equation} Moreover, since every level set $\Sigma\cap u^{-1}(s)$ is a $C^1_H$ submanifold around split-regular points of $uf$, Theorem~\ref{thm:coarea} implies that \begin{equation}\label{eq:sard2} \text{$\Sigma\cap u^{-1}(s)$ is $(\G;\mathbb{K})$-rectifiable\qquad for $\psi^\ell$-a.e.~$s\in\mathbb{L}$.} \end{equation} Clearly, statements analogous to~\eqref{eq:sard1} and~\eqref{eq:sard2} hold under the assumptions and notation of either Corollary~\ref{cor:coareaPRODUCT} or Theorem~\ref{thm:coareaHeisenberg} below. \end{Remark} The proof of Theorem~\ref{thm:coarea} follows the strategy used in~\cite{FedererCurvatureMeasures} (see also~\cite{MagnaniAreaCoarea}) and, as already mentioned, it stems from the area formula of Theorem~\ref{prop05161504}, as we now describe. First, in Proposition~\ref{prop07171752} we prove a coarea inequality, that in turn is based on an ``abstract'' coarea inequality (Lemma~\ref{lem-coarea-ineq}) for Lipschitz maps between metric spaces. Second, in Lemma~\ref{prop:coarea-factor} we prove Theorem~\ref{thm:coarea} in the ``linearized'' case when both $f$ and $u$ are homogeneous group morphisms: in this case formula~\eqref{eq:coarea2} holds with a constant coarea factor $\calC(\mathbb{P},L)$ which depends only on the normal homogeneous subgroup $\mathbb{P}:=\ker f$ and on the homogeneous morphism $L=u$ (actually, on $L|_\Sigma$ only).
Lemma~\ref{prop:coarea-factor}, whose proof is a simple application of Theorem~\ref{prop05161504}, actually defines the coarea factor $\calC(\mathbb{P},L)$. The proof of Theorem~\ref{thm:coarea} is then a direct consequence of Theorem~\ref{thm:coarea2}, which states that for $\psi^{Q-m}$-a.e.~$p\in\Sigma$ the Federer density $\Theta_{\psi^d}(\mu_{\Sigma,u};p)$ of the measure \[ \mu_{\Sigma,u}(E) \defeq \int_{\mathbb{L}} \psi^{Q-m-\ell}(E\cap\Sigma\cap u^{-1}(s)) \dd \cal\psi^\ell (s),\qquad E\subset\Omega \] is equal to $\calC(T^H_p\Sigma,\DH (uf)_p)$. For ``good'' points $p$, i.e., when $\DH (uf)_p|_{T^H_p\Sigma}$ is onto $\mathbb{L}$, such equality is obtained by another application of Theorem~\ref{prop05161504}, see Proposition~\ref{prop07251509}: this is the point where one needs the assumption~\eqref{eq:technicalassumption}, which guarantees that, locally around good points, the level sets $\Sigma\cap u^{-1}(s)$ are $C^1_H$ submanifolds. The remaining ``bad'' points, where $\DH (uf)_p|_{T^H_p\Sigma}$ is not surjective on $\mathbb{L}$, can be treated using the coarea inequality, see Lemma~\ref{lem07241228}. Recall that the classical Euclidean coarea formula holds when the slicing function $u$ is only Lipschitz continuous. Extending Theorem~\ref{thm:coarea} to the case where $u:\Sigma\to\mathbb{L}$ is only Lipschitz seems for the moment out of reach. Observe that one should first provide, for a.e. $p\in\Sigma$, a notion of Pansu differential of $u$ on $T^H_p\Sigma$: this does not follow from Pansu's Theorem~\cite{Pansu}. Furthermore, the function $f$ in Theorem~\ref{thm:coarea} should play no role, and actually any result should depend only on the restriction of $u$ to $\Sigma$.
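In the Heisenberg statements below, the coarea factor takes (up to a constant) the familiar Euclidean form $\left(\det(L\circ L^T)\right)^{1/2}$. As a purely numerical illustration of that quantity, with hypothetical matrices $L$ not taken from the paper:

```python
# Illustration (hypothetical matrices, not from the paper): the Euclidean
# Jacobian sqrt(det(L L^T)) of a surjective linear map L, which is the
# shape the coarea factor takes in the Heisenberg statements below.
import math

def det(M):
    """Determinant by Laplace expansion (fine for the small sizes here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def jacobian(L):
    """sqrt(det(L L^T)) for L given as a list of rows (an l x k matrix)."""
    l = len(L)
    gram = [[sum(L[i][t] * L[j][t] for t in range(len(L[0])))
             for j in range(l)] for i in range(l)]
    return math.sqrt(det(gram))

# L : R^3 -> R^2 stretching the second coordinate
L = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0]]
assert jacobian(L) == 2.0          # Gram matrix diag(1, 4), det = 4
assert jacobian([[3.0, 4.0]]) == 5.0   # a single row: Euclidean norm
```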
Let us also stress that, to the best of our knowledge, Theorem~\ref{thm:coarea} provides the first sub-Riemannian coarea formula that is proved when the set $\Sigma$ is not a positive $\psi^Q$-measure subset of $\G$ (i.e., in the notation of Theorem~\ref{thm:coarea}, when $\mathbb{M}\neq\{0\}$). The only exception to this is~\cite[Theorem~1.5]{MontiVittoneHeight}, where a coarea formula was proved for $C^1_H$ submanifolds of codimension 1 in Heisenberg groups $\mathbb{H}^n,n\ge 2$. As a corollary of Theorem~\ref{thm:coarea}, we are able both to extend this result, to all codimensions not greater than $n$, and to improve it, in the sense that we show that the implicit ``perimeter'' measures considered in~\cite[Theorem 1.5]{MontiVittoneHeight} on the level sets of $u$ are indeed Hausdorff or spherical measures. Furthermore, when $\mathbb{H}^n$ is endowed with a rotationally invariant distance, $u$ takes values in $\R^\ell$, and the measures $\psi^d$ under consideration are $\cal S^d$, then the coarea factor coincides up to constants with the quantity \begin{equation}\label{eq:defJRu} J^Ru(p):=\left( \det (L\circ L^T) \right)^{1/2},\qquad L:=\DH u_p|_{T^H_pR}. \end{equation} In~\eqref{eq:defJRu}, the point $p$ belongs to a rectifiable set $R\subset\mathbb{H}^n$ and, by abuse of notation, we use standard exponential coordinates on $\mathbb{H}^n\equiv\R^{2n+1}$ to identify $T^H_pR$ with a $(2n+1-m)$-dimensional plane; with this identification $\DH u_p$ is a linear map on $\R^{2n+1}$ that is, actually, independent of the last ``vertical'' coordinate. The superscript $^T$ denotes transposition. \begin{Theorem}[Coarea formula in Heisenberg groups]\label{thm:coareaHeisenberg} Consider an open set $\Omega\subset\mathbb{H}^n$, a $(\mathbb{H}^n,\R^m)$-rectifiable set $R\subset\Omega$ and a function $u\in C^1_H(\Omega;\R^\ell)$ such that $1\le m+\ell\le n$.
Then, for every Borel function $h:R\to[0,+\infty)$ the equality \begin{equation*} \int_R h(p)\calC(T^H_pR,\DH u_p) \, \dd\psi^{2n+2-m} (p)=\int_{\R^\ell} \int_{R\cap u^{-1}(s)} h(p)\dd\psi^{2n+2-m-\ell}(p)\,\dd\psi^\ell(s) \end{equation*} holds. Moreover, if $\mathbb{H}^n$ is endowed with a rotationally invariant distance $\rho$, then there exists a constant $\textfrak c=\textfrak c(n,m,\ell,\rho)>0$ such that \begin{equation*} \textfrak c \int_R h(p)J^Ru(p) \, \dd\cal S^{2n+2-m} (p)=\int_{\R^\ell} \int_{R\cap u^{-1}(s)} h(p)\dd\cal S^{2n+2-m-\ell}(p)\,\dd\cal L^\ell(s). \end{equation*} \end{Theorem} The first statement of Theorem~\ref{thm:coareaHeisenberg} is an immediate application of Corollary~\ref{cor:coareaPRODUCT}, while the second one needs an explicit representation for the spherical measure on {\it vertical} subgroups of $\mathbb{H}^n$ (i.e., elements of $\scr T_{\mathbb{H}^n,\R^k}$) which uses results from~\cite{CorniMagnani}. See Proposition~\ref{prop:Sverticalsubgroups}. \medskip {\it Acknowledgements.} The authors are grateful to F.~Corni, V.~Magnani, R.~Monti and P.~Pansu for several stimulating discussions. They wish to express their gratitude to A.~Merlo for suggesting to address the density existence problem of Corollary~\ref{cor:densityexistence}. \section{Preliminaries}\label{sec:preliminaries} \subsection{First definitions}\label{sec12101536} Let $V$ be a real vector space with finite dimension and $[\cdot,\cdot]:V\times V\to V$ be the Lie bracket of a Lie algebra $\frk g=(V,[\cdot,\cdot])$. We say that $\mathfrak{g}$ is \emph{graded} if subspaces $V_1,\dots,V_s$ are fixed so that \begin{align*} &V=V_1\oplus\dots\oplus V_s\\ \text{and }&[V_i,V_j]:=\mathrm{span}\{[v,w]:v\in V_i,\ w\in V_j\}\subset V_{i+j}\text{ for all $i,j\in\{1,\dots,s\}$,} \end{align*} where we agree that $V_k=\{0\}$ if $k>s$. Graded Lie algebras are nilpotent. A graded Lie algebra is \emph{stratified of step $s$} if the equality $[V_1,V_j]=V_{j+1}$ holds for all $1\le j\le s-1$ and $V_s\neq\{0\}$.
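The simplest nonabelian example of the above definitions, which will reappear later as the Lie algebra of the Heisenberg group, is the following.

```latex
% First Heisenberg algebra: a stratified Lie algebra of step 2.
\[
\frk h^1=V_1\oplus V_2,\qquad V_1=\mathrm{span}\{X,Y\},\quad
V_2=\mathrm{span}\{T\},\qquad [X,Y]=T,
\]
% with all other brackets of basis elements vanishing, so that
\[
[V_1,V_1]=V_2,\qquad [V_1,V_2]=\{0\}=V_3,
\]
% i.e., the grading is a stratification of step s = 2.
```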
Our main objects of study are stratified Lie algebras, but we will often work with subalgebras that are only graded. On the vector space $V$ we define a group operation via the Baker--Campbell--Hausdorff formula \begin{align*} pq &:= \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n} \sum_{\{s_j+r_j>0:j=1\dots n\}} \frac{ [p^{r_1}q^{s_1}p^{r_2}q^{s_2} \cdots p^{r_n}q^{s_n}] } {\sum_{j=1}^n (r_j+s_j) \prod_{i=1}^n r_i!s_i! } \\ &=p+q+\frac12[p,q]+\dots , \end{align*} where \[ [p^{r_1}q^{s_1}p^{r_2}q^{s_2} \cdots p^{r_n}q^{s_n}] = \underbrace{[p,[p,\dots,}_{r_1\text{ times}} \underbrace{[q,[q,\dots,}_{s_1\text{ times}} \underbrace{[p,\dots}_{\dots}]\dots]]\dots]] . \] The sum in the formula above is finite because $\frk g$ is nilpotent. The resulting Lie group, which we denote by $\G$, is nilpotent and simply connected; we will call it a \emph{graded group} or a \emph{stratified group}, depending on the type of grading of the Lie algebra. The identification $\G=V=\frk g$ corresponds to the identification between Lie algebra and Lie group via the exponential map $\exp:\mathfrak{g}\to\G$. Notice that $p^{-1}=-p$ for every $p\in\G$ and that $0$ is the neutral element of $\G$. If $\frk g'$ is another graded Lie algebra with underlying vector space $V'$ and Lie group $\G'$, then, with the same identifications as above, a map $V\to V'$ is a Lie algebra morphism if and only if it is a Lie group morphism, and all such maps are linear. In particular, we denote by $\Hom_h(\G;\G')$ the space of all \emph{homogeneous morphisms} from $\G$ to $\G'$, that is, all linear maps $V\to V'$ that are Lie algebra morphisms (equivalently, Lie group morphisms) and that map $V_j$ to $V_j'$. If $\mathfrak{g}$ is stratified, then homogeneous morphisms are uniquely determined by their restriction to $V_1$. For $\lambda>0$, define the \emph{dilations} as the maps $\delta_\lambda:V\to V$ such that $\delta_\lambda v=\lambda^j v$ for $v\in V_j$.
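As a concrete sanity check of the group structure just described, one can verify numerically, in the first Heisenberg group, that the Baker--Campbell--Hausdorff product is associative, that $p^{-1}=-p$, and that the dilations are group automorphisms. The coordinates used below are the standard exponential coordinates and the snippet is an illustration, not code from the paper.

```python
# Numerical sanity check (illustration, not from the paper) in the first
# Heisenberg group H^1: coordinates p = (x, y, t), first layer spanned by
# X, Y, second layer by T, with [X, Y] = T.  For a step-2 algebra the
# Baker--Campbell--Hausdorff series truncates: p q = p + q + (1/2)[p, q].

def mul(p, q):
    """BCH group law: only the bracket [X, Y] = T survives."""
    x, y, t = p
    a, b, c = q
    return (x + a, y + b, t + c + 0.5 * (x * b - y * a))

def inv(p):
    """p^{-1} = -p, as noted in the text."""
    return (-p[0], -p[1], -p[2])

def dil(lam, p):
    """Dilation delta_lambda: degree 1 on V_1, degree 2 on V_2."""
    return (lam * p[0], lam * p[1], lam ** 2 * p[2])

def close(p, q, eps=1e-12):
    """Componentwise comparison up to floating-point rounding."""
    return all(abs(a - b) < eps for a, b in zip(p, q))

p, q, r = (1.0, 2.0, 0.5), (-0.3, 0.7, 1.1), (2.0, -1.0, 0.25)

assert close(mul(mul(p, q), r), mul(p, mul(q, r)))   # associativity
assert close(mul(p, inv(p)), (0.0, 0.0, 0.0))        # inverses
assert close(mul(dil(3.0, p), dil(3.0, q)),
             dil(3.0, mul(p, q)))                    # dilations are morphisms
```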
Notice that $\delta_\lambda\delta_\mu=\delta_{\lambda\mu}$ and that $\delta_\lambda\in\Hom_h(\G;\G)$, for all $\lambda,\mu>0$. Notice also that a Lie group morphism $F:\G\to\G'$ is homogeneous if and only if $F\circ\delta_\lambda = \delta_\lambda'\circ F$ for all $\lambda>0$, where $\delta'_\lambda$ denotes the dilations in $\G'$. We say that a subset $M$ of $V$ is \emph{homogeneous} if $\delta_\lambda(M)=M$ for all $\lambda>0$. Let $\mathbb{P}$ be a homogeneous subgroup of $\G$ and $\theta$ a Haar measure on $\mathbb{P}$. Since $\delta_\lambda|_{\mathbb{P}}$ is an automorphism of $\mathbb{P}$, there is $c_\lambda>0$ such that $(\delta_{\lambda})_\#\theta = c_\lambda\theta$. Since the map $\lambda\mapsto \delta_\lambda|_{\mathbb{P}}$ is a multiplicative one-parameter group of automorphisms, the map $\lambda\mapsto c_\lambda$ is a continuous automorphism of the multiplicative group $(0,+\infty)$, hence $c_\lambda = \lambda^{-d}$ for some $d\in\R$. As $\delta_\lambda$ is contractive for $\lambda<1$, we actually have $d>0$. Since any other Haar measure of $\mathbb{P}$ is a positive multiple of $\theta$, the constant $d$ does not depend on the choice of the Haar measure. We call this exponent $d$ the \emph{homogeneous dimension} of $\mathbb{P}$. The homogeneous dimension of the ambient space $\G$ is denoted by $Q$ and it is easy to see that $Q=\sum_{i=1}^s i\dim V_i$. A \emph{homogeneous distance} on $\G$ is a distance function $\rho$ that is left-invariant and 1-homogeneous with respect to dilations, i.e., \begin{enumerate} \item[(i)] $\rho(gx,gy)=\rho(x,y)$ for all $g,x,y\in\G$; \item[(ii)] $\rho(\delta_\lambda x,\delta_\lambda y) = \lambda \rho(x,y)$ for all $x,y\in\G$ and all $\lambda>0$. \end{enumerate} When a stratified group $\G$ is endowed with a homogeneous distance $\rho$, we call the metric Lie group $(\G,\rho)$ a \emph{Carnot group}.
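For instance, for the Heisenberg group $\mathbb{H}^n$, whose Lie algebra has $\dim V_1=2n$ and $\dim V_2=1$, one gets

```latex
\[
Q(\mathbb{H}^n)=\sum_{i=1}^{2} i\dim V_i = 1\cdot 2n+2\cdot 1=2n+2,
\]
```

which is strictly larger than the topological dimension $2n+1$; this is the source of the exponents $2n+2-m$ appearing in Theorem~\ref{thm:coareaHeisenberg}.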
Homogeneous distances induce the topology of $\G$, see \cite[Proposition 2.26]{MR3943324}, and are biLipschitz equivalent to each other. Every homogeneous distance defines a \emph{homogeneous norm} $\|\cdot\|_\rho:\G\to[0,+\infty)$, $\|p\|_\rho := \rho(0,p)$. We denote by $|\cdot|$ the Euclidean norm in $\R^\ell$. The following property relating norm and conjugation, proved in \cite[Lemma~2.13]{FranchiSerapioni2016Intrinsic}, will be useful: there exists $C=C(\G,\rho)>0$ such that \begin{equation}\nonumber \|q^{-1}pq\|_\rho \le \|p\|_\rho+C\left( \|p\|_\rho^{1/s}\|q\|_\rho^{(s-1)/s}+\|p\|_\rho^{(s-1)/s}\|q\|_\rho^{1/s}\right)\quad\forall\: p,q\in\G. \end{equation} Open balls with respect to $\rho$ are denoted by $\Ball_\rho(x,r)$, closed balls by $ \cBall_\rho(x,r)$, or simply $\Ball(x,r)$ and $ \cBall(x,r)$ if it is clear which distance we are using. We also use the notation $\cBall(E,r) := \{x: \rho(x,E)\le r\}$ for subsets $E$ of $\G$. The diameter of a set with respect to $\rho$ is denoted by $\diam(E)$ or $\diam_\rho(E)$. Notice that $\diam_\rho(\Ball_\rho(p,r)) = 2r$ for all $p\in\G$ and $r>0$. By left-invariance of $\rho$ it suffices to prove this for $p=0$. On the one hand the triangle inequality implies $\diam_\rho(\Ball_\rho(0,r)) \le 2r$. On the other hand, if $v\in V_1$ is such that $\rho(0,v)=r$, then $\rho(0,v^{-1})=r$ and $\rho(v^{-1},v) = \rho(0,2v) = 2\rho(0,v) = 2r$, because $vv = v+v = \delta_2v$. It follows that $\diam_\rho(\Ball_\rho(0,r)) \ge 2r$. If $\rho$ and $\rho'$ are homogeneous distances on $\G$ and $\G'$, the distance between two homomorphisms $L,M\in\Hom_h(\G;\G')$ is \[ d_{\rho,\rho'}(L,M) := \max_{p\neq0}\frac{\rho'(L(p),M(p))}{\|p\|_\rho} = \max_{\|p\|_\rho=1}\rho'(L(p),M(p)) . \] The function $d_{\rho,\rho'}$ is a distance on $\Hom_h(\G;\G')$ inducing the manifold topology. \subsection{Measures and Federer density}\label{sec09172014} In the following, the word measure will stand for outer measure.
We work on $\G$ and its subsets endowed with the metric $\rho$. In particular, the balls are those defined by $\rho$ and the Hausdorff dimension of $(\G,\rho)$ coincides with the homogeneous dimension $Q$. For $d\in [\,0,Q\,]$, let $\calH^d$ and $\calS^d$ be the Hausdorff and spherical Hausdorff measures of dimension $d$ in $\G$ defined for $E\subset\G$ by \begin{align*} &\calH^d(E):=\lim_{\epsilon\to 0^+}\quad\inf\left\{\sum_{j\in\N}(\diam E_j)^d:E\subset\bigcup_{j\in\N}E_j,\ \diam E_j<\epsilon\right\},\\ &\calS^d(E):=\lim_{\epsilon\to 0^+}\quad\inf\left\{\sum_{j\in\N}(2r_j)^d:E\subset\bigcup_{j\in\N}\cBall(x_j,r_j),\ 2r_j<\epsilon\right\}. \end{align*} It is clear that, in the definition of $\calH^d$, one can ask the covering sets $E_j$ to be closed. Moreover, we clearly have $\calH^d(E) \le \calS^d(E) \le 2^d \calH^d(E)$. Note that, contrary to the usual Euclidean or Riemannian definition, we do not introduce normalization constants; this is due to the fact that the appropriate constant is usually linked to the solution of the isodiametric problem, which is open in Carnot groups and their subgroups and also highly dependent on the metric~$\rho$. See also Remark~\ref{rem:isodiametric}. In the following, $\psi^d$ will be either $\calH^d$ or $\calS^d$ and $\scr E$ will be, respectively, the collection of closed subsets of $\G$ of positive diameter or the collection of closed balls in $\G$ with positive diameter.
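The chain of inequalities $\calH^d\le\calS^d\le 2^d\calH^d$ stated above is elementary but worth spelling out.

```latex
% H^d <= S^d: every covering by closed balls is in particular a covering
% by sets of the same diameters, so the infimum defining H^d is taken
% over a larger family.
% S^d <= 2^d H^d: if (E_j)_j covers E and x_j is any point of E_j, then
% E_j is contained in the closed ball cBall(x_j, diam E_j), and
\[
\sum_{j\in\N}\big(2\diam E_j\big)^d = 2^d \sum_{j\in\N}(\diam E_j)^d ,
\]
% so every admissible covering for H^d yields an admissible covering of
% balls for S^d at the cost of the factor 2^d.
```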
If $\mu$ is a measure on $\G$, define the \emph{$\psi^d$-density of $\mu$ at $x\in\G$} as \begin{align} \Theta_{\psi^d}(\mu;x) &\defeq \lim_{\epsilon\to 0^+} \sup\left\{ \frac{\mu(E)}{(\diam E)^d} : x\in E\in\scr E,\ \diam E\le\epsilon \right\}.\label{eq:defFedererdensity} \end{align} This upper density is sometimes called \emph{Federer density} \cite{FSSCcentered,MagnaniEdinburgh,MagnaniNewDifferentiation}; note that if $\psi^d$ is the spherical measure, its Federer density can differ from the usual spherical density, as the latter involves centered balls. Recall that a measure $\nu$ is \emph{Borel regular} if open sets are measurable and for every $A\subset\G$ there exists a Borel set $A'\subset\G$ such that $A\subset A'$ and $\nu(A')=\nu(A)$. We will use the following density estimates, which follow from {\cite[Theorems 2.10.17 and 2.10.18]{FedererGMT}}. \begin{Theorem}[Density estimates]\label{thm:Federer} Let $\psi^d$ be as above, $\mu$ a Borel regular measure, and fix $t>0$ and a set $A$ in $\G$. Then \begin{enumerate} \item[(i)] if $\Theta_{\psi^d}(\mu;x) < t$ for all $x\in A$, then $\mu(A) \le t \psi^d(A)$, \item[(ii)] if $\Theta_{\psi^d}(\mu;x)> t$ for all $x\in A$, then $\mu(A)\ge t \psi^d(A)$. \end{enumerate} \end{Theorem} A direct consequence of these results is the following (see also \cite[Theorem~9]{MagnaniEdinburgh} and~\cite[Theorem 1.11]{FSSCcentered}). \begin{Proposition}\label{propFedDensity} If $\mu$ is locally finite and Borel regular on $\G$, and if $x\mapsto \Theta_{\psi^d}(\mu;x)$ is a Borel function which is positive and finite $\mu$-almost everywhere, then \[ \mu = \Theta_{\psi^d}(\mu;\cdot) \psi^d. \] \end{Proposition} Proving that the Federer density is $\psi^d$-measurable or Borel is in general not an easy task; we provide a criterion, which will be useful later in Sections~\ref{sec:goodpoints} and~\ref{sec:badpoints}.
Recall that a Borel measure $\nu$ is \emph{doubling} if there exists $C\ge 1$ such that $\nu(\Ball(p,2r))\le C\,\nu(\Ball(p,r))$ for all $p\in \G$ and $r>0$. \begin{Proposition}\label{prop:RadonNykodym} Given a set $\Sigma\subset \G$ such that $\psi^d\hel \Sigma$ is a locally doubling Borel regular measure, assume that $\mu$ is a locally finite Borel regular measure, absolutely continuous with respect to $\psi^d\hel \Sigma$; then \begin{enumerate}[label=(\roman*)] \item $\Theta_{\psi^d}(\mu;\cdot)$ is $(\psi^d\hel\Sigma)$-measurable; \item $\Theta_{\psi^d}(\mu;\cdot)<+\infty$, $\psi^d$-a.e.~on $\Sigma$ and \[ \Theta_{\psi^d}(\mu;p)=\lim_{r\to 0^+}\frac{\mu(\cBall(p,r))}{\psi^d(\Sigma\cap\cBall(p,r))},\quad \text{for $\psi^d$-a.e.~}p\in\Sigma; \] \item $\mu=\Theta_{\psi^d}(\mu;\cdot) \psi^d\hel\Sigma$. \end{enumerate} In particular \begin{equation}\nonumber \lim_{r\to 0^+}\aveint{\Sigma\cap\cBall(p,r)}{}\left|\Theta_{\psi^d}(\mu;\cdot) - \Theta_{\psi^d}(\mu;p)\right|\dd \psi^d=0,\qquad\text{for $\psi^d$-a.e.~}p\in\Sigma. \end{equation} \end{Proposition} \begin{proof} It is well known (see e.g.~\cite{rigot2018differentiation}) that the Radon-Nikodym differentiation theorem holds for differentiating a measure with respect to a doubling measure. Precisely, by combining~\cite[Theorems 2.2,~2.3,~3.1]{rigot2018differentiation} one infers that the Radon-Nikodym derivative \[ \Theta(p):=\lim_{r\to 0^+}\frac{\mu(\cBall(p,r))}{\psi^d(\Sigma\cap\cBall(p,r))} \] exists and is finite $\psi^d$-a.e.~on $\Sigma$. Moreover, $\Theta$ is $(\psi^d\hel\Sigma)$-measurable, $\mu=\Theta \psi^d\hel\Sigma$ and (see~\cite[Section 2.7]{Heinonen}) \[ \lim_{r\to 0^+}\aveint{\Sigma\cap\cBall(p,r)}{}\left|\Theta - \Theta(p)\right|\dd \psi^d=0\qquad\text{for $\psi^d$-a.e.~}p\in\Sigma. \] As a consequence, we have only to prove that $\Theta_{\psi^d}(\mu;p)=\Theta(p)$ for $\psi^d$-a.e.~$p\in \Sigma$.
In turn, it is enough to show that, for every fixed $s,t\in\Q$, $s<t$, the sets \begin{align*} & A:=\{p\in\Sigma:\Theta(p)<s<t<\Theta_{\psi^d}(\mu;p)\}\\ & B:=\{p\in\Sigma:\Theta_{\psi^d}(\mu;p)<s<t<\Theta(p)\} \end{align*} are $\psi^d$-negligible. On the one hand, let $A'$ be a Borel set with $A\subset A'$, $\psi^d(A)=\psi^d(A')$ and $A'\subset\{\Theta<s\}$. Then \begin{align*} s\psi^d(A) = s\psi^d(A') \ge \int_{A'} \Theta\dd\psi^d = \mu(A') \ge \mu(A)\ge t\psi^d(A), \end{align*} where the last inequality is a consequence of Theorem~\ref{thm:Federer} $(ii)$. Therefore, $\psi^d(A)=0$. On the other hand, let $B'$ be a Borel set with $B\subset B'$, $\mu(B')=\mu(B)$. Then \[ t\psi^d(B) \le \int_{B'}\Theta\dd\psi^d = \mu(B') = \mu(B) \le s\psi^d(B), \] where the last inequality is a consequence of Theorem~\ref{thm:Federer} $(i)$. Therefore $\psi^d(B)=0$. \end{proof} \subsection{Pansu differential}\label{sec:Pansudifferentiability} Let $\G$ and $\G'$ be two Carnot groups and $\Omega\subset\G$ open. A function $f:\Omega\to\G'$ is \emph{Pansu differentiable at $p\in\Omega$} if there is $L\in\Hom_h(\G;\G')$ such that \[ \lim_{x\to p} \frac{\rho'( f(p)^{-1}f(x) , L(p^{-1}x) )}{\rho(p,x)} = 0 . \] The map $L$ is called \emph{Pansu differential} of $f$ at $p$ and it is denoted by $\DH f(p)$ or $\DH f_p$. A map $f:\Omega\to\G'$ is \emph{of class $C^1_H$} if $f$ is Pansu differentiable at all points of $\Omega$ and the Pansu differential $p\mapsto \DH f(p)$ is continuous. We denote by $C^1_H(\Omega;\G')$ the space of all maps from $\Omega$ to $\G'$ of class $C^1_H$. A function $f:\Omega\to\G'$ is \emph{strictly Pansu differentiable at $p\in\Omega$} if there is $L\in\Hom_h(\G;\G')$ such that \[ \lim_{\epsilon\to0}\ \sup \left\{ \frac{\rho'( f(y)^{-1}f(x) , L(y^{-1}x) )}{\rho(x,y)} : x,y\in\Ball_\rho(p,\epsilon),\ x\neq y \right\} = 0 . \] Clearly, in this case $f$ is Pansu differentiable at $p$ and $L=\DH f(p)$. 
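A basic example is provided by homogeneous morphisms: if $L\in\Hom_h(\G;\G')$, then for all $x\neq y$
\[
\frac{\rho'( L(y)^{-1}L(x) , L(y^{-1}x) )}{\rho(x,y)} = \frac{\rho'( L(y^{-1}x) , L(y^{-1}x) )}{\rho(x,y)} = 0 ,
\]
so that $L$ is strictly Pansu differentiable at every point $p\in\G$ with $\DH L(p)=L$; in particular, $L\in C^1_H(\G;\G')$.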
The next result allows us to simplify several arguments in the sequel: \begin{Proposition}\label{prop:C1Hiffstrict} A function $f:\Omega\to\G'$ is of class $C^1_H$ on $\Omega$ if and only if $f$ is strictly Pansu differentiable at all points in $\Omega$. \end{Proposition} \begin{proof} Assume that $f\in C^1_H(\Omega,\G')$ and let $p\in\Omega$ be fixed; then, by \cite[Theorem 1.2]{MR3123745} one has \[ \lim_{\epsilon\to0}\ \sup \left\{ \frac{\rho'( f(y)^{-1}f(x) , \DH f_x(y^{-1}x) )}{\rho(x,y)} : x,y\in\Ball_\rho(p,\epsilon),\ x\neq y \right\}=0\,. \] The continuity of $x\mapsto \DH f_x$ provides \[ \lim_{\epsilon\to0}\ \sup \left\{ \frac{\rho'( \DH f_x(y^{-1}x) , \DH f_p(y^{-1}x) )}{\rho(x,y)} : x,y\in\Ball_\rho(p,\epsilon),\ x\neq y \right\}=0 \] and the strict differentiability of $f$ at $p$ follows. Conversely, assume that $f$ is strictly Pansu differentiable at all points in $\Omega$; we have to prove that $p\mapsto \DH f_p$ is continuous. Assume not, i.e., assume there exist $\delta>0$ and, for every $n\in\N$, points $x_n\in\Omega$ and $v_n\in\G$ such that $\|v_n\|_\rho=1$, $x_n\to p$ and \[ \rho'(\DH f_{x_n}(v_n),\DH f_p(v_n))\ge 2\delta\qquad\forall\:n\in\N. \] By strict differentiability of $f$ at $p$ there exist $\bar n$ and $\bar s>0$ such that \[ \frac{\rho'(f(x_n)^{-1}f(x_n\delta_s v_n),\DH f_p(\delta_s v_n))}{s}\le \delta\qquad\forall\: n\ge\bar n,\ s\in(0,\bar s). \] In particular, for every $n\ge\bar n$ and $s\in(0,\bar s)$ we have \begin{eqnarray*} {\rho'(f(x_n)^{-1}f(x_n\delta_s v_n),\DH f_{x_n}(\delta_s v_n))}\hspace{-4.5cm}&\\ &\ge&\, {\rho'(\DH f_{p}(\delta_s v_n),\DH f_{x_n}(\delta_s v_n))} -{\rho'(f(x_n)^{-1}f(x_n\delta_s v_n),\DH f_{p}(\delta_s v_n))}\\ &\ge&\, 2\delta s-\delta s=\delta s. \end{eqnarray*} This would contradict the Pansu differentiability of $f$ at $x_n$. \end{proof} \begin{Lemma}\label{lem10051109} If $f\in C^1_H(\Omega;\G')$, then $f:(\Omega,\rho)\to(\G',\rho')$ is locally Lipschitz. \end{Lemma} \begin{proof} Let $p\in\Omega$.
By strict differentiability of $f$ at $p$, there is $\epsilon>0$ such that \[ \frac{\rho'( f(y)^{-1}f(x) , L(y^{-1}x) )}{\rho(y,x)}<1 \qquad\text{for all $x,y\in \Ball_\rho(p,\epsilon)$, $x\neq y$,} \] where $L=\DH f(p)$. Since $\rho'(0,L (y^{-1}x))\le C \rho(y,x)$ for some positive $C$, then $\rho'(f(y),f(x))=\rho'(0,f(y)^{-1}f(x)) \le (C+1)\rho(y,x)$, that is, $f$ is Lipschitz continuous on $\Ball_\rho(p,\epsilon)$. \end{proof} \begin{Remark}\label{rem:rem2.3} The notion of Pansu differentiability, as well as Lemma \ref{lem10051109}, can be stated also when the target group $\G'$ is only graded. However, there is no loss of generality in assuming $\G'$ to be stratified. Indeed, if $f:\Omega\to\G'$ is locally Lipschitz, then the image of a rectifiable curve in $\G$ is a rectifiable curve in $\G'$ tangent to the first layer $V_1'$ in the grading of $\G'$; since $\G$ is stratified, each connected component $U$ of $\Omega$ is pathwise connected by rectifiable curves, and this implies that $f(U)$ is contained in (a coset of) the stratified subgroup of $\G'$ generated by $V_1'$. Moreover, as soon as $f$ is open, or has a regular point, then $\G'$ must be a Carnot group. \end{Remark} \subsection{Intrinsic graphs and implicit function theorem}\label{sec09181614} We refer to \cite{FranchiSerapioni2016Intrinsic} for a more general theory of intrinsic graphs. Recall the identification $\G=\frk g=V$ that we made in Section~\ref{sec12101536}. \begin{Lemma}\label{lem09151008} Let $\mathbb{V}$ and $\mathbb{W}$ be homogeneous linear subspaces of a graded group $\G$. If $\mathbb{V}\cap\mathbb{W}=\{0\}$ and $\dim\mathbb{V}+\dim\mathbb{W}=\dim\G$, then the map $\mathbb{W}\times\mathbb{V}\to\G$, $(w,v)\mapsto wv$, is a surjective diffeomorphism. \end{Lemma} \begin{proof} Denote by $\phi:\mathbb{W}\times\mathbb{V}\to\G$ the map $\phi(w,v):=wv$. 
Since its differential at $(0,0)$ is a linear isomorphism, $\phi$ is a diffeomorphism from a neighborhood of $(0,0)$ to a neighborhood of $0\in\G$. Since $\phi(\delta_\lambda w,\delta_\lambda v) = \delta_\lambda\phi(w,v)$ for all $\lambda>0$, we conclude that $\phi$ is a surjective diffeomorphism onto $\G$. \end{proof} A homogeneous subgroup $\mathbb{W}$ is \emph{complementary} to a homogeneous subgroup $\mathbb{V}$ if $\G=\mathbb{W}\mathbb{V}$ and $\mathbb{W}\cap\mathbb{V}=\{0\}$. We denote by $\mathscr W_\mathbb{V}$ the set of all homogeneous subgroups of $\G$ that are complementary to $\mathbb{V}$. By Lemma~\ref{lem09151008}, we have $\mathbb{W}\in\mathscr W_\mathbb{V}$ if and only if $\mathbb{V}\in\mathscr W_\mathbb{W}$. Again by Lemma~\ref{lem09151008}, any choice of $\mathbb{V}$ and $\mathbb{W}\in\mathscr W_\mathbb{V}$ gives two projections \begin{equation}\label{eq:defproiezioni} \pi_\mathbb{W}:\G\to\mathbb{W},\qquad \pi_\mathbb{V}:\G\to\mathbb{V}, \end{equation} which are defined, for every $p\in\G$, by requiring $\pi_\mathbb{W}(p)=w\in\mathbb{W}$ and $\pi_\mathbb{V}(p)=v\in\mathbb{V}$ to be the only elements such that $p=wv$. We will also write $p_\mathbb{W}$ and $p_\mathbb{V}$ for $\pi_\mathbb{W}(p)$ and $\pi_\mathbb{V}(p)$, respectively. We say that a normal homogeneous subgroup \emph{$\mathbb{W}$ splits $\G$} if $\scr W_\mathbb{W}\neq\emptyset$. In this case we call a choice of $\mathbb{W}$ and $\mathbb{V}\in\scr W_\mathbb{W}$ a \emph{splitting} of $\G$ and we write $\G=\mathbb{W}\cdot\mathbb{V}$. We say that $p\in\Omega$ is a \emph{split-regular point} of $f$ if the Pansu differential of $f$ at $p$ exists and is surjective, and if $\ker(\DH f(p))$ splits $\G$. Recall that the kernel of a group morphism is always normal. A \emph{singular point} is a point that is not split-regular. 
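\begin{Remark} As a concrete illustration, consider the first Heisenberg group $\mathbb H^1$, i.e., $\R^3$ with coordinates $(x,y,t)$, group law
\[
(x,y,t)\,(x',y',t') = \left(x+x',\ y+y',\ t+t'+\tfrac12(xy'-yx')\right)
\]
and dilations $\delta_\lambda(x,y,t)=(\lambda x,\lambda y,\lambda^2 t)$. The map $f(x,y,t):=y$ is a homogeneous morphism $\mathbb H^1\to\R$, hence $f\in C^1_H(\mathbb H^1;\R)$ with $\DH f(p)=f$ for every $p$. Its kernel $\mathbb{W}:=\{y=0\}$ is a normal homogeneous subgroup and, letting $\mathbb{V}:=\{(0,s,0):s\in\R\}$, a direct computation gives $\mathbb{W}\cap\mathbb{V}=\{0\}$ and $\mathbb{W}\mathbb{V}=\mathbb H^1$, so that $\mathbb{W}$ splits $\mathbb H^1$ and every point is a split-regular point of $f$. \end{Remark}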
\begin{Remark}\label{rem:V=G'} We observe that, if $p\in\Omega$ is a split-regular point of $f\in C^1_H(\Omega;\G')$ and $\mathbb{V}\in\mathscr W_{\ker(\DH f(p))}$, then $\DH f(p)|_{\mathbb{V}}:\mathbb{V}\to\G'$ is an isomorphism of graded groups. In particular, $\mathbb{V}$ is necessarily stratified. For instance, if $\G'=\R^m$, then $\mathbb{V}$ is an Abelian subgroup of $\G$ contained in $V_1$. \end{Remark} Notice that a point can fail to be split-regular for $f\in C^1_H(\Omega;\G')$ for two distinct reasons: the differential $\DH f_p$ may fail to be surjective, or its kernel may fail to split $\G$. However, the set of split-regular points is open, i.e., if $\DH f_p$ is surjective and $(\ker\DH f_p)\cdot\mathbb{V}$ is a splitting, then, for $q$ close enough to $p$, $\DH f_q$ is surjective and $(\ker\DH f_q)\cdot\mathbb{V}$ is a splitting. \begin{Lemma}[Coercivity]\label{lem08011611} If $f\in C^1_H(\Omega;\G')$, $p\in\Omega$ is a split-regular point and $\mathbb{V}$ is complementary to $\ker(\DH f(p))$, then there are a neighborhood $U$ of $p$ and $C>0$ such that, for all $q\in U$ and $v\in\mathbb{V}$ with $qv\in U$, \[ \rho'(f(q),f(qv)) \ge C \|v\|_\rho . \] \end{Lemma} \begin{proof} Arguing by contradiction, assume that there are sequences $q_j\in\Omega$ and $v_j\in\mathbb{V}\setminus\{0\}$ such that $q_j\to p$ and $v_j\to 0$ as $j\to\infty$, and $\rho'(f(q_j),f(q_jv_j)) \le \|v_j\|_\rho/j$. Up to passing to a subsequence, we can assume that there exists $\bar w = \lim_{j\to\infty} \delta_{\|v_j\|_\rho^{-1}} v_j$. It follows that $\bar w\in\mathbb{V}$ and $\|\bar w\|_\rho =1$. Moreover, by strict differentiability \[ \DH f(p)\bar w = \lim_{j\to\infty} \delta_{1/\|v_j\|_\rho}\left( f(q_j)^{-1}f(q_jv_j) \right) = 0 , \] in contradiction with the fact that $\mathbb{V}$ is complementary to the kernel of $\DH f(p)$. \end{proof} Let $\mathbb{W}\in\mathscr W_{\mathbb{V}}$.
A set $\Sigma\subset\G$ is \emph{an intrinsic graph $\mathbb{W}\to\mathbb{V}$} if there is a subset $A\subset\mathbb{W}$ and a function $\phi:A\to\mathbb{V}$ such that $\Sigma=\{w\phi(w):w\in A\}$. Clearly, $\Sigma\subset\G$ is an intrinsic graph $\mathbb{W}\to\mathbb{V}$ if and only if the map $\pi_\mathbb{W}|_\Sigma:\Sigma\to\mathbb{W}$ is injective; in particular, every $\mathbb{P}\in\mathscr W_\mathbb{V}$ is an intrinsic graph $\mathbb{W}\to\mathbb{V}$. Left translations and dilations of $\mathbb{W}\to\mathbb{V}$ intrinsic graphs are again $\mathbb{W}\to\mathbb{V}$ intrinsic graphs, see \cite[Proposition 3.6]{ArenaSerapioni}. The proof of the following lemma is inspired by \cite[Theorem A.5]{MR3906289}. Similar statements are contained in \cite[Theorem~3.27]{FSSCAdvMath} and \cite[Theorem 1.4]{MR3123745}. \begin{Lemma}[Implicit Function Theorem]\label{lem05161502}\label{lem:implicitfunction} Let $\Omega_0\subset\G$ be open, $g\in C^1_H(\Omega_0;\G')$ and let $o\in\G$ be a split-regular point of $g$. Let $\G=\mathbb{W}\cdot\mathbb{V}$ be a splitting of $\G$ such that $\ker(\DH g(o))$ is an intrinsic graph $\mathbb{W}\to\mathbb{V}$. Then there are neighborhoods $A$ of $\pi_\mathbb{W}(o)$ in $\mathbb{W}$, $B$ of $g(o)$ in $\G'$ and $\Omega\subset\Omega_0$ of $o$, and a map $\varphi:A\times B\to\mathbb{V}$ such that the map $(a,b)\mapsto a\varphi(a,b)$ is a homeomorphism $A\times B\to \Omega$ and $g(a\varphi(a,b)) = b$. In particular, the map $\phi:A\to\mathbb{V}$ defined by $\phi(a):=\varphi(a,g(o))$ is such that $\{p\in\Omega:g(p)=g(o)\}=\{a\phi(a)\in\G:a\in A\}$. \end{Lemma} \begin{Remark} Notice that $\mathbb{V}\cap \ker(\DH g(o))=\{0\}$; hence, in the above lemma one can of course choose $\mathbb{W}=\ker(\DH g(o))$. \end{Remark} \begin{proof}[Proof of Lemma~\ref{lem05161502}] First, we prove that there is an open neighborhood $U\subset\Omega_0$ of $o$ such that the restriction $g|_{p\mathbb{V}}:p\mathbb{V}\cap U\to\G'$ is injective, for all $p\in U$.
Arguing by contradiction, suppose that this is not the case. Then there are sequences $p_j,q_j\to o$ such that $p_j^{-1}q_j\in\mathbb{V}$ and $g(p_j)=g(q_j)$. From the strict Pansu differentiability of $g$ at $o$, it follows that \[ 0 = \lim_{j\to\infty} \frac{\rho'( g(q_j)^{-1}g(p_j) , \DH g(o)[p_j^{-1}q_j] ) }{ \rho(p_j,q_j) } = \lim_{j\to\infty}\left\| \DH g(o) \left[\delta_{\frac1{\rho(p_j,q_j)}} (p_j^{-1}q_j) \right] \right\|_{\rho'} . \] By the compactness of the sphere $\{v\in\mathbb{V}:\|v\|_\rho=1\}$, up to passing to a subsequence, there is $v\in\mathbb{V}$ with $\|v\|_\rho=1$ such that $\lim_{j\to\infty}\delta_{\rho(p_j,q_j)^{-1}} (p_j^{-1}q_j) = v$. Therefore, we obtain $\DH g(o)v=0$, in contradiction with the assumptions. This proves the first claim. Second, since the restriction $g|_{p\mathbb{V}\cap U}:p\mathbb{V}\cap U\to \G'$ is a continuous and injective map, and since both $\mathbb{V}$ and $\G'$ are topological manifolds of the same dimension, then we can apply the Invariance of Domain Theorem and obtain that $g|_{p\mathbb{V}\cap U}:p\mathbb{V}\cap U\to g(p\mathbb{V}\cap U)$ is a homeomorphism and that $g(p\mathbb{V}\cap U)$ is an open set. Third, let $U_2\Subset U_1\Subset U$ be open neighborhoods of $o$. We claim that there is $A\subset\mathbb{W}$ open such that $\pi_\mathbb{W}(o)\in A$ and such that for every $p\in o\mathbb{V}\cap U_2$ and for every $a\in A$ there is $q\in a\mathbb{V}\cap U_1$ such that $g(p)=g(q)$. Arguing by contradiction, suppose that this is not the case. Then there are sequences $a_j\in\mathbb{W}$ with $a_j\to\pi_\mathbb{W}(o)$ and $p_j\in o\mathbb{V}\cap U_2$ such that $g(p_j)\notin g(a_j\mathbb{V}\cap U_1)$. By the compactness of $\bar U_1$ and the continuity of $g$, for each $j$ there is $q_j\in a_j\mathbb{V}\cap \bar U_1$ such that \begin{equation}\label{eq12101651} \rho'(g(p_j),g(q_j)) = \inf\{\rho'(g(p_j),g(q)) : q\in a_j\mathbb{V}\cap U_1 \} .
\end{equation} Since $g$ is a homeomorphism on each fiber $p\mathbb{V}\cap U$ and since $g(p_j)\notin g(a_j\mathbb{V}\cap U_1)$, we have $q_j\in a_j\mathbb{V}\cap\partial U_1$. Up to passing to a subsequence, there are $p_0\in o\mathbb{V}\cap \bar U_2$ and $q_0\in o\mathbb{V}\cap\partial U_1$ such that $p_j\to p_0$ and $q_j\to q_0$. Now, notice that $a_j\pi_\mathbb{W}(o)^{-1}\to 0$ and that, for $j$ large enough, we have $a_j\pi_\mathbb{W}(o)^{-1}p_j\in a_j\mathbb{V}\cap U_1$. Therefore, using~\eqref{eq12101651}, \[ \lim_{j\to\infty} \rho'( g(p_j) , g(q_j) ) \le \lim_{j\to\infty} \rho'( g(p_j) ,g(a_j\pi_\mathbb{W}(o)^{-1}p_j) ) =0 , \] that is, $g(p_0)=g(q_0)$. Since $p_0\in o\mathbb{V}\cap\bar U_2$ and $q_0\in o\mathbb{V}\cap (U\setminus U_1)$, this contradicts the injectivity of $g$ on $o\mathbb{V}\cap U$ and proves the claim. Next, let $B:=g(o\mathbb{V}\cap U_2)$, which is an open neighborhood of $g(o)$, and $\Omega := \pi_\mathbb{W}^{-1}(A)\cap g^{-1}(B) \cap U_1$. The previous claims imply that for every $a\in A$ and every $b\in B$ there is a unique $v\in\mathbb{V}$ such that $av\in \Omega$ and $g(av)=b$. Define $\varphi:A\times B\to \mathbb{V}$ as $\varphi(a,b)=v$. Finally, we claim that the map $\Phi(a,b):= a\varphi(a,b)$ is a homeomorphism $A\times B\to \Omega$. Notice that, if $p=\Phi(a,b)$, then $a=\pi_\mathbb{W}(p)$ and $b=g(p)$: therefore, $\Phi$ is injective. Moreover, if $p\in \Omega$, then $\pi_\mathbb{W}(p)\in A$, $g(p)\in B$ and $\Phi(\pi_\mathbb{W}(p),g(p)) = p$: therefore, $\Phi$ is also surjective. Finally, since $\Phi^{-1}:\Omega\to A\times B$ is a continuous bijection, it is a homeomorphism by the Invariance of Domain Theorem. This completes the proof. \end{proof} We observe that, when $g:\G\to\G'$ is a homogeneous group morphism, then the statement of Lemma~\ref{lem05161502} holds with $A=\mathbb{W}$, $B=\G'$ and $\Omega=\G$.
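When, moreover, $g$ is surjective and one chooses $\mathbb{W}=\ker g$, the implicit function can be written explicitly: for any $\mathbb{V}\in\scr W_{\mathbb{W}}$ the restriction $g|_{\mathbb{V}}:\mathbb{V}\to\G'$ is an isomorphism and
\[
\varphi(a,b)=(g|_{\mathbb{V}})^{-1}(b)\qquad\text{for all }(a,b)\in\mathbb{W}\times\G',
\]
because $g\big(a\,(g|_{\mathbb{V}})^{-1}(b)\big)=g(a)\,b=b$ whenever $a\in\ker g$. In particular, $\varphi$ does not depend on $a$ and every level set of $g$ is the intrinsic graph of a constant map $\mathbb{W}\to\mathbb{V}$.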
\begin{Lemma}\label{lem09122015} Under the assumptions and notation of Lemma~\ref{lem05161502}, suppose $o=0$ and, without loss of generality (up to replacing $g$ with $g(0)^{-1}g$), $g(0)=0$; define for $\lambda>0$ \[ \begin{array}{lccl} \varphi_{\lambda}:&\delta_{1/\lambda}A \times\delta_{1/\lambda}B &\to &\delta_{1/\lambda}\Omega \\ & (a,b) &\mapsto & \delta_{1/\lambda}\varphi(\delta_\lambda a,\delta_\lambda b) \end{array} \] Let $\varphi_0$ be the implicit function associated with $\DH g(0)$, that is, $\varphi_0:\mathbb W\times\G'\to\mathbb V$ is such that $\DH g(0)(a\varphi_0(a,b))=b$ for all $a$ and $b$. Then $\varphi_{\lambda}\to \varphi_0$ locally uniformly as $\lambda\to0^+$. \end{Lemma} \begin{proof} Without loss of generality, we assume $\Omega$ to be compactly contained in the domain of $g$. Define $g_\lambda:\delta_{1/\lambda}\Omega\to\G'$ by \[ g_\lambda(x) = \delta_{1/\lambda}g(\delta_\lambda x) . \] Notice that $g_\lambda(a\varphi_\lambda(a,b)) = b$ for all $(a,b)\in\delta_{1/\lambda}A \times\delta_{1/\lambda}B$. Possibly taking a smaller $\Omega$, by Lemma~\ref{lem08011611} there is $C>0$ such that $\rho'(g(x),g(y))\ge C\rho(x,y)$ for all $x,y\in\Omega$ with $\pi_\mathbb{W}(x)=\pi_\mathbb{W}(y)$. It follows that $\rho'(g_\lambda(x),g_\lambda(y)) \ge C\rho(x,y)$ for all $x,y\in\delta_{1/\lambda}\Omega$ with $\pi_\mathbb{W}(x)=\pi_\mathbb{W}(y)$, because $\pi_\mathbb{W}\circ\delta_\lambda=\delta_\lambda\circ\pi_\mathbb{W}$. Fix a compact set $K\subset\mathbb{W}\times\G'$ and let $(a,b)\in K$.
Then, for large enough $\lambda$ (depending only on $K$) we have $(a,b)\in\delta_{1/\lambda}A \times\delta_{1/\lambda}B$, $a\varphi_\lambda(a,b)\in\delta_{1/\lambda}\Omega$ and $a\varphi_0(a,b)\in\delta_{1/\lambda}\Omega$, hence \begin{align*} \rho(\varphi_\lambda(a,b),\varphi_0(a,b)) &= \rho(a\varphi_\lambda(a,b),a\varphi_0(a,b)) \\ &\le\frac1C \rho'( g_\lambda(a\varphi_\lambda(a,b)) , g_\lambda(a\varphi_0(a,b)) ) \\ &=\frac1C \rho'( b , g_\lambda(a\varphi_0(a,b)) ) \\ &=\frac1C \rho'( \DH g_0(a\varphi_0(a,b)) , g_\lambda(a\varphi_0(a,b)) ) . \end{align*} Since $g$ is Pansu differentiable at 0, $g_\lambda\to \DH g_0$ uniformly on compact sets. The map $(a,b)\mapsto a\varphi_0(a,b)$ is a homeomorphism $\mathbb{W}\times\G'\to\G$, hence $\varphi_\lambda\to\varphi_0$ uniformly on compact sets. \end{proof} \subsection{\texorpdfstring{$C^1_H$}{C1H} submanifolds and rectifiable sets} \label{sec:C1H_and_rectifiable} A set $\Sigma\subset\G$ is a \emph{submanifold of class }$C^1_H$ (or $C^1_H$ submanifold for short) if there exists a Carnot group $\G'$ such that for every $p\in\Sigma$ there are an open neighborhood $\Omega$ of $p$ in $\G$ and a function $f\in C^1_H(\Omega;\G')$ such that $p$ is split-regular for $f$ and $\Sigma\cap\Omega = \{f=0\}$. In this case, we sometimes call $\Sigma$ a $C^1_H(\G;\G')$-submanifold. The \emph{homogeneous tangent subgroup} to $\Sigma$ at $p\in\Sigma$ is the homogeneous normal subgroup $T^H_p\Sigma := \ker(\DH f(p))$. Statement $(iii)$ in the next lemma implies that $T^H_p\Sigma$ does not depend on the choice of $f$. Observe also that the homogeneous dimension of $T_p^H\Sigma$ is equal to the difference of the homogeneous dimensions of $\G$ and $\G'$ and is, in particular, independent of $p$; we call this integer \emph{homogeneous dimension} of $\Sigma$ and denote it by $\dim_H\Sigma$. 
\begin{Definition}\label{def:C1WV} Given a splitting $\G=\mathbb{W}\cdot\mathbb{V}$ and an open set $A\subset\mathbb{W}$, we say that \emph{$\phi:A\to\mathbb{V}$ is of class $C^1_{\mathbb{W},\mathbb{V}}(A)$} if the intrinsic graph $\Sigma$ of $\phi$ is a $C^1_H$ submanifold and $T^H_{w\phi(w)}\Sigma\in \mathscr W_\mathbb{V}$ for every $w\in A$. \end{Definition} Observe that, since $\mathbb{V}$ is isomorphic to $\G'$, the homogeneous dimension of $\mathbb{W}$ is equal to that of $\Sigma$. \begin{Lemma}\label{lem09151054} Let $\Sigma\subset\G$ be a $C^1_H$ submanifold and $o\in\Sigma$. Let $\G=\mathbb{W}\cdot\mathbb{V}$ be a splitting such that $T^H_o\Sigma$ is the intrinsic graph of $\phi_0:\mathbb{W}\to\mathbb{V}$. The following statements hold: \begin{enumerate}[label=(\roman*)] \item There are open neighborhoods $\Omega$ of $o$ and $A$ of $\pi_\mathbb{W}(o)$, and a function $\phi\in C^1_{\mathbb{W},\mathbb{V}}(A)$ such that $\Sigma\cap\Omega$ is the intrinsic graph of $\phi$. \item Define $\phi_\lambda(x) := \delta_{1/\lambda}\phi(\delta_\lambda x)$; then $\phi_\lambda\in C^1_{\mathbb{W},\mathbb{V}}(\delta_{1/\lambda}A)$ and $\phi_\lambda\to\phi_0$ uniformly on compact sets as $\lambda\to0^+$. \item $\lim_{\lambda\to0^+} \delta_{1/\lambda}(o^{-1}\Sigma) = T^H_o\Sigma$ in the sense of local Hausdorff convergence of sets. \item If $U$ is a neighborhood of $o$ such that $\Sigma\cap U$ is the level set of $f\in C^1_H(U,\G')$ and $o$ is a split-regular point of $f$, then $\G'$ is isomorphic to $\mathbb{V}$. \end{enumerate} \end{Lemma} The proof of statements $(i)$, $(ii)$ and $(iii)$ is left to the reader, since they are consequences of Lemmas~\ref{lem:implicitfunction} and~\ref{lem09122015}. As for statement $(iv)$, it is enough to notice that the group morphism $\DH f(o)|_{\mathbb{V}}:\mathbb{V}\to\G'$ is injective (because $\mathbb{V}\cap\ker \DH f(o)=\{0\}$) and surjective (because $o$ is split-regular).
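\begin{Remark} For a concrete illustration of the graph map, consider the first Heisenberg group $\mathbb H^1$, i.e., $\R^3$ with coordinates $(x,y,t)$ and product $(x,y,t)(x',y',t')=(x+x',y+y',t+t'+\tfrac12(xy'-yx'))$, with the splitting given by $\mathbb{W}=\{y=0\}$ and $\mathbb{V}=\{(0,s,0):s\in\R\}$. Writing $\phi(x,0,t)=(0,\varphi(x,t),0)$ for a map $\phi:A\to\mathbb{V}$ defined on an open set $A\subset\mathbb{W}$, the intrinsic graph of $\phi$ is
\[
\Sigma=\left\{\Big(x,\ \varphi(x,t),\ t+\tfrac{x}{2}\,\varphi(x,t)\Big):(x,0,t)\in A\right\},
\]
so that, in contrast with Euclidean graphs, the parametrization affects the vertical coordinate as well. \end{Remark}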
An important property of the parametrizing map $\phi$ is that it is intrinsic Lipschitz in accordance with the theory developed by B.~Franchi, R.~Serapioni and F.~Serra Cassano, see e.g.~\cite{FrSeSe2006intrinsic,FranchiSerapioni2016Intrinsic}. We recall that, given a splitting $\G=\mathbb{W}\cdot\mathbb{V}$ and $A\subset\mathbb{W}$, a map $\phi:A\to\mathbb{V}$ is \emph{intrinsic Lipschitz} if there exists $\cal C\subset \G$ such that the following conditions hold: \begin{itemize} \item[(a)] $\cal C$ is a cone, i.e., $\delta_\lambda\cal C=\cal C$ for all $\lambda> 0$; \item[(b)] $\mathbb{V}$ is an axis of $\cal C$, i.e., $\mathbb{V}\subset\cal C$ and $\mathbb{V}\setminus\{0\}\subset\mathring{\cal C}$; \item[(c)] the graph $\Sigma:=\{a\phi(a):a\in A\}$ of $\phi$ satisfies $\Sigma\cap (p\cal C)=\{p\}$ for all $p\in\Sigma$. \end{itemize} \begin{Remark} The above definition of Lipschitz continuity for intrinsic graphs $\mathbb{W}\to\mathbb{V}$, though slightly different, is equivalent to the one introduced by B.~Franchi, R.~Serapioni and F.~Serra Cassano, see e.g.~\cite[Remark A.2]{MR3906289}. \end{Remark} \begin{Corollary}\label{cor:intrLip} $C^1_H$ submanifolds are locally intrinsic Lipschitz graphs. \end{Corollary} \begin{proof} Let $\Sigma\subset\G$ be a $C^1_H$ submanifold, $o\in\Sigma$ and $\mathbb{V}\in\scr W_{T^H_o\Sigma}$. We need to prove that there are a neighborhood $\Omega$ of $o$ and a cone $\cal C$ with axis $\mathbb{V}$ such that $(\Sigma\cap\Omega)\cap (p\cal C) = \{p\}$ for all $p\in\Sigma\cap\Omega$. Let $\Omega$ be a neighborhood of $o$ with $f\in C^1_H(\Omega;\G')$ such that $\Sigma\cap \Omega=\{p\in\Omega:f(p)=f(o)\}$ and all points in $\Omega$ are split-regular for $f$.
Up to shrinking $\Omega$, we can also assume, by Lemma~\ref{lem08011611}, that there exists $C>0$ such that \[ \rho'(f(p),f(pv))\ge C\|v\|_\rho\quad\forall\: p\in\Omega,\ v\in\mathbb{V}\text{ such that }pv\in\Omega, \] and that, by Lemma~\ref{lem10051109}, $f:(\Omega,\rho)\to(\G',\rho')$ is $L$-Lipschitz, for some $L\ge0$. Define the cone \[ \cal C:=\{0\}\cup\bigcup_{v\in\mathbb{V}} \Ball_\rho(v,\tfrac CL\|v\|_\rho)\subset \G. \] Requirements (a) and (b) above are clearly satisfied; to prove (c), let $p\in\Sigma\cap\Omega$ and $q\in(\Sigma\cap\Omega)\cap (p\cal C)$ with $q\neq p$. Then there exists $v\in \mathbb{V}$ such that $\rho(q,pv)<\tfrac CL\|v\|_\rho$, hence \[ \rho'(f(p),f(q)) \ge \rho'(f(p),f(pv))-\rho'(f(pv),f(q))\ge C\|v\|_\rho-L\rho(q,pv)>0. \] It follows that $f(q)\neq f(p)$, contradicting the fact that $p,q\in\Sigma\cap\Omega$. We conclude that $(\Sigma\cap\Omega)\cap (p\cal C)=\{p\}$, and this completes the proof. \end{proof} The following result is an easy consequence of Lemma~\ref{lem09151054}, Corollary~\ref{cor:intrLip} and~\cite[Theorem~3.9]{FranchiSerapioni2016Intrinsic}. We denote by $\psi^d$ either the $d$-dimensional Hausdorff or $d$-dimensional spherical Hausdorff measure on $\G$ as in Section~\ref{sec09172014}. \begin{Proposition}[Local Ahlfors regularity of the surface measure on $C^1_H$ submanifolds]\label{prop:doubling} Let $\Sigma\subset\G$ be a $C^1_H$ submanifold and let $d:=\dim_H\Sigma$; then, for every compact set $K\subset\Sigma$ there exist $C=C(K)>0$ and $r_0=r_0(K)>0$ such that \begin{equation}\label{eq:Ahlfors} \frac1C r^d\le\psi^d(\Sigma\cap\Ball(p,r))\le Cr^d\qquad\forall\: p\in K,\ 0<r<r_0. \end{equation} In particular, the measure $\psi^d\hel\Sigma$ is locally doubling. \end{Proposition} Some of the results of this paper hold for the more general class of rectifiable sets that we now introduce.
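\begin{Remark} In the commutative case, the cone condition reduces to the classical Lipschitz condition. Let $\G=\R^n$ be endowed with the Euclidean distance and let $\R^n=\mathbb{W}\times\mathbb{V}$ be a splitting into orthogonal linear subspaces; for $\lambda>0$ consider the cone
\[
\cal C_\lambda:=\{w+v:\ w\in\mathbb{W},\ v\in\mathbb{V},\ |w|\le\lambda|v|\},
\]
which satisfies requirements (a) and (b) above. Given $\phi:A\to\mathbb{V}$ and $a,b\in A$, the point $b+\phi(b)$ belongs to $(a+\phi(a))+\cal C_\lambda$ if and only if $|b-a|\le\lambda|\phi(b)-\phi(a)|$; hence the graph of $\phi$ satisfies condition (c) with $\cal C=\cal C_\lambda$ if and only if $|\phi(b)-\phi(a)|<\lambda^{-1}|b-a|$ for all distinct $a,b\in A$. In particular, for cones of this form, intrinsic Lipschitz graphs $\mathbb{W}\to\mathbb{V}$ are exactly the classical Lipschitz graphs. \end{Remark}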
\begin{Definition}[Rectifiable sets]\label{def:rectifiable} We say that a set $R\subset\G$ is {\em countably $(\G;\G')$-rectifiable} if there exist a Carnot group $\G'$ and countably many $C^1_H(\G;\G')$-submanifolds $\Sigma_j\subset\G$, $j\in\N,$ such that, denoting by $Q,m$ the homogeneous dimensions of $\G,\G'$, one has \[ \psi^{Q-m}\Big(R\setminus\bigcup_{j}\Sigma_j\Big)=0. \] We say that $R$ is {\em $(\G;\G')$-rectifiable} if, moreover, $\psi^{Q-m}(R)<+\infty$. \end{Definition} The groups $\G,\G'$ will usually be understood and we will simply write {\it rectifiable} in place of {\it $(\G;\G')$-rectifiable}. Notice that, by Remark~\ref{rem:V=G'}, if $\psi^{Q-m}(R)>0$, then the group $\G'$ is uniquely determined by $R$ up to biLipschitz isomorphism. We also recall that this notion of rectifiability is not known to be equivalent to the ones by means of cones, as in \cite{FrSeSe2006intrinsic,FranchiSerapioni2016Intrinsic,2019arXiv191200493D,2020arXiv200309196J}. A key object in the theory of rectifiable sets is the approximate tangent space. \begin{Definition}[Approximate tangent space]\label{def:apprtangent} Let $R\subset\G$ be countably $(\G;\G')$-rectifiable and let $\Sigma_j$, $j\in\N$, be as in Definition~\ref{def:rectifiable}; for $\psi^{Q-m}$-a.e.~$p\in R$ we define the {\em approximate tangent space} $T_p^HR$ to $R$ at $p$ as \[ T^H_p R:=T^H_p\Sigma_{\bar\jmath}\qquad\text{whenever }p\in\Sigma_{\bar\jmath}\setminus\bigcup_{j\le\bar\jmath-1}\Sigma_j. \] \end{Definition} Definition~\ref{def:apprtangent} is well-posed provided one shows that, for $\psi^{Q-m}$-a.e.~$p\in R$, $T^H_p R$ does not change if in Definition~\ref{def:rectifiable} one changes the covering family of submanifolds $(\Sigma_j)_j$.
In turn, it is enough to show that, if $\Sigma',\Sigma''$ are level sets of $f'\in C^1_H(\Omega';\G')$, $f''\in C^1_H(\Omega'';\G')$ defined on open sets $\Omega',\Omega''\subset\G$ and all points are split-regular for $f',f''$, then (see also~\cite[Section~2]{DonVittoneJMAA}) \begin{equation}\label{eq:psi of I} \psi^{Q-m}(\{p\in\Sigma'\cap\Sigma'':T^H_p\Sigma'\neq T^H_p\Sigma''\})=0 . \end{equation} Let $I$ be the set in \eqref{eq:psi of I}. Assume by contradiction that $\psi^{Q-m}(I)>0$; we can without loss of generality suppose that $\Sigma'$ is the intrinsic graph of a map $\phi:A\to\mathbb{V}$ defined on an open set $A\subset\mathbb{W}$ for some splitting $\G=\mathbb{W}\cdot\mathbb{V}$. Let $J:=\{w\in A:w\phi(w)\in I\}$; by Theorem~\ref{prop05161504} one has $\psi^{Q-m}(J)>0$, hence there exists $\bar w\in J$ such that \[ \lim_{r\to 0^+}\frac{\psi^{Q-m}(J\cap \Ball(\bar w,r))}{\psi^{Q-m}(\mathbb{W}\cap \Ball(\bar w,r))}=1. \] Taking Lemma~\ref{lem09151054} (iii) into account, it is then a routine task to prove that the blow-up of $I$ at $\bar p:=\bar w\phi(\bar w)$, i.e., the limit $\lim_{\lambda\to0^+} \delta_{1/\lambda}(\bar p^{-1}I)$ in the sense of local Hausdorff convergence, is $T^H_{\bar p}\Sigma'$. Since $I\subset\Sigma''$, this implies that $T^H_{\bar p}\Sigma'\subset T^H_{\bar p}\Sigma''$ and in turn, by equality of the dimensions, that $T^H_{\bar p}\Sigma'= T^H_{\bar p}\Sigma''$: this is a contradiction. \section{The area formula}\label{sec:area} Let $\mathbb{P}$ be a homogeneous subgroup of $\G$ with homogeneous dimension $d$ and let $\theta$ be a Haar measure on $\mathbb{P}$.
By dilation invariance of $\scr E$ and $\mathbb{P}$, and by the $d$-homogeneity of $\theta$, one has \begin{align} \Theta_{\psi^d}(\theta;0) &= \lim_{\epsilon\to0^+} \sup\left\{ \frac{\theta(E\cap\mathbb{P})}{\diam(E)^d} : 0\in E\in\scr E,\ 0<\diam(E)\le\epsilon \right\} \nonumber\\ &= \lim_{\epsilon\to0^+} \sup\left\{ \frac{\theta(\delta_{\diam(E)^{-1}}E\cap\mathbb{P})}{\diam(\delta_{\diam(E)^{-1}}E)^d} : 0\in E\in\scr E,\ 0<\diam(E)\le\epsilon \right\}\label{eq2020blabla} \\ &= \sup\left\{ \theta(E\cap\mathbb{P}) : 0\in E\in\scr E,\ \diam(E)=1 \right\}.\nonumber \end{align} This simple observation turns out to be useful to study the Federer density $\Theta_{\psi^d}$ of $\psi^d\hel \mathbb{P}$. \begin{Lemma}\label{lem09122022} Let $\mathbb{P}$ be a homogeneous subgroup of $\G$ with homogeneous dimension~$d$ and let $\psi^d$ be either the spherical or the Hausdorff $d$-dimensional measure on $\G$. Then $\psi^d\hel\mathbb{P}$ is a Haar measure on $\mathbb{P}$ and for all $x\in \mathbb{P}$, \begin{equation}\label{eq10041858bis} \sup\left\{ \psi^d(E\cap\mathbb{P}): x\in E\in\scr E,\ \diam(E)=1 \right\}= \Theta_{\psi^d} (\psi^d\hel \mathbb{P};x) = 1. \end{equation} \end{Lemma} \begin{proof} As $\scr E$ and $\rho$ are left invariant, $\psi^d\hel \mathbb{P}$ is a left invariant measure on $\mathbb{P}$. Therefore, we only need to show that it is nonzero and locally finite to prove that it is a Haar measure. Fix a Haar measure $\theta$ on $\mathbb{P}$. Since $\theta$ is $d$-homogeneous, $\theta$ is Ahlfors $d$-regular on $(\mathbb{P},\rho)$; therefore, there are constants $0<c<C$ such that \[ c\theta(B) \le \calH^d(B) \le C\theta(B) \] for all Borel subsets $B\subset\mathbb{P}$, see for instance \cite[Exercise~8.11]{Heinonen}. By basic comparisons of the Hausdorff and spherical measures, we infer that $\psi^d\hel\mathbb{P}$ is nonzero and locally finite. We can conclude that $\psi^d\hel\mathbb{P}$ is a Haar measure on $\mathbb{P}$. It remains to prove the equalities in~\eqref{eq10041858bis}.
The first equality now follows from~\eqref{eq2020blabla} and left-invariance. The second equality follows instead from Theorem~\ref{thm:Federer}. \end{proof} The following lemma proves Theorem~\ref{prop05161504} in a ``linearized'' case and allows us to define the area factor $\calA$. \begin{Lemma}[Definition of the area factor]\label{lem:areafactor} Let $\mathbb{W}\cdot\mathbb{V}$ be a splitting of $\G$ with $\mathbb{W}$ normal. Assume that $\mathbb{P}$ is a homogeneous subgroup of $\G$ which is also an intrinsic graph $\mathbb{W}\to\mathbb{V}$ and let $\Phi_\mathbb{P}:\mathbb{W}\to\mathbb{P}$ be the corresponding graph map. Then, there exists a positive constant $\calA(\mathbb{P})$, which we call \emph{area factor}, such that \[ \psi^d\hel \mathbb{P} = \calA(\mathbb{P}) {\Phi_\mathbb{P}}_\# (\psi^d\hel \mathbb{W}). \] Furthermore, the area factor is continuous in $\mathbb{P}$. \end{Lemma} \begin{proof} In order to prove the first part of the lemma it suffices to show that $\mu \defeq {\Phi_\mathbb{P}}_\# (\psi^d\hel \mathbb{W})$ is a Haar measure on $\mathbb{P}$. To see that it is locally finite, note that $\Phi_\mathbb{P}$ is a homeomorphism between $\mathbb{W}$ and $\mathbb{P}$ and that therefore bounded open sets in $\mathbb{P}$ have positive and finite $\mu$-measure. We need to prove that $\mu$ is left invariant. Choose a set $E \subset \mathbb{P}$. Let $p = p_\mathbb{W} p_\mathbb{V}$ be a point on $\mathbb{P}$ and pick a point $x =x_\mathbb{W} x_\mathbb{V} \in E$; then we can write \[ \pi_\mathbb{W}(px) = \pi_\mathbb{W}(p_\mathbb{W} p_\mathbb{V} x_\mathbb{W} p_\mathbb{V}^{-1} p_\mathbb{V} x_\mathbb{V} ) = p_\mathbb{W} \varphi(x_\mathbb{W}) , \] where $\varphi:\mathbb{W}\to\mathbb{W}$ is the group automorphism $\varphi(w):= p_\mathbb{V} w p_\mathbb{V}^{-1}$. Let $v\in \mathfrak{g}$ be such that $p_\mathbb{V}=\exp(v)$, where $\exp:\mathfrak{g}\to\G$ is the exponential map.
Then we have \[ \det(\mathrm{D}\varphi(e)|_\mathbb{W}) = \det(\Ad_{p_\mathbb{V}}|_{\mathbb{W}}) = \det(e^{\mathtt{ad}_v|_{\mathbb{W}}}) = e^{tr(\mathtt{ad}_v|_{\mathbb{W}})} = 1, \] where $tr(\mathtt{ad}_v|_{\mathbb{W}})=0$ because $\mathtt{ad}_v$ is nilpotent. Here, we denoted by $\mathtt{ad}$ and $\Ad$ the adjoint representations of $\mathfrak{g}$ and $\G$ respectively; recall that $\Ad_{\exp(v)} = e^{\mathtt{ad}_v}$. This implies that $\varphi$ preserves Haar measures of $\mathbb{W}$ and thus \begin{align*} \mu(pE) = \psi^d (\pi_\mathbb{W}(pE)) = \psi^d(p_\mathbb{W}\varphi(\pi_\mathbb{W}(E))) = \psi^d(\pi_\mathbb{W} (E)) = \mu(E) . \end{align*} We conclude that $\mu$ is a Haar measure on $\mathbb{P}$, so the first part of the statement is proved. Let us prove that $\calA(\mathbb{P})$ is continuous with respect to $\mathbb{P}$. By Proposition~\ref{propFedDensity}, $\calA(\mathbb{P})$ is equal to $\Theta_{\psi^d}(\mu,0)$ and, by~\eqref{eq2020blabla}, \[ \calA(\mathbb{P})= \sup \{ \psi^d(\pi_\mathbb{W} (E\cap \mathbb{P})) : 0\in E\in \scr E, \diam E= 1\}. \] Fix $\epsilon>0$ and let $\mathbb{P}$ and $\mathbb{P}'$ be homogeneous subgroups that are intrinsic graphs on $\mathbb{W}$ of maps $\phi_\mathbb{P},\phi_{\mathbb{P}'}:\mathbb{W}\to\mathbb{V}$ such that \begin{equation}\label{eq12120947} \rho(\phi_\mathbb{P}(w),\phi_{\mathbb{P}'}(w)) <\epsilon \quad\forall w\in \pi_\mathbb{W}( \cBall(0,1)). \end{equation} Pick $E\in \scr E$ with $0\in E$ and $\diam E = 1$ such that $ \psi^d(\pi_\mathbb{W} (E\cap \mathbb{P}))>(1-\epsilon)\calA(\mathbb{P})$. Notice that, if $w\in\pi_\mathbb{W} (E\cap \mathbb{P})$, then $\rho(w\phi_\mathbb{P}(w),w\phi_{\mathbb{P}'}(w)) < \epsilon$. Therefore, denoting by $\cBall(E,r)$ the closed $r$-neighborhood of $E$, we have \[ \pi_\mathbb{W} (E\cap \mathbb{P}) \subset \pi_\mathbb{W}(\cBall(E,\epsilon)\cap \mathbb{P}').
\] If $\psi^d$ is the Hausdorff measure, then $\cBall(E,\epsilon)\in\scr E$ and $\diam(\cBall(E,\epsilon)) \le 1+2\epsilon$; if $\psi^d$ is the spherical measure, then $E=\cBall(x,1/2)$ for some $x\in\G$ and thus $\cBall(E,\epsilon) \subset \cBall(x,1/2+\epsilon)\in\scr E$ with $\diam(\cBall(x,1/2+\epsilon))\le 1+2\epsilon$. In both cases, we obtain \[ \calA(\mathbb{P}') \ge (1+2\epsilon)^{-d} (1-\epsilon) \calA( \mathbb{P}). \] Notice that this inequality holds for all $\mathbb{P}$ and $\mathbb{P}'$ satisfying \eqref{eq12120947}, hence we also have $\calA(\mathbb{P}) \ge (1+2\epsilon)^{-d} (1-\epsilon) \calA( \mathbb{P}')$. We conclude that $\mathbb{P} \mapsto \calA(\mathbb{P})$ is continuous. \end{proof} It is worth observing that the area factor implicitly depends on the fixed group $\mathbb{W}$. We are now ready to prove our first main result. \begin{proof}[Proof of Theorem~\ref{prop05161504}] The function $a(w) := \calA(T_{w\phi(w)}^H\Sigma) $ is continuous on $A$ with values in $(0,\infty)$. We define the measure $\mu$, supported on $\Sigma$, by \[ \mu(E) := \int_{\pi_\mathbb{W}(E\cap\Sigma)} a(w)\dd (\psi^d\hel \mathbb{W}) (w) \] for any $E\subset \G$. We shall prove~\eqref{eq09071257} by applying Proposition~\ref{propFedDensity}, that is, we will show that $\Theta_{\psi^d}(\mu;o)=1$ for all $o\in\Sigma$. Fix $o\in\Sigma$ and assume without loss of generality that $o=0$. Then \begin{align*} \Theta_{\psi^d}(\mu;0) &= \lim_{r\to 0^+} \sup\left\{ \frac{\mu(E)}{\diam(E)^d} : 0\in E\in\scr E,\ \diam(E)<r\right\} . \end{align*} Using the continuity of the function $a$, we have \[ \Theta_{\psi^d}(\mu;0) = a(0) \lim_{r\to0^+} \sup\left\{ \dfrac{\psi^d(\pi_\mathbb{W}(E \cap \Sigma))}{(\diam E)^d} : 0\in E\in\scr E,\ \diam(E)<r \right\} . \] Since the projection $\pi_\mathbb{W}$ commutes with dilations, we have for $0<\eta\le 1$, \begin{align*} \psi^d(\pi_\mathbb{W}(\delta_\eta E\cap\Sigma))= \eta^d \psi^d(\pi_\mathbb{W}( E\cap \delta_{1/\eta} \Sigma)).
\end{align*} Thus \begin{multline*} \Theta_{\psi^d}(\mu;0) \\ = a(0) \lim_{r\to0^+} \sup\left\{ \psi^d(\pi_\mathbb{W}(E \cap \delta_{1/\eta}\Sigma)) : 0\in E\in\scr E,\ \diam(E)=1, 0<\eta <r \right \} . \end{multline*} We claim that \begin{equation} \begin{split}\label{eq:tangentdensity} \lim_{r \to 0^+} \sup\left\{ \psi^d(\pi_\mathbb{W}(E\cap\delta_{1/\eta}\Sigma)) : 0\in E\in\scr E,\ \diam(E)=1,\ 0<\eta<r\right\} \\ = \sup\{\psi^d(\pi_\mathbb{W}(E\cap T_0^H\Sigma)) : 0\in E\in \scr E,\ \diam(E)=1 \} . \end{split} \end{equation} As in Lemma~\ref{lem09151054}, we denote by $\phi_\eta:\delta_{1/\eta}A\to\mathbb{V}$ the function whose intrinsic graph is $\delta_{1/\eta} \Sigma$ and by $\phi_0:\mathbb{W}\to\mathbb{V}$ the one for $T_0^H\Sigma$; then, $\phi_\eta$ converges to $\phi_0$ uniformly on compact sets as $\eta\to0$. In particular, for every $\epsilon>0$ there is $r_\epsilon>0$ such that $\pi_\mathbb{W}(\cBall(0,1))\subset \delta_{1/\eta}A$ and $\rho(\phi_\eta(w),\phi_0(w))<\epsilon$ for all $w\in\pi_\mathbb{W}(\cBall(0,1))$ and $\eta\in(0,r_\epsilon)$. We start by proving that the left-hand side (LHS) of \eqref{eq:tangentdensity} is not greater than the right-hand side (RHS); we can assume ${\rm LHS}>0$. Fix $\epsilon>0$. Then there are $\eta\in(0,r_\epsilon)$ and $E$ such that $0\in E\in \scr E$, $\diam E= 1$ and $\psi^d(\pi_\mathbb{W}(E\cap\delta_{1/\eta}\Sigma)) > (1-\epsilon) {\rm LHS}$. Notice that $\pi_\mathbb{W}(E)\subset\pi_\mathbb{W}(\cBall(0,1))$ and that \[ \pi_\mathbb{W}(E\cap\delta_{1/\eta}\Sigma) \subset \pi_\mathbb{W}(\cBall(E,\epsilon)\cap T_0^H\Sigma). \] If $\psi^d$ is the Hausdorff measure, then $\tilde E:=\cBall(E,\epsilon)\in\scr E$ and $\diam\tilde E\le 1+2\epsilon$; if $\psi^d$ is the spherical measure, then $E=\cBall(x,1/2)$ for some $x\in\G$ and thus $\cBall(E,\epsilon)\subset \cBall(x,1/2+\epsilon) =: \tilde E \in \scr E$ and $\diam\tilde E\le 1+2\epsilon$.
Thus, by $d$-homogeneity of $\psi^d \hel \mathbb{W}$, we have \[ {\rm RHS} \ge \dfrac{\psi^d(\pi_\mathbb{W}(\tilde E\cap T_0^H\Sigma))}{(\diam \tilde E)^d} \ge \dfrac{1-\epsilon}{(1+2\epsilon)^d} {\rm LHS}. \] The inequality ${\rm RHS} \ge {\rm LHS}$ follows from the arbitrariness of $\epsilon$. For the converse inequality, fix $\epsilon>0$ and $\tilde E$ with $0\in \tilde E\in \scr E$ and $\psi^d(\pi_\mathbb{W}(\tilde E\cap T_0^H\Sigma)) \ge (1-\epsilon){\rm RHS}$. Notice that, for every $\eta\in(0,r_\epsilon)$, \[ \pi_\mathbb{W}(\delta_{1-2\epsilon}\tilde E\cap T_0^H\Sigma)\subset \pi_\mathbb{W}(\cBall(\delta_{1-2\epsilon} \tilde E,\epsilon)\cap \delta_{1/\eta}\Sigma) \] and that $\diam (\cBall(\delta_{1-2\epsilon} \tilde E,\epsilon))\le 1$. As before, we can find $\tilde E_\epsilon\in\mathscr E$ such that $\cBall(\delta_{1-2\epsilon} \tilde E,\epsilon)\subset \tilde E_\epsilon$ and $\diam \tilde E_\epsilon= 1$. Therefore, \begin{align*} {\rm LHS} \ge & \limsup_{\eta\to 0^+} \psi^d(\pi_\mathbb{W}(\tilde E_\epsilon\cap \delta_{1/\eta}\Sigma))\\ \ge & \limsup_{\eta\to 0^+} \psi^d(\pi_\mathbb{W}(\cBall(\delta_{1-2\epsilon} \tilde E,\epsilon)\cap \delta_{1/\eta}\Sigma))\\ \ge & \psi^d(\pi_\mathbb{W}(\delta_{1-2\epsilon}\tilde E\cap T_0^H\Sigma))\\ = & (1-2\epsilon)^d\psi^d(\pi_\mathbb{W}(\tilde E\cap T_0^H\Sigma))\\ \ge &(1-2\epsilon)^d(1-\epsilon){\rm RHS}. \end{align*} This concludes the proof of~\eqref{eq:tangentdensity}. We conclude that, by the definition of the area factor in Lemma~\ref{lem:areafactor}, \begin{align*} \Theta_{\psi^d}(\mu;0) &= \calA(T_0^H\Sigma) \sup\{\psi^d(\pi_\mathbb{W}(E\cap T_0^H\Sigma)) : 0\in E\in \scr E,\ \diam(E)=1 \} \\ &= \sup\{\psi^d(E\cap T_0^H\Sigma) : 0\in E\in \scr E,\ \diam(E)=1 \}\\ &=1 \end{align*} where the last equality follows from Lemma~\ref{lem09122022}. \end{proof} We conclude this section with some applications of Theorem~\ref{prop05161504}.
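Before turning to these applications, it may be helpful to record, as a plain consistency check, how the area formula reads in the elementary commutative model case; the following remark is only an illustration and plays no role in the sequel.
\begin{Remark}
Assume that $\G=\R^n$ is endowed with the Euclidean distance, so that $Q=n$, and consider the splitting $\mathbb{W}\cdot\mathbb{V}$ with $\mathbb{W}=\R^{n-1}\times\{0\}$ and $\mathbb{V}=\{0\}^{n-1}\times\R$, with $d=n-1$ and $\psi^{n-1}=\cal H^{n-1}$. If $\mathbb{P}$ is the graph of the linear map $w\mapsto\langle a,w\rangle$ for some $a\in\R^{n-1}$, then the graph map $\Phi_\mathbb{P}$ multiplies $(n-1)$-dimensional Hausdorff measure by $\sqrt{1+|a|^2}$, whence $\calA(\mathbb{P})=\sqrt{1+|a|^2}$. If $\Sigma$ is the graph of $\phi\in C^1(A;\R)$, then $T^H_{w\phi(w)}\Sigma$ is the graph of the linear map associated with $\nabla\phi(w)$, and Theorem~\ref{prop05161504} reduces to the classical area formula for $C^1$ graphs:
\[
\cal H^{n-1}(E\cap\Sigma) = \int_{\pi_\mathbb{W}(E\cap\Sigma)} \sqrt{1+|\nabla\phi(w)|^2} \dd \cal H^{n-1}(w) .
\]
\end{Remark}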
We start by proving the first part in the statement of Corollary \ref{cor:SvsH} about the relation between Hausdorff and spherical Hausdorff measures on rectifiable sets; the second part of Corollary \ref{cor:SvsH}, concerning the same application in the setting of the Heisenberg group endowed with a rotationally invariant distance, will be proved in Proposition~\ref{prop:Hncodim1}. \begin{proof}[Proof of Corollary~\ref{cor:SvsH}, first part] For $\mathbb{P}\in\scr T_{\G,\G'}$, let $\textfrak a(\mathbb{P})$ be the constant such that \begin{equation}\label{eq03231608} \cal S^{Q-m}\hel\mathbb{P} = \textfrak a(\mathbb{P}) \cal H^{Q-m}\hel\mathbb{P}, \end{equation} which exists because both measures are Haar measures. Now, let $\G=\mathbb{W}\cdot\mathbb{V}$ be a splitting and $\Sigma$ the $C^1_H$ intrinsic graph of $\phi:A\to\mathbb{V}$ with $A\subset\mathbb{W}$. Then, denoting by $\cal A_{\cal S}^\mathbb{W}$ and $\cal A_{\cal H}^\mathbb{W}$ the area factors for the spherical and Hausdorff measures with respect to $\mathbb{W}$, \begin{align*} \cal S^{Q-m}\hel\Sigma &= \cal A_{\cal S}^\mathbb{W}(T^H\Sigma) \Phi_\#(\cal S^{Q-m}\hel\mathbb{W}) \\ &= \textfrak a(\mathbb{W})\frac{\cal A_{\cal S}^\mathbb{W}(T^H\Sigma)}{\cal A_{\cal H}^\mathbb{W}(T^H\Sigma)} \cal A_{\cal H}^\mathbb{W}(T^H\Sigma) \Phi_\#(\cal H^{Q-m}\hel\mathbb{W}) \\ &= \textfrak a(\mathbb{W})\frac{\cal A_{\cal S}^\mathbb{W}(T^H\Sigma)}{\cal A_{\cal H}^\mathbb{W}(T^H\Sigma)} \cal H^{Q-m}\hel\Sigma . \end{align*} Since $\Sigma$ is arbitrary, we can apply this equality to $\Sigma=\mathbb{P}\in\cal W_\mathbb{W}$ to see that \[ \textfrak a(\mathbb{P}) = \textfrak a(\mathbb{W})\frac{\cal A_{\cal S}^\mathbb{W}(\mathbb{P})}{\cal A_{\cal H}^\mathbb{W}(\mathbb{P})} . \] Continuity of $\textfrak a$ and~\eqref{eq:AAAAA} are now clear.
\end{proof} \begin{Remark}\label{rem:isodiametric} The definition of $\textfrak a$ in~\eqref{eq03231608} together with Proposition~\ref{propFedDensity} (with $\mu=\cal S^{Q-m}$ and $\psi^d=\cal H^{Q-m}$) shows that the precise value of $\textfrak a(\mathbb{W})$ is related to the {\it isodiametric problem} on $\mathbb{W}$ of maximizing the measure of subsets of $\mathbb{W}$ with diameter at most 1 (see~\cite{RigotIsodiametric}). This is a very demanding task already in the Heisenberg group endowed with the Carnot-Carath\'eodory distance, see~\cite{LeonardiRigotV}. \end{Remark} We now prove a statement about weak* convergence of measures of level sets of $C^1_H$ functions; this will be used in the subsequent Corollary~\ref{cor:densityexistence} as well as later in the proof of the coarea formula. We note that the proof of Lemma~\ref{lem:weakstarblowup} relies on the area formula~\eqref{eq09071257}: we are not aware of any alternative strategy. \begin{Lemma}[Weak* convergence of blow-ups]\label{lem:weakstarblowup} Consider an open set $\Omega\subset\G$, a function $g\in C^1_H(\Omega;\G')$ and a point $o\in\Omega$ that is split-regular for $g$. Let $m$ denote the homogeneous dimension of $\G'$ and, for $b\in\G'$ and $\lambda>0$, define \[ \Sigma_{\lambda,b} :=\delta_{1/\lambda}(o^{-1}\{p\in\Omega:g(p)=g(o)\delta_\lambda b\}) = \{p\in\delta_{1/\lambda}(o^{-1}\Omega):g(o\delta_\lambda p)=g(o)\delta_\lambda b\}. \] Then, the weak* convergence of measures \[ \psi^{Q-m}\hel\Sigma_{\lambda,b} \:\overset{*}{\rightharpoonup}\: \psi^{Q-m}\hel \{p:\DH g(o)p=b\} \qquad\text{as }\lambda\to0^+ \] holds. Moreover, the convergence is uniform with respect to $b\in\G'$, i.e., for every $\chi\in C_c(\G)$ and every $\epsilon>0$ there is $\bar\lambda>0$ such that \[ \left| \int_{\Sigma_{\lambda,b}} \chi \dd\psi^{Q-m} - \int_{\{\DH g(o)=b\}} \chi \dd\psi^{Q-m} \right| < \epsilon \qquad\forall\:\lambda\in(0,\bar\lambda), b\in\G'.
\] \end{Lemma} \begin{proof} Up to replacing $g$ with the function $x\mapsto g(o)^{-1}g(ox)$, we can assume $o=0$ and $g(o)=0$; in particular, $\Sigma_{\lambda, b}=\delta_{1/\lambda}(\{p\in\Omega:g(p)=\delta_\lambda b\})$. Notice that, by Lemma~\ref{lem05161502}, $\Sigma_{\lambda,b}\neq\emptyset$ for all $b$ in a neighborhood of $0$ and $\lambda$ small enough. Possibly restricting $\Omega$, we can assume that there exist a splitting $\G=\mathbb{W}\cdot\mathbb{V}$, open sets $A\subset\mathbb{W}$, $B\subset\G'$ and a map $\varphi:A\times B\to\mathbb{V}$ such that the statements of Lemma~\ref{lem05161502} hold. If $p\in \Sigma_{\lambda,b}$, then there is $a\in A$ such that $p = \delta_{1/\lambda}(a\varphi(a,\delta_\lambda b)) = \delta_{1/\lambda} a \varphi_{\lambda}(\delta_{1/\lambda} a, b)$, where $\varphi_{\lambda}(a,b) := \delta_{1/\lambda}\varphi(\delta_\lambda a,\delta_\lambda b)$. In particular, $\Sigma_{\lambda,b}$ is the intrinsic graph of $\varphi_{\lambda}(\cdot,b):\delta_{1/\lambda}A\to\mathbb{V}$. Denoting by $\varphi_0:\mathbb W\times\G'\to\mathbb{V}$ the implicit function associated with $\DH g(0)$, we have by Lemma~\ref{lem09122015} that $\varphi_{\lambda}\to \varphi_0$ uniformly on compact subsets of $\mathbb{W}\times\G'$. Moreover, \begin{align*} \lim_{\lambda\to 0^+} T_{a\varphi_{\lambda}(a,b)}\Sigma_{\lambda,b} &= \lim_{\lambda\to 0^+} T_{\delta_{1/\lambda}(\delta_\lambda a\varphi(\delta_\lambda a,\delta_\lambda b))}\delta_{1/\lambda}\Sigma_{1,\delta_\lambda b} \\ &= \lim_{\lambda\to 0^+} T_{\delta_\lambda a\varphi(\delta_\lambda a,\delta_\lambda b)} \Sigma_{1,\delta_\lambda b} \\ &= \lim_{\lambda\to 0^+} \ker(\DH g(\delta_\lambda a\varphi(\delta_\lambda a,\delta_\lambda b))) \\ &= \ker(\DH g(0)) \in \scr W_\mathbb{V}, \end{align*} where the convergence is in the topology of $\scr W_\mathbb{V}$ and it is uniform when $(a,b)$ belongs to a compact set of $\mathbb{W}\times\G'$.
Therefore, using the area formula of Theorem~\ref{prop05161504}, for every $\chi\in C_c(\G)$ we have \begin{align} \lim_{\lambda\to0^+} \int_{\Sigma_{\lambda,b}} \chi \dd\psi^{Q-m} &= \lim_{\lambda\to0^+} \int_{\delta_{1/\lambda}A} \chi(a\varphi_{\lambda}(a,b))\, \calA (T_{a\varphi_{\lambda}(a,b)}\Sigma_{\lambda,b}) \dd \psi^{Q-m}(a) \nonumber\\ &= \int_{\mathbb W} \chi(a\varphi_0(a,b))\, \calA( \ker\DH g(0)) \dd \psi^{Q-m}(a) \label{eq:limiteCDD}\\ &= \int_{\{\DH g(0)=b\}} \chi \dd\psi^{Q-m} ,\nonumber \end{align} where the limit is uniform when $b$ belongs to a compact subset of $\G'$. Let us show that the convergence is actually uniform on $\G'$. Since $g$ is Lipschitz continuous in a neighborhood of $0$, there is a positive constant $C$ such that $\rho'(0,g(\delta_\lambda p)) \le C\lambda $ for all $p\in\spt\chi$ and $\lambda$ small enough. Therefore, if $\rho'(0,b)>C$, then $\spt\chi\cap \Sigma_{\lambda,b}=\emptyset$. Possibly increasing $C$, we can assume that $\spt\chi\cap \{\DH g(o)=b\} =\emptyset$ for all $b$ such that $\rho'(0,b)>C$. Therefore, the uniformity of the limit~\eqref{eq:limiteCDD} for $b\in\cBall_{\G'}(0,C)$ implies uniformity for all $b\in\G'$. This completes the proof. \end{proof} In the proof of the following corollary, we will need this simple lemma: \begin{Lemma}\label{lem03231638} Let $\theta$ be a Haar measure and $\rho$ a ho\-mo\-ge\-neous distance on a ho\-mo\-ge\-neous group $\mathbb{P}$. Then $\theta(\partial \Ball_\rho(0,R))=0$ for all $R>0$. \end{Lemma} \begin{proof} By homogeneity, there holds \begin{align*} \theta(\partial \Ball(0,R)) &= \lim_{\epsilon\to 0^+} \left( \theta( \Ball(0,R+\epsilon)) - \theta( \Ball(0,R-\epsilon)) \right)\\ &= \theta( \Ball(0,1)) \lim_{\epsilon\to 0^+} ((R+\epsilon)^{\dim_H\mathbb{P}}- (R-\epsilon)^{\dim_H\mathbb{P}}) =0. \end{align*} \end{proof} \begin{Corollary}\label{cor:densityexistence} There exists a continuous function $\textfrak d:\scr T_{\G,\G'}\to(0,+\infty)$ with the following property.
If $R\subset\G$ is a $(\G;\G')$-rectifiable set and $Q,m$ denote the homogeneous dimensions of $\G,\G'$, respectively, then \begin{equation}\label{eq:density} \lim_{r\to 0^+} \frac{\psi^{Q-m}(R\cap\Ball(p,r))}{r^{Q-m}}=\textfrak d(T^H_p R)\qquad\text{for $\psi^{Q-m}$-a.e.~}p\in R. \end{equation} Moreover, if $R$ is a $C^1_H$ submanifold, then the equality in~\eqref{eq:density} holds at every $p\in R$. \end{Corollary} Clearly, $\textfrak d$ depends on whether the measure $\psi^{Q-m}$ under consideration is the Hausdorff or the spherical one. \begin{proof}[Proof of Corollary~\ref{cor:densityexistence}] Let $\Sigma$ be a $C^1_H$ submanifold and let $\mu:=\psi^{Q-m}\hel(R\setminus\Sigma)$; Theorem~\ref{thm:Federer} (ii) implies that \[ \Theta_{\psi^{Q-m}}(\mu;p)=0\qquad \text{for $\psi^{Q-m}$-a.e.~$p\in\Sigma$}, \] hence \[ \lim_{r\to 0^+} \frac{\psi^{Q-m}((R\setminus\Sigma)\cap\Ball(p,r))}{r^{Q-m}}=0\qquad \text{for $\psi^{Q-m}$-a.e.~$p\in R\cap\Sigma$}. \] A similar argument, applied to $\mu:=\psi^{Q-m}\hel(\Sigma\setminus R)$, gives \[ \lim_{r\to 0^+} \frac{\psi^{Q-m}((\Sigma\setminus R)\cap\Ball(p,r))}{r^{Q-m}}=0\qquad \text{for $\psi^{Q-m}$-a.e.~$p\in R\cap\Sigma$}, \] i.e., \[ \lim_{r\to 0^+} \frac{\psi^{Q-m}(R\cap\Ball(p,r))}{r^{Q-m}}= \lim_{r\to 0^+} \frac{\psi^{Q-m}(\Sigma\cap\Ball(p,r))}{r^{Q-m}}\qquad \text{for $\psi^{Q-m}$-a.e.~$p\in R\cap\Sigma$} \] provided the second limit exists. In particular, it is enough to prove the statement in case $R$ is a $C^1_H$ submanifold. Let $p\in R$ be fixed; for $\lambda>0$ define $R_\lambda:=\delta_{1/\lambda}(p^{-1}R)$; by Lemma~\ref{lem:weakstarblowup}, $\psi^{Q-m}\hel R_\lambda \:\overset{*}{\rightharpoonup}\: \psi^{Q-m}\hel T^H_p R$ as $\lambda\to0^+$.
Since $\psi^{Q-m}(T^H_pR \cap\partial\Ball(0,1))=0$, using~\cite[Proposition 1.62 (b)]{AmFuPa2000} and Lemma~\ref{lem03231638}, one gets \[ \lim_{r\to 0^+} \frac{\psi^{Q-m}(R\cap\Ball(p,r))}{r^{Q-m}} = \lim_{\lambda\to 0^+} \psi^{Q-m}(R_\lambda\cap\Ball(0,1)) = \psi^{Q-m}(T^H_pR \cap\Ball(0,1)). \] Statement~\eqref{eq:density} follows on setting $\textfrak d(\mathbb{P}):=\psi^{Q-m}(\mathbb{P} \cap\Ball(0,1))$ for every $\mathbb{P}\in\scr T_{\G,\G'}$. It remains only to prove the continuity of $\textfrak d$ at every fixed $\mathbb{W}\in\scr T_{\G,\G'}$. Every $\mathbb{P}\in\scr T_{\G,\G'}$ in a suitable neighborhood of $\mathbb{W}$ is an intrinsic graph over $\mathbb{W}$. Denoting by $\pi_{\mathbb{W}}:\G\to\mathbb{W}$ the projection defined in \eqref{eq:defproiezioni}, we have by Lemma~\ref{lem:areafactor} that \[ \textfrak d(\mathbb{P})=\psi^{Q-m}(\mathbb{P} \cap\Ball(0,1)) = \calA(\mathbb{P})\psi^{Q-m}(\pi_\mathbb{W}(\mathbb{P} \cap\Ball(0,1))), \] hence we only have to prove the continuity of $\mathbb{P}\mapsto \psi^{Q-m}(\pi_\mathbb{W}(\mathbb{P} \cap\Ball(0,1)))$ at $\mathbb{W}$. Let $\epsilon>0$ be fixed; then, if $\mathbb{P}$ is close enough to $\mathbb{W}$, one has \[ \mathbb{W}\cap\Ball(0,1-\epsilon)\subset \pi_\mathbb{W}(\mathbb{P} \cap\Ball(0,1))\subset \mathbb{W}\cap\Ball(0,1+\epsilon) \] and the continuity of $\mathbb{P}\mapsto \psi^{Q-m}(\pi_\mathbb{W}(\mathbb{P} \cap\Ball(0,1)))$ at $\mathbb{W}$ follows. \end{proof} We conclude this section with the following result, similar in spirit to Lemma~\ref{lem:weakstarblowup}. It will be used in the proof of Lemma~\ref{lem:coareacont}. \begin{Corollary}\label{corollaryreplacement} Suppose that, for $n\in \N$, $L_n:\G\to \G'$ is a homogeneous morphism and that the $L_n$ converge to a surjective homogeneous morphism $L:\G\to \G'$ such that $\ker L$ splits $\G$.
Then the following weak* convergence of measures holds: \[ \psi^{Q-m}\hel \{L_n= s\} \overset{*}{\rightharpoonup} \psi^{Q-m}\hel \{L=s\} , \] where $Q$ is the homogeneous dimension of $\G$ and $m$ is the homogeneous dimension of $\G'$. More precisely, given a function $\chi \in C_c(\G)$ and $\epsilon>0$, there exists $N\in\N$ such that for all $n\ge N$ and $s\in \G'$ \[ \left \vert \int_{\{L_n= s\}} \chi \dd \psi^{Q-m} - \int_{ \{L= s\}} \chi \dd \psi^{Q-m}\right \vert <\epsilon. \] \end{Corollary} \begin{proof} Set $\mathbb{W}:=\ker L$ and let $\G=\mathbb{W}\cdot\mathbb{V}$ be a splitting. Recall that $\mathbb{V}$ and $\G'$ are also vector spaces, that the morphisms $L_n$ are linear maps and that $L|_\mathbb{V}:\mathbb{V}\to\G'$ is an isomorphism. Therefore, there exists $N\in\N$ such that $L_n|_{\mathbb{V}}$ is an isomorphism for all $n\ge N$. For all such $n$ and $s\in\G'$, define $\phi_n^s:\mathbb{W}\to\mathbb{V}$ by \[ \phi_n^s(w) := L_n|_{\mathbb{V}}^{-1}(L_n(w)^{-1}s) . \] Notice that $\{L_n=s\}$ is the intrinsic graph of $\phi_n^s$. Let $\phi_\infty^s:\mathbb{W}\to\mathbb{V}$ be the function whose intrinsic graph is $\{L=s\}$: it is clear that $\phi_n^s(w)\to \phi_\infty^s(w)$ uniformly on compact sets in the variables $(w,s)\in\mathbb{W}\times\G'$. Fix $\chi\in C_c(\G)$. Then \[ \int_{\{L_n=s\}} \chi \dd \psi^{Q-m} = \cal A(\ker L_n) \int_\mathbb{W} \chi(w\phi_n^s(w)) \dd\psi^{Q-m}(w) \] where the functions $\tilde\chi_n:(s,w)\mapsto \chi(w\phi_n^s(w))$ are continuous and uniformly converge to $(s,w)\mapsto\chi(wL|_{\mathbb{V}}^{-1}(s))$ as $n\to\infty$. Moreover, $\cal A(\ker L_n)\to 1$. This completes the proof. \end{proof} \section{The coarea formula} \subsection{Set-up} Let $\G$ be a Carnot group, $\rho$ a homogeneous distance on $\G$ and $Q$ the homogeneous dimension of $\G$.
Let also $\mathbb{M}$, $\mathbb{L}$ and $\mathbb{K}$ be graded groups such that $\mathbb{L}\mathbb{M} = \mathbb{K}$ and $\mathbb{M}\cap\mathbb{L}=\{0\}$; let $m$ and $\ell$ be the respective homogeneous dimensions of $\mathbb{M}$ and $\mathbb{L}$. Our aim is to prove Theorem~\ref{thm:coarea}, which by Proposition~\ref{propFedDensity} will be a consequence of the following Theorem~\ref{thm:coarea2}: here, $\calC(\mathbb{P}, L)$ denotes the \emph{coarea factor} corresponding to a homogeneous subgroup $\mathbb{P}$ of $\G$ and a homogeneous morphism $L:\G\to \mathbb{L}$; the coarea factor is going to be defined later in Proposition~\ref{prop:coarea-factor}. The function $\calC(\mathbb{P},L)$ is continuous in $\mathbb{P}$ and $L$, see Lemma~\ref{lem:coareacont}. \begin{Theorem}\label{thm:coarea2} Let $\Omega\subset\G$ be open, let $f\in C^1_H(\Omega;\mathbb{M})$ and assume that all points in $\Omega$ are split-regular for $f$, so that $\Sigma:=\{p\in\Omega:f(p)=0\}$ is a $C^1_H$ submanifold. Consider a function $u:\Omega\to\mathbb{L}$ such that $uf\in C^1_H(\Omega;\mathbb{K})$ and assume that \begin{equation*} \text{for $\psi^{Q-m}$-a.e.~$p\in\Sigma$,}\quad \left\{ \begin{array}{l} \text{either $\DH (uf)_p|_{T^H_p\Sigma}$ is not surjective on $\mathbb{L}$,}\\ \text{or $p$ is split-regular for $uf$.} \end{array} \right. \end{equation*} For $s\in\mathbb{L}$ set $\Sigma^s:=\Sigma\cap u^{-1}(s)$.
Then \begin{itemize} \item[(i)] for every Borel set $E\subset \Omega$ the function $\mathbb{L}\ni s\mapsto\psi^{Q-m-\ell}(E\cap \Sigma^s)\in[0,+\infty]$ is $\psi^\ell$-measurable; \item[(ii)] the function \begin{equation}\label{eq:defmuSigmau} \mu_{\Sigma,u}(E) \defeq \int_{\mathbb{L}} \psi^{Q-m-\ell}(E\cap \Sigma^s) \dd \psi^\ell (s), \end{equation} defined on Borel sets, is a locally finite measure; \item[(iii)] the Radon--Nikodym density $\Theta$ of $\mu_{\Sigma,u}$ with respect to $\psi^{Q-m}\hel\Sigma$ is locally bounded and \begin{equation}\label{eq:coarealocal} \Theta(p)= \calC (T^H_p\Sigma,\DH (uf)_p)\qquad\text{for $\psi^{Q-m}$-a.e.~}p\in\Sigma. \end{equation} \end{itemize} \end{Theorem} \begin{Remark}\label{rem:olyonu} Let us prove that the differential $\DH (uf)_p|_{T^H_p\Sigma}$ depends only on the restriction of $u$ to $\Sigma$ and, moreover, that it does not depend on the choice of the defining function $f$ for $\Sigma$. In particular, in view of Proposition~\ref{prop:coarea-factor} also the coarea factor $\calC (T^H_p\Sigma,\DH (uf)_p)$ depends only on the restriction of $u$ to $\Sigma$. Let $v\in T^H_p\Sigma$; then there exist sequences $r_j\to 0^+$ and $q_j\to p$ such that $q_j\in\Sigma$ and $v=\lim_{j\to\infty}\delta_{1/r_j} (p^{-1}q_j)$. In particular, $\|q_j^{-1}p\delta_{r_j}v\|_\rho=o(r_j)$ and, by Lemma~\ref{lem10051109}, \[ \lim_{j\to\infty}\delta_{1/r_j}\left( (uf)(q_j)^{-1}(uf)(p\delta_{r_j}v)\right)=0. \] Since $f|_\Sigma=0$ we obtain \begin{align*} \DH (uf)_p(v) &= \lim_{j\to\infty}\delta_{1/r_j}\left( (uf)(p)^{-1}(uf)(p\delta_{r_j}v)\right)\\ &= \lim_{j\to\infty}\delta_{1/r_j}\left( (uf)(p)^{-1}(uf)(q_j)\right)\\ &= \lim_{j\to\infty}\delta_{1/r_j}\left( u(p)^{-1}u(q_j)\right). \end{align*} This proves the claim. \end{Remark} The proof of Theorem~\ref{thm:coarea2} is divided into several steps.
We start by proving that $\mu_{\Sigma,u}$ is a well defined locally finite measure concentrated on $\Sigma$; this uses an abstract coarea inequality. Then we consider the linear case in order to apply a blow-up argument; in doing so, we will define the coarea factor. We finally consider separately ``good points'', i.e., those where $\DH (uf)|_{T^H\Sigma}$ has full rank, and ``bad points'', where $\DH (uf)\vert_{T^H \Sigma}$ is not surjective: at good points the blow-up argument applies, while the set of bad points is negligible by an argument similar to the proof of the coarea inequality. \subsection{Coarea Inequality} In this section we prove Proposition~\ref{prop07171752}, which is a consequence of the following Lemma~\ref{lem-coarea-ineq}; the latter is basically \cite[Theorem~2.10.25]{FedererGMT}, with a slightly different use of the Lipschitz constant. See also~\cite[Theorem 1.4]{MagnaniCoareaInequality} and~\cite[Lemma~3.5]{EvansGariepyRevised}. \begin{Lemma}[Abstract Coarea Inequality]\label{lem-coarea-ineq} Let $(X,d_X)$ and $(Y,d_Y)$ be boundedly compact metric spaces and assume that there exist $\beta\ge0$ and $C\ge 0$ such that \[ \cal H^\beta(E) \le C \diam(E)^\beta\qquad \text{for all } E\subset Y, \] where $\cal H^\beta$ is the $\beta$-dimensional Hausdorff measure on $(Y,d_Y)$. Let $u:X\to Y$ be a locally Lipschitz function and for $\epsilon>0$ consider \[ \Lip_\epsilon(u) := \sup\left\{\frac{d_Y(u(x),u(y))}{d_X(x,y)} : 0<d_X(x,y)<\epsilon \right\},\qquad \Lip_0(u) := \lim_{\epsilon\to0^+} \Lip_\epsilon(u). \] Then, for every $\alpha\ge\beta$ and every Borel set $A\subset X$ with $\cal H^\alpha(A)<+\infty$, the function $y\mapsto \cal H^{\alpha-\beta}(u^{-1}(y)\cap A)$ is $\cal H^\beta$-measurable and \begin{equation}\nonumber \int_Y \cal H^{\alpha-\beta}(u^{-1}(y)\cap A) \dd \cal H^\beta(y) \le C \Lip_0(u)^\beta \cal H^\alpha(A) .
\end{equation} Moreover, the set function $A\mapsto \int_Y \cal H^{\alpha-\beta}(u^{-1}(y)\cap A) \dd \cal H^\beta(y)$ is a Borel measure. \end{Lemma} The proof is standard. In our setting, the ``abstract'' coarea inequality translates as follows. \begin{Proposition}[Coarea inequality]\label{prop07171752} Under the assumptions and notation of Theorem~\ref{thm:coarea2}, one has \begin{itemize} \item[(i)] $u|_\Sigma$ is locally Lipschitz continuous; \item[(ii)] for every Borel set $E\subset\G$, the function $\mathbb{L}\ni s\mapsto\psi^{Q-m-\ell}(E\cap \Sigma^s)\in[0,+\infty]$ is $\psi^\ell$-measurable; \item[(iii)] for every compact $K\subset\Sigma$, the coarea inequality \[ \mu_{\Sigma,u}(K) \le C \Lip(u|_K)^\ell \psi^{Q-m}(K) \] holds for a suitable $C=C(\mathbb{L})>0$; \item[(iv)] $\mu_{\Sigma,u}$ is a Borel measure on $\Omega$, $\mu_{\Sigma,u}\ll\psi^{Q-m}\hel\Sigma$ with locally bounded density. \end{itemize} \end{Proposition} \begin{proof} The local Lipschitz continuity of $u|_{\Sigma}$ follows from Lemma~\ref{lem10051109} because of the assumption $uf\in C^1_H(\Omega;\mathbb{K})$ and the fact that $u|_\Sigma=uf|_\Sigma$. As already noticed in the proof of Lemma~\ref{lem-coarea-ineq}, statement (ii) follows from~\cite[2.10.26]{FedererGMT}; the careful reader will observe that~\cite[2.10.26]{FedererGMT} is stated only when $\psi^\ell=\cal H^\ell$, but its proof easily adapts to the case $\psi^\ell=\cal S^\ell$. Concerning statement (iii), we notice that $\psi^{Q-m}(K)<\infty$ because the measure $\psi^{Q-m}\hel\Sigma$ is locally finite by Lemma~\ref{lem09151054} and the area formula (Theorem~\ref{prop05161504}): in particular, one can apply Lemma~\ref{lem-coarea-ineq}. Statement (iv) is now a consequence of statement (iii) and the Radon--Nikodym Theorem, which can be applied because $\psi^{Q-m}\hel\Sigma$ is doubling by~\eqref{eq:Ahlfors} (see, e.g.,~\cite{rigot2018differentiation}).
\end{proof} \subsection{Linear case: definition of the coarea factor} In the following Proposition~\ref{prop:coarea-factor} we prove the coarea formula in a ``linear'' case, and in doing so we will introduce the coarea factor. We are going to consider a homogeneous subgroup $\mathbb{P}$ of $\G$ that is also a $C^1_H$ submanifold. We observe that this implies that $\mathbb{P}$ coincides with its homogeneous tangent subgroup; in particular, $\mathbb{P}$ is normal and it is the kernel of a surjective homogeneous morphism on $\G$. \begin{Proposition}[Definition of coarea factor]\label{prop:coarea-factor} Let $\mathbb{P}$ be a homogeneous subgroup of $\G$ that is a $C^1_H$ submanifold of $\G$ and let $L:\mathbb{P}\to\mathbb{L}$ be a homogeneous morphism. Let $\mu_{\mathbb{P},L}$ be as in~\eqref{eq:defmuSigmau}, namely, \[ \mu_{\mathbb{P},L} \defeq \int_{\mathbb{L}} \psi^{Q-m-\ell}\hel L^{-1}(s) \dd \psi^\ell(s). \] Then, $\mu_{\mathbb{P},L}$ is either null or a Haar measure on $\mathbb{P}$. In particular, there exists $\cal C (\mathbb{P},L)\ge0$, which we call \emph{coarea factor}, such that \begin{equation}\label{eq07171719} \mu_{\mathbb{P},L} = \calC(\mathbb{P},L)\ \psi^{Q-m} \hel \mathbb{P}. \end{equation} Moreover, $\cal C(\mathbb{P},L)>0$ if and only if $L(\mathbb{P})=\mathbb{L}$. \end{Proposition} \begin{proof} Since $L$ is Lipschitz on $\mathbb{P}$, we can apply Proposition~\ref{prop07171752} and obtain that $\mu_{\mathbb{P},L}$ is a well defined Borel regular measure that is also absolutely continuous with respect to $\psi^{Q-m}\hel \mathbb{P}$ and finite on bounded sets. If $L(\mathbb{P})\neq \mathbb{L}$, then $\mu_{\mathbb{P},L}=0$ and thus~\eqref{eq07171719} holds with $\cal C(\mathbb{P},L)=0$. If $L(\mathbb{P})=\mathbb{L}$, then we will show that $\mu_{\mathbb{P},L}$ is a Haar measure on $\mathbb{P}$, which is equivalent to \eqref{eq07171719} with $\cal C(\mathbb{P},L)> 0$. For $s\in\mathbb{L}$ let $\mathbb{P}^s:=L^{-1}(s)$. 
Since $\mathbb{P}^s$ is a coset of $\mathbb{P}^0$, $\psi^{Q-m-\ell}\hel \mathbb{P}^s$ is the push-forward of $\psi^{Q-m-\ell}\hel \mathbb{P}^0$ (which is a Haar measure on $\mathbb{P}^0$) via a left translation. It follows that $\mu_{\mathbb{P},L}$ is nonzero on nonempty open subsets of $\mathbb{P}$. \noindent We need only to show that $\mu_{\mathbb{P},L}$ is left-invariant: let $p\in \mathbb{P}$ and choose a Borel set $A\subset \mathbb{P}$. For every $s\in\mathbb{L}$ we have $p^{-1} \mathbb{P}^s = \{q\in \mathbb{P}: L(pq) = s\} = \mathbb{P}^{L(p)^{-1}s}$. By left invariance of $\psi^{Q-m-\ell}$ and $\psi^\ell$, we have \begin{align*} \mu_{\mathbb{P},L}(p A) &= \int_{\mathbb{L}} \psi^{Q-m-\ell}((pA)\cap \mathbb{P}^s)\dd \psi^{\ell}(s) \\ &= \int_{\mathbb{L}} \psi^{Q-m-\ell}(p(A\cap \mathbb{P}^{L(p)^{-1}s}))\dd \psi^{\ell}(s) \\ &= \int_{\mathbb{L}} \psi^{Q-m-\ell}(A\cap \mathbb{P}^{L(p)^{-1}s})\dd \psi^{\ell}(s) \\ &= \mu_{\mathbb{P},L}(A) \end{align*} as wished. \end{proof} We now prove a continuity property for the coarea factor $\cal C(\mathbb{P},L)$. We agree that, when $L:\G\to\mathbb{L}$ is defined on the whole $\G$, the symbol $\cal C(\mathbb{P},L)$ stands for $\cal C(\mathbb{P},L|_\mathbb{P})$. \begin{Lemma}\label{lem:coareacont} Assume that, for $n\in\N$, surjective homogeneous morphisms $F,F_n:\G\to\mathbb{M}$ and homogeneous maps $L,L_n:\G\to\mathbb{L}$ are given in such a way that \begin{enumerate}[label=(\roman*)] \item $LF$ and $L_nF_n$ are homogeneous morphisms $\G\to\mathbb{K}$; \item $\ker F$ and $\ker(LF)$ split $\G$; \item $F_n\to F$ and $L_n\to L$ on $\G$ as $n\to\infty$. \end{enumerate} Then $\cal C(\ker F_n,L_n)\to\cal C(\ker F,L)$ as $n\to\infty$. \end{Lemma} \begin{proof} Set $\mathbb{P}_n:=\ker F_n$ and $\mathbb{P}:=\ker F$; let $\mathbb{V}$ be a complementary subgroup to $\mathbb{P}$.
Then, for large enough $n$, $\mathbb{P}_n\cdot\mathbb{V}$ is a splitting of $\G$ and the subgroup $\mathbb{P}_n$ is the intrinsic graph $\mathbb{P}\to\mathbb{V}$ of a homogeneous map $\phi_n\in C^1_{\mathbb{P},\mathbb{V}}(\mathbb{P})$. Observe that $\phi_n\to 0$ locally uniformly on $\mathbb{P}$ because $\mathbb{P}_n\to\mathbb{P}$. This, together with Lemma~\ref{lem:areafactor} and the continuity of the area factor proved therein, implies that $\psi^{Q-m}\hel \mathbb{P}_n$ converges weakly* to $\psi^{Q-m}\hel \mathbb{P}$. Therefore, by Proposition~\ref{prop:coarea-factor} we have only to show that \begin{equation}\label{eq:weakconvint} \mu_{\mathbb{P}_n,L_n} \stackrel{*}{\rightharpoonup} \mu_{\mathbb{P},L}. \end{equation} If $L|_\mathbb{P}$ is surjective, then $LF$ is also surjective. Since $\ker(LF)$ splits $\G$, \eqref{eq:weakconvint} follows from Corollary~\ref{corollaryreplacement}. If $L\vert_\mathbb{P}$ is not surjective, we can without loss of generality suppose that $L_n\vert_{\mathbb{P}_n}$ is surjective for all $n$. By homogeneity, it suffices to prove that $\mu_{\mathbb{P}_n,L_n}(\cBall_\G(0,1)) \to 0$. We have \begin{align*} \mu_{\mathbb{P}_n,L_n}(\cBall_\G(0,1))&=\int_{\mathbb{L}} \psi^{Q-m-\ell}(\mathbb{P}_n\cap L_n^{-1}(s)\cap\cBall_\G(0,1))\dd \psi^{\ell}(s)\\ &\le \psi^{\ell}(L_n(\cBall_\G(0,1)\cap \mathbb{P}_n)) \sup_{s\in\mathbb{L}} \psi^{Q-m-\ell}(\cBall_\G(0,1)\cap \mathbb{P}_n\cap L_n^{-1}(s))\\ &\le \psi^{\ell}(L_n(\cBall_\G(0,1)\cap \mathbb{P}_n)) \end{align*} where the last inequality holds because, considering $p\in\mathbb{P}_n$ such that $L_n(p)=s^{-1}$, we have by Lemma~\ref{lem09122022} \[ \psi^{Q-m-\ell}(\cBall_\G(0,1)\cap \mathbb{P}_n\cap L_n^{-1}(s))=\psi^{Q-m-\ell}(\cBall_\G(p,1)\cap \mathbb{P}_n\cap L_n^{-1}(0))\le 2^{Q-m-\ell}.
\] Thus, we have to prove that $\psi^{\ell}(L_n(\cBall_\G(0,1)\cap \mathbb{P}_n))\to0$; notice that $L_n(\cBall_\G(0,1)\cap \mathbb{P}_n)$ converges in the Hausdorff distance to $L(\cBall_\G(0,1)\cap \mathbb{P})$, which is a compact set contained in a strict subspace of $\mathbb{L}$. As $\psi^\ell$ is a Haar measure on $\mathbb{L}$, we have $\psi^{\ell}(L_n(\cBall_\G(0,1)\cap\mathbb{P}_n))\to 0$ as $n\to \infty$. \end{proof} \subsection{Good Points}\label{sec:goodpoints} By ``good'' point $o\in\Sigma$ we mean a point where the differential $\DH (uf)|_{T_o^H\Sigma}$ is surjective onto $\mathbb{L}$; the following Proposition~\ref{prop07251509} shows that the Radon--Nikodym density $\Theta$ of $\mu_{\Sigma,u}$ with respect to $\psi^{Q-m}\hel\Sigma$ can be explicitly computed at its Lebesgue points and coincides with the coarea factor. Notice that almost every $o\in\Sigma$ is a Lebesgue point for $\Theta$, in the sense that \begin{equation}\label{eq:RNo} \lim_{r\to 0^+}\aveint{\Sigma\cap\Ball(o,r)}{} \left|\Theta - \Theta(o)\right|\dd \psi^{Q-m}=0. \end{equation} \begin{Proposition}\label{prop07251509} Under the assumptions and notation of Theorem~\ref{thm:coarea2}, one has that the equality \begin{equation}\label{eq:Theta=coarea} \Theta(o) = \calC(T^H_o\Sigma,\DH (uf)(o)) \end{equation} holds for $\psi^{Q-m}$-a.e.~$o\in\Sigma$ such that $\DH (uf)\vert_{T^H_o \Sigma}$ is onto $\mathbb{L}$. \end{Proposition} \begin{proof} We are going to prove~\eqref{eq:Theta=coarea} for all $o\in\Sigma$ such that $\DH (uf)\vert_{T^H_o \Sigma}$ is onto $\mathbb{L}$, $o$ is split-regular for $uf$ and~\eqref{eq:RNo} holds; up to left translations, we may assume that $o=0$ and $u(0)=0$.
For every Borel set $A\subset\G$ and $\lambda>0$ we have, on the one hand \begin{align*} \mu_{\Sigma,u}(\delta_\lambda A) &= \int_{\Sigma\cap\delta_\lambda A} \Theta(p) \dd \psi^{Q-m}(p) \\ &= \lambda^{Q-m} \int_{(\delta_{1/\lambda}\Sigma)\cap A} \Theta(\delta_\lambda p) \dd \psi^{Q-m}(p) \\ &= \lambda^{Q-m} (\Theta\circ\delta_\lambda) \psi^{Q-m}\hel\delta_{1/\lambda}\Sigma (A) . \end{align*} On the other hand, \begin{align*} \mu_{\Sigma,u}(\delta_\lambda A) &= \int_{\mathbb{L}} \psi^{Q-m-\ell}((\delta_\lambda A)\cap \Sigma \cap \{u=s\}) \dd \psi^\ell(s) \\ &= \int_{\mathbb{L}} \psi^{Q-m-\ell}(\delta_\lambda (A\cap \delta_{1/\lambda}\Sigma \cap \{u_\lambda=\delta_{1/\lambda}s\})) \dd \psi^\ell(s) \\ &= \lambda^{Q-m} \int_{\mathbb{L}} \psi^{Q-m-\ell}(A\cap \delta_{1/\lambda}\Sigma \cap \{u_\lambda=t\}) \dd \psi^\ell(t) , \end{align*} where $u_\lambda(p) := \delta_{1/\lambda}u(\delta_\lambda p)$. Therefore, one has the equality of measures \begin{equation}\label{eq09181448} (\Theta\circ\delta_\lambda) \psi^{Q-m}\hel\delta_{1/\lambda}\Sigma = \int_{\mathbb{L}} \psi^{Q-m-\ell}\hel (\delta_{1/\lambda}\Sigma \cap \{u_\lambda=b\}) \dd \psi^\ell(b) . \end{equation} We now compute the weak* limits as $\lambda\to 0^+$ of each side of~\eqref{eq09181448}.
Concerning the left-hand side, for every $\chi\in C_c(\G)$ one has \begin{multline*} \int_{\delta_{1/\lambda}\Sigma } \chi(p) \Theta(\delta_\lambda p) \dd\psi^{Q-m}(p)\\ = \int_{\delta_{1/\lambda}\Sigma } \chi(p) (\Theta(\delta_\lambda p)-\Theta(0)) \dd\psi^{Q-m}(p) + \Theta(0) \int_{\delta_{1/\lambda}\Sigma } \chi(p) \dd\psi^{Q-m}(p). \end{multline*} Let $r>0$ be such that $\spt\chi\subset\Ball(0,r)$; then \begin{align*} &\hspace{-2cm}\left| \int_{\delta_{1/\lambda}\Sigma } \chi(p) (\Theta(\delta_\lambda p)-\Theta(0)) \dd\psi^{Q-m} (p)\right| \\ \le& \|\chi\|_{\infty} \int_{\Ball(0,r)\cap\delta_{1/\lambda}\Sigma } |\Theta(\delta_\lambda p)-\Theta(0)| \dd\psi^{Q-m}(p) \\ =& \|\chi\|_{\infty} \lambda^{m-Q} \int_{\Ball(0,\lambda r)\cap\Sigma} |\Theta(p)-\Theta(0)| \dd\psi^{Q-m}(p) \\ \le& C\:\|\chi\|_{\infty} \aveint{\Ball(0,\lambda r)\cap\Sigma}{} |\Theta(p)-\Theta(0)| \dd\psi^{Q-m}(p) \end{align*} for a suitable positive $C$. Exploiting~\eqref{eq:RNo} one gets \begin{equation}\label{eq:LHS} \begin{split} \lim_{\lambda\to0^+}\int_{\delta_{1/\lambda}\Sigma } \chi(p)\Theta(\delta_\lambda p) \dd\psi^{Q-m}(p) &=\Theta(0)\lim_{\lambda\to0^+}\int_{\delta_{1/\lambda}\Sigma } \chi(p) \dd\psi^{Q-m}(p)\\ &= \Theta(0)\int_{T^H_0\Sigma} \chi(p) \dd\psi^{Q-m}(p), \end{split} \end{equation} the last equality following from Lemma~\ref{lem:weakstarblowup}. We now consider the right-hand side of~\eqref{eq09181448}; setting $(uf)_\lambda(p):=\delta_{1/\lambda}((uf)(\delta_\lambda p))$, for every $\chi\in C_c(\G)$ one has \begin{equation}\nonumber \begin{split} \lim_{\lambda\to 0^+}\int_\mathbb{L} \int_{\delta_{1/\lambda}\Sigma\cap \{u_\lambda=b\}}\chi\dd\psi^{Q-m-\ell}\dd\psi^\ell(b) = & \lim_{\lambda\to 0^+}\int_\mathbb{L} \int_{\{(uf)_\lambda=b\}}\chi\dd\psi^{Q-m-\ell}\dd\psi^\ell(b)\\ = & \int_\mathbb{L} \int_{ \{\DH (uf)(0)=b\} }\chi\dd\psi^{Q-m-\ell}\dd\psi^\ell(b), \end{split} \end{equation} where we used Lemma~\ref{lem:weakstarblowup}.
The definition of coarea factor then gives \begin{equation}\label{eq:RHS} \begin{split} \lim_{\lambda\to 0^+}\int_\mathbb{L} \int_{\delta_{1/\lambda}\Sigma\cap \{u_\lambda=b\}}\chi\dd\psi^{Q-m-\ell}\dd\psi^\ell(b)= & \int \chi\dd\mu_{T^H_0\Sigma,\DH (uf)(0)}\\ &\hspace{-1cm}= \calC(T^H_0\Sigma,\DH (uf)(0))\int_{T^H_0\Sigma}\chi\dd\psi^{Q-m} \end{split} \end{equation} The statement is now a consequence of~\eqref{eq09181448},~\eqref{eq:LHS} and~\eqref{eq:RHS}. \end{proof} \subsection{Bad points}\label{sec:badpoints} In contrast with ``good'' ones, ``bad'' points are those points $p$ where $(\DH (uf))(p)|_{T^H_p\Sigma}$ is not surjective. The following lemma states that they are $\mu_{\Sigma,u}$-negligible: a posteriori, this is consistent with the fact that, by definition, the coarea factor is null at such points. \begin{Lemma}\label{lem07241228} Under the assumptions and notation of Theorem~\ref{thm:coarea2}, one has \[ \mu_{\Sigma,u}(\{p\in\Sigma:\DH (uf)(p)|_{T^H_p\Sigma}\text{ is not onto }\mathbb{L}\})=0. \] \end{Lemma} \begin{proof} It is enough to show that $\mu_{\Sigma,u}(E)=0$ for an arbitrary compact subset $E$ of $\{p\in\Sigma:\DH (uf)(p)|_{T^H_p\Sigma}\text{ is not onto }\mathbb{L}\}$, which is closed. We have $\psi^{Q-m}(E)<\infty$. Fix $\epsilon>0$; by the compactness of $E$ and the locally uniform differentiability of both $f$ and $uf$, there exists $r>0$ such that $\cBall(E,r)\subset\Omega$ and, for all $p\in E$ and all $q\in \Sigma\cap \Ball(p,r)$, the inequalities \[ \dist(q,p T^H_p\Sigma) \le \epsilon \rho_\G(p,q) , \] and \[ \rho_\mathbb{K} \left( \DH (uf)_p(p^{-1}q) , (uf)(p)^{-1}(uf)(q) \right) \le M\epsilon \rho_\G(p , q ) \] hold, where $M=\Lip((uf)|_{\cBall(E,r)})$. 
Fixing a positive integer $j>1/r$, one can cover $E$ by countably many closed sets $\{B_i^j\}_i$ of diameter $d^j_i:=\diam B^j_i$ belonging to the class $\scr E$ and such that \begin{equation}\label{eq:quasiHausdorff} d^j_i<1/j,\text{ for all $i$,}\quad\text{ and}\quad \sum_i (d_i^j)^{Q-m} <\psi^{Q-m}(E)+ 1/j. \end{equation} Imitating the proof of \cite[Lemma~3.5]{EvansGariepyRevised}, we define the functions $g^j_i:\mathbb{L}\to[0,1]$ by $g^j_i = (d^j_i)^{Q-m-\ell} \mathbf{1}_{u(B_i^j\cap\Sigma)}$. Note that, using the standard notation $\psi^{Q-m-\ell}_\delta$ for the pre-measures used in the Carathéodory construction, one has \begin{equation}\label{eq07171643} \psi^{Q-m-\ell}_{1/j}(u^{-1}(y)\cap E) \le \sum_i g^j_i(y) , \end{equation} for all $y\in \mathbb{L}$. Then one gets, using upper integrals, \begin{equation}\label{eq03261658} \begin{aligned} \int_{\mathbb{L}} \psi^{Q-m-\ell}_{1/j}(E\cap u^{-1}(s)) \dd \psi^\ell(s) &\overset{\eqref{eq07171643}}{\le} \int_\mathbb{L} \sum_i g^j_i(y) \dd \psi^\ell(y) \\ &\overset{*}{\le} \sum_i \int_\mathbb{L} g^j_i(y) \dd \psi^\ell(y) \\ &\le \int_\mathbb{L}\sum_i(d^j_i)^{Q-m-\ell}\mathbf{1}_{u(B^j_i\cap\Sigma)}(s)\dd\psi^\ell(s)\\ & \le \sum_i (d_i^j)^{Q-m-\ell} \psi^{\ell}(u(B_i^j\cap \Sigma)) , \end{aligned} \end{equation} where the inequality marked by $*$ follows from Fatou's Lemma. We claim that \begin{equation}\label{eq:volumecontrol} \psi^{\ell}(u(B_i^j\cap\Sigma))\le M^\ell C(\epsilon,\mathbb{L}) (\diam B_i^j)^\ell, \end{equation} for a suitable $C(\epsilon,\mathbb{L})>0$ such that $\lim_{\epsilon\to 0^+}C(\epsilon,\mathbb{L})=0$. Let us prove~\eqref{eq:volumecontrol}. Fix some $B= B_i^j$; we can assume that $B$ intersects $E$ in at least a point $p$, which implies in particular that $B\subset \cBall(E,1/j)$.
Without loss of generality, suppose that $p=0$ and $(uf)(p)= 0$; we know that for every $q\in B\cap \Sigma$ \[ \dist(q, T^H_0\Sigma) \le \epsilon \Vert q\Vert_\G \qquad\text{and}\qquad \rho_\mathbb{K} ( u(q) , \DH (uf)_0(q) ) \le M\epsilon \Vert q\Vert_\G. \] Observing that $\DH (uf)_0$ has Lipschitz constant at most $M$, we get \begin{align*} \dist(u(q), \DH (uf)_0(T_0^H \Sigma)) &\le \rho_\mathbb{K} ( u(q) , \DH (uf)_0(q) ) + M\dist(q , T_0^H \Sigma) \\ &\le 2M \epsilon \Vert q\Vert_\G. \end{align*} Denoting by $\mathbb{L}'$ the homogeneous subgroup $\DH (uf)_0(T_0^H \Sigma)$, which is strictly contained in $\mathbb{L}$, and using the fact that $u(B\cap\Sigma)\subset\mathbb{L}$, we conclude that \[ u(B\cap\Sigma) \subset \cBall_\mathbb{L}(\mathbb{L}',2M\epsilon\diam B) \cap \cBall_\mathbb{L}(0,M\diam B), \] where we also used the fact that the Lipschitz constant of $u|_{B\cap\Sigma}=(uf)|_{B\cap\Sigma}$ is at most $M$. By homogeneity one has \begin{align*} \psi^{\ell} (u(B\cap\Sigma)) &\le (\diam B)^\ell\:\psi^\ell(\cBall_\mathbb{L}(\mathbb{L}', 2M\epsilon)\cap\cBall_\mathbb{L}(0,M)) \\ &\le M^\ell (\diam B)^\ell\:\psi^\ell(\cBall_\mathbb{L}(\mathbb{L}', 2\epsilon)\cap\cBall_\mathbb{L}(0,1)). \end{align*} The claim~\eqref{eq:volumecontrol} follows on letting \[ C(\epsilon,\mathbb{L}) \defeq \sup_\mathbb{P}\ \psi^\ell(\cBall_\mathbb{L}(\mathbb{P}, 2\epsilon)\cap\cBall_\mathbb{L}(0,1)) \] where the supremum is taken among proper homogeneous subgroups of $\mathbb{L}$. The fact that $\lim_{\epsilon\to 0^+}C(\epsilon,\mathbb{L})=0$ can be easily checked in linear coordinates on the vector space $\mathbb{L}$, by comparing $\rho_\mathbb{L}$ with the Euclidean distance and noting that $\psi^\ell$ is a multiple of the Lebesgue measure. 
Combining~\eqref{eq:volumecontrol},~\eqref{eq03261658} and~\eqref{eq:quasiHausdorff}, we obtain \begin{align*} \int_{\mathbb{L}} \psi^{Q-m-\ell}_{1/j}(E\cap u^{-1}(s)) \dd \psi^{\ell}(s) \le M^\ell C(\epsilon, \mathbb{L}) (\psi^{Q-m}(E)+ 1/j) \end{align*} and, letting $j\to\infty$, we deduce by Fatou's Lemma that \[ \mu_{\Sigma,u} (E) \le M^\ell C(\epsilon,\mathbb{L})\psi^{Q-m}(E) . \] The proof is concluded by letting $\epsilon\to 0^+$. \end{proof} Lemma~\ref{lem07241228}, combined with Propositions~\ref{prop:doubling} and~\ref{prop:RadonNykodym}, provides the following consequence. Recall that $\psi^d$ is Borel regular and that the restriction of a Borel regular measure to a Borel set is Borel regular again. \begin{Corollary}\label{cor:baddensityzero} Under the assumptions and notation of Theorem~\ref{thm:coarea2}, the equality $\Theta(p) = 0$ holds for $\psi^{Q-m}$-a.e.~$p\in\Sigma$ such that $\DH (uf)\vert_{T^H_p \Sigma}$ is not surjective onto $\mathbb{L}$. In particular \[ \Theta(p) = \calC(T^H_p\Sigma,\DH (uf)(p)) =0 \] at all such points $p$. \end{Corollary} \subsection{Proof of the coarea formula} In this section we prove the main coarea formulae of the paper. We start with Theorems~\ref{thm:coarea} and~\ref{thm:coarea2}. \begin{proof}[Proof of Theorems~\ref{thm:coarea} and~\ref{thm:coarea2}] Notice that Theorem~\ref{thm:coarea2} implies Theorem~\ref{thm:coarea}. Statements $(i)$ and $(ii)$ and the first part of $(iii)$ of Theorem~\ref{thm:coarea2} follow from Proposition~\ref{prop07171752}. The remaining claim~\eqref{eq:coarealocal} follows from Proposition~\ref{prop07251509} and Corollary~\ref{cor:baddensityzero}.
\end{proof} A direct consequence is Corollary~\ref{cor:coareaPRODUCT}, where we assume that $\mathbb{K}=\mathbb{L}\times\mathbb{M}$ is a direct product: \begin{proof}[Proof of Corollary~\ref{cor:coareaPRODUCT}] It is enough to prove the statement in case $R$ is a $C^1_H$ submanifold; actually, we can also assume that there exists $f\in C^1_H(\Omega;\mathbb{M})$ such that $R=\Sigma:=\{p\in\Omega:f(p)=0\}$ and all points in $\Omega$ are split-regular for $f$. Since $\mathbb{K}=\mathbb{L}\times\mathbb{M}$ is a direct product, we have $uf\in C^1_H(\Omega;\mathbb{K})$ and $\DH (uf)_p(g)=\DH u_p(g)\DH f_p(g)$ for every $g\in\G$. Moreover, since $T^H_p\Sigma=\ker \DH f_p$, the equality $\DH (uf)_p|_{T^H_p\Sigma}=\DH u_p|_{T^H_p\Sigma}$ holds. In particular, condition~\eqref{eq:technicalassumptionPRODUCT} now implies~\eqref{eq:technicalassumption}, and the statement directly follows from Theorem~\ref{thm:coarea}. \end{proof} \section{Heisenberg groups} The most notable examples of Carnot groups are provided by Heisenberg groups. For an integer $n\ge 1$, the $n$-th {\em Heisenberg group} $\mathbb{H}^n$ is the stratified Lie group associated with the step 2 algebra $V=V_1\oplus V_2$ defined by \begin{align*} & V_1=\textrm{span}\{X_1,\dots,X_n,Y_1,\dots,Y_n\},\qquad V_2=\textrm{span}\{T\},\\ & [X_i,Y_j]=\delta_{ij}T\qquad\text{for every }i,j=1,\dots,n. \end{align*} We will identify $\mathbb{H}^n\equiv\R^{2n+1}$ by the {\em exponential coordinates}: \begin{equation}\nonumber \R^n\times\R^n\times\R \ni(x,y,t)\longleftrightarrow \exp(x_1 X_1+\dots+y_nY_n+tT)\in\mathbb{H}^n , \end{equation} according to which the group operation is \[ (x,y,t)(x',y',t')=(x+x',y+y',t+\tfrac12\textstyle\sum_{j=1}^n(x_jy_j'-x_j'y_j)).
\] We say that a homogeneous distance $\rho$ on $\mathbb{H}^n$ is {\em rotationally invariant}\footnote{The terminology ``rotationally invariant'' might be misleading in $\mathbb{H}^n$ for $n>1$, as not all rotations around the $T$ axis are isometries.} (\cite{SNGRigot}) if \begin{equation}\label{eq:seba1} \rho(0,(x,y,t))=\rho(0,(x',y',t))\qquad\text{whenever }|(x,y)|=|(x',y')|, \end{equation} where $|\cdot|$ is the Euclidean norm in $\R^{2n}$. Observe that $\rho$ is rotationally invariant if and only if it is {\it multiradial} according to \cite[Definition 2.21]{CorniMagnani}, i.e., if $\rho(0,(x,y,t))=f(|(x,y)|,|t|)$ for a suitable $f$. If $\mathbb{H}^n=\mathbb{W}\cdot\mathbb{V}$ is a splitting of the Heisenberg group with $\mathbb{W}$ normal, then necessarily $\mathbb{V}$ is an Abelian {\em horizontal subgroup}, i.e., $\mathbb{V}\subset V_1$, while $\mathbb{W}$ is {\em vertical}, i.e., $V_2\subset \mathbb{W}$. See~\cite[Remark 3.12]{FSSCAdvMath}. Moreover, if $1\le k\le n$, then the following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item $\mathbb{P}\subset\mathbb{H}^n$ is a vertical subgroup with topological dimension $2n+1-k$; \item $\mathbb{P}=P\times V_2$ for some $(2n-k)$-dimensional subspace $P\subset V_1$; \item $\mathbb{P}\in\scr T_{\mathbb{H}^n,\R^k}$. \end{enumerate} Proving the equivalence of the statements above is a simple task when one takes into account that every vertical subgroup of codimension at most $n$ possesses a complementary horizontal subgroup, see e.g.~\cite[Lemma~3.26]{FSSCAdvMath}. \subsection{Area formula in Heisenberg groups} We provide an explicit representation for the spherical measure on vertical subgroups of $\mathbb{H}^n$: \begin{Proposition}\label{prop:Sverticalsubgroups} Assume that $\mathbb{H}^n$ is endowed with a rotationally invariant homogeneous distance and let $1\le k\le n$.
Then, there exists a constant $c(n,k)$ such that for every vertical subgroup $\mathbb{P}\in\scr T_{\mathbb{H}^n,\R^k}$ \[ c(n,k)\cal S^{2n+2-k}\hel \mathbb{P} = \cal H^{2n+1-k}_E\hel\mathbb{P}, \] where $\cal H^{2n+1-k}_E$ denotes the Euclidean Hausdorff measure on $\R^{2n+1}\equiv\mathbb{H}^n$. \end{Proposition} \begin{proof} Let $\mathbb{P}\in\scr T_{\mathbb{H}^n,\R^k}$ be a fixed vertical subgroup; by~\cite[Lemma~3.26]{FSSCAdvMath} there exists a complementary Abelian horizontal subgroup $\mathbb{V} = V\times\{0\}$, for a proper $k$-dimensional subspace $V\subset V_1$. Let $W$ be a $(2n-k)$-dimensional complementary subspace of $V$ in $V_1$ and set $\mathbb{W}:=W\times V_2$, which is a vertical subgroup that is complementary to $\mathbb{V}$. Let $P\subset V_1$ be such that $\mathbb{P}=P\times V_2$. Let $f:W\to V$ be such that $P=\{w+f(w):w\in W\}$ and let $\phi:\mathbb{W}\to V$ be such that $\mathbb{P}=\{w(\phi(w),0):w\in\mathbb{W}\}$. Now, notice that if $z\in W$ and $t\in\R$, then \[ (z,t)(\phi(z,t),0)=(z+\phi(z,t),t+\frac12\omega(z,\phi(z,t))) , \] where $\omega$ is the standard symplectic form on $\R^{2n}$. Since $z+\phi(z,t)\in P$ and $(z+V)\cap P = \{z+f(z)\}$, we have $\phi(z,t)=f(z)$. The area formula of \cite[Theorem~1.2]{CorniMagnani}, together with \cite[Theorem~2.12 and Proposition 2.13]{CorniMagnani}, provides a constant $c(n,k)>0$ such that \begin{equation}\label{eq:uno11} c(n,k)\cal S^{2n+2-k}\hel \mathbb{P} = \Phi_\#(J^\phi\phi\:\cal H^{2n+1-k}_E\hel\mathbb{W}), \end{equation} where $J^\phi\phi$ is the intrinsic Jacobian of $\phi$ as in \cite[Definition~2.14]{CorniMagnani} and $\Phi$ is the intrinsic graph map. On the other hand, the Euclidean area formula gives \begin{equation}\label{eq:due22} \cal H^{2n+1-k}_E\hel \mathbb{P} = F_\#(JF\:\cal H^{2n+1-k}_E\hel\mathbb{W}), \end{equation} where $F:\mathbb{W}\to\mathbb{P}$ is defined by $F(x,y,t):=((x,y)+f(x,y),t)$ for every $(x,y)\in W$ and $JF$ is the Euclidean area factor.
As a matter of fact, using the equality $f=\phi$, one has $J^\phi\phi=JF$ and the statement immediately follows from~\eqref{eq:uno11} and~\eqref{eq:due22}. \end{proof} \begin{Remark} Proposition~\ref{prop:Sverticalsubgroups} holds, with no changes in the proof, in the more general case in which $\mathbb{H}^n$ is endowed with a homogeneous distance that is {\it $(2n+1-k)$-vertically symmetric} according to~\cite[Definition~2.19]{CorniMagnani}. \end{Remark} \begin{Remark}\label{rem:Hrotatinvar} When $\mathbb{H}^n$ is endowed with a rotationally invariant distance $\rho$, then for every pair $(\mathbb{P},\mathbb{P}')$ of one-codimensional homogeneous subgroups of $\mathbb{H}^n$, there exists an isometry $(\mathbb{H}^n,\rho)\to(\mathbb{H}^n,\rho)$ that maps $\mathbb{P}$ to $\mathbb{P}'$. The proof is left to the reader. \end{Remark} The following proposition completes the proof of Corollary~\ref{cor:SvsH}. \begin{Proposition}\label{prop:Hncodim1} If $\mathbb{H}^n$ is endowed with a rotationally invariant homogeneous distance and $\G'=\R$, then the function $\textfrak a$ in Corollary \ref{cor:SvsH} is constant, i.e., there exists $C\in[1,2^{2n+1}]$ such that \[ \text{$\cal S^{2n+1} \hel R= C\cal H^{2n+1}\hel R\qquad \forall\:(\mathbb{H}^n,\R)$-rectifiable set $R\subset\mathbb{H}^n$.} \] \end{Proposition} \begin{proof} When $\G=\mathbb{H}^n$ and $\G'=\R$, the function $\textfrak a$ defined in~\eqref{eq03231608} is constant by Remark~\ref{rem:Hrotatinvar}. \end{proof} Similarly, Corollary~\ref{cor:densityexistence} can be improved when $\G$ is the Heisenberg group endowed with a rotationally invariant distance. \begin{Corollary}\label{cor:densityconstantinHeis} Assume $\G$ is the Heisenberg group $\mathbb{H}^n$ endowed with a rotationally invariant distance and $\G'=\R^m$ for some $1\le m\le n$; if $\psi^{2n+2-m}$ is the spherical Hausdorff measure, then the function $\textfrak d$ in Corollary~\ref{cor:densityexistence} is constant.
If $m=1$ and $\psi^{2n+2-m}$ is the Hausdorff measure, then the function $\textfrak d$ in Corollary~\ref{cor:densityexistence} is constant. \end{Corollary} \begin{proof} Concerning the first part of the statement, let $\mathbb{W}\in\scr T_{\mathbb{H}^n,\R^m}$ be fixed; by Proposition~\ref{prop:Sverticalsubgroups} we have \begin{align*} \textfrak d(\mathbb{W})&=\lim_{r\to 0^+} \frac{\cal S^{2n+2-m}(\mathbb{W}\cap\Ball(0,r))}{r^{2n+2-m}}\\ & = \cal S^{2n+2-m}(\mathbb{W}\cap\Ball(0,1)) =c(n,m) \cal H_E^{2n+1-m}(\mathbb{W}\cap\Ball(0,1)) \end{align*} and the latter quantity does not depend on $\mathbb{W}$ by rotational invariance of the distance. The second part of the statement is an immediate consequence of Remark~\ref{rem:Hrotatinvar}. \end{proof} \subsection{Coarea formula in Heisenberg groups} When one considers spherical measures in the Heisenberg group endowed with a rotationally invariant distance, the coarea factor coincides, up to a multiplicative constant, with the quantity \begin{equation*} \begin{split} & J^Ru(p):=\left( \det (L\circ L^T) \right)^{1/2},\qquad L:=\DH u_p|_{T^H_pR}. \end{split} \end{equation*} We prove this fact. \begin{Proposition}\label{prop:coareafactorH} Consider the Heisenberg group $\mathbb{H}^n$ endowed with a rotationally invariant distance. Let $\mathbb{P}\in\scr T_{\mathbb{H}^n,\Rm}$ be a vertical subgroup of topological dimension $2n+1-m$ and let $L:\mathbb{P}\to\R^\ell$ be a homogeneous morphism; assume $1\le m+\ell\le n$. Then \[ \calC(\mathbb{P},L)=\frac{c(n,m+\ell)}{c(n,m)}\left( \det (L\circ L^T) \right)^{1/2}, \] where the positive constants $c(n,m)$ and $c(n,m+\ell)$ are those provided by Proposition~\ref{prop:Sverticalsubgroups}. \end{Proposition} \begin{proof} If $L$ is not onto $\R^\ell$, then $\calC(\mathbb{P},L)=0$ by definition and $\det(L\circ L^T)=0$, so the statement is trivially true. We assume henceforth that $L$ is surjective.
By Proposition~\ref{prop:Sverticalsubgroups} \begin{align*} \mu_{\mathbb{P},L} &=\int_{\R^\ell}\cal S^{2n+2-m-\ell}\hel L^{-1}(s)\dd\cal L^\ell(s)\\ & = c(n,m+\ell)\int_{\R^\ell} \cal H^{2n+1-m-\ell}_E\hel L^{-1}(s)\dd\cal L^\ell(s)\\ & = c(n,m+\ell) \left(\det (L\circ L^T)\right)^{1/2}\cal H^{2n+1-m}_E\hel \mathbb{P}, \end{align*} where we used the Euclidean coarea formula. A second application of Proposition~\ref{prop:Sverticalsubgroups} gives \[ \mu_{\mathbb{P},L} = \frac{c(n,m+\ell)}{c(n,m)} \left(\det (L\circ L^T)\right)^{1/2}\cal S^{2n+2-m}\hel \mathbb{P} \] and this is enough to conclude. \end{proof} We now have all the tools needed to prove our coarea formula in Heisenberg groups. \begin{proof}[Proof of Theorem~\ref{thm:coareaHeisenberg}] The first part of the statement is an immediate consequence of Corollary~\ref{cor:coareaPRODUCT} and the fact that, if $\DH u_p|_{T^H_pR}$ is surjective on $\R^\ell$, then $T^H_pR\cap\ker\DH u_p$ is a vertical subgroup of dimension $2n+1-m-\ell\ge n+1$, and by~\cite[Lemma~3.26]{FSSCAdvMath} it admits a complementary (horizontal) subgroup. The second part of the statement is now a consequence of Proposition~\ref{prop:coareafactorH}; clearly, one has $\textfrak c=c(n,m+\ell)/c(n,m)$ according to the constants introduced in Proposition~\ref{prop:Sverticalsubgroups}. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} Over the last decade dozens of new charmonium states have been observed \cite{pdg,hosaka,esposito,nnl}. Among these new states, the most studied one is the $X(3872)$. It was first observed in 2003 by the Belle Collaboration \cite{Choi:2003ue,Adachi:2008te}, and has been confirmed by five other experiments: BaBar~\cite{Aubert:2008gu}, CDF~\cite{Acosta:2003zx,Abulencia:2006,Aaltonen:2009}, \DZero~\cite{Abazov:2004}, LHCb~\cite{Aaij:2011sn,Aaij:2013} and CMS~\cite{Chat13}. The LHCb collaboration has determined the $X(3872)$ quantum numbers to be $J^{PC} = 1^{++}$, with more than 8$\sigma$ significance \cite{Aaij:2013}. The structure of the new charmonium states has been the subject of intense debate. In the case of the $X(3872)$, calculations using constituent quark models give masses for possible charmonium states with $J^{PC}=1^{++}$ quantum numbers which are much bigger than the observed $X(3872)$ mass: $2~^3P_1(3990)$ and $3~^3P_1(4290)$ \cite{bg}. These results, together with the observed isospin violation in their hadronic decays, motivated the conjecture that these objects are multiquark states, such as mesonic molecules or tetraquarks. Indeed, the vicinity of the $X$ mass to the $\bar D D^{*}$ threshold inspired the proposal that the $X(3872)$ could be a molecular $\bar D D^{*}$ bound state with a small binding energy \cite{swanson,eo}. Another well-known interpretation of the $X(3872)$ is that it can be a tetraquark state resulting from the binding of a diquark and an antidiquark~\cite{maiani,ricardo}. There are other proposals as well \cite{hosaka,esposito,nnl}. One successful approach in describing experimental data is to treat the $X$ as an admixture of two- and four-quark states \cite{carina}. Until now it has not been possible to determine the structure of the $X$, since the existing data on masses and decay widths can be explained by quite different models.
However this situation can change, as we address the production of exotic charmonium in hadronic reactions, i.e. proton-proton, proton-nucleus and nucleus-nucleus collisions. Hadronic collisions seem to be a promising testing ground for ideas about the structure of the new states. It has been shown \cite{esposito,gri} that it is extremely difficult to produce meson molecules in p p collisions. In the molecular approach the estimated cross section for $X(3872)$ production is two orders of magnitude smaller than the measured one. On the other hand, in Ref. \cite{erike} a simple model was proposed to compute the $X$ production cross section in p p collisions in the tetraquark approach. Predictions were made for the next run of the LHC. As pointed out in Refs. \cite{Cho:2010db,EXHIC}, high energy heavy ion collisions offer an interesting scenario to study the production of multiquark states. In these processes, quite a large number of heavy quarks are expected to be produced, reaching as many as 20 $c \, \bar{c}$ pairs per unit rapidity in Pb + Pb collisions at the LHC. Moreover, the formation of quark gluon plasma (QGP) may enhance the production of exotic charmonium states, since the charm quarks are free to move over a large volume and they may coalesce to form bound states at the end of the QGP phase or, more precisely, at the end of the mixed phase, since the QGP needs some time to hadronize. One of the main conclusions of Refs.~\cite{Cho:2010db,EXHIC} was that, if the production mechanism is coalescence, then the production yield of these exotic hadrons at the moment of their formation strongly reflects their internal structure. In particular it was shown that in the coalescence model the production yield of the $X(3872)$, at RHIC or LHC energies, is almost 20 times bigger for a molecular structure than for a tetraquark configuration.
After being produced at the end of the quark gluon plasma phase, the $X(3872)$ interacts with other hadrons during the expansion of the hadronic matter. Therefore, the $X(3872)$ can be destroyed in collisions with the comoving light mesons, such as $X+\pi\to \bar D+D$, $X+\pi\to \bar D^* +D^*$, but it can also be produced through the inverse reactions, such as $D+\bar{D}\to X+\pi$, $\bar D^* +D^*\to \pi +X$. We expect these cross sections to depend on the spatial configuration of the $X(3872)$. Charm tetraquarks in a diquark - antidiquark configuration $(c q) - (\bar{c}\bar{q})$ have a typical radius comparable to (or smaller than) the radius of the charmonium ground states, i.e. $r_{4q} \simeq r_{\bar{c} c} \simeq 0.3 \, - \, 0.4 \, \mbox{fm}$. Charm meson molecules are bound by meson exchange and hence $r_{mol} \simeq 1/m_{\pi} \simeq 1.5$ fm. In fact, the calculations of Ref. \cite{gamer} show that $r_{mol} \simeq 2.0 - 3.0 $ fm. Molecules are thus much bigger than tetraquarks and their absorption cross sections may also be much bigger. On the other hand, when these states are produced from $D - \bar{D}^*$ fusion in a hadron gas, what matters is the overlap between the initial and final state configurations. Assuming that the radius of the $D$ and $D^*$ mesons is $r_D \simeq 0.6 \, \mbox{fm}$ \cite{haglin}, the initial $D \, + \,D^*$ state has a larger spatial overlap with a molecule than with a tetraquark and, therefore, the production of molecules is favored. Hence from geometrical arguments we expect that in a hot hadronic environment molecules are easier to produce and also easier to destroy than tetraquarks. Of course geometrical estimates of cross sections are more reliable if we apply them to high energy collisions. In the present case the typical collision energies are of the order of the temperature, $T \simeq 100 - 180 $ MeV, and are probably not high enough. Nevertheless, at a qualitative level, they can be useful as guidance.
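As a rough numerical companion to the geometric argument above, one can compare geometric cross sections $\pi r^2$ for the radii quoted in the text ($r_{4q}\simeq 0.4$ fm, $r_{mol}\simeq 1.5$ fm). This is only an order-of-magnitude sketch; the function name and the unit conversion are ours, not taken from the cited references.

```python
import math

FM2_TO_MB = 10.0  # unit conversion: 1 fm^2 = 10 mb

def geometric_cross_section_mb(radius_fm):
    """Geometric cross section sigma = pi * r^2, converted from fm^2 to mb."""
    return math.pi * radius_fm ** 2 * FM2_TO_MB

sigma_4q = geometric_cross_section_mb(0.4)   # compact tetraquark, r ~ 0.3-0.4 fm
sigma_mol = geometric_cross_section_mb(1.5)  # molecule, r ~ 1/m_pi ~ 1.5 fm
ratio = sigma_mol / sigma_4q                 # = (1.5/0.4)^2, about 14
print(sigma_4q, sigma_mol, ratio)
```

With these illustrative radii the purely geometric molecular cross section exceeds the tetraquark one by roughly an order of magnitude, consistent with the qualitative expectation stated above.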
In Ref.~\cite{ChoLee} the interactions of the $X$ in a hadronic medium were studied in the framework of $SU(4)$ effective Lagrangians. The authors computed the corresponding production and absorption cross sections, finding that the absorption cross section is two orders of magnitude larger than the production one. The effective Lagrangians include the $X$ particle as a fundamental degree of freedom and the theory is unable to distinguish between molecular and tetraquark configurations. Presumably this information might be included in the form factors introduced in the vertices. The authors find that it is much easier to destroy the $X$ than to create it. In particular, for the largest thermally averaged cross sections they find: $\langle\sigma v\rangle_{\pi X \rightarrow D^* \bar{D}^* } \, \simeq 30 \, \langle\sigma v\rangle_{D^* \bar{D}^* \rightarrow \pi X}$. In spite of this difference, the authors of Ref. \cite{ChoLee} arrived at the intriguing conclusion that the number of $X$'s stays approximately constant during the hadronic phase. In Ref. \cite{ChoLee} the coupling of the $X(3872)$ with charged charm mesons (such as $D^- \, D^{* +}$) was neglected and only neutral mesons were considered (such as $D^0 \bar{D}^{0*}$). Moreover, the terms with anomalous couplings were not included in the calculations. In Ref.~\cite{XProduction} we showed that the inclusion of the couplings of the $X(3872)$ to charged $D$'s and $D^*$'s and those of the anomalous vertices, $\pi D^*\bar{D}^*$ and $X D^*\bar{D}^*$, increases the cross sections by more than one order of magnitude. Similar results were also observed in the case of the $J/\psi \, \pi$ cross section \cite{nospsi}. These anomalous vertices also give rise to new reaction channels, namely, $\bar D+ D^*\to\pi +X$ and $\pi +X\to \bar D +D^*$. Thus it is important to evaluate the changes that the above mentioned contributions can produce in the $X$ abundance (and in its time evolution) in reactions such as those considered in Ref. \cite{ChoLee}.
This is the subject of this work. The formalism used in Refs. \cite{ChoLee} and \cite{XProduction} was originally developed many years ago to study the interaction of charmonium states (especially the $J/\psi$) with light mesons in a hot hadron gas \cite{nospsi,rapp}. The conclusions obtained in the past can help us now, giving a baseline for comparison. For example, if it is true that the $X$ has a large $c \bar{c}$ component, we may expect that the corresponding production and absorption cross sections are comparable to the $J/\psi$ ones. If, alternatively, they turn out to be much larger, this could be an indication of a strong multiquark and possibly molecular component. The paper is organized as follows. In the next section we discuss the cross sections averaged over the thermal distributions. In Sec.~III we investigate the time evolution of the $X(3872)$ abundance by solving the kinetic equation based on the phenomenological model of Ref. \cite{ChoLee}. Finally, in Sec.~IV we present our conclusions. \section{Cross Sections averaged over the thermal distribution} In this section we calculate the cross sections averaged over the thermal distributions for the processes $\bar D D\to \pi X$, $\bar D^* D\to \pi X$ and $\bar D^* D^*\to \pi X$, and for the inverse reactions. This information is the input to the study of the time evolution of the $X(3872)$ abundance in hot hadronic matter. In Fig.~\ref{diagrams} we show the different diagrams contributing to each process. In Ref. \cite{ChoLee} it was shown that the contribution from the reactions involving the $\rho$ meson is very small compared to the reactions with pions, and thus we neglect the former in what follows.
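To fix ideas about how the thermally averaged cross sections enter the abundance evolution studied in Sec.~III, one can caricature the kinetic equation by a single relaxation-type rate equation $dN_X/d\tau = G - \Gamma N_X$ with constant coefficients. The sketch below is purely illustrative: the actual kinetic equation uses time-dependent densities and the full set of reaction channels, and every number here is a placeholder of our choosing.

```python
def evolve_abundance(N0, gain, loss, tau, dt=1e-3):
    """Explicit-Euler solution of the toy rate equation dN/dt = gain - loss * N."""
    N = N0
    for _ in range(int(tau / dt)):
        N += dt * (gain - loss * N)
    return N

# With constant coefficients the abundance relaxes toward the fixed point
# N_eq = gain / loss, regardless of the initial value N0.
N_final = evolve_abundance(N0=1.0, gain=0.05, loss=0.5, tau=20.0)
print(N_final)  # close to 0.05 / 0.5 = 0.1
```

The lesson of the toy model is simply that the late-time abundance is controlled by the ratio of gain to loss terms, which is why accurate production and absorption cross sections are the essential input.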
\begin{figure}[th] \centering \includegraphics[width=9.5cm]{Xab_fig1ab.eps}\\ \includegraphics[width=9.5cm]{Xab_fig1cd.eps}\\ \includegraphics[width=9.5cm]{Xab_fig1ef.eps} \\ \includegraphics[width=9.5cm]{Xab_fig1gh.eps} \\ \caption{Diagrams contributing to the processes $\bar{D} D \rightarrow \pi X$ [(a) and (b)], $\bar{D}^{\ast} D \rightarrow \pi X$ [(c) and (d)] and $\bar{D}^{\ast} D ^{\ast} \rightarrow \pi X $ [(e), (f), (g) and (h)]. The filled box in the diagrams (d), (g) and (h) represents the anomalous vertex $X D^*\bar{D}^*$, which was evaluated in Ref. \cite{XProduction}. The charges of the particles are not specified.} \label{diagrams} \end{figure} The cross sections for the processes shown in Fig.~\ref{diagrams} were obtained in Ref.~\cite{XProduction}. Using effective Lagrangians based on SU(4), the coupling of the $X(3872)$ to $\bar D^* D^*$ was estimated through the evaluation of triangular loops and an effective Lagrangian was proposed to describe this vertex. As a result, it was found that the contributions coming from the coupling of the $X(3872)$ to charged $D$'s and $D^*$'s and from the anomalous vertices play an important role in determining the cross sections. The coupling constant of the $X\bar D^* D^*$ vertex was found to be $g_{X\bar D^* D^*}=12.5 \pm 1.0$. For more details about the calculations, we refer the reader to Ref.~\cite{XProduction}. In the present manuscript, we follow Ref.~\cite{XProduction} and obtain the cross sections of the processes in Fig.~\ref{diagrams} using a form factor of the type \be F(\vec{q}) \, = \, \frac{\Lambda ^2}{\Lambda ^2 + \vec{q} ^2}, \label{formf} \ee where $\Lambda = 2.0$ GeV is the cutoff and $\vec{q}$ the three-momentum transfer in the center of mass frame. 
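The thermally averaged cross sections computed below can be validated in a known limit: for a constant cross section $\sigma_0$, the relativistic thermal average must reduce to $\sigma_0 \langle v_{\rm rel}\rangle$ with $\langle v_{\rm rel}\rangle \approx \sqrt{8T/(\pi \mu)}$ ($\mu$ being the reduced mass) when $T \ll m$. A minimal stdlib-Python check of this limit, using the standard Bessel-integral form of the thermal average (a sketch for validation only, not the production code of Ref.~\cite{XProduction}):

```python
import math

def bessel_k(n, z, tmax=8.0, steps=600):
    # K_n(z) = int_0^inf exp(-z*cosh(t)) * cosh(n*t) dt (integral representation)
    h, acc = tmax / steps, 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        acc += w * math.exp(-z * math.cosh(t)) * math.cosh(n * t)
    return acc * h

def sigma_v_avg(sigma_of_s, m_a, m_b, T, z_extra=60.0, steps=1200):
    # <sigma v> for a b -> c d in a Boltzmann gas (standard Gondolo-Gelmini/
    # Koch form): prefactor 1/(4 a_a^2 K2(a_a) a_b^2 K2(a_b)) times the
    # z-integral of K1(z) sigma(s=z^2 T^2) [z^2-(a_a+a_b)^2][z^2-(a_a-a_b)^2]
    aa, ab = m_a / T, m_b / T
    z0 = aa + ab                      # exothermic case: threshold at z0
    pref = 1.0 / (4.0 * aa**2 * bessel_k(2, aa) * ab**2 * bessel_k(2, ab))
    h, acc = z_extra / steps, 0.0
    for i in range(steps + 1):
        z = z0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        acc += (w * bessel_k(1, z) * sigma_of_s(z * z * T * T)
                * (z * z - (aa + ab)**2) * (z * z - (aa - ab)**2))
    return pref * acc * h

# Constant cross section sigma0 = 1: the average must approach sigma0*<v_rel>
T, m = 1.0, 50.0                      # arbitrary units with c = 1, m/T = 50
sv = sigma_v_avg(lambda s: 1.0, m, m, T)
v_nr = math.sqrt(8.0 * T / (math.pi * (m / 2.0)))   # non-relativistic <v_rel>
print(sv, v_nr)
```

For endothermic channels the lower integration limit becomes $z_0 = \alpha_c + \alpha_d$ instead, which is what suppresses the corresponding thermal averages at low temperature.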
Following Refs.~\cite{ChoLee,Koch}, the thermally averaged cross section for a process $a b \rightarrow c d$ can be calculated using the expression \ben \langle \sigma_{a b \rightarrow c d } \, v_{a b}\rangle & = & \frac{ \int d^{3} \mathbf{p}_a d^{3} \mathbf{p}_b \, f_a(\mathbf{p}_a) \, f_b(\mathbf{p}_b) \, \sigma_{a b \rightarrow c d } \,\,v_{a b} }{ \int d^{3} \mathbf{p}_a d^{3} \mathbf{p}_b \, f_a(\mathbf{p}_a) \, f_b(\mathbf{p}_b) } \nonumber \\ & = & \frac{1}{4 \alpha_a ^2 K_{2}(\alpha_a) \alpha_b ^2 K_{2}(\alpha_b) } \int _{z_0} ^{\infty } dz K_{1}(z) \,\,\sigma (s=z^2 T^2) \left[ z^2 - (\alpha_a + \alpha_b)^2 \right] \left[ z^2 - (\alpha_a - \alpha_b)^2 \right], \label{thermavcs} \een where $f_a$ and $f_b$ are Bose-Einstein distributions, $\sigma_{a b \rightarrow c d }$ are the cross sections computed in \cite{XProduction}, $v_{ab}$ represents the relative velocity of the two interacting particles ($a$ and $b$), $\alpha _i = m_i / T$, with $m_i$ being the mass of particle $i$ and $T$ the temperature, $z_0 = \max(\alpha_a + \alpha_b,\alpha_c + \alpha_d)$, and $K_1$ and $K_2$ are the modified Bessel functions of first and second kind, respectively. The masses used in the present work are: $m_D = 1867.2 $ MeV, $m_{D^{\ast}} = 2008.6$ MeV, $m_{\pi} = 137.3$ MeV and $m_{X} = 3871.7$ MeV~\cite{pdg}. In Fig. \ref{fig2}a we show the thermally averaged cross section for the process $\bar{D} D \rightarrow \pi X(3872) $, considering only the coupling of the $X(3872)$ to the neutral states $\bar D^0D^{*0}$ (solid line) and adding the coupling to the charged components (dashed line). As can be seen, the thermally averaged cross section increases by a factor of about 2.5 when we include the charged $D$'s and $D^*$'s. In Figs.
\ref{fig2}b and \ref{fig2}c we show the thermally averaged cross sections for the processes $\bar{D}^{\ast} D \rightarrow \pi X(3872) $ and $\bar{D}^{\ast} D^{\ast} \rightarrow \pi X(3872) $, considering only the coupling of the $X$ to neutral $D$'s and $D^*$'s (dashed line), including couplings to charged $D$'s and $D^*$'s (dotted line) and finally adding also the contribution from the anomalous vertices (shaded region). As can be seen, the contribution from the anomalous vertices produces an enhancement of the thermally averaged cross sections by a factor of $100 \, - \, 150$. \begin{figure}[ht!] \begin{center} \subfigure[ ]{\label{fig1a} \includegraphics[width=0.32\textwidth]{Xab_fig2a.eps}} \subfigure[ ]{\label{fig1b} \includegraphics[width=0.32\textwidth]{Xab_fig2b.eps}} \subfigure[ ]{\label{fig1c} \includegraphics[width=0.32\textwidth]{Xab_fig2c.eps}} \end{center} \caption{Thermally averaged cross sections. a) $\bar{D} D \rightarrow \pi X(3872) $, considering only the coupling of the $X$ to the neutral $D$'s and $D^*$'s (solid line) and adding the coupling to charged $D$'s and $D^*$'s (dashed line). b) $\bar{D}^{\ast} D \rightarrow \pi X(3872) $, considering only the coupling of the $X$ to neutral $D$'s and $D^*$'s (dashed line), including the contribution from charged $D$'s and $D^*$'s (dotted line) and also including diagrams containing the anomalous vertices (shaded region). c) $\bar{D}^{\ast} D^{\ast} \rightarrow \pi X(3872)$. The lines and shaded area have the same meaning as those of b). } \label{fig2} \end{figure} To close this section we show in Fig.
\ref{fig3}a the total thermally averaged cross sections for the processes involving the production of the $X(3872)$ state, i.e., $\bar{D} D \rightarrow \pi X$, $\bar{D}^{\ast} D \rightarrow \pi X$ and $\bar{D}^{\ast} D ^{\ast} \rightarrow \pi X $ reactions, while in Fig.~\ref{fig3}b we show the inverse processes, i.e., the dissociation of $X(3872)$ through the reactions $\pi X\to \bar D D$, $\pi X\to \bar D^* D$, $\pi X\to \bar D^* D^*$, respectively. For the latter cases, we use the principle of detailed balance to determine the corresponding cross sections. Figure \ref{fig3} should be compared with Fig.~3 of Ref. \cite{ChoLee}. Our cross sections are a factor $100$ larger in reactions with $D^* \bar{D}^*$ in the initial or final state. This can be attributed mostly to the inclusion of the anomalous terms. Moreover, our cross sections are a factor $10$ larger in the case of $D D$ mesons in the initial or final state. The difference comes from the inclusion of the coupling of the $X$ to charged $D$'s and $D^*$'s, which was not considered in Ref. \cite{ChoLee}. On the other hand, in both works, the absorption cross sections are more than fifty times larger than the production ones. In the computation of the time evolution of the $X$ abundance, we will need to know how the temperature changes with time and this is highly model dependent. Fortunately, as one can see in Fig. \ref{fig3}, the dependence of $\langle \sigma v \rangle$ on the temperature is relatively weak. \begin{figure}[ht!] \begin{center} \subfigure[ ]{\label{fig3a} \includegraphics[width=0.485\textwidth]{Xab_fig3a.eps}} \subfigure[ ]{\label{fig3b} \includegraphics[width=0.485\textwidth]{Xab_fig3b.eps}} \end{center} \caption{Thermally averaged cross sections. a) $\bar{D} D \rightarrow \pi X(3872) $ (dashed line), $\bar{D}^{\ast} D \rightarrow \pi X(3872) $ (dark-shaded region) and $\bar{D}^{\ast} D^{\ast} \rightarrow \pi X(3872) $ (light-shaded region).
b) $\pi X(3872) \rightarrow \bar{D} D $ (dashed line), $\pi X(3872) \rightarrow \bar{D}^{\ast} D $ (dark-shaded region) and $\pi X(3872) \rightarrow \bar{D}^{\ast} D^{\ast}$ (light-shaded region).} \label{fig3} \end{figure} \section{Time evolution of the $X(3872)$ abundance} Following Ref.~\cite{ChoLee}, we study the yield of $X(3872)$ in central Au-Au collisions at $\sqrt{s_{NN}} = 200$ GeV. By using the thermally averaged cross sections obtained in the previous section, we can now analyze the time evolution of the $X(3872)$ abundance in hadronic matter, which depends on the densities and abundances of the particles involved in the processes of Fig.~\ref{diagrams}, as well as the cross sections associated with these reactions (and the corresponding inverse reactions), Figs.~\ref{fig3}a and \ref{fig3}b. The momentum-integrated evolution equation has the form~\cite{ChoLee,Koch,ChenPLB,ChenPRC} \ben \frac{d N_{X} (\tau)}{d \tau} & = & R_{QGP} (\tau) + \sum_{c,c^{\prime}} \left[ \langle \sigma_{c c^{\prime} \rightarrow \pi X } v_{c c^{\prime}} \rangle n_{c} (\tau) N_{c^{\prime}}(\tau) - \langle \sigma_{ \pi X \rightarrow c c^{\prime} } v_{ \pi X} \rangle n_{\pi} (\tau) N_{X}(\tau) \right], \label{rateeq} \een where $N_{X} (\tau)$, $N_{c^{\prime}}(\tau)$, $ n_{c} (\tau)$ and $n_{\pi} (\tau)$ are the abundances of $X(3872)$, of charmed mesons of type $c^{\prime}$, of charmed mesons of type $c$ and of pions at proper time $\tau$, respectively. The term $R_{QGP} (\tau)$ in Eq.~(\ref{rateeq}) represents the $X$ production from the quark-gluon plasma in the mixed phase, since the hadronization of the QGP takes a finite time, and it is given by~\cite{ChoLee,ChenPRC}: \begin{align} R_{QGP} (\tau)=\left\{\begin{array}{c} \dfrac{N^0_X}{\tau_H-\tau_C},\quad \tau_C<\tau<\tau_H,\\\\ 0,\quad\text{otherwise, }\end{array}\right.\label{R} \end{align} where the times $\tau_C = 5.0$ fm/c and $\tau_H = 7.5$ fm/c determine the beginning and the end of the mixed phase, respectively.
The constant $N^0_X$ corresponds to the total number of $X(3872)$ produced from the quark-gluon plasma. To solve Eq.~(\ref{rateeq}) we assume that the pions and charmed mesons in the reactions contributing to the abundance of $X$ are in equilibrium. Therefore $N_{c^{\prime}}(\tau)$, $ n_{c} (\tau)$ and $n_{\pi} (\tau)$ can be written as~\cite{ChoLee,Koch,ChenPLB,ChenPRC} \ben N_{c^{\prime}}(\tau) & \approx & \frac{1}{2 \pi^2} \, \gamma_{C} \, g_{D} \, m_{D^{(\ast)}}^2 \, T(\tau) \, V(\tau) \, K_{2}\left(\frac{m_{D^{(\ast)}} }{T(\tau)}\right) ,\nonumber \\ n_{c} (\tau) & \approx & \frac{1}{2 \pi^2} \, \gamma_{C} \, g_{D} \, m_{D^{(\ast)}}^2 \, T(\tau) \, K_{2}\left(\frac{m_{D^{(\ast)}} }{T(\tau)}\right), \nonumber \\ n_{\pi} (\tau) & \approx & \frac{1}{2 \pi^2} \, \gamma_{\pi} \, g_{\pi} \, m_{\pi}^2 \, T(\tau) \, K_{2}\left(\frac{m_{\pi} }{T(\tau)}\right), \label{densities} \een where $\gamma _i$ and $g_i$ are the fugacity factor and the spin degeneracy of particle $i$, respectively. As can be seen in Eq.~(\ref{densities}), the time dependence in Eq.~(\ref{rateeq}) enters through the parametrization of the temperature $T(\tau)$ and volume $V(\tau)$ profiles suitable to describe the dynamics of the hot hadron gas after the end of the quark-gluon plasma phase. Following Refs.~\cite{ChoLee,ChenPLB,ChenPRC}, we assume the $\tau$ dependence of $T$ and $V$ to be given by \ben T(\tau) & = & T_C - \left( T_H - T_F \right) \left( \frac{\tau - \tau _H }{\tau _F - \tau _H}\right)^{\frac{4}{5}} , \nonumber \\ V(\tau) & = & \pi \left[ R_C + v_C \left(\tau - \tau_C\right) + \frac{a_C}{2} \left(\tau - \tau_C\right)^2 \right]^2 \tau_C . \label{TempVol} \een These expressions are based on the boost invariant picture of Bjorken~\cite{Bjorken} with an accelerated transverse expansion.
In the above equation $R_C = 8.0$ fm denotes the final size of the quark-gluon plasma, while $v_C = 0.4 \, c$ and $a_C = 0.02 \, c^2$/fm are its transverse flow velocity and transverse acceleration at $\tau_C$, respectively. The critical temperature of the quark-gluon plasma to hadronic matter transition is $T_C=175$ MeV; $T_H = T_C = 175$ MeV is the temperature of the hadronic matter at the end of the mixed phase. The freeze-out takes place at the freeze-out time $\tau_F = 17.3$ fm/c, when the temperature drops to $T_F = 125$ MeV. To solve Eq.~(\ref{rateeq}), we assume that the total number of charm quarks in charm hadrons is conserved during the production and dissociation reactions, and that the total number of charm quark pairs produced at the initial stage of the collisions at RHIC is 3, yielding the charm quark fugacity factor $\gamma _C \approx 6.4$ in Eq.~(\ref{densities}) \cite{ChoLee,EXHIC}. In the case of pions, we follow Ref.~\cite{ChenPRC} and work with the assumption that their total number at freeze-out is 926, which fixes the value of $\gamma _{\pi}$ appearing in Eq.~(\ref{densities}) to be $\sim 1.725$. In Ref. \cite{ChoLee} the authors studied the yields obtained for the $X(3872)$ abundance within two different approaches: the statistical and the coalescence models. In the statistical model, hadrons are produced in thermal and chemical equilibrium. This model does not contain any information related to the internal structure of the $X(3872)$ and, for this reason, we do not consider it in this work. In the case of the coalescence model, the determination of the abundance of a certain hadron is based on the overlap of the density matrix of the constituents in an emission source with the Wigner function of the produced particle. This model contains information on the internal structure of the considered hadron, such as angular momentum, multiplicity of quarks, etc.
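For concreteness, the fireball profiles of Eq.~(\ref{TempVol}) and the pion density of Eq.~(\ref{densities}) can be evaluated directly with the parameter values quoted above (a stdlib-Python sketch, not part of the original calculation; the conversion from MeV$^3$ to fm$^{-3}$ uses the standard $\hbar c = 197.3$ MeV fm):

```python
import math

# Parameter values quoted in the text for central Au-Au at sqrt(s_NN) = 200 GeV
T_C = T_H = 175.0                       # MeV
T_F = 125.0                             # MeV
tau_C, tau_H, tau_F = 5.0, 7.5, 17.3    # fm/c
R_C, v_C, a_C = 8.0, 0.4, 0.02          # fm, c, c^2/fm
HBARC = 197.327                         # MeV fm

def T_of(tau):
    # Eq. (TempVol), first line: T(tau) cools from T_H to T_F
    return T_C - (T_H - T_F) * ((tau - tau_H) / (tau_F - tau_H)) ** 0.8

def V_of(tau):
    # Eq. (TempVol), second line: accelerated transverse expansion
    R = R_C + v_C * (tau - tau_C) + 0.5 * a_C * (tau - tau_C) ** 2
    return math.pi * R * R * tau_C

def bessel_k2(z, tmax=8.0, steps=800):
    # K_2(z) via its integral representation int_0^inf exp(-z cosh t) cosh(2t) dt
    h, acc = tmax / steps, 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        acc += w * math.exp(-z * math.cosh(t)) * math.cosh(2.0 * t)
    return acc * h

def n_pi(tau, gamma_pi=1.725, g_pi=3.0, m_pi=137.3):
    # Pion density of Eq. (densities), converted from MeV^3 to fm^-3
    T = T_of(tau)
    n = gamma_pi * g_pi * m_pi**2 * T * bessel_k2(m_pi / T) / (2.0 * math.pi**2)
    return n / HBARC**3

print(T_of(tau_H), T_of(tau_F))   # 175 MeV at tau_H, 125 MeV at freeze-out
print(n_pi(tau_H), n_pi(tau_F))   # pion density drops during the expansion
```

The density obtained at $\tau_H$ is a few tenths of a pion per fm$^3$, dropping as the fireball cools and expands, which is what drives the $\tau$ dependence of the absorption term in Eq.~(\ref{rateeq}).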
According to Ref.~\cite{EXHIC}, the number of $X(3872)$ produced at the end of the mixed phase, assuming that the $X(3872)$ is a tetraquark state with $J^{PC}=1^{++}$, is given by: \begin{align} N_{X(4q)} ^{0}=N_{X(4q)}(\tau_H) \approx 4.0 \times 10^{-5} . \label{Nx4q} \end{align} In order to determine the time evolution of the $X(3872)$ abundance we solve Eq.~(\ref{rateeq}) starting at the end of the mixed phase, i.e. at $\tau_H = 7.5$ fm/c, and assuming that the $X(3872)$ is a tetraquark, formed according to the prescription of the coalescence model. The initial condition is given by Eq.~(\ref{Nx4q}). We use this initial abundance to integrate Eq. (\ref{rateeq}) and we show the result in Fig.~\ref{fig4}. In the figure the solid line represents the result obtained using the same approximations as those made in Ref. \cite{ChoLee}. Our curve is slightly different from that of Ref. \cite{ChoLee} because we did not include the contribution of the $\rho$ mesons, as discussed earlier. The dashed line shows the result when we include the couplings of the $X(3872)$ to charged $D$'s and $D^*$'s. The light-shaded band shows the results obtained with the further inclusion of the diagrams containing the anomalous vertices. The band reflects the uncertainty in the $X \bar{D}^* D^*$ coupling constant, which is $g_{X \bar{D}^{\ast} D^{\ast}} = 12.5\pm 1.0$ \cite{XProduction}. \begin{figure}[th] \centering \includegraphics[width=8.0cm]{Xab_fig4.eps} \caption{Time evolution of the $X(3872)$ abundance as a function of the proper time $\tau$ in central Au-Au collisions at $\sqrt{s_{NN}} = 200$ GeV. The solid line, the dashed line and the light-shaded region represent the results obtained considering only the neutral $D$'s and $D^*$'s, adding the contribution from charged $D$'s and $D^*$'s and including contributions from the anomalous vertices respectively. The initial condition is the abundance of the $X(3872)$ considered as a tetraquark Eq. 
(\ref{Nx4q}).} \label{fig4} \end{figure} As can be seen, without the inclusion of the anomalous coupling terms, the abundance of $X$ remains basically constant. This is because the magnitude of the thermally averaged cross sections for the $X$ production and absorption reactions obtained within this approximation is so small that the second term on the right-hand side of Eq.~(\ref{rateeq}) is completely negligible compared to the first term. When including the coupling of the $X$ to charged $D$'s and $D^*$'s, we find no important change in the time evolution of the $X$ abundance, since, as can be seen in Fig.~\ref{fig2}, the thermally averaged cross sections do not change drastically in this case. On the other hand, the inclusion of the anomalous coupling terms, depicted in Figs.~\ref{diagrams}c, \ref{diagrams}d, \ref{diagrams}f, \ref{diagrams}g and \ref{diagrams}h, modifies the behavior of the $X(3872)$ yield, producing a fast decrease of the $X$ abundance with time. This is the main result of this work. We emphasize that the $X(3872)$ abundance, whose time evolution was studied above, is the component that comes from the QGP and is what would be observed if the $X(3872)$ is a tetraquark state. However, if the $X(3872)$ is a molecular state, it will be formed by hadron coalescence at the end of the hadronic phase. According to Ref.~\cite{ChoLee}, at this time the average number of $X$'s, considering it as a $D\bar{D}^*$ molecule, is \begin{align} N_{X(mol)} \approx 7.8 \times 10^{-4}\;, \label{Nxmol} \end{align} which is about $80$ times larger than the contribution for a tetraquark state at the end of the hadronic phase (see Fig.~\ref{fig4}). We can then conclude that the QGP contribution to the $X(3872)$ production (as a tetraquark state formed by quark coalescence), after being suppressed during the hadronic phase, becomes insignificant at its end.
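The qualitative behaviour just described -- a nearly constant abundance when the thermally averaged cross sections are small, and a fast depletion when absorption dominates -- can be illustrated with a toy forward-Euler integration of an equation of the form of Eq.~(\ref{rateeq}); the gain and loss rates below are placeholders chosen only to contrast the two regimes, not the values computed in this work:

```python
# Toy version of Eq. (rateeq): dN_X/dtau = G - Gamma * N_X over the hadronic
# phase, with constant gain G and loss rate Gamma ~ <sigma v> n_pi.
# Both Gamma values are illustrative placeholders, NOT computed rates.
def evolve(N0, G, Gamma, tau0=7.5, tau1=17.3, steps=2000):
    h, N = (tau1 - tau0) / steps, N0
    for _ in range(steps):       # simple forward-Euler integration in tau
        N += h * (G - Gamma * N)
    return N

N0 = 4.0e-5                      # tetraquark initial abundance, Eq. (Nx4q)
weak = evolve(N0, G=0.0, Gamma=0.01)   # fm^-1: abundance stays almost flat
strong = evolve(N0, G=0.0, Gamma=0.2)  # fm^-1: strong absorption depletes N_X
print(weak, strong)
```

With the small loss rate the abundance barely changes over the $\sim 10$ fm/c of the hadronic phase, while the larger rate depletes it by an order of magnitude, mirroring the contrast between the curves with and without the anomalous vertices in Fig.~\ref{fig4}.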
\section{Concluding Remarks} In this work we have studied the time evolution of the $X(3872)$ abundance in heavy ion collisions. If the $X(3872)$ is a tetraquark state, it will be produced in the mixed phase by quark coalescence. After being produced at the end of the quark-gluon plasma phase, the $X(3872)$ interacts with other comoving hadrons during the expansion of the hadronic matter. Therefore, the $X(3872)$ can be destroyed in collisions with the comoving light mesons, such as $X+\pi\to \bar D+D$, $X+\pi\to \bar D^* +D^*$, but it can also be produced through the inverse reactions, such as $D+\bar{D}\to X+\pi$, $\bar D^* +D^*\to \pi +X$. In this work we have considered the contributions of the anomalous vertices, $\pi D^*\bar{D}^*$ and $X \bar{D}^{\ast} D^{\ast}$, and the contributions from charged $D$ and $D^*$ mesons to the $X(3872)$ production and dissociation cross sections. These vertices, apart from enhancing the cross sections associated with the $\bar D^* D^*$ channel, give rise to additional production/absorption mechanisms of $X$, which are found to be relevant. The cross sections, averaged over the thermal distribution, have been used to analyze the time evolution of the $X(3872)$ abundance in hadronic matter. We have found that the abundance of a tetraquark $X$ drops from $N_{X(4q)} \approx 4.0 \times 10^{-5}$ at the beginning of the hadronic phase \cite{ChoLee} to $N_{X(4q)} \sim 1.0 \times 10^{-5}$ at the end of the hadronic phase. On the other hand, if the $X(3872)$ is a molecular state, it will be produced by hadron coalescence at the end of the hadronic phase. According to Ref.~\cite{ChoLee}, at this time the average number of $X$'s, considering it as a $D\bar{D}^*$ molecule, is $N_{X(mol)} \approx 7.8 \times 10^{-4}$, which is about $80$ times larger than $N_{X(4q)}$. As expected, the results show that the $X$ multiplicity in relativistic ion collisions depends on the structure of $X(3872)$.
Our main conclusion is that the contributions from the anomalous vertices play an important role in determining the time evolution of the $X(3872)$ abundance and lead to a strong suppression of this state during the hadronic phase. Therefore, within the uncertainties of our calculation, we can say that if the $X(3872)$ is observed in a heavy ion collision, it must have been produced at the end of the hadronic phase and, hence, it must be a molecular state. \begin{acknowledgements} The authors would like to thank the Brazilian funding agencies CNPq and FAPESP for financial support. We also thank C. Greiner and J. Noronha-Hostler for fruitful discussions. \end{acknowledgements}
\section{Introduction}% \textsc{Need for fast RT}. The turbulent, multi-phase structure of the interstellar medium (ISM) is shaped by the complex and non-linear interplay between gravity, magnetic fields, heating and cooling, and the radiation and momentum input from stars, in particular massive stars \citep[see e.g.][]{Agertz2013, Walch2015, KO2017, Peters2017}. Therefore, an efficient treatment of radiation transport in different energy bands (from the submillimeter to X-rays), and of the associated heating and cooling processes, is essential to simulate the structure and evolution of the ISM in detail, and to compare theoretical and numerical models with observations. A fundamental consideration is that radiation is emitted from different types of sources: point sources such as stars, extended sources like cooling shock fronts, diffuse sources like dust, as well as an ambient background radiation field. Hence, a modern radiative transfer algorithm must be able to handle multiple energy bands and multiple sources in an efficient manner. \textsc{Overview of algorithm}. There are many ways to treat the radiation from point sources in 3D: Ray tracing \citep{Mellema2006, Gritschneder2009a} with {\sc HealPix} schemes \citep{Bisbas2009, Wise2011, Baczynski2015, Kim2017, Rosen2017}, long and short characteristics \citep[e.g.][]{Rijkhorst2006}; flux-limited diffusion (FLD) \citep{Krumholz2007, Skinner2013}; combined schemes which work in optically thin and thick regions \citep{Paardekooper2010, Kuiper2010, Klassen2014}; moment methods \citep{Petkova2012, Rosdahl2013, Kannan2019}; and backward radiative transfer schemes \citep[e.g.][]{Kessel2003, Altay2013, Grond2019TREVR}, like the one developed in this work. In comparison with ray-tracing, FLD and moment-based methods tend to be computationally less expensive and their cost does not depend on the number of sources.
However, typically they do not capture certain features of the radiation field, for example shadowing (although see \citealt{Rosdahl2013}). Several code comparison projects have highlighted the advantages and shortcomings of different radiative transfer methods \citep[e.g.][]{Iliev2006, Iliev2009, Bisbas2015}. Ultimately, even simple radiative transfer methods are expensive, at least as expensive as all the other elements of a simulation -- (magneto-)hydrodynamics ((M)HD), self-gravity, chemistry, heating and cooling -- together. Consequently, the numerical overhead of radiative transfer severely limits the astrophysical problems that can be addressed realistically in state-of-the-art 3D simulations, even on today's largest super-computers. \textsc{TreeRay basics}. In response to this challenge, we have devised {\sc TreeRay}, a new tree-based, backward radiative transfer scheme, which can handle multiple sources at acceptable extra cost when running simulations that include self-gravity anyway. The computational cost of {\sc TreeRay} is independent of the number of emitting sources (as demonstrated in Section~\ref{sec:perf:nsrc}), and indeed every cell can be a source. Therefore, \textsc{TreeRay} can readily treat, for example, the ionizing radiation from multiple massive stars in a molecular cloud \citep{Haid2019}, as well as radiating shocks or other extended sources, on-the-fly in complex 3D (M)HD simulations. {\sc TreeRay} is an extension of the octree-based solver for gravity and diffuse radiative transfer described in detail in \citet[][hereafter Paper~I]{Wunsch2018}, which has been developed for the Eulerian, adaptive mesh refinement (AMR) code {\sc FLASH} 4\footnote{The {\sc FLASH} code is maintained by the ASC/Alliances Center for Astrophysical Thermonuclear Flashes (Flash Center for Computational Science) at the University of Chicago (http://flash.uchicago.edu/site)} \citep{Fryxell2000}.
The tree-solver of Paper~I is available with the official {\sc FLASH} release. Due to its efficiency, the {\sc TreeRay} scheme allows us to treat all dynamically relevant radiative processes in full three-dimensional simulations of the multi-phase ISM with an acceptably small error. {\sc TreeRay} evolved from the original {\sc TreeCol} method \citep{Clark2012} for treating the shielding of a diffuse background radiation field in Smoothed Particle Hydrodynamics. The shielding method has been presented as an {\sc OpticalDepth} module of the tree solver in Paper~I. \textsc{TREVR}. The recently published TREVR code \citep{Grond2019TREVR}, which uses a tree data structure and reverse ray-tracing, is, we believe, the most closely related existing code. However, the two codes differ significantly in several respects. TREVR casts rays in the directions of sources, resulting in some (albeit weak) dependence of the computational cost on source number, while {\sc TreeRay} casts {\sc HealPix} rays (strictly speaking cones) in all directions, thereby covering the whole computational domain. These two fundamentally different approaches result in different types of numerical artefacts. Whilst {\sc TreeRay} uses iteration to deal with regions irradiated by multiple sources in situations where the absorption coefficient depends on the radiation energy density, TREVR applies limits to the time step and uses a special refinement criterion to take account of the directional dependence of absorption in tree nodes. Finally, TREVR is implemented in the Smoothed Particle Hydrodynamics (SPH) code Gasoline, while {\sc TreeRay} is implemented in the AMR code {\sc Flash}, although both codes are general and could, in principle, be adapted to work with any hydrodynamics code. \textsc{Outline}. The plan of the paper is as follows. In Section~\ref{sec:alg} we present the general algorithm and describe its implementation and coupling to the tree solver.
In Section~\ref{sec:acc}, we perform a suite of static and dynamic tests to demonstrate the viability of the scheme, and we discuss the impact of the user-specified parameters on the accuracy. We restrict ourselves to the simple On-The-Spot approximation \citep{Oesterbrock1988} ({\sc TreeRay/OnTheSpot} module) for treating the interaction of UV radiation from massive stars with the ISM. The flexibility and performance of {\sc TreeRay} are demonstrated in Section~\ref{sec:SFFeedback}, where we present a more complex simulation of multiple massive stars dispersing a molecular cloud with their ionising radiation and winds. In Section~\ref{sec:perf}, we discuss the performance and parallel scaling of the algorithm, before summarizing in Section~\ref{sec:summary}. \section{The algorithm} \label{sec:alg} \textsc{Code structure}. {\sc TreeRay} is implemented as a module for the {\em tree-solver} described in Paper~I. The tree-solver provides general algorithms for building an octal tree, communicating the required parts of it to other processors, and traversing the tree to calculate quantities that require integration over the computational domain. It is a generalisation of the widely used algorithm devised by \citet{Barnes1986} to solve for self-gravity in numerical codes. Individual modules (e.g. {\sc Gravity} or {\sc TreeRay}) define which quantities should be stored on the tree, and provide subroutines to be called during the tree-walk to calculate integrated quantities like the gravitational acceleration or radiation energy density. {\sc TreeRay} itself consists of a general part plus submodules that treat the different physical processes needed to solve the Radiation Transport Equation (RTE). The {\sc TreeRay/OpticalDepth} submodule (described in Paper~I) is the simplest {\sc TreeRay} submodule; instead of solving the RTE, it simply sums contributions from different directions to obtain the corresponding optical depths. 
Here we focus on the {\sc TreeRay/OnTheSpot} submodule, to illustrate how {\sc TreeRay} solves the RTE for ionising radiation. \textsc{New features in tree solver}. The simplified flowchart in Fig.~\ref{fig:flowchart} shows the connection between tree solver and {\sc TreeRay}\footnote{The full scheme of the tree solver and its modules down to the level of individual subroutines can be found at \url{http://galaxy.asu.cas.cz/pages/treeray}.}. In comparison to Paper~I, two modifications have been made. Firstly, the three main steps of the tree-solver (tree-build, communication and tree-walk) can be called several times (outer iteration loop in Fig.~\ref{fig:flowchart}) to correctly calculate the absorption in regions irradiated by more than one source. Secondly, the tree walk is now called on a cell-by-cell basis (instead of block-by-block), because the temporary data structures storing quantities needed for solving the RTE are too big to be stored in memory for all the grid cells of a block. \textsc{Algorithm flow}. The algorithm proceeds as follows. First, the tree is built to store quantities provided by the {\sc TreeRay} modules, and appropriate parts of the tree are communicated to other processors. Next, the tree is traversed for each cell (hereafter a {\it target cell}) and these quantities are mapped onto a system of rays originating at the target cell and covering the whole computational domain. Then, the RTE is solved along each ray and the radiative fluxes arriving at each target cell are calculated and converted to a radiation energy density. Finally, the whole process (starting from the tree-build) is repeated until the radiation energy density converges everywhere, i.e. until it does not change by more than a user-defined fraction between successive iterations. Below, we describe individual steps in detail.
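The loop just described can be sketched schematically as follows; every routine below is a stand-in stub (not the actual {\sc FLASH}/{\sc TreeRay} API), and the RTE solve is replaced by a toy relaxation, purely to illustrate the outer iteration and its convergence test:

```python
# Schematic of the TreeRay outer iteration: (re)build the tree, communicate
# it, walk it cell by cell, solve the RTE along the rays, and repeat until
# the radiation energy density converges everywhere.

def build_tree(cells):              # stub: the "tree" is just the cell dict
    return cells

def exchange_nodes(tree):           # stub: stands in for the MPI communication
    pass

def solve_rte_for_cell(tree, c):    # toy relaxation with fixed point e = 2.0
    return 0.5 * tree[c]["erad"] + 1.0

def treeray_outer_loop(cells, tol=1e-6, max_iter=100):
    for it in range(1, max_iter + 1):
        tree = build_tree(cells)
        exchange_nodes(tree)
        # tree walk on a cell-by-cell basis
        e_new = {c: solve_rte_for_cell(tree, c) for c in cells}
        change = max(abs(e_new[c] - cells[c]["erad"]) /
                     max(abs(e_new[c]), 1e-30) for c in cells)
        for c in cells:
            cells[c]["erad"] = e_new[c]
        if change < tol:            # converged everywhere: stop iterating
            return it
    return max_iter

cells = {i: {"erad": 0.0} for i in range(8)}
n_iter = treeray_outer_loop(cells)
print(n_iter, cells[0]["erad"])
```

In a single-source, linear problem the loop converges after the first pass; the repeated tree-build/walk only matters when the absorption in one region depends on the radiation arriving from elsewhere, as in the multiple-source case discussed above.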
\begin{figure} \includegraphics[width=\columnwidth]{treeray-paper-flowchart-logo} \caption{ The tree solver flowchart and its connection to the {\sc Gravity} and {\sc TreeRay} modules. All general tree solver routines are yellow. {\sc Gravity} routines are blue. {\sc TreeRay} routines are green. } \label{fig:flowchart} \end{figure} \subsection{Tree build} \label{sec:treebuild} \textsc{Quantities added to the tree}. The general {\sc TreeRay} module does not add any quantities to the tree; the tree-solver itself stores four numbers on each node, its mass and centre of mass. However, the {\sc TreeRay} submodules typically need to store two or three numbers for each energy band on each tree node (node index $n$). In the case of the {\sc TreeRay/OnTheSpot} submodule, we are concerned with Extreme Ultraviolet radiation (EUV); i.e. radiation that can ionise hydrogen. The module stores the rate of emission of EUV photons within the node ($\varepsilon_{_{{\rm EUV,}n}}$), the rate of recombination of hydrogen into excited levels ($\alpha_{_{{\rm EUV,}n}}$), and the mean EUV radiation energy density from the previous iteration step ($e_{_{{\rm EUV,}n}}$): \begin{eqnarray} \varepsilon_{_{{\rm EUV,}n}} & = & \max \left(\sum_{c} \dot{n}_{{\rm EUV,}c} - \alpha_{\rm B} n_{{\rm H,}c}^2 dV_c , 0 \right)\,,\label{eq:os:ems}\\ \alpha_{_{{\rm EUV,}n}} & = & \max \left(\sum_{c} \alpha_{\rm B} n_{{\rm H,}c}^2 dV_c - \dot{n}_{{\rm EUV,}c}, 0 \right)\,, \label{eq:os:abs}\\ e_{_{{\rm EUV,}n}} & = & \sum_{c} e_{_{{\rm EUV,}c}} dV_c\,. \label{eq:os:erd} \end{eqnarray} Here $\dot{n}_{{\rm EUV,}c}$ is the rate at which EUV photons are emitted from hot stars in cell $c$, $n_{{\rm H,}c}$ is the number density of hydrogen nuclei, $\alpha_{\rm B}$ is the recombination coefficient into excited states only, $e_{_{{\rm EUV,}c}}$ is the radiation energy density calculated for cell $c$ in the previous iteration, and $dV_c$ is the cell volume. 
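The node sums of Eqs.~(\ref{eq:os:ems}) and (\ref{eq:os:abs}) can be illustrated with a short sketch; the cell values and the case-B recombination coefficient below are illustrative assumptions, not code or data from the actual module:

```python
# Per-node EUV sums: emission and recombination are combined cell by cell
# within the node, and only the net excess is stored, clipped at zero.
ALPHA_B = 2.59e-13   # case-B recombination coefficient, cm^3 s^-1 (~10^4 K, assumed)

def node_euv_rates(cells):
    net = sum(c["ndot_euv"] - ALPHA_B * c["nH"] ** 2 * c["dV"] for c in cells)
    eps = max(net, 0.0)    # Eq. (eq:os:ems): net EUV photon emission rate
    alp = max(-net, 0.0)   # Eq. (eq:os:abs): net recombination (absorption) rate
    return eps, alp

# A node hosting an ionizing source outshines its internal recombinations ...
src_node = [{"ndot_euv": 1.0e49, "nH": 10.0, "dV": 1.0e55}]
# ... while a starless dense node only absorbs.
gas_node = [{"ndot_euv": 0.0, "nH": 100.0, "dV": 1.0e55}]
print(node_euv_rates(src_node), node_euv_rates(gas_node))
```

By construction, at most one of $(\varepsilon_{_{{\rm EUV,}n}}, \alpha_{_{{\rm EUV,}n}})$ is non-zero for a given node, so each node acts on a ray either as a net source or as a net sink of EUV photons.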
Note that the photons emitted and absorbed within cell $c$ are subtracted from each other in Eqs.~\ref{eq:os:ems} and \ref{eq:os:abs}, as this is more accurate than doing it later, during the RTE integration along rays, when both quantities have been modified by the approximations inherent in mapping the tree nodes onto the rays. \textsc{Mapping sources onto the grid}. In order to obtain $\dot{n}_{{\rm EUV,}c}$, each source of radiation, $i$, characterized by the rate at which it emits EUV photons, $\dot{N}_{\mathrm{EUV,}i}$ and its radius, $r_i$, is mapped onto the grid before the tree-build. During the mapping, $\dot{N}_{\mathrm{EUV,}i}$ is divided between the grid cells which the source intersects, in proportion to the intersecting volumes. \textsc{Communication.} After the tree is built, it is communicated in the same way as described in Paper~I (Section~2.1). In summary, the code first distributes information about all the block positions and sizes to all the processors. Then, each processor runs a tree walk with target points at the closest point to the local node of all the blocks of a given remote sub-domain, and determines which local nodes will be opened when the tree walk is executed at the processor calculating that sub-domain. In this way, the local processor determines lists of the nodes needed at all the remote processors, and sends the nodes there. This ensures that all the information about the nodes to be opened during the tree walk is present at each processor, and at the same time it minimizes the amount of data communicated. \subsection{Structure of rays} \textsc{Ray-arrays}. Before the tree is walked for each target cell, three arrays (\texttt{cdMaps}, \texttt{rays}, and \texttt{raysEb}) are created in order to store the results of the tree-walk. 
The first one, \texttt{cdMaps} (standing for column density maps), is two dimensional and has size $N_{_{\rm PIX}}\!\times l_q$, where $N_{_{\rm PIX}}$ is the number of directions and $l_q$ is the number of quantities. The directions are defined with the {\sc HealPix} algorithm \citep{Gorski2005}, which tessellates a unit sphere into pixels of equal solid angle, each with a unit vector, $\hat{\boldsymbol n}_k$, which points from the sphere centre (target point) to the centre of the pixel. The value $N_{_{\rm PIX}}$ is specified at the outset by setting the {\sc HealPix} level $N_{\rm side}$, with possible values $N_{\rm side} = 1, 2, 4, 8,\,.\,.\,.\,$, corresponding to $N_{_{\rm PIX}} = 12 N_{\rm side}^2 = 12, 48, 192, 768,\,.\,.\,.\,$. The \texttt{cdMaps} array is used by the {\sc TreeRay/OpticalDepth} module and has already been described in Paper~I.\footnote{Note that in Paper~I this array had an additional dimension going through all the cells within a block, because the tree-walk was run on a block-by-block, rather than a cell-by-cell, basis.} \textsc{rays}. The second array, \texttt{rays}, adds another dimension representing radial distance from the target point. We define $N_r$ {\em evaluation points} on each ray leading from the target point in the direction of vector $\hat{\boldsymbol n}_k$. Each ray is associated with a volume given by the area of the {\sc HealPix} pixel extended along the radial direction. Thus we create a system of rays pointing from each target point in $N_{_{\rm PIX}}$ directions, and covering the whole computational domain. The \texttt{rays} array has size $N_r\times N_{_{\rm PIX}}\times m_q$, where $m_q$ is the number of quantities mapped onto the rays. \textsc{raysEb}. The last array, \texttt{raysEb}, adds another dimension, which is necessary in cases involving multiple energy bands. 
Its size is $N_r\times N_\mathrm{eb}\times N_{_{\rm PIX}}\times n_q$, where $N_\mathrm{eb}$ is the total number of energy bands for all sub-modules and $n_q$ is the maximum number of quantities per energy band (by default $3$). Typically, each sub-module requires one or more energy bands to treat radiation at different wavelengths (or radiation involved in different physical processes), and each energy band uses its own emission coefficient, absorption coefficient and radiation energy density, which are stored in the \texttt{raysEb} array. For simplicity, we refer to all three arrays as \texttt{ray-arrays}, considering \texttt{cdMaps} to be a special \texttt{ray-array} with only one data point in each radial direction. \textsc{Evaluation points on rays}. Each ray intercepts $N_r$ evaluation points, where the quantities mapped onto the ray (e.g. emission and absorption coefficients) are set. Evaluation points are located so that the distances between successive points (hereafter ray {\em segment} lengths) are proportional to the distance from the target cell. This is because the size of tree nodes that need to be opened also increases -- often linearly -- with distance from the target point. Hence, starting with the first evaluation point at the target point ($r_0 \equiv 0$), the radial coordinate of the $i^{\rm \,th}$ evaluation point is \begin{equation} \label{eq:etaR} r_i = \frac{\Delta x\;i^2}{2\,\eta_R^2}\,. \end{equation} Here $\Delta x$ is the size of the smallest cell in the simulation and $\eta_R$ is a user-defined parameter that controls the resolution in the radial direction. By default, $\eta_R = 2$, which ensures that (if the Barnes-Hut criterion for node acceptance is used, see Section~\ref{sec:MACs}) the segment lengths correspond, approximately, to half the size of the nodes with which the target cell interacts during the tree walk, i.e.
\begin{equation} \label{eq:bhmac} |r_{i+1} - r_i| \sim \frac{\theta_\mathrm{lim}\times d}{2}\,, \end{equation} where $d$ is the distance between the target cell and the interacting node. The maximum ray length, $L_\mathrm{ray}$, is set automatically to the length of the three-dimensional diagonal of the computational domain, and hence the number of evaluation points along each ray is \begin{equation} \label{eq:Nr} N_r = \eta_R \times \texttt{floor}\left(\sqrt{\frac{2 L_{\rm ray}}{\Delta x}}\, \right) + 1\,, \end{equation} where \texttt{floor($x$)} is the largest integer less than or equal to $x$. Each module $M$ includes a user-defined parameter, $L_{\mathrm{ray,}M}$, by which the user can specify the maximum distance from the target cell to which the calculation (mapping and/or RTE solution) should be carried out. \subsection{Tree-walk} \label{sec:treewalk} \begin{figure} \includegraphics[width=\columnwidth]{alg-mapping} \caption{Mapping tree nodes onto rays during the tree walk. The tree nodes that need to be opened during the tree walk for a target point in the bottom left corner are denoted by solid line squares. The system of rays which is cast from the target point is represented by the blue and yellow areas. The green circles are node mass centres. The quantities that are stored in each of the tree nodes are distributed to all rays which a node intersects (solid arrows), weighted by the relative intersection volume. For a given ray, the quantities are distributed to the so-called evaluation points along the ray using weights given by the kernel function $W[(r_i-d)/h_n]$ (dashed arrows). } \label{fig:mapping} \end{figure} \textsc{Mapping nodes onto rays}. When the tree is walked, the quantities stored on the tree nodes are mapped onto the \texttt{ray-arrays} in a two-step process. First, each mapped quantity, e.g.
the node mass, is divided amongst different rays in proportion to the volume of the intersection between the node and the ray (see Fig.~\ref{fig:mapping}). This is implemented as described in Paper~I, using a pre-calculated table of intersection volumes between a ray and a node with given angular coordinates ($\theta$, $\phi$) and angular size $\psi$. Second, the part of the node belonging to the ray is divided between evaluation points, $r_i$, using a kernel function $W(x)$ with $x \equiv (r_i-d)/h_n$, where $d$ is the distance of the node mass centre from the target point, and $h_n$ is the node linear size. This mapping also uses a pre-calculated table in which the weights of the evaluation points are recorded as a function of $r_i$, $d$, and $h_n$. \textsc{Kernels}. Three kernels are presently available, as discussed in detail in Appendix \ref{app:kernels}. The first is a Gaussian kernel, $W_g(x)$ (equation~\ref{eKernGaussian}), truncated at $x=\sqrt{3}/2$. The second, $W_p(x)$ (equation~\ref{eKernPolynomial}), has the form of a piece-wise polynomial of third order and has been obtained by fitting the mean intersection between a uniform cubic node and a randomly oriented line. {\sc TreeRay} uses $W_p(x)$ by default for mapping the node masses and volumes. The third, $W_f(x)$, is tailored to the requirements of radiative transfer, and is used with the {\sc OnTheSpot} module; the considerations informing the prescription for the kernel coefficients are described in detail in Appendix \ref{app:kernels}. Due to its nature, $W_f(x)$ is only used for mapping quantities related to radiative transfer (e.g. the radiation energy); all other quantities (e.g. mass) are mapped using $W_p(x)$. \subsection{Multipole Acceptance Criteria (MACs)} \label{sec:MACs} \textsc{MACs in the tree solver}. During the tree walk for a given target cell, a node is accepted if it satisfies the {\em Multipole Acceptance Criterion -- MAC}.
The simplest and most commonly used MAC is the Barnes-Hut (BH) MAC \citep{Barnes1986}. With the BH~MAC, a node of size $h_n$, at distance $d$ from the target cell, is accepted if \begin{equation} h_n/d < \theta_\mathrm{lim} \ , \end{equation} where $\theta_\mathrm{lim}$ is a user-specified limiting opening angle. In Paper~I we describe several data-dependent MACs that make the {\sc Gravity} module calculation more efficient. Below we define two new MACs, IF~MAC (standing for Ionisation Front) and Src~MAC (standing for Sources), designed for the {\sc TreeRay/OnTheSpot} module, and also applicable to other {\sc TreeRay} submodules. When these MACs are combined with the {\sc Gravity} module MACs to ensure accurate gravity calculation, smaller nodes are opened in regions of dense, highly structured gas, and this leads to an increase in the computational cost of the tree walk. However, this does not improve the accuracy of the {\sc TreeRay} RTE solution, since the small nodes are smeared out onto the {\sc Healpix} rays with the angular resolution given by $N_{_{\rm PIX}}$. \textsc{BH~MAC}. When the BH~MAC is used together with {\sc TreeRay}, a sensible choice is to set $\theta_{\rm{lim}}\simeq (4\pi/N_{_{\rm PIX}})^{1/2}$. This ensures that the angular size of accepted tree nodes is approximately the same as the angle assigned to each {\sc Healpix} ray (see analysis of this criterion for the {\sc OpticalDepth} module in Paper~I, \S 3.3.1). \textsc{New MACs}. Both the IF~MAC and the Src~MAC are controlled by limiting opening angles, respectively $\theta_\mathrm{IF}$ and $\theta_\mathrm{Src}$. If the IF~MAC is adopted, the code records for each node, $n$, both the total mass, $m_n$, and the mass of ionised gas, $m_\mathrm{ion}$ (with ionised gas identified by a threshold on the ionised hydrogen abundance or on the temperature).
Then, during the tree walk, nodes for which $\delta_\mathrm{IF} m_n < m_\mathrm{ion} < (1-\delta_\mathrm{IF})m_n$ (where $\delta_\mathrm{IF} = 10^{-8}$) are assumed to include an ionisation front. Such nodes are accepted only if \begin{equation} h_n/d < \theta_\mathrm{IF} \ . \end{equation} Similarly, if the Src~MAC is adopted, nodes that include sources of radiation, i.e. $\varepsilon_n > 0$, are accepted only if \begin{equation} h_n/d < \theta_\mathrm{Src} \ . \end{equation} \subsection{Radiation transport equation} \label{sec:rte} \begin{figure*} \includegraphics[width=\textwidth]{alg-rte-hatch} \caption{Schematic view of {\sc TreeRay}, depicting the solution of the RTE along a single ray (bounded by thick black lines) towards a given target cell (on the left) after all the required quantities have been mapped onto the evaluation points (crosses). Each evaluation point is characterised by its distance from the target point, $r_i$, its absorption coefficient, $\alpha_i$, the emission coefficient of all sources mapped onto it, $\varepsilon_i$, and the local radiation energy density from the previous time step, $e_i$. See Section~\ref{sec:treewalk} and Fig.~\ref{fig:mapping} for details of the mapping. For the furthest two evaluation points, $j-1$ and $j$ (marked by stars, with emission coefficients, $\varepsilon_{j-1}$ and $\varepsilon_j$), the yellow (blue) shaded area represents the volume $dV_{i,j-1}$ ($dV_{i,j}$) of the cone of radiation from source $j-1$ ($j$) in the segment between evaluation points $i$ and $i+1$ (equation~\ref{eq:dV}). The procedure described in Section~\ref{sec:rte} calculates the photon rates $\Phi_{i,j}$ and fluxes $F_{i,j}$ at each evaluation point. Fluxes $F_{0,j}$ at the target point are summed to give the radiation energy density there.} \label{fig:rte} \end{figure*} \textsc{General RTE}. 
After the tree walk is completed, the \texttt{ray-arrays} contain all the quantities needed to solve the Radiation Transport Equation (RTE) using the reverse ray-tracing method \citep{Altay2013}. For frequency band $\nu$, the RTE takes the form\footnote{The signs on the right-hand side are reversed, as compared with the standard form, because the radiation propagates in the negative direction with respect to coordinate $r$ originating at the target point.} \begin{equation} \label{eq:RTE} \frac{dI_\nu}{dr} = -\varepsilon_\nu + \alpha_\nu I_\nu \end{equation} where $\varepsilon_\nu$ and $\alpha_\nu$ are the emission and absorption coefficients. The RTE must be solved along each of the $N_{_{\rm PIX}}$ rays cast from a target point. On each ray, at each evaluation point $i$ (at distance $r_i$ from the target point along the ray), radiation transport is regulated by (i) the local emission coefficient, $\varepsilon_{\nu,i}$; (ii) the local absorption coefficient, $\alpha_{\nu,i}$; and (iii) the radiation energy density from the previous iteration step, $e_{\nu,i}$. These quantities have all been mapped onto the evaluation points during the tree walk. $e_{\nu,i}$ is only required if the emission or absorption coefficients depend on it. If the emission or absorption coefficients {\it do not} depend on $e_{\nu,i}$, it is straightforward to integrate equation~(\ref{eq:RTE}) along each of the $N_{_{\rm PIX}}$ rays, and thereby obtain the (direction-dependent) radiation intensity $I_{\nu,0,k}\;(k\!=\!1\,{\rm to}\,N_{_{\rm PIX}})$ at the target point. These $I_{\nu,0,k}$ can then be summed to obtain a new estimate of the radiation energy density at the target point, and this can in turn be used to calculate radiative ionisation and/or heating rates. However, if the emission or absorption coefficients {\it do} depend on $e_{\nu,i}$, more elaborate formulations of the problem may be appropriate, and this is the case for the {\sc TreeRay/OnTheSpot} module.
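For the simple case where the coefficients do not depend on $e_{\nu,i}$, the inward integration of equation~(\ref{eq:RTE}) along one ray can be sketched as below. This is an illustrative first-order scheme with hypothetical names, not the actual implementation:

```python
import numpy as np

def integrate_ray(r, eps, alpha):
    """Integrate dI/dr = -eps + alpha*I from the far end of a ray back to
    the target point at r[0] = 0, using trapezoidal segment averages.
    Returns the intensity arriving at the target from this direction."""
    I = 0.0  # no radiation enters from beyond the last evaluation point
    for i in range(len(r) - 1, 0, -1):
        dr = r[i] - r[i - 1]
        tau = 0.5 * (alpha[i] + alpha[i - 1]) * dr   # segment optical depth
        # attenuate the incoming intensity, then add the segment's emission
        I = I * np.exp(-tau) + 0.5 * (eps[i] + eps[i - 1]) * dr
    return I
```

The $N_{_{\rm PIX}}$ values of $I$ obtained this way would then be summed to estimate the radiation energy density at the target point.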
\textsc{RTE in OnTheSpot}. In the {\sc TreeRay/OnTheSpot} module treating the EUV radiation, we use different quantities from those in equation~(\ref{eq:RTE}) to characterise the radiation field. $\varepsilon_{_{\rm{EUV,}i}}$ gives the rate of emission of EUV photons from sources (i.e. hot stars) associated with evaluation point $i$. $\alpha_{_{\rm{EUV,}i}}$ gives the rate of recombination of hydrogen into excited levels, per unit volume, at evaluation point $i$; in the {\sc OnTheSpot} approximation, such recombinations are exactly balanced by photo-ionisations\footnote{Although the On-the-Spot Approximation does not strictly require ionisation/recombination equilibrium for recombinations into excited states (Case~B), it is only under extreme circumstances that the On-the-Spot Approximation is valid and ionisation/recombination equilibrium is not.}, so $\alpha_{_{\rm{EUV,}i}}$ is also the rate at which EUV photons are destroyed in unit volume at evaluation point $i$. $\Phi_{_{\rm{EUV,}i,j}}$ gives the rate at which EUV photons emitted into unit solid angle by the sources associated with evaluation point $j$ reach the spherical surface through evaluation point $i$ (see Fig.~\ref{fig:rte}). \textsc{Integration procedure}. The integration procedure first initialises \begin{equation} \Phi_{_{{\rm EUV},i=j,j}} = \frac{\varepsilon_{_{{\rm EUV,}j}}}{4\pi}, \qquad \Phi_{_{{\rm EUV},i\ne j,j}} = 0\,, \end{equation} so that $\Phi$ is non-zero only at the evaluation points that have sources. Then, for each evaluation point, $j$, that has sources, starting from the outermost ($j=N_{r}-1$), it cycles over the evaluation points from $i=j-1$ to $i=0$ and calculates \begin{equation} \Phi_{_{{\rm EUV},i,j}} = \Phi_{_{{\rm EUV},i+1,j}} - \mathcal{R}_{i,j}\,, \end{equation} truncating it to zero if it becomes negative.
$\mathcal{R}_{i,j}$ is the rate at which photons from evaluation point $j$ are lost due to recombinations in the ray segment between $r_{i}$ and $r_{i+1}$, and is given by \begin{equation}\label{EQN:Rij} \mathcal{R}_{i,j} = \frac{1}{2}\,(\alpha_{_{{\rm EUV},i}} + \alpha_{_{{\rm EUV},i+1}})\,\mathrm{d}V_{i,j} \times \zeta_{\mathrm{s},i+1,j} \times \zeta_{\mathrm{d},i+1} \times \zeta_{\mathrm{g},i,j}\,, \end{equation} where \begin{equation}\label{eq:dV} \mathrm{d}V_{i,j} = [(r_j-r_i)^3 - (r_j-r_{i+1})^3]/3 \end{equation} is the volume of the cone of radiation from source $j$ in the segment between $r_{i}$ and $r_{i+1}$ (see Fig.~\ref{fig:rte}). The absorption rate is further corrected by three factors $\zeta_{\mathrm{s},i+1,j}$, $\zeta_{\mathrm{d},i+1}$, and $\zeta_{\mathrm{g},i,j}$, which are defined below. \textsc{Calculation of fluxes}. The contribution from the sources associated with evaluation point $j$ to the flux of EUV photons through evaluation point $i$ is then given by \begin{equation} F_{_{{\rm EUV,}i,j}} = \Phi_{_{{\rm EUV},i,j}} / (r_j - r_i)^2 \ , \end{equation} and the total flux of EUV photons at $r_i$ from all sources $j$ on the ray at $r_j > r_i$ is \begin{equation} F_{_{{\rm EUV},i}} = \displaystyle\sum_{j=i+1}^{N_{r}-1} F_{_{{\rm EUV,}i,j}} \ . \end{equation} \textsc{Correction factors in equation~(\ref{EQN:Rij})}. The first correction factor in equation~(\ref{EQN:Rij}), $\zeta_{\mathrm{s},i+1,j}$, accounts for recombinations that absorb photons coming along the same computed ray but from sources other than $j$. It is simply the ratio of the contribution to the flux from source $j$, $F_{i+1,j}$, to the total flux along the ray, $F_{i+1}$, \begin{equation}\label{eq:Rcorr:s} \zeta_{\mathrm{s},i+1,j} = \frac{F_{_{{\rm EUV,}i+1,j}}}{F_{_{{\rm EUV,}i+1}}} \ . \end{equation} The second correction factor in equation~(\ref{EQN:Rij}), $\zeta_{\mathrm{d},i+1}$, follows a similar logic.
It accounts for recombinations that absorb photons passing through the ray segment in directions other than along the computed ray, and it is defined as the ratio between $F_{_{{\rm EUV,}i+1}}$ and the trace of the radiation pressure tensor $\mathbf{P}_{_{{\rm EUV},i+1}}$ (expressed in photons per unit area and time). The latter is obtained from the radiation energy density as $\mathrm{tr}(\mathbf{P}_{_{{\rm EUV},i+1}}) = e_{_{{\rm EUV,}i+1}} c / h\nu_{_{\rm EUV}}$, where $h\nu_{_{\rm EUV}}$ is the mean energy of an EUV photon and $c$ is the speed of light. The ratio is further truncated to lie between $0$ and $2$ to deliver a moderate convergence rate during the tree solver iterations, yielding \begin{equation}\label{eq:Rcorr:d} \zeta_{\mathrm{d},i+1} = \min\left( 2, \frac{F_{_{{\rm EUV,}i+1}} h\nu_{_{\rm EUV}}}{e_{_{{\rm EUV,}i+1}} c} \right) \ . \end{equation} The third correction factor in equation~(\ref{EQN:Rij}), $\zeta_{\mathrm{g},i,j}$, corrects for the geometry of the ray segments. The segment length $r_{i+1}-r_i$, controlled by parameter $\eta_R$, can be substantially smaller than the tangential extent of the ray, $\sim r_i/N_{_{\rm side}}$. In this situation, the flux from sources close to $r_{i+1}$ would be too high, leading to an over-representation of these sources in equation~(\ref{EQN:Rij}). Therefore, we add a correction of the form \begin{equation}\label{eq:Rcorr:g} \zeta_{\mathrm{g},i,j} = \min\left( 1, \frac{N_{_{\rm side}} (r_{j} - r_{i})}{2r_{j}} \right) \ . \end{equation} Note that the minimum in equation~(\ref{eq:Rcorr:g}) ensures that the correction is only applied to segments with $r_j - r_i < 2r_j/N_{_{\rm side}}$. \textsc{Sum over directions}. Finally, the fluxes $F_{_{{\rm EUV,}0,k}}$ incident on the target point (i.e.
$i = 0$) from the directions of all the {\sc HealPix} rays ($k\!=\!1\,{\rm to}\,N_{_{\rm PIX}}$) are summed to give a new estimate of the radiation energy density $e_{_{{\rm EUV,}0}}^{\rm new}$. It is assumed that the target point is a small sphere with radius $r_\mathrm{tp}$.\footnote{$r_\mathrm{tp}$ is half of the grid cell size (if the target point is a grid cell) or the sink particle accretion radius (if the target point is a sink particle).} Using the approximation of single-direction radiation, the radiation energy density is \begin{equation} e_{_{{\rm EUV,}0}}^\mathrm{new} = \sum_{k=1}^{N_{_{\rm PIX}}} \frac{F_{_{{\rm EUV,}0,k}} h\nu_{_{\rm EUV}}}{c}. \end{equation} \subsection{Iterations and error control} \label{sec:err:cntrl} \textsc{Need for iterations}. If the absorption or emission coefficients depend on the ambient radiation field --- as in the {\sc TreeRay/OnTheSpot} sub-module, where photons coming from different directions compete to be absorbed in balancing recombinations --- the code must iterate to find an acceptable estimate of the EUV radiation energy density, $e_{_{\rm EUV}}$. Modules and sub-modules that do not need iterations are executed only once, during the first iteration step, to save computing time. The first iteration starts with the radiation field $e_{_{\rm EUV}}$ from the previous hydrodynamic time step. This significantly reduces the number of iterations needed, because changes in the distribution of gas and sources between time steps are typically small. In most cases, fewer than 10 iterations are needed. \textsc{Error control}. Iteration is terminated once the fractional change in the radiation field between successive iterations, $\delta_{e_{_{\mathrm{EUV}}}}$, falls below a user-defined limit $\epsilon_\mathrm{lim}$ (default value is $10^{-2}$). Currently, two ways of determining the fractional change in the radiation field are implemented.
The first considers the total radiation energy, \begin{equation}\label{eq:delta:er:tot} \delta_{e_{_{\mathrm{EUV}}},\mathrm{tot}} = \frac{2\left( \sum e_{_{{\rm EUV,}c}}^\mathrm{new} \mathrm{d}V_{c} - \sum e_{_{{\rm EUV,}c}} \mathrm{d}V_{c} \right)} {\sum e_{_{{\rm EUV,}c}}^\mathrm{new} \mathrm{d}V_{c} + \sum e_{_{{\rm EUV,}c}} \mathrm{d}V_{c}}, \end{equation} where subscript $c$ denotes the quantity in a grid cell with volume $ \mathrm{d}V_{c}$, and the sums are taken over all grid cells in the computational domain. This criterion is similar to the one adopted by \citet{Dale2012}, who checked for the fractional change in the total mass of ionised gas; it works well when there are a few sources with comparable luminosity. The second criterion considers the change in the radiation field on a cell-by-cell basis, \begin{equation}\label{eq:delta:er:cell} \delta_{e_{_{\mathrm{EUV}}},\mathrm{cell}} = \max_{c} \left( \frac{e_{_{{\rm EUV,}c}}^\mathrm{new} - e_{_{{\rm EUV,}c}}}{e_{_{\mathrm{EUV,norm,}c}}} \right), \end{equation} where the maximum is taken over all grid cells $c$ in the computational domain, and the normalizing energy is \begin{equation} e_{\mathrm{EUV,norm,}c} = \max(e_{_{{\rm EUV,}c}}, e_{_{\mathrm{EUV,med}}}) \ . \end{equation} Here $e_{\mathrm{EUV,med}}$ is the median of all the $e_{_{{\rm EUV,}c}}$ values over the whole computational domain. The median is used to avoid zero or nearly zero normalizations, because $e_{_{\mathrm EUV}}$ can have arbitrarily small values. The second criterion, using $\delta_{e_{_{\mathrm{EUV}}},\mathrm{cell}}$, is safer and is set as the default. \textsc{Interaction with gas}. The calculated radiation energy density is used to modify the properties of the target point. If the target point is a grid cell, the code can for instance update the temperature or the chemical composition (including the ionisation degree). 
If $e_{_{{\rm EUV,}0}}$ is non-zero, the {\sc TreeRay/OnTheSpot} module implements two possible treatments: (i) the temperature and ionisation degree are given user-defined values (e.g. $10^4$\,K and $1.0$, respectively), or (ii) $e_{_{{\rm EUV,}0}}$ is passed to the {\sc Chemistry} module \citep[see][for details, and Section~\ref{sec:SFFeedback} for a test case including chemistry]{Haid2018}. \subsection{Sources of radiation} \label{sec:sources} \textsc{Source properties}. For some {\sc TreeRay} sub-modules, the emission coefficient $\varepsilon_{\nu ,c}$ in cell $c$ is derived directly from the quantities that describe the gas in the cell. Other sub-modules, including {\sc TreeRay/OnTheSpot}, use radiation sources that are independent of the grid (e.g. sink particles representing stars or star clusters). Such sources are characterised by their position, luminosity (e.g. in the case of the {\sc TreeRay/OnTheSpot} module, the number of EUV photons emitted per second), and radius. These properties can either be read from a file at the start of a simulation and stay constant for the whole run, or they can be obtained from sink particles. In the latter case the sinks move and their luminosities (and in principle also their radii) can vary with time. The {\sc FeedbackSinks} module can be used to calculate the luminosities of the stars and stellar clusters that sink particles represent, as functions of their age, mass and mass accretion history \citep[see][for details]{Gatto2017, Peters2017}. \textsc{Mapping onto the grid}. At each call of the tree solver, before the tree is built, all sources are mapped onto the grid, assuming that the sources are uniform spheres. In this way, the emission coefficient in each grid cell that geometrically intersects the volume of the source is set to a fraction of the source luminosity, proportional to the intersection volume.
In order to keep the mapping process fast, it is performed in two stages: creating a list of sources intersecting with each block, and then calculating the intersection of each cell with each source on the list. \subsection{Boundary conditions} \label{sec:bc} \textsc{Boundary condition types}. Three types of boundary condition are available for the tree solver; they are independent of the boundary conditions for the hydrodynamic solver. The first two types ({\it isolated} and {\it periodic}) have already been described in Paper~I; the third one ({\it corner of symmetry}) is newly implemented and described in more detail below. {\it Isolated} boundary conditions assume that all source quantities (e.g. for gravity or emission of radiation) are zero outside the computational domain. {\it Periodic} boundary conditions invoke periodic copies of each tree node during the tree walk, and use only the closest copy to the target point. The {\sc Gravity} module uses the Ewald method to take into account all the periodic copies, but calculations with the {\sc TreeRay} sub-modules are presently limited to the nearest periodic copy. \textsc{Corner of Symmetry}. The {\em corner of symmetry} boundary condition allows the user to simulate one-eighth of the problem under investigation, if the problem has the appropriate symmetry. We refer to the truly simulated part of the computational domain as the \textit{original}, and we specify one corner of it as the \textit{corner}. During the mapping of the tree onto the rays we reconstruct the entire problem by spawning 7 ghosts of the \textit{original}. The first 3 ghosts are copies of the \textit{original} mirrored in the faces adjacent to the \textit{corner}. The remaining 4 ghosts are point reflections of the \textit{original} and of the first 3 ghosts through the \textit{corner}. One can show that if the MACs discussed in Section~\ref{sec:MACs} hold for the \textit{original}, they also hold for all 7 ghosts.
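The seven ghosts amount to reflecting the \textit{original} through every non-empty subset of the three coordinate planes meeting at the \textit{corner}. A minimal sketch of the ghost positions of a single point (names are illustrative, not from the code):

```python
import itertools

def spawn_ghosts(point, corner):
    """Return the 7 ghost images of `point` for the corner-of-symmetry
    boundary condition: every non-trivial combination of mirroring the
    coordinates through the faces meeting at `corner`."""
    ghosts = []
    for flips in itertools.product((False, True), repeat=3):
        if not any(flips):
            continue  # skip the identity: that is the original itself
        ghosts.append(tuple(2 * corner[k] - point[k] if flip else point[k]
                            for k, flip in enumerate(flips)))
    return ghosts
```

Flipping one coordinate gives the 3 face mirrors, flipping all three gives the point reflection of the \textit{original}, and flipping two coordinates gives the point reflections of the 3 mirrors, i.e. the 7 ghosts listed above.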
One useful application of this boundary condition is to problems with spherical symmetry, such as the Spitzer test (see Section~\ref{sec:spitzer}, model (p)), giving an effective resolution of level $n$ while actually computing only at level $n-1$. \subsection{Tree solver time step} \label{sec:tsdt} \textsc{ABU in TreeRay}. In Paper~I we introduced the {\em adaptive block update} (ABU) method, which improves the code performance by only updating blocks where the quantities calculated by the tree solver change significantly; we also illustrated the benefits when ABU is used with the {\sc Gravity} or {\sc TreeRay/OpticalDepth} modules. However, ABU does not deliver such significant benefits with the {\sc TreeRay/OnTheSpot} module, or any other module where the absorption or emission coefficients depend on the local radiation energy density. This is because calculating the radiation energy density couples together large regions of space, thereby requiring the code to update many blocks. \textsc{Tree solver time step}. However, in many applications the time scale for the physical processes which are calculated by the tree solver is much longer than the hydrodynamic time step. For example, in calculations with a multi-phase interstellar medium, the dense cold/warm gas is the main contributor to the gravitational potential as it contains most of the mass in the system and absorbs most of the radiation, and this gas typically moves with velocities smaller than $10$\,km/s \citep[e.g.][]{Girichidis2016}. On the other hand, the hydrodynamic time step calculated from the Courant condition can be very small, because it is defined by the hot gas (with sound speeds $\ga 300\,{\rm km/s}$) or stellar winds (with velocities $\ga 1000\,{\rm km/s}$), but the contribution of this hot or fast gas to the calculation of gravity and radiation is typically negligible.
Since the tree solver is used to calculate the gas self-gravity and the EUV or longer-wavelength radiation field, it may not need to be called at every hydrodynamic time step; the decision whether this approximation is reasonable depends on the nature of the phenomena being simulated, and is left to the user. The frequency of calls to the tree solver is regulated by setting the time step, $\Delta t_\mathrm{ts}$, which can be longer than the hydrodynamic time step. A convenient way to set $\Delta t_\mathrm{ts}$ is to define a parameter \begin{equation} v_\mathrm{tsdt} = \Delta x / \Delta t_\mathrm{ts} , \end{equation} where $\Delta x$ is the size of the smallest grid cell. $v_\mathrm{tsdt}$ then sets the maximum velocity with which the gas and/or sources relevant to the tree solver can move. By default, we set $v_\mathrm{tsdt} = \infty$ and hence $\Delta t_\mathrm{ts} = 0$, in which case the tree solver is called at each hydrodynamic time step. Examples of how setting $v_\mathrm{tsdt}$ to a finite value impacts the code performance and accuracy are given in Section~\ref{sec:SFFeedback}. \subsection{Load balancing} \label{sec:lb} \textsc{Workload problem}. The computational cost of the tree solver is almost always dominated by the tree walk, and the most expensive operation is mapping the tree nodes onto the system of rays. If no radiation passes through a node, this operation is omitted, and as a result, the tree walk can be significantly cheaper in regions where no radiation is present. However, this may lead to a non-optimal distribution of the work load among different processors if each processor computes the same number of blocks (which is the default in {\sc Flash}). \textsc{Load balancing scheme}. Therefore, we implement a simple load balancing scheme, based on measuring the wall clock time needed to execute a tree walk for all grid cells in a block, $t_\mathrm{wl,blk}$. 
After each tree solver call, we collate the $t_\mathrm{wl,blk}$ measurements and calculate their median value, $t_\mathrm{wl,med}$. Then we increase the workload weights, $w_\mathrm{blk}$, of blocks with $t_\mathrm{wl,blk} > t_\mathrm{wl,med}$ by the factor $t_\mathrm{wl,blk} / t_\mathrm{wl,med}$. These workload weights are then used by {\sc Flash} in the next redistribution of blocks amongst the processors; each processor is assigned a number of blocks such that the sum of their weights is approximately the same for all processors. Note that this scheme improves the code performance only if the computational time is dominated by the tree solver. Therefore we use this scheme, in combination with the tree solver time step, only for time steps when the tree solver is called. All other time steps are calculated with a default flat block distribution, i.e. the same number of blocks on each processor. \section{Accuracy and performance tests} \label{sec:acc} In this section we describe four tests of the {\sc TreeRay} algorithm illustrating its strengths and weaknesses, using idealised configurations of gas and sources. A more complex test, involving physical processes commonly included in astrophysical simulations of star formation and its feedback, is presented in Section~\ref{sec:SFFeedback}. All tests in Sections~\ref{sec:acc} and \ref{sec:SFFeedback} have been run on the IT4I/Salomon supercomputer cluster\footnote{\url{https://docs.it4i.cz/salomon/}} consisting of 1008 compute nodes, each equipped with two 12-core Intel Xeon E5-2680v3 @ 2.5 GHz processors and 2GB of RAM per core. The nodes are interconnected with the InfiniBand FDR56 network using the 7D Enhanced hypercube topology. 
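Before turning to the tests, the load-balancing weight update of Section~\ref{sec:lb} can be summarised in a few lines; the names below are illustrative, not those of the {\sc Flash} routines:

```python
import statistics

def update_workloads(t_wl_blk, weights):
    """Increase the workload weight of each block whose tree-walk
    wall-clock time exceeds the median, by the factor t_blk / t_median.
    Blocks at or below the median keep their current weight."""
    t_med = statistics.median(t_wl_blk)
    return [w * (t / t_med) if t > t_med else w
            for w, t in zip(weights, t_wl_blk)]
```

The updated weights would then steer the next redistribution of blocks, so that the summed weight per processor is approximately equal.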
\subsection{Spitzer test} \label{sec:spitzer} \begin{table*} \caption{Accuracy and performance of the Spitzer test.} \label{tab:acc:spitzer} \begin{center} \begin{tabular}{l|l|c|c|r|c|c|c|l|c|c|c|c} \hline model & $l_r$ & $\theta_\mathrm{lim}$ & $\theta_\mathrm{IF}$, $\theta_\mathrm{Src}$ & $N_{_{\rm PIX}}$ & $\delta_{e_{_{\mathrm{EUV}}}}$ & $\epsilon_\mathrm{lim}$ & $\eta_R$ & $R_\mathrm{src}$ & $e_\mathrm{IF}$ & $t_\mathrm{iter}$ & $t_\mathrm{tr}$ & $t_\mathrm{hydro}$ \\ \hline (a) fiducial & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $1$ & $0.025$ & $0.24$ & $158$ & $3.4$ \\ (b) $\delta_{e_{_{\mathrm{EUV}}}} = \mathrm{tot}$ & $5$ & $0.5$ & $\infty$ & $48$ & tot & $10^{-2}$ & $2$ & $1$ & $0.024$ & $0.25$ & $78$ & $3.4$ \\ (c) $l_{r} = 4$ & $4$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $1$ & $0.011$ & $0.03$ & $9.2$ & $0.3$ \\ (d) $l_{r} = 6$ & $6$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $1$ & $0.040$ & $2.5$ & $3100$ & $44$ \\ (e) $N_{_{\rm PIX}} = 12$ & $5$ & $1.0$ & $\infty$ & $12$ & cell & $10^{-2}$ & $2$ & $1$ & $0.173$ & $0.05$ & $65$ & $5.6$ \\ (f) $N_{_{\rm PIX}} = 192$ & $5$ & $0.25$ & $\infty$ & $192$ & cell & $10^{-2}$ & $2$ & $1$ & $0.046$ & $1.5$ & $940$ & $3.4$ \\ (g) $\epsilon_\mathrm{lim} = 10^{-1}$ & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-1}$ & $2$ & $1$ & $0.024$ & $0.24$ & $125$ & $3.4$ \\ (h) $\epsilon_\mathrm{lim} = 10^{-3}$ & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-3}$ & $2$ & $1$ & $0.025$ & $0.24$ & $250$ & $3.4$ \\ (i) $\eta_R = 1$ & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $1$ & $1$ & $0.203$ & $0.21$ & $180$ & $3.5$ \\ (j) $\eta_R = 8$ & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $8$ & $1$ & $0.033$ & $0.39$ & $260$ & $3.4$ \\ (k) $R_\mathrm{src} = 4$\,gc & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $4$ & $0.026$ & $0.24$ & $150$ & $3.4$ \\ (l) $100$\,sources & $5$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $8^{\star}$ & $0.023$ & $0.24$ & $240$ & 
$4.5$ \\ (m) $\theta_\mathrm{IF} = 0.5, N_{_{\rm PIX}} = 12$ & $5$ & $1.0$ & $0.5$ & $12$ & cell & $10^{-2}$ & $2$ & $1$ & $0.034$ & $0.11$ & $100$ & $4.1$ \\ (n) $\theta_\mathrm{IF} = 0.5, N_{_{\rm PIX}} = 48$ & $5$ & $1.0$ & $0.5$ & $48$ & cell & $10^{-2}$ & $2$ & $1$ & $0.029$ & $0.12$ & $90$ & $3.7$ \\ (o) $\theta_\mathrm{IF} = 0.25, N_{_{\rm PIX}} = 192$ & $5$ & $1.0$ & $0.25$ & $192$ & cell & $10^{-2}$ & $2$ & $1$ & $0.048$ & $0.34$ & $240$ & $3.7$ \\ (p) COS & $4^{\star\star}$ & $0.5$ & $\infty$ & $48$ & cell & $10^{-2}$ & $2$ & $1$ & $0.065$ & $0.12$ & $80$ & $0.8$ \\ \hline \end{tabular} \end{center} \begin{flushleft} Column 1 gives the model name. The following columns list: \begin{itemize} \item $l_{r}$: the refinement level defining the grid resolution ($4\rightarrow 64^3$, $5\rightarrow 128^3$, $6\rightarrow 256^3$) \item $\theta_\mathrm{lim}$: limiting opening angle for BH~MAC \item $\theta_\mathrm{IF}, \theta_\mathrm{Src}$: limiting opening angles for IF~MAC and Src~MAC, respectively (see Section~\ref{sec:MACs}) \item $N_{_{\rm PIX}}$: number of rays (defining the angular resolution) \item $\delta_{e_{_{\mathrm{EUV}}}}$: error control method (either $\delta_{e_{_{\mathrm{EUV}}},\mathrm{cell}}$ given by equation~\ref{eq:delta:er:cell}, or $\delta_{e_{_{\mathrm{EUV}}},\mathrm{tot}}$ given by equation~\ref{eq:delta:er:tot}; see Section~\ref{sec:err:cntrl}) \item $\epsilon_\mathrm{lim}$: maximum allowed relative error \item $\eta_R$: resolution in the radial direction; $\eta_R$ is inversely proportional to the distance between evaluation points on rays (see equation~\ref{eq:etaR}) \item $R_\mathrm{src}$: size of the source (in grid cells) \item $e_\mathrm{IF}$: relative error in the ionisation front position at $t=1.5$\,Myr \item $t_\mathrm{iter}$: processor time for a single iteration step (in core hours) \item $t_\mathrm{tr}$: processor time spent by the tree solver in the whole run (in core hours) \item $t_\mathrm{hydro}$: processor time spent by the
hydrodynamic solver in the whole run (in core hours) \end{itemize} $^\star$ in model (l), 100 sources of radius $1$ grid cell were distributed randomly in a sphere with radius $8$ grid cells.\\ $^{\star\star}$ model (p) uses {\em corner of symmetry}, i.e. only one octant is calculated; the grid cell size is the same as in model (a) \end{flushleft} \end{table*} \begin{figure} \includegraphics[width=\columnwidth]{spitzer-fiducial-evol2} \caption{Spitzer test: the evolution of the fiducial run (model a). The green, red and blue lines show the mean radial variation of, respectively, the gas particle density, the pressure and the temperature. Different line types distinguish different stages in the evolution: initial conditions (dotted), $0.5$\,Myr (dashed) and $1.5$\,Myr (solid).} \label{fig:spitz:fid} \end{figure} \begin{figure*} \includegraphics[width=0.9\textwidth]{spitzer-matrix-4x4} \caption{Spitzer test: the distribution of the radiation energy (blue-green) and the gas density (yellow-red), in the plane $z=0$, at time $1.5$\,Myr, for models (a) through (p), as noted in the top left corner of each panel; see Table~\ref{tab:acc:spitzer} for the model parameters. The logarithm of the radiation energy is shown in the region with non-zero ionisation degree; the logarithm of the gas density is shown in the remaining parts (i.e. for the neutral gas only).} \label{fig:spitz:matrix} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{spitzer-evol-vol} \caption{Spitzer test: the evolution of the ionisation front radius. The top panel shows models (a) -- (f); the middle panel shows models (a) and (g) through (k); and the bottom panel shows models (a) and (l) through (p).
The black line shown on all three panels is the analytic solution given by equation~(\ref{eq:spitzer}).} \label{fig:spitz:evol} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{spitzer-rp-0037} \caption{Spitzer test: the mean radial profile of the radiation energy density at $1.5$\,Myr. The top panel shows models (a) through (f); the middle panel shows models (a) and (g) through (k); and the bottom panel shows models (a) and (l) through (p). The black line on all three panels is the radiation energy density in the analytical solution (equation~\ref{eq:erSpitzer}).} \label{fig:spitz:rp} \end{figure} \textsc{Spitzer bubble}. The Spitzer bubble is one of the simplest models of the interaction of ionising radiation with gas. In this model, the UV radiation from a young, massive star ionises and heats the surrounding gas, creating an H{\sc ii} region, i.e. an over-pressured, expanding bubble of photo-ionised gas bounded by a sharp ionisation front \citep[e.g.][]{Spitzer1978, Whitworth1979, Deharveng2010}. For a typical O-star, the radiative energy input can be very large, $\ga 10^4\,{\rm L_\odot}$. However, only a small fraction of this radiative energy is converted into kinetic energy \citep[$\lesssim$ 0.1\%;][]{Walch2012a}. The expanding ionisation front drives a shock into the surrounding neutral gas, sweeping it up into a dense cold shell, and this may trigger the formation of a second generation of stars \citep[e.g.][]{Elmegreen1977,Walch2013}. H{\sc ii} regions are able to disrupt low-mass molecular clouds long before the massive stars explode as supernovae \citep{Whitworth1979, Walch2012a, Dale2012, Haid2019}. \textsc{Spitzer bubble expansion}. The Spitzer test describes the spherically symmetric expansion of an H{\sc ii} region into a uniform ambient medium with density $\rho_{\rm o}$, representing a cold molecular cloud.
It is used as a standard test for radiative transfer schemes \citep[e.g.][]{Mellema2006, Krumholz2007, Iliev2009, Mackey2010, Mackey2011, Raga2012, Rosdahl2013, Bisbas2015, Raskutti2017}. The analytic solution for the time evolution of the D-type ionisation front is given by \citep{Spitzer1978} \begin{equation} \label{eq:spitzer} R_\mathrm{IF,anl}(t) = R_{\rm S} \left(1+ \frac{7}{4}\frac{c_i t}{R_{\rm S}} \right)^{4/7}, \end{equation} where \begin{equation} \label{eq:RStr} R_{\rm S} = \left(\frac{3\dot{N}_{_{\rm LyC}} m_p^2}{4 \pi \alpha_{\rm B} X^2 \rho_{\rm o}^2} \right)^{1/3} \end{equation} is the Str{\"o}mgren radius \citep{Stromgren1939}; $c_i$ is the sound speed in the ionised gas inside the bubble; $\dot{N}_{_{\rm LyC}}$ is the rate at which the central star emits hydrogen ionising photons (i.e. photons with energy $E_{\nu} \ge 13.6\,{\rm eV}$); $m_{\rm p}$ is the proton mass; $X$ is the mass fraction of hydrogen; and $\alpha_{\rm B}\!=\!2.7 \times 10^{-13}\,{\rm cm}^{3}\,{\rm s}^{-1}$ is the Case-B recombination coefficient for an isothermal H{\sc ii} region with temperature $T_{\rm i}\!=\!10^4\,{\rm K}$. We assume that the gas is composed of hydrogen, with mass fraction $X=0.70$, and helium, with mass fraction $Y=0.30$; ionisation of helium is ignored. \textsc{Spitzer test setup}. In \citet{Bisbas2015} and \citet{Haid2018}, we have already demonstrated that {\sc TreeRay/OnTheSpot}, as implemented in the {\sc FLASH} code, delivers high accuracy in the Spitzer test. Here we explore how the code behaviour depends on the various control parameters.
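As a quick cross-check, equations (\ref{eq:spitzer}) and (\ref{eq:RStr}) can be evaluated directly for the parameters adopted in this test ($\dot{N}_{_{\rm LyC}}=10^{49}\,{\rm s^{-1}}$, $\rho_{\rm o}=7.63\times 10^{-22}\,{\rm g\,cm^{-3}}$, $X=0.70$, $c_i=14.3\,{\rm km\,s^{-1}}$). The following sketch is illustrative only (it is not part of the {\sc TreeRay} code, and the CGS constants are rounded):

```python
import math

# Physical constants (CGS, rounded)
M_P = 1.6726e-24      # proton mass [g]
PC = 3.0857e18        # parsec [cm]
YR = 3.156e7          # year [s]

# Parameters of the Spitzer test setup (from the text)
NDOT_LYC = 1.0e49     # ionising photon emission rate [1/s]
ALPHA_B = 2.7e-13     # Case-B recombination coefficient [cm^3/s]
X_H = 0.70            # hydrogen mass fraction
RHO_0 = 7.63e-22      # ambient density [g/cm^3]
C_I = 14.3e5          # sound speed in the ionised gas [cm/s]

def stroemgren_radius():
    """Equation (eq:RStr): initial Stroemgren radius [cm]."""
    return (3.0 * NDOT_LYC * M_P**2
            / (4.0 * math.pi * ALPHA_B * X_H**2 * RHO_0**2)) ** (1.0 / 3.0)

def r_if_analytic(t):
    """Equation (eq:spitzer): D-type ionisation front radius [cm] at time t [s]."""
    r_s = stroemgren_radius()
    return r_s * (1.0 + 1.75 * C_I * t / r_s) ** (4.0 / 7.0)

print(f"R_S           = {stroemgren_radius() / PC:.3f} pc")  # ~1.43 pc, as quoted below
print(f"R_IF(1.5 Myr) = {r_if_analytic(1.5e6 * YR) / PC:.2f} pc")
```

The resulting front position at $1.5$\,Myr ($\sim\!9.6$\,pc) can be compared with the numerical curves in Fig.~\ref{fig:spitz:evol}.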
We set up a cubic computational domain of size $30\times 30\times 30\,\mathrm{pc}^3$, which is filled with cold, dense molecular gas, having uniform temperature, $T_n=10\,{\rm K}$, and uniform density, $\rho_0=7.63\times 10^{-22}\,{\rm g}\,{\rm cm}^{-3}$; the hydrogen is assumed to be molecular, and the helium atomic, so the mean molecular weight is 2.35, and the total number-density of gas particles is $n_0\approx 195\,{\rm cm}^{-3}$. At $t=0$, a radiation source at the centre starts emitting ionising photons at rate $\dot{N}_{_{\rm LyC}}=10^{49}\,{\rm s}^{-1}$. In the ionised gas the temperature is set to $T_i=10^4\,{\rm K}$ and the mean molecular weight to $\mu_i=0.678$, corresponding to ionised hydrogen and atomic helium. All the gas has adiabatic index $\gamma = 5/3$. If, during the evolution, the temperature in the neutral gas exceeds $300\,{\rm K}$, it is instantaneously cooled to $300\,{\rm K}$; consequently, the shell of shock-compressed neutral gas immediately ahead of the expanding IF is effectively isothermal at $T_s=300\,{\rm K}$ (strictly speaking, the adiabatic index should be slightly below $\gamma=5/3$ at this temperature, because the rotational degrees of freedom of H$_{2}$ are starting to be excited, but we ignore this detail). This temperature limit is chosen in order to resolve the thickness of the shell and prevent numerical instabilities in the hydrodynamic (PPM -- Piecewise Parabolic Method) solver that would otherwise occur. The sound speeds in the molecular cloud, the shell and the H{\sc ii} region are $c_n = 0.187\,{\rm km\,s^{-1}}$, $c_s = 1.30\,{\rm km\,s^{-1}}$ and $c_i = 14.3\,{\rm km\,s^{-1}}$, respectively. These parameters result in a Str{\"o}mgren radius of $1.434\,{\rm pc}$. \textsc{Varied parameters}.
We present results for $16$ models, denoted (a) through (p) (see Table~\ref{tab:acc:spitzer}), with different combinations of the following six parameters: the radiation-field error control method ($\delta_{e_{_{\mathrm{EUV}}},\mathrm{cell}}$ or $\delta_{e_{_{\mathrm{EUV}}},\mathrm{tot}}$, see Section~\ref{sec:err:cntrl}), the grid resolution (given by the refinement level $l_r$), the angular resolution (given by the limiting opening angle $\theta_\mathrm{lim}$, the number of {\sc HealPix} rays $N_{_{\rm PIX}}$, and the limiting opening angles $\theta_\mathrm{IF}$ and $\theta_\mathrm{Src}$ for the IF~MAC and Src~MAC, respectively), the radiation-energy error limit ($\epsilon_\mathrm{lim}$), the radial resolution of rays ($\eta_R$), and the source radius ($R_\mathrm{src}$). In model (l), the single radiation source is replaced by $100$ sources with emission rates $\dot{N}_{_{\rm LyC}}=10^{47}\,{\rm s}^{-1}$ distributed in a sphere of radius $8$ grid cells, instead of a single source of that radius. This is done to demonstrate that {\sc TreeRay} can faithfully handle a large number of radiation sources. The last model, (p), is calculated with the {\em corner of symmetry} boundary condition, i.e. only one octant of the total domain is computed (see Section~\ref{sec:bc}). For each model we evaluate the error in the ionisation front position, $e_\mathrm{IF} \equiv (R_\mathrm{IF,num} - R_\mathrm{IF,anl})/R_\mathrm{IF,anl}$, at the end of the run ($1.5$\,Myr), the processor time for a single iteration step, $t_\mathrm{iter}$, and the processor times in the tree solver, $t_\mathrm{tr}$, and in the hydrodynamic solver, $t_{\rm hydro}$, for the whole run. \textsc{Fiducial run}. Fig.~\ref{fig:spitz:fid} shows the evolution of model (a), the fiducial run, displaying the radial profiles of the gas density, pressure and temperature, averaged over all directions, at three times throughout the evolution.
By the end of the evolution ($t\!=\!1.5\,{\rm Myr}$), the pressure in the shell approaches the value within the H{\sc ii} region, indicating that the shell thickness is approximately resolved. Fig.~\ref{fig:spitz:evol} shows the position of the ionisation front, $R_\mathrm{IF,num}$, as a function of time, and compares it with the prediction of equation~(\ref{eq:spitzer}). The fractional error in the ionisation front position, $e_\mathrm{IF}$ (defined above), at $t\!=\!1.5\,{\rm Myr}$, is given in Table~\ref{tab:acc:spitzer}. For the fiducial run, this error is $\sim\!2.5\,\%$. Fig.~\ref{fig:spitz:rp} shows the radial profile of the radiation energy density, $e_{_{\rm EUV}}$, and compares it with the analytically obtained radiation energy density, $e_{_{\mathrm{EUV,anl}}}$, denoted by the black lines. The latter is determined by combining equations (\ref{eq:spitzer}) and (\ref{eq:RStr}), and replacing $\rho_{\rm o}$ with the density in the H{\sc ii} region, $\rho_i$, which is assumed to be uniform between $r = 0$ and $R_\mathrm{IF}$: \begin{equation} \label{eq:erSpitzer} e_{_{\mathrm{EUV,anl}}}(r,t) = \frac{E_{\nu} \dot{N}_{_{\rm LyC}}}{4\pi r^2 c} \times \left[ 1 - \frac{r^3}{R_{\rm S}^3} \left( 1 + \frac{7}{4}\frac{c_i t}{R_{\rm S}} \right)^{-12/7} \right] \ . \end{equation} \textsc{Error control}. Figs~\ref{fig:spitz:matrix} and \ref{fig:spitz:evol} show that most of the other models (all but models (e), (i) and (m)) also agree very well with the analytic solution: the error in the ionisation front position is below $5\,\%$ (corresponding to $\sim\!1.7$ grid cells at the standard resolution), and the discrepancy in the radiation energy density is negligible. Models (b), (g) and (h) evolve almost identically to model (a), indicating that the choice of error control method has almost no impact, and that the calculation is accurate even with the fractional error limit $\epsilon_\mathrm{lim} = 0.1$.
All three models use the same processor time per iteration, $t_\mathrm{iter}$, as the fiducial run, which is to be expected since the error control condition does not affect the calculation within an iteration. The total time spent in the tree solver, $t_\mathrm{tr}$, is approximately half as large for model (b), as $\delta_{e_{_{\mathrm{EUV}}},\mathrm{tot}}$ requires fewer iterations. Conversely, model (h) has a $t_\mathrm{tr}$ that is $\sim\!60\,\%$ higher than in model (a), because the lower error limit leads to a larger number of iterations. In the case of model (g), with increased $\epsilon_{\rm lim}=0.1$, $t_\mathrm{tr}$ drops by $\sim\!20\,\%$ relative to model (a), reaching approximately $2$ iterations per hydrodynamic time step. \textsc{Grid resolution}. A comparison of model (a) with models (c) and (d) shows that a lower (higher) resolution leads to a smoother and more blurred (denser and better resolved) shell. However, even the low resolution model (c), where $R_\mathrm{IF, num}$ is only resolved with $17$ grid cells at $t=1.5$\,Myr, and with $\sim 3$ grid cells at $t=0$, results in an H{\sc ii} region with the correct radius and shape. The processor time per iteration, $t_\mathrm{iter}$, scales with the number of grid cells to the power $\sim\!1.1$ between models (c), (a) and (d) (i.e. slightly super-linearly). Similarly, the total time in the tree solver, $t_\mathrm{tr}$, scales with the number of grid cells to the power $\sim\!1.4$ between the same models (again slightly above the theoretical $(4/3)$-power, where the extra $(1/3)$ derives from the shorter time steps required by the Courant condition at higher resolution). \textsc{Angular resolution}. Models (e) and (f) test the dependence on the angular resolution by varying $N_{_{\rm PIX}}$ and $\theta_\mathrm{lim}$, setting them so that the typical tree node angular size is similar to the angular size of the rays, as suggested in Paper~I.
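The pairing of $N_{_{\rm PIX}}$ with $\theta_\mathrm{lim}$ used in Table~\ref{tab:acc:spitzer} ($12$ rays with $\theta_\mathrm{lim}=1.0$, $48$ with $0.5$, $192$ with $0.25$) follows from this matching: each of the $N_{_{\rm PIX}}$ equal-area {\sc HealPix} rays subtends a solid angle $4\pi/N_{_{\rm PIX}}$, so its angular width is roughly $\sqrt{4\pi/N_{_{\rm PIX}}}$. A minimal sketch of this rule (the square-root estimate is our illustrative approximation, not a formula taken from the code):

```python
import math

def ray_angular_width(n_pix):
    """Approximate angular width [rad] of one of n_pix equal-area HealPix rays."""
    return math.sqrt(4.0 * math.pi / n_pix)

# HealPix ray counts are 12 * 4**level
for level in range(3):
    n_pix = 12 * 4 ** level
    print(f"N_PIX = {n_pix:3d}  ->  theta ~ {ray_angular_width(n_pix):.2f} rad")
```

This reproduces the combinations $N_{_{\rm PIX}} = 12$, $48$ and $192$ with $\theta_\mathrm{lim} = 1.0$, $0.5$ and $0.25$ adopted in the models above.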
Model (f) evolves in a similar way to model (a), but $t_\mathrm{iter}$ and $t_\mathrm{tr}$ are $\sim\!6$ times larger. This shows that the fiducial run is well converged with regard to angular resolution, and that the scaling is in agreement with Paper~I where we found that $t_\mathrm{tr}$ scales with $\theta_\mathrm{lim}$ somewhere between $\sim\!\theta_\mathrm{lim}^{-2}$ and $\sim\!\theta_\mathrm{lim}^{-3}$. On the other hand, model (e), with very low angular resolution, $N_{_{\rm PIX}}=12$, exhibits significant departures from the analytical solution. The volume of the H{\sc ii} region reaches only $\sim 53$\% of the correct value, and the radiation energy is distributed non-spherically, with lower values along the Cartesian diagonals. Consequently the shell expands more slowly along the diagonals than along the axes. Along the axes the faster expansion is also accelerated by numerical instabilities in the inadequately resolved shell. The main reason for the depressed radiation-energy density along the diagonals is that the very large $\theta_\mathrm{lim}$ allows the acceptance of very large tree nodes by the BH~MAC. Such large nodes include the source in one corner and a part of the shell in the opposite one, leading to a poor estimate of the rate of absorption of photons within the node, and hence poor estimates of the radiation fluxes and energies. \textsc{New MACs}. The new criteria for accepting nodes in the tree walk, IF~MAC and Src~MAC, also control the angular resolution, setting it higher in regions where increased resolution is needed. The new MACs are tested in models (m), (n) and (o). Model (m) with $\theta_\mathrm{IF}\!=\!\theta_\mathrm{Src}\!=\!0.5$ and $N_{_{\rm PIX}}\!=\!12$ demonstrates that it is not worth increasing the angular resolution in the tree walk without also increasing the number of rays. 
Even though model (m) does not suffer from the problem of accepting too large tree nodes (unlike model (e)), the radiation field shows significant departures from spherical symmetry, and its computational cost is similar to the much more accurate model (n). Model (n) with $\theta_\mathrm{IF}\!=\!\theta_\mathrm{Src}\!=\!0.5$ and $N_{_{\rm PIX}}\!=\!48$ evolves in a similar way to the fiducial run (a), and its computational cost is approximately $40\,\%$ lower. Model (o), with even higher angular resolution, $\theta_\mathrm{IF}\!=\!\theta_\mathrm{Src}\!=\!0.25$ and $N_{_{\rm PIX}}\!=\!192$, shows improvement in the spherical symmetry of the radiation field, like model (f), but it is calculated in approximately one quarter of the time. This demonstrates the benefits of the new MACs, particularly at higher angular resolution. \textsc{Radial resolution}. Models (i) and (j) explore the code behaviour at different radial resolutions, controlled by the parameter $\eta_R$. Model (j), with four times more evaluation points on the rays, gives almost identical results to model (a), indicating that the fiducial run with $\eta_R = 2$ is well converged. The much higher density of evaluation points results in a $\sim\!60\,\%$ increase in the computational time. On the other hand, model (i), with $\eta_R = 1$, takes almost the same amount of time as model (a), and the error in the position of the ionisation front increases to $\sim\!20\,\%$ (see Table~\ref{tab:acc:spitzer}). \textsc{Source size}. Models (k) and (l) show that the H{\sc ii} region still evolves correctly if the source of ionising radiation is distributed throughout a larger volume. Model (l) exhibits moderate departures from spherical symmetry; these are attributable to significant departures from spherical symmetry in the ionising flux from $100$ discrete sources distributed {\it randomly} in a sphere of radius $1.9\,{\rm pc}$.
The time per iteration, $t_\mathrm{iter}$, is essentially the same as in model (a), demonstrating a unique property of our algorithm, namely that the computational cost does not depend on the number of radiation sources (see Section~\ref{sec:perf:nsrc} for a more detailed discussion). \textsc{Corner-of-symmetry}. Finally, model (p) tests the special {\em corner of symmetry} boundary condition, which allows the user to simulate one-eighth of a spherically symmetric problem (see Section~\ref{sec:bc}). Model (p) runs at approximately twice the speed of model (a), and produces broadly similar results. However, the shell exhibits numerical artefacts along the axes, resulting from the directionally split hydrodynamic solver. As a result, the error on the radius of the ionisation front is greater, $e_\mathrm{IF}\sim 6.5\,\%$. \subsection{Blister-type HII region} \label{sec:blister} \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{blister-evol.png} \caption{Blister-type HII region test, at times $t\!=\!18$, $70$, $140$ and $380\,{\rm kyr}$. The top panels show the logarithm of the gas column density. The bottom panels show the logarithm of the radiation energy in the region with non-zero ionisation degree, and the logarithm of the gas density in the remaining parts (i.e. for the neutral gas only). Note that the computational domain has dimensions $8\times4\times4\,{\rm pc}^3$, and is much larger than the region shown in these maps.} \label{fig:blister} \end{center} \end{figure*} \textsc{Blister HII region}. In order to test the algorithm in a situation that is not spherically symmetric, we model a spherical cloud with an ionising star located inside it, but not at its centre.
This model was first discussed by \citet{Whitworth1979} and \citet{TenorioTagle1979} who suggested that, as soon as the ionisation front reaches the edge of the cloud on one side, the H{\sc ii} region bursts out of the cloud and the remainder of the cloud on the other side is accelerated by the rocket effect \citep{Kahn1954}. This scenario was later studied numerically by \citet{Bisbas2009} (hereafter B09), \citet{GendelevKrumholz2012} and others. Here, we set up the cloud and the radiation source with the same physical parameters as in B09, and compare our results with theirs. \textsc{Initial conditions}. The radiation source is at the centre of the coordinate system and emits ionising photons at rate $\dot{N}_{_{\rm LyC}} = 10^{49}\,{\rm s^{-1}}$. The spherical cloud has mass $M_0 = 300$\,M$_\odot$, radius $R_0 = 1$\,pc and uniform density $\rho_0 = 4.85\times 10^{-21}$\,g\,cm$^{-3}$; its centre is at $(0.4,0,0)\,{\rm pc}$. The cloud is embedded in a rarefied ambient gas with density $\rho_\mathrm{amb} = 10^{-24}$\,g\,cm$^{-3}$, and the computational domain has dimensions $8\times4\times4\,{\rm pc}^3$. The neutral gas has temperature $T_n = 100$\,K, but otherwise both neutral and ionised (i.e. irradiated) phases have the same properties (molecular weights, adiabatic index, hydrogen mass fraction) as in Section~\ref{sec:spitzer}. All the gas that is not irradiated is immediately returned to the temperature $T_n$, at each time step. Self-gravity is switched off. The simulation is run for $0.5\,{\rm Myr}$, corresponding to $1574$ time steps, and the computational cost is $\sim\!10000$ core hours. \textsc{Numerical parameters}. The model is calculated on an AMR grid with minimum and maximum refinement levels of $4$ and $6$, respectively (i.e. the highest resolution corresponds to $512\times 256 \times 256$). 
A simple density-based criterion is used to refine/derefine blocks wherever the maximum density exceeds $10^{-21}\,{\rm g\,cm^{-3}}$, or drops below $5\times 10^{-22}\,{\rm g\,cm^{-3}}$. The hydrodynamic boundary conditions are set to outflow. The parameters controlling the {\sc TreeRay} accuracy are set as follows: the tree solver uses both the IF~MAC and the Src~MAC with $\theta_\mathrm{IF} = 0.25$, $\theta_\mathrm{Src} = 0.25$ and $\theta_\mathrm{lim} = 1.0$; the number of rays is $N_{_{\rm PIX}} = 192$; the radial resolution is $\eta_R = 2$; and the maximum allowed relative error is $\epsilon_\mathrm{lim} = 0.01$. \textsc{Blister evolution}. The evolution of the blister-type H{\sc ii} region is shown in Fig.~\ref{fig:blister}. The ionised region expands spherically until $t = 18\,{\rm kyr}$, when it reaches the edge of the cloud on the lefthand side. After that time, the ionised gas flows out of the cloud on this side, opening a growing cavity within it. On the righthand side, the H{\sc ii} region continues to expand into the remainder of the cloud. The originally spherical shell of swept-up gas is at first converted into a hemispherical shell (at $t\sim 70$\,kyr), and later into an almost flat layer (after $t\sim 140$\,kyr). When all the cloud material has been swept up in a given direction from the source, accretion onto the shell stops and the shell starts to accelerate and become Rayleigh-Taylor unstable. As a result, the layer breaks into a large number of cloudlets. These cloudlets were called ``cometary knots'' in B09, due to their almost spherical cores and tails created by ablation by the ionised gas streaming around them and away from the source.
The formation of cometary knots is also reproduced remarkably well, given that the angular resolution of the {\sc TreeRay} simulation ($\theta_\mathrm{IF} = 0.25$) is more than an order of magnitude coarser than in B09, where the angular separation between neighbouring rays is set by the local SPH-particle smoothing lengths, typically $\theta\lesssim (0.1\,\mathrm{pc})/(4\,\mathrm{pc}) = 0.025$. This demonstrates the ability of the reverse ray-tracing method to deliver high spatial resolution of the radiation field at the ionisation front, even with relatively large angles between neighbouring rays. However, there are some differences in the small-scale structures. Firstly, the {\sc TreeRay} simulation generates larger perturbations of the shell along the grid axes, and the higher noise there seeds the Rayleigh-Taylor instability through the odd-even decoupling mechanism identified by \citet{Quirk1994}. Secondly, by $t=380\,{\rm kyr}$, the number of cometary knots is substantially lower than in B09, because many of them have evaporated, due to their lower density, which in turn is caused by the lower spatial resolution of the grid code. \subsubsection{Rabbit hole test} \label{sec:rabbithole} \textsc{Penetration depth problem}. If the ionisation front has a complex structure, as in the previous test, it is questionable whether the radiation field computed by a code with limited angular resolution can properly follow the ionisation front geometry. A particularly difficult configuration for {\sc TreeRay} is a radiation source shining into a deep, narrow hole. Such holes form frequently in astrophysical applications, for instance when a swept-up shell becomes unstable and breaks apart \citep[e.g. as in][]{Walch2013}. In order to quantify the accuracy of the code in this situation, we implement a test called the {\it rabbit hole}, in which we measure the penetration of the radiation field as a function of the width of the hole. \textsc{Rabbit hole setup}.
To mimic a hole within a dense shell, we set up an elongated box containing two media: the walls of the hole are formed of cold dense gas with sound speed $c_{{\rm c}} = 0.25\,{\rm km\,s}^{-1}$ and $\rho_{{\rm c}}$\,=\,10$^{-18}$\,g\,cm$^{-3}$; the inside of the hole is filled with warm rarefied gas with $c_{{\rm w}} = 10\,{\rm km\, s}^{-1}$ and $\rho_{{\rm w}}$\,=\,10$^{-24}$\,g\,cm$^{-3}$. The two media are not in pressure equilibrium, but, since we only calculate the first time step, this is irrelevant. However, the extreme density contrast renders this test particularly difficult, because nodes that are far away from a given target point are large. Hence, the denser the cold medium is, the more mass will be accumulated in these large and distant nodes, and this can lead to an artificial blocking of the radiation. \textsc{Test parameters}. The computational domain is $6\times 2\times 2\,{\rm pc}^3$. The hole has a square cross-section with side $l_{\rm w}$ and stretches from 0 to $6\,{\rm pc}$. The ionising source is placed at the entrance to the hole, $(x,y,z)=(0,0,0)$, and emits ionising photons at rate $\dot{N}_{_{\rm LyC}} = 10^{49}\,{\rm s}^{-1}$. The test is performed with the hole pointing in the $x$-direction and then in the $z$-direction (as illustrated in Fig.~\ref{fig:rabbit2}, top panels), and with uniform resolution ($384\times 128\times 128\simeq 6.3\times 10^6$ cubic cells with side length $\sim\!0.16\,{\rm pc}$). \textsc{Penetration depth measurement}. The Str{\"o}mgren radius for a star with $\dot{N}_{_{\rm LyC}} = 10^{49}\,{\rm s}^{-1}$, in a uniform medium with $\rho_{{\rm w}}$\,=\,10$^{-24}$\,g\,cm$^{-3}$, is $72\,{\rm pc}$ (see equation~\ref{eq:RStr}), i.e. the radiation should shine right through the rabbit hole. However, for narrow holes (small $l_{\rm w}$), {\sc TreeRay} is unable to obtain the correct solution unless the limiting opening angles are reduced to intolerably small, and hence computationally expensive, values.
This is shown in Fig.~\ref{fig:rabbit2} (bottom panel), where we plot the maximum depth, $l_{\rm d}$, up to which we measure a non-zero radiation energy density as a function of $l_{\rm w}$, for two different settings of the control parameters. \textsc{Expected performance}. Theoretically, we expect $l_{\rm d}$ to fall in the range $l_{\rm w}/(2\theta_\mathrm{lim}) \lesssim l_{\rm d} \lesssim l_{\rm w}/\theta_\mathrm{lim}$. The lower limit corresponds to the situation where the {\sc HealPix} cone widens symmetrically from the target cell towards the source, and touches the walls on both sides of the hole at distance $l_{\rm w}/(2\theta_\mathrm{lim})$. The upper limit corresponds to the situation where one border of the cone is parallel to the wall of the hole and the opposite border touches the wall at $l_{\rm w}/\theta_\mathrm{lim}$. \textsc{Actual performance}. To test this behaviour, we use two settings. In the first setting we adopt a constant opening angle of $\theta_{\rm lim}\!=\!0.25$ corresponding to $N_{_{\rm PIX}}\!=\!192$. For this Oct-Tree resolution, we expect $l_{\rm d} = l_{\rm w}/\theta_{\rm lim} = 4 l_{\rm w}$, which is slightly better than the performance actually achieved. Moreover, simply using more rays without implementing the source and ionisation-front MACs does not improve this result. In the second setting, we adopt a larger $\theta_{\rm lim}\!=\!1.0$ but also implement the physical MACs, with $\theta_{\rm IF} = \theta_{\rm src} = 0.125$, corresponding to {\sc Healpix} level 8, i.e. $N_{_{\rm PIX}}=768$ rays. Despite the large value of $\theta_{\rm lim}$, the results with the additional physical MACs outperform the more expensive simulations with $\theta_{\rm lim} =0.25$ (see bottom panel of Fig.~\ref{fig:rabbit2}), and in the run with $l_{\rm w}=1\,{\rm pc}$ the radiation field reaches the end of the computational domain.
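The geometric bounds above are easy to tabulate for the two settings used in this test; a minimal sketch (values in pc; the helper function is illustrative, not part of {\sc TreeRay}):

```python
# Expected penetration depth l_d for a hole of width l_w, given a limiting
# opening angle theta: l_w/(2*theta) <~ l_d <~ l_w/theta (see text).
def penetration_bounds(l_w, theta):
    return l_w / (2.0 * theta), l_w / theta

for theta, label in [(0.25, "BH MAC only"), (0.125, "IF/Src MACs")]:
    for l_w in [0.2, 0.4, 0.6, 0.8, 1.0]:   # hole widths [pc], as in the test
        lo, hi = penetration_bounds(l_w, theta)
        print(f"{label}: l_w = {l_w:.1f} pc -> l_d in [{lo:.1f}, {hi:.1f}] pc")
```

For $\theta_{\rm IF} = 0.125$ and $l_{\rm w} = 1\,{\rm pc}$ the upper bound ($8\,{\rm pc}$) exceeds the $6\,{\rm pc}$ domain length, consistent with the radiation reaching the end of the domain in that run.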
However, the theoretical scaling of $l_{\rm d} = 8 l_{\rm w}$ is not achieved, due to the high density contrast (a factor of $10^6$) between the warm gas within the hole and the surrounding cold gas. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{rabbit-slice-xz.png} \includegraphics[width=\linewidth]{ld_lw_new2.pdf} \caption{{\it Top 5 panels:} The radiation energy density integrated along the line of sight ($y$-direction) with $\theta_{\rm lim}=1.0$, $\theta_{\rm IF}=\theta_{\rm src} = 0.125$ and -- from top to bottom -- $l_{\rm w} = 0.2, 0.4, 0.6, 0.8$ and $1.0\,{\rm pc}$. Radiation only reaches the end of the computational domain ($z=6\,{\rm pc}$) for $l_{\rm w}\!=\!1\,{\rm pc}$. {\it Bottom panel:} The range of depths up to which the radiation propagates as a function of the width of the rabbit hole. The `error-bars' indicate the minimum and maximum depth, as found by different simulations with the hole pointing in the $x-$ or $z-$directions. The dotted lines show $l_{\rm d} = ml_{\rm w}$ with slopes $m\!=\!2,\,4\;{\rm and}\;8$; $m\!=\!4$ corresponds to the theoretical expectation, $l_{\rm d} = l_{\rm w}/\theta_{\rm lim}$ with $\theta_{\rm lim} = 0.25$, and $m\!=\!8$ corresponds to the theoretical expectation, $l_{\rm d} = l_{\rm w}/\theta_{\rm IF}$ with $\theta_{\rm IF}=\theta_{\rm src} = 0.125$. The results with physical MACs are significantly better than those obtained with $\theta_{\rm lim} = 0.25$, but a slope of $m\!=\!8$ cannot be achieved for the high density contrast simulated here. } \label{fig:rabbit2} \end{center} \end{figure} \subsection{Radiation driven implosion (RDI)} \label{sec:rdi} \begin{figure*} \includegraphics[width=\textwidth]{rdi-cmp-0180} \caption{Radiation driven implosion test, comparison of models (a) through (f). The top panels show the logarithm of the gas column density. The bottom panels show the logarithm of the radiation energy density on the mid-plane. 
All models are plotted at time $180$\,kyr.} \label{fig:rdi:cmp} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{rdi-evol} \caption{Radiation driven implosion, evolution of the fiducial model (f). The top panels show the logarithm of the gas column density at times $0$, $36$, $130$, $180$, $210$ and $270$\,kyr. The bottom panels show the logarithm of the radiation energy density on the mid-plane at the same times.} \label{fig:rdi:evol} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{rdi-Mcold-evol} \caption{Radiation driven implosion: evolution of the neutral gas mass for models (a) through (f), compared with B09 and \citet{LeflochLazareff1994}.} \label{fig:rdi:Mcold} \end{figure} \begin{table} \caption{Accuracy and performance of the RDI test.} \label{tab:acc:rdi} \begin{center} \begin{tabular}{l|c|c|c|c|c|c|r} \hline model & $l_r$ & $\theta_\mathrm{lim}$ & $N_{_{\rm PIX}}$ & $\theta_\mathrm{IF}$ & $e_{M}$ & $t_\mathrm{iter}$ & $t_\mathrm{tr}$ \\ \hline (a) $N_{_{\rm PIX}} = 12$ & $5$ & $1$ & $12$ & $\infty$ & $0.24$ & $0.49$ & $840$ \\ (b) $N_{_{\rm PIX}} = 48$ & $5$ & $0.5$ & $48$ & $\infty$ & $0.22$ & $1.1$ & $1900$ \\ (c) $N_{_{\rm PIX}} = 192$ & $5$ & $0.25$ & $192$ & $\infty$ & $0.12$ & $6.1$ & $10500$ \\ (d) $\theta_\mathrm{IF} = 0.25$ & $5$ & $1$ & $192$ & $0.25$ & $0.11$ & $2.6$ & $4800$ \\ (e) $l_r = 6$ & $6$ & $1$ & $192$ & $0.25$ & $0.05$ & $26$ & $45000$ \\ (f) fiducial & $4,6$ & $1$ & $192$ & $0.25$ & $0.05$ & $1.7$ & $6300$ \\ \hline \end{tabular} \end{center} \begin{flushleft} Column 1 gives the model name. The following columns list: \begin{itemize} \item $l_{r}$: the refinement level defining grid resolution (`5' $\rightarrow 128^2\times 384$; `6' $\rightarrow$ $256^2\times 768$; `4,6' $\rightarrow$ AMR with minimum and maximum refinement levels $4$ and $6$, respectively)
\item $\theta_\mathrm{lim}$: the limiting opening angle for the BH MAC \item $N_{_{\rm PIX}}$: the number of rays (defining the angular resolution) \item $\theta_\mathrm{IF}$: the limiting opening angle for the IF MAC \item $e_\mathrm{M}$: the relative error in the neutral gas mass at $t=0.5$\,Myr \item $t_\mathrm{iter}$: the processor time for a single iteration step (in core-hours) \item $t_\mathrm{tr}$: the processor time in the tree solver for the whole simulation (in core-hours) \end{itemize} \end{flushleft} \end{table} \textsc{RDI}. In this test we study a compact, dense, neutral cloud illuminated by ionising radiation from a single direction. The astronomical motivation is the cometary globules, commonly observed in Galactic H{\sc ii} regions, with bright rims on the side irradiated by a nearby hot star (or stars) and tails pointing in the opposite direction \citep[see e.g.][]{LeflochLazareff1995, Deharveng2010, Schneider2012, Getman2012}. It has been suggested that, in such a configuration, star formation can be triggered by the {\em Radiation Driven Implosion} (RDI) mechanism \citep{Bertoldi1989}, and this mechanism has been extensively studied analytically \citep[e.g.][]{Mellema1998, Miao2009, MackeyLim2010} and numerically using SPH codes \citep[e.g.][and references therein]{KesselDeynetBurkert2003, Gritschneder2010, Bisbas2011, Dale2012}, grid-based hydrodynamic codes \citep[e.g.][]{LeflochLazareff1994, Mellema1998} and magneto-hydrodynamic codes \citep{MackeyLim2010}. \textsc{RDI and TreeRay}. It is usually assumed that the size of the cloud is small, compared with the distance to the radiation source. Technically, this can be arranged, either by using a plane-parallel radiation field (as in the Lefloch \& Lazareff setup), or by setting a large distance between the source and the cloud \citep[as in][hereafter B09]{Bisbas2009}.
We choose the latter option here, even though {\sc TreeRay} can easily be modified to treat a plane-parallel radiation field, and we postpone description of this feature to a future paper. This test is relatively hard for algorithms with limited angular resolution, due to the small angular size of the cloud, as seen from the source. This is why it was chosen by B09, to demonstrate the ability of their algorithm to split rays adaptively, so that the ray separation is everywhere similar to the resolution of the hydrodynamic solver. {\sc TreeRay} achieves a similar resolution at the irradiated border of the cloud, by using reverse ray-tracing, which ensures a small separation between neighbouring rays at the point of the flux calculation. \textsc{RDI setup}. We use a similar setup to B09, which was chosen to resemble as closely as possible the setup defined by \citet{LeflochLazareff1994}. A spherical cloud with mass $M = 20$\,M$_\odot$, radius $R = 0.5$\,pc, and uniform density $\rho_0 = 2.6\times 10^{-21}$\,g\,cm$^{-3}$, is illuminated by a source at distance $D = 3.5$\,pc, emitting ionising photons at rate $N_{_{\rm LyC}} = 3.2\times 10^{48}$\,s$^{-1}$. The neutral gas (i.e. the cloud and the gas in its shadow) has temperature $T_n = 100$\,K and is composed of pure atomic hydrogen, i.e. $\mu_n = 1$. The ionised gas has temperature $T_i = 10^4$\,K and $\mu_i = 0.5$. The computational domain is $2\times 2\times 6\,{\rm pc}^3$. Initially, the source is located at $(x,y,z)=(1,1,0)$\,pc, and the cloud centre at $(1,1,3.6)$\,pc. The whole computational domain (apart from the cloud) is filled with a rarefied gas having density $\rho_\mathrm{amb} = 10^{-24}$\,g\,cm$^{-3}$. We calculate 6 models denoted (a) through (f), for which we vary the angular resolution (parameters $\theta_\mathrm{lim}$ and $N_{_{\rm PIX}}$), the grid resolution (refinement level $l_r$) and the MAC criterion (IF MAC on or off; see Table~\ref{tab:acc:rdi}). 
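As a quick sanity check on this setup, the cloud mass, the cloud's angular size as seen from the source, and the ionising photon flux arriving at the irradiated surface can be recovered from the parameters above with a few lines of Python. This is a standalone consistency check, not part of the {\sc TreeRay} implementation; the physical constants are rounded to four digits.

```python
import math

PC = 3.0857e18          # parsec in cm
MSUN = 1.989e33         # solar mass in g

# RDI setup parameters (see the RDI setup paragraph)
R = 0.5 * PC            # cloud radius
D = 3.5 * PC            # distance from source to cloud centre
rho0 = 2.6e-21          # cloud density [g cm^-3]
N_LyC = 3.2e48          # ionising photon emission rate [s^-1]

# Angular radius of the cloud as seen from the source: much smaller
# than theta_lim = 1, which is why the IF MAC is needed here.
theta_cloud = math.atan(R / D)                       # ~0.14 rad

# Cloud mass implied by the setup; should come out close to 20 Msun.
M = (4.0 / 3.0) * math.pi * R**3 * rho0 / MSUN

# Ionising photon flux arriving at the irradiated cloud surface.
F = N_LyC / (4.0 * math.pi * (D - R)**2)             # [cm^-2 s^-1]

print(f"theta_cloud = {theta_cloud:.3f} rad")
print(f"M = {M:.1f} Msun")
print(f"F = {F:.2e} cm^-2 s^-1")
```

The small value of $\theta_{\rm cloud}$ relative to $\theta_{\rm lim}$ makes explicit why this test stresses the angular resolution of the method.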
Models (a) through (e) use a uniform grid, i.e. the minimum and maximum refinement levels are the same, while model (f) uses adaptive mesh refinement (AMR) with refinement levels $4$ to $6$, so the coarsest grid is $64^2\times 192$ and the finest one is $256^2\times 768$. The AMR criterion refines a block if the maximum density within it exceeds $10^{-21}$\,g\,cm$^{-3}$, and de-refines it if the maximum density drops below $5\times 10^{-22}$\,g\,cm$^{-3}$, thereby ensuring that only the cloud and its immediate surroundings are calculated on the highest resolution. A typical grid structure can be seen in the bottom right panel of Fig.~\ref{fig:rdi:cmp}. We evaluate the morphology of the cloud and its shadow, and compute the mass of the neutral gas as a function of time, comparing these quantities with B09 and \citet{LeflochLazareff1994}. The error in the neutral gas mass is $e_M = (M_\mathrm{n} - M_\mathrm{n,B09})/M_\mathrm{n,B09}$, where $M_\mathrm{n}$ is the neutral gas mass at $t=0.5$\,Myr and $M_\mathrm{n,B09}$ is the reference value from B09. \textsc{RDI evolution}. We select model (f) as the fiducial model, since -- along with model (e) -- it gives the best accuracy at reasonable computational cost. Fig.~\ref{fig:rdi:evol} shows the logarithms of column density (top) and radiation energy density (bottom) for model (f) at a sequence of times. The column density can be compared directly with Figure~15 in B09, and we see good agreement between the two codes. Qualitatively, the cloud evolves in the same way as in previous studies. Initially, the radiation ionises the outer layers of the cloud in the direction of the source and a shock starts to propagate into the remaining neutral gas, compressing it from the sides. At $\sim 130$\,kyr a dense core is formed on the cloud axis near the ionisation front and it re-expands due to its internal thermal pressure, while at the same time being ablated by radiation on the side facing the source. 
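The refinement criterion just described is a simple hysteresis on the maximum density in a block. A minimal sketch of the logic follows; the function name and return convention are illustrative and do not correspond to {\sc Flash}'s actual AMR interface.

```python
REFINE_ABOVE = 1.0e-21    # g cm^-3: refine a block above this max density
DEREFINE_BELOW = 5.0e-22  # g cm^-3: derefine a block below this max density

def amr_decision(max_density, level, level_min=4, level_max=6):
    """Return +1 (refine), -1 (derefine) or 0 (keep), with hysteresis.

    The gap between the two thresholds prevents blocks whose density
    oscillates around a single threshold from being refined and
    derefined on alternate time steps.
    """
    if max_density > REFINE_ABOVE and level < level_max:
        return +1
    if max_density < DEREFINE_BELOW and level > level_min:
        return -1
    return 0

# A block inside the cloud (rho_0 = 2.6e-21 g cm^-3) is refined towards
# level 6, while the ambient medium (1e-24 g cm^-3) stays at level 4.
print(amr_decision(2.6e-21, 5))   # -> 1
print(amr_decision(1.0e-24, 5))   # -> -1
print(amr_decision(7.0e-22, 5))   # -> 0  (inside the hysteresis band)
```

With these thresholds, only the cloud and its immediate surroundings ever reach the maximum refinement level, as stated above.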
Eventually, a cometary tail is formed at $\sim 200$\,kyr. The bottom panels show the radiation energy density and the shadow behind the cloud. Even though a certain amount of radiation diffuses artificially into the shadow region (due to smoothing the edges of the cloud into larger tree nodes), the overall shape of the shadow looks reasonably good. The mass of neutral gas (see Fig.~\ref{fig:rdi:Mcold}) follows almost exactly the curve from B09 up to $t\!\simeq\!0.25$\,Myr, and then becomes slightly higher, leading to a discrepancy $\sim\!5$\% at $t\!\simeq\!0.5$\,Myr. This discrepancy is due to insufficient spatial resolution of the dense core, which starts to be ablated by the ionising radiation after $200$\,kyr. The rate at which gas is ionised is very sensitive to the density of the neutral gas close to the ionisation front, and the SPH code used in B09 uses many more resolution elements to describe the core density profile; in our simulation it is only a few grid cells in diameter. \textsc{RDI comparison}. Fig.~\ref{fig:rdi:cmp} shows the logarithms of the column density (top) and the radiation energy density (bottom), for models (a) through (f), at $t = 180$\,kyr. Models (a) through (c) explore the effect of changing the angular resolution. As expected, a relatively high angular resolution is needed to compute this configuration faithfully. In Model (a), with $\theta_\mathrm{lim}\!=\!1,\;N_{_{\rm PIX}}\!=\!12$, the radiation energy is incorrect by tens of percent, which is mainly due to the very large sizes of tree nodes. In model (b) with $\theta_\mathrm{lim}\!=\!0.5,\;N_{_{\rm PIX}} = 48$, the radiation field is approximately correct in the ionised regions, but the shadow is too wide, and the irradiated side of the cloud is too flat. In both models (a) and (b), the mass of neutral gas is higher than in B09 by $\sim 20$\%. 
In model (c), with $\theta_\mathrm{lim}\!=\!0.25,\;N_{_{\rm PIX}}\!=\!192$, the shape of the cloud and its shadow closely match the results of B09, and the error in the neutral gas mass is about $10\%$. The computational cost increases by a factor between 5 and 6 for each reduction of $\theta_\mathrm{lim}$ by a factor of 2. Model (d) behaves almost exactly like model (c), but computationally it is almost 5 times cheaper, demonstrating how effective the IF MAC can be. Model (e) refines the spatial resolution by a factor of $2$ in each direction (as compared with models (a) through (d)), and this reduces the error to $e_M\sim 5$\% but increases the computational cost by a factor of $\sim 48$ relative to model (d). Part of this (a factor $\sim 16$) is due to the higher number of grid cells and the shorter time step. The remainder (a factor of $\sim 3$) is partly due to the higher number of tree nodes that need to be opened, and partly due to the larger number of evaluation points on each ray. \subsection{Cloud irradiated by two sources} \label{sec:2src} \begin{figure*} \begin{center} \includegraphics[width=1.0\textwidth]{36-2src-erad-comb} \caption{A spherical cloud irradiated by two sources. The top left panel shows the radiation energy density on the $z\!=\!0\,$ plane, calculated analytically. 
The remaining panels show the same quantity for models (a) through (g).} \label{fig:2src} \end{center} \end{figure*} \begin{table} \caption{Accuracy and performance of the two-source test.} \label{tab:acc:2src} \begin{center} \begin{tabular}{l|c|l|r|l|l} \hline model & $l_r$ & $\theta_\mathrm{lim}$ & $N_{_{\rm PIX}}$ & $\theta_\mathrm{IF}, \theta_\mathrm{Src}$ & $t_\mathrm{iter}$ \\ \hline (a) $N_{_{\rm PIX}} = 12$ & $5$ & $1$ & $12$ & $\infty$ & $0.095$ \\ (b) $N_{_{\rm PIX}} = 48$ & $5$ & $0.5$ & $48$ & $\infty$ & $0.3$ \\ (c) $N_{_{\rm PIX}} = 192$ & $5$ & $0.25$ & $192$ & $\infty$ & $1.7$ \\ (d) $\theta_\mathrm{IF} = 0.25$ & $5$ & $1$ & $192$ & $0.25$ & $0.65$ \\ (e) $l_r = 6$ & $6$ & $1$ & $192$ & $0.25$ & $6.8$ \\ (f) AMR & $4,6$ & $1$ & $192$ & $0.25$ & $0.2$ \\ (g) $\theta_\mathrm{IF} = 0.125$ & $5$ & $1$ & $768$ & $0.125$ & $2.1$ \\ \end{tabular} \end{center} \begin{flushleft} Column 1 gives the model name. The following columns list: \begin{itemize} \item $l_{r}$: the refinement level defining the grid resolution (`5' $\rightarrow 128^3$; `6' $\rightarrow$ $256^3$; `4,6' $\rightarrow$ AMR with minimum and maximum refinement levels $4$ and $6$, respectively.) \item $\theta_\mathrm{lim}$: the limiting opening angle for the BH MAC \item $N_{_{\rm PIX}}$: the number of rays (defining the angular resolution) \item $\theta_\mathrm{IF}$, $\theta_\mathrm{Src}$: the limiting opening angles for the IF MAC and the Src MAC, respectively \item $t_\mathrm{iter}$: the processor time for a single iteration step (in core-hours) \end{itemize} \end{flushleft} \end{table} \textsc{Two-source motivation}. This test assesses the fidelity of the code when treating a cloud that is irradiated by two identical sources, from different directions. Similarly to the previous tests (Sections~\ref{sec:rabbithole} and \ref{sec:rdi}), it is sensitive to the angular resolution, represented by $\theta_{\rm lim}$ and $N_{_{\rm PIX}}$, and to the choice of MAC. 
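The analytic reference solution used for comparison in this test treats the cloud as completely opaque: a target point receives the inverse-square flux of each source unless the segment from the source to the point intersects the sphere. A small Python sketch of that reference calculation (coordinates in pc, source luminosity in arbitrary units; the geometry matches the setup described below):

```python
import math

def _sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def shadowed(src, target, centre, R):
    """True if the segment src -> target intersects the opaque sphere."""
    d = _sub(target, src)
    cs = _sub(centre, src)
    # Parameter of the closest approach of the segment to the centre.
    t = max(0.0, min(1.0, _dot(cs, d) / _dot(d, d)))
    closest = (src[0] + t*d[0], src[1] + t*d[1], src[2] + t*d[2])
    dc = _sub(closest, centre)
    return _dot(dc, dc) < R * R

def photon_flux(target, sources, lum, centre, R):
    """Sum the inverse-square flux of all sources not shadowed by the cloud."""
    total = 0.0
    for s in sources:
        if shadowed(s, target, centre, R):
            continue
        d2 = _dot(_sub(target, s), _sub(target, s))
        total += lum / (4.0 * math.pi * d2)
    return total

# Geometry of the two-source test: sources at (-2,0,0) and (0,-2,0) pc,
# opaque cloud of radius 0.5 pc at the origin.
sources = [(-2.0, 0.0, 0.0), (0.0, -2.0, 0.0)]
centre, R = (0.0, 0.0, 0.0), 0.5

# The corner region at (3,3,0) is irradiated by both sources, while a
# point directly behind the cloud, e.g. (2,0,0), sees only source 2.
both = photon_flux((3.0, 3.0, 0.0), sources, 1.0, centre, R)
one  = photon_flux((2.0, 0.0, 0.0), sources, 1.0, centre, R)
```

The doubly irradiated corner region is exactly the feature that models with low angular resolution fail to reproduce in Fig.~\ref{fig:2src}.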
Additionally, it evaluates the iteration and error control procedures (see Section~\ref{sec:err:cntrl}), since their failure would corrupt the symmetry of the radiation field with respect to the plane perpendicular to the line connecting the two sources. \textsc{Two-source setup}. The computational domain is $-2\!\leq\!x\!\leq\!4\,{\rm pc}$, $-2\!\leq\!y\!\leq\!4\,{\rm pc}$ and $-3\!\leq\!z\!\leq\!3\,{\rm pc}$. The two sources are located at $(x,y,z)\!=\!(-2,0,0)$\,pc and $(0,-2,0)$\,pc, and each emits ionising photons at rate $N_{_{\rm LyC}} = 3.2\times 10^{48}$\,s$^{-1}$. The cloud has mass $M = 20$\,M$_\odot$, radius $R = 0.5$\,pc, density $2.6\times 10^{-21}$\,g\,cm$^{-3}$, and is located at the coordinate origin. Outside the cloud, the computational domain is filled with rarefied gas with density $10^{-24}$\,g\,cm$^{-3}$. The other parameters are given in Table~\ref{tab:acc:2src} for all 7 calculated models, (a) through (g). Models (a) through (f) have the same parameters as the corresponding models for the RDI test; model (g) has a very high angular resolution given by $\theta_\mathrm{IF} = \theta_\mathrm{Src} = 0.125$ and $N_{_{\rm PIX}} = 768$ (as used in the Rabbit hole test, Section~\ref{sec:rabbithole}). All the models were run for a single time step to let the {\sc TreeRay} iteration process converge; the time evolution was not explored. \textsc{Two-source results}. We only evaluate this test qualitatively, by comparing the computed radiation field with the analytic solution. Fig.~\ref{fig:2src} shows the radiation energy density on the $z\!=\!0$ plane for all models, and for the analytic solution, assuming a completely opaque cloud. We see that none of the models exhibit significant deviation from symmetry about the line $x\!=\!y$. Model (a), with the lowest angular resolution, exhibits the largest deviations from the analytic solution.
The shadow behind the cloud has the wrong shape, some radiation leaks into the shadowed region, and the radiation energy drops to zero at distances $D\ga\!3$\,pc from the cloud. The last effect is caused by the large tree opening angle $\theta_\mathrm{lim} =1$: at large distance from a target cell, the two sources and parts of the cloud are merged into a single node with high absorption coefficient. The incorrect shadow geometry is the result of both a low number of rays and a high $\theta_\mathrm{lim}$. In model (b) the drop to zero disappears, but distortion of the shadow geometry is still significant. The shadow geometry is approximately correct in model (c) with $\theta_\mathrm{lim} = 0.25$ and $192$ rays. Nevertheless, even this relatively high angular resolution is not sufficient to reproduce the radiation energy in the top right corner, which should be relatively high, because this region is irradiated by both sources. Model (d) shows that almost the same radiation field as in model (c) can be obtained at lower computational costs by invoking the physical MACs with $\theta_\mathrm{IF} = \theta_\mathrm{Src} = 0.25$. Models (e) and (f) show that the grid resolution and the AMR have negligible effect on the radiation field. Finally, model (g) with very high angular resolution (shown in the top right corner) better reproduces the radiation field (though not perfectly) showing that the method converges to the correct solution with increasing angular resolution. \section{Star formation and feedback with TreeRay} \label{sec:SFFeedback} \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{RadWindChem-evol.png} \caption{Star formation and feedback test: the fiducial model (a) at times $t\!=\!10$, $250$, $570$ and $1000$\,kyr (from left to right). The top row shows the logarithm of the column density. 
The middle row shows the logarithm of the radiation energy, in the region with non-zero ionisation degree, and the logarithm of the gas density, in the remaining parts (i.e. for the neutral gas only). The bottom row shows the logarithm of the gas temperature. Panels in the middle and bottom rows show the quantities on the $z=0$ plane.} \label{fig:sff:evol} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{40-m-phases} \caption{Star formation and feedback: the evolution of the total radiation energy, $E_\mathrm{rad}$, (thin lines, righthand ordinate), and the mass of ionised gas, $M_\mathrm{H^{+}}$, (thick lines, lefthand ordinate), in the whole computational domain, for models (a) through (e). The main figure shows the first $500$\,kyr of evolution for model (a) only (black lines). The inset shows the evolution between $500$ and $600$\,kyr for all models. Note that, even at the resolution of the inset, the differences in $E_\mathrm{rad}$ between models (a), (b), (d) and (e) cannot be resolved.} \label{fig:sff:cmp} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{40-timing} \caption{The effect of load balancing in the Star Formation and Feedback test. {\it Top panel:} the number of blocks on each processor for model (a) with load balancing off (red, flat distribution), model (a) with load balancing on (blue), and model (b) with load balancing on (magenta). {\it Bottom panel:} the duration of the tree walk on each processor for the same three cases. 
The measurements shown have been made at time $t\!=\!500$\,kyr, but are representative of most of the run.} \label{fig:sff:lb} \end{center} \end{figure} \begin{table*} \caption{Star formation \& feedback test.} \label{tab:sff} \begin{center} \begin{tabular}{l|c|c|l|r|l|c|l|c|c|c|c|r|r|r|r|r} \hline model & $N_\mathrm{src}$ & $l_r$ & $\theta_\mathrm{lim}$ & $N_{_{\rm PIX}}$ & $\theta_\mathrm{IF}$ & $v_\mathrm{tsdt}$ & $\delta_{e_{_{\mathrm{EUV}}}}$ & $n_\mathrm{steps}$ & $n_\mathrm{tr}$ & $e_\mathrm{Erad}$ & $e_\mathrm{M_{H^{+}}}$ & $t_\mathrm{hydro}$ & $t_\mathrm{chem}$ & $t_\mathrm{part}$ & $t_\mathrm{tr}$ & $t_\mathrm{tot}$ \\ \hline (a) fiducial & 100 & $6$ & $1.0$ & $48$ & $0.5$ & $50$ & cell & 4253 & 309 & -- & -- & 1430 & 1000 & 980 & 950 & 4600 \\ (b) $v_\mathrm{tsdt} = \infty$ & 100 & $6$ & $1.0$ & $48$ & $0.5$ & $\infty$ & cell & 4227 & 36278 & $0.005$ & $0.011$ & 2030 & 1520 & 1180 & $89000$ & $94500$ \\ (c) $N_{_{\rm PIX}} = 192$ & 100 & $6$ & $1.0$ & $192$ & $0.25$ & $50$ & cell & 4997 & 358 & $0.12$ & $0.085$ & 1700 & 1160 & 1140 & 4500 & 8800 \\ (d) $v_\mathrm{tsdt} = 25$ & 100 & $6$ & $1.0$ & $48$ & $0.5$ & $25$ & cell & 4253 & 180 & $0.008$ & $0.016$ & 1460 & 990 & 960 & 630 & 4300 \\ (e) $\delta_{e_{_{\mathrm{EUV}}}} = \mathrm{tot}$ & 100 & $6$ & $1.0$ & $48$ & $0.5$ & $25$ & tot & 4288 & 22 & $0.01$ & $0.032$ & 1440 & 1010 & 970 & 240 & 3900 \\ (f) 1 source & 1 & $6$ & $1.0$ & $48$ & $0.5$ & $50$ & cell & 2690 & 156 & -- & -- & 890 & 670 & 370 & 2000 & 4060 \\ \end{tabular} \end{center} \begin{flushleft} Column 1 gives the model name. The following columns list: \begin{itemize} \item $N_\mathrm{src}$: the number of sources \item $l_{r}$: the refinement level defining the grid resolution (`5' $\rightarrow 128^3$; `6' $\rightarrow 256^3$; `4,6' $\rightarrow$ AMR with minimum and maximum refinement levels $4$ and $6$, respectively.)
\item $\theta_\mathrm{lim}$: the limiting opening angle for the BH MAC \item $N_{_{\rm PIX}}$: the number of rays (defining the angular resolution) \item $\theta_\mathrm{IF}$: the limiting opening angle for the IF MAC; the parameter $\theta_\mathrm{Src}$ of the Src MAC is set to the same value \item $v_\mathrm{tsdt}$: the velocity limit in km/s for the adaptive tree solver time step \item $\delta_{e_{_{\mathrm{EUV}}}}$: error control method (either $\delta_{e_{_{\mathrm{EUV}}},\mathrm{cell}}$ given by equation~\ref{eq:delta:er:cell}, or $\delta_{e_{_{\mathrm{EUV}}},\mathrm{tot}}$ by equation~\ref{eq:delta:er:tot}; see Section~\ref{sec:err:cntrl}) \item $n_\mathrm{steps}$: the number of hydrodynamic time steps for the whole run \item $n_\mathrm{tr}$: the number of tree solver iterations for the whole run \item $e_\mathrm{Erad}$: the maximum fractional difference in the total radiation energy, relative to model (a) \item $e_\mathrm{M_{H^{+}}}$: the maximum fractional difference in the total mass of ionised gas, relative to model (a) \item $t_\mathrm{hydro}$: the processor time spent in the hydro module, for the whole run (in core-hours) \item $t_\mathrm{chem}$: the processor time spent in the chemistry module, for the whole run (in core-hours) \item $t_\mathrm{part}$: the processor time spent in the particles module, for the whole run (in core-hours) \item $t_\mathrm{tr}$: the processor time spent in the tree solver, for the whole run (in core-hours) \item $t_\mathrm{tot}$: the processor time spent in all modules, for the whole run (in core-hours) \end{itemize} \end{flushleft} \end{table*} \textsc{Star Formation and Feedback motivation}. The purpose of the final test is to demonstrate the combination of {\sc TreeRay} with other physical modules that are often used in simulations of star formation with feedback, and to evaluate the code performance under realistic conditions. We derive this test setup from model CNM~60 (i.e.
$60$\,M$_{\odot}$ star located in the cold neutral medium) of \citet{Haid2018} who explore the relative impact of radiation and stellar winds in different environments. In the CNM~60 model, a source of radiation and stellar wind representing a $60$\,M$_\odot$ star is placed in a dense cold neutral medium, resulting in the formation of an expanding H{\sc ii} region with a stellar wind bubble at its centre. A challenging aspect of this test is the combination of relatively complex physics (related to radiation and cold gas chemistry) with the high velocity of stellar winds, $\ga 1000$ km/s (obliging the hydrodynamic solver to take very short time steps, due to the Courant-Friedrichs-Lewy criterion). In order to demonstrate the ability of {\sc TreeRay} to deal efficiently with a large number of sources, we split the single source used by \citet{Haid2018} into $100$ smaller sources, which together emit ionising photons at the same net rate, and together deliver the same total wind power (with the same wind velocity, so the mass loss rate from each individual star is simply divided by 100). The 100 sources are distributed randomly in a sphere of radius $8$\,pc, within a cloud of radius $10$\,pc, to represent a toy-model star cluster. \textsc{Physical processes}. The physical processes included in this test and the corresponding {\sc Flash} modules are the following. In addition to the standard {\sc Flash} PPM hydrodynamic solver, we use the tree solver to calculate the gas self-gravity, and the {\sc TreeRay/OpticalDepth} module to include the ambient interstellar radiation field (see Paper~I for both modules). The ionising radiation is treated using the {\sc TreeRay/OnTheSpot} module described here, and its coupling to the chemistry module is given in \citet{Haid2018}. The {\sc Chemistry} module implements a network with 7 active species (H$_2$, H, H$^{+}$, CO, C$^{+}$, O, and $e^{-}$; see \citealt{Walch2015, Glover2010, Nelson1997} for details). 
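The source splitting described above conserves the feedback budget by construction: keeping the wind velocity fixed and dividing the mass-loss rate by 100 divides the wind luminosity $L_{\rm w} = \frac{1}{2}\dot{M}v_{\rm w}^2$ of each sub-source by 100, so the totals are unchanged. A short Python check (the single-star values are those implied by multiplying the per-source values given below by 100):

```python
# Conservation check for splitting the single CNM 60 source into 100
# sub-sources (values from the star-formation-and-feedback setup).
MSUN_PER_YR = 1.989e33 / 3.156e7    # converts Msun/yr to g/s

N_src   = 100
N_LyC_1 = 2.4e50                    # implied single star: total ionising rate [s^-1]
Mdot_1  = 3.0e-6                    # implied single star: mass loss [Msun/yr]
v_w     = 2700.0e5                  # wind velocity [cm/s], kept unchanged

N_LyC_i = N_LyC_1 / N_src           # 2.4e48 s^-1 per sub-source
Mdot_i  = Mdot_1 / N_src            # 3e-8 Msun/yr per sub-source

# Wind luminosity L_w = 0.5 * Mdot * v_w^2 of the single star and the
# sum over the 100 sub-sources agree, because v_w is unchanged.
L_w_1   = 0.5 * Mdot_1 * MSUN_PER_YR * v_w**2
L_w_sum = N_src * 0.5 * Mdot_i * MSUN_PER_YR * v_w**2
assert abs(L_w_sum - L_w_1) < 1e-9 * L_w_1
```

The per-sub-source values recovered here match those listed in the physical-parameters paragraph below.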
The sources are modelled as {\sc Flash} sink particles \citep{Federrath2010}, and move under the influence of the gravitational field, but accretion onto sinks is switched off. Stellar winds are treated with the procedure described by \citet{Gatto2017} and implemented in the {\sc FeedbackSinks} module. \textsc{Physical parameters}. Apart from the number of sources, the parameters are similar to \citet{Haid2018}. The computational domain is a $30\times 30\times 30\,{\rm pc}^3$ cube and the grid is uniform with refinement level $l_r = 6$ (corresponding to $256^3$ grid cells). The hydrodynamic boundary conditions are set to ``diode'', and the gravitational boundary conditions are ``isolated''. The interstellar radiation field, from which the heating is calculated by the {\sc TreeRay/OpticalDepth} module, has strength $G_0 = 1.7$ \citep{Habing1968, Draine1978}. At the centre of the computational domain is a cold neutral cloud with mass $1.3\times 10^4\,{\rm M}_\odot$, radius $10$\,pc, density $2.1\times 10^{-22}$\,g\,cm$^{-3}$, and temperature $20$\,K. The remainder of the computational domain is filled with a rarefied ambient medium having density $10^{-24}$\,g\,cm$^{-3}$. The sources are positioned randomly in a sphere of radius $8\,{\rm pc}$, centred on the centre of the cloud. Each of the $100$ sources has ionising output $N_{\rm LyC} = 2.4\times 10^{48}$\,s$^{-1}$, surface temperature $T = 45000$\,K, wind mass loss rate $3\times 10^{-8}$\,M$_\odot$\,yr$^{-1}$, and wind velocity $2700$\,km/s. \textsc{Technical parameters}. We evaluate this test by comparing runs computed with different {\sc TreeRay} parameters. The model is most interesting, and computationally most demanding, when (i) a significant fraction of the cloud is ionised, and (ii) there is hot shocked stellar-wind gas. This happens after several hundred kyr, and therefore we only compare the runs during the time period from 500 to 600\,kyr. The first 500\,kyr are only calculated once, with model (a). 
The parameters of the fiducial model (a) are selected as a compromise between accuracy and performance. Model (a) uses moderate angular resolution with $N_{_{\rm PIX}} = 48$ and {\sc TreeRay}-specific MACs with $\theta_\mathrm{IF} = 0.5$ and $\theta_\mathrm{Src} = 0.5$. This allows us to adopt a large general opening angle $\theta_\mathrm{lim} = 1.0$. Since the presence of very hot gas results in a very short hydrodynamic time step, we apply the tree solver time step (see Section~\ref{sec:tsdt}) and set the velocity limit to $v_\mathrm{tsdt} = 50$\,km/s. Model (b) does not use the tree solver time step ($v_\mathrm{tsdt} = \infty$) and the tree solver is called at each hydrodynamic time step; hence, (b) is by far the most expensive model. Model (c) differs from the fiducial model by invoking higher angular resolution, with $N_{_{\rm PIX}} = 192$ and $\theta_\mathrm{IF} = \theta_\mathrm{Src} = 0.25$. Models (d) and (e) explore the behaviour when the low velocity limit for the tree solver time step is reduced to $v_\mathrm{tsdt} = 25$\,km/s; model (e) also adopts the less demanding error control criterion involving the total radiation energy. Finally, model (f) involves only a single source of radiation and stellar wind, located at the centre of the cloud, with a total photon emissivity and mass loss rate equal to the sum of the 100 sources of the fiducial model; this model is intended to reproduce model CNM~60 from \citet{Haid2018}. The parameters of all models are summarized in Table~\ref{tab:sff}. \textsc{Model evolution}. Fig.~\ref{fig:sff:evol} illustrates the first Myr of evolution for the fiducial model (a). At early times, H{\sc ii} regions appear around the sources and start to expand at $\sim 6$\,km/s, in agreement with the Spitzer solution (Eqs. \ref{eq:spitzer} and \ref{eq:RStr}). The H{\sc ii} regions around neighbouring sources eventually merge. 
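The early expansion can be compared with the Spitzer solution referred to above. The following sketch evaluates it for a single sub-source embedded in the undisturbed cloud; the case-B recombination coefficient $\alpha_{\rm B} = 2.7\times 10^{-13}\,{\rm cm^3\,s^{-1}}$ and the assumption of a pure-hydrogen cloud are ours, and the exact form used by the code is given by Eqs. \ref{eq:spitzer} and \ref{eq:RStr}.

```python
import math

PC, KYR = 3.0857e18, 3.156e10   # cm, s
m_H = 1.6726e-24                # hydrogen mass [g]
alpha_B = 2.7e-13               # case-B recombination coefficient (assumed)

# Cloud and (single sub-)source parameters from the setup above.
rho = 2.1e-22                   # cloud density [g cm^-3]
n = rho / m_H                   # ~126 cm^-3, pure hydrogen assumed
N_LyC = 2.4e48                  # ionising rate per sub-source [s^-1]
c_i = 12.85e5                   # sound speed [cm/s] for T=1e4 K, mu=0.5

# Stroemgren radius of one sub-source in the undisturbed cloud.
R_St = (3.0 * N_LyC / (4.0 * math.pi * alpha_B * n**2))**(1.0 / 3.0)

def R_spitzer(t):
    """Spitzer expansion law for a D-type ionisation front."""
    return R_St * (1.0 + 7.0 * c_i * t / (4.0 * R_St))**(4.0 / 7.0)

def v_spitzer(t):
    """Expansion velocity dR/dt = c_i * (R_St / R)^(3/4)."""
    return c_i * (R_St / R_spitzer(t))**0.75

print(f"R_St = {R_St / PC:.2f} pc")
print(f"v(100 kyr) = {v_spitzer(100 * KYR) / 1e5:.1f} km/s")
```

The velocity starts at $c_i$ when the front becomes D-type and decays to a few km/s as the front expands, consistent with the expansion speeds quoted above.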
At $t \simeq 70$\,kyr, some hot gas, resulting from shocked stellar winds, appears and quickly expands due to its high pressure; consequently the H{\sc ii} regions are squeezed between the hot wind bubbles on the inside and the surrounding shells of swept-up cold neutral gas on the outside. The bubbles and H{\sc ii} regions continue to expand and merge, and at the same time the cloud slowly collapses due to its self-gravity. Eventually, at $t \simeq 250$\,kyr, some bubbles reach the cloud edge and break out, in the process known as a {\it champagne flow} \citep{TenorioTagle1979}. Thereafter, the warm photo-ionised gas of the H{\sc ii} regions, and the shocked hot wind-gas, start to flow out of the cloud, with the shocked hot wind-gas also being accelerated by the buoyancy force. The remainder of the cloud decays into a network of filaments, which expands slowly outwards, accelerated by the pressure of the warm and hot gas, and by the rocket effect \citep{OortSpitzer1955}. Some structures formed in the later stages of evolution resemble the {\it elephant trunks} that are frequently observed in star-forming regions \citep[see e.g.][]{Hillenbrand1993, McLeod2015}.\\ \textsc{Comparison of models}. Fig.~\ref{fig:sff:cmp} compares the evolution of models (a) through (e) between 500 and 600\,kyr. The thin lines show the total radiation energy in the computational domain ($E_\mathrm{rad}$); its maximum fractional difference relative to model (a), $e_\mathrm{Erad}$, is given in Table~\ref{tab:sff}. Models (a), (b), (d) and (e) have almost the same $E_\mathrm{rad}$, with relative differences of order 1\% or smaller. For model (c), $E_\mathrm{rad}$ is higher by $\sim$10\%, because its higher angular resolution allows the radiation to follow the curved surfaces of irregular shells more closely, resulting in slightly larger H{\sc ii} regions.
The thick lines in Fig.~\ref{fig:sff:cmp} show the total mass of ionised gas, $M_\mathrm{H^{+}}$; its maximum fractional difference relative to model (a), $e_\mathrm{M_{H^{+}}}$, is again given in Table~\ref{tab:sff}. For model (c), $M_\mathrm{H^{+}}$ is higher than for model (a) by $\sim$8.5\%, again because of its higher angular resolution. Models (a), (b), (d) and (e) show $M_\mathrm{H^{+}}$ differing by amounts of order 1\%, with higher $M_\mathrm{H^{+}}$ for models with a shorter time between calls to the tree solver (i.e. higher $v_\mathrm{tsdt}$). This is because, during time steps when the tree solver is not called, the shells continue to expand and a small fraction of the ionised gas flows into regions that are not irradiated, where it recombines, and so $M_\mathrm{H^{+}}$ drops unphysically (see the periods of decrease in the saw-tooth pattern of models (d) and (e)). We conclude that the angular resolution (i.e. parameters $N_{_{\rm PIX}}$, $\theta_\mathrm{IF}$ and $\theta_\mathrm{Src}$) has an impact on the accuracy of calculations of this type. In contrast, the tree solver time step parameter, $v_\mathrm{tsdt}$, seems to have little impact, provided $v_\mathrm{tsdt} \gtrsim 50$\,km/s. Indeed, even models (d) and (e) with $v_\mathrm{tsdt} = 25$\,km/s give satisfactory results, and would be suitable for quick tests scanning the parameter space. \textsc{Performance}. Table~\ref{tab:sff} shows the total CPU times spent in the four computationally most demanding modules: the hydrodynamic solver ($t_\mathrm{hydro}$), the {\sc Chemistry} module ($t_\mathrm{chem}$), the particle module ($t_\mathrm{part}$), and the tree solver including {\sc TreeRay} ($t_\mathrm{tr}$). In model (a), the times taken by these four modules are comparable, with $t_\mathrm{tr}\sim 2t_\mathrm{hydro}/3$.
The small $t_\mathrm{tr}$ is largely due to setting a finite $v_\mathrm{tsdt}$; the tree solver is called only $309$ times, while the hydrodynamic solver is called $4253$ times (columns $n_\mathrm{tr}$ and $n_\mathrm{steps}$ in Table~\ref{tab:sff}, respectively). The benefits of setting a finite $v_\mathrm{tsdt}$ are further illustrated by model (b), where $v_\mathrm{tsdt}$ is left at its default value of $\infty$; the results are indistinguishable from model (a), but the tree solver takes $\sim$90 times longer than in model (a). Model (c), with finer angular resolution, is approximately two times slower than model (a), and the tree solver takes approximately three times longer than the hydrodynamic solver. In models (d) and (e), the tree solver is called less often than in model (a), and $t_\mathrm{tr}$ is proportionally smaller; model (e) uses the total radiation energy as the error control, and {\sc TreeRay} does not need to iterate at all. Model (f), with a single source of radiation and wind, needs almost half as many hydrodynamic time steps, due to the lower maximum temperature of the hot shocked gas. This also results in a smaller number of tree solver calls than in model (a). However, each tree solver iteration takes more time, because the ionisation front has a larger surface area, and consequently a higher number of tree nodes must be opened. \textsc{Load balancing}. Models (a) through (f) have been calculated with load balancing switched on (see Section~\ref{sec:lb}). In order to evaluate the impact of load balancing on the tree solver performance, we calculate a few time steps of model (a), starting at 500\,kyr, with load balancing off. The top panel of Fig.~\ref{fig:sff:lb} compares the number of blocks per processor for model (a) with load balancing on and off, and for model (b) with load balancing on. The bottom panel shows, for the same three cases, the time spent in a single tree walk on each processor.
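Load balancing aims to equalise the per-processor tree-walk cost rather than the block count. The principle can be illustrated with a greedy longest-processing-time assignment; this is a toy sketch only, and the scheme actually implemented is described in Section~\ref{sec:lb}.

```python
import heapq

def balance_blocks(costs, n_proc):
    """Greedy LPT heuristic: assign blocks (with measured tree-walk
    costs) so that the most loaded processor finishes as early as
    possible.  Returns the final load of each processor."""
    heap = [(0.0, p, []) for p in range(n_proc)]
    heapq.heapify(heap)
    for cost in sorted(costs, reverse=True):
        load, p, blocks = heapq.heappop(heap)   # least loaded processor
        blocks.append(cost)
        heapq.heappush(heap, (load + cost, p, blocks))
    return [load for load, _, _ in sorted(heap, key=lambda x: x[1])]

# 8 'processors'; block costs vary by a factor of ~10, since cells near
# an ionisation front open many more tree nodes than quiescent cells.
costs = [10, 9, 9, 8, 3, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
loads = balance_blocks(costs, 8)
# The tree-walk time is set by the slowest processor, max(loads); a naive
# equal-count split could instead leave one processor with 10 + 9 = 19.
```

Because the walk time is dictated by the slowest processor, flattening the load distribution directly reduces the waiting time described above.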
It can be seen that models with load balancing on have smaller variations in the tree walk time, and that the variation is much smaller for model (b), where the tree solver is called at every time step. Since processors that finish the tree walk earlier have to wait for the slowest processor, we estimate that in model (a) the load balancing decreases the tree walk time from $\sim$\,60\,seconds to $\sim$\,40\,seconds, saving approximately 30\% of the tree solver time. \section{Scaling tests} \label{sec:perf} \subsection{Weak and strong scaling} \label{sec:strong_sc} \textsc{Hardware for scaling tests}. We carry out weak and strong scaling tests for the {\sc TreeRay} algorithm based on the Spitzer test (see Section~\ref{sec:spitzer}). The tests are run on the HPC system COBRA, hosted by the Max-Planck Computing and Data Facility (Rechenzentrum Garching). COBRA is equipped with Intel Xeon `Skylake' processors: in total there are 3188 compute nodes, each with 40 cores running at 2.4\,GHz and more than 2.4\,GByte of memory per core, of which at least 2.2\,GByte per core is available to applications. \textsc{Strong scaling}. Fig.~\ref{fig:scale_strong} shows the results of the strong scaling test. We plot the time in seconds spent in the tree build, communication, tree walk, and radiative transfer calculation, measured over 10 time steps of the Spitzer test. The spatial resolution is set to $512^3$ cells, and therefore the average number of blocks per core changes from 819.2 on 320 cores, to 25.6 on 10240 cores. The scaling is very good, showing an almost ideal, linear speedup with the number of cores. \textsc{Weak scaling}. Fig.~\ref{fig:scale_weak} shows the result of the weak scaling test. The simulations are chosen such that the average number of blocks per core is always 102.4. In order to achieve this, we change the resolution of the Spitzer test from $128^3$ cells on $40$ cores to $512^3$ cells on $2560$ cores.
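The departure from ideal weak scaling can be summarised by a single overhead factor between two core counts, assuming the power-law form $t \propto N_{_{\rm core}}^{\,\alpha}$ with the fitted exponent $\alpha = 0.075$ reported below (ideal weak scaling corresponds to $\alpha = 0$):

```python
# Weak-scaling model: per-step tree solver time t(N_core) ~ N_core^0.075.
def weak_scaling_overhead(n_small, n_large, exponent=0.075):
    """Ratio of per-step wall-clock times between two core counts;
    1.0 would be ideal weak scaling."""
    return (n_large / n_small) ** exponent

ratio = weak_scaling_overhead(40, 2560)
# Going from 40 to 2560 cores (64x more cells and cores), the fitted
# model predicts the tree solver takes only ~37% longer per step.
print(f"{ratio:.2f}")   # -> 1.37
```

This small overhead factor is what the near-flat weak-scaling curve in Fig.~\ref{fig:scale_weak} expresses graphically.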
The processor time in the tree solver depends only very weakly on the number of cores, $N_{_{\rm core}}$, and can be approximated by a power law, $\propto N_{_{\rm core}}^{\,0.075}$. This is close to ideal weak scaling, for which the wall-clock time would be the same for all simulations. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{scaling_results_strong_speed.png} \caption{Strong scaling on up to 10240 cores, showing almost linear speedup for the Spitzer test.} \label{fig:scale_strong} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{scaling_results_weak_speed.png} \caption{Log-linear plot showing the results of a weak scaling test on 40 to 20480 cores. The scaling within one node is not ideal, but for more than 40 cores, the scaling is close to ideal. A power-law fit gives a very weak dependence on the number of cores, $\propto N_{_{\rm core}}^{\,0.075}$.} \label{fig:scale_weak} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{scaling_results_nsources.png} \caption{Scaling with the number of sources. The setup consists of a uniform density medium that contains from $N_{\rm src}=10$ up to 10$^4$ randomly distributed ionising sources. The log-linear plot shows the integration time normalised to the time it takes to calculate 10 time steps for only 10 sources, i.e. 10 Spitzer bubbles (left-most point). As $N_{\rm src}$ is increased by a factor of 1000, the integration time increases by only $\sim$5\%, i.e. it is almost independent of the number of sources.} \label{fig:scale_nsource} \end{center} \end{figure} \subsection{Scaling with the number of sources} \label{sec:perf:nsrc} \textsc{Number-of-sources scaling test setup}.
To test the extent to which the performance of the code degrades as the number of ionising sources is increased, we set up a $30\times 30\times 30\,{\rm pc}^3$ computational domain, containing atomic hydrogen with uniform density $\rho=7.63\times 10^{-22}\,{\rm g}\,{\rm cm}^{-3}$ and uniform temperature $T = 10$\,K. $N_{\rm src}$ sources are placed randomly in the computational domain, and each source emits ionising photons at a rate $N_{\rm LyC} = 5\times 10^{48} \,{\rm s}^{-1}$ into an injection region with radius $r_{\rm inj} = 0.32$ pc (which corresponds to about 1.4 cells). $N_{\rm src}$ is set to 10, 32, 100, 316, 1000, 3162 and 10\,000. All setups are evolved for 10 time steps on 32 cores, in order to measure the integration time accurately. \textsc{Number-of-sources scaling results}. Fig.~\ref{fig:scale_nsource} shows the resulting simulation times as a function of $N_{\rm src}$, normalised to the simulation time for $N_{\rm src}=10$. For $N_{\rm src}\la 10^3$ there is essentially no difference in the simulation times. For $N_{\rm src}> 10^3$, the simulation time increases slightly and becomes $\sim$5\% longer for $N_{\rm src}\!=\!10^4$ than for $N_{\rm src}\!=\!10$. We conclude that the simulation time is very nearly independent of the number of sources, as expected from the algorithm design. Therefore, the algorithm is an excellent basis for implementing radiation transport schemes in which every grid cell represents a source of radiation, e.g. emission from hot gas or dust. \section{Summary} \label{sec:summary} In this paper we describe {\sc TreeRay}, a new, fast algorithm for treating radiation transport in gaseous media. It is based on the combination of reverse ray tracing \citep[e.g.][]{Altay2013} and a tree-based accelerated integration scheme \citep{Barnes1986}. In general, the incident flux of radiation is computed for every grid cell, but it can also be computed for any other target point in the computational domain, for example the position of a sink particle.
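For orientation, the physical scale implied by the number-of-sources test parameters above can be estimated via the initial Str\"omgren radius of a single source. This sketch assumes pure atomic hydrogen and a case-B recombination coefficient $\alpha_{\rm B} \approx 2.59\times 10^{-13}\,{\rm cm^3\,s^{-1}}$; both are assumptions for illustration, not values taken from the text:

```python
import math

# Illustrative sketch (not from the paper): initial Stroemgren radius
# implied by the number-of-sources test parameters.  alpha_B and the
# pure-hydrogen composition are assumed values.

m_H     = 1.6726e-24   # hydrogen atom mass [g]
alpha_B = 2.59e-13     # case-B recombination coeff. [cm^3 s^-1] (assumed, T ~ 1e4 K)
pc      = 3.0857e18    # parsec [cm]

rho   = 7.63e-22       # uniform density of the test medium [g cm^-3]
N_LyC = 5.0e48         # ionising photon emission rate per source [s^-1]

n_H = rho / m_H        # hydrogen number density [cm^-3], roughly 456
R_S = (3.0 * N_LyC / (4.0 * math.pi * n_H**2 * alpha_B)) ** (1.0 / 3.0)

print(f"n_H = {n_H:.0f} cm^-3, R_S = {R_S / pc:.2f} pc")
```

Under these assumptions each source carves out an initial ionised region of roughly $0.9$ pc, i.e. well above the injection radius $r_{\rm inj} = 0.32$ pc and small compared to the $30$ pc domain.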
From every target point, reverse ray tracing is executed in $N_{_{\rm PIX}}$ directions (hence the angular resolution is user-defined), and the directions are interpreted as cones with equal solid angle, based on the {\sc HealPix} scheme \citep{Gorski2005}. Due to the equal solid-angle pixelation, every direction's contribution carries equal weight. In the limit of infinite angular resolution, {\sc TreeRay} converges to the long characteristics method, which is very accurate but usually prohibitively expensive for time-dependent astrophysical simulations. {\sc TreeRay} treats all the gas in the computational domain, and can capture the shadows of even quite small and dense objects, with the limitation that structures at large distances from a target point are smoothed out over a solid angle $\,\sim\!4\pi /N_{_{\rm PIX}}\!$. The smoothing is controlled by the {\sc HealPix} resolution (user-specified $N_{_{\rm PIX}}\!$) and the limiting opening angles set for the Multipole Acceptance Criteria (user-specified $\theta_{\rm lim},\,\theta_{\rm IF},\,\theta_{\rm Src}$). The number of evaluation points at which the radiative transfer equation is integrated along a given ray is of secondary importance. A key strength of {\sc TreeRay} is that its computational cost is essentially independent of the number of radiation sources. This enables {\sc TreeRay} to treat large star clusters with many radiation sources, and extended sources like radiatively cooling shock fronts or cool dust clouds, without incurring an unacceptable computational overhead. Furthermore, {\sc TreeRay} scales extremely well with the number of processors, which is due to the communication and local tree-walk strategy of the scheme \citep[see also][]{Wunsch2018}. We demonstrate {\it both} an almost ideal weak scaling up to $\sim 2.5\times 10^3$ cores, {\it and} an almost ideal strong scaling on up to $\sim 10^4$ cores (which is usually even harder to achieve).
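The angular smoothing scale quoted above can be made concrete: {\sc HealPix} divides the sphere into $N_{_{\rm PIX}} = 12\,\mathit{nside}^2$ equal-area pixels, so each ray cone subtends a solid angle $4\pi/N_{_{\rm PIX}}$. The following minimal sketch (pure Python, not the actual implementation, which uses the {\sc HealPix} library) converts that solid angle into an effective cone half-opening angle:

```python
import math

def healpix_cone(nside):
    """Solid angle and approximate angular radius of one HEALPix pixel.

    HEALPix tessellates the sphere into N_PIX = 12 * nside^2 equal
    solid-angle pixels; treating each pixel as a circular cone of the
    same solid angle gives an effective half-opening angle via
    Omega = 2*pi*(1 - cos(theta))."""
    n_pix = 12 * nside * nside
    omega = 4.0 * math.pi / n_pix                                   # sr per pixel
    theta = math.degrees(math.acos(1.0 - omega / (2.0 * math.pi)))  # cone half-angle
    return n_pix, omega, theta

for nside in (1, 2, 4):
    n_pix, omega, theta = healpix_cone(nside)
    print(f"nside={nside}: N_PIX={n_pix}, Omega={omega:.3f} sr, "
          f"half-angle ~{theta:.1f} deg")
```

For instance, at the coarsest resolution ($\mathit{nside}=1$, $N_{_{\rm PIX}}=12$) each cone covers about 1.05 sr, i.e. a half-opening angle of roughly $34^\circ$; quadrupling $N_{_{\rm PIX}}$ halves this angle.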
{\sc TreeRay} can easily be extended to include additional radiative transfer sub-modules. Sub-modules already in preparation include the transfer of non-ionising radiation, including the emission from dust and the associated radiation pressure (Klepitko et al., in prep.); the multi-wavelength transfer of X-rays originating from point sources such as high-mass X-ray binaries \citep[Gaches et al., in prep., based on the diffuse X-ray radiative transfer scheme with the {\sc TreeRay/OpticalDepth} module developed by][]{Mackey2019}; and the multi-wavelength transfer of far-ultraviolet (FUV) and extreme-ultraviolet (EUV) radiation, including the dissociation of molecules (Walch et al., in prep.). In the future, we plan to include the option of directionally dependent absorption coefficients, similar to those introduced by \citet{Grond2019TREVR}. \section*{Acknowledgments}% We thank the anonymous referee for constructive and useful comments that helped to improve the paper. This study has been supported by project 19-15008S of the Czech Science Foundation and by the institutional project RVO:67985815. SW, SH, and AK gratefully acknowledge the European Research Council under the European Community's Framework Programme FP8 via the ERC Starting Grant RADFEEDBACK (project number 679852). SW and FD further thank the Deutsche Forschungsgemeinschaft (DFG) for funding through SFB~956 ``The conditions and impact of star formation'' (sub-project C5), and SW thanks the Bonn-Cologne Graduate School. DS acknowledges the DFG for funding through SFB~956 ``The conditions and impact of star formation'' (sub-project C6). APW acknowledges the support of a consolidated grant (ST/K00926/1) from the UK Science and Technology Facilities Council. This work was supported by The Ministry of Education, Youth and Sports from the Large Infrastructures for Research, Experimental Development and Innovations project ``IT4Innovations National Supercomputing Center -- LM2015070''.
The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. We particularly thank the Max Planck Computing and Data Facility for enabling us to carry out the scaling tests presented in this paper on the HPC cluster COBRA. \section*{Data availability} The {\sc TreeRay} module source code and the simulation data underlying this article will be shared on reasonable request to the corresponding author. We plan to include {\sc TreeRay} in a future version of the {\sc Flash} code. \bibliographystyle{mnras}
\section{#1}} \def\firstsubsec#1{\obsolete\firstsubsec \subsection{#1}} \def\thispage#1{\obsolete\thispage \global\pagenumber=#1\frontpagefalse} \def\thischapter#1{\obsolete\thischapter \global\chapternumber=#1} \def\nextequation#1{\obsolete\nextequation \global\equanumber=#1 \ifnum\the\equanumber>0 \global\advance\equanumber by 1 \fi} \def\afterassigment\B@XITEM\setbox0={\afterassigment\B@XITEM\setbox0=} \def\B@XITEM{\par\hangindent\wd0 \noindent\box0 } \def\catcode`\^^M=5{\catcode`\^^M=5} \lock \message{ } \def\us#1{\undertext{#1}} \def\coeff#1#2{{#1\over #2}} \def^{\, \prime }{^{\, \prime }} \def^{\, \prime\, \prime }{^{\, \prime\, \prime }} \def^{\, \prime\, \prime\, \prime }{^{\, \prime\, \prime\, \prime }} \def{e^+e^-}{{e^+e^-}} \def{\mu\nu}{{\mu\nu}} \def\leftarrow{\leftarrow} \def\rightarrow {\rightarrow } \def\uparrow {\uparrow } \def\downarrow {\downarrow } \def\doublearrow {\doublearrow } \def\overrightarrow {\overrightarrow } \def\widetilde {\widetilde } \def\start{\begingroup\hsize=4.75in\baselineskip 12pt\raggedright \relax} \def\par\endgroup{\par\endgroup} \newbox\figbox \newdimen\zero \zero=0pt \newdimen\figmove \newdimen\figwidth \newdimen\figheight \newdimen\textwidth \newtoks\figtoks \newcount\figcounta \newcount\figcountb \newcount\figlines \def\figreset{\global\figcounta=-1 \global\figcountb=-1 \global\figmove=\baselineskip \global\figlines=1 \global\figtoks={ } } \def\picture#1by#2:#3{\global\setbox\figbox=\vbox{\vskip #1 \hbox{\vbox{\hsize=#2 \noindent #3}}} \global\setbox\figbox=\vbox{\kern 10pt \hbox{\kern 10pt \box\figbox \kern 10pt }\kern 10pt} \global\figwidth=1\wd\figbox \global\figheight=1\ht\figbox \global\textwidth=\hsize \global\advance\textwidth by - \figwidth } \def\figtoksappend{\edef\temp##1{\global\figtoks=% {\the\figtoks ##1}}\temp} \def\figparmsa#1{\loop \global\advance\figcounta by 1 \ifnum \figcounta < #1 \figtoksappend{ 0pt \the\hsize } \global\advance\figlines by 1 \repeat } \def\figparmsb#1{\loop 
\global\advance\figcountb by 1 \ifnum \figcountb < #1 \figtoksappend{ \the\figwidth \the\textwidth} \global\advance\figlines by 1 \repeat } \def\figtext#1:#2:#3{\figreset% \figparmsa{#1}% \figparmsb{#2}% \multiply\figmove by #1% \global\setbox\figbox=\vbox to 0pt{\vskip \figmove \hbox{\box\figbox} \vss } \parshape=\the\figlines\the\figtoks\the\zero\the\hsize \noindent \rlap{\box\figbox} #3} \def\Buildrel#1\under#2{\mathrel{\mathop{#2}\limits_{#1}}} \def\hbox to 40pt{\rightarrowfill}{\hbox to 40pt{\rightarrowfill}} \def\Buildrel x\rarrow{+\infty}\under\llongrarrow{\Buildrel x\rightarrow {+\infty}\under\hbox to 40pt{\rightarrowfill}} \def\Buildrel x\rarrow{-\infty}\under\llongrarrow{\Buildrel x\rightarrow {-\infty}\under\hbox to 40pt{\rightarrowfill}} \def\Buildrel x\rarrow\infty\under\llongrarrow{\Buildrel x\rightarrow \infty\under\hbox to 40pt{\rightarrowfill}} \def\boxit#1{\vbox{\hrule\hbox{\vrule\kern3pt \vbox{\kern3pt#1\kern3pt}\kern3pt\vrule}\hrule}} \newdimen\str \def\xstrut#1{\dimen\str = #1 \hbox{\vrule height .8dm\str depth .2dm\str width 0pt}} \def\fboxit#1#2{\vbox{\hrule height #1 \hbox{\vrule width #1 \kern3pt \vbox{\kern3pt#2\kern3pt}\kern3pt \vrule width #1 } \hrule height #1 }} \def\tran#1#2{\gdef\rm{\fam0\eighteenrm \hfuzz 5pt \gdef\label{#1} \vbox to \the\vsize{\hsize \the\hsize #2} \par \eject } \def\par \eject {\par \eject } \newdimen\baseskip \newdimen\lskip \lskip=\lineskip \newdimen\transize \newdimen\tall \def\gdef\rm{\fam0\eighteenrm{\gdef\rm{\fam0\eighteenrm} \font\twentyfourit = amti10 scaled \magstep5 \font\twentyfourrm = cmr10 scaled \magstep5 \font\twentyfourbf = ambx10 scaled \magstep5 \font\twentyeightsy = amsy10 scaled \magstep5 \font\eighteenrm = cmr10 scaled \magstep3 \font\eighteenb = ambx10 scaled \magstep3 \font\eighteeni = ammi10 scaled \magstep3 \font\eighteenit = amti10 scaled \magstep3 \font\eighteensl = amsl10 scaled \magstep3 \font\eighteensy = amsy10 scaled \magstep3 \font\eighteencaps = cmr10 scaled \magstep3 
\font\eighteenmathex = amex10 scaled \magstep3 \font\fourteenrm=cmr10 scaled \magstep2 \font\fourteeni=ammi10 scaled \magstep2 \font\fourteenit = amti10 scaled \magstep2 \font\fourteensy=amsy10 scaled \magstep2 \font\fourteenmathex = amex10 scaled \magstep2 \parindent 20pt \global\hsize = 7.0in \global\vsize = 8.9in \dimen\transize = \the\hsize \dimen\tall = \the\vsize \def\eighteensy {\eighteensy } \def\eighteens {\eighteens } \def\eighteenb {\eighteenb } \def\eighteenit {\eighteenit } \def\eighteencaps {\eighteencaps } \textfont0=\eighteenrm \scriptfont0=\fourteenrm \scriptscriptfont0=\twelverm \textfont1=\eighteeni \scriptfont1=\fourteeni \scriptscriptfont1=\twelvei \textfont2=\eighteensy \scriptfont2=\fourteensy \scriptscriptfont2=\twelvesy \textfont3=\eighteenmathex \scriptfont3=\eighteenmathex \scriptscriptfont3=\eighteenmathex \global\baselineskip 35pt \global\lineskip 15pt \global\parskip 5pt plus 1pt minus 1pt \global\abovedisplayskip 3pt plus 10pt minus 10pt \global\belowdisplayskip 3pt plus 10pt minus 10pt \def\rtitle##1{\centerline{\undertext{\twentyfourrm ##1}}} \def\ititle##1{\centerline{\undertext{\twentyfourit ##1}}} \def\ctitle##1{\centerline{\undertext{\eighteencaps ##1}}} \def\hbox{\vrule width 0pt height .35in depth .15in }{\hbox{\vrule width 0pt height .35in depth .15in }} \def\cline##1{\centerline{\hbox{\vrule width 0pt height .35in depth .15in } ##1}} \output{\shipout\vbox{\vskip .5in \pagecontents \vfill \hbox to \the\hsize{\hfill{\tenbf \label} } } \global\advance\count0 by 1 } \rm } \def\SLACHEAD{\setbox0=\vbox{\baselineskip=12pt \ialign{\tenfib ##\hfil\cr \hskip -17pt\tenit Mail Address:\ \ Bin 81\cr SLAC, P.O.Box 4349\cr Stanford, California, 94305\cr}} \setbox2=\vbox to\ht0{\vfil\hbox{\eighteencaps Stanford Linear Accelerator Center}\vfil} \smallskip \line{\hskip -7pt\box2\hfil\box0}\bigskip} \def\tablist#1{\singl@true\doubl@false\spaces@t \halign{\tabskip 0pt \vtop{\hsize 1.2in \noindent ## \vfil} \tabskip 20pt & \vtop{\hsize .5in 
\noindent ## \vfil} & \vtop{\hsize 1.3in \noindent ## \vfil} & \vtop{\hsize 2.1in \noindent ## \vfil} & \vtop{\hsize 2.1in \noindent ## \vfil} \tabskip 0pt \cr #1 }} \def\def\do##1{\catcode`##1=12}\dospecials{\def\do##1{\catcode`##1=12}\dospecials} \def\typeit#1{\par\begingroup\singl@true\doubl@false\spaces@t\tt \fortransize \def\par{\leavevmode\endgraf\input#1 \endgroup} \def\hsize=45pc \hoffset=.4in{\hsize=45pc \hoffset=.4in} \def\tt \fortransize \def\par{\leavevmode\endgraf{\tt \hsize=45pc \hoffset=.4in \def\par{\leavevmode\endgraf} \obeylines \def\do##1{\catcode`##1=12}\dospecials \obeyspaces} {\obeyspaces\global\let =\ \widowpenalty 1000 \thickmuskip 4mu plus 4mu \def\Stanford{\address{Institute of Theoretical Physics, Department of Physics \break Stanford University,\, Stanford,\!~California 94305}} \def\noalign{\vskip 2pt}{\noalign{\vskip 2pt}} \def\noalign{\vskip 4pt}{\noalign{\vskip 4pt}} \def\noalign{\vskip 6pt}{\noalign{\vskip 6pt}} \def\noalign{\vskip 8pt}{\noalign{\vskip 8pt}} \unlock \Pubnum={${\twelverm IU/NTC}\ \the\pubnum $} \pubnum={0000} \def\p@nnlock{\begingroup \tabskip=\hsize minus \hsize \baselineskip=1.5\ht\strutbox \hrule height 0pt depth 0pt \vskip-2\baselineskip \noindent\strut\the\Pubnum \hfill \the\date \endgroup} \def\FRONTPAGE\paperstyle\p@nnlock{\FRONTPAGE\letterstylefalse\normalspace\papersize\p@nnlock} \def\displaylines#1{\displ@y \halign{\hbox to\displaywidth{$\hfil\displaystyle##\hfil$}\crcr #1\crcr}} \def\letters{\letterstyletrue\singlespace\lettersize \letterfrontheadline% ={\hfil}} \def\tmsletters{\letterstyletrue\singlespace\lettersize \tolerance=2000 \pretolerance=1000 \raggedright\letterfrontheadline% ={\hfil}} \def\PHYSHEAD{ \vskip -1.0in \vbox to .8in{\vfil\centerline{\fourteenrm STANFORD UNIVERSITY} \vskip -2pt \centerline{{\eighteencaps Stanford, California}\ \ 94305} \vskip 12pt \line{\hskip -.3in {\tenpoint \eighteencaps Department of Physics}\hfil}}} \def\FRONTPAGE\null\vskip -1.5in\tmsaddressee{\FRONTPAGE\null\vskip 
-1.5in\tmsaddressee} \def\addressee#1{\null \bigskip\medskip\rightline{\the\date\hskip 30pt} \vskip\lettertopfil \ialign to\hsize{\strut ##\hfil\tabskip 0pt plus \hsize \cr #1\crcr} \medskip\vskip 3pt\noindent} \def\tmsaddressee#1#2{ \vskip\lettertopfil \setbox0=\vbox{\singl@true\doubl@false\spaces@t \halign{\tabskip 0pt \strut ##\hfil\cr \noalign{\global\dt@ptrue}#1\crcr}} \line{\hskip 0.7\hsize minus 0.7\hsize \box0\hfil} \bigskip \vskip .2in \ialign to\hsize{\strut ##\hfil\tabskip 0pt plus \hsize \cr #2\crcr} \medskip\vskip 3pt\noindent} \def\makeheadline{\vbox to 0pt{ \skip@=\topskip \advance\skip@ by -12pt \advance\skip@ by -2\normalbaselineskip \vskip\skip@ \vss }\nointerlineskip} \def\signed#1{\par \penalty 9000 \bigskip \vskip .06in\dt@pfalse \everycr={\noalign{\ifdt@p\vskip\signatureskip\global\dt@pfalse\fi}} \setbox0=\vbox{\singl@true\doubl@false\spaces@t \halign{\tabskip 0pt \strut ##\hfil\cr \noalign{\global\dt@ptrue}#1\crcr}} \line{\hskip 0.5\hsize minus 0.5\hsize \box0\hfil} \medskip } \def\lettersize{\hsize=6.25in\vsize=8.5in\hoffset=0in\voffset=1in \skip\footins=\smallskipamount \multiply\skip\footins by 3 } \outer\def\newnewlist#1=#2&#3&#4&#5;{\toks0={#2}\toks1={#3}% \dimen1=\hsize \advance\dimen1 by -#4 \dimen2=\hsize \advance\dimen2 by -#5 \count255=\escapechar \escapechar=-1 \alloc@0\list\countdef\insc@unt\listcount \listcount=0 \edef#1{\par \countdef\listcount=\the\allocationnumber \advance\listcount by 1 \parshape=2 #4 \dimen1 #5 \dimen2 \Textindent{\the\toks0{\listcount}\the\toks1}} \expandafter\expandafter\expandafter \edef\c@t#1{begin}{\par \countdef\listcount=\the\allocationnumber \listcount=1 \parshape=2 #4 \dimen1 #5 \dimen2 \Textindent{\the\toks0{\listcount}\the\toks1}} \expandafter\expandafter\expandafter \edef\c@t#1{con}{\par \parshape=2 #4 \dimen1 #5 \dimen2 \noindent} \escapechar=\count255} \def\c@t#1#2{\csname\string#1#2\endcsname} \def\noparGENITEM#1;{\hangafter=0 \hangindent=#1 \ignorespaces\noindent} 
\outer\def\noparnewitem#1=#2;{\gdef#1{\noparGENITEM #2;}} \noparnewitem\spoint=1.5\itemsize; \def\letterstyle\FRONTPAGE \letterfrontheadline={\hfil{\letterstyletrue\singlespace\lettersize\FRONTPAGE \letterfrontheadline={\hfil} \hoffset=1in \voffset=1.21in \line{\hskip .8in \special{overlay ntcmemo.dat} \quad\fourteenrm NTC MEMORANDUM\hfil\twelverm\the\date\quad} \medskip\medskip \memod@f} \def\memodate#1{\date={#1}\letterstyle\FRONTPAGE \letterfrontheadline={\hfil} \def\memit@m#1{\smallskip \hangafter=0 \hangindent=1in \Textindent{\eighteencaps #1}} \def\memod@f{\xdef\to{\memit@m{To:}}\xdef\from{\memit@m{From:}}% \xdef\topic{\memit@m{Topic:}}\xdef\subject{\memit@m{Subject:}}% \xdef\rule{\bigskip\hrule height 1pt\bigskip}} \memod@f \lock \def\APrefmark#1{[#1]} \singl@true\doubl@false\spaces@t \newif\ifNuclPhys \newif\ifAnnPhys \PhysRevfalse \NuclPhysfalse \AnnPhystrue \def\IUCFREF#1|#2|#3|#4|#5|#6|{\relax \ifPhysRev\REF#1{{\frenchspacing #2, #3 {\eighteenb #4}, #6 (#5).}} \fi \ifNuclPhys\REF#1{{\frenchspacing #2, #3 {\eighteenb #4} (#5) #6}}\fi \ifAnnPhys\REF#1{{\frenchspacing #2, {\eighteenit #3} {\eighteenb #4} (#5), #6.}}\fi} \def\IUCFBOOK#1|#2|#3|#4|{\relax \ifPhysRev\REF#1{{\frenchspacing #2, {\eighteenit #3} (#4).}}\fi \ifNuclPhys\REF#1{{\frenchspacing #2, {\eighteens #3} (#4)}} \fi \ifAnnPhys\REF#1{{\frenchspacing #2, ``#3'', #4.}} \fi} \def\REF{\REF} \IUCFREF\rSTUECK|E. C. G. Stueckelberg and A. Peterman|Helv. Phys. Acta| 26|1953|499| \IUCFREF\rGELL|M. Gell-Mann and F. E. Low|Phys. Rev.|95|1954|1300| \IUCFBOOK\rBOGOL|N. N. Bogoliubov and D.V. Shirkov|Introduction to the Theory of Quantized Fields|Interscience, New York, 1959| \IUCFREF\rWILONE|K. G. Wilson|Phys. Rev.|140|1965|B445| \IUCFREF\rWILTWO|K. G. Wilson|Phys. Rev.|D2|1970|1438| \IUCFREF\rWILTHR|K. G. Wilson|Phys. Rev.|D3|1971|1818| \IUCFREF\rWILFOUR|K. G. Wilson|Phys. Rev.|D6|1972|419| \IUCFREF\rWILFIVE|K. G. Wilson|Phys. Rev.|B4|1971|3174| \IUCFREF\rWILSIX|K. G. Wilson|Phys. 
Rev.|B4|1971|3184| \IUCFREF\rWILSEVEN|K. G. Wilson and M. E. Fisher|Phys. Rev. Lett.|28|1972|240| \IUCFREF\rWILEIGHT|K. G. Wilson|Phys. Rev. Lett.|28|1972|548| \IUCFREF\rWILNINE|K. G. Wilson and J. B. Kogut|Phys. Rep.|12C|1974|75| \IUCFREF\rWILTEN|K. G. Wilson|Rev. Mod. Phys.|47|1975|773| \IUCFREF\rWILELEVEN|K. G. Wilson|Rev. Mod. Phys.|55|1983|583| \IUCFREF\rWILTWELVE|K. G. Wilson|Adv. Math.|16|1975|170| \IUCFREF\rWILTHIRT|K. G. Wilson|Scientific American|241|1979|158| \IUCFREF\rWEGONE|F. J. Wegner|Phys. Rev.|B5|1972|4529| \IUCFREF\rWEGTWO|F. J. Wegner|Phys. Rev.|B6|1972|1891| \REF\rWEGTHR{F. J. Wegner, {\eighteenit in} ``Phase Transitions and Critical Phenomena'' (C. Domb and M. S. Green, Eds.), Vol. 6, Academic Press, London, 1976.} \IUCFREF\rKADANOFF|L. P. Kadanoff|Physica|2|1965|263| \IUCFBOOK\rREBBI|C. Rebbi|Lattice Gauge Theories and Monte Carlo Simulations|World Scientific, Singapore, 1983| \IUCFREF\rCALLAN|C. G. Callan|Phys. Rev.|D2|1970|1541| \IUCFREF\rSYMONE|K. Symanzik|Comm. Math. Phys.|18|1970|227| \IUCFREF\rFEYN|R. P. Feynman|Rev. Mod. Phys.|20|1948|367| \IUCFBOOK\rNEWTON|I. Newton|Philosophiae Naturalis Principia Mathematica|S. Pepys, London, 1686| \IUCFREF\rMAONE|S. K. Ma|Rev. Mod. Phys.|45|1973|589| \IUCFBOOK\rPFEUTY|G. Toulouse and P. Pfeuty|Introduction to the Renormalization Group and to Critical Phenomena|Wiley, Chichester, 1977| \IUCFBOOK\rMATWO|S. K. Ma|Modern Theory of Critical Phenomena|Benjamin, New York, 1976| \IUCFBOOK\rAMIT|D. Amit|Field Theory, the Renormalization Group, and Critical Phenomena|Mc-Graw-Hill, New York, 1978| \IUCFBOOK\rZINN|J. Zinn-Justin|Quantum Field Theory and Critical Phenomena|Oxford, Oxford, 1989| \IUCFBOOK\rGOLDEN|N. Goldenfeld|Lectures on Phase Transitions and the Renormalization Group|Add-ison-Wesley, Reading Mass., 1992| \IUCFREF\rDIRONE|P. A. M. Dirac|Rev. Mod. Phys.|21|1949|392| \IUCFBOOK\rDIRTWO|P. A. M. 
Dirac|Lectures on Quantum Field Theory| Academic Press, New York, 1966| \REF\rLFREF{An extensive list of references on light-front physics ({\eighteenit light.tex}) is available via anonymous ftp from public.mps.ohio-state.edu in the subdirectory pub/infolight.} \IUCFREF\rWEI|S. Weinberg|Phys. Rev.|150|1966|1313| \IUCFREF\rHVONE|A. Harindranath and J. P. Vary|Phys. Rev.|D36|1987|1141| \IUCFREF\rHILLER|J. R. Hiller|Phys. Rev.|D44|1991|2504| \IUCFREF\rSWENSON|J. B. Swenson and J. R. Hiller|Phys. Rev.|D48|1993|1774| \IUCFREF\rPERWIL|R. J. Perry and K. G. Wilson|Nucl. Phys.|B403|1993|587| \IUCFREF\rFUB|S. Fubini and G. Furlan|Physics|1|1964|229| \IUCFREF\rDASH|R. Dashen and M. Gell-Mann|Phys. Rev. Lett| 17|1966|340| \IUCFREF\rBJORK|J. D. Bjorken|Phys. Rev.|179|1969|1547| \IUCFBOOK\rFEY|R. P. Feynman|Photon-Hadron Interactions|Benjamin, Reading, Massachusetts, 1972| \IUCFREF\rKOGTWO|J. B. Kogut and L. Susskind|Phys. Rep.|C8|1973|75| \IUCFREF\rCHANG|S.-J. Chang and S.-K. Ma|Phys. Rev.|180|1969|1506| \IUCFREF\rKOGONE|J. B. Kogut and D. E. Soper|Phys. Rev.|D1|1970|2901| \IUCFREF\rBJORKTWO|J. D. Bjorken, J. B. Kogut, and D. E. Soper|Phys. Rev.|D3|1971|1382| \IUCFREF\rCHAONE|S.-J. Chang, R. G. Root and T.-M. Yan|Phys. Rev. |D7|1973|1133| \IUCFREF\rCHATWO|S.-J. Chang and T.-M. Yan|Phys. Rev.|D7|1973|1147| \IUCFREF\rYANONE|T.-M. Yan|Phys. Rev.|D7|1973|1760| \IUCFREF\rYANTWO|T.-M. Yan|Phys. Rev.|D7|1973|1780| \IUCFREF\rBRS|S. J. Brodsky, R. Roskies and R. Suaya|Phys. Rev.|D8|1973|4574| \IUCFREF\rBOUCH|C. Bouchiat, P. Fayet, and N. Sourlas|Lett. Nuovo Cim.| 4|1972|9| \IUCFREF\rHARIONE|A. Harindranath and R. J. Perry|Phys. Rev.|D43|1991|492| \IUCFREF\rMUS|D. Mustaki, S. Pinsky, J. Shigemitsu, and K. Wilson|Phys. Rev.|D43|1991|3411| \IUCFREF\rBURKONE|M. Burkhardt and A. Langnau|Phys. Rev.|D44|1991|1187| \IUCFREF\rBURKTWO|M. Burkhardt and A. Langnau|Phys. Rev.|D44|1991|3857| \IUCFREF\rROBERT|D. G. Robertson and G. McCartor|Z. Phys.|C53|1992|661| \IUCFREF\rPERQCD|R. J. Perry|Phys. 
Lett.|B300|1993|8| \IUCFREF\rTAM|I. Tamm|J. Phys. (USSR)|9|1945|449| \IUCFREF\rDAN|S. M. Dancoff|Phys. Rev.|78|1950|382| \IUCFBOOK\rBET|H. A. Bethe and F. de Hoffmann|Mesons and Fields, Vol. II| Row, Peterson and Company, Evanston, Illinois, 1955| \IUCFREF\rPERONE|R. J. Perry, A. Harindranath and K. G. Wilson| Phys. Rev. Lett.|65|1990|2959| \REF\rTAN{A. C. Tang, Ph.D thesis, Stanford University, SLAC-Report-351, June (1990).} \IUCFREF\rPERTWO|R. J. Perry and A. Harindranath|Phys. Rev.|D43|1991|4051| \IUCFREF\rTANG|A. C. Tang, S. J. Brodsky, and H. C. Pauli|Phys. Rev.|D44|1991|1842| \IUCFREF\rKALUZA|M. Kaluza and H. C. Pauli|Phys. Rev.|D45|1992|2968| \IUCFREF\rGLAZTHR|St. D. G{\l}azek and R.J. Perry|Phys. Rev.|D45|1992|3740| \IUCFREF\rHARITWO|A. Harindranath, R. J. Perry, and J. Shigemitsu|Phys. Rev.|D46|1992|4580| \IUCFREF\rWORT|P. M. Wort|Phys. Rev.|D47|1993|608| \IUCFREF\rGHPSW|S. G{\l}azek, A. Harindranath, S. Pinsky, J. Shigemitsu, and K. G. Wilson|Phys. Rev.|D47|1993|1599| \IUCFREF\rLIU|H. H. Liu and D. E. Soper|Phys. Rev.|D48|1993|1841| \IUCFREF\rGLAZEK|St. D. G{\l}azek and R.J. Perry|Phys. Rev.|D45|1992|3734| \IUCFREF\rSANDE|B. van de Sande and S. S. Pinsky|Phys. Rev.|D46|1992|5479| \REF\rGLAZWIL{S. D. G{\l}azek and K. G. Wilson, ``Renormalization of Overlapping Transverse Divergences in a Model Light-Front Hamiltonian,'' Ohio State preprint (1992).} \IUCFREF\rBERG|H. Bergknoff|Nucl. Phys.|B122|1977|215| \IUCFREF\rELLER|T. Eller, H. C. Pauli, and S. J. Brodsky|Phys. Rev.|D35|1987|1493| \IUCFREF\rMA|Y. Ma and J. R. Hiller|J. Comp. Phys.|82|1989|229| \IUCFREF\rBURK|M. Burkhardt|Nucl. Phys.|A504|1989|762| \IUCFREF\rHORN|K. Hornbostel, S. J. Brodsky, and H. C. Pauli|Phys. Rev.|D41|1990|3814| \IUCFREF\rMCCART|G. McCartor|Zeit. Phys.|C52|1991|611| \REF\rMO{Y. Mo and R. J. Perry, ``Basis Function Calculations for the Massive Schwinger Model in the Light-Front Tamm-Dancoff Approximation,'' to appear in J. Comp. Phys. (1993).} \REF\rLEPAGE{G. P. Lepage, S. J. 
Brodsky, T. Huang and P. B. Mackenzie, {\eighteenit in} ``Particles and Fields 2'' (A. Z. Capri and A. N. Kamal, Eds.), Plenum Press, New York, 1983.} \IUCFREF\rNAM|J. M. Namyslowski|Prog. Part. Nuc. Phys.|74|1984|1| \REF\rBROONE{S. J. Brodsky and G. P. Lepage, {\eighteenit in} ``Perturbative quantum chromodynamics'' (A. H. Mueller, Ed.), World Scientific, Singapore, 1989.} \REF\rBPMP{S. Brodsky, H. C. Pauli, G. McCartor, and S. Pinsky, ``The Challenge of Light-Cone Quantization of Gauge Field Theory,'' SLAC preprint no. SLAC-PUB-5811 and Ohio State preprint no. OHSTPY-HEP-T-92-005 (1992).} \IUCFREF\rPAULI|W. Pauli and F. Villars|Rev. Mod. Phys.|21|1949|434| \IUCFREF\rTHOONE|G. 't Hooft and M. Veltman|Nucl. Phys.|B44|1972|189| \REF\rTHOTWO{G. 't Hooft and M. Veltman, ``Diagrammar,'' CERN preprint 73-9 (1973).} \IUCFREF\rBLOONE|C. Bloch and J. Horowitz|Nucl. Phys.|8|1958|91| \IUCFREF\rBLOTWO|C. Bloch|Nucl. Phys.|6|1958|329| \IUCFREF\rBRANDOW|B. H. Brandow|Rev. Mod. Phys.|39|1967|771| \IUCFREF\rLEU|H. Leutwyler and J. Stern|Ann. Phys. (New York)|112|1978|94| \IUCFREF\rOEHONE|R. Oehme, K. Sibold, and W. Zimmerman| Phys. Lett.|B147|1984|115| \IUCFREF\rOEHTWO|R. Oehme and W. Zimmerman|Commun. Math. Phys.|97|1985|569| \IUCFREF\rZIMONE|W. Zimmerman|Commun. Math. Phys.|95|1985|211| \IUCFREF\rKUBO|J. Kubo, K. Sibold, and W. Zimmerman|Nuc. Phys.|B259|1985|331| \IUCFREF\rOEHTHR|R. Oehme|Prog. Theor. Phys. Supp.|86|1986|215| \IUCFREF\rLUCC|C. Lucchesi, O. Piguet, and K. Sibold|Phys. Lett.|B201|1988|241| \IUCFREF\rKRAUS|E. Kraus|Nucl. Phys.|B349|1991|563| \IUCFREF\rCASH|A. Casher|Phys. Rev.|D14|1976|452| \IUCFREF\rBARONE|W.A. Bardeen and R.B. Pearson|Phys. Rev.|D14|1976|547| \IUCFREF\rBARTWO|W.A. Bardeen, R.B. Pearson and E. Rabinovici|Phys. Rev.|D21|1980|1037| \IUCFREF\rLEP|G.P. Lepage and S.J. Brodsky|Phys. Rev.|D22|1980|2157| \REF\rDIRTHR{See P. A. M. Dirac, in `Perturbative Quantum Chromodynamics'' (D. W. Duke and J. F. Owens, Eds.), Am. Inst. 
Phys., New York, 1981.} \REF\rWILQCD{K. G. Wilson, ``Light Front QCD,'' Ohio State internal report, unpublished (1990).} \IUCFREF\rTOM|E. Tomboulis|Phys. Rev.|D8|1973|2736| \IUCFREF\rGRO|D. J. Gross and F. Wilczek|Phys. Rev. Lett.|30|1973|1343| \IUCFREF\rPOL|H. D. Politzer|Phys. Rev. Lett.|30|1973|1346| \IUCFREF\rGOLD|J. Goldstone|Proc. Roy. Soc. (London)|A239|1957|267| \IUCFREF\rGRIFFIN|P. A. Griffin|Nucl. Phys.|B372|1992|270| \IUCFBOOK\rSCH|J. Schwinger|Quantum Electrodynamics|Dover, New York, 1958| \IUCFBOOK\rBROWN|L. M. Brown|Renormalization|Springer-Verlag, New York, 1993| \IUCFREF\rWILFORT|K. G. Wilson|Phys. Rev.|D10|1974|2445| \IUCFBOOK\rQUARK|W. Buchm{\"u}ller|Quarkonia|North Holland, Amsterdam, 1992| \IUCFREF\rTHORN| C. B. Thorn|Phys. Rev.|D20|1979|1934| \IUCFREF\rGLAZFOUR|St. G{\l}azek|Phys. Rev.|D38|1988|3277| \IUCFBOOK\rKOKK|J. J. J. Kokkedee|The Quark Model|Benjamin, New York, 1969| \PhysRevfalse \NuclPhysfalse \AnnPhystrue \def{\mit P}{{\mit P}} \def{\cal N}{{\cal N}} \def{\cal H}{{\cal H}} \def{\cal O}{{\cal O}} \def{dk^+ d^2k^\perp \over 16 \pi^3 k^+}\;{{dk^+ d^2k^\perp \over 16 \pi^3 k^+}\;} \def{dp^+ d^2p^\perp \over 16 \pi^3 p^+}\;{{dp^+ d^2p^\perp \over 16 \pi^3 p^+}\;} \def{dq^+ d^2q^\perp \over 16 \pi^3 q^+}\;{{dq^+ d^2q^\perp \over 16 \pi^3 q^+}\;} \def{d^2s^\perp dx \over 16 \pi^3 x(1-x)}\;{{d^2s^\perp dx \over 16 \pi^3 x(1-x)}\;} \def{d^2q^\perp dx \over 16 \pi^3 x}\;{{d^2q^\perp dx \over 16 \pi^3 x}\;} \def{d^2r^\perp dy \over 16 \pi^3 y}\;{{d^2r^\perp dy \over 16 \pi^3 y}\;} \def{d^2s^\perp dz \over 16 \pi^3 z}\;{{d^2s^\perp dz \over 16 \pi^3 z}\;} \def{d^2{q^\perp}' dx' \over 16 \pi^3 x'}\;{{d^2{q^\perp}' dx' \over 16 \pi^3 x'}\;} \def{d^2{r^\perp}' dy' \over 16 \pi^3 y'}\;{{d^2{r^\perp}' dy' \over 16 \pi^3 y'}\;} \def{d^2{s^\perp}' dz' \over 16 \pi^3 z'}\;{{d^2{s^\perp}' dz' \over 16 \pi^3 z'}\;} \def{d^2s^\perp dx \over 16 \pi^3 }\;{{d^2s^\perp dx \over 16 \pi^3 }\;} \defd\tilde{p}{d\tilde{p}} \defd\tilde{k}{d\tilde{k}} 
\defd\tilde{q}{d\tilde{q}} \def{\bf q^\perp}{{\eighteenb q^\perp}} \def{\bf k^\perp}{{\eighteenb k^\perp}} \def{\bf v^\perp}{{\eighteenb v^\perp}} \def{\bf p^\perp}{{\eighteenb p^\perp}} \def{\bf r^\perp}{{\eighteenb r^\perp}} \def{\bf s^\perp}{{\eighteenb s^\perp}} \def{\bf t^\perp}{{\eighteenb t^\perp}} \def{\bf r}{{\eighteenb r}} \def{\bf s}{{\eighteenb s}} \def{\bf t}{{\eighteenb t}} \def{\cal Q}{{\cal Q}} \def{\cal P}{{\cal P}} \def{\cal Q}{{\cal Q}} \def{\cal P}{{\cal P}} \def|\Psi\rangle{|\Psi\rangle} \def|\Phi\rangle{|\Phi\rangle} \def|a\rangle{|a\rangle} \def|b\rangle{|b\rangle} \def|c\rangle{|c\rangle} \def|i\rangle{|i\rangle} \def|j\rangle{|j\rangle} \def|k\rangle{|k\rangle} \def\langle a|{\langle a|} \def\langle b|{\langle b|} \def\langle c|{\langle c|} \def\langle i|{\langle i|} \def\langle j|{\langle j|} \def\langle k|{\langle k|} \def{\cal R}{{\cal R}} \defH_{\P\P}{H_{{\cal P}\P}} \defH_{\Q\Q}{H_{{\cal Q}\Q}} \defH_{\P\Q}{H_{{\cal P}{\cal Q}}} \defH_{\Q\P}{H_{{\cal Q}{\cal P}}} \defh_{\P\P}{h_{{\cal P}\P}} \defh_{\Q\Q}{h_{{\cal Q}\Q}} \defh_{\P\Q}{h_{{\cal P}{\cal Q}}} \defh_{\Q\P}{h_{{\cal Q}{\cal P}}} \defv_{\P\P}{v_{{\cal P}\P}} \defv_{\Q\Q}{v_{{\cal Q}\Q}} \defv_{\P\Q}{v_{{\cal P}{\cal Q}}} \defv_{\Q\P}{v_{{\cal Q}{\cal P}}} \def{\uppercase\expandafter{\romannumeral1 }}{{\uppercase\expandafter{\romannumeral1 }}} \def{\uppercase\expandafter{\romannumeral2 }}{{\uppercase\expandafter{\romannumeral2 }}} \def{\uppercase\expandafter{\romannumeral3 }}{{\uppercase\expandafter{\romannumeral3 }}} \def{\uppercase\expandafter{\romannumeral4 }}{{\uppercase\expandafter{\romannumeral4 }}} \def{\uppercase\expandafter{\romannumeral5 }}{{\uppercase\expandafter{\romannumeral5 }}} \def{\uppercase\expandafter{\romannumeral6 }}{{\uppercase\expandafter{\romannumeral6 }}} \def{\uppercase\expandafter{\romannumeral7 }}{{\uppercase\expandafter{\romannumeral7 }}} \def{\uppercase\expandafter{\romannumeral8 }}{{\uppercase\expandafter{\romannumeral8 }}} \def\sqrt{1-4 
m^2/\Lambda_0^2}} \def\sqrt{1-4 m^2/\Lambda_1^2}{\sqrt{1-4 m^2/\Lambda_1^2}} \def\sqrt{1-4 m^2/E}{\sqrt{1-4 m^2/E}} \def\sqrt{1-4 y(1-y)}{\sqrt{1-4 y(1-y)}} \def{\mathaccent "7E g}{{\mathaccent "7E g}} \def{\eighteenit e.g.}{{\eighteenit e.g.}} \def{\eighteenit i.e.}{{\eighteenit i.e.}} \def{\it a posteriori}{{\eighteenit a posteriori}} \def{\it ad hoc~}{{\eighteenit ad hoc~}} \def{\it ab initio~}{{\eighteenit ab initio~}} \vsize=8.25truein \hsize=6.5truein \hoffset=.25truein \voffset=.25truein \hfill OSU-NT-93-117 \bigskip \bigskip \bigskip \centerline{\eighteenb A renormalization group approach to} \medskip \centerline{\eighteenb Hamiltonian light-front field theory} \bigskip \centerline{Robert J. Perry} \centerline{Department of Physics} \centerline{The Ohio State University} \centerline{Columbus, OH 43210} \bigskip \bigskip \centerline{ABSTRACT} \medskip A perturbative renormalization group is formulated for the study of Hamiltonian light-front field theory near a critical Gaussian fixed point. The only light-front renormalization group transformations found here that can be approximated by dropping irrelevant operators and using perturbation theory near Gaussian fixed volumes employ invariant-mass cutoffs. These cutoffs violate covariance and cluster decomposition, and allow functions of longitudinal momenta to appear in all relevant, marginal, and irrelevant operators. These functions can be determined by insisting that the Hamiltonian display coupling constant coherence, with only a limited number of couplings explicitly running with the cutoff scale and all other couplings depending on this scale only through their dependence on the running couplings. Examples are given that show how coupling coherence restores Lorentz covariance and cluster decomposition, as recently speculated by Wilson and the author.
The ultimate goal of this work is a practical Lorentz metric version of the renormalization group, and practical renormalization techniques for light-front quantum chromodynamics. \bigskip \bigskip \centerline{April, 1993} \bigskip \vfill \noindent Accepted for publication in Annals of Physics. \eject \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral1 }}. Introduction} \medskip In a series of remarkable papers Wilson reformulated the original renormalization group approach to relativistic field theory \APrefmark{\rSTUECK-\rBOGOL}, initially developing the modern renormalization group as a tool for the study of the strong interaction in Minkowski space \APrefmark{\rWILONE-\rWILTHR}. He was later diverted to the study of Euclidean field theory \APrefmark{\rWILFOUR} and statistical field theory \APrefmark{\rWILFIVE-\rWILEIGHT}, where it was possible to implement perturbative and numerical renormalization group transformations for theories of physical interest. He has written a number of reviews of this work \APrefmark{\rWILNINE-\rWILELEVEN} and two simple introductions \APrefmark{\rWILTWELVE,\rWILTHIRT}. This paper relies heavily on Ref. \rWILTEN, and also on Wegner's formulation of the perturbative renormalization group \APrefmark{\rWEGONE-\rWEGTHR}. Of course, all of this work rests on the early development of the renormalization group, especially on the work of Gell-Mann and Low \APrefmark{\rGELL}; as well as on the ideas of Kadanoff that inspired the modern renormalization group \APrefmark{\rKADANOFF}. The most promising area for application of the renormalization group in the study of the strong interaction is currently provided by the Euclidean lattice formulation of quantum chromodynamics (QCD) \APrefmark{\rREBBI}, but here the nonperturbative renormalization group has found limited application and one is still forced to directly include short distance scales in large numerical calculations. 
The lattice itself introduces numerical complications that are not easily overcome, and alternative nonperturbative tools should be developed. At this point there is no serious challenger to lattice field theory, as all other nonperturbative algorithms rely on uncontrolled `approximations.' The most significant barrier to the application of the renormalization group is algebraic complexity. This complexity stems partially from the general nature of the renormalization group as formulated by Wilson. The original renormalization group formalism developed by Stueckelberg and Petermann \APrefmark{\rSTUECK}, and Gell-Mann and Low \APrefmark{\rGELL}, as well as direct derivatives such as that of Callan and Symanzik \APrefmark{\rCALLAN,\rSYMONE}, are tailored to the problem of renormalizing canonical field theories, and take advantage of tools that have been developed for Feynman perturbation theory \APrefmark{\rFEYN}. These versions are of limited utility for some problems, particularly those that cannot be adequately solved with Feynman perturbation theory and those in which one needs to remove degrees of freedom with explicit cutoffs. In light-front field theory we encounter a problem that requires the more general renormalization group, because we need to use cutoffs that reduce the size of Fock space to attack nonperturbative problems; and because all cutoffs at our disposal violate symmetries such as Lorentz covariance and gauge invariance. Furthermore, we do not want to include a complicated vacuum in our state vectors, so we need the more general formulation of the renormalization group to allow interactions induced by the vacuum to directly enter our Hamiltonians. The modern renormalization group is a pragmatic approach to any problem that involves very many degrees of freedom that can be profitably divided according to a distance or energy scale. 
Instead of trying to solve such problems by considering all scales at once, which usually fails even in perturbation theory, one breaks the problem into pieces, `solving' each scale in sequence. There are many numerical advantages to this approach. As currently formulated the nonperturbative renormalization group \APrefmark{\rWILTEN} is reminiscent of the calculus as applied by Newton \APrefmark{\rNEWTON}. Most physicists would find it impossible to read the Principia, and few would recognize the calculus in the form that Newton and his peers employed it. Fortunately, most of us can avoid the Principia; but while there are many good introductions to the perturbative renormalization group \APrefmark{\rMAONE-\rGOLDEN}, the nonperturbative renormalization group has not yet been developed to the point where simple introductions exist. This paper follows Wilson's remarkable review article on the renormalization group and its application to the Kondo problem \APrefmark{\rWILTEN}. I concentrate only on the development of a perturbative light-front renormalization group, and do not discuss possible nonperturbative renormalization groups. One of the chief purposes of this paper, however, is to outline some of the problems one must face when developing a nonperturbative light-front renormalization group. Dirac formulated light-front field theory \APrefmark{\rDIRONE} during his unsuccessful search for a reasonable Hamiltonian formulation of relativistic field theory \APrefmark{\rDIRTWO}. For the most complete set of references available on light-front physics, see Ref. \rLFREF. Unfortunately, Dirac did not follow through with his initial development of light-front field theory and it was largely ignored until Weinberg developed the closely related infinite momentum frame formalism \APrefmark{\rWEI}. 
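For reference, adopting the common convention $x^\pm = x^0 \pm x^3$ (so that $x^+$ plays the role of time and $p^-$ is the energy conjugate to it), the mass-shell condition $p^2 = p^+ p^- - (p^\perp)^2 = m^2$ yields the free light-front dispersion relation
$$p^- = {(p^\perp)^2 + m^2 \over p^+} \;,$$
which diverges as $p^+ \rightarrow 0$ unless both $m$ and $p^\perp$ vanish. A longitudinal boost with rapidity $\omega$ acts as a simple scale transformation, $p^+ \rightarrow e^\omega p^+$, $p^- \rightarrow e^{-\omega} p^-$, $p^\perp \rightarrow p^\perp$; and in a frame in which the total transverse momentum vanishes, the free invariant mass of an $n$-particle state,
$${\cal M}^2 = \Bigl(\sum_{i=1}^n p_i\Bigr)^2 = \sum_{i=1}^n {(k_i^\perp)^2 + m_i^2 \over x_i} \;, \qquad x_i = {k_i^+ \over P^+} \;, \qquad \sum_{i=1}^n x_i = 1 \;,$$
depends only on the longitudinal momentum fractions $x_i$ and the relative transverse momenta, and is therefore boost invariant.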
The principal advantages of the light-front formalism are that boost invariance is kinematic, and that the bare vacuum mixes only with modes that have identically zero longitudinal momentum. The first advantage allows one to factor center-of-mass momenta from the equations of motion, which may be extremely important if it proves possible to formulate any relativistic problem so that it becomes a few body problem; a fact that can be appreciated after a study of the few-body problem in nonrelativistic quantum mechanics. The second advantage is sometimes misrepresented to mean that the vacuum is trivial in light-front field theory. What is actually implied is that one can isolate all modes that mix with the trivial vacuum to form the physical vacuum, and then note that all of these have infinite energy in light-front field theory (ignoring a set of measure zero in theories with massless particles; {\eighteenit i.e.}, states with identically zero transverse momentum). This observation may indicate that it is possible to replace the problem of building the physical vacuum with the problem of renormalizing the light-front Hamiltonian. Using a boost-invariant renormalization group one may be able to embed the vacuum problem into the larger problem of using the light-front renormalization group to remove high energy degrees of freedom. The light-front formalism does not automatically solve the physical problems that force one to actually build the vacuum in Euclidean field theory. It merely allows us to reformulate these problems in what will hopefully prove to be a more tractable form. If a symmetry is broken by the vacuum, we are not allowed to assume that symmetry when using the renormalization group to construct the renormalized Hamiltonian. In other words, we are not allowed to use a broken symmetry to restrict the space of Hamiltonians, even when the symmetry is broken by the vacuum. 
For example, if we want to study $\phi^4$ theory in 1+1 dimensions beyond the critical coupling ({\eighteenit i.e.}, in the symmetry broken phase), we must allow the space of Hamiltonians to include symmetry breaking interactions ({\eighteenit e.g.}, a $\phi^3$ interaction). If such interactions are not allowed, states with imaginary mass will typically appear in the spectrum \APrefmark{\rHVONE-\rSWENSON}. The renormalization group allows one to isolate relevant and marginal symmetry breaking interactions that might be tuned to reproduce the vacuum effects. A simple example involving spontaneous symmetry breaking has recently been provided by Wilson and the author \APrefmark{\rPERWIL}. While the infinite momentum frame formalism has been widely used, especially in the study of current algebra \APrefmark{\rFUB,\rDASH,\rLFREF} and the parton model \APrefmark{\rBJORK-\rKOGTWO,\rLFREF}, little formal work has been completed on renormalization in light-front field theory, and almost all early work concentrates primarily on developing a map between light-front field theory and equal-time field theory in perturbation theory \APrefmark{\rCHANG-\rBRS}. More recently a number of theorists have begun to study light-front perturbation theory directly \APrefmark{\rBOUCH-\rPERQCD}, and especially the renormalization of light-front field theory after a Tamm-Dancoff \APrefmark{\rTAM-\rBET} truncation is made \APrefmark{\rPERONE-\rGHPSW}. The only formalism currently available for nonperturbative renormalization is the renormalization group, and work on developing the renormalization group for light-front field theory is in its infancy \APrefmark{\rGLAZEK-\rGLAZWIL,\rPERQCD,\rPERWIL}. In my opinion, without a light-front version of the renormalization group, light-front field theory may be relegated to being a tool of last choice for doing perturbative calculations in 3+1 dimensions. 
Of course, in superrenormalizable theories in 1+1 dimensions light-front field theory has already proven to be very successful \APrefmark{\rBERG-\rMO}; and it can be argued that light-front field theory is a much more powerful tool for many nonperturbative calculations in 1+1 dimensions than equal-time field theory. In 3+1 dimensions we are in a situation where, because of serious renormalization difficulties, we neither know the correct light-front Hamiltonians to study, nor can we compute the physical ground states to which these unknown Hamiltonians lead. This is exactly the type of problem for which the modern renormalization group is suited. The primary purpose of this paper is to develop a perturbative light-front renormalization group as a tool for the study of light-front Hamiltonian field theory, with the hope that this work may aid the development of a practical nonperturbative light-front renormalization group. The most interesting theory that one may be able to study with a perturbative light-front renormalization group is QCD; however, the algebraic complexity of QCD makes it a poor development ground. Following tradition, I use scalar field theory for all of my examples, as it is straightforward ({\eighteenit i.e.}, difficult but not impossible) to generalize the formalism to other theories. In the remainder of this Introduction I outline the rest of the paper. In the process I use both light-front and renormalization group jargon, often without offering any definition. I have tried to carefully define most of the renormalization group jargon in the text, so the reader who is unfamiliar with the modern renormalization group may want to read this Introduction again after reading the rest of the text. 
There are a number of articles that introduce or review most of the basic aspects of light-front field theory required in this work, and the reader unfamiliar with this formalism may want to consult one or more of these \APrefmark{\rKOGONE-\rYANTWO,\rLEPAGE-\rBPMP,\rPERTWO}. To implement a renormalization group calculation one must first delineate a space of Hamiltonians, and then define a renormalization group transformation \APrefmark{\rKADANOFF} that maps this space into itself. The renormalization group transformation must be carefully formulated because it is central to the whole approach. Several renormalization group transformations are given for light-front field theory, including some boost-invariant transformations. These latter transformations allow one to impose the constraint of boost invariance directly on the space of Hamiltonians. The next step in a renormalization group study is to explore the topology of the Hamiltonian space, first searching for fixed points of the transformation ({\eighteenit i.e.}, Hamiltonians that remain fixed under the action of the transformation) and then studying the trajectories of Hamiltonians near these fixed points. The Hamiltonian itself was originally formulated as a powerful tool for the study of physical trajectories found in nature. The renormalization group can be considered to be a generalization of this idea, in which the renormalization group transformation is used to derive physical Hamiltonians found in nature. As such, it is an alternative to the canonical procedure that starts with classical equations of motion. Almost all analytic work concentrates on Gaussian fixed points ({\eighteenit i.e.}, Hamiltonians with no interactions) and near-Gaussian fixed points where perturbation theory can be used to approximate the renormalization group transformation itself. 
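In schematic terms, if $T$ denotes the transformation and $H_l$ the Hamiltonian after $l$ applications, one studies the trajectory
$$H_{l+1} = T[H_l] \;, \qquad H^* = T[H^*] \;,$$
and linearizes about the fixed point by writing $H_l = H^* + \delta H_l$. The eigenoperators of the linearized transformation with eigenvalues greater than one, equal to one, and less than one are called relevant, marginal, and irrelevant, respectively, since their strengths grow, remain fixed, or decay under repeated application of $T$.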
All of the examples in this paper are of this type, and the entire investigation is directed toward the development of transformations near Gaussian fixed points. Actually, there are fixed volumes rather than isolated fixed points, but I usually refer to fixed points rather than fixed volumes. Such examples may be of immediate relevance for QCD; and they illustrate the basic light-front renormalization group machinery. In Section {\uppercase\expandafter{\romannumeral2 }}~ I provide a brief summary of the modern renormalization group formalism and the generalizations required when there are an infinite number of relevant and marginal operators. The generalizations are not easily appreciated until one has studied the entire paper and understood why they are required. This Section is a poor substitute for Wilson's review article \APrefmark{\rWILTEN}, but I have attempted to introduce the most important concepts required by a perturbative renormalization group. I have also attempted to make the differences between a perturbative renormalization group and a nonperturbative renormalization group clear, focusing on the former. The modern renormalization group formalism may be unfamiliar to many students of relativistic field theory, being employed primarily in the study of critical phenomena. With a few notable exceptions \APrefmark{\rAMIT,\rZINN}, most field theory textbooks deal exclusively with the original renormalization group formalism and its modern descendant, the Callan-Symanzik formalism \APrefmark{\rCALLAN,\rSYMONE}. Perhaps the most important difference between these two types of renormalization group is that the original renormalization group does not actually remove any degrees of freedom, being concerned primarily with the problem of divergences in perturbation theory and techniques for allowing all cutoffs to be removed. 
Here one typically uses either Pauli-Villars regularization \APrefmark{\rPAULI}, which actually increases the number of degrees of freedom, or dimensional regularization \APrefmark{\rTHOONE,\rTHOTWO}, which retains all degrees of freedom while analytically continuing in the number of dimensions. Neither of these regulators is well-suited to many nonperturbative calculations. In order to reduce the number of degrees of freedom, one must introduce a real cutoff, such as a momentum cutoff or a lattice cutoff. The cutoff is an artifice, so results should not depend on its particular value and it should be possible to change the cutoff without changing any physical result; {\eighteenit i.e.}, without changing the matrix elements of observables between cutoff physical states. (Later I often refer to such matrix elements as observables, since the only operator that is discussed in any detail is the Hamiltonian.) This can be accomplished by making the observables, and in particular the Hamiltonian, depend on the cutoff in precisely the manner required to yield cutoff independent matrix elements. The modern renormalization group is designed to achieve this goal. In Section {\uppercase\expandafter{\romannumeral3 }}~ I illustrate what is meant by a space of Hamiltonians, and provide several light-front renormalization group transformations. I begin by discussing transformations that resemble those developed by Wilson for Euclidean field theory; however, these transformations later lead to pathologies because they remove states of lower energy than some that are retained. Almost all light-front transformations suffer from these pathologies in perturbation theory, and in the end I am forced to consider renormalization groups with some unusual properties to obtain a transformation that may be approximated by discarding at least some irrelevant operators in perturbation theory. 
These transformations employ invariant-mass cutoffs, so I refer to them as {\eighteenit invariant-mass transformations}. The restriction to transformations that may be approximated perturbatively is an extreme limitation, and it is not even clear that the invariant-mass transformations can be approximated in perturbation theory with controlled errors. We will find that couplings depend on longitudinal momentum fractions when one uses a boost-invariant cutoff, and that corrections to the Hamiltonian diverge logarithmically for states containing particles with arbitrarily small longitudinal momentum fractions, and for interactions involving arbitrarily small longitudinal momentum transfer. Thus, ultimately it is not clear that the invariant-mass transformations are `better' than other transformations one may use; and one should certainly consider other transformations when developing a nonperturbative renormalization group. This article does not attack these nonperturbative problems, even though they may be of more interest than the perturbative results obtained here; however, I try to clarify some of the nonperturbative renormalization problems light-front field theorists should be attacking. An essential restriction in all Euclidean renormalization group calculations is that long range interactions are excluded. As Wilson notes \APrefmark{\rWILTEN}, this is one of the most tenuous assumptions of the renormalization group approach. This locality assumption must be altered in light-front field theory where inverse powers of longitudinal derivatives are required already at the Gaussian fixed points of interest; {\eighteenit i.e.}, in free field theory. Allowing inverse powers of longitudinal derivatives may seem to be a rather minor conceptual modification of the Euclidean version of the renormalization group; but the recognition that there are separate renormalization group transformations that act on the longitudinal and transverse directions may be profound. 
This generalizes the fact that there are separate power counting analyses for longitudinal and transverse dimensions. The analysis of relevant, marginal and irrelevant longitudinal operators indicates that it is inverse powers of longitudinal derivatives that arise as irrelevant operators when one of the light-front renormalization group transformations is applied to a Hamiltonian. Having allowed inverse longitudinal derivatives, one naturally considers the possibility that inverse powers of transverse derivatives occur, a possibility that is especially intriguing for the study of QCD; however, it is difficult to introduce such operators in a controlled manner, and I avoid their introduction in this article. I assume that transverse interactions are local, or at least short range, and refer to this assumption as {\eighteenit transverse locality}. This restriction is not merely a technical convenience, because inverse transverse derivatives typically lead to an infinite number of relevant operators near critical Gaussian fixed points, including products of arbitrarily large numbers of field operators. Near fixed points that contain interactions, such operators may appear without causing trouble, but I discuss only Gaussian fixed points. Once the space of Hamiltonians is restricted, the renormalization group transformation may produce Hamiltonians that lie outside the space. We are only interested in trajectories of Hamiltonians generated by repeated application of the transformation that remain inside the restricted space. Strict transverse locality is violated by the step function cutoffs I employ; however, these violations appear to be controllable. Moreover, inverse transverse derivatives are generated by the transformations; but they are accompanied by cutoffs and I show that the resultant distributions do not produce long range transverse interactions, and that they do not introduce relevant operators. 
In all cases that I have discovered where a product of inverse transverse derivatives and cutoff functions arise, the product can be shown to be an irrelevant operator with respect to transverse scaling; although the operator may contain delta functions or derivatives of delta functions of longitudinal momentum fractions. Each of the light-front renormalization group transformations I consider consists of two steps. In the first step one alters a cutoff ({\eighteenit e.g.}, lowers a cutoff on transverse momenta) to remove degrees of freedom, and computes a renormalized Hamiltonian that produces the same eigenvalues and suitably orthonormalized, projected eigenstates, in the remaining subspace. All operators that correspond to observables are renormalized, not just the Hamiltonian, but in this paper I focus only on the Hamiltonian. Of particular interest for future work is the renormalization of other Poincar{\'e} generators and various current operators. In the second step of each transformation the variables ({\eighteenit e.g.}, transverse momenta) are rescaled to their original range, and the fields and Hamiltonian are rescaled. The most difficult part of this procedure is the construction of an effective Hamiltonian, which is analogous to the development of a block spin Hamiltonian in simple spin systems. In Section {\uppercase\expandafter{\romannumeral4 }}~ I discuss two related procedures for accomplishing this task; the first pioneered by Bloch and Horowitz \APrefmark{\rBLOONE}, and the second by Bloch \APrefmark{\rBLOTWO,\rBRANDOW}. The second method was employed by Wilson in his first serious numerical renormalization group study \APrefmark{\rWILTWO}. In Section {\uppercase\expandafter{\romannumeral5 }}~ I turn to the study of fixed points in perturbation theory and linearized behavior of the transformations near these fixed points \APrefmark{\rWEGONE-\rWEGTHR}, developing simple examples that hopefully clarify the procedure. 
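To display the first of these procedures in schematic form, let ${\cal P}$ project onto the subspace of states retained by the cutoff and ${\cal Q} = 1 - {\cal P}$ onto the states removed; the Bloch-Horowitz effective Hamiltonian is then
$$H_{eff}(E) = {\cal P} H {\cal P} + {\cal P} H {\cal Q} \, {1 \over E - {\cal Q} H {\cal Q}} \, {\cal Q} H {\cal P} \;,$$
which reproduces the eigenvalues of $H$ in the retained subspace at the price of depending on the eigenvalue $E$ being sought; Bloch's method removes this energy dependence at the cost of employing a similarity transformation that is not unitary.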
The study of linearized behavior near critical Gaussian fixed points in light-front field theory is dimensional analysis, as usual. The fact that longitudinal boost invariance corresponds to an exact scale invariance \APrefmark{\rDIRONE,\rKOGTWO,\rLEU} leads to the conclusion that all physical Hamiltonians are fixed points with respect to a longitudinal light-front renormalization group transformation. This should be true order-by-order in any perturbative expansion of the Hamiltonian in powers of a parameter upon which it depends analytically, and it should be an extremely powerful tool for the analysis of fixed points. However, the pathologies mentioned above make it difficult to find applications. Transformations that scale only the transverse momenta produce an infinite number of relevant and marginal operators, in addition to the familiar infinity of irrelevant operators. This happens because entire functions of longitudinal momentum fractions may appear in any given operator without affecting the linear analysis that determines this classification. Transformations that scale only longitudinal momenta also lead to an infinite number of relevant and marginal operators because entire functions of transverse momenta are allowed to appear. The appearance of entire functions drastically complicates the renormalization group analysis, but it may also eventually lead to tremendous power in the application of the renormalization group if one can learn how to accurately approximate these functions. In Section {\uppercase\expandafter{\romannumeral6 }}~ I study second-order perturbations about the critical Gaussian fixed point. I concentrate on Hamiltonians near the canonical massless $\phi^4$ Hamiltonian, and first show that several candidate transformations lead to divergences at second-order. I turn to a boost-invariant transformation and concentrate primarily on a few marginal and relevant operators. 
I show that it is possible to find a closed set of two marginal operators and one relevant operator in a second-order analysis, despite the possible appearance of an infinite number of such operators. This simple analysis illustrates many of the features of a full transformation. I then study Lorentz covariance and cluster decomposition, showing that both are violated by the invariant-mass cutoff in second-order perturbation theory. These properties must be restored by counterterms, and the formalism should be able to produce these counterterms without referring to covariant results. Since there are an infinite number of relevant and marginal operators in the light-front renormalization group, and a simple perturbative analysis indicates that Lorentz covariance and cluster decomposition cannot be restored without actually employing an infinite number of such operators, one must worry that the light-front renormalization group analysis requires one to adjust an infinite number of independent variables. Wilson and I have recently proposed conditions under which a finite number of variables are actually independent \APrefmark{\rPERWIL}. These conditions turn out to be a generalization of the coupling reduction conditions first developed by Oehme, Sibold, and Zimmerman \APrefmark{\rOEHONE-\rKRAUS}. Simply stated, we have proposed that one should seek solutions to the renormalization group equations in which a finite number of specified variables are allowed to be independent functions of the cutoff. While there are an infinite number of relevant, marginal, and irrelevant variables, all but a finite number of variables depend on the cutoff only through their dependence on these independent variables. This condition could merely be a re-parameterization of the cutoff dependence, but we also add the constraint that all dependent variables must vanish when the independent variables are zero. 
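Stated schematically, if $g$ denotes an independent running coupling and $g_i$ the dependent couplings, coupling coherence requires solutions of the renormalization group equations of the form
$$g_i(\Lambda) = f_i\bigl(g(\Lambda)\bigr) \;, \qquad f_i(0) = 0 \;,$$
so that
$$\Lambda \, {\partial g_i \over \partial \Lambda} = {\partial f_i \over \partial g} \, \Lambda \, {\partial g \over \partial \Lambda} \;,$$
and every dependent coupling runs with the cutoff $\Lambda$ only through its dependence on $g(\Lambda)$, vanishing when $g$ is turned off.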
This last constraint is motivated by considering the dependent variables to be counterterms. This means that as one changes the cutoffs, all couplings (including masses) evolve coherently; whereas the general solution to the renormalization group equations might allow much more complicated behavior, at least for the relevant and marginal variables. It is for this reason that we call our conditions {\eighteenit coupling constant coherence conditions}, or more briefly, coupling coherence. In Section {\uppercase\expandafter{\romannumeral6 }}~ I provide part of the demonstration that coupling coherence fixes the strengths of all operators to second order in the canonical coupling, ${\cal O}(g^2)$, and that the resultant strengths are precisely the values required to restore Lorentz covariance and cluster decomposition in second-order perturbation theory. I show that coupling coherence uniquely fixes the relevant mass operator, and the dispersion relation associated with the bare mass term is not that of a free massive particle. I determine the complete set of irrelevant four-boson couplings, and show that a third-order ({\eighteenit i.e.}, two loop) calculation is required to fix the marginal four-boson couplings. I do not compute all couplings to ${\cal O}(g^2)$, but it is straightforward to complete the analysis for the terms I do not evaluate. I am primarily interested in the marginal four-boson couplings, because these indicate how one can expect couplings to run with longitudinal momenta in a light-front renormalization group analysis. In the scalar theory, couplings decrease in strength as the longitudinal momentum fraction carried by the bosons decreases, and as the longitudinal momentum fraction transferred through the vertex decreases. In QCD one expects the opposite behavior. There are two essential expansions that are made in a perturbative renormalization group analysis. 
The first is the perturbative expansion of the transformation itself, which may converge sufficiently near the Gaussian fixed point. The second is the expansion of the transformed Hamiltonian in terms of relevant, marginal, and irrelevant operators. Each expansion must normally be truncated at some finite order, and one should try to show that each truncation leads to controllable errors. Much of the discussion of the errors introduced by truncating the perturbative expansion is delayed to Section {\uppercase\expandafter{\romannumeral7 }}, although some of the most important sources of errors are already evident in a second-order analysis and are discussed in Section {\uppercase\expandafter{\romannumeral6 }}. Many of the errors that arise when one truncates the expansion of the Hamiltonian in terms of relevant, marginal and irrelevant operators by discarding all or most of the irrelevant operators can be studied using the second-order approximation of the transformation. Dropping some irrelevant operators is essential to the program, because the transformations become too complicated if one must follow the evolution of too many operators. Most transformations one might construct lead to uncontrollable errors even in a second-order analysis for a simple reason. When a transformation is applied once, irrelevant operators are produced; and when the transformation is applied again these irrelevant operators produce divergences. The pathology is easily analyzed. Simple transformations attempt to remove degrees of freedom with much lower free energy than some of the degrees of freedom retained. As one step in making the expansion in terms of relevant, marginal and irrelevant operators, one typically expands every energy denominator encountered in the perturbative expansion of the transformation. 
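Writing $E_{in}$ for the free energy of a retained state and $E_{loop}$ for that of a removed intermediate state, the expansion in question is the geometric series
$${1 \over E_{in} - E_{loop}} = -{1 \over E_{loop}} \, \sum_{n=0}^{\infty} \, \Bigl( {E_{in} \over E_{loop}} \Bigr)^n \;,$$
which converges only when $|E_{in}| < |E_{loop}|$.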
This expansion is in powers of one of the energies of a state that is retained after the transformation, and if this state has a higher energy than one of the states removed, the expansion of the energy denominator fails to converge. Simply stated, $1/(E_{in}-E_{loop})$ cannot be expanded in powers of $E_{in}$ when $E_{in}$ is larger than $E_{loop}$. I believe that there are few solutions to this problem. One can abandon the perturbative expansion of the transformation, one can abandon the renormalization group approach or drastically modify it, or one can design the transformation so that $E_{in}$ is always less than $E_{loop}$. In this paper, I choose the final option. The simplest boost-invariant transformations suffer from this same problem, and only the invariant-mass transformation escapes. In Section {\uppercase\expandafter{\romannumeral7 }}~ I discuss third- and higher-order corrections to an invariant-mass transformation near the critical Gaussian fixed point. I first show that the third-order analysis introduces new marginal operators that contain logarithms of longitudinal momentum fractions. I also show that when one insists that these new marginal operators depend on the cutoff only through their dependence on the original marginal $\phi^4$ interaction, their strength is precisely that required to restore Lorentz covariance and cluster decomposition to the boson-boson scattering amplitude computed in second-order perturbation theory. I then turn to a discussion of the errors introduced by various approximations, showing that these errors may be quite large. Wilson has provided a thorough discussion of perturbative renormalization group equations and the errors that result from various approximations, so I focus on new approximations that must be made in a light-front renormalization group analysis. 
In a typical Euclidean renormalization group there are a finite number of relevant and marginal operators and one can accurately tune their boundary values at the lowest cutoff. In the light-front renormalization group, functions of longitudinal momenta appear in each relevant and marginal operator, and these functions must be approximated. This means that errors are made in the relevant and marginal operators themselves, and one must determine whether these errors can be controlled. These new approximations are extremely problematic, because one expects that the strength of a relevant operator will grow exponentially as the cutoff is lowered, which means that any error made in the approximation of a relevant operator will grow exponentially. Of course, one partial solution to this problem is to approximate the relevant operators at the lowest cutoff, and solve the renormalization group equations for the relevant operators in the direction of increasing cutoff, as is usually done \APrefmark{\rWILTEN}, because the strengths of relevant operators decrease exponentially when the cutoff is increased. Marginal operators present a more difficult problem, as usual, because errors tend to grow linearly regardless of the direction in which one solves the equations for marginal operators. Moreover, we will see that operators develop logarithmic singularities when one uses the invariant-mass transformation. These problems require careful study, and this paper barely initiates such a study. No convincing solution to these problems is proposed in this paper, but I have tried to present the problems in a clear manner, because any attempt to use Hamiltonian light-front field theory to perform nonperturbative calculations must address such problems. I offer some speculation on how one might approach this task, and argue that the same problems that complicate the perturbative analysis may actually lead to a simplification of the nonperturbative analysis. 
The renormalization group analysis is drastically simplified when a Tamm-Dancoff truncation \APrefmark{\rTAM,\rDAN} is made and the light-front Tamm-Dancoff (LFTD) approximation \APrefmark{\rPERONE-\rGHPSW} is used, so I occasionally mention important aspects of a renormalization group analysis of LFTD; however, I do not consider such examples in this paper. The Tamm-Dancoff truncation can be simply included in the initial definition of the space of Hamiltonians, after which the analysis proceeds exactly as when no truncation is made. The truncation preserves boost invariance, and thereby allows the boost-invariant renormalization group transformations to be illustrated. More importantly, it drastically simplifies the operators that are included in the space of Hamiltonians, even allowing one to solve some simple examples analytically. Sector-dependent renormalization, in which parameters appearing in the Hamiltonian and other observables are allowed to depend on the Fock space sector(s) within or between which they act, arises in a natural manner; and the light-front renormalization group drastically improves the discussion of renormalization in LFTD. However, if one wants to use LFTD to study physical theories, it is probably necessary to remove some of the restrictions on the space of allowed Hamiltonians that I assume, in particular restrictions associated with transverse locality. I do not yet know how one can introduce the required nonlocalities and still control the number of operators required; however, simple perturbation theory arguments show that nonlocal transverse operators inevitably arise if one derives a LFTD Hamiltonian by eliminating states with extra particles. These issues are not discussed in this paper. 
In the conclusion I discuss the possible relevance of this work for the study of Hamiltonian light-front QCD \APrefmark{\rCASH-\rLEP,\rLEPAGE,\rBROONE,\rBPMP}, indicating some of the difficult problems that I carefully avoid with my simple examples in this paper. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral2 }}. The Renormalization Group} \medskip In classical mechanics the state of a system is completely specified by a fixed number of coordinates and momenta. The objective of classical mechanics is to compute the state as a function of time, given initial conditions. The state is not regarded as fundamental; rather, a Hamiltonian that governs the time evolution of the state is regarded as fundamental. In nonrelativistic quantum mechanics, one must generalize the definition of the state, so that it is specified by a ket in a state space, and one must drastically alter the theory of measurement; but it remains possible to specify a Hamiltonian that governs the time evolution of the state. In both cases the time evolution of the state is a trajectory in a Hilbert space, and the trajectory is determined by a Hamiltonian that must be discovered by fitting data. In principle, one would like to further generalize this procedure for relativistic field theory; however, any straightforward generalization that maintains locality leads to divergences that produce mathematical ambiguities. To make mathematically meaningful statements we must introduce an {\it ad hoc~} regulator, to which I refer as a cutoff, so that physical results can be derived as limits of sequences of finite quantities. The renormalization group provides methods for constructing such limits that are much more powerful than standard perturbation theory. Even if divergences did not signal the need for a cutoff in field theory, we would be forced to introduce a cutoff in some form to obtain finite dimensional approximations for the state vector and Hamiltonian. 
Fock space is an infinite dimensional sum of cross products of infinite dimensional Hilbert spaces, and this is not a convenient starting point for most numerical work. If all interactions are weak and of nearly constant strength over the entire range of scales that affect an observable, we can use standard perturbation theory to compute the observable; however, if either of these conditions is not met, we cannot directly compute observables with realistic Hamiltonians. This problem is easily appreciated by considering a simple spin system in which 1024 spins are each allowed to take two values. The Hamiltonian for this system is a $2^{1024}\times2^{1024}$ matrix, and this matrix cannot generally be diagonalized directly. This matrix is infinitely smaller than the matrices we must consider when solving a relativistic field theory. If the interactions remain weak over all scales of interest, but change in strength significantly, we can use the perturbative renormalization group. If the interactions become strong over a large number of scales of interest, a nonperturbative renormalization group must be developed. A final possibility is that the interactions are weak over almost all scales, becoming strong only over a few scales of interest. In this case, the perturbative renormalization group can be used to eliminate the perturbative scales; after which one can use some other method to solve the remaining Hamiltonian. The introduction of an {\it ad hoc~} cutoff in field theory complicates the basic algorithm for computing the time evolution of a state, because one must somehow remove any dependence on the cutoff from physical matrix elements. This complication is so severe that it has caused field theorists to essentially abandon many of the most powerful tools employed in nonrelativistic quantum mechanics ({\eighteenit e.g.}, the Schr{\"o}dinger picture). How can we make reasonable estimates in relativistic quantum mechanics? 
How can we guarantee that results are independent of the cutoff? How can we find a sequence of Hamiltonians that depends on the cutoff in a manner that leads to correct results as the cutoff approaches its limit? These are the types of questions that led Wilson to completely reformulate the original Gell-Mann--Low renormalization group formalism. Wilson adopted the same general strategy familiar from the study of the time evolution of states, adding a layer of abstraction to the original classical mechanics problem to compute `Hamiltonian trajectories'. The strategy is universal in physics, but the layer of abstraction leads to a great deal of confusion. In analogy to a formalism that yields the evolution of a state as time changes, he developed a formalism that yields the evolution of a Hamiltonian as the cutoff changes. In quantum mechanics a state is represented by an infinite number of coordinates in a Hilbert space, and the Hamiltonian is a linear operator that generates the time evolution of these coordinates. In the renormalization group formalism, the existence of a space in which the Hamiltonian can be represented by an infinite number of coordinates is assumed, and the cutoff evolution of these coordinates is given by the renormalization group transformation. The Hamiltonian is less fundamental than the renormalization group transformation, which can be used to construct trajectories of Hamiltonians. Typically there are restrictions placed on the Hamiltonians ({\eighteenit e.g.}, no long-range interactions) that make it possible for these trajectories to leave the space. Trajectories of renormalized Hamiltonians remain in the space of Hamiltonians, and are roughly analogous to physical trajectories in classical mechanics that meet some additional requirement ({\eighteenit e.g.}, do not leave a specified finite volume). 
Transformations that change the cutoff by different amounts are members of a renormalization `group', which is actually a semi-group since the transformations cannot be inverted. The fact that inverses do not exist is obvious because the transformations reduce the number of degrees of freedom. A more thorough, although by no means complete, discussion of the space of Hamiltonians is given in Section {\uppercase\expandafter{\romannumeral3 }}. Here I will simply state that each term in a Hamiltonian can typically be written as a spatial integral whose integrand is a product of derivatives and field operators. The definition of the Hamiltonian space might be a set of rules that show how to construct all allowed operators. These operators should be thought of as unit vectors, and the coefficients in front of these operators as coordinates. This type of operator is not usually bounded, and this is a source of divergences in field theory. To regulate these divergences the cutoff is included directly in the definition of the space of Hamiltonians. The cutoff one chooses has drastic effects on the renormalization group. While we will see several examples in Section {\uppercase\expandafter{\romannumeral3 }}, one familiar example of a cutoff is the lattice, which replaces spatial integrals by sums over discrete points. The facts that the Hamiltonian can be represented by coordinates that correspond to the strengths of specific operators, and that these operators are all regulated by a cutoff that is part of the definition of the space, are all that one needs to appreciate at this point. Given a space of cutoff Hamiltonians, the next step is to construct a suitable transformation. This is slightly subtle and is usually the most difficult conceptual step in a renormalization group analysis, because one must find a transformation that manages to alter the cutoff without changing the space in which the Hamiltonian lies. These two requirements seem mutually contradictory at first. 
An additional problem for relativistic field theory is that all transformations one can construct change the cutoff in the wrong direction. To see how these difficulties are averted, let me again use the lattice as an example. A typical lattice transformation consists of two steps. In the first step one reduces the number of lattice points, typically by grouping them into blocks \APrefmark{\rKADANOFF} and thereby doubling the lattice spacing; and one computes a new effective Hamiltonian on the new lattice. I do not discuss how this effective Hamiltonian is constructed for a lattice, but this issue is carefully discussed for light-front Hamiltonians in later Sections. At this point the lattice has changed, so the space in which the Hamiltonian lies has changed. The second step in the transformation is to rescale distances using a change of variables, so that the lattice spacing is returned to its initial value, while one or more distance units are changed. After both steps are completed the lattice itself remains unchanged, if it has an infinite volume, but the Hamiltonian changes. This shows how one can alter the cutoff without leaving the initial space of Hamiltonians. Numerically the cutoff does not change, but the units in which the cutoff is measured change. We want to study the limit in which the lattice spacing is taken to zero, but the transformation increases the lattice spacing as measured in the original distance units. While no inverse transformation that directly decreases the lattice spacing exists, we can obtain the limit in which the lattice spacing goes to zero by considering a sequence of increasingly long trajectories. Instead of fixing the initial lattice spacing, we fix the lattice spacing at the end of the trajectory and we construct a sequence of trajectories in which the initial lattice spacing is steadily decreased. 
We then directly seek a limit for the last part of the trajectory as it becomes infinitely long, by studying the sequence of increasingly long trajectories. This procedure is illustrated by Wilson's triangle of renormalization, which is briefly discussed below. One must employ Wilson's algorithm to perform a nonperturbative renormalization group analysis; however, it is possible to study the cutoff limit more directly when a reasonable perturbative approximation exists. In this case, the renormalization group transformation can be approximated by an infinite number of coupled equations for the evolution of a subset of coordinates that are asymptotically complete, and these equations can be inverted to allow direct study of the Hamiltonian trajectory as the cutoff increases or decreases \APrefmark{\rWEGONE-\rWEGTHR}. If it can be shown that all but a finite number of coordinates remain smaller than some chosen magnitude, it may be possible to approximate the trajectory by simply ignoring the small coordinates, retaining an increasing number of coordinates only as one increases the accuracy of the approximation. In this case the task of approximating a trajectory of renormalized Hamiltonians is reduced to the task of solving a finite number of coupled nonlinear difference equations. The primary goal of this paper is the development of a perturbative light-front renormalization group. Given a transformation $T$ that maps a subspace of Hamiltonians into the space of Hamiltonians, with the possibility that some Hamiltonians are mapped to Hamiltonians outside the original space, we study $T[H]$. We can apply the transformation repeatedly, and construct a trajectory of Hamiltonians, with the $l$-th point on the trajectory being $$H_l = T^l[H_0] \;. \eqno(2.1)$$ \noindent Any infinitely long trajectory that remains inside the space is called a trajectory of renormalized Hamiltonians. The motivation for this definition of renormalization is clarified further below. 
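As a rough illustration of Eq. (2.1), the following sketch iterates a standard toy transformation, the decimation of the one-dimensional Ising chain (not one of the transformations studied in this paper), to generate a trajectory of couplings:

```python
import math

def T(K):
    # One blocking step for the 1D Ising chain: integrating out every
    # other spin gives tanh(K') = tanh(K)^2.  This toy map stands in for
    # a full renormalization group transformation on Hamiltonians.
    return math.atanh(math.tanh(K) ** 2)

def trajectory(K0, steps):
    # H_l = T^l[H_0], Eq. (2.1): apply the transformation repeatedly.
    Ks = [K0]
    for _ in range(steps):
        Ks.append(T(Ks[-1]))
    return Ks

Ks = trajectory(1.0, 10)
# The coupling decreases monotonically toward the trivial fixed point K* = 0.
```

Each step discards degrees of freedom, so the map has no inverse, consistent with the semi-group structure discussed above.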
It is assumed that the trajectory is completely determined by the initial Hamiltonian, $H_0$, and $T$; however, the dependence on $H_0$ is usually not explicitly indicated. Moreover, we will see later that boundary conditions may be specified in a much more general fashion. Any renormalization group analysis begins with the identification of at least one fixed point, $H^*$. A {\eighteenit fixed point} is defined to be any Hamiltonian that satisfies the condition $$H^*=T[H^*] \;. \eqno(2.2)$$ \noindent For perturbative renormalization groups the search for such fixed points is relatively easy, as we will see in Section {\uppercase\expandafter{\romannumeral5 }}; however, in nonperturbative studies such a search typically involves a difficult numerical trial and error calculation \APrefmark{\rWILTWO,\rWILFOUR,\rWILTEN}. If $H^*$ contains no interactions ({\eighteenit i.e.}, no terms with a product of more than two field operators), it is called {\eighteenit Gaussian}. If $H^*$ has a massless eigenstate, it is called {\eighteenit critical}. If a Gaussian fixed point has no mass term, it is a {\eighteenit critical Gaussian} fixed point. If it has a mass term, this mass must typically be infinite, in which case it is a {\eighteenit trivial Gaussian} fixed point. In lattice QCD the trajectory of renormalized Hamiltonians stays near a critical Gaussian fixed point until the lattice spacing becomes sufficiently large that a transition to strong-coupling behavior occurs. If $H^*$ contains only weak interactions, it is called {\eighteenit near-Gaussian}, and one may be able to use perturbation theory both to identify $H^*$ and to accurately approximate trajectories of Hamiltonians near $H^*$ \APrefmark{\rWILNINE}. Of course, once the trajectory leaves the region of $H^*$ it is generally necessary to switch to a nonperturbative calculation of subsequent evolution. 
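The fixed-point condition (2.2) can be made concrete with a hypothetical one-coupling map, $g \rightarrow T(g) = 2g - g^2$; a simple trial-and-error (bisection) search of the kind alluded to above locates its nontrivial fixed point:

```python
def T(g):
    # Hypothetical one-coupling transformation; its fixed points
    # solve g* = T(g*): the Gaussian one, g* = 0, and a nontrivial one.
    return 2.0 * g - g * g

# Bisection on T(g) - g over a bracket that excludes the Gaussian
# fixed point at the origin.
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if T(mid) - mid > 0.0:
        lo = mid
    else:
        hi = mid
# mid converges to the nontrivial fixed point g* = 1.
```

For a nonperturbative transformation the same search must be carried out numerically in a very high-dimensional coupling space, which is why it is difficult.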
If $H^*$ contains a strong interaction, one must use nonperturbative techniques to find $H^*$, but it may still be possible to produce trajectories near the fixed point using perturbation theory. The perturbative analysis in this case includes the interactions in $H^*$ to all orders, treating only the deviations from these interactions in perturbation theory. Consider the immediate neighborhood of the fixed point, and assume that the trajectory remains in this neighborhood. This assumption must be justified {\it a posteriori}, but if it is true we should write $$H_l=H^*+\delta H_l \;, \eqno(2.3)$$ \noindent and consider the trajectory of small deviations $\delta H_l$. As long as $\delta H_l$ is `sufficiently small', we can use a perturbative expansion in powers of $\delta H_l$, which leads us to consider $$\delta H_{l+1}= L \cdot \delta H_l + N[\delta H_l] \;. \eqno(2.4)$$ \noindent Here $L$ is the linear approximation of the full transformation in the neighborhood of the fixed point, and $N[\delta H_l]$ contains all contributions to $\delta H_{l+1}$ of ${\cal O}(\delta H_l^2)$ and higher. The object of the renormalization group calculation is to compute trajectories and this requires a representation for $\delta H_l$. The problem of computing trajectories is one of the most common in physics, and a convenient basis for the representation of $\delta H_l$ is provided by the eigenoperators of $L$, since $L$ dominates the transformation near the fixed point. These eigenoperators and their eigenvalues are found by solving $$L \cdot O_m=\lambda_m O_m \;. \eqno(2.5)$$ \noindent If $H^*$ is Gaussian or near-Gaussian it is usually straightforward to find $L$, and its eigenoperators and eigenvalues. This is not typically true if $H^*$ contains strong interactions, and in much of the remaining discussion I focus on formalism that is primarily useful for the study of trajectories in the neighborhood of Gaussian and near-Gaussian fixed points. 
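As a sketch of the linear analysis in Eqs. (2.4)-(2.5), the following code linearizes a hypothetical two-coupling transformation about its Gaussian fixed point by finite differences and reads off the eigenvalues (all numbers here are invented for illustration):

```python
def T(c):
    # Hypothetical transformation on two couplings (mu, w) with a
    # Gaussian fixed point at the origin; the quadratic terms play the
    # role of N[delta H] in Eq. (2.4).
    mu, w = c
    return (4.0 * mu + 0.1 * w * w, 0.25 * w + 0.1 * mu * w)

# Finite-difference linearization L = dT/dc at the fixed point c* = (0, 0).
eps = 1e-7
col1 = [t / eps for t in T((eps, 0.0))]
col2 = [t / eps for t in T((0.0, eps))]
L = [[col1[0], col2[0]],
     [col1[1], col2[1]]]

# Eigenvalues of the 2x2 matrix L from its characteristic polynomial.
tr = L[0][0] + L[1][1]
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
disc = (tr * tr - 4.0 * det) ** 0.5
lams = sorted(((tr - disc) / 2.0, (tr + disc) / 2.0))
# lams is approximately [0.25, 4.0]: one irrelevant eigenoperator
# (lambda < 1) and one relevant one (lambda > 1), as in Eq. (2.5).
```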
In Section {\uppercase\expandafter{\romannumeral5 }}~ the linear approximations of several simple light-front renormalization group transformations about a critical Gaussian fixed point are derived, and their eigenoperators and eigenvalues are computed. Using the eigenoperators of $L$ as a basis we can represent $\delta H_l$, $$\delta H_l = \sum_{m\in R} \mu_{m_l}O_m +\sum_{m\in M} g_{m_l}O_m+ \sum_{m\in I} w_{m_l}O_m \;.\eqno(2.6)$$ \noindent Here the operators $O_m$ with $m\in R$ are {\eighteenit relevant} ({\eighteenit i.e.}, $\lambda_m>1$), the operators $O_m$ with $m\in M$ are {\eighteenit marginal} ({\eighteenit i.e.}, $\lambda_m=1$), and the operators with $m\in I$ are either {\eighteenit irrelevant} ({\eighteenit i.e.}, $\lambda_m<1$) or become irrelevant after many applications of the transformation. The motivation behind this nomenclature is made clear by considering repeated application of $L$, which causes the relevant operators to grow exponentially, the marginal operators to remain unchanged in strength, and the irrelevant operators to decrease in magnitude exponentially. There are technical difficulties associated with the symmetry of $L$ and the completeness of the eigenoperators that I ignore \APrefmark{\rWEGONE-\rWEGTHR}. I occasionally refer to the relevant variables as masses, and the marginal and irrelevant variables as couplings; but I also occasionally refer to all variables, including relevant variables, as couplings. What is meant should be clear from context. $L$ depends both on the transformation and the fixed point, but there are always an infinite number of irrelevant operators. On the other hand, transformations of interest for Euclidean lattice field theory typically lead to a finite number of relevant and marginal operators. 
One of the most serious problems for a perturbative light-front renormalization group is that {\eighteenit an infinite number of relevant and marginal operators are required.} This result is derived in Section {\uppercase\expandafter{\romannumeral5 }}, and I discuss some of the consequences below. In the case of scalar field theory, an infinite number of relevant and marginal operators arise because the light-front cutoffs violate Lorentz covariance and cluster decomposition. These are continuous symmetries, and their violation leads to an infinite number of constraints on the Hamiltonian. The key to showing that the light-front renormalization group may not be rendered useless by an infinite number of relevant and marginal operators is the observation that both the strength and the evolution of all but a finite number of relevant and marginal operators are fixed by Lorentz covariance and cluster decomposition. However, one does not want to employ either of these properties directly in the construction of Hamiltonians, because they are never explicit in the renormalization group calculation of a Hamiltonian trajectory. Lorentz covariance and cluster decomposition are properties of observations that are obtained using the full Hamiltonian, and one does not want to solve problems that require the entire Hamiltonian to compute the Hamiltonian itself. The alternative that Wilson and I have proposed \APrefmark{\rPERWIL,\rOEHONE-\rKRAUS} is to insist that the new relevant and marginal variables are not independent functions of the cutoff, but depend only on the cutoff through their dependence on canonical variables. While this requirement obviously fixes the manner in which the new variables evolve with the cutoff, it also fixes their value at all cutoffs once the values of the canonical variables are chosen. 
The remarkable feature of this procedure is that the value it gives to the new variables is precisely the value required to restore Lorentz covariance and cluster decomposition. This conclusion is not proven to all orders in perturbation theory, but it is illustrated by a nontrivial second-order example in this paper. To simplify subsequent discussion, the statement that $\delta H_l$ is small is assumed to mean that all masses and couplings in the expansion of $\delta H_l$ are small. The analysis itself should signal when this assumption is naive. A rigorous discussion would require consideration of the spectra of the eigenoperators. In Section {\uppercase\expandafter{\romannumeral6 }}~ I show that several candidate light-front renormalization group transformations lead to unbounded operators, including unbounded irrelevant operators, even though there are cutoffs. In some of these cases, corrections that are ${\cal O}(\delta H^2)$ are shown to be infinite; {\eighteenit i.e.}, not small. If the coefficient of a single operator ({\eighteenit e.g.}, a single mass) becomes large, it may be straightforward to alter the analysis so that this coefficient is included to all orders in an approximation of the transformation, so that one perturbs only in the small coefficients; but this possibility is not pursued in this paper. For the purpose of illustration, let me assume that $\lambda_m=4$ for all relevant operators, and $\lambda_m=1/4$ for all irrelevant operators. The transformation can be represented by an infinite number of coupled, nonlinear difference equations: $$\mu_{m_{l+1}}=4 \mu_{m_l} + N_{\mu_m}[\mu_{m_l}, g_{m_l}, w_{m_l}] \;, \eqno(2.7)$$ $$g_{m_{l+1}}=g_{m_l} + N_{g_m}[\mu_{m_l}, g_{m_l}, w_{m_l}] \;, \eqno(2.8)$$ $$w_{m_{l+1}}={1 \over 4} w_{m_l} + N_{w_m}[\mu_{m_l}, g_{m_l}, w_{m_l}] \;. 
\eqno(2.9)$$ \noindent Sufficiently near a critical Gaussian fixed point, the functions $N_{\mu_m}$, $N_{g_m}$, and $N_{w_m}$ should be adequately approximated by an expansion in powers of $\mu_{m_l}$, $g_{m_l}$, and $w_{m_l}$. The assumption that the Hamiltonian remains in the neighborhood of the fixed point, so that all $\mu_{m_l}$, $g_{m_l}$, and $w_{m_l}$ remain small, must be justified {\eighteenit a posteriori}. Any precise definition of the neighborhood of the fixed point within which all approximations are valid must also be provided {\eighteenit a posteriori}. Wilson has given a general discussion of how these equations are solved \APrefmark{\rWILTEN}, and I repeat only the most important points. In perturbation theory these equations are equivalent to an infinite number of coupled, first-order, nonlinear differential equations. To solve them we must specify `boundary' values for every variable, possibly at different $l$, and then employ a {\eighteenit stable} numerical algorithm to find the variables at all other values of $l$ for which the trajectory remains near the fixed point. We want to apply the transformation ${\cal N}$ times, letting ${\cal N} \rightarrow \infty$, and adjusting the initial Hamiltonian so that this limit exists. Eq. (2.7) must be solved by `integrating' in the exponentially stable direction of decreasing $l$ ({\eighteenit i.e.}, typically toward larger cutoffs), while Eq. (2.9) must be solved in the direction of increasing $l$. Eq. (2.8) is linearly unstable in either direction. The coupled equations must be solved using an iterative algorithm. Such systems of coupled difference equations and the algorithms required for their solution are familiar in numerical analysis. In this context the need for renormalization can be understood from the fact that the renormalization group difference equations must, in principle, be solved over an infinite number of scales. 
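The choice of integration directions can be checked with the illustrative eigenvalues $\lambda_m = 4$ and $\lambda_m = 1/4$ used above: an error seeded in a relevant variable is amplified like $4^l$ toward increasing $l$, while an error in an irrelevant variable is damped like $(1/4)^l$ in the same direction. A minimal numeric sketch:

```python
def propagate(err, lam, steps):
    # Linearized error propagation under repeated application of the
    # transformation: err_{l+1} = lam * err_l.
    for _ in range(steps):
        err *= lam
    return err

# A tiny error in a relevant variable (lambda = 4), iterated toward
# increasing l, is amplified enormously ...
relevant_err = propagate(1e-12, 4.0, 30)
# ... while an O(1) error in an irrelevant variable (lambda = 1/4)
# is driven exponentially toward zero in the same direction.
irrelevant_err = propagate(1.0, 0.25, 30)
```

This is the reason the relevant equations are stable toward decreasing $l$ and the irrelevant equations toward increasing $l$.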
The final output of the renormalization group analysis is the cutoff Hamiltonian $H_{\cal N}$. If this Hamiltonian is the final point in an infinitely long trajectory of Hamiltonians, it will yield the same observables below the final cutoff as $H_0$; but for an infinitely long trajectory $H_0$ contains no cutoff, so $H_{\cal N}$ {\eighteenit will yield results that do not depend on the cutoff}. It is for this reason that $H_{\cal N}$ and all other Hamiltonians on any infinitely long trajectory are referred to as {\eighteenit renormalized Hamiltonians}. How one solves the final cutoff Hamiltonian problem using $H_{\cal N}$ depends on the theory. For the scalar theories used as examples in this paper I assume that perturbation theory can be used to predict observables. For QCD, even if $H_{\cal N}$ can be derived by purely perturbative techniques, it will have to be solved nonperturbatively because of confinement. In either case, we must have an accurate approximation for $H_{\cal N}$; however, we do not necessarily need to explicitly construct accurate approximations for all $H_l$. The boundary values for the irrelevant variables should be set at $l=0$, because we need to solve Eq. (2.9) in the direction of increasing $l$. At large $l$ all variables are exponentially insensitive to the irrelevant boundary values. Therefore, they can be chosen arbitrarily (universality), and the values of the irrelevant variables at $l={\cal N}$ are output by the modern renormalization group. This is one of the crucial differences between the modern renormalization group and the Gell-Mann--Low renormalization group, in which irrelevant variables are not treated. Irrelevant operators are important in $H_{\cal N}$ unless the final cutoff is much larger than the scale of physical interest. 
The fact that they are irrelevant implies that their final values are exponentially insensitive to their initial values, and it implies that they are driven at an exponential rate toward a function of the relevant and marginal variables, as discussed below. The fact that they are irrelevant does not necessarily imply that they are unimportant. This depends on how sensitive the physical observables in which one is interested are to physics near the scale of the cutoff. The boundary values required by Eqs. (2.7) and (2.8) can be given at $l={\cal N}$. Sufficiently far from $l=0$ the irrelevant variables are exponentially driven to maintain polynomial dependence on relevant and marginal variables, and sufficiently far from $l={\cal N}$ the relevant variables are exponentially driven toward similar polynomial dependence on the marginal variables. While the calculation of transient behavior near $l=0$ and $l={\cal N}$ usually requires a numerical computation, the relevant and irrelevant variables are readily approximated by polynomials that involve only marginal variables in the intermediate region. These polynomials are determined by the expansions of $N_{\mu_m}$, $N_{g_m}$, and $N_{w_m}$, and they can be fed back into Eq. (2.8) to find an approximate equation for the marginal variables that requires direct knowledge only of the marginal variables themselves. These points may be confusing, so let me consider a simple example. Consider coupled differential equations for a relevant variable $m$, a marginal variable $g$, and an irrelevant variable $w$, $${\partial m \over \partial t} = -2 m + c_1 g^2 + c_2 g w \;,\eqno(2.10)$$ $${\partial g \over \partial t} = -c_3 g^3 + c_4 g^2 m + c_5 w^2 \;,\eqno(2.11)$$ $${\partial w \over \partial t} = 2 w + c_6 g^2 \;.\eqno(2.12)$$ \noindent Here the cutoff increases as $t$ increases. We want to fix the boundary condition for $w$ at $t \rightarrow \infty$, and the boundary condition for $m$ at $t=0$. 
We are only interested in the solution near $t=0$, and this should not depend on the boundary condition for $w$, as long as $w(t)$ remains finite as $t \rightarrow \infty$. Let $m(0)=m_0$ and $g(0)=g_0$. To satisfy the boundary condition for $w$ we must have $$w(t)= -{c_6 \over 2} g^2(t)+{\cal O}({\rm cubic}) \;,\eqno(2.13)$$ \noindent for all finite values of $t$, where all cubic and higher order terms in $g$ and $m$ are readily computed. Substituting this result into Eqs. (2.10) and (2.11), we see that the irrelevant variable has no effect to leading order in an expansion in powers of $g$. For very large but finite values of $t$, we find $$m(t)={c_1 \over 2} g^2(t)+{\cal O}(g^3) \;.\eqno(2.14)$$ \noindent Therefore, the marginal variable must satisfy $${\partial g \over \partial t} = -c_3 g^3 + {\cal O}(g^4) \;,\eqno(2.15)$$ \noindent for large but finite values of $t$. Eq. (2.15) can now be solved easily to obtain an accurate approximation for $g(t)$ for large $t$, $$g^2(t)={g^2(0) \over 1+2 c_3 g^2(0) t} \;. \eqno(2.16)$$ To obtain an accurate approximation for small $t$, we can continue to use Eq. (2.13), but we need an iterative algorithm to improve our estimates of $g(t)$ and $m(t)$. This is done by integrating Eqs. (2.10)-(2.11) near $t=0$ repeatedly, using the estimates from one iteration on the right-hand sides of these equations to generate a subsequent estimate. This process is repeated until a convergence criterion is met. The initial seed is given by Eq. (2.13), the solution of Eq. (2.15) for $g(t)$, and $$m(t)=m_0 e^{-2t}+{c_1 \over 2} \Bigl(g^2(t)-g_0^2 e^{-2t}\Bigr) \;,\eqno(2.17)$$ \noindent for example. The transient behavior near $t=0$ in this approximation of $m(t)$ is wrong, but any guess that is sufficiently near the solution should lead to convergence. After this iterative process converges, the desired result is obtained. In this case, the only output is $w(0)$, because $m(0)$ and $g(0)$ are input.
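The iterative scheme just described can be made concrete numerically. The following is a minimal sketch, not taken from the paper: the coefficients $c_1,\ldots,c_6$, the boundary values, the grid, and the iteration count are all illustrative choices. The relevant and marginal variables are integrated forward from $t=0$ (their stable direction), while the irrelevant variable is integrated backward from a large $t=T$ with the leading-order seed of Eq. (2.13), which imposes finiteness as $t \rightarrow \infty$; the only output is $w(0)$.

```python
# Iterative solution of the toy flow, Eqs. (2.10)-(2.12).  All parameter
# values below are illustrative assumptions, not taken from the text.
import math

c1 = c2 = c3 = c4 = c5 = c6 = 1.0     # illustrative couplings
m0, g0 = 0.0, 0.3                      # input boundary values at t = 0
T, n = 8.0, 8000                       # grid: t in [0, T], n Euler steps
dt = T / n
t = [i * dt for i in range(n + 1)]

# initial seeds: Eq. (2.16) for g, Eq. (2.17) for m, Eq. (2.13) for w
g = [math.sqrt(g0**2 / (1.0 + 2.0 * c3 * g0**2 * ti)) for ti in t]
m = [m0 * math.exp(-2.0 * ti)
     + 0.5 * c1 * (gi**2 - g0**2 * math.exp(-2.0 * ti))
     for ti, gi in zip(t, g)]
w = [-0.5 * c6 * gi**2 for gi in g]

for _ in range(20):                    # iterate well past convergence
    # forward sweep for m and g: the stable direction for these variables
    for i in range(n):
        m[i + 1] = m[i] + dt * (-2.0 * m[i] + c1 * g[i]**2 + c2 * g[i] * w[i])
        g[i + 1] = g[i] + dt * (-c3 * g[i]**3 + c4 * g[i]**2 * m[i]
                                + c5 * w[i]**2)
    # backward sweep for w, seeded at t = T by Eq. (2.13); integrating
    # toward decreasing t is the stable direction for the irrelevant variable
    w[n] = -0.5 * c6 * g[n]**2
    for i in range(n, 0, -1):
        w[i - 1] = w[i] - dt * (2.0 * w[i] + c6 * g[i]**2)

print("w(0) =", w[0], "  leading order -(c6/2) g0^2 =", -0.5 * c6 * g0**2)
```

The output $w(0)$ lands near its leading-order polynomial value $-(c_6/2)g_0^2$, displaced by the computable higher-order corrections. Integrating $w$ forward instead would excite the $e^{+2t}$ mode and diverge, which is exactly why the boundary condition for the irrelevant variable must be fixed at $t \rightarrow \infty$.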
In the simplest case there is only one marginal variable and a finite number of relevant variables. It is assumed that all variables are small near the critical Gaussian fixed point, and in particular it is assumed that the marginal variable is small. There are an infinite number of irrelevant variables, but one can classify these variables according to the eigenvalues in Eq. (2.5) and according to their magnitude in terms of the marginal variable. One can first replace the irrelevant variables with an appropriate polynomial involving marginal and relevant variables. To leading order these are given by the zeroes of the polynomials on the right-hand-side of Eq. (2.9). Substituting these results in Eq. (2.7) one can next determine the strength of each relevant variable in terms of the single marginal variable. The appropriate polynomials are given to leading order by the zeroes of the polynomials on the right-hand-side of Eq. (2.7). After this one also has expansions for the irrelevant variables in terms of the single marginal variable. Every irrelevant variable has a leading term of ${\cal O}(g^{p_m})$, where $g$ is the single marginal variable. There are typically a finite number of irrelevant variables with an eigenvalue $\lambda_m$ from Eq. (2.5) greater than any given value and with any given value of $p_m$, and one can construct a first approximation by keeping only the `leading' irrelevant operators. In other words, one constructs a perturbative approximation for the trajectory in which the order of perturbation theory is determined by the single marginal variable $g$. Generalization of this procedure to the case where there is any finite number of marginal and relevant variables is straightforward. Next consider the case where there are an infinite number of relevant variables, in addition to an infinite number of irrelevant variables; but there are only a finite number of marginal variables. 
The irrelevant variables are treated exactly as above, except now the evolution of the irrelevant variables may be sensitive to an infinite number of relevant variables. There are an infinite number of boundary values that must be specified at $l={\cal N}$ in principle, and an infinite number of polynomials that must be considered in Eqs. (2.7) and (2.9). One can simultaneously classify all of the irrelevant and relevant operators in terms of the leading powers of the marginal variables that appear in the zeroes of these polynomials, just as the irrelevant operators were classified above. To leading order, one can replace the irrelevant and relevant variables in the equations for the marginal variables using these zeroes, and one can construct a perturbative approximation for the marginal variables for $0 \ll l \ll {\cal N}$ by dropping sub-leading irrelevant and relevant variables. This part of the procedure is a straightforward generalization of the procedure used to handle an infinite number of irrelevant variables; however, there is a crucial difference between relevant and irrelevant variables. As discussed above, the boundary values chosen for the irrelevant variables are arbitrary. The boundary values chosen for the relevant variables do not affect the trajectory for $l \ll {\cal N}$, because their effects are exponentially suppressed as $l$ decreases in Eq. (2.7); however, we need to construct $H_{\cal N}$, and the boundary values for the relevant variables may have important effects on all irrelevant, marginal, and relevant variables near $l={\cal N}$. Remember that the boundary values of the relevant and marginal variables are input, while the values of the irrelevant variables at $l={\cal N}$ are output by the renormalization group equations. If there are an infinite number of relevant variables, we are forced to fix an infinite number of boundary conditions. 
Moreover, even if we want to compute a finite number of `leading' irrelevant variables at $l={\cal N}$, in principle we must approximate the evolution of an infinite number of relevant variables near $l={\cal N}$, because all of the relevant variables affect the evolution of each irrelevant operator; and near $l={\cal N}$ transient behavior may prevent us from replacing the relevant variables with functions of the marginal variables. Does this render the perturbative renormalization group useless? Hopefully not, for two reasons. First, there are many problems in physics where an infinite number of boundary conditions must be fixed ({\eighteenit e.g.}, the value of a field on a surface). There are also many problems in which the evolution of an infinite number of variables must be computed ({\eighteenit e.g.}, the value of a field in a volume). The key to the solution to such problems is to show that it is possible to approximate an infinite number of variables with a few well-chosen variables ({\eighteenit e.g.}, parameters in functions of the original variables). In the case of light-front field theory we encounter an infinite number of relevant operators because functions of longitudinal momenta appear in these operators. However, a finite number of functions appear, and specifying the infinite number of boundary conditions is accomplished by specifying these functions. Further consideration of the renormalization group equations reveals a second reason that the perturbative renormalization group may survive, even though there are an infinite number of relevant operators. It may be natural for only a few of the relevant variables to be `independent'. Suppose that we independently fix the boundary values of an infinite number of relevant variables and then solve Eqs. (2.7)-(2.9). 
Since we assume that $H_{\cal N}$ is near the Gaussian fixed point, all relevant variables are small at $l={\cal N}$, and we expect all relevant variables to approach polynomials of the marginal variables at an exponential rate. It is only possible for these variables to deviate significantly from these polynomials near the end of the trajectory, so on any trajectory of renormalized Hamiltonians near a Gaussian fixed point all relevant variables are exponentially near these polynomials over an infinite number of cutoff scales. Thus, it is natural to speculate that almost all of these variables track these polynomials exactly, never deviating from this behavior. If only a finite number of relevant variables depart from these polynomials, we can approximate the trajectory by dropping all sub-leading relevant variables and numerically computing the behavior only of those relevant variables that depart from these polynomials. I call a relevant variable whose evolution is exactly given by a polynomial a dependent relevant variable. A relevant variable whose value departs from such a polynomial is called an independent relevant variable. I simply postulate that there are only a finite number of independent relevant variables in theories of physical interest ({\eighteenit e.g.}, Lorentz covariant theories). The boundary values of dependent relevant variables cannot be adjusted independently because they are determined by the polynomials; and this places severe constraints on the theories that satisfy this postulate. These are the coupling constant coherence conditions, and Wilson and I have shown that they arise naturally when there is a hidden symmetry \APrefmark{\rPERWIL}. If there are an infinite number of marginal variables, in addition to an infinite number of relevant and irrelevant variables, their behavior is unpredictable without further assumptions. 
I postulate that there are a finite number of independent marginal variables, and that the infinite number of remaining dependent marginal variables can each be replaced by a polynomial expansion in powers of the independent marginal variables. To see that these postulates are reasonable, one must try to understand why light-front renormalization group transformations lead to an infinite number of relevant and marginal variables. We will see in Section {\uppercase\expandafter{\romannumeral3 }}~ that every cutoff that is run by a light-front renormalization group transformation breaks Lorentz covariance, and most violate cluster decomposition. Moreover, in gauge theories these cutoffs violate gauge invariance. The price one pays for violating a continuous symmetry in this case is the appearance of an infinite number of relevant and marginal variables. This price should not be surprising, because these symmetries must be restored to all physical observables and this requirement imposes an infinite number of conditions on the renormalized Hamiltonian. These conditions relate the strengths of operators in the Hamiltonian, providing an infinite number of relationships between the relevant and marginal operators at $l={\cal N}$. Thus, it is expected that the appearance of an infinite number of relevant and marginal variables does not imply that there are an infinite number of free parameters in the theory. There should be exactly as many independent relevant and marginal variables in a light-front renormalization group analysis as there are in a Euclidean renormalization group analysis, and the strength of all dependent relevant and marginal variables should be fixed by the requirement that the broken symmetries are restored. 
These symmetries can be restored by the adjustment of the strength of the dependent variables at the boundary, and the requirement that the symmetries be maintained for all other cutoffs places an infinite number of conditions on the renormalization group. These points are illustrated by examples in Sections {\uppercase\expandafter{\romannumeral6 }}~ and {\uppercase\expandafter{\romannumeral7 }}. In order to summarize the most important aspects of a perturbative renormalization group analysis and clarify the difference between perturbative and nonperturbative analyses, I introduce Wilson's triangle of renormalization, shown in figure 1. The triangle displays a sequence of renormalization group trajectories of increasing length, ${\cal N}$. We can label the Hamiltonian using a superscript to denote the absolute cutoff and a subscript to denote the effective cutoff, $H^{\Lambda_0}_{\Lambda_n}$. Assume that the cutoff is a cutoff on energy. The object of the renormalization group is to make it possible to let $\Lambda_0 \rightarrow \infty$ while keeping $\Lambda_{\cal N}$ at some fixed value, say 2 GeV. The subscript ${\cal N}$ indicates how many times the transformation must be applied to the original Hamiltonian to lower the effective cutoff to its final value, so one has $\Lambda_{\cal N} = \Lambda_0/(2^{\cal N})$ for example. To fix $\Lambda_{\cal N}$ and let $\Lambda_0 \rightarrow \infty$, we must also let ${\cal N} \rightarrow \infty$. The renormalization group enables one to compute renormalized Hamiltonians, shown as the right-most column in figure 1, by providing an operational definition of renormalization. In the perturbative renormalization group this task is reduced to solving a finite number of difference or differential equations, as shown above. 
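The bookkeeping of the triangle can be spelled out in a few lines. This is only an illustration of the scaling relation $\Lambda_{\cal N} = \Lambda_0/2^{\cal N}$, with the fixed final cutoff of 2 GeV taken from the example above and the sequence of bare cutoffs chosen arbitrarily.

```python
# Triangle of renormalization bookkeeping: hold Lambda_N fixed while the
# bare cutoff Lambda_0 grows.  Since Lambda_N = Lambda_0 / 2**N, each
# doubling of Lambda_0 adds one transformation, so removing the bare
# cutoff (Lambda_0 -> infinity) forces N -> infinity.
import math

lam_final = 2.0                                  # GeV, held fixed
for lam0 in (4.0, 64.0, 2048.0, 2.0**30):        # illustrative bare cutoffs
    steps = math.log2(lam0 / lam_final)          # N such that lam0/2**N = lam_final
    print(f"Lambda_0 = {lam0:g} GeV  requires  N = {steps:g} transformations")
```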
At each stage of a nonperturbative renormalization group calculation one selects a cutoff Hamiltonian $H^{\Lambda_0}_{\Lambda_0}$, and applies the transformation $T$ a total of ${\cal N}$ times to generate the Hamiltonian $H^{\Lambda_0}_{\Lambda_{\cal N}}$. In a successive stage one selects a new Hamiltonian $H^{\Lambda_0}_{\Lambda_0}$ and increments ${\cal N}$ by one. The sequence of initial Hamiltonians is related in a manner that must be determined as part of an algorithm tailored to the specific theory. In a nonperturbative calculation one probably must construct the triangle of Hamiltonians directly, being satisfied with numerical evidence that the limiting trajectory of renormalized Hamiltonians exists. If $H^{\Lambda_0}_{\Lambda_0}$ lies near a Gaussian fixed point we have seen that the irrelevant variables in $H^{\Lambda_0}_{\Lambda_0}$ should have an exponentially small effect on most of the trajectory. We have also seen that we want to fix the strength of some operators at the end of the trajectory, not at the beginning. Similar features should appear in a nonperturbative analysis, but there is no general procedure to identify and order operators in this case. We may have to simply search over initial values of parameters that are identified as important, to find an $H^{\Lambda_0}_{\Lambda_0}$ that yields the desired operators in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. After convincing oneself that an arbitrarily long trajectory can be constructed if $H^{\Lambda_0}_{\Lambda_0}$ is adjusted to sufficient precision, in practice one hopefully needs to explicitly construct only the last part of a finite trajectory to obtain an accurate approximation of a renormalized Hamiltonian, $H^{\Lambda_0}_{\Lambda_{\cal N}}$. In light-front renormalization groups there are an infinite number of relevant and marginal operators, because undetermined functions of longitudinal momenta appear in a finite number of operators that are relevant or marginal according to transverse power counting.
In a perturbative analysis one can use continuous symmetries such as Lorentz covariance to fix these functions in each order of an expansion in terms of a single coupling; however, in a nonperturbative analysis one may have to parameterize each function and seek an approximation of the renormalization group transformation that is represented by a set of coupled equations for the evolution of these parameters. Wilson has given an excellent discussion of the relationship between divergences in standard perturbation theory ({\eighteenit e.g.}, Feynman perturbation theory) and the perturbative renormalization group \APrefmark{\rWILTEN}, and I close this Section by repeating the most salient points. There are usually no divergences encountered when one applies the renormalization group transformation once; however, divergences can arise in the form of powers of $l$ and exponents containing $l$, when $T$ is applied a large number of times; and these divergences are directly related to the divergences in Feynman perturbation theory. There are no divergences apparent when one solves the perturbative renormalization group equations using a stable numerical algorithm; however, if one attempts to expand a coupling $g_l$ in powers of $g_0$, for example, powers of $l$ appear. As $l \rightarrow \infty$ these powers of $l$ lead to the divergences familiar in Feynman perturbation theory. One can see an example of this by studying Eq. (2.16), where $t$ represents the logarithm of a ratio of cutoffs. If the right-hand side of Eq. (2.16) is expanded in powers of $g(0)$, each term diverges like a power of $t$; so this expansion is numerically useless. On the other hand, $g(t)$ is perfectly well-behaved if $g(0)$ is small, and the divergences result from the fact that $g(t)$ and $g(0)$ differ by orders-of-magnitude for sufficiently large $t$. Eq. (2.16) illustrates the significant improvement over standard perturbation theory offered by the perturbative renormalization group. 
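The contrast between the well-behaved running coupling and its divergent expansion can be checked directly from Eq. (2.16). In the sketch below the values of $c_3$ and $g^2(0)$ are illustrative; expanding the right-hand side of Eq. (2.16) in powers of $g^2(0)$ gives the geometric series $g^2(t) = g^2(0)\sum_k \bigl[-2 c_3 g^2(0) t\bigr]^k$, which diverges term by term once $2 c_3 g^2(0) t > 1$, even though the exact expression remains small and smooth.

```python
# Eq. (2.16) versus its expansion in powers of g^2(0).  The coupling
# values are illustrative assumptions.
c3, g0sq = 1.0, 0.1

def g2_exact(t):
    # Eq. (2.16): well behaved for all t >= 0 when g^2(0) is small
    return g0sq / (1.0 + 2.0 * c3 * g0sq * t)

def g2_series(t, order):
    # partial sum of the geometric expansion in powers of g^2(0)
    x = -2.0 * c3 * g0sq * t
    return g0sq * sum(x**k for k in range(order + 1))

print(g2_exact(1.0), g2_series(1.0, 8))    # 2 c3 g^2(0) t = 0.2: series converges
print(g2_exact(10.0), g2_series(10.0, 8))  # 2 c3 g^2(0) t = 2.0: series blows up
```

Here $t$ plays the role of the logarithm of the cutoff ratio, so the breakdown of the series at large $t$ is the same power-of-logarithm divergence familiar from Feynman perturbation theory.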
In standard perturbation theory one deals only with bare and renormalized parameters. This is analogous to dealing only with the parameters in $H^{\Lambda_0}_{\Lambda_0}$ and $H^{\Lambda_0}_{\Lambda_{\cal N}}$, without ever encountering separate parameters for every Hamiltonian in the trajectory. Except in super-renormalizable theories, the ratio of bare and renormalized parameters goes to infinity (or zero). If a perturbative expansion of an observable in powers of the renormalized parameters converges, the expansion for the same observable in terms of the bare parameters cannot converge. This leads to some interesting departures from logic in standard perturbation theory \APrefmark{\rDIRTHR}. A small contribution of the renormalization group is that logic may sometimes be restored to perturbation theory. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral3 }}. Light-Front Hamiltonians and Renormalization Group Transformations} \medskip The first step in defining a renormalization group transformation is to define the space of Hamiltonians upon which this operator acts. I give no precise definition of this space, partially because it must usually be defined after studying a transformation, not before. I restrict myself to scalar field theory, as it is straightforward but tedious to generalize the discussion to theories that include more complicated fields. I indicate what kind of operators are allowed in the Hamiltonians by example, and I display these operators in several forms that prove useful. Ultimately the Hamiltonians must be expressed in terms of Fock space eigenstates of the Gaussian fixed point Hamiltonian ({\eighteenit i.e.}, in terms of projection operators) if one wants to use an invariant-mass transformation, so much of the early discussion is schematic. A brief summary of canonical light-front scalar field theory is given in Appendix A. 
The operators and constants with which Hamiltonians can be formed in a 3+1 dimensional scalar field theory, and their naive engineering dimensions, are $$ \partial^+ = \biggl[{1 \over x^-}\biggr] \;,\; \partial^\perp = \biggl[{1 \over x^\perp}\biggr] \;,\; \Lambda = \biggl[{1 \over x^\perp}\biggr] \;,\; \epsilon= \biggl[{1 \over x^-}\biggr] \;,\; \phi(x) = \biggl[{1 \over x^\perp}\biggr] \;. \eqno(3.1) $$ \noindent I should also note that the Fourier transform of the field operator, $\phi(q)$, has the dimension $\bigl[x^\perp\bigr]$. I work with a metric in which $x^\pm=x^0 \pm x^3$. In addition to derivative operators and the scalar field operator, I indicate that there may be a cutoff with the dimension of transverse momentum ($\Lambda$) that can be used, and there may be a cutoff with the dimension of longitudinal momentum ($\epsilon$) that can be used. All masses are expressed as dimensionless constants multiplying $\Lambda$. In general there may be many cutoffs ({\eighteenit e.g.}, different cutoffs in different sectors of Fock space), but all of them can be expressed in terms of $\Lambda$ and $\epsilon$. Perhaps the most important feature of Eq. (3.1) is that transverse and longitudinal dimensions are treated separately \APrefmark{\rWILQCD}, just as one treats time and space differently in nonrelativistic physics. There is no analog of physical mass with the dimensions of longitudinal momentum instead of transverse momentum, because longitudinal boost invariance is a scale invariance \APrefmark{\rKOGTWO,\rLEU}, and physical masses (not necessarily bare masses) violate scale invariance. The cutoff $\epsilon$ is the only constant with the dimensions of longitudinal momentum that can enter the definition of the Hamiltonian, and it must enter in a manner that restores boost invariance to observables despite the violation of explicit boost invariance caused by the cutoff itself. 
In general one cannot be sure that naive engineering dimensions are significant in an interacting theory; however, near a Gaussian fixed point naive power counting is appropriate for the same reasons it is appropriate in standard perturbation theory. This is explicitly shown in Section {\uppercase\expandafter{\romannumeral5 }}~ for light-front transformations. The assumption of transverse locality naively means that no inverse powers of $\partial^\perp$ are allowed. Restrictions on inverse powers of $\partial^\perp$ are clarified in Section {\uppercase\expandafter{\romannumeral6 }}~ where they first appear in the second-order behavior of the light-front transformations. Masses appear in the Hamiltonian as dimensionless constants multiplying $\Lambda$. I always assume that physical masses are much smaller than $\Lambda$, and I make an operational distinction between physical masses and mass counterterms. Mass counterterms are present even when the physical mass is zero unless a symmetry protects the mass operator, and I know of no examples in cutoff light-front field theory where this occurs. Mass counterterms can have a very different dependence on longitudinal momenta than physical mass terms, as shown in Section {\uppercase\expandafter{\romannumeral6 }}. Except in Appendix C I usually assume that the physical mass is zero and focus on the critical theory. I make no initial restriction on the manner in which longitudinal derivatives appear. In canonical scalar field theory longitudinal derivatives appear only as inverse powers (see Appendix A); however, in canonical light-front QCD in light-cone gauge \APrefmark{\rTOM,\rCASH} one finds both inverse powers and powers of the longitudinal derivative in the three-gluon vertex and in the exchange of an instantaneous gluon between a quark and gluon or between two gluons. The Hamiltonian, $H$, is the integral of the Hamiltonian density, $H = \int dx^- d^2x^\perp {\cal H}$. 
Their dimensions are easily derived in canonical free field theory, and here I simply take them as given to be $$ H = \biggl[{x^- \over x^{\perp 2}}\biggr] \;,\;\;\; {\cal H} = \biggl[{1 \over x^{\perp 4}}\biggr] \;. \eqno(3.2) $$ \noindent Given the catalog of operators from which the Hamiltonian can be formed, the space of Hamiltonians that I initially consider consists of all operators that can be formed from the basic catalog and that have the appropriate engineering dimension. Inverse powers of the transverse derivative operator are excluded initially and inverse powers of the field operator are always forbidden. Furthermore, cutoffs must be imposed to complete the definition. I work in momentum space rather than position space, and the Hamiltonian can be written schematically as $$\eqalign{ H = &\;\;\;\; {1 \over 2} \int d\tilde{q}_1 \; d\tilde{q}_2 \; (16 \pi^3) \delta^3(q_1+q_2) \; u_2(q_1,q_2) \; \phi(q_1) \phi(q_2) \cr &+{1 \over 4!} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3+q_4) \cr &\qquad\qquad\qquad\qquad u_4(q_1,q_2,q_3,q_4)\; \phi(q_1) \phi(q_2) \phi(q_3) \phi(q_4) \cr &+{1 \over 6!} \int d\tilde{q}_1 \cdot \cdot \cdot d\tilde{q}_6\; (16 \pi^3) \delta^3(q_1+\cdot\cdot\cdot+q_6) \; u_6(q_1,...,q_6) \; \phi(q_1) \cdot \cdot \cdot \phi(q_6) \cr &+\qquad \cdot \cdot \cdot \;,}\eqno(3.3) $$ \noindent where, $$d\tilde{q} = {dq^+ d^2q^\perp \over 16 \pi^3 q^+} \;. \eqno(3.4) $$ \noindent We will see that this leads to a free energy $({\bf q^\perp}^2+m^2)/q^+$ when $u_2={\bf q^\perp}^2+m^2$. In terms of plane wave creation and annihilation operators, $$\phi(q)=a(q) \;\;\;(q^+>0)\;; \;\;\;\;\;\;\;\;\;\phi(q)=-a^\dagger(-q) \;\;\;(q^+<0) \;. \eqno(3.5)$$ The above restrictions on transverse derivatives become restrictions on the functions $u_2, u_4$, etc. At this point the integrals include both positive and negative longitudinal momenta. 
The next step toward an expression for the Hamiltonian that can be directly manipulated involves replacing the field operators in Eq. (3.3) with creation and annihilation operators, normal-ordering the Hamiltonian and changing variables so that only positive longitudinal momenta appear. There are no modes with zero longitudinal momentum. This complicates the Hamiltonian algebraically, but the advantages far outweigh this complication. I should mention that there is no need to define the normal-ordering operation until after the cutoffs required by the transformation are implemented, and that after this no divergences are encountered in the normal-ordering procedure. The initial Hamiltonian is simply assumed to be normal-ordered. The schematic Hamiltonian becomes $$\eqalign{ H = &\qquad \int d\tilde{q}_1 \; d\tilde{q}_2 \; (16 \pi^3) \delta^3(q_1-q_2) \; u_2(-q_1,q_2) \;a^\dagger(q_1) a(q_2) \cr &+{1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad u_4(-q_1,-q_2,-q_3,q_4)\; a^\dagger(q_1) a^\dagger(q_2) a^\dagger(q_3) a(q_4) \cr &+{1 \over 4} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2-q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad u_4(-q_1,-q_2,q_3,q_4)\; a^\dagger(q_1) a^\dagger(q_2) a(q_3) a(q_4) \cr &+{1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1-q_2-q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad u_4(-q_1,q_2,q_3,q_4)\; a^\dagger(q_1) a(q_2) a(q_3) a(q_4) \cr &+ \qquad \cdot \cdot \cdot \;.}\eqno(3.6) $$ Since modes with identically zero longitudinal momentum are not allowed, there are no operators in this light-front Hamiltonian that contain only creation operators or only annihilation operators. This is only natural in light-front field theory. {}From this point forward it is assumed that all longitudinal momenta are positive. 
The Hamiltonian is simply assumed to be normal-ordered, and it is assumed as part of the definition of each transformation that the transformed Hamiltonian is normal-ordered. This is readily insured in perturbation theory, where one only encounters products of operators that are easily normal-ordered, as shown below. When one studies a boost-invariant renormalization group transformation, the Hamiltonians must be written in terms of projection and transition operators that are constructed from the free many-body states generated by products of the above creation and annihilation operators acting on the vacuum. This introduces severe notational complications, but such representations are trivially constructed and manipulated. All boost-invariant cutoffs involve the total longitudinal and transverse momenta of a {\eighteenit state}, and not simply a few momenta carried by individual particles. When the cutoff depends on extensive quantities such as the total momentum, spectator-dependence inevitably appears in the operators, as we will find in Section {\uppercase\expandafter{\romannumeral6 }}. Such spectator-dependence has been studied recently in LFTD \APrefmark{\rPERONE-\rGHPSW}, but I want to emphasize that it is required even if one does not make a Tamm-Dancoff truncation. Since interactions depend on spectators, they also depend on the Fock space sector(s) in or between which they act \APrefmark{\rPERONE,\rPERTWO,\rGLAZTHR}. In this case, one might worry that the distinction between the functions $u_2$, $u_4$, etc. might become altered or blurred. However, these distinctions are easily maintained by considering how many individual particle momenta are altered by the operator. Examples below should clarify these points. 
In order to develop simple examples of the light-front renormalization group one can use a Tamm-Dancoff truncation \APrefmark{\rTAM,\rDAN}, which simply limits the number of particles and introduces an additional source of sector-dependence in the operators \APrefmark{\rPERONE}. In LFTD the Hamiltonian again must be written in terms of projection operators \APrefmark{\rGLAZTHR}. Let me emphasize that some of the most interesting results one can find in a renormalization group analysis are apparently lost when a Tamm-Dancoff truncation is made, and I do not use LFTD in any examples in this paper. However, to illustrate what a Hamiltonian written in terms of projection and transition operators looks like, let me truncate Fock space to allow only the one-boson and two-boson sectors. In this case the complete Hamiltonian is $$\eqalign{ H = &\;\; \int d\tilde{q}\; {u_2^{(1)}(q) \over q^+}\; |q\rangle \langle q| \cr &+\int d\tilde{k}_1\;d\tilde{k}_2\; \Biggl[ {u_2^{(2)}(k_1) \over k_1^+}\;+\; {u_2^{(2)}(k_2) \over k_2^+} \Biggr] \;|k_1,k_2\rangle \langle k_1,k_2| \cr &+\;{1 \over 3}\;\int d\tilde{q} \; d\tilde{k} \; \Biggl[ {u_3^{(1,2)}(q,-k,k-q) \over q^+-k^+}\; |q\rangle \langle k,q-k| \;+\;{u_3^{(2,1)}(k,q-k,-q) \over q^+-k^+} |k,q-k \rangle \langle q| \Biggr] \cr &+\;{1 \over 4}\;\int d\tilde{k}_1\; d\tilde{k}_2\; d\tilde{k}_3\; \Biggl[ {u_4^{(2,2)}(k_1,k_2,-k_3,-k_1-k_2+k_3) \over k_1^+ + k_2^+ - k_3^+} \Biggr] \; |k_1,k_2 \rangle \langle k_3,k_1+k_2-k_3| \;.}\eqno(3.7) $$ \noindent Note that $|k_1,k_2\rangle = a^\dagger(k_1) a^\dagger(k_2)|0 \rangle$, etc. More complicated examples are readily constructed. A superscript is added to $u_2$, $u_3$, etc. to indicate the Fock space sector(s) within which or between which the operator acts. In this example only the superscript on $u_2$ is required. The Hamiltonians that are actually manipulated in the light-front renormalization group are similar to this last expression, not the expressions in Eqs. (3.3) and (3.6). 
However, to see the connection with the more familiar canonical formalism, it is necessary to start with the latter expressions. As mentioned above, I have simply dropped all terms that explicitly involve zero modes only; {\eighteenit i.e.}, terms containing creation operators only or annihilation operators only. The motivation for dropping such terms was briefly discussed in the Introduction, but here I simply assume this restriction is placed on the space of Hamiltonians {\it ab initio~}. I do not question whether this assumption is reasonable and I do not believe that it is possible to answer such questions with a perturbative renormalization group analysis. The only issue at present is whether this restriction can be maintained or whether the transformations themselves regenerate pure creation or pure annihilation operators. To insure that this does not occur I simply drop the zero modes in every term in the Hamiltonian, and assume that this does not affect any momentum integral because a set of measure zero is being subtracted. The zero modes are automatically removed by several of the cutoffs studied below, and there is no need to be careful about how the zero modes are removed in any of the examples considered in this paper. This may not be the case in QCD, where severe divergences associated with small longitudinal momenta force one to be more careful. I have nothing more to say about this issue until the Conclusion. Up to this point the operators act in an infinite dimensional Fock space unless otherwise specified, and momenta are left unconstrained. The definition of the space of Hamiltonians is not complete until the momentum cutoffs that enter the renormalization group transformation are added. Let me begin by introducing the two simplest light-front renormalization group transformations, $T^\perp$ and $T^+$. In the renormalization group one starts with degrees of freedom that are already restricted by momentum cutoffs. 
There is never any point at which one explicitly considers Hamiltonians that contain no cutoffs, although renormalized Hamiltonians are obtained by considering what happens when the initial cutoff is taken to its limit. After introducing a cutoff one removes additional degrees of freedom and computes an effective Hamiltonian that is required, for example, to reproduce all of the low-lying eigenvalues and `accurately' approximate the low-lying eigenstates. This step is radically different in the light-front renormalization group from the integration over large momentum components in a path integral \APrefmark{\rWILNINE} employed in a Euclidean renormalization group, and details are not discussed until Section {\uppercase\expandafter{\romannumeral4 }}. The final step in a light-front renormalization group transformation is to rescale all momenta so that their original numerical range is restored, to rescale the field operators, and to rescale the Hamiltonian itself. At this point the Hamiltonian is identical in form to the original Hamiltonian, but the functions $u_2, u_4$, etc. may all change. The aim of the analysis is to understand how these functions change. In general the analysis is most useful if one can reduce the task of following these functions to the infinitely simpler task of following a few constants. Naturally, the most difficult step is demonstrating that this simplification is a legitimate approximation. For the light-front renormalization group transformation $T^\perp$ one begins by cutting off all transverse momenta, so that $$0 \le {\bf p^\perp}^2 \le \Lambda_0^2 \;. \eqno(3.8)$$ \noindent One then removes any state in which one or more momenta lie in the range $$\Lambda_1^2 < {\bf p^\perp}^2 \le \Lambda_0^2 \;. \eqno(3.9)$$ \noindent Typically $\Lambda_1=\Lambda_0/2$ or $\Lambda_1$ differs by an infinitesimal amount from $\Lambda_0$. In most examples in this paper I use $\Lambda_1=\Lambda_0/2$. 
A new Hamiltonian must be found that is able to reproduce the eigenvalues and approximate the eigenstates of the original Hamiltonian without the degrees of freedom that have been removed. Techniques for computing such `effective' Hamiltonians are discussed in the next Section. This new Hamiltonian must be written in the same form in which the initial Hamiltonian is written, and in particular it cannot include any explicit energy dependence. Let the initial Hamiltonian be called $H_{\Lambda_0}$, with the new effective Hamiltonian being $H_{\Lambda_1}$. The renormalization group transformation $T^\perp$ is completed by changing variables to $${\bf p^\perp}' = {\Lambda_0 \over \Lambda_1} {\bf p^\perp} \;, \eqno(3.10)$$ \noindent scaling the field operator $\phi$ by a constant so that $$ \phi({\bf p^\perp},p^+) = \zeta^\perp \phi'({\bf p^\perp}',p^+) \;, \eqno(3.11)$$ \noindent and multiplying the entire Hamiltonian by a constant $z^\perp$. The constants $\zeta^\perp$ and $z^\perp$ are introduced so that fixed points can exist. They are part of the definition of the transformation and can be chosen freely. The only practical restriction on the choice of $\zeta^\perp$ and $z^\perp$ is that the choice should lead to a fixed point of physical interest. $z^\perp$ is not essential, but it simplifies the task of comparing Hamiltonians, because it allows the range of eigenvalues of the transformed Hamiltonian to be the same as the range of eigenvalues of the initial Hamiltonian. The price paid for introducing $z^\perp$ is that one must multiply the Hamiltonian by $1/z^\perp$ to obtain eigenvalues in the original units. If ${\cal N}$ transformations are made, one must multiply the resultant Hamiltonian by $(1/z^\perp)^{\cal N}$. This point is clarified by examples below. The cutoffs in the original Hamiltonian introduce step functions into the momentum integrals in Eq. (3.3), for example. The elimination of the degrees of freedom with momenta in Eq. 
(3.9) leads to a new set of step functions, and the rescaling leads back to the original step functions. Each term in $H_{\Lambda_1}$ changes in a simple manner as a result of these various rescaling operations, and the final result is the transformed Hamiltonian, $T^\perp[H]$. The only difficult step is constructing $H_{\Lambda_1}$. I do not discuss any examples in detail until Section {\uppercase\expandafter{\romannumeral5 }}, where Gaussian fixed points and linear approximations of the transformations are constructed. However, to orient the reader I consider the simple Hamiltonian needed to find the Gaussian fixed point of $T^\perp$ in Section {\uppercase\expandafter{\romannumeral5 }}. If there are no interactions the Hamiltonian is $$\eqalign{ H_{\Lambda_0} = &\;\; \int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) u_2(q) \; {a^\dagger(q) a(q) \over q^+} \;,} \eqno(3.12)$$ \noindent and when the cutoff changes the effective Hamiltonian is simply $$\eqalign{ H_{\Lambda_1} = &\;\; \int d\tilde{q} \; \theta(\Lambda_1^2-{\bf q^\perp}^2) u_2(q) \; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(3.13)$$ \noindent The complicated procedure for producing effective Hamiltonians developed in Section {\uppercase\expandafter{\romannumeral4 }}~ is not needed when there are no interactions, because the original Hamiltonian projected onto the subspace of allowed states exactly reproduces all of the eigenvalues and eigenstates in this subspace. Next we change variables according to Eqs. (3.10) and (3.11), and multiply the Hamiltonian by a constant. The final transformed Hamiltonian is $$\eqalign{ T^\perp[H_{\Lambda_0}] = \;\; z^\perp \zeta^{\perp\;2} \Bigl({\Lambda_1 \over \Lambda_0}\Bigr)^2 &\int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) \; u_2({\Lambda_1 \over \Lambda_0} {\bf q^\perp},q^+) \; {a^\dagger(q) a(q) \over q^+} \; .} \eqno(3.14)$$ \noindent Naturally, when interactions are included the transformation is far more complicated. 
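The free-field bookkeeping in Eqs. (3.12)-(3.14) amounts to a change of variables in the transverse momentum integral. As a numerical aside, the following Python sketch (the function and variable names are mine, invented for illustration) verifies the substitution that produces the $(\Lambda_1/\Lambda_0)^2$ prefactor and the rescaled argument of $u_2$, using the massless dispersion $u_2 = ({\bf q^\perp})^2$ at fixed $q^+$:

```python
import math

# Check of the change of variables behind Eq. (3.14): lowering the transverse
# cutoff and rescaling q_perp -> (Lambda1/Lambda0) q_perp turns
#   int_{|q| <= Lambda1} d^2q F(q)
# into
#   (Lambda1/Lambda0)^2 int_{|q| <= Lambda0} d^2q F((Lambda1/Lambda0) q).
# F stands in for u_2(q) times the rest of the integrand at fixed q^+.

def disk_integral(f, radius, n=20000):
    """Integrate a rotationally invariant f(|q|) over a disk (midpoint rule)."""
    dr = radius / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += 2.0 * math.pi * r * f(r) * dr
    return total

lam0, lam1 = 2.0, 1.0          # Lambda_1 = Lambda_0 / 2, as in the text
u2 = lambda r: r**2            # massless dispersion, n = 2

lhs = disk_integral(u2, lam1)
rhs = (lam1 / lam0)**2 * disk_integral(lambda r: u2((lam1 / lam0) * r), lam0)

# both sides equal pi * lam1**4 / 2 for this dispersion
assert abs(lhs - rhs) < 1e-6 * abs(lhs)
```

The $(\Lambda_1/\Lambda_0)^2$ factor here comes from the two-dimensional measure $d^2q^\perp$ alone; the additional factors of $\zeta^\perp$ and $z^\perp$ in Eq. (3.14) come from the field and Hamiltonian rescalings, not from the integral.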
In fact, we will see that perturbation theory generally cannot be used to approximate $T^\perp[H]$. For the light-front renormalization group transformation $T^+$, one begins by cutting off all longitudinal momenta, so that $$\epsilon_0 \le p^+ < \infty \;. \eqno(3.15)$$ \noindent It is immediately evident that this transformation is going to be radically different from the transformation on transverse momenta or any transformation considered in Euclidean field theory, because small momenta are removed instead of large momenta. This is an appropriate cutoff because the energy is $({\bf p^\perp}^2+m^2)/p^+$ near the Gaussian fixed point of interest. States with small longitudinal momenta correspond to high energy states unless both ${\bf p^\perp}$ and $m$ vanish, leading to Eq. (3.15). The first step in the transformation is to remove all states in which one or more particle momenta lie in the range $$\epsilon_0 \le p^+ < \epsilon_1 \;. \eqno(3.16)$$ \noindent Typically $\epsilon_1=2 \epsilon_0$ or $\epsilon_1$ differs by an infinitesimal amount from $\epsilon_0$. Again a new Hamiltonian must be found that reproduces the original low-lying eigenvalues and approximates the low-lying eigenstates of the original Hamiltonian without the degrees of freedom that have been removed. The procedures for computing such Hamiltonians are identical to those used for $T^\perp$. However, because longitudinal boosts are scale transformations, as seen in Eq. (3.19) below, it is possible to say a great deal about the transformed Hamiltonian without going through an explicit construction. In Section {\uppercase\expandafter{\romannumeral5 }}~ I prove that physical Hamiltonians are fixed points of the transformation $T^+$. 
The light-front renormalization group transformation $T^+$ is completed by changing variables to $$p^{+'} = {\epsilon_0 \over \epsilon_1} p^+ \;, \eqno(3.17)$$ \noindent scaling the field operator $\phi$ by a constant so that $$\phi({\bf p^\perp},p^+) = \zeta^+ \phi'({\bf p^\perp},p^{+'}) \;, \eqno(3.18)$$ \noindent and multiplying the entire Hamiltonian by a constant $z^+$. Again the constants $\zeta^+$ and $z^+$ are introduced so that a fixed point can exist. I do not discuss $T^+$ further in this Section, but perturbation theory generally cannot be used to approximate $T^+$ either. At this point a general procedure for inventing light-front renormalization group transformations should be apparent, although many important details may be obscure. I close this Section by discussing boost-invariant light-front renormalization group transformations. Both $T^\perp$ and $T^+$ break manifest boost invariance because they employ cutoffs that are not boost-invariant. Under a longitudinal boost the Hamiltonian and all particle momenta are transformed \APrefmark{\rKOGTWO,\rLEU}, $$p^+ \rightarrow e^\nu p^+ \;, \;\; {\bf p^\perp} \rightarrow {\bf p^\perp} \;, \;\; H \rightarrow e^{-\nu} H \;, \eqno(3.19)$$ \noindent and under a transverse boost, $$p^+ \rightarrow p^+ \;, \;\; {\bf p^\perp} \rightarrow {\bf p^\perp} + p^+ {\bf v^\perp} \;, \;\; H \rightarrow H + 2 {\cal \eighteenb P}^\perp \cdot {\bf v^\perp} + {\cal P}^+ {\bf v^\perp}^2 \;. \eqno(3.20)$$ \noindent In Eq. (3.19) $\nu$ is an arbitrary real number, and in Eq. (3.20) ${\bf v^\perp}$ is an arbitrary velocity, while ${\cal P}^+$ is the total longitudinal momentum and ${\cal \eighteenb P}^\perp$ is the total perpendicular momentum. Since the cutoffs are not changed by these transformations, these transformations place severe restrictions on the possible form of physical Hamiltonians that are not easily satisfied. Let me mention that this is not the chief problem one encounters when studying $T^\perp$ and $T^+$. 
The chief problem, as mentioned above, is that no perturbative expansion exists for these transformations in general. When a perturbative expansion exists for a transformation, it should be possible to implement Lorentz covariance order-by-order in perturbation theory. Physical Hamiltonians must transform according to Eqs. (3.19) and (3.20), and if possible one would like to build these constraints into the space of Hamiltonians {\eighteenit ab initio} and construct transformations that automatically maintain these constraints. Boosts are part of the kinematic subgroup of Poincar{\'e} transformations in light-front field theory, and this should allow one to make the kinematic invariances manifest by choosing the correct variables. This kinematic subgroup is isomorphic to the two-dimensional Galilean group, and the use of appropriate variables resembles the separation of center-of-mass motion in nonrelativistic quantum mechanics \APrefmark{\rKOGTWO,\rLEU}. The appropriate variables are $$x={p^+ \over {\cal P}^+} \;, \;\; {\eighteenb \kappa}^\perp = {\bf p^\perp} - x {\cal \eighteenb P}^\perp \;. \eqno(3.21)$$ \noindent One can easily verify that $x$ and ${\eighteenb \kappa}^\perp$ are invariant under the above boosts. Therefore any cutoff constructed from these variables does not interfere with our ability to maintain manifest boost invariance. In particular, if we use a cutoff formed from these variables we may simply fix ${\cal P}^+$ and choose ${\cal \eighteenb P}^\perp=0$ as part of the definition of the space of Hamiltonians, without loss of generality. An obvious feature of such cutoffs is their use of the total momenta ${\cal P}^+$ and ${\cal \eighteenb P}^\perp$, which are themselves extensive quantities. 
The momentum available to a system in one room, which determines what free states can be superposed to form physical states, may depend on the state of a system in the next room, and this may introduce nonlocalities into the Hamiltonian when it is constrained to produce exact physical results despite such nonlocal constraints on phase space. This could be a very severe price to pay for manifest boost invariance, but it does not seem likely that one can avoid paying this price without sacrificing the possibility of using perturbation theory if cutoffs that remove degrees of freedom are used, as is shown in Section {\uppercase\expandafter{\romannumeral6 }}. The important point for a perturbative renormalization group analysis is to show that such cutoff nonlocalities introduce new irrelevant, marginal, and relevant operators that are readily computed in perturbation theory. It is essential to show that cutoff nonlocalities do not lead to inverse powers of transverse momenta that produce long-range interactions. We do encounter inverse powers of transverse momenta that do {\eighteenit not} produce long-range interactions because of the cutoffs that accompany them. There are many boost-invariant renormalization group transformations one can construct, and I introduce three. All of these transformations are identical in form to $T^\perp$ and $T^+$, so in each case it is sufficient to specify the cutoffs. There is no reason to consider a cutoff on longitudinal momentum fractions alone, because if we change such a cutoff we cannot rescale momenta to recover their original range. When individual longitudinal momenta are rescaled, the total longitudinal momentum is rescaled by the same amount, and the longitudinal momentum fractions are invariant. In other words, longitudinal boost invariance is a scale invariance. The first cutoff one might consider starts by cutting off the relative transverse momenta defined in Eq.
(3.21), so that $$0 \le {\eighteenb \kappa^\perp}^2 \le \Lambda_0^2 \;. \eqno(3.22)$$ \noindent One then proceeds by lowering this cutoff, computing a new Hamiltonian, and then by rescaling the transverse momenta exactly as in Eq. (3.10). This cutoff should violate locality in a minimal fashion, but we shall see that perturbation theory again cannot be used to approximate this transformation. The second boost-invariant transformation begins with the introduction of the cutoff $$0 \le \sum_i {{\bf p^\perp}_i^2 \over p_i^+} \le {{\cal \eighteenb P^\perp}^2+\Lambda_0^2 \over {\cal P}^+} \;. \eqno(3.23)$$ \noindent By expanding this sum one easily demonstrates that this cutoff acts only on the relative transverse momenta defined in Eq. (3.21). This cutoff is the invariant-mass cutoff for massless theories, and again the remaining steps in the transformation involve lowering the cutoff, computing a new Hamiltonian and rescaling transverse momenta and fields. We will find that this transformation can be approximated perturbatively if there are no physical mass terms in $u_2$. A variety of `mass' terms arise in light-front field theory, because any function of longitudinal momentum fractions can accompany a mass. {\eighteenit Physical mass terms} appear in $u_2$ with no additional function of longitudinal momenta; {\eighteenit e.g.}, $u_2={\bf p^\perp}^2+m^2$ results in a free energy of the form $({\bf p^\perp}^2+m^2)/p^+$. We will find in Section {\uppercase\expandafter{\romannumeral6 }}~ that the invariant-mass transformation leads to mass counterterms that appear in $u_2$ in the form $g^2 \Lambda_0^2\; p^+/{\mit P}^+$, where $g$ is a coupling constant and ${\mit P}^+$ is the total longitudinal momentum of a state. This type of mass counterterm can be treated perturbatively, whereas a perturbative treatment of the physical mass terms leads to divergent longitudinal momentum integrals. 
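The boost invariance of the variables in Eq. (3.21), and the claim that the cutoff (3.23) acts only on those relative momenta, are both easy to verify numerically. The following Python sketch (the momenta and names are invented for illustration) checks the invariance of $x$ and ${\eighteenb \kappa}^\perp$ under the boosts (3.19) and (3.20), and the identity $\sum_i {\bf p}_{\perp i}^2/p_i^+ = {\cal \eighteenb P}^{\perp\,2}/{\cal P}^+ + ({\cal P}^+)^{-1}\sum_i {\eighteenb \kappa}_i^{\perp\,2}/x_i$:

```python
import numpy as np

# (1) x_i = p_i^+ / P^+ and kappa_i = p_i_perp - x_i P_perp are invariant under
#     the longitudinal boost (3.19) and the transverse boost (3.20);
# (2) sum_i p_i_perp^2/p_i^+ = P_perp^2/P^+ + (1/P^+) sum_i kappa_i^2/x_i,
#     so the cutoff (3.23) constrains only the relative momenta (3.21).

rng = np.random.default_rng(0)
pplus = rng.uniform(0.5, 2.0, size=3)     # longitudinal momenta p_i^+
pperp = rng.normal(size=(3, 2))           # transverse momenta p_i_perp

def relative(pplus, pperp):
    Pplus, Pperp = pplus.sum(), pperp.sum(axis=0)
    x = pplus / Pplus
    kappa = pperp - x[:, None] * Pperp
    return x, kappa

x0, k0 = relative(pplus, pperp)

# longitudinal boost: p^+ -> e^nu p^+, p_perp unchanged
x1, k1 = relative(np.exp(0.7) * pplus, pperp)
assert np.allclose(x0, x1) and np.allclose(k0, k1)

# transverse boost: p_perp -> p_perp + p^+ v_perp
v = np.array([0.3, -1.1])
x2, k2 = relative(pplus, pperp + pplus[:, None] * v)
assert np.allclose(x0, x2) and np.allclose(k0, k2)

# the invariant-mass identity behind the cutoff (3.23)
Pplus, Pperp = pplus.sum(), pperp.sum(axis=0)
lhs = np.sum(np.sum(pperp**2, axis=1) / pplus)
rhs = Pperp @ Pperp / Pplus + np.sum(np.sum(k0**2, axis=1) / x0) / Pplus
assert np.allclose(lhs, rhs)
```

The identity follows from $\sum_i {\eighteenb \kappa}_i^\perp = 0$ and $\sum_i x_i = 1$, which is why the cross terms in the expansion cancel.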
The only transformation that I have been able to approximate using perturbation theory when there are physical mass terms in $u_2$ begins with the cutoff $$0 \le \sum_i {{\bf p^\perp}_i^2 + m_i^2 \over p_i^+} \le {{\cal \eighteenb P^\perp}^2+\Lambda_0^2 \over {\cal P}^+} \;. \eqno(3.24)$$ \noindent Here the cutoff masses $m_i$ that appear must be specified, and the appropriate values can only be chosen after an analysis of the transformation. In Wilson's triangle of renormalization discussed in Section {\uppercase\expandafter{\romannumeral2 }}, one must consider the limit in which the initial cutoff $\Lambda_0 \rightarrow \infty$. When taking this limit one should consider the masses, $m_i$, to be fixed. Clearly, if this limit can actually be taken, the specific values of the cutoff masses should enter primarily in the justification of perturbation theory and are adjusted for that purpose. I go through each step of this transformation, because cutoff masses introduce significant new features. The first step in the transformation is to remove all states in which $$ {{\cal \eighteenb P^\perp}^2+\Lambda_1^2 \over {\cal P}^+} < \sum_i {{\bf p^\perp}_i^2 + m_i^2 \over p_i^+} \le {{\cal \eighteenb P^\perp}^2+\Lambda_0^2 \over {\cal P}^+} \;. \eqno(3.25)$$ \noindent After computing the effective Hamiltonian one rescales all transverse momenta by changing variables as in Eq. (3.10). However, after this change of variables the states satisfy the constraint $$0 \le \sum_i {{\bf p^\perp}_i^2 + (\Lambda_0/\Lambda_1)^2 m_i^2 \over p_i^+} \le {{\cal \eighteenb P^\perp}^2+\Lambda_0^2 \over {\cal P}^+} \;. \eqno(3.26)$$ The momenta do not have the same range after rescaling as they initially had, because the cutoff masses change. We will see that masses in the Hamiltonian rescale in exactly this manner. 
If one applies the transformation a large number of times, eventually the factor $(\Lambda_0/\Lambda_1)^2 m_i^2$ becomes large and all of phase space is eliminated by the transformation. Clearly the transformation must be highly nonperturbative at this point; however, there is no reason to believe that one can ever lower the cutoff on transverse momenta to a scale comparable to physical mass scales in the problem without the transformation becoming highly nonperturbative. One should consider the cutoff masses to be of the same scale as physical masses. This transformation still defines a semi-group, and I show in Appendix C that all of the basic features of the renormalization group apparently survive when one allows such a transformation. In particular, one can define the functions $u_2$, $u_4$, etc. independently of the cutoffs, and study their evolution as analytic functions of their arguments over the original range of momenta. This discussion is clarified by examples in Appendix C. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral4 }}. Perturbation Formulae for the Effective Hamiltonian} \medskip I discuss two related methods for computing the effective Hamiltonian after the cutoff is changed, one developed originally by Bloch and Horowitz \APrefmark{\rBLOONE} and a second developed by Bloch \APrefmark{\rBLOTWO}. I call the first Hamiltonian a Bloch-Horowitz Hamiltonian and the second simply a Bloch Hamiltonian. The Bloch Hamiltonian was used by Wilson in early work on the renormalization group \APrefmark{\rWILTWO}. More sophisticated methods must be employed if one wants to work far from the Gaussian fixed point, but I concentrate only on the development of the effective Hamiltonian in perturbation theory. The primary requirement is that the effective Hamiltonian produce the same low-lying eigenvalues in the second cutoff subspace that the initial Hamiltonian produces in the initial cutoff subspace. 
A second requirement is that the eigenstates of the effective Hamiltonian be orthonormalized projections of the original eigenstates. The Bloch Hamiltonian is designed to satisfy these requirements, while the Bloch-Horowitz Hamiltonian is not. One of the primary reasons that one wants to preserve the eigenstates in addition to the eigenvalues is to compute measurable quantities in addition to the energy. Each measurable quantity corresponds to the absolute value of a matrix element, and when one introduces cutoffs all operators must be properly renormalized so that their matrix elements in the cutoff space are independent of the cutoffs. An intermediate step in the construction of the Bloch Hamiltonian is the construction of an operator ${\cal R}$. ${\cal R}$ can be used to renormalize all observables. I impose two further requirements on the effective Hamiltonian. First, it must be manifestly Hermitian. Second, its representation can only involve the unperturbed ({\eighteenit i.e.}, free) energies in any denominators that occur. The second property is a severe restriction that limits the utility of the Hamiltonian to studies of Gaussian and near-Gaussian fixed points. This property is not desirable {\eighteenit a priori} and is adopted only because I do not know of a general procedure for constructing Hamiltonians outside of perturbation theory. This restriction does not imply that one is always limited to the use of perturbation theory after a perturbative construction of the light-front renormalization group Hamiltonian has been employed. The perturbative construction only needs to be valid near the Gaussian fixed point. After one has used the transformation many times to reduce the cutoff to an acceptable value, the final Hamiltonian may have been accurately computed using a perturbative renormalization group transformation, but its diagonalization is typically nonperturbative.
I am particularly interested in QCD, where one can hopefully use a light-front renormalization group to remove high energy partons through sequential application of a perturbative renormalization group transformation, justified by asymptotic freedom \APrefmark{\rGRO,\rPOL}; after which one must employ suitable nonperturbative techniques to diagonalize the final cutoff Hamiltonian and obtain low energy observables. I begin with the Bloch-Horowitz Hamiltonian because it is most easily derived. In fact, the primary reason I include a discussion of the Bloch-Horowitz Hamiltonian is because its development is particularly transparent and allows a reader unfamiliar with such many-body techniques to readily grasp the main ideas. Let the operator that projects onto all of the states removed when the cutoff is lowered be ${\cal Q}$, and let ${\cal P}=1-{\cal Q}$. Of course, ${\cal Q}^2={\cal Q}$ and ${\cal P}^2={\cal P}$. I occasionally refer to a subspace using the appropriate projector. With these projectors, Schr{\"o}dinger's equation can be divided into two parts, $${\cal P} H {\cal P} |\Psi\rangle + {\cal P} H {\cal Q} |\Psi\rangle = E {\cal P} |\Psi\rangle \;, \eqno(4.1)$$ $${\cal Q} H {\cal P} |\Psi\rangle + {\cal Q} H {\cal Q} |\Psi\rangle = E {\cal Q} |\Psi\rangle \;. \eqno(4.2) $$ \noindent Solving Eq. (4.2) for ${\cal Q} |\Psi\rangle$ and substituting the result into Eq. (4.1), one finds an operator, $H'$, that produces the same eigenvalue $E$ in the subspace ${\cal P}$ as the original Hamiltonian, $$H'(E)= {\cal P} H {\cal P} + {\cal P} H {\cal Q} {1 \over E - {\cal Q} H {\cal Q}} {\cal Q} H {\cal P} \;. \eqno(4.3)$$ \noindent This is the Bloch-Horowitz effective Hamiltonian. It is not satisfactory, however, because it contains the eigenvalue, $E$; and it is not Hermitian unless $E$ is held fixed. The development of the Bloch Hamiltonian begins with the definition of the same projection operators ${\cal Q}$ and ${\cal P}$ used above. 
After defining these operators one looks for an operator ${\cal R}$ that satisfies $${\cal Q} |\Psi\rangle = {\cal R} {\cal P} |\Psi\rangle \;, \eqno(4.4)$$ \noindent for all eigenstates of the Hamiltonian that have support in the subspace ${\cal P}$. To construct this operator, multiply Eq. (4.1) by ${\cal R}$ and replace ${\cal Q} |\Psi\rangle$ in Eq. (4.2) with ${\cal R} {\cal P} |\Psi\rangle$, and subtract the resultant equations to obtain $${\cal R} H_{\P\P}-H_{\Q\Q} {\cal R}+{\cal R} H_{\P\Q} {\cal R} -H_{\Q\P} = 0 \;. \eqno(4.5)$$ \noindent Here I have introduced the notation $H_{\P\P}={\cal P} H {\cal P}$, etc. This is the fundamental equation for ${\cal R}$ and one of the most difficult steps in constructing the Bloch Hamiltonian is solving this equation. I am only interested in the perturbative solution, and one can already see that ${\cal R}$ should be proportional to $H_{\Q\P}$. For notational convenience I let $H=h+v$ for the Bloch Hamiltonian, where $h$ is a `free' Hamiltonian and I assume $[h,{\cal Q}]=0$. Since ${\cal P} {\cal Q}=0$, Eq. (4.5) can also be written as $${\cal R} h_{\P\P} - h_{\Q\Q} {\cal R} - v_{\Q\P} + {\cal R} v_{\P\P} - v_{\Q\Q} {\cal R} + {\cal R} v_{\P\Q} {\cal R} = 0 \;. \eqno(4.6)$$ \noindent This equation is now in a form that can be solved using an expansion in powers of $v$, and it is apparent that ${\cal R}$ starts at first order in $v$. Before solving this equation, let me write the effective Hamiltonian and the eigenstates in terms of ${\cal R}$. The states ${\cal P} |\Psi\rangle$ are not orthonormal. 
However, if we assume that no two eigenstates of $H$ have the same projection in the subspace ${\cal P}$ ({\eighteenit i.e.}, that ${\cal R}$ is single-valued), then one can readily check that $$H' |\Phi\rangle = E |\Phi\rangle \;, \eqno(4.7)$$ \noindent when, $$|\Phi\rangle = \sqrt{1+{\cal R}^\dagger {\cal R}}\;\;{\cal P}\; |\Psi\rangle \;, \eqno(4.8)$$ \noindent and, $$H'= {1 \over \sqrt{1+{\cal R}^\dagger {\cal R}}} ({\cal P}+{\cal R}^\dagger) H ({\cal P}+{\cal R}) {1 \over \sqrt{1+{\cal R}^\dagger {\cal R}}} \;. \eqno(4.9)$$ \noindent The states $|\Phi\rangle$ are orthonormalized projections of the original eigenstates $|\Psi\rangle$, and the manifestly Hermitian effective Hamiltonian $H'$ yields the same eigenvalues as the original Hamiltonian $H$. $H'$ is the Bloch Hamiltonian. To construct the Bloch Hamiltonian in perturbation theory, one first solves Eq. (4.6) in perturbation theory. Since we cannot assume that $[{\cal R},h_{\P\P}]$ or $[{\cal R},h_{\Q\Q}]$ are zero, we must employ the eigenstates $$h_{\P\P} |a\rangle = \epsilon_a |a\rangle \;,\;\;\;h_{\Q\Q} |i\rangle = \epsilon_i |i\rangle \;, \eqno(4.10)$$ \noindent to develop algebraic equations for the matrix elements of ${\cal R}$. I use $|a\rangle$, $|b\rangle$ , ... to indicate free eigenstates in ${\cal P}$; and $|i\rangle$, $|j\rangle$, ... to indicate free eigenstates in ${\cal Q}$. In order to expand $H'$ through third order in the interaction, we only need the first two terms in an expansion of ${\cal R}$, and these are $$\langle i| {\cal R}_1 |a\rangle = {\langle i| v |a\rangle \over \epsilon_a-\epsilon_i} \;, \eqno(4.11)$$ $$\langle i| {\cal R}_2|a\rangle = \sum_j {\langle i| v |j\rangle \langle j| v |a\rangle \over (\epsilon_a-\epsilon_i) (\epsilon_a-\epsilon_j)} - \sum_b {\langle i| v |b\rangle \langle b| v |a\rangle \over (\epsilon_a-\epsilon_i) (\epsilon_b-\epsilon_i)} \;. \eqno(4.12)$$ \noindent There are a few general features of each term in this expansion that I want to note. 
First, note that ${\cal R}$ only has nonzero matrix elements between a bra in ${\cal Q}$ and a ket in ${\cal P}$. Second, every energy denominator involves a difference between a free energy in ${\cal P}$ and a free energy in ${\cal Q}$. Ultimately the convergence of this expansion is going to rest not only on the weakness of $v$, but also on the fact that high energy states are being eliminated so that these denominators are large throughout the most important regions of phase space. We are ultimately interested only in the very low energy eigenstates and eigenvalues, and almost all of the states we eliminate to obtain the effective Hamiltonian for these states are extremely far off shell. The energy denominators may vanish, but this should only happen for a set of measure zero if the expansion is to converge. One can avoid all but accidental degeneracies by putting the system in a box, but I do not need to do this for the examples considered. Serious problems should only arise when this entire series needs to be re-summed because a particular nonperturbative effect must be properly included. Unfortunately, such serious problems are common. Higher order terms are easily constructed using the recursion relation $${\cal R}_n h_{\P\P}-h_{\Q\Q} {\cal R}_n+{\cal R}_{n-1} v_{\P\P} - v_{\Q\Q} {\cal R}_{n-1} + \sum_{m=1}^{n-2} {\cal R}_m v_{\P\Q} {\cal R}_{n-m-1} = 0 \;. \eqno(4.13)$$ \noindent Clearly the number of terms in each successive order grows exponentially, and it is likely that the expansion is at best asymptotic. Given a perturbative expansion for ${\cal R}$, one can use Eq. (4.9) to derive a perturbative expansion for the Bloch Hamiltonian. 
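Before writing out that expansion, it is worth noting that every equation above can be checked directly in a finite-dimensional model. The following Python sketch (the matrices and names are invented for illustration; nothing here is specific to light-front theory) constructs ${\cal R}$ exactly from the eigenvectors of a small Hermitian $H = h + \lambda v$ and verifies Eqs. (4.3), (4.5), (4.9), and (4.11):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3                            # full space, size of the P subspace
lam = 1e-4                             # weak coupling, so R ~ lam * R_1
h = np.diag(np.arange(1.0, n + 1.0))   # free energies; P = lowest d states
v = rng.normal(size=(n, n)); v = (v + v.T) / 2
H = h + lam * v

E, V = np.linalg.eigh(H)               # ascending; lowest d states live in P
Vp, Vq = V[:d, :d], V[d:, :d]
R = Vq @ np.linalg.inv(Vp)             # Q|psi> = R P|psi>, Eq. (4.4)

# R satisfies the fundamental equation (4.5)
Hpp, Hpq, Hqp, Hqq = H[:d, :d], H[:d, d:], H[d:, :d], H[d:, d:]
assert np.allclose(R @ Hpp - Hqq @ R + R @ Hpq @ R - Hqp, 0, atol=1e-10)

# Bloch-Horowitz Hamiltonian, Eq. (4.3): H'(E) reproduces the eigenvalue E
E0 = E[0]
bh = Hpp + Hpq @ np.linalg.solve(E0 * np.eye(n - d) - Hqq, Hqp)
assert np.min(np.abs(np.linalg.eigvalsh(bh) - E0)) < 1e-8

# Bloch Hamiltonian, Eq. (4.9): Hermitian, E-independent, and it reproduces
# all d low-lying eigenvalues at once
S = np.eye(d) + R.T @ R
w, U = np.linalg.eigh(S)
Sinvhalf = U @ np.diag(w**-0.5) @ U.T
W = np.vstack([np.eye(d), R]) @ Sinvhalf          # (P + R) / sqrt(1 + R^dag R)
Hprime = W.T @ H @ W
assert np.allclose(np.linalg.eigvalsh(Hprime), E[:d])

# leading perturbative term, Eq. (4.11): <i|R_1|a> = <i|v|a> / (eps_a - eps_i)
eps = np.diag(h)
R1 = lam * v[d:, :d] / (eps[None, :d] - eps[d:, None])
assert np.linalg.norm(R - R1) < 1e-2 * np.linalg.norm(R1)
```

Note that the Bloch-Horowitz Hamiltonian reproduces one eigenvalue per choice of $E$, while the single Bloch Hamiltonian reproduces the entire low-lying spectrum, which is why the latter is the natural object for a renormalization group transformation.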
After some simple algebra, one finds that through third order in $v$, $$\eqalign{ \langle a| H' |b\rangle = &\langle a| h+v |b\rangle + {1 \over 2} \sum_i \Biggl( {\langle a| v |i\rangle \langle i| v |b\rangle \over (\epsilon_a-\epsilon_i)} + {\langle a| v |i\rangle \langle i| v |b\rangle \over (\epsilon_b-\epsilon_i)} \Biggr) \cr &+ {1 \over 2} \sum_{i,j} \Biggl( {\langle a| v |i\rangle \langle i| v |j\rangle \langle j| v |b\rangle \over (\epsilon_a-\epsilon_i) (\epsilon_a-\epsilon_j)} + {\langle a| v |i\rangle \langle i| v |j\rangle \langle j| v |b\rangle \over (\epsilon_b-\epsilon_i) (\epsilon_b-\epsilon_j)} \Biggr) \cr &- {1 \over 2} \sum_{c,i} \Biggl( {\langle a| v |i\rangle \langle i| v |c\rangle \langle c| v |b\rangle \over (\epsilon_b-\epsilon_i) (\epsilon_c-\epsilon_i)} + {\langle a| v |c\rangle \langle c| v |i\rangle \langle i| v |b\rangle \over (\epsilon_a-\epsilon_i) (\epsilon_c-\epsilon_i)} \Biggr) \;\;+\;{\cal O}(v^4) \;.}\eqno(4.14)$$ This expression can be used to compute the renormalization of the quark-gluon vertex through third order in the bare coupling, for example \APrefmark{\rPERQCD}. It is quite similar to expressions one encounters in time-ordered perturbation theory for off-shell Green's functions, but there are some distinct differences. Given Eq. (4.14), or any extension of higher order, it is straightforward to derive a set of diagrammatic rules that allow one to summarize the operator algebra in fairly simple diagrams. These diagrams are similar to Goldstone diagrams \APrefmark{\rGOLD}, familiar from many-body quantum mechanics, but they require one to display the energy denominators. I refer to them as {\eighteenit Hamiltonian diagrams}. In a standard time-ordered diagram, there is a factor $1/(E-\epsilon_i)$ for every intermediate state, $|i\rangle$, and the energy $E$ is the same in every denominator. 
In the Hamiltonian diagrams there are a wide variety of denominators, always involving differences of eigenvalues of $h$; however, these eigenvalues may be associated with widely separated states in the Hamiltonian diagram. While the above process of generating a perturbative expansion for the effective Hamiltonian is easily mechanized, I have not found a simple set of diagrammatic rules that summarize this process to all orders. It is quite possible that such rules already exist in the literature, but for the purposes of this paper it is only necessary to understand Eq. (4.14) and to appreciate the fact that higher order terms are readily generated. In figure 2 I show a few typical Hamiltonian diagrams that occur when $H$ contains a $\phi^4$ interaction. Energy denominators are denoted by lines connecting the relevant states, with the arrow pointing toward the state whose eigenvalue occurs last in Eq. (4.14). One can infer from these lines which states lie in ${\cal P}$ and which states lie in ${\cal Q}$, because energy denominators always involve differences of energies in ${\cal P}$ and energies in ${\cal Q}$ and the arrow points toward a state in ${\cal Q}$. External states always lie in ${\cal P}$, of course. In a realistic calculation there are many different vertices, corresponding to the many different functions $u_4, u_6$, etc. in the Hamiltonian. Sometimes $u_2$ determines $h$; however, in perturbation theory one usually needs to include part of $u_2$ in $v$. This issue is clarified in the next Section. The division of the Hamiltonian into a `free' and `interaction' part above is arbitrary, except for the requirement that $[h,{\cal Q}]=0$. Let us suppose that this requirement is met, but some of the eigenvalues of $h$ in ${\cal P}$ are larger than some of the eigenvalues of $h$ in ${\cal Q}$. The energy denominators in Eq. (4.14) pass through zero when this happens and one must question the convergence of the expansion in Eq. (4.14).
The fact that the denominators may vanish does not automatically imply that the expansion does not converge, but at a minimum it forces one to carefully consider the boundary conditions required to construct the Green's functions in Eq. (4.14). However, even if the expansion in Eq. (4.14) does converge, it may prove useless for a renormalization group study. In general, each term in Eq. (4.14) leads to an infinite number of operators in the transformed Hamiltonian, because of the potentially complicated dependence each term may have on the momenta of the `incoming' and `outgoing' states. We will see in the next Section that when some of the eigenvalues of $h$ in ${\cal P}$ are larger than some of the eigenvalues of $h$ in ${\cal Q}$, operators that are classified as irrelevant in the linearized renormalization group analysis occur with large and sometimes infinite coefficients that render this classification scheme useless. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral5 }}. Critical Gaussian Fixed Points and Linearized Behavior Near Critical Gaussian Fixed Points} \medskip A fixed point is defined to be any Hamiltonian that satisfies $$T[H^*]=H^* \;. \eqno(5.1)$$ \noindent The fixed point is central to the renormalization group analysis, and unless one has found a fixed point it is unlikely that anything can be accomplished with perturbation theory. The simplest example is the Gaussian fixed point, one for which $u_4=0, u_6=0$, etc. In this case the transformation that generates $H_{\Lambda_1}$ is trivial. Let me begin by discussing the Gaussian fixed point for the transformation $T^\perp$.
To find the Gaussian fixed point we need to consider $$\eqalign{ H_{\Lambda_0} = &\;\; \int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) \;u_2(q)\; {a^\dagger(q) a(q) \over q^+} \;,} \eqno(5.2)$$ \noindent and when the cutoff changes the effective Hamiltonian is simply $$\eqalign{ H_{\Lambda_1} = &\;\; \int d\tilde{q} \; \theta(\Lambda_1^2-{\bf q^\perp}^2)\; u_2(q)\; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(5.3)$$ \noindent Next we change variables according to Eqs. (3.10) and (3.11), and multiply the Hamiltonian by a constant. The final transformed Hamiltonian is $$\eqalign{ T[H_{\Lambda_0}] = \;\; z^\perp \zeta^{\perp\;2} \Bigl({\Lambda_1 \over \Lambda_0}\Bigr)^2 &\int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) \;u_2({\Lambda_1 \over \Lambda_0} {\bf q^\perp},q^+) \; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(5.4)$$ The factor of $(\Lambda_1/\Lambda_0)^2$ in front of the integral arises because the Hamiltonian itself has the dimension given in Eq. (3.2). I absorb this overall rescaling by setting $$z^\perp=\Bigl({\Lambda_0 \over \Lambda_1}\Bigr)^2 \;. \eqno(5.5)$$ \noindent $z^\perp$ is introduced so that the transformed Hamiltonian can be directly compared with the initial Hamiltonian. To obtain eigenvalues in the initial units, one must multiply by $1/z^\perp$. With $z^\perp$ determined, the Gaussian fixed point is found by insisting that $$ u_2^*({\bf q^\perp},q^+)=\zeta^{\perp\;2}\;u_2^*({\Lambda_1 \over \Lambda_0} {\bf q^\perp},q^+) \;. \eqno(5.6)$$ \noindent The general solution to this equation is familiar from Euclidean field theory \APrefmark{\rWILNINE}, being a monomial in ${\bf q^\perp}$. The general solution is $$u_2^*({\bf q^\perp},q^+)= f(q^+) ({\bf q^\perp})^n \;, \eqno(5.7)$$ $$\zeta^\perp=\Bigl({\Lambda_0 \over \Lambda_1}\Bigr)^{(n/2)} \;. 
\eqno(5.8)$$ \noindent One must decide {\eighteenit ab initio} which Gaussian fixed point is to be studied by fixing the dispersion relation in the free theory, and I am only interested in the case $n=2$. Since there is no mass term in this fixed point, the correlation length is infinite and it is called a critical fixed point. The fixed point of $T^\perp$ contains an undetermined function of longitudinal momentum, $f(q^+)$. This has important implications for the light-front renormalization group, which will become clearer below. One can look for fixed points that contain interactions, but no such fixed point has been found for a scalar field theory in $3+1$ dimensions. It is possible to change the number of transverse dimensions \APrefmark{\rWILSEVEN} to $2-\epsilon$ and construct an analog of the analysis found, for example, in the review article of Wilson and Kogut \APrefmark{\rWILNINE}. I do not pursue this idea, and turn to the next step in the analysis of $T^\perp$, the construction of the linearized transformation about the Gaussian fixed point. To construct the linearized transformation, consider Hamiltonians that are almost Gaussian, $$H_l=H^* + \delta H_l \;. \eqno(5.9) $$ \noindent Here the subscript $l$ labels the number of times the transformation has been applied, so that a sequence of Hamiltonians can be studied. I assume that $\delta H_l$ is `small'. Applying the full transformation we find that $$\eqalign{ \delta H_{l+1} &= T[H^*+\delta H_l]-H^* \cr &= L_{H^*} \cdot \delta H_l + {\cal O}(\delta H_l^2) \;.} \eqno(5.10)$$ \noindent This equation defines the linear operator $L_{H^*}$, which depends explicitly on the fixed point. I typically drop the subscript on $L_{H^*}$ and simply refer to $L$. In general the construction of the operator $L$ is complicated; however, when $H^*$ is a Gaussian fixed point, $L$ is easily constructed. In the first step of a renormalization group transformation a cutoff is changed and degrees of freedom are removed. 
Effective interactions result when an incoming state experiences an interaction, so that some momenta are altered and fall into the range being removed. However, both the incoming and outgoing states must fall in the sector retained by the transformation, so a second interaction is always required to return the state to the allowed sector. The only exception to this occurs when zero modes are allowed and one can form a `loop' with a single interaction. I drop zero modes, so I can ignore this possibility and conclude that near the Gaussian fixed point all new interactions and changes to initial interactions in the effective Hamiltonian are ${\cal O}(\delta H^2)$. This means that the only part of the full transformation that affects the linear operator is the rescaling. We immediately conclude that $L$ for the Gaussian fixed point of $T^\perp$ is given by the rescaling operations in Eqs. (3.10) and (3.11), along with the overall multiplicative transformation of the Hamiltonian. $z^\perp$ is given by Eq. (5.5), and $\zeta^\perp$ is given by Eq. (5.8), so one readily finds that $$\eqalign{ u_n({\bf q^\perp}_1,q^+_1,...) & \rightarrow z^\perp \zeta^{\perp \; n} \Bigl({\Lambda_1 \over \Lambda_0}\Bigr)^{(2n-2)} \; u_n({\Lambda_1 \over \Lambda_0} {\bf q^\perp}_1,q^+_1,...) \cr & \rightarrow \Bigl({\Lambda_1 \over \Lambda_0}\Bigr)^{(n-4)} \; u_n({\Lambda_1 \over \Lambda_0} {\bf q^\perp}_1,q^+_1,...) \;.} \eqno(5.11) $$ Subsequent analysis of the full transformation should be simplified by identifying the eigenoperators and eigenvalues of this linearized transformation. We seek solutions to the equation $$L\cdot O = \lambda O \;, \eqno(5.12)$$ \noindent and the solutions are readily found to be homogeneous polynomials of transverse momenta. We can label $O$ with a subscript that displays the number of field operators in $O$ and a superscript that displays the number of powers of transverse momenta. 
For example, $$O_2^4 = \int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) \bigl[{\bf q^\perp} \bigr]^4 \; {a^\dagger(q) a(q) \over q^+} \;. \eqno(5.13)$$ \noindent In general more labels are required because there may be many possible polynomials of the same degree; however, the eigenvalue is determined by these two labels. Applying $L$ to any operator $O_n^m$ one finds $$L\cdot O_n^m = \Bigl( {\Lambda_1 \over \Lambda_0}\Bigr)^{(m+n-4)} O_n^m \;. \eqno(5.14) $$ \noindent Inspection of this result reveals that the eigenvalue is determined by the naive transverse engineering dimension of the operator. This is not surprising since the linear approximation of the transformation is given by the rescaling operations alone. {\eighteenit Relevant operators} are defined to be operators for which $\lambda > 1$; and since $\Lambda_0 > \Lambda_1$, relevant operators have $m+n-4 < 0$. Transverse locality implies that $m \ge 0$, and we always require $n \ge 2$, so the only relevant operator is one satisfying $m=0$, $n=2$, if we assume that a $\phi \rightarrow -\phi$ symmetry is maintained. This is a mass term. There is no {\eighteenit a priori} reason to assume that such a symmetry can be maintained, and if it is violated an $n=3$ relevant operator will appear. The $m=1$, $n=2$ operator is ruled out by rotational invariance about the z-axis. If one includes a power of a transverse momentum, the Lorentz index on this momentum must be contracted with a second Lorentz index. In the absence of spin the only transverse Lorentz indices are carried by transverse momenta, and therefore single powers of transverse momenta cannot occur. Note that the dependence of $O_n^m$ on longitudinal momenta does not affect this analysis; and in this sense there are an infinite number of relevant operators if there is one. {\eighteenit Marginal operators} are defined to be operators for which $\lambda = 1$.
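The eigenvalue formula in Eq. (5.14) makes this classification mechanical; a minimal sketch, assuming $\Lambda_1 = \Lambda_0/2$:

```python
# Tabulate the eigenvalues of Eq. (5.14), lambda = (Lambda_1/Lambda_0)^(m+n-4),
# taking Lambda_1 = Lambda_0/2.  n = number of field operators (n >= 2),
# m = powers of transverse momentum (m >= 0 by transverse locality).
ratio = 0.5  # Lambda_1 / Lambda_0

def classify(n, m):
    lam = ratio ** (m + n - 4)
    if lam > 1:
        return "relevant"
    if lam == 1:
        return "marginal"
    return "irrelevant"

assert classify(2, 0) == "relevant"    # mass term, lambda = 4
assert classify(3, 0) == "relevant"    # present only if phi -> -phi is broken
assert classify(2, 2) == "marginal"    # transverse kinetic term
assert classify(4, 0) == "marginal"    # phi^4-type vertex
assert classify(4, 2) == "irrelevant"  # almost everything else
```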
Allowing $\phi \rightarrow -\phi$ symmetry to be broken, there appear to be $(n=2,m=2)$, $(n=3,m=1)$, and $(n=4,m=0)$ marginal operators. The $(n=3,m=1)$ operator is excluded by rotational symmetry about the z-axis. Again, any dependence on longitudinal momenta does not affect this analysis. {\eighteenit Irrelevant operators} are defined to be operators for which $\lambda < 1$, and one sees that almost all operators are irrelevant by this definition. If inverse powers of transverse momenta are allowed, this conclusion is destroyed, and there are an infinite number of relevant and marginal operators in addition to those discussed above. This analysis is itself relevant only if there is a region near the fixed point in which the linearized transformation is a reasonable approximation of the full transformation. In this case there may be a convergent or asymptotic perturbative expansion for the transformation in a region near the fixed point. A perturbative analysis is probably useful only for theories that display asymptotic freedom. In this paper I simply assume that all couplings are small, even though we know that this condition cannot be satisfied for an interacting symmetric scalar theory if the initial cutoff approaches infinity, because it is not asymptotically free. Before discussing $T^+$, I would like to explicitly show what happens when a physical mass is added to $u_2^*$. In this case we have $$\eqalign{ H_{\Lambda_0} = &\;\; \int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2)\; \bigl[{\bf q^\perp}^2+m^2\bigr]\; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(5.15)$$ \noindent One can readily verify that after $T^\perp$ is applied ${\cal N}$ times the result is $$\eqalign{{T^\perp}^{\cal N}\bigl[H_{\Lambda_0}\bigr]= \;\; \int d\tilde{q} \; \theta(\Lambda_0^2-{\bf q^\perp}^2) \; \bigl[{\bf q^\perp}^2+4^{\cal N} m^2\bigr]\; {a^\dagger(q) a(q) \over q^+} \;,} \eqno(5.16)$$ \noindent where I assume $\Lambda_1=\Lambda_0/2$. 
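The factor of $4^{\cal N}$ in Eq. (5.16) is the relevant eigenvalue of Eq. (5.14) at work: for the $(n=2, m=0)$ operator, $\lambda = (\Lambda_0/\Lambda_1)^2 = 4$ per step, while the kinetic term is marginal. A sketch of the iteration (illustrative numbers only):

```python
# Iterate the linearized transformation, Eq. (5.11) with n = 2, on
# u_2(q_perp^2) = q_perp^2 + m^2, taking Lambda_1 = Lambda_0/2.  The overall
# factor is z * zeta^2 * (L1/L0)^2 = (L0/L1)^2 = 4 (Eqs. 5.5 and 5.8).
scale = 2.0  # Lambda_0 / Lambda_1

def transform(u2):
    # u_2(q2) -> 4 * u_2(q2 / 4), where q2 = q_perp^2
    return lambda q2, u2=u2: scale**2 * u2(q2 / scale**2)

m2 = 1.0
u2 = lambda q2: q2 + m2  # Eq. (5.15)
N = 10
for _ in range(N):
    u2 = transform(u2)

# Eq. (5.16): the kinetic piece is untouched, the mass grows as 4^N;
# multiplying by 4^{-N} recovers the original spectrum.
q2 = 0.7
assert abs(u2(q2) - (q2 + scale**(2 * N) * m2)) < 1e-6
```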
As ${\cal N} \rightarrow \infty$ the Hamiltonian approaches another Gaussian fixed point, the so-called trivial fixed point at which the mass is infinitely large in comparison to the kinetic energy. Note that the original energy spectrum can always be obtained by multiplying the Hamiltonian by $4^{-{\cal N}}$. The analysis of the Gaussian fixed point and linearized approximation for $T^+$ closely parallels that of $T^\perp$. To find the Gaussian fixed point we must analyze $$\eqalign{ H_{\epsilon_0} = &\;\; \int d\tilde{q} \; \theta(q^+-\epsilon_0)\; u_2(q)\; {a^\dagger(q) a(q) \over q^+} \;,} \eqno(5.17)$$ \noindent and when the cutoff changes the effective Hamiltonian is $$\eqalign{ H_{\epsilon_1} = &\;\; \int d\tilde{q} \; \theta(q^+-\epsilon_1)\; u_2(q) \; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(5.18)$$ \noindent Next we change variables according to Eqs. (3.17) and (3.18), and multiply by an overall constant to obtain $$\eqalign{ T[H_{\epsilon_0}] = \;\; z^+ \zeta^{+2} \Bigl({\epsilon_0 \over \epsilon_1}\Bigr) &\int d\tilde{q} \; \theta(q^+-\epsilon_0) \; u_2({\bf q^\perp},{\epsilon_1 \over \epsilon_0} q^+)\; {a^\dagger(q) a(q) \over q^+} \;.} \eqno(5.19)$$ Here the factor of $(\epsilon_0 / \epsilon_1)$ in front of the integral arises because the Hamiltonian has the dimension given in Eq. (3.2), and I absorb this overall rescaling by setting $$z^+=\Bigl({\epsilon_1 \over \epsilon_0}\Bigr) \;. \eqno(5.20)$$ \noindent Given $z^+$, the Gaussian fixed point satisfies $$ u_2^*({\bf q^\perp},q^+)=\zeta^{+2}\;u_2^*({\bf q^\perp},{\epsilon_1 \over \epsilon_0} q^+)\;. \eqno(5.21)$$ \noindent The general solution is a monomial in $q^+$, $$u_2^*({\bf q^\perp},q^+)= f({\bf q^\perp}) \Bigl({1 \over q^+}\Bigr)^n \;, \eqno(5.22)$$ $$\zeta^+=\Bigl({\epsilon_1 \over \epsilon_0}\Bigr)^n \;. \eqno(5.23)$$ \noindent Again, one must decide {\eighteenit ab initio} which Gaussian fixed point to investigate, and I am interested in the case $n=0$. 
$T^\perp$ and $T^+$ both have Gaussian fixed volumes rather than fixed points, because neither scales all momentum components simultaneously. The single fixed point that both share, with the definitions of the scaling constants above, is $$u_2^*({\bf q^\perp},q^+)= {\bf q^\perp}^2 \;. \eqno(5.24)$$ The linearized approximation for $T^+$ is determined by the rescaling operations in Eqs. (3.17) and (3.18). Combined with Eq. (5.23) these yield $$\eqalign{ u_n({\bf q^\perp}_1,q^+_1,...) & \rightarrow z^+ \zeta^{+ \; n} \Bigl({\epsilon_0 \over \epsilon_1}\Bigr) \; u_n({\bf q^\perp}_1,{\epsilon_1 \over \epsilon_0} q^+_1,...) \cr & \rightarrow u_n({\bf q^\perp}_1,{\epsilon_1 \over \epsilon_0} q^+_1,...) \;.} \eqno(5.25) $$ \noindent The eigenoperators of this transformation are homogeneous polynomials of inverse longitudinal momenta. I choose to employ inverse derivatives for reasons that will become clear. For $T^+$ the solutions need not be labelled by the number of field operators, because the scalar field is longitudinally dimensionless, as in Eq.(3.1). However, for convenience I use the same notation found in Eq. (5.13), but with the superscript now indicating the power of inverse longitudinal momenta. In this case one finds that $$O_2^4 = \int d\tilde{q} \; \theta(q^+-\epsilon_0) \Bigl[{1 \over q^+} \Bigr]^4 \; {a^\dagger(q) a(q) \over q^+} \;, \eqno(5.26)$$ \noindent and $$L\cdot O_n^m = \Bigl( {\epsilon_0 \over \epsilon_1} \Bigr)^m O_n^m \;. \eqno(5.27)$$ \noindent I have not included the inverse power of $q^+$ found in the measure $d\tilde{q}$, nor the inverse power of $q^+$ that accompanies any product of creation and annihilation operators found in the Hamiltonian, in the index $m$. This is clear in Eq. (5.26). The eigenvalue in this case is determined by the naive longitudinal engineering dimension of the operator. 
$\epsilon_0 < \epsilon_1$, so longitudinal relevant operators must have $m<0$; longitudinal marginal operators have $m=0$ and irrelevant operators have $m>0$. This explains the choice of inverse derivatives, because powers of inverse derivatives produce irrelevant operators. The same type of simplification that occurs in a Euclidean renormalization group when locality is assumed might occur when one formulates a principle of {\eighteenit longitudinal nonlocality} for the longitudinal transformation. When we study second-order corrections in Section {\uppercase\expandafter{\romannumeral6 }}, we will unfortunately find that $T^+$ leads to problems that prevent us from exploiting any simplifications from longitudinal nonlocality. This completes the analysis of $T^+$ to leading order near the fixed point. The analysis of the next order shows that $T^+$ cannot be approximated in perturbation theory unless one allows arbitrary functions of transverse momenta, including functions that violate transverse locality. Worse, one encounters arbitrarily large coefficients of irrelevant operators in perturbation theory, so these operators cannot be dropped. It is fairly easy to see that Hamiltonians of physical interest are fixed points of $T^+$, using Eq. (3.19). I demonstrate this and discuss some implications before closing this Section. In the first step of the transformation $T^+$, we must remove degrees of freedom following Eq. (3.16), and find a new Hamiltonian that yields the same eigenvalues and properly renormalized eigenstate projections. A Hamiltonian that meets these criteria is readily found by applying a longitudinal boost, so that $$p^+ \rightarrow {\epsilon_1 \over \epsilon_0} p^+ \;. \eqno(5.28)$$ \noindent After this boost all longitudinal momenta satisfy the constraint $\epsilon_1 < p^+ < \infty$.
Under this same boost we know that $$H \rightarrow {\epsilon_0 \over \epsilon_1} H \;, \eqno(5.29)$$ \noindent which reveals that when we multiply the boosted Hamiltonian by the constant $(\epsilon_1 / \epsilon_0)$, we retrieve the original Hamiltonian, but with momenta restricted to a new range. $T^+$ is completed by rescaling the variables according to Eqs. (3.17) and (3.18), and with $z^+=(\epsilon_1 / \epsilon_0)$ and $\zeta^+=1$, this merely returns the longitudinal momenta to their original values before the boost and has no other effect on the Hamiltonian. In other words, as long as Eq. (5.29) holds, we have $$T^+[H]=H \;, \eqno(5.30)$$ \noindent so that $H$ is a fixed point of the full transformation $T^+$. This may be an important result. However, fixed points of $T^+$ that contain interactions invariably contain operators that violate transverse locality and leave one with no perturbative means to study the dependence of the Hamiltonian on the transverse scale. At this point I have discovered no benefit to implementing longitudinal boost-invariance by seeking fixed points of $T^+$. All of the boost-invariant renormalization group transformations have the same Gaussian fixed points as $T^\perp$, and the same linear analysis as $T^\perp$. Therefore, for each of these transformations, operators are classified according to their transverse dimensionality. Longitudinal dimensionality plays no role whatsoever, and the linear analysis cannot be used to control the longitudinal functions in the Hamiltonian. There is no need to repeat the discussion leading up to Eq. (5.14). The only caveat is that strictly speaking the invariant-mass transformation that employs the cutoff in Eq. (3.24) has no fixed points. 
As discussed in Section {\uppercase\expandafter{\romannumeral3 }}, after a finite number of transformations all of phase space is eliminated and the transformed Hamiltonian approaches zero because of the masses in this cutoff, unless the initial cutoff is allowed to approach infinity. However, using analytic continuation one can extend the functions $u_2$, $u_4$, etc. outside the range of the invariant mass cutoff and define the fixed point using the analytically continued Hamiltonian. In this case, the Gaussian fixed points are again identical to the Gaussian fixed points of $T^\perp$. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral6 }}. Second-Order Behavior Near Critical Gaussian Fixed Points} \medskip The perturbative renormalization group analysis becomes much more interesting when second-order corrections to the Hamiltonian are included. In fact, almost all features of a complete perturbative analysis are displayed in a second-order analysis. In this Section I begin by adding one simple interaction to the Hamiltonian and studying the interactions it generates to second-order. This analysis demonstrates that most candidate transformations produce irrelevant interactions with divergent coefficients. I next concentrate on the invariant-mass transformation that runs the cutoff in Eq. (3.23). This transformation is used throughout the rest of the paper, except in Appendix C, where the effects of physical masses are briefly studied and a transformation that runs the cutoff in Eq. (3.24) must be used. It is shown that if one drops all irrelevant operators, a complete sequence of second-order transformations may be summarized by a few coupled difference equations. These equations are solved, and the physics implied by their solution is discussed. The final subject in this Section is the most important.
I first show that when one uses the canonical $\phi^4$ interaction and the invariant-mass cutoff, the boson mass shift and the boson-boson scattering amplitude violate both Lorentz covariance and cluster decomposition. The calculation of these expectation values reveals the `counterterms' required to restore Lorentz covariance and cluster decomposition; and these counterterms include an infinite number of irrelevant operators, as well as a marginal operator that contains logarithms of longitudinal momentum fractions. This result is not surprising, and the important question is how to find a Hamiltonian that restores these properties to the mass shift and the scattering amplitude. Of course, one method is to simply compute amplitudes in perturbation theory, and add to the Hamiltonian the difference between the result desired and the result obtained. This is not very satisfactory, because it requires reference to a separate calculation, so I ask a question that Wilson and I have recently posed \APrefmark{\rPERWIL}. What happens if one allows the Hamiltonian to become arbitrarily complicated, but {\eighteenit one insists that a single coupling is allowed to explicitly depend on the cutoff}, with all other interactions depending on the cutoff only through their dependence on this coupling? The answer may depend on the coupling one allows to run, and I select the $\phi^4$ canonical coupling, which a manifestly covariant analysis would show is the only coupling that must be allowed to run if $\phi \rightarrow -\phi$ symmetry is maintained. In this Section I show that coupling coherence uniquely fixes all relevant and irrelevant operators to second-order in the $\phi^4$ coupling constant; and that these operators restore Lorentz covariance and cluster decomposition to the boson mass shift, which is a relevant operator, and to the irrelevant part of the boson-boson scattering amplitude. 
I then show that a third-order analysis is required to determine the marginal operators to second-order in the canonical coupling. The third-order analysis is completed in Section {\uppercase\expandafter{\romannumeral7 }}. The second term on the right-hand side of Eq. (4.14) can be used to determine the second-order behavior of any transformation near a critical Gaussian fixed point. The fixed point Hamiltonian is $h$ in Eq. (4.14), and all deviations from the fixed point are part of $v$. I should note that deviations of $u_2$ from $u_2^*$ are produced by the second-order part of the transformation, but they do not directly affect the subsequent second-order behavior of the transformations. This is because we need two interactions to first produce an intermediate state with an energy above the new cutoff and then produce a final state with an energy below the new cutoff, and $u_2$ does not change the state. If we consider the second-order behavior of a transformation when acting on a Hamiltonian in which $u_4$ is the only nonzero interaction, for example, we encounter the Hamiltonian diagrams in figure 3. A detailed derivation of the expression for the Hamiltonian diagram in figure 3a is given in Appendix B as an illustrative example, but throughout the text I simply give the appropriate expressions without explicit derivation. The diagrammatic rules for Hamiltonian diagrams are almost identical to the diagrammatic rules for time-ordered Green's functions given in Appendix A. The principal differences are that an infinite number of vertex rules are required, each of which is easily determined from the related interaction; and the rules for energy denominators differ, as discussed in Section {\uppercase\expandafter{\romannumeral4 }}. I have not bothered to indicate the energy denominators in figure 3, because as one sees in Eq. (4.14), there are only two choices in the second-order term. 
The denominator contains either a difference of the incoming state energy and the intermediate state energy or the outgoing state energy and the intermediate state energy. A transformation is completed by evaluating these Hamiltonian diagrams with the appropriate cutoffs, and then rescaling the external variables. Given the transformations in Section {\uppercase\expandafter{\romannumeral3 }}, and the techniques for constructing effective Hamiltonians in Section {\uppercase\expandafter{\romannumeral4 }}, we can readily compute the second-order behavior of the transformations `near' the critical Gaussian fixed point in Eq. (5.24). The first step in this process is to choose an initial Hamiltonian, $H^{\Lambda_0}_{\Lambda_0}$. If we choose a Hamiltonian for which only $u_2$ is nonzero, the linear analysis is exact, so we need to add at least one interaction to study second-order behavior. The simplest examples should result from adding only one interaction, and the strength of this interaction must be sufficiently weak that $H^{\Lambda_0}_{\Lambda_0}$ is near the Gaussian fixed point. Regardless of what interaction we add to the fixed point Hamiltonian to create $H^{\Lambda_0}_{\Lambda_0}$, we will find that $H^{\Lambda_0}_{\Lambda_1}$ contains an infinite number of interactions. Any interaction we add leads to an infinite number of irrelevant interactions at least, so I first consider the relevant or marginal interactions that can be added to the Gaussian fixed point. Let me specialize to the discussion of $T^\perp$ and the boost-invariant transformations. For these transformations, there are relevant and marginal interactions with $u_3$ nonzero, and there are marginal operators with $u_4$ nonzero. I assume that the symmetry $\phi \rightarrow -\phi$ is maintained manifestly, so that only $u_4$ is nonzero. 
If spontaneous symmetry breaking occurs, so that the $\phi \rightarrow -\phi$ symmetry is hidden, $u_3$ must be allowed to appear in the Hamiltonian; however, in this situation it is a function of the variables appearing in the manifestly symmetric theory, as Wilson and I have discussed \APrefmark{\rPERWIL}. There can be no powers of transverse momenta in the marginal part of $u_4$, because these lead only to irrelevant operators according to the analysis of Section {\uppercase\expandafter{\romannumeral5 }}. On the other hand, functions of longitudinal momenta have no effect on the linear analysis, so we should consider $$H^{\Lambda_0}_{\Lambda_l} = H^* + \delta H_l \;, \eqno(6.1)$$ \noindent with, $$\eqalign{\delta H_0 &= {1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad {\mathaccent "7E g}(-q_1^+,-q_2^+,-q_3^+,q_4^+)\; a^\dagger(q_1) a^\dagger(q_2) a^\dagger(q_3) a(q_4) \cr &+{1 \over 4} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2-q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad {\mathaccent "7E g}(-q_1^+,-q_2^+,q_3^+,q_4^+)\; a^\dagger(q_1) a^\dagger(q_2) a(q_3) a(q_4) \cr &+{1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1-q_2-q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad {\mathaccent "7E g}(-q_1^+,q_2^+,q_3^+,q_4^+)\; a^\dagger(q_1) a(q_2) a(q_3) a(q_4) \;,}\eqno(6.2)$$ \noindent where ${\mathaccent "7E g}(q_i^+)$ is the marginal part of $u_4$. This is still too general for an initial discussion, because of the function ${\mathaccent "7E g}$. The only constraints on this function are that it be symmetric under the interchange of any two arguments with the same sign and that it be dimensionless, so that ratios of momenta must be employed unless there is a cutoff on longitudinal momenta that can be used to form ${\mathaccent "7E g}$. 
I assume that such a cutoff does not exist, because one must typically employ $T^+$ to control a cutoff on longitudinal momenta, and I show below that perturbation theory cannot be used to construct $T^+$. We will see below that spectator momenta also enter ${\mathaccent "7E g}$ for some transformations. Ultimately one would like to use Lorentz covariance to place restrictions on $u_4$, but boost invariance alone places no further restrictions. Let me first concentrate on the simplest choice, $${\mathaccent "7E g}(q_1^+,q_2^+,q_3^+,q_4^+) = g \;. \eqno(6.3)$$ \noindent This leads to the familiar canonical $\phi^4$ Hamiltonian, with no mass term. We will see that a mass term is always generated by the transformation. Figure 3 shows the second-order corrections to the effective Hamiltonian that arise when we use $T^\perp$ or any other transformation. Let me begin by studying one of the corrections to $u_4$, the first Hamiltonian diagram in figure 3b. The analytic expression corresponding to this diagram is determined by the second-order term in Eq. (4.14), and I further simplify the calculation by considering only the part of the expression that involves the energy of the incoming state. The relevant momenta are shown in the figure, but the analytic expression is drastically simplified if one chooses the Jacobi variables, $$\eqalign{ &(p_1^+,{\bf p^\perp}_1) = (y {\mit P}^+, y {\mit P}^\perp+{\bf r^\perp}) \;,\;\;\; (p_2^+,{\bf p^\perp}_2) = ((1-y) {\mit P}^+, (1-y) {\mit P}^\perp-{\bf r^\perp})\;, \cr &(k_1^+,{\bf k^\perp}_1) = (x {\mit P}^+, x {\mit P}^\perp+{\bf s^\perp}) \;,\;\;\; (k_2^+,{\bf k^\perp}_2) = ((1-x) {\mit P}^+, (1-x) {\mit P}^\perp-{\bf s^\perp})\;.} \eqno(6.4)$$ \noindent Note that all longitudinal momentum fractions, such as $x$ and $y$ in Eq. (6.4), range from 0 to 1. The total momentum is ${\mit P}$. 
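The utility of the Jacobi variables can be checked directly: for massless particles with light-front energies ${\bf q^\perp}^2/q^+$, the total transverse momentum drops out of energy differences, leaving the combination ${\bf r^\perp}^2/(y(1-y)) - {\bf s^\perp}^2/(x(1-x))$ that appears in the energy denominator below. A numerical sketch (all values illustrative):

```python
import random

# Check the Jacobi-variable identity behind the energy denominator:
# for massless particles (light-front energy q_perp^2 / q_plus) with the
# momenta of Eq. (6.4), the pair energy is
#   P_perp^2/P_plus + r_perp^2 / (y (1-y) P_plus),
# so energy differences depend only on the relative variables.
random.seed(1)

def pair_energy(y, rx, ry, Px, Py, Pplus):
    # sum of q_perp^2 / q_plus over the two particles of Eq. (6.4)
    e1 = ((y * Px + rx)**2 + (y * Py + ry)**2) / (y * Pplus)
    e2 = (((1 - y) * Px - rx)**2 + ((1 - y) * Py - ry)**2) / ((1 - y) * Pplus)
    return e1 + e2

for _ in range(100):
    y, x = random.uniform(0.05, 0.95), random.uniform(0.05, 0.95)
    rx, ry, sx, sy = [random.uniform(-1, 1) for _ in range(4)]
    Px, Py, Pplus = random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.5, 2)
    diff = pair_energy(y, rx, ry, Px, Py, Pplus) - pair_energy(x, sx, sy, Px, Py, Pplus)
    r2, s2 = rx**2 + ry**2, sx**2 + sy**2
    assert abs(diff - (r2 / (y * (1 - y)) - s2 / (x * (1 - x))) / Pplus) < 1e-9
```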
Using these variables one finds that the correction is $$\eqalign{ \delta v_4 = {1 \over 2}\cdot {1 \over 2} \cdot g^2\; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Bigl[{{\bf r^\perp}^2 \over y(1-y)} - {{\bf s^\perp}^2 \over x(1-x)}\Bigr]^{-1} \cr &\theta\bigl(\Lambda_0^2-(x {\mit P}^\perp+{\bf s^\perp})^2\bigr) \theta\bigl((x {\mit P}^\perp+{\bf s^\perp})^2 - \Lambda_1^2\bigr) \cr &\theta\bigl(\Lambda_0^2-((1-x){\mit P}^\perp-{\bf s^\perp})^2\bigr) \theta\bigl(((1-x) {\mit P}^\perp-{\bf s^\perp})^2 - \Lambda_1^2\bigr) \cr &\theta\bigl(\Lambda_1^2-(y{\mit P}^\perp+{\bf r^\perp})^2\bigr) \theta\bigl(\Lambda_1^2-((1-y){\mit P}^\perp-{\bf r^\perp})^2\bigr) \;.}\eqno(6.5)$$ \noindent I use the notation $\delta v_4$ to indicate that the rescaling operations in $T^\perp$ have not been completed. The first factor of $1/2$ is seen in Eq. (4.14), while the second factor of $1/2$ is a combinatoric factor that can be seen in Eq. (A.16) in Appendix A. The final step functions are associated with the incoming and outgoing particles, whose momenta must satisfy $0 \le {\bf p^\perp}^2 \le \Lambda_1^2$ before the rescaling operations. Usually I do not display the step function cutoffs associated with the incoming and outgoing states, because they are always understood to be present. After completing the rescaling operations of Eqs. (3.10) and (3.11), and multiplying by $z^\perp$, one obtains one of the second-order contributions to $\delta u_4$. There are three important features of Eq. (6.5) upon which I want to focus. First, the cutoffs employed in $T^\perp$ lead to a somewhat complicated analytical analysis, because of their dependence on the total transverse momentum of the incoming particles. However, one might normally be willing to live with this difficulty because it affects only irrelevant operators in $u_4$. Such difficulties also occur for marginal operators in $u_2$. The second feature of Eq. (6.5) is its dependence on ${\bf r^\perp}$ and $y$. 
We started with a very simple expression for $u_4$ given in Eq. (6.3), and after one application of $T^\perp$ we generate a complicated $u_4$. The renormalization group analysis is useful if we can show that most of this complexity is associated with irrelevant operators. Remembering that any powers of transverse momenta that appear in $u_4$ are irrelevant operators, we should expand the integrand in powers of ${\bf r^\perp}$. This leads to $$\eqalign{ \delta v_4 = -{g^2 \over 4}\; \int & {d^2s^\perp dx \over 16 \pi^3 }\; {1 \over {\bf s^\perp}^2} \sum_{n=0}^\infty \Bigl[{x(1-x) \over y(1-y)} {{\bf r^\perp}^2 \over {\bf s^\perp}^2}\Bigr]^n \cr &\theta\bigl(\Lambda_0^2-(x {\mit P}^\perp+{\bf s^\perp})^2\bigr) \theta\bigl((x {\mit P}^\perp+{\bf s^\perp})^2 - \Lambda_1^2\bigr) \cr &\theta\bigl(\Lambda_0^2-((1-x){\mit P}^\perp-{\bf s^\perp})^2\bigr) \theta\bigl(((1-x) {\mit P}^\perp-{\bf s^\perp})^2 - \Lambda_1^2\bigr) \;.}\eqno(6.6)$$ \noindent I do not display the cutoffs associated with the incoming and outgoing particles. There may be little problem with the convergence of this expansion as far as the ratio ${\bf r^\perp}^2/{\bf s^\perp}^2$ is concerned, because the cutoffs ensure that this ratio is usually less than one. It is not guaranteed that this ratio is always less than one; however, we will find a far more serious problem. I choose $\Lambda_1=\Lambda_0/2$ in the remaining calculations, unless specified otherwise. Consider the simplest case in which ${\mit P}^\perp=0$, so that $$\delta v_4 = -{g^2 \over 32 \pi^2} \Biggl\{\ln(2)+\sum_{n=1}^\infty {\sqrt{\pi} \over 2^{2n+1}} {\Gamma(n+1) \over \Gamma(n+3/2)} {1-(1/2)^{2n} \over 2n} \Bigl[{{\bf r^\perp}^2 \over y(1-y)\Lambda_1^2}\Bigr]^n \Biggr\} \;. \eqno(6.7)$$ \noindent After rescaling the momenta according to Eq.
(3.10) we obtain a Hamiltonian that contains $$\delta u_4 = -{ g^2 \over 32 \pi^2} \Biggl\{ln(2)+\sum_{n=1}^\infty {\sqrt{\pi} \over 2^{2n+1}} {\Gamma(n+1) \over \Gamma(n+3/2)} {1-(1/2)^{2n} \over 2n} \Bigl[{{\bf r^\perp}^2 \over y(1-y)\Lambda_0^2}\Bigr]^n \Biggr\} \;. \eqno(6.8)$$ The first term in this expansion is identical in form to the interaction with which we began, and it causes the strength of this marginal operator to decrease as the cutoff decreases. If we add the correction involving the outgoing energy, the first Hamiltonian diagram in figure 3b alters the marginal part of $u_4$ by a factor of $-ln(2)/(16 \pi^2)\;g^2$. The remaining terms all correspond to irrelevant operators. Before we can drop such irrelevant operators as a first approximation, however, we should show that not only do they occur with small coefficients, they also continue to lead to small corrections. In order to see that this is {\eighteenit not} the case, let me simplify the problem by considering a Hamiltonian that contains only the irrelevant interaction $$u_4(q_1,q_2,q_3,q_4)=h \Bigl[(q_1^+ + q_2^+) \bigl( {{\bf q^\perp}^2_1 \over q_1^+ \Lambda_0^2} + {{\bf q^\perp}^2_2 \over q_2^+ \Lambda_0^2}\bigr) + \; permutations \; \Bigr] \;. \eqno(6.9)$$ \noindent While it may not be obvious, after a change of variables this interaction leads to the first irrelevant operator in Eq. (6.8). When we apply the transformation $T^\perp$ to the Hamiltonian containing this irrelevant interaction we encounter Hamiltonian diagrams identical in form to those shown in figure 3; however, instead of the vertex in Eq. (6.3) we have the vertex in Eq. (6.9). Following steps analogous to those leading up to Eq. 
(6.6) one set of resultant terms is $$\eqalign{ \delta v_4 = -{h^2 \over 4} \; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; {1 \over \Lambda_0^2} \sum_{n=0}^\infty \Bigl[{x(1-x) \over y(1-y)} {{\bf r^\perp}^2 \over {\bf s^\perp}^2}\Bigr]^n \cr &\theta\bigl(\Lambda_0^2-(x {\mit P}^\perp+{\bf s^\perp})^2\bigr) \theta\bigl((x {\mit P}^\perp+{\bf s^\perp})^2 - \Lambda_1^2\bigr) \cr &\theta\bigl(\Lambda_0^2-((1-x){\mit P}^\perp-{\bf s^\perp})^2\bigr) \theta\bigl(((1-x) {\mit P}^\perp-{\bf s^\perp})^2 - \Lambda_1^2\bigr) \;.}\eqno(6.10)$$ \noindent While this is almost identical to Eq. (6.6), the $n=0$ term is infinite. This problem inevitably results when one applies $T^\perp$; but it is much worse than this of course, because one generates arbitrarily high inverse powers of longitudinal momentum fractions, as is apparent in Eq. (6.7). The cutoffs do not prevent these fractions from becoming arbitrarily small, and divergences inevitably result. Let me note that $T^\perp$ is the type of transformation one must employ when transverse lattice regularization is used and one wants to vary the transverse lattice spacing \APrefmark{\rBARONE,\rBARTWO,\rGRIFFIN}. Placing cutoffs on the longitudinal momentum, as required if we use $T^+$, alters this problem slightly; but it does not cure the problem. Once we have placed a cutoff on longitudinal momentum, $\epsilon_0$, we are free to consider the effective four-point interaction for particles that have much larger longitudinal momentum, ${\mit P}^+$, than this cutoff. The longitudinal momentum cutoff then shows up in Eq. (6.10) as a small cutoff on the $x$ integration, leading to a factor of $ln({\mit P}^+/\epsilon_0)$ instead of $\infty$. This is an improvement, but the analysis of higher-order irrelevant operators in Eq. (6.17), for example, leads to arbitrarily large powers of ${\mit P}^+/\epsilon_0$, and we have no way to prevent this ratio from producing arbitrarily large coefficients. 
The sceptical reader should explore this problem much further, but I do not believe that there are any simple solutions short of abandoning perturbation theory. All of the above problems can be traced to the fact that some of the energies of states retained by $T^\perp$, as determined by the fixed point Hamiltonian, are larger than some of the energies of states removed. The energy denominators must be expanded in powers of the energy of incoming and outgoing states to separate relevant, marginal, and irrelevant operators; and this expansion does not converge. The problem is extremely severe for $T^\perp$ because states of infinite energy are retained, leading to infinite errors when irrelevant operators are dropped. Similar problems are encountered when one studies $T^+$. The cutoffs in $T^+$ require one to integrate over transverse momenta from zero to the transverse momentum cutoff. For massless Hamiltonians this leads to transverse infrared divergences, and even for massive Hamiltonians one again finds that the irrelevant longitudinal operators end up producing arbitrarily large effects if the external transverse momenta are large. I do not detail this problem further, because the calculations are quite similar to those above, with the only changes being in the cutoffs that occur inside the integrals. If we consider a boost-invariant transformation that places any cutoff on transverse momenta without limiting small longitudinal momenta, we encounter exactly the same problems discussed above. Thus, there is no reason to consider the transformation that utilizes the cutoff in Eq. (3.22). The cutoffs in Eqs. (3.23) and (3.24) limit the longitudinal momenta, and I show that the transformations associated with them apparently avoid the above problems. However, both of these cutoffs involve a sum over all particles present in a given state. As we shall see, this leads to second-order corrections that are spectator-dependent. 
Spectator dependence may not drastically complicate a perturbative renormalization group analysis. The boost-invariant transformation that utilizes the cutoff in Eq. (3.23) avoids the problems found for $T^\perp$ in second-order. Using this transformation, and starting with a Hamiltonian that contains the interaction in Eq. (6.3), the correction in figure 3b becomes, $$\eqalign{ \delta v_4 = {g^2 \over 4}\; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Bigl[{{\bf r^\perp}^2 \over y(1-y)} - {{\bf s^\perp}^2 \over x(1-x)}\Bigr]^{-1} \cr &\theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)} - \Lambda_1^2\Bigr) \theta\Bigl(\Lambda_1^2-{{\bf r^\perp}^2 \over y(1-y)}\Bigr) \;.}\eqno(6.11)$$ \noindent This is identical to Eq. (6.5) except for the cutoffs. There is an important assumption that is made in writing Eq. (6.11). I have assumed that there are no spectators, so that the interactions displayed in figure 3 occur without any disconnected lines present. I return to this issue later, and discuss spectator effects. At this point we can expand the denominator exactly as was done above, but now the cutoffs guarantee that $$ {{\bf s^\perp}^2 \over x(1-x)} \ge {{\bf r^\perp}^2 \over y(1-y)} \;. \eqno(6.12)$$ \noindent Following the same steps that led to Eq. (6.7) we find $$\eqalign{ \delta v_4 &= -{g^2 \over 4}\; \int {d^2s^\perp dx \over 16 \pi^3 }\; {1 \over {\bf s^\perp}^2} \sum_{n=0}^\infty \Bigl[{x(1-x) \over y(1-y)} {{\bf r^\perp}^2 \over {\bf s^\perp}^2}\Bigr]^n \cr &\qquad \qquad \qquad \theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)} - \Lambda_1^2\Bigr) \cr &= -{ g^2 \over 32 \pi^2} \Biggl\{ln(2)+\sum_{n=1}^\infty {1-(1/2)^{2n} \over 2n} \Bigl[{{\bf r^\perp}^2 \over y(1-y)\Lambda_1^2}\Bigr]^n \Biggr\} \;.}\eqno(6.13)$$ \noindent I have not displayed the step function cutoffs on the incoming energy. 
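The reduction that produces Eq. (6.13) is easy to verify numerically. The following Python sketch (not part of the original analysis; the choices $\Lambda_0=1$, $\Lambda_1=1/2$ and the grid sizes are mine) evaluates the $n=0$ integral with a midpoint rule and compares it with the analytic value $\ln 2/(8\pi^2)$; multiplying by the prefactor $-g^2/4$ then reproduces the marginal term $-g^2\,\ln 2/(32\pi^2)$ of Eq. (6.13).

```python
import math

def marginal_coefficient(L0=1.0, L1=0.5, nx=400, nu=400):
    """Midpoint-rule evaluation of the n = 0 term of Eq. (6.13):
    I = int d^2s dx / (16 pi^3 s^2) over the shell L1^2 <= s^2/(x(1-x)) <= L0^2.
    The angular integral is trivial, so d^2s = pi d(s^2)."""
    total = 0.0
    for i in range(nx):
        x = (i + 0.5) / nx
        lo = x * (1.0 - x) * L1 ** 2   # lower edge of the invariant-mass shell
        hi = x * (1.0 - x) * L0 ** 2   # upper edge of the invariant-mass shell
        du = (hi - lo) / nu
        for j in range(nu):
            t = lo + (j + 0.5) * du    # t = s_perp^2
            total += math.pi * du / (16.0 * math.pi ** 3 * t) / nx
    return total

I = marginal_coefficient()
# substituting u = s^2/(x(1-x)) gives ln(L0^2/L1^2)/(16 pi^2) = ln(2)/(8 pi^2)
exact = math.log(2.0) / (8.0 * math.pi ** 2)
```

Multiplied by $-g^2/4$, this is the coefficient of the marginal operator before rescaling.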
To complete the transformation we need to rescale according to Eq. (3.10), and as we found above in Eq. (6.8), the only effect this has on $\delta u_4$ is to change $\Lambda_1$ in Eq. (6.13) into $\Lambda_0$. This result is similar to the result for $T^\perp$, and to see that problems do not arise when we add these irrelevant operators and apply the transformation a second time we need to follow the steps leading to Eq. (6.10). Starting with a Hamiltonian that contains the interaction given in Eq. (6.9), and applying an invariant-mass transformation, we derive $$\eqalign{ \delta v_4 &= -{h^2 \over 4} \; \int {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; {1 \over \Lambda_0^2} \sum_{n=0}^\infty \Bigl[{x(1-x) \over y(1-y)} {{\bf r^\perp}^2 \over {\bf s^\perp}^2}\Bigr]^n \cr &\qquad \qquad \qquad \theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)} - \Lambda_1^2\Bigr) \cr &= -{h^2 \over 32 \pi^2} \Biggl\{{3 \over 8}+ {ln(2) \over 4} {{\bf r^\perp}^2 \over y(1-y)\Lambda_1^2} +\sum_{n=2}^\infty {1-(1/2)^{2n-2} \over 2n-2} \Bigl[{{\bf r^\perp}^2 \over y(1-y)\Lambda_1^2}\Bigr]^n \Biggr\} \;.}\eqno(6.14)$$ \noindent Here again the transformation must be completed by rescaling the momenta, which again replaces $\Lambda_1$ with $\Lambda_0$. Not only is every operator in the sum bounded by cutoffs on external momenta, every coefficient in the expansion is small when $h$ is small. Since these irrelevant operators occur at ${\cal O}(g^2)$, the corrections in Eq. (6.14) are ${\cal O}(g^4)$. This should give us hope that irrelevant operators do indeed lead to small corrections in an analysis that starts by simply discarding such operators at each stage, at least while the Hamiltonian remains near the Gaussian fixed point. Unfortunately, it is not sufficient to show that the coefficients of the irrelevant operators are small, and that each subsequent correction produced by each irrelevant operator is small. 
One should also worry about the convergence of the corrections produced by the entire sum of irrelevant operators. For example, the reader should be concerned with the convergence of the sum in Eq. (6.13). Completing the transformation by rescaling ${\bf r^\perp}$ and performing the sum leads to $$\eqalign{ \delta u_4 &= -{ g^2 \over 32 \pi^2} \Biggl\{ln(2)+ {1 \over 2}\;ln\Bigl( {4y(1-y)\Lambda_0^2-{\bf r^\perp}^2 \over 4y(1-y)\Lambda_0^2-4{\bf r^\perp}^2}\Bigr) \Biggr\}\;\theta\Bigl(\Lambda_0^2 - {{\bf r^\perp}^2 \over y(1-y)} \Bigr) \;.}\eqno(6.15)$$ \noindent I have restored the cutoff on the incoming energy, and one can clearly see a potential problem coming from the fact that the logarithm diverges when the incoming energy reaches its maximum value. This divergence occurs because the energy denominator in Eq. (6.11) vanishes on a surface when the incoming energy equals $\Lambda_1^2$. However, this divergence is simply being buried in the irrelevant operators. How can this make sense? There are two issues one must address. First, one must determine whether this divergence actually shows up in observables if the Hamiltonian containing the interaction in Eq. (6.15) is solved. While I do not go through the details, if one studies two-particle scattering in perturbation theory with such a Hamiltonian, the logarithmic divergence in Eq. (6.15) enters first-order perturbation theory with a strength of ${\cal O}(g^2)$. In second-order perturbation theory there is an additional contribution of ${\cal O}(g^2)$ coming from the interaction in Eq. (6.3). This contribution also has a logarithmic divergence that tends to cancel the divergence from the interaction in Eq. (6.15). If one drops the irrelevant operators, one finds that this latter divergence is not canceled and a large error is made if the invariant-mass of the scattering state is near the cutoff.
Of course, one expects large errors near the cutoff; but well below the cutoff one hopes that errors are small even when irrelevant operators are dropped, and that the results are systematically improved as the leading irrelevant operators are retained. At least to leading orders in the coupling it is easy to verify that this happens. The second issue is directly relevant to the renormalization group. One must determine whether the divergence in Eq. (6.15) causes the irrelevant operators to have a significant effect on the next Hamiltonian produced by a transformation. To determine this, include the entire logarithm in a vertex and study the second-order correction in the first Hamiltonian diagram of figure 3b, using this logarithmic vertex in combination with the original vertex from Eq. (6.3). Keeping only the marginal part of $\delta v_4$, and not worrying about combinatoric factors, we obtain $$\eqalign{ \delta v_4 \approx g^3\; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \;{x(1-x) \over {\bf s^\perp}^2} \Biggl\{ln(2)+\sum_{n=1}^\infty {1-(1/2)^{2n} \over 2n} \Bigl[{{\bf s^\perp}^2 \over x(1-x)\Lambda_0^2}\Bigr]^n \Biggr\} \cr & \qquad \qquad \theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)} - \Lambda_1^2\Bigr) \;.}\eqno(6.16)$$ \noindent I have restored the sum to facilitate the discussion of convergence. The integrals are easily completed, leading to $$\eqalign{ \delta v_4 \approx & {g^3 \over 8 \pi^2}\; \Biggl\{\bigl[ln(2)\bigr]^2 +\sum_{n=1}^\infty \Bigl[{1-(1/2)^{2n} \over 2n}\Bigr]^2 \Biggr\} \cr \approx & {g^3 \over 8 \pi^2}\;\Biggl\{0.480+0.141+0.055+0.027+\cdot \cdot \cdot \Biggr\} \;.}\eqno(6.17)$$ Clearly this sum converges. It is also clear that it converges rather slowly, with an error that falls off as the inverse power of the number of terms included. 
If one wants to include corrections of ${\cal O}(g^3)$, such as the third-order corrections to the transformation studied in the next Section, without making a large ({\eighteenit e.g.}, 25\%) error in the coefficient of these corrections, it is also necessary to include some irrelevant operators. However, if $g$ is small, it is apparent that these irrelevant operators produce small corrections to the second-order analysis, despite the fact that the logarithm arising in Eq. (6.13) diverges. This is possible because the divergence is integrable. Next I want to show that the transformation generates a mass term in $u_2$, and that the mass counterterm required when we try to keep the physical mass zero is rather unusual. In figure 3a I show the second-order correction to the Hamiltonian that affects $u_2$, and in Appendix B I analyze this correction for arbitrary $u_4$. I use the interaction in Eq. (6.3). To proceed, define the Jacobi variables $$\eqalign{ &(k_1^+,{\bf k^\perp}_1) = (x {\mit P}^+, x {\mit P}^\perp+{\bf q^\perp}) \;,\;\;\; (k_2^+,{\bf k^\perp}_2) = (y {\mit P}^+, y {\mit P}^\perp+{\bf r^\perp})\;, \cr & \qquad \qquad \qquad (k_3^+,{\bf k^\perp}_3) = (z {\mit P}^+, z {\mit P}^\perp+{\bf s^\perp}).} \eqno(6.18)$$ \noindent Using these variables the second-order correction to $u_2$ is $$\eqalign{ \delta v_2 = - {g^2 \over 3!}\; \int & {d^2q^\perp dx \over 16 \pi^3 x}\; {d^2r^\perp dy \over 16 \pi^3 y}\; {d^2s^\perp dz \over 16 \pi^3 z}\; \Bigl[{{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x-y-z) \delta^2({\bf q^\perp}+{\bf r^\perp}+{\bf s^\perp})\cr &\theta\Bigl(\Lambda_0^2- {{\bf q^\perp}^2 \over x} - {{\bf r^\perp}^2 \over y} - {{\bf s^\perp}^2 \over z} \Bigr) \theta\Bigl({{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} - \Lambda_1^2\Bigr) \;.}\eqno(6.19)$$ \noindent This expression does not depend on ${\mit P}$, so it leads only to a mass shift and 
does not change the marginal operators in $u_2$ or produce any irrelevant operators in $u_2$. This conclusion changes when spectators are included. It should be obvious that the integral does not vanish, and therefore $u_2$ develops a negative mass-squared term if we start with a Hamiltonian that has no mass term. It is important to note that the mass shift is not infinite, despite the fact that particles with small transverse momenta are removed. If transverse infrared divergences appear, transverse nonlocalities may follow. Consider a Euclidean field theory and place a cutoff on the four-momentum squared, $q^2$. When states with a given large range of $q^2$ are removed, it is not required that every component of the momentum be small. If $q_0^2$ is large, for example, states with small values of $q_3^2$ are removed, and one might naively worry that this produces long-range forces in the $x^3$ direction. Of course this does not happen, because of rotational invariance. In light-front field theory the invariant-mass cutoff violates strict rotational invariance; however, it retains some features of this symmetry and apparently allows one to remove states with small transverse momenta without producing long-range transverse interactions, at least in low orders of perturbation theory. The most interesting examples of this principle are discussed later in this Section. As discussed in Section {\uppercase\expandafter{\romannumeral2 }}, we want to choose the Hamiltonian $H^{\Lambda_0}_{\Lambda_0}$ so that $H^{\Lambda_0}_{\Lambda_{\cal N}}$ gives reasonable results when it is diagonalized. Since there is no inverse transformation, this process is typically trial and error in a nonperturbative analysis. One first tries a particular $H^{\Lambda_0}_{\Lambda_0}$, and constructs $H^{\Lambda_0}_{\Lambda_{\cal N}}$. If $H^{\Lambda_0}_{\Lambda_{\cal N}}$ does not produce reasonable observables, $H^{\Lambda_0}_{\Lambda_0}$ must be altered. 
While there are an infinite number of operators that can be adjusted in $H^{\Lambda_0}_{\Lambda_0}$, only the relevant and marginal operators are expected to produce effects that survive after many applications of $T$; therefore one hopes that it is necessary to control only a finite number of readily identified terms in $H^{\Lambda_0}_{\Lambda_0}$ to produce desired results in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. One also hopes that the irrelevant operators in $H^{\Lambda_0}_{\Lambda_{\cal N}}$ are less important than the relevant and marginal operators, when $H^{\Lambda_0}_{\Lambda_{\cal N}}$ is diagonalized, but there is no guarantee of this and it is not essential to the renormalization group analysis. If this happens, one is led to consider the relationship between the coefficients of the relevant and marginal operators in $H^{\Lambda_0}_{\Lambda_0}$ to the corresponding coefficients in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. There is no guarantee that this relationship is simple. It could even be chaotic, in which case one may want to find a new problem. However, when one evolves Hamiltonians near a Gaussian fixed point, the relationship between coefficients at the beginning and end of a long trajectory should not be highly nonlinear, as each transformation introduces only small nonlinearities. Wilson has given a general discussion of this issue \APrefmark{\rWILTEN}, and the main new ingredient in the light-front renormalization group is the appearance of arbitrary functions of longitudinal momenta. The mass terms in $u_2$ provide a simple example. I refer to any terms in $u_2$ that do not depend on transverse momenta, regardless of their dependence on longitudinal momenta, as {\eighteenit mass} terms. For the purposes of this discussion I simply regard the mass term that appears with no dependence on longitudinal momentum in $u_2$ in $H^{\Lambda_0}_{\Lambda_{\cal N}}$ as the {\eighteenit physical mass}. 
In reality one must still solve the Schr\"odinger equation with $H^{\Lambda_0}_{\Lambda_{\cal N}}$ to relate the mass in $H^{\Lambda_0}_{\Lambda_{\cal N}}$ to an experimentally observed mass, and it is even possible that there is no experimentally observed mass that directly corresponds to the mass in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. While the mass terms in $H^{\Lambda_0}_{\Lambda_{\cal N}}$ are affected by all of the terms in $H^{\Lambda_0}_{\Lambda_0}$, if one uses a second-order approximation of $T$ to construct a Hamiltonian trajectory, any mass term in $H_{\Lambda_0}^{\Lambda_0}$ appears directly in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. Thus, in a second-order analysis, it is trivial to control the mass term in $H^{\Lambda_0}_{\Lambda_{\cal N}}$. If we want to produce a Hamiltonian $H^{\Lambda_0}_{\Lambda_{\cal N}}$ in which the physical mass is zero for example, we add a mass `counterterm' to $H^{\Lambda_0}_{\Lambda_0}$ and adjust it to cancel the entire trajectory of mass shifts that begins with the shift in Eq. (6.19). This simplicity is lost in third- and higher-order analyses, as we will see in Section {\uppercase\expandafter{\romannumeral7 }}. On the other hand, we will find that the coupling constant coherence conditions fix the mass counterterm to each order in perturbation theory to a value that yields a massless physical particle in that order of perturbation theory \APrefmark{\rPERWIL}. Let me return to the issue of spectator-dependence. Figure 4 is identical to figure 3a, but I have added a double line to represent all spectators. The calculation proceeds almost exactly as before, except we must include the effects of the spectator momentum. I use the same variables given in Eq. (6.18), and I assume that the incoming particle momentum and the spectator momentum are respectively, $$(w {\mit P}^+,w {\mit P}^\perp + {\bf t^\perp}) \;,\;\;\;\;\; ((1-w) {\mit P}^+, (1-w) {\mit P}^\perp - {\bf t^\perp}) \;. 
\eqno(6.20)$$ \noindent We also need to know the invariant mass-squared of the spectator state, which I assume to be $M^2$. Even if the particles are massless, we cannot assume that the invariant mass of the spectators is zero, of course. The spectator energy cancels in all energy denominators when no interactions occur between spectators. Such interactions produce disconnected marginal and irrelevant operators that I do not discuss, because they are not important in low orders of a perturbative analysis. The second-order correction to $u_2$ from the Hamiltonian diagrams in figure 4 is $$\eqalign{ \delta v_2 = {g^2 \over 3!} \; \int & {d^2q^\perp dx \over 16 \pi^3 x}\; {d^2r^\perp dy \over 16 \pi^3 y}\; {d^2s^\perp dz \over 16 \pi^3 z}\; \Bigl[{{\bf t^\perp}^2 \over w}- {{\bf q^\perp}^2 \over x} - {{\bf r^\perp}^2 \over y} - {{\bf s^\perp}^2 \over z} \Bigr]^{-1} \cr &(16 \pi^3) \delta(w-x-y-z) \delta^2({\bf t^\perp}-{\bf q^\perp}-{\bf r^\perp}-{\bf s^\perp}) \cr &\theta\Bigl(\Lambda_0^2- {{\bf t^\perp}^2+M^2 \over 1-w} - {{\bf q^\perp}^2 \over x} - {{\bf r^\perp}^2 \over y} - {{\bf s^\perp}^2 \over z} \Bigr) \cr &\theta\Bigl({{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} + {{\bf t^\perp}^2+M^2 \over 1-w} - \Lambda_1^2\Bigr) \;.}\eqno(6.21)$$ One can compare this result with Eq. (6.19) to see the effect spectators have on $\delta v_2$. In Eq. (6.19) $\delta v_2$ does not depend on the longitudinal momentum of the incoming boson, or the transverse momentum of the incoming boson. In Eq. (6.21) $\delta v_2$ depends on the longitudinal momentum fraction of the incoming boson, $w$; and it depends on the relative transverse momentum of this boson with respect to the rest of the system. When faced with any correction to the Hamiltonian, we are supposed to expand the correction in terms of relevant, marginal and irrelevant variables. 
In this case, this means we are supposed to expand $\delta v_2$ in powers of the transverse momentum ${\bf t^\perp}$. In addition, we are supposed to expand in powers of $M^2$. In the fixed point Hamiltonian the particles are massless, and $M^2$ is a function of the transverse momenta of the spectator particles that goes to zero when these momenta go to zero. Remember that masses are treated as perturbations. The mass terms in $\delta v_2$ are found by setting all transverse momenta to zero, leading to the relevant part of $\delta v_2$, $$\eqalign{ \delta v_{2R} = - {g^2 \over 3!}\; \int & {d^2q^\perp dx \over 16 \pi^3 x}\; {d^2r^\perp dy \over 16 \pi^3 y}\; {d^2s^\perp dz \over 16 \pi^3 z}\; \Bigl[{{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} \Bigr]^{-1} \cr &(16 \pi^3) \delta(w-x-y-z) \delta^2({\bf q^\perp}+{\bf r^\perp}+{\bf s^\perp}) \cr &\theta\Bigl(\Lambda_0^2- {{\bf q^\perp}^2 \over x} - {{\bf r^\perp}^2 \over y} - {{\bf s^\perp}^2 \over z} \Bigr) \theta\Bigl({{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} - \Lambda_1^2\Bigr) \;.}\eqno(6.22)$$ \noindent This expression is identical to the similar limit for Eq. (6.19), except for one very important difference. The longitudinal momentum fractions in the integrand add to $w$ instead of 1. We need to understand how the mass terms that arise in $u_2$ after repeated application of $T$ depend on longitudinal momenta, and here this problem is equivalent to understanding how the mass depends on $w$. This dependence is easily worked out by changing variables, $$x'={x \over w} \;,\;y'={y \over w} \;,\;z'={z \over w} \;,\; {\bf q^\perp}'={{\bf q^\perp} \over \sqrt{w} \Lambda_0}\;,\; {\bf r^\perp}'={{\bf r^\perp} \over \sqrt{w} \Lambda_0}\;,\;{\bf s^\perp}'={{\bf s^\perp} \over \sqrt{w} \Lambda_0} \;. 
\eqno(6.23)$$ \noindent Using these variables, and using the fact that $\Lambda_1=\Lambda_0/2$, we obtain $$\eqalign{ \delta v_{2R} = - {g^2 \over 3!}\;w \Lambda_0^2 \; \int & {d^2{q^\perp}' dx' \over 16 \pi^3 x'}\; {d^2{r^\perp}' dy' \over 16 \pi^3 y'}\; {d^2{s^\perp}' dz' \over 16 \pi^3 z'}\; \Bigl[{{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x'-y'-z') \delta^2({\bf q^\perp}'+{\bf r^\perp}'+{\bf s^\perp}') \cr &\theta\Bigl(1- {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr) \theta\Bigl({{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} - {1 \over 4} \Bigr) \;.}\eqno(6.24)$$ \noindent The remaining integral is a finite number that has no dependence on any momenta, so we have exactly determined the dependence of the new mass term on the longitudinal momentum fraction. The new mass term is proportional to the longitudinal momentum fraction, unlike the physical mass which is independent of this fraction! The next step in the analysis is to complete the calculation of marginal and irrelevant operators that arise in Eq. (6.21). The mass term is the only relevant operator occurring in Eq. (6.21), and the only marginal operator is the piece of $\delta v_2$ that is quadratic in the external transverse momenta, including $M^2$. Using the new variables in Eq. 
(6.23), we find $$\eqalign{ \delta v_2 = {g^2 \over 3!}\;w \Lambda_0^2 \; \int & {d^2{q^\perp}' dx' \over 16 \pi^3 x'}\; {d^2{r^\perp}' dy' \over 16 \pi^3 y'}\; {d^2{s^\perp}' dz' \over 16 \pi^3 z'}\; \Bigl[{{\bf t^\perp}^2 \over w \Lambda_0^2} - {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x'-y'-z') \delta^2({{\bf t^\perp} \over \sqrt{w} \Lambda_0} - {\bf q^\perp}'-{\bf r^\perp}'-{\bf s^\perp}') \cr &\theta\Bigl(1- {{\bf t^\perp}^2+M^2 \over (1-w)\Lambda_0^2}- {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr) \cr &\theta\Bigl({{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} + {{\bf t^\perp}^2+M^2 \over (1-w)\Lambda_0^2} - {1 \over 4} \Bigr) \;.}\eqno(6.25)$$ \noindent The factor of ${\bf t^\perp}^2/(w\Lambda_0^2)$ in the energy denominator and the factor of ${\bf t^\perp}/(\sqrt{w}\Lambda_0)$ in the momentum conserving delta function each produce a quadratic term in $\delta v_2$ proportional to ${\bf t^\perp}^2$, with no dependence on longitudinal momentum. This produces a term in the energy proportional to ${\bf t^\perp}^2/(w {\mit P}^+)$. The terms proportional to ${\bf t^\perp}^2+M^2$ in the cutoffs produce a term in $\delta v_2$ that is proportional to $w({\bf t^\perp}^2+M^2)/(1-w)$, which then produces a term in the energy proportional to $({\bf t^\perp}^2+M^2)/[(1-w){\mit P}^+]$. The factor $M^2$ is itself a sum of terms that are each of the form ${\bf q^\perp}^2/x$, with $x$ being a longitudinal momentum fraction and ${\bf q^\perp}$ being a relative transverse momentum, if there are no physical masses in the theory. These corrections are similar to the fixed point $u_2^*$, but with an important difference; they do not depend on the total transverse momentum. Assuming $u_2(q)={\bf q^\perp}^2$ in the Hamiltonian in Eq. 
(3.7) and using Jacobi variables $({\bf q^\perp}_i,q^+_i)=(x_i {\mit P}^+,x_i {\mit P}^\perp+{\bf r^\perp}_i)$, we can write the terms involving $u_2$ using projection operators and obtain $$\eqalign{ \int {d^2{\mit P}^\perp d{\mit P}^+ \over 16\pi^3 {\mit P}^+}\;\Biggl\{ &\; \int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1) \delta(1-x_1) \;\Biggl[ {{\mit P}^{\perp 2} \over {\mit P}^+}+ {{\bf r^\perp}^2_1 \over x_1 {\mit P}^+} \Biggr]\; |q_1 \rangle \langle q_1 | \cr + &\int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; \int {d^2r^\perp_2 dx_2 \over 16 \pi^3 x_2 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1+{\bf r^\perp}_2) \delta(1-x_1-x_2) \cr & \qquad \qquad \qquad \Biggl[{{\mit P}^{\perp 2} \over {\mit P}^+}+ {{\bf r^\perp}^2_1 \over x_1 {\mit P}^+} +{{\bf r^\perp}^2_2 \over x_2 {\mit P}^+}\Biggr] \;|q_1,q_2 \rangle \langle q_1,q_2 | \;\;+\;\;\cdot\cdot\cdot \;\Biggr\} \;.}\eqno(6.26) $$ \noindent The corrections to $u_2$ that are of ${\cal O}({\bf t^\perp}^2,M^2)$ from Eq. (6.25) alter the coefficient of each factor ${\bf r^\perp}^2_i/(x_i {\mit P}^+)$ in Eq. (6.26). These corrections differ from standard wave function renormalization, because the coefficient of each factor ${\mit P}^{\perp 2}/{\mit P}^+$ in Eq. (6.26) remains $1$, and wave function renormalization would alter this coefficient also. These corrections are marginal operators, and one can include them in a fixed point Hamiltonian if $u_2$ is allowed to be spectator-dependent, so that it can depend not only on the momentum of a single particle, but also on the total momentum of the state. Eq. (6.24) proves that $u_2$ must be spectator-dependent if an invariant-mass transformation is used. 
In the original discussion of Hamiltonians in Section {\uppercase\expandafter{\romannumeral3 }}, it was simply assumed that $u_2$ depends only on one momentum, and now we are finding an example in which the transformation forces us to expand the original definition of the space of Hamiltonians. This point is clarified further below. In order to control these spectator-dependent corrections to $u_2$ in the final Hamiltonian, $H^{\Lambda_0}_{\Lambda_{\cal N}}$, we must allow such terms to appear in $H^{\Lambda_0}_{\Lambda_0}$. Since these counterterms are part of $\delta H_l$, they do not modify the second-order behavior of the transformation, but they do enter at third order. If the scalar particles appear as asymptotic particles ({\eighteenit i.e.}, are not confined), we need to precisely cancel these corrections to $u_2$ to obtain the appropriate dispersion relation for the physical scalar particles, as we are forced to introduce mass counterterms to cancel the corrections found in Eq. (6.24). Just as it is trivial to cancel any mass that arises in a second-order analysis, it is trivial to cancel these marginal corrections to $u_2$. Again, these corrections may have nontrivial effects in a third-order analysis. The analysis of $u_4$ has not been completed, and we must evaluate the remaining Hamiltonian diagrams in figure 3b. It is convenient to use the variables $$\eqalign{ &(p_i^+,{\bf p^\perp}_i) = (x_i {\mit P}^+, x_i {\mit P}^\perp+{\bf r^\perp}_i) \;,\;\;\; (k_i^+,{\bf k^\perp}_i) = (y_i {\mit P}^+, y_i {\mit P}^\perp+{\bf s^\perp}_i)\;.} \eqno(6.27)$$ \noindent I am only interested in the correction to the marginal part of $u_4$, so these diagrams are evaluated with the external transverse momenta and the total transverse momentum set to zero. It is only necessary to explicitly evaluate the second and sixth Hamiltonian diagrams in figure 3b, as others are simply related to these two. 
The second Hamiltonian diagram in figure 3b, combining both second-order terms in Eq. (4.14) involving incoming and outgoing energy, leads to a correction of the marginal part of $\delta v_4$, $$\eqalign{ \delta v_{4M} = - {g^2 \over 2}\; \theta(x_2-x_4) \; \int & {d^2s^\perp_1 dy_1 \over 16 \pi^3 y_1}\; \int {d^2s^\perp_2 dy_2 \over 16 \pi^3 y_2}\; (16\pi^3)\delta(x_2-x_4-y_1-y_2)\delta^2({\bf s^\perp}_1+{\bf s^\perp}_2) \cr & \theta\bigl(\Lambda_0^2-{{\bf s^\perp}^2_1 \over y_1}-{{\bf s^\perp}^2_2 \over y_2}\bigr) \theta\bigl({{\bf s^\perp}^2_1 \over y_1}+{{\bf s^\perp}^2_2 \over y_2}- \Lambda_1^2\bigr) \Bigl[{{\bf s^\perp}^2_1 \over y_1} + {{\bf s^\perp}^2_2 \over y_2} \Bigr]^{-1} \;.}\eqno(6.28)$$ \noindent To simplify this expression and remove dependence on external momenta from the integrand, we can change variables, $$y_i=(x_2-x_4)z_i \;\;,\;\;{\bf s^\perp}_i=\sqrt{x_2-x_4}\Lambda_0{\bf q^\perp}_i \;.\eqno(6.29)$$ \noindent This leads to $$\eqalign{ \delta v_{4M} = &- {g^2 \over 2}\; \theta(x_2-x_4) \; \int {d^2q^\perp_1 dz_1 \over 16 \pi^3 z_1}\; \int {d^2q^\perp_2 dz_2 \over 16 \pi^3 z_2}\; (16\pi^3)\delta(1-z_1-z_2)\delta^2({\bf q^\perp}_1+{\bf q^\perp}_2) \cr & \qquad \qquad \qquad \theta\bigl(1-{{\bf q^\perp}^2_1 \over z_1}-{{\bf q^\perp}^2_2 \over z_2}\bigr) \theta\bigl({{\bf q^\perp}^2_1 \over z_1}+{{\bf q^\perp}^2_2 \over z_2}- {1 \over 4}\bigr) \Bigl[{{\bf q^\perp}^2_1 \over z_1} + {{\bf q^\perp}^2_2 \over z_2} \Bigr]^{-1} \cr = &-g^2\;{ln(2) \over 16 \pi^2}\;\theta(x_2-x_4) \;.}\eqno(6.30)$$ This correction is of exactly the same form as the correction to the marginal part of $u_4$ from the first Hamiltonian diagram in figure 3b. The correction from the third diagram in figure 3b is identical to the correction in Eq. (6.30), the only difference being that the third diagram survives when $x_4>x_2$. The fourth and fifth diagrams contribute the same amount to the marginal part of $\delta v_4$ as the second and third diagrams. 
The total correction to the 2-particle $\rightarrow$ 2-particle part of $u_4$ from the first five diagrams in figure 3b leads to the second-order transformation, $$g \rightarrow g - {3 \;ln(2) \over 16 \pi^2} \;g^2 \;.\eqno(6.31)$$ It is a straightforward exercise to compute changes in the irrelevant parts of $u_4$ by allowing the external transverse momenta to be nonzero and expanding in powers of these momenta. It is also straightforward to determine the effects of spectators on the change of $u_4$. I return to the discussion of irrelevant parts of $u_4$ below. Note that spectators apparently have no effect on the marginal part of $u_4$ for the simple vertex in Eq. (6.3). The transverse momenta of the spectators are set to zero when computing the marginal part of $u_4$, so the cutoffs in Eq. (6.30) are not affected. The only effect is in the momentum conserving delta functions, and using variables similar to those in Eq. (6.23) one explicitly recovers Eq. (6.30) unchanged. We will see later that the marginal part of $u_4$ is inevitably spectator-dependent. Higher-order corrections produce spectator-dependence, and Lorentz covariance and cluster decomposition require spectator-dependence. We must adjust the Hamiltonian so that all observers in frames related to one another by rotations obtain covariant results, and so that systems of particles that are not causally connected do not affect one another. The last six Hamiltonian diagrams in figure 3b lead to a correction of the marginal 1-particle $\rightarrow$ 3-particle and 3-particle $\rightarrow$ 1-particle parts of $u_4$ identical to Eq. (6.31). The first diagram in figure 3c does not occur, because the intermediate state always has less energy than the external states, which is not allowed in a second-order correction. The second and third Hamiltonian diagrams are allowed and lead to the irrelevant operator $u_6$. There are additional contributions to $u_6$ that I do not show. 
The cutoffs require that the external states have an energy below the cutoff while the intermediate state has an energy above the cutoff. Moreover, momentum conservation completely determines the momentum of the single internal particle line. Using the coordinates $(p_i^+,{\bf p^\perp}_i)=(x_i {\mit P}^+,x_i {\mit P}^\perp+{\bf r^\perp}_i)$ and ignoring spectator effects, one part of the contribution to $\delta u_6$ from the second diagram in figure 3c is $$\eqalign{ \delta v_6 = {g^2 \over x_2^+ + x_3^+ - x_6^+} \;&\Bigl[{{\bf r^\perp}^2_2 \over x_2^+}+{{\bf r^\perp}^2_3 \over x_3^+}- {{\bf r^\perp}_6^2 \over x_6^+} - {({\bf r^\perp}_2+{\bf r^\perp}_3-{\bf r^\perp}_6)^2 \over x_2^+ + x_3^+ - x_6^+} \Bigr]^{-1} \cr &\theta\bigl(\Lambda_0^2-{{\bf r^\perp}^2_1 \over x_1^+}-{{\bf r^\perp}_6^2 \over x_6^+}- {({\bf r^\perp}_2+{\bf r^\perp}_3-{\bf r^\perp}_6)^2 \over x_2^+ + x_3^+ - x_6^+}\bigr) \cr &\theta\bigl({{\bf r^\perp}^2_1 \over x_1^+}+{{\bf r^\perp}_6^2 \over x_6^+}+ {({\bf r^\perp}_2+{\bf r^\perp}_3-{\bf r^\perp}_6)^2 \over x_2^+ + x_3^+ - x_6^+}- \Lambda_1^2\bigr) \cr &\theta\bigl(\Lambda_1^2-{{\bf r^\perp}^2_1 \over x_1^+}-{{\bf r^\perp}_2^2 \over x_2^+}- {{\bf r^\perp}_3^2 \over x_3^+}\bigr) \theta\bigl(\Lambda_1^2-{{\bf r^\perp}^2_4 \over x_4^+}-{{\bf r^\perp}_5^2 \over x_5^+}- {{\bf r^\perp}_6^2 \over x_6^+}\bigr) \;.} \eqno(6.32)$$ \noindent Here I have included the cutoffs associated with the external lines, because they are important for further analysis of this term. At this point we should try to proceed by expanding this operator in terms of increasingly irrelevant operators. All terms in $u_6$ are irrelevant if inverse powers of transverse momenta do not arise, but this correction seems problematic. We should obtain the leading correction by letting the external transverse momenta go to zero. However in this limit the energy denominator vanishes and the cutoffs go to zero, forcing us to analyze $\infty \times 0$. 
I assume that all external longitudinal momenta remain finite as the external transverse momenta approach zero. The leading correction to $u_6$ in this limit becomes $$\delta v_6 \rightarrow {g^2 \over {\bf k^\perp}^2} \theta\bigl(\Lambda_0^2-{{\bf k^\perp}^2 \over y}\bigr) \theta\bigl({{\bf k^\perp}^2 \over y}- \Lambda_1^2\bigr) \;, \eqno(6.33)$$ \noindent where ${\bf k^\perp} \rightarrow 0$, and $y=k^+/{\mit P}^+$. Here ${\bf k^\perp}$ is the transverse momentum carried by the internal boson line, and $y$ is its longitudinal momentum fraction. To analyze this distribution, let us integrate it with a smooth function of $y$, $f(y)$. This leads to the integral $$\eqalign{ &{g^2 \over {\bf k^\perp}^2} \int_0^1 dy \;f(y) \;\theta\bigl(\Lambda_0^2-{{\bf k^\perp}^2 \over y}\bigr)\;\theta\bigl({{\bf k^\perp}^2 \over y}- \Lambda_1^2\bigr) \cr = &{g^2 \over \Lambda_0^2}\;\int_0^{\Lambda_0^2/{\bf k^\perp}^2} dz\; f\Bigl( {{\bf k^\perp}^2 \over \Lambda_0^2}z\Bigr)\;\theta(z-1)\;\theta(4-z) \cr \rightarrow & \qquad\qquad {3 g^2 \over \Lambda_0^2}\; f(0) \;.}\eqno(6.34)$$ \noindent Of course I have used $\Lambda_1=\Lambda_0/2$ again, and in the last line I have finally completed the limit in which all external transverse momenta are taken to zero. Thus we see that the distribution that corresponds to the leading correction to $u_6$ is a delta function in the longitudinal momentum transfer, and as expected the coefficient of the leading correction to $u_6$ is inversely proportional to $\Lambda_0^2$, as an irrelevant operator should be. No long range transverse interactions are produced by the elimination of high energy states, and inverse powers of transverse momenta do not appear when one expands the second-order transformation in terms of relevant, marginal and irrelevant operators. The appearance of delta functions of longitudinal momentum transfer may have interesting consequences, but they are not important in low orders of a perturbative analysis. 
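The distributional limit in Eq. (6.34) can also be verified directly: the two step functions confine $y$ to a window of width $3{\bf k^\perp}^2/\Lambda_0^2$, so smearing with any smooth $f(y)$ and dividing by ${\bf k^\perp}^2$ approaches $3f(0)/\Lambda_0^2$. A minimal sketch in units $\Lambda_0=1$, $g=1$ (the test function and grid are arbitrary choices):

```python
import math

def smeared(f, k2, lam0=1.0, lam1=0.5, n=2000):
    """Integrate f(y) against the cutoff thetas of Eq. (6.33); the
    support is the shrinking window y in [k2/lam0^2, k2/lam1^2]."""
    y_lo, y_hi = k2 / lam0**2, k2 / lam1**2
    dy = (y_hi - y_lo) / n
    return sum(f(y_lo + (i + 0.5) * dy) for i in range(n)) * dy

f = lambda y: 1.0 / (1.0 + y)     # any smooth test function, f(0) = 1
k2 = 1e-4                         # the k_perp -> 0 limit
value = smeared(f, k2) / k2       # includes the g^2/k_perp^2 prefactor
print(value)                      # approaches 3 f(0)/Lambda_0^2 = 3 as k2 -> 0
```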
In figure 3d two of the disconnected Hamiltonian diagrams that affect $u_8$ are displayed. While such disconnected diagrams cancel when there are no cutoffs on the intermediate energies, this cancellation does not occur when the cutoffs are in place. It is possible for the incoming and outgoing states to have energies just below the cutoff, while the intermediate states have energies just above the cutoff. I do not go through a detailed analysis, because the correction is again irrelevant and local in the transverse direction, in this case being proportional to $1/\Lambda_0^4$. This completes the initial analysis of the second-order transformation for a massless Hamiltonian with the simple interaction given in Eq. (6.3). There are two remaining topics in this Section. First, I want to study repeated second-order transformations. Second, I want to discuss Lorentz covariance and cluster decomposition in second-order perturbation theory, and introduce coupling coherence \APrefmark{\rPERWIL,\rOEHONE-\rKRAUS}. Let us assume that the second-order behavior of the invariant-mass transformation is a reasonable approximation of the complete transformation. 
To compute an example trajectory of Hamiltonians, $H^{\Lambda_0}_{\Lambda_n}$, we can choose $H^{\Lambda_0}_{\Lambda_0}$ to be, $$\eqalign{ H^{\Lambda_0}_{\Lambda_0} =& \int {d^2{\mit P}^\perp d{\mit P}^+ \over 16\pi^3 {\mit P}^+}\;\Biggl\{ \cr &\qquad \; \int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1) \delta(1-x_1) \;\Biggl[ {{\mit P}^{\perp 2} \over {\mit P}^+}+ (1+\xi_0) {{\bf r^\perp}^2_1 \over x_1 {\mit P}^+}+\mu_0^2 \Biggr]\; |q_1 \rangle \langle q_1 | \cr &\qquad + \;\int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; \int {d^2r^\perp_2 dx_2 \over 16 \pi^3 x_2 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1+{\bf r^\perp}_2) \delta(1-x_1-x_2) \cr & \qquad \qquad \Biggl[{{\mit P}^{\perp 2} \over {\mit P}^+}+ (1+\xi_0) {{\bf r^\perp}^2_1 \over x_1 {\mit P}^+} + (1+\xi_0) {{\bf r^\perp}^2_2 \over x_2 {\mit P}^+}+2\mu_0^2\Biggr] \;|q_1,q_2 \rangle \langle q_1,q_2 | \;\;+\;\;\cdot\cdot\cdot \;\Biggr\} \cr +&{g_0 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3-q_4) \; a^\dagger(q_1) a^\dagger(q_2) a^\dagger(q_3) a(q_4) \cr +&{g_0 \over 4} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2-q_3-q_4) \; a^\dagger(q_1) a^\dagger(q_2) a(q_3) a(q_4) \cr +&{g_0 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1-q_2-q_3-q_4) \; a^\dagger(q_1) a(q_2) a(q_3) a(q_4) \;.}\eqno(6.35) $$ \noindent Note that in the critical Gaussian fixed point Hamiltonian, $\xi$, $\mu$, and $g$ are all zero. This Hamiltonian contains a complete set of relevant and marginal operators required for an approximate second-order analysis. In this example there is no sector-dependence in the marginal part of $u_4$, so we are able to write the marginal interaction without using the awkward projection operators that are required to display $u_2$. 
When the second-order invariant-mass transformation is applied to this Hamiltonian, and irrelevant operators are dropped at every stage, the resultant Hamiltonian contains no new relevant or marginal operators. In this case the only effect on the relevant and marginal operators is to change the constants $\xi_0$, $\mu_0$, and $g_0$. A complete second-order analysis would require us to consider more general interactions than shown in Eq. (6.3), and to retain irrelevant operators. Irrelevant operators typically produce new relevant and marginal operators. Dropping irrelevant operators, we can write the relevant and marginal operators in $H^{\Lambda_0}_{\Lambda_n}$ as in Eq. (6.35), using the constants $\xi_n$, $\mu_n$, and $g_n$. The approximate second-order transformation can be summarized by the equations, $$g_{n+1} = g_n - c_g\;g_n^2 \;, \eqno(6.36)$$ $$\xi_{n+1} = \xi_n + c_\xi \;g_n^2 \;, \eqno(6.37)$$ $$\mu_{n+1}^2 = 4 \mu_n^2 - c_\mu \;g_n^2\;\Lambda_0^2 \;. \eqno(6.38)$$ \noindent Note that if $g_n$ is small, the initial assumption that the second-order corrections are small in comparison to the linear corrections is consistent. I have shown that the constants $c_g$ and $c_\mu$ are positive, but have not analyzed $c_\xi$. It is fairly easy to see from Eq. (6.25) that $c_\xi$ is also positive. If $g_n$ is sufficiently small, these equations should be a crude but reasonable approximation to the entire transformation. Wilson has discussed how one must solve such equations so that errors are controlled \APrefmark{\rWILTEN}, and I have already repeated some of his discussion in Section {\uppercase\expandafter{\romannumeral2 }}. Eqs. (6.36)-(6.38) are much simpler than the general case discussed by Wilson, and they are readily solved for large ${\cal N}$. For a massless scalar theory, $\mu_0^2$ and $\xi_0$ can be adjusted so that the physical particle has the dispersion relation of a massless particle.
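A toy iteration of Eqs. (6.36) and (6.38) makes the tuning problem concrete. Only $c_g$ below is taken from Eq. (6.31); $c_\mu$ is an illustrative stand-in. Solving the mass equation backward from $\mu_{\cal N}^2$ is stable, while any forward error in $\mu_0^2$ is amplified by $4^{\cal N}$:

```python
import math

c_g = 3.0 * math.log(2.0) / (16.0 * math.pi**2)   # from Eq. (6.31)
c_mu, lam0 = 0.01, 1.0                            # c_mu illustrative only
N = 40

# Marginal coupling, Eq. (6.36): g decreases slowly toward zero.
g = [0.5]
for n in range(N):
    g.append(g[-1] - c_g * g[-1]**2)

# Relevant mass, Eq. (6.38), solved backward from mu_N^2 = 0
# (a massless physical particle); this direction is stable.
mu2 = [0.0] * (N + 1)
for n in range(N - 1, -1, -1):
    mu2[n] = (mu2[n + 1] + c_mu * g[n]**2 * lam0**2) / 4.0

# Forward iteration amplifies any error in mu_0^2 by 4^N.
bad = mu2[0] + 1e-6
for n in range(N):
    bad = 4.0 * bad - c_mu * g[n]**2 * lam0**2
print(g[N], mu2[0], bad)    # bad ~ 1e-6 * 4^40: hopelessly far from zero
```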
If one insists on actually computing the value of $\mu_0^2$ required to obtain a specific value of $\mu_{\cal N}^2$, one finds that $\mu_0^2$ must be controlled to an accuracy of about 1 part in $4^{{\cal N}}$. In practice, one is never interested in $\mu_0^2$ directly, and the rest of the calculation should be adjusted so that no equations depend on the precise value of $\mu_0^2$. It is easiest to fix $\mu_{\cal N}^2$ and solve Eq. (6.38) towards decreasing $n$. $\xi_{\cal N}$ should be close to $0$, and $\xi_0$ is adjusted to achieve this result. Through second order in $\delta H$, the equation for $g_n$ decouples from the equations for $\xi_n$ and $\mu_n$; and the first step in solving the complete set is to solve Eq. (6.36). As successive transformations are applied, Eq. (6.36) shows that $g$ decreases. We must start with a sufficiently small value of $g$ for perturbation theory to be reasonable. An accurate solution of Eq. (6.36) can easily be constructed by iteration. A reasonable approximation over any finite segment of the trajectory is $$g_n={g_m \over 1+c_g (n-m) g_m }={g_m \over 1+{c_g \over ln(2)} ln\Bigl({\Lambda_m \over \Lambda_n}\Bigr) g_m }={g_m \over 1+{3 \over 16\pi^2} ln\Bigl({\Lambda_m \over \Lambda_n}\Bigr) g_m } \;, \eqno(6.39)$$ \noindent where in the last step I have used the result for $c_g$ in Eq. (6.31). The error in this approximation grows as $|n-m|$ becomes large. It is interesting to note that the factor of $ln(2)$ in $c_g$, which came from the choice $\Lambda_1=\Lambda_0/2$, drops out of this final result. The result is well-known. In the renormalized Hamiltonian, which as discussed above is obtained by allowing ${\cal N} \rightarrow \infty$, $g=0$. This analysis is not complete, of course, because I have only considered the case where $g_0$ is small; however, I am only interested in showing that a perturbative analysis may be possible. This brings me to the final subject for this Section.
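How good is Eq. (6.39)? Comparing it with a direct iteration of Eq. (6.36) over fifty steps (the starting coupling is an arbitrary choice) shows sub-percent agreement:

```python
import math

c_g = 3.0 * math.log(2.0) / (16.0 * math.pi**2)   # c_g = 3 ln(2)/(16 pi^2)

g0, steps = 0.6, 50
g = g0
for n in range(steps):
    g -= c_g * g**2                           # exact iteration of Eq. (6.36)

approx = g0 / (1.0 + c_g * steps * g0)        # Eq. (6.39) with m = 0
print(g, approx)
```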
Up to this point, there has been no discussion of how Lorentz covariance and cluster decomposition, both of which are violated by an invariant-mass cutoff, are restored in physical predictions. Perhaps the most important observation is that if one succeeds in restoring these properties in predictions with one cutoff, they are restored for all cutoffs, because the renormalization group is designed to preserve the matrix elements of observables as cutoffs are changed. I give three simple examples in second-order perturbation theory. The first example is the dispersion relation for a single boson in the presence of spectators. I compute the second-order contribution to the Green's function with the external propagators removed; which is also the second-order shift in the invariant-mass-squared of the state when $\epsilon$ (see Appendix A) is chosen to be the state's on-shell free energy. If this invariant-mass-squared shift does not transform as a scalar under Lorentz transformations, and/or it depends on the spectators, we must add counterterms to restore Lorentz covariance and cluster decomposition. The second example is the boson-boson scattering amplitude; for which the second-order correction comes from diagrams identical to the first five in figure 3b, with spectators added. This amplitude should be manifestly covariant, depending only on the invariant-mass of the two bosons that scatter from one another, with no dependence on spectators. If these conditions are not satisfied, we must again add counterterms. Finally, I list the correction to the one-boson to three-boson Green's function, given by the fifth, sixth, and seventh diagrams in figure 3b. The main differences between second-order perturbation theory diagrams and second-order Hamiltonian diagrams are the ranges of integration and the energy denominators. 
In Hamiltonian diagrams the range of integration is bounded by an upper and lower cutoff, and the energy denominator always involves the on-mass-shell energies of incoming, outgoing, or intermediate states. In time-ordered perturbation theory only the upper cutoff appears, and the energy in the denominator is arbitrary. We have already seen that a `mass' term appears in $u_2$ with the wrong dispersion relation when there are spectators, Eq. (6.24); and we will find a mass with the wrong dispersion relation when we study the invariant-mass of a boson in time-ordered perturbation theory. From the point of view of perturbation theory, we need to add a counterterm that exactly cancels this mass term or we do not obtain an invariant-mass shift that transforms like a scalar. The counterterm must completely cancel the mass shift, and I show that one obtains the same counterterm from the renormalization group Eqs. (6.36) and (6.38) using the condition that $\mu_n$ is a function of $g_n$ with no further dependence on $n$. This is a coupling constant coherence condition. The second-order contribution to the boson invariant-mass corresponding to figure 4 is readily determined using the diagrammatic rules in Appendix A. I assume that the total longitudinal momentum is ${\mit P}^+$ and that the total transverse momentum is ${\mit P}^\perp$, with $\epsilon={\mit P}^-={{\mit P}^{\perp 2} \over {\mit P}^+}$ for massless bosons. I assume that there are spectators, with all relevant momenta given in Eq. (6.20). The invariant-mass of a state with a boson in the presence of a massless spectator consists of a kinetic energy term and a mass term for the boson itself. At this point I am only interested in the violation of Lorentz covariance coming from the mass term that appears in the shift, so I set the relative transverse momentum ${\bf t^\perp}=0$ and assume that the invariant-mass of the spectators is zero. In other words, I set all relative transverse momenta equal to zero. 
In this case, using the Jacobi variables defined in Eq. (6.18), the boson mass shift is $$\eqalign{- {g^2 \over 3!} \; \int & {d^2q^\perp dx \over 16 \pi^3 x}\; {d^2r^\perp dy \over 16 \pi^3 y}\; {d^2s^\perp dz \over 16 \pi^3 z}\; \Bigl[ {{\bf q^\perp}^2 \over x} + {{\bf r^\perp}^2 \over y} + {{\bf s^\perp}^2 \over z} \Bigr]^{-1} \cr &(16 \pi^3) \delta(w-x-y-z) \delta^2({\bf q^\perp}+{\bf r^\perp}+{\bf s^\perp}) \; \theta\Bigl(\Lambda_0^2- {{\bf q^\perp}^2 \over x} - {{\bf r^\perp}^2 \over y} - {{\bf s^\perp}^2 \over z} \Bigr) \;.}\eqno(6.40)$$ \noindent Lorentz covariance and cluster decomposition require that this shift be a constant, independent of all longitudinal momenta. Following the same steps leading to Eq. (6.24), we find that this mass shift can be written as $$\eqalign{ - {g^2 \over 3!}\;w \Lambda_0^2 \; \int & {d^2{q^\perp}' dx' \over 16 \pi^3 x'}\; {d^2{r^\perp}' dy' \over 16 \pi^3 y'}\; {d^2{s^\perp}' dz' \over 16 \pi^3 z'}\; \Bigl[{{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x'-y'-z') \delta^2({\bf q^\perp}'+{\bf r^\perp}'+{\bf s^\perp}') \; \theta\Bigl(1- {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr) \;.}\eqno(6.41)$$ \noindent This can be related to the constant $c_\mu$ that appears in the renormalization group Eq. (6.38), and one finds that the mass shift is $$-{c_\mu g^2 \over 3}\;w \Lambda_0^2\;,\eqno(6.42)$$ \noindent where we can use Eq. 
(6.24), to show that $$\eqalign{ c_\mu= {4 \over 3!}\; \; \int & {d^2{q^\perp}' dx' \over 16 \pi^3 x'}\; {d^2{r^\perp}' dy' \over 16 \pi^3 y'}\; {d^2{s^\perp}' dz' \over 16 \pi^3 z'}\; \Bigl[{{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x'-y'-z') \delta^2({\bf q^\perp}'+{\bf r^\perp}'+{\bf s^\perp}') \cr &\theta\Bigl(1- {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr) \theta\Bigl({{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} - {1 \over 4} \Bigr) \;.}\eqno(6.43)$$ \noindent To obtain this result for $c_\mu$ one must remember that $\delta u_4$ contains an extra factor of four not found in Eq. (6.24) that results from the rescaling part of the transformation. To go from Eq. (6.41) to Eq. (6.42), note that the integral in Eq. (6.41) can be written as an infinite sum of integrals with upper and lower cutoffs, with each successive cutoff being 1/4 the previous cutoff. In each of these integrals one can rescale momenta in exactly the manner used to obtain Eq. (6.24), leading to a sum $1+1/4+1/16+\cdot\cdot\cdot=4/3$. The factor of $w$ that appears in Eq. (6.42) shows that the mass shift is neither covariant nor spectator-independent, and a mass counterterm must be added to exactly cancel this shift. Return to Eqs. (6.36) and (6.38) and ask whether it is possible for $\mu_n^2$ to be a perturbative function of $g_n$ with no further dependence on $n$. In general we can choose any initial value for $\mu_0$ and solve Eq. (6.38) to find $\mu_n$; but if we assume that $\mu_n^2=(\alpha g_n+\beta g_n^2+\cdot\cdot\cdot)\Lambda_0^2$, and substitute this into Eq. (6.38), using Eq. (6.36), we find $\alpha=0$, $\beta=c_\mu/3$. To this order we have a unique result, $\mu_n^2=(c_\mu/3)\; g_n^2\;\Lambda_0^2$. Higher order terms are not determined, because Eqs. (6.36) and (6.38) are altered at ${\cal O}(g^3)$. 
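The uniqueness of $\beta=c_\mu/3$ is easy to confirm numerically: with $\mu_n^2=\beta\, g_n^2\,\Lambda_0^2$ inserted into Eqs. (6.36) and (6.38), the one-step violation of the ansatz should be ${\cal O}(g^3)$, so halving $g$ must reduce it by a factor of eight. A sketch with an illustrative value of $c_\mu$:

```python
import math

c_g = 3.0 * math.log(2.0) / (16.0 * math.pi**2)
c_mu, lam0 = 0.01, 1.0        # c_mu illustrative only
beta = c_mu / 3.0             # the coherence value

def mismatch(g):
    """One-step violation of mu_n^2 = beta g_n^2 Lambda_0^2 under
    Eqs. (6.36) and (6.38); the O(g^2) piece cancels, leaving O(g^3)."""
    g_next = g - c_g * g**2
    mu2_next = 4.0 * beta * g**2 * lam0**2 - c_mu * g**2 * lam0**2
    return mu2_next - beta * g_next**2 * lam0**2

r = mismatch(0.1) / mismatch(0.05)
print(r)                      # ~ 2**3 = 8: the residual is O(g^3)
```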
For this choice of $\mu_n^2$, $\mu_0^2$ is uniquely determined, $$\mu_0^2={c_\mu g_0^2 \over 3}\;\Lambda_0^2+{\cal O}(g_0^3)\;.\eqno(6.44)$$ \noindent The factor of $w$ found in Eq. (6.42) is included in the definition of the term in which $\mu_n$ appears, as seen in Eq. (6.35); therefore, this value of $\mu_0^2$ precisely cancels the entire mass shift found in second-order perturbation theory and acts to restore covariance and cluster decomposition. In other words, with no direct reference to these properties, one can use the renormalization group and the coupling constant coherence conditions to remove the violations caused by the invariant-mass cutoff. Let me next consider the boson-boson scattering amplitude corresponding to the first diagram in figure 3b, with spectators added. I assume that the total longitudinal and transverse momenta of the two bosons that scatter are given by the first momentum in Eq. (6.20), with the spectator momentum being the second momentum. For simplicity I assume that the invariant-mass of the spectators is zero. For the bosons that scatter I use the Jacobi variables, $$\eqalign{ &(k_1^+,{\bf k^\perp}_1) = (x w{\mit P}^+, x (w{\mit P}^\perp+{\bf t^\perp})+{\bf s^\perp}), \cr &(k_2^+,{\bf k^\perp}_2) = ((1-x)w {\mit P}^+, (1-x)(w {\mit P}^\perp+{\bf t^\perp})-{\bf s^\perp})\;.} \eqno(6.45)$$ \noindent Letting $\epsilon=({\mit P}^{\perp 2}+M^2)/{\mit P}^+$, the scattering amplitude is $$\eqalign{ & {g^2 \over 2w}\; \int {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Bigl[M^2-{{\bf t^\perp}^2 \over w(1-w)} - {{\bf s^\perp}^2 \over wx(1-x)} +i0_+\Bigr]^{-1} \cr &\qquad\qquad\qquad\qquad \theta\Bigl(\Lambda_0^2-{{\bf t^\perp}^2 \over w(1-w)}-{{\bf s^\perp}^2 \over wx(1-x)}\Bigr) \;.}\eqno(6.46)$$ \noindent The real part of this amplitude is $${g^2 \over 32 \pi^2} ln\Biggl({|M^2-{{\bf t^\perp}^2 \over w(1-w)}| \over \Lambda_0^2 -M^2}\Biggr) \;. 
\eqno(6.47)$$ Remember that $M^2$ is the invariant-mass of the entire state, including the spectators. The invariant-mass of the two-boson subsystem is $$(p_1+p_2)^\mu (p_1+p_2)_\mu=w \Bigl(M^2-{{\bf t^\perp}^2 \over w(1-w)}\Bigr) \;.\eqno(6.48)$$ \noindent The boson-boson scattering amplitude is neither covariant nor spectator-independent. For massless bosons with $$\eqalign{&(p_1^+,{\bf p^\perp}_1) = (y w{\mit P}^+, y(w{\mit P}^\perp+{\bf t^\perp})+{\bf r^\perp}), \cr &(p_2^+,{\bf p^\perp}_2) = ((1-y)w {\mit P}^+, (1-y)(w {\mit P}^\perp+{\bf t^\perp})-{\bf r^\perp})\;,} \eqno(6.49)$$ \noindent it is also easily seen that $$(p_1+p_2)^\mu (p_1+p_2)_\mu={{\bf r^\perp}^2 \over y(1-y)} \;.\eqno(6.50)$$ \noindent The on-shell scattering amplitude blows up in perturbation theory as the relative transverse momentum between the scattering bosons goes to zero, but the important observation is that counterterms must be added to restore covariance and cluster decomposition. To discuss the counterterms, I set ${\bf t^\perp}=0$ for simplicity. To determine what counterterms are required, expand the amplitude in Eq. (6.47), obtaining $${g^2 \over 32 \pi^2}\Biggl[ln\Biggl( {{\bf r^\perp}^2 \over y(1-y)\Lambda_0^2} \Biggr) \;-\;ln(w) \;+\; {{\bf r^\perp}^2 \over wy(1-y)\Lambda_0^2} \;+\; \cdot\cdot\cdot \Biggr] \;.\eqno(6.51)$$ The first term is covariant and spectator-independent, and after the subtraction of a constant associated with coupling renormalization it yields the correct result. All remaining terms must be canceled by counterterms. The counterterms are part of $u_4$, and it is only the second term in the series that affects the marginal part of $u_4$. In this case we find that covariance and cluster decomposition require us to depart from the simple marginal interaction in Eq. (6.3), and use $${\mathaccent "7E g}(p_i^+)=g+{g^2 \over 32 \pi^2}\;ln\Biggl( {p_1^++p_2^+\over {\mit P}^+}\Biggr)+\cdot\cdot\cdot \;.
\eqno(6.52)$$ \noindent Not only are functions of longitudinal momentum allowed in the marginal part of $u_4$, they are required by covariance and cluster decomposition when we use an invariant-mass cutoff. Calculations of the remaining diagrams in figure 3b reveal additional logarithmic corrections to the vertex, but nothing qualitatively new appears. Letting $p_3^+=z w {\mit P}^+$ and $p_4^+=(1-z) w {\mit P}^+$, the real part of the complete scattering amplitude can be written $${g^2 \over 32 \pi^2}\Biggl[ ln\Biggl({(p_1+p_2)^2 \over w(\Lambda_0^2-M^2)}\Biggr)+ln\Biggl({(p_1-p_3)^2 \over w |y-z| (\Lambda_0^2-M^2)}\Biggr)+ln\Biggl({(p_2-p_3)^2 \over w |1-y-z| (\Lambda_0^2-M^2)}\Biggr) \Biggr] \;. \eqno(6.53)$$ \noindent The additional counterterms required to restore covariance and cluster decomposition to the entire amplitude are easily determined. We can think of the corrections to $u_4$ as new marginal and irrelevant variables, and as such we expect to find renormalization group equations that show how their strengths change with the cutoff. On the other hand, from the point of view of perturbation theory the strengths of these counterterms are determined by $g$; and they change with the cutoff only because $g$ changes with the cutoff. It is straightforward to compute the one-boson to three-boson Green's functions corresponding to the diagrams in figure 3b, and the real part is, $${g^2 \over 32 \pi^2}\Biggl[ ln\Biggl({(p_2+p_3)^2 \over w(x+y)(\Lambda_0^2-M^2)}\Biggr)+ln\Biggl({(p_2+p_4)^2 \over w(x+z)(\Lambda_0^2-M^2)}\Biggr)+ln\Biggl({(p_3+p_4)^2 \over w(y+z)(\Lambda_0^2-M^2)}\Biggr) \Biggr] \;. \eqno(6.54)$$ \noindent Here I have used $p_1^+=w {\mit P}^+$, $p_2^+=w x {\mit P}^+$, $p_3^+=w y {\mit P}^+$, and $p_4^+=w z {\mit P}^+$. Again, we find that covariance and cluster decomposition are violated, and that counterterms must be added to $u_4$ to subtract these violations.
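The real parts quoted in Eqs. (6.47) and (6.53)-(6.54) descend from principal-value integrals over the intermediate invariant mass. For the simplest case, ${\bf t^\perp}=0$ and $w=g=1$, Eq. (6.46) reduces to $(32\pi^2)^{-1}$ times the principal value of $\int_0^{\Lambda_0^2}dE\,(M^2-E)^{-1}$, which can be checked against Eq. (6.47) numerically (the pole-on-a-grid-node construction below is my own device, not part of the derivation):

```python
import math

M2, cut2 = 0.3, 1.0                 # M^2 and Lambda_0^2, illustrative values

# Midpoint rule with the pole E = M2 on a cell boundary, so the
# divergent contributions at M2 -/+ h/2, -/+ 3h/2, ... cancel in
# pairs, leaving the principal value.
N = 200000
h = cut2 / N
assert abs(M2 / h - round(M2 / h)) < 1e-6    # pole sits on a grid node
pv = sum(h / (M2 - (i + 0.5) * h) for i in range(N))

amp = pv / (32.0 * math.pi**2)               # Eq. (6.46), t_perp = 0, g = w = 1
exact = math.log(M2 / (cut2 - M2)) / (32.0 * math.pi**2)   # Eq. (6.47)
print(amp, exact)
```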
Next I want to use the coupling constant coherence conditions to compute the complete set of ${\cal O}(g^2)$ counterterms. Let us first consider the generic problem of determining the strength of an irrelevant variable from its renormalization group equation, using the condition that it can depend on the cutoff only through its perturbative dependence on $g_n$. The generic equation for an irrelevant variable can be written, $$w_{n+1}=\Bigl({1 \over 4}\Bigr)^{n_w} w_n- c_w g_n^2+{\cal O}(g_n^3) \;, \eqno(6.55)$$ \noindent where $n_w$ is an integer determined by the transverse dimension of the operator. Note that the ${\cal O}(g_n^3)$ corrections include corrections to the Hamiltonian coming from diagrams that are third-order in the original interaction in Eq. (6.3), as well as corrections coming from diagrams that include one original vertex and one counterterm vertex. The fact that tadpoles are eliminated when zero-modes are dropped is used here, because an ${\cal O}(g^2)$ counterterm produces ${\cal O}(g^3)$ corrections when zero-modes are removed. The assumption that all counterterms are at least ${\cal O}(g^2)$ must be justified {\eighteenit a posteriori}. It is easy to see that Eq. (6.55) implies that an expansion of $w_n$ in powers of $g_n$ should start at second order. Assuming that $w_n=\omega g_n^2+{\cal O}(g_n^3)$, and using Eq. (6.36), we find, $$\omega={c_w \over 1 - \bigl({1 \over 4}\bigr)^{n_w}} \;.\eqno(6.56)$$ Thus, we find that the second-order transformation fixes the strength of all irrelevant operators when we insist that they run only with the coupling. To illustrate the consequences of Eq. (6.56), consider $\delta u_4$ given by Eq. (6.15). Eq. (6.15) yields the values of $c_w$ for an infinite number of irrelevant operators. Using Eq.
(6.56) we find that the Hamiltonian must contain the set of irrelevant operators, $$\eqalign{ w_4&={g^2 \over 64 \pi^2}\;ln\Bigl(1+{{\bf r^\perp}^2 \over y(1-y)\Lambda_0^2} \Bigr) \cr &={g^2 \over 4} \; \int {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Biggl\{ \Bigl[{{\bf r^\perp}^2 \over y(1-y)} - {{\bf s^\perp}^2 \over x(1-x)}\Bigr]^{-1} - \cr &\qquad\qquad\qquad \qquad\qquad\qquad \qquad \Bigl[{{\bf s^\perp}^2 \over x(1-x)}\Bigr]^{-1} \Biggr\}\; \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)}-\Lambda_0^2\Bigr) \;.}\eqno(6.57)$$ \noindent All momenta are defined in Eq. (6.4). There is another counterterm coming from the first Hamiltonian diagram in figure 3b, with the momenta in Eq. (6.57) being replaced by the momenta of the outgoing particles; as well as additional counterterms from the remaining Hamiltonian diagrams in figure 3b. However, one can easily determine the integral form of the counterterms in each case. Notice that the integrand in Eq. (6.57) is similar to the integrand in Eq. (6.11). There is a subtraction of the latter integrand with the external transverse momenta set to zero to remove the marginal piece, and the step functions are altered so that it is intermediate energies {\eighteenit above} the upper cutoff that are included, rather than energies between the upper and lower cutoffs. In addition to having a simple universal form, the integral representation of the irrelevant `counterterms' will prove convenient in Section {\uppercase\expandafter{\romannumeral7 }}, when I group a second-order contribution that includes this counterterm and the original interaction in Eq. (6.3) with a third-order contribution from the original interaction alone. In diagrammatic terms, I group the one-loop correction that contains a vertex counterterm with a two-loop contribution to the running Hamiltonian; and the integral representation of the counterterm allows this regrouping to be performed directly in the two-loop integrand. 
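The attractive character of the coherent solution is worth displaying. Iterating the generic Eq. (6.55) from an arbitrary starting value, the irrelevant variable forgets its initial condition within a few steps, and $w_n/g_n^2$ locks onto the self-consistent ratio, whose magnitude is the $\omega$ of Eq. (6.56). In the sketch below the values of $c_w$ and $n_w$ are illustrative, and the overall sign of the ratio depends on the convention absorbed into $c_w$:

```python
import math

c_g = 3.0 * math.log(2.0) / (16.0 * math.pi**2)
c_w, n_w = 0.02, 1            # illustrative strength and transverse dimension
r = 0.25**n_w                 # the (1/4)^{n_w} scaling factor

g, w = 0.3, 0.5               # start w far from the coherent value
for n in range(60):
    g, w = g - c_g * g**2, r * w - c_w * g**2   # Eqs. (6.36) and (6.55)

omega = -c_w / (1.0 - r)      # self-consistent ratio of Eq. (6.55) as coded;
                              # its magnitude is c_w / (1 - (1/4)^{n_w})
print(w / g**2, omega)
```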
The coupling constant coherence conditions placed on the irrelevant operators require that to ${\cal O}(g^2)$ they must be invariant under the action of the full transformation. In the first part of a transformation one lowers the cutoff, and generates new ${\cal O}(g^2)$ irrelevant operators. When these new irrelevant operators are added to the old irrelevant operators, the resultant operators must be identical in form to the old irrelevant operators, with the only change being the replacement of $\Lambda_0$ with $\Lambda_1$. After the scaling part of the transformation is completed, the complete set of irrelevant operators returns to exactly its original form; in this case, the logarithm in Eq. (6.57) is exactly reproduced. Higher order corrections to the irrelevant operators insure that the $g_n^2$ coefficient runs correctly, so that it is indeed the correct running coupling that appears, and they generate new irrelevant operators of ${\cal O}(g_n^3)$ and higher. As a final example of an irrelevant counterterm, consider the correction to $u_6$ resulting from Eq. (6.32). All operators in $u_6$ are irrelevant, and if $u_6$ runs only because the coupling in Eq. 
(6.3) runs, the same reasoning used above implies that $u_6$ must contain, $$\eqalign{ {g^2 \over x_2^+ + x_3^+ - x_6^+} \;&\Bigl[{{\bf r^\perp}^2_2 \over x_2^+}+{{\bf r^\perp}^2_3 \over x_3^+}- {{\bf r^\perp}_6^2 \over x_6^+} - {({\bf r^\perp}_2+{\bf r^\perp}_3-{\bf r^\perp}_6)^2 \over x_2^+ + x_3^+ - x_6^+} \Bigr]^{-1} \cr &\theta\bigl({{\bf r^\perp}^2_1 \over x_1^+}+{{\bf r^\perp}_6^2 \over x_6^+}+ {({\bf r^\perp}_2+{\bf r^\perp}_3-{\bf r^\perp}_6)^2 \over x_2^+ + x_3^+ - x_6^+}- \Lambda_0^2\bigr) \cr &\theta\bigl(\Lambda_0^2-{{\bf r^\perp}^2_1 \over x_1^+}-{{\bf r^\perp}_2^2 \over x_2^+}- {{\bf r^\perp}_3^2 \over x_3^+}\bigr) \theta\bigl(\Lambda_0^2-{{\bf r^\perp}^2_4 \over x_4^+}-{{\bf r^\perp}_5^2 \over x_5^+}- {{\bf r^\perp}_6^2 \over x_6^+}\bigr) \;.} \eqno(6.58)$$ \noindent Comparison of this term with the correction to $u_6$ resulting from the second Hamiltonian diagram in figure 3c, Eq. (6.32), shows that the second-order tree level counterterms are easily determined. Return to the calculation of the boson-boson scattering amplitude, and add the above irrelevant counterterms to $u_4$. The real part of the scattering amplitude becomes, $${g^2 \over 32 \pi^2}\Biggl[ ln\Biggl({(p_1+p_2)^2 \over w\Lambda_0^2}\Biggr)+ln\Biggl({(p_1-p_3)^2 \over w |y-z| \Lambda_0^2}\Biggr)+ln\Biggl({(p_2-p_3)^2 \over w |1-y-z| \Lambda_0^2}\Biggr) \Biggr] \;. \eqno(6.59)$$ \noindent It should be obvious from this expression that covariance and cluster decomposition are still violated, because of the longitudinal momentum fractions appearing in the logarithms; however, these properties can now be restored by marginal operators alone. In other words, coupling coherence leads to irrelevant operators that restore covariance and cluster decomposition to the irrelevant part of the scattering amplitude. 
Adding the appropriate irrelevant operator contributions to the one-boson to three-boson Green's function yields, $${g^2 \over 32 \pi^2}\Biggl[ ln\Biggl({(p_2+p_3)^2 \over w(x+y)\Lambda_0^2}\Biggr)+ln\Biggl({(p_2+p_4)^2 \over w(x+z)\Lambda_0^2}\Biggr)+ln\Biggl({(p_3+p_4)^2 \over w(y+z)\Lambda_0^2}\Biggr) \Biggr] \;. \eqno(6.60)$$ \noindent Violations of Lorentz covariance and cluster decomposition in the irrelevant part of this Green's function are also removed. Let us now consider the generic problem of determining the strength of a marginal variable from its renormalization group equation, using the condition that it can depend on the cutoff only through its dependence on $g_n$. To simplify the presentation let me simply state that we must simultaneously consider dependent marginal operators of ${\cal O}(g_n^2)$, to which I collectively refer as $h_n$, and dependent marginal operators of ${\cal O}(g_n^3)$, to which I collectively refer as $j_n$. The reason that both are required will become apparent below. We have already seen that no new marginal operators are produced in the ${\cal O}(g_n^2)$ Hamiltonian diagrams, so the generic equations for these marginal variables can be written, $$h_{n+1}=h_n-c_h g_n h_n-d_h g_n^3+{\cal O}(g_n^4)\;, \eqno(6.61)$$ $$j_{n+1}=j_n-c_j g_n h_n-d_j g_n^3+{\cal O}(g_n^4)\;. \eqno(6.62)$$ \noindent In principle, there should be terms of ${\cal O}(g_n w_n)$ in each of these equations; which result from second-order ({\eighteenit i.e.}, one-loop) corrections that contain an irrelevant vertex in addition to the original vertex in Eq. (6.3). Since the irrelevant counterterms have already been uniquely determined to ${\cal O}(g_n^2)$, I group these one-loop corrections with corrections that are third-order ({\eighteenit i.e.}, two-loop) in the original vertex for simplicity. 
Thus the final terms in these equations, $d_h g_n^3$ and $d_j g_n^3$, result from a sum of two-loop diagrams and irrelevant counterterm insertions in one-loop diagrams. It is straightforward, but tedious, to separately display these terms in Eqs. (6.61) and (6.62). It is far more tedious to separately compute these one-loop and two-loop diagrams in closed form. Fortunately, there is no need to do so. By assumption, $$h_n=\alpha g_n^2+\beta g_n^3\;,\eqno(6.63)$$ $$j_n=\gamma g_n^3\;,\eqno(6.64)$$ \noindent with higher order terms being suppressed. It is straightforward to confirm that there are two types of solution to Eq. (6.61). If $d_h=0$ and $\alpha \ne 0$, we must have $$c_h=2 c_g\;;\eqno(6.65)$$ \noindent where $c_g$ is shown in Eq. (6.36). If $d_h \ne 0$, then $$\alpha={d_h \over 2 c_g-c_h} \;.\eqno(6.66)$$ \noindent $\beta$ is not determined by Eq. (6.61), but it is clear that one must perform a third-order renormalization group calculation and determine $d_h$ to fix $\alpha$, even though this is the coefficient of the ${\cal O}(g_n^2)$ piece of the marginal operator. $\gamma$ is not fixed by Eq. (6.62), but if a marginal operator arises in the third-order behavior of the transformation, Eq. (6.62) shows that the strength of this operator is ${\cal O}(g_n^3)$, as opposed to ${\cal O}(g_n^2)$, only if $$\alpha c_j+d_j=0 \;.\eqno(6.67)$$ \noindent This condition must be met if an operator appears in the third-order analysis, but is not ${\cal O}(g_n^2)$. However, since $\gamma$ is not fixed, there is no guarantee that $\gamma \ne 0$; so the appearance of the operator in the third-order behavior of the transformation does not insure its appearance in the Hamiltonian. The important point to observe here is that Eq. (6.67) requires $\alpha \ne 0$ if $d_j \ne 0$. This means that if we find any new marginal operators in the third-order analysis, at least one new marginal operator is ${\cal O}(g_n^2)$. 
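The two branches of this analysis are easy to exhibit numerically. The sketch below iterates the single-variable form of Eq. (6.61) together with the running of $g_n$, using illustrative coefficient values (they are not computed from the transformation; the sketch only assumes $d_h \ne 0$ and $2 c_g - c_h < 0$), and confirms that the ratio $h_n/g_n^2$ is driven to $\alpha = d_h/(2 c_g - c_h)$, as in Eq. (6.66), even when $h_0$ is chosen incorrectly:

```python
# Iterate g_{n+1} = g_n - c_g g_n^2 together with Eq. (6.61),
# h_{n+1} = h_n - c_h g_n h_n - d_h g_n^3, using illustrative coefficients.

def iterate(g0, h0, c_g, c_h, d_h, steps):
    g, h = g0, h0
    for _ in range(steps):
        # both right-hand sides are evaluated with the step-n coupling g
        g, h = g - c_g * g * g, h - c_h * g * h - d_h * g**3
    return g, h

c_g, c_h, d_h = 0.05, 0.20, 0.01      # illustrative values; 2 c_g - c_h < 0
alpha = d_h / (2 * c_g - c_h)         # Eq. (6.66) predicts h_n -> alpha g_n^2
g, h = iterate(g0=0.1, h0=0.0, c_g=c_g, c_h=c_h, d_h=d_h, steps=2000)
print(h / g**2, alpha)                # the ratio has relaxed onto alpha
```

Starting instead from $h_0 = \alpha g_0^2$ leaves the ratio fixed at $\alpha$ up to the neglected ${\cal O}(g_n^4)$ terms, which is the coherent solution.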
This generic analysis shows that there are two types of marginal operator that can arise with strength of ${\cal O}(g_n^2)$. One type is explicitly produced by the third-order behavior of the transformation, after the irrelevant counterterms have been properly included to this order, and this type cannot be missed. The second type of marginal operator does not appear in the subtracted third-order analysis; but it must exactly reproduce itself in a second-order analysis when combined with the interaction in Eq. (6.3), with the strength fixed by the condition $c_h=2 c_g$. I proceed no further with this analysis in this Section. In the next Section I show that the logarithmic functions of longitudinal momentum fractions required to restore Lorentz covariance and cluster decomposition are indeed solutions to Eqs. (6.61) and (6.62). \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral7 }}. Third- and Higher-Order Behavior Near Critical Gaussian Fixed Points} \medskip Third-order behavior of a transformation near the Gaussian fixed point can be computed using the final terms in Eq. (4.14), and the additional terms required to study higher-order behavior are readily computed, with rapidly increasing algebraic complexity. There are two issues I want to address in this Section. First, I want to complete the calculation of corrections to the marginal part of $u_4$ initiated in the last Section; and show that the coupling constant coherence conditions lead to the marginal counterterms required to restore Lorentz covariance and cluster decomposition to the four-point functions in second-order perturbation theory. Second, I want to discuss the errors one encounters when various approximate analyses are performed. The latter discussion is not intended to be rigorous or complete. 
The renormalization group offers the possibility of overcoming some serious logical flaws in `old-fashioned perturbation theory' \APrefmark{\rSCH,\rDIRTHR,\rBROWN}, a fact that does not yet receive sufficient attention in textbooks. The perturbative renormalization group may allow one to effect renormalization so that the perturbative expansions encountered at every stage of a calculation involve only small coupling constants. The caveat is that the running coupling(s) must remain small over the entire range of scales directly encountered in the perturbative portion of the calculation. In `old-fashioned' perturbation theory the bare parameters always diverge, and one must simply follow renormalization recipes \APrefmark{\rSCH} without being distracted by intermediate expansions in powers of divergent constants. We will find that the canonical running coupling constant in the scalar theory remains small if it is small in $H_{\Lambda_0}^{\Lambda_0}$; however, there are logarithms of longitudinal momentum fractions that appear in the Hamiltonian, leading to effective couplings that diverge at small longitudinal momentum transfer. There are severe limitations to how much can be learned from a perturbative renormalization group study of scalar field theory, because it is not asymptotically free and there is no possibility of the perturbative analysis being complete \APrefmark{\rWILNINE}. It is not possible to construct a trajectory of renormalized Hamiltonians using a perturbative renormalization group if the theory is not asymptotically free. For the examples in this Section, I simply assume that the coupling constant in $H^{\Lambda_0}_{\Lambda_0}$, $g_0$, is small; in which case Eq. (6.36) reveals that it should remain small in every Hamiltonian on a renormalization group trajectory. This analysis is naive; however, my interest is the development of the perturbative formalism.
Let me turn now to the third order corrections to the part of $u_4$ that governs one-boson to three-boson transitions. This part of $u_4$ is chosen rather than the two-boson to two-boson transition because fewer Hamiltonian diagrams must be computed. The requisite Hamiltonian diagrams are shown in figure 5, where I have suppressed the arrows that indicate the energies appearing in the energy denominators. Each diagram in figure 5 actually corresponds to four Hamiltonian diagrams, which differ in their energy denominators and in the distribution of cutoffs, as seen in the four third-order terms in Eq. (4.14). As discussed in the last Section, I want to group all ${\cal O}(\delta H^2)$ terms that are ${\cal O}(g^3)$ because they contain one ${\cal O}(g)$ marginal vertex and one ${\cal O}(g^2)$ irrelevant vertex, with the ${\cal O}(\delta H^3)$ terms that are ${\cal O}(g^3)$ because they contain three ${\cal O}(g)$ marginal vertices. This comment is most easily understood after studying the results below. The bracketed expressions in figure 5 indicate diagrams in which the momenta of the outgoing particles are permuted, with the permutation being listed in the brackets. I only want to compute the third-order corrections to the marginal part of $u_4$, so I set all incoming and outgoing transverse momenta to zero. Since I am studying the critical theory, this means that the initial and final energies are zero, which considerably simplifies most of the energy denominators. I do not display spectators because they have no direct effect on $u_4$. One can always choose variables in which $\delta v_4$ is independent of the longitudinal momenta of the spectators, even though cluster decomposition is restored by marginal counterterms that explicitly depend on these momenta. Let me begin with the Hamiltonian diagrams in figure 5a. There are four diagrams, and each makes an identical contribution to the marginal part of $\delta v_4$. Using Eq. (4.14) and the interaction in Eqs. 
(6.2)-(6.3), one finds, $$\eqalign{\delta v_{4M} = &-{1 \over 2}\; {g^3 \over 3!\; p^+} \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 \;(16 \pi^3) \delta^3(p-k_1-k_2-k_3)\cr &\qquad\qquad\qquad\qquad (k_1^-+k_2^-+k_3^-)^{-2}\; \Theta_{high}(k_1^-+k_2^-+k_3^-)\;; } \eqno(7.1)$$ \noindent where $d\tilde{k}$ is defined in Eq. (3.4), $k^-={\bf k^\perp}^2/k^+$, and $p^+$ is the longitudinal momentum entering the loops. This last factor results from the presence of an internal line that is not part of a loop. Only one of the four terms in Eq. (4.14) survives here, because only the intermediate state that includes the boson loops can be a high energy state, since the initial and final states must both be low energy states. The first factor of $-1/2$ is seen in Eq. (4.14); while $3!$ is a symmetry factor. I have introduced, $$\Theta_{high}(K^-)=\theta\Bigl({\Lambda_0^2 \over {\mit P}^+} -K^-\Bigr)\; \theta\Bigl(K^--{\Lambda_1^2 \over {\mit P}^+} \Bigr) \;,\eqno(7.2)$$ \noindent which projects onto states of high energy; and later I also need, $$\Theta_{low}(K^-)=\theta\Bigl({\Lambda_1^2 \over {\mit P}^+}-K^-\Bigr) \;, \eqno(7.3)$$ \noindent which projects onto states of low energy. ${\mit P}^+$ is the total longitudinal momentum. I have not displayed the low energy cutoffs associated with external lines. The integrals in Eq. (7.1) are most easily evaluated by introducing Jacobi coordinates; for example, $p^+=w{\mit P}^+$, $k_1^+=xyp^+$, ${\bf k^\perp}_1=\sqrt{w}\Lambda_0(x{\bf s}+{\bf r})$, $k_2^+=(1-x)yp^+$, ${\bf k^\perp}_2=\sqrt{w}\Lambda_0((1-x){\bf s}-{\bf r})$, $k_3^+=(1-y)p^+$, ${\bf k^\perp}_3=-\sqrt{w}\Lambda_0 {\bf s} $. 
In these coordinates, $$\eqalign{\delta v_{4M} = & -{g^3 \over 12} \int {d^2r \over 16 \pi^3} {d^2s \over 16 \pi^3} \int_0^1 dx \int_0^1 dy \; {1 \over x(1-x)y(1-y)} \cr &\qquad\qquad\qquad\qquad \Bigl({{\bf r}^2 \over yx(1-x)}+{{\bf s}^2 \over y(1-y)}\Bigr)^{-2}\;\Theta_{high}\Bigl({{\bf r}^2 \over yx(1-x)}+{{\bf s}^2 \over y(1-y)}\Bigr) \;,}\eqno(7.4)$$ \noindent where now, $$\Theta_{high}\Bigl({{\bf t}^2 \over z}\Bigr)=\theta\Bigl(1-{{\bf t}^2 \over z}\Bigr)\;\theta\Bigl({{\bf t}^2 \over z}-\eta\Bigr) \;,\eqno(7.5)$$ \noindent and, $$\Theta_{low}\Bigl({{\bf t}^2 \over z}\Bigr)=\theta\Bigl(\eta-{{\bf t}^2 \over z}\Bigr) \;.\eqno(7.6)$$ \noindent I have introduced $\eta=\Lambda_1^2/\Lambda_0^2$ because this ratio appears repeatedly. The appropriate definition of $\Theta_{high}$ and $\Theta_{low}$ is always apparent from context. All integrals are evaluated using Jacobi coordinates, so the definitions in Eqs. (7.5) and (7.6) are always used to explicitly evaluate a correction. The remaining integrals are readily completed, and one finds that each Hamiltonian diagram in figure 5a contributes, $$\delta v_{4M} = \;-{g^3 \over 24\; (16 \pi^2)^2} \;ln\Bigl( {1 \over \eta}\Bigr) \;. \eqno(7.7)$$ \noindent Of course, the entire correction $\delta v_4$ is extremely complicated, but the marginal part of $\delta v_4$ is simple. The rescaling does not change this marginal term, so Eq. (7.7) immediately yields the correction to the marginal part of $u_4$ from each diagram in figure 5a. Next consider the diagrams in figure 5b. 
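As an aside, the reduction of Eq. (7.4) to Eq. (7.7) can be spot-checked numerically. Substituting $u={\bf r}^2/(yx(1-x))$ and $v={\bf s}^2/(y(1-y))$ (an intermediate step I did not display), the angular and $x$ integrals are trivial, and each diagram becomes $-(g^3/12)\,(16\pi^2)^{-2}$ times $\int_0^1 dy\; y \int du\,dv\,(u+v)^{-2}\,\Theta_{high}(u+v)$, which should equal ${1 \over 2}\,ln(1/\eta)$. A seeded Monte Carlo sketch of the reduced integral:

```python
import math
import random

def reduced_integral(eta, n=400_000, seed=1):
    """Monte Carlo estimate of
         I = int_0^1 dy y  int du dv (u+v)^{-2} theta(1-u-v) theta(u+v-eta),
       sampled over the unit cube; the expected value is (1/2) ln(1/eta)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y, u, v = rng.random(), rng.random(), rng.random()
        w = u + v
        if eta < w < 1.0:                  # Theta_high of Eq. (7.5)
            total += y / (w * w)
    return total / n

eta = 0.25                                 # so ln(1/eta) = ln 4
est = reduced_integral(eta)
exact = 0.5 * math.log(1.0 / eta)
print(est, exact)
```

For this sample size the estimate lands within Monte Carlo error of ${1 \over 2}\,ln\,4 \approx 0.693$, consistent with the coefficient in Eq. (7.7).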
The first diagram in figure 5b leads to, $$\eqalign{\delta v_{4M} = &{g^3 \over 2} \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 d\tilde{k}_4 \;(16 \pi^3)\delta^3(p_1-p_4-k_3-k_4)\;(16 \pi^3)\delta^3(p_2-k_1-k_2-k_3) \cr & \Biggl\{ {}~(k_3^-+k_4^-)^{-1} \;(k_1^-+k_2^-+k_3^-)^{-1}\; \Theta_{high}(k_3^-+k_4^-) \;\Theta_{high}(k_1^-+k_2^-+k_3^-) \cr & ~-{1 \over 2} (k_3^-+k_4^-)^{-1} \; (k_4^--k_1^--k_2^-)^{-1}\; \Theta_{high}(k_3^-+k_4^-) \;\Theta_{low}(k_1^-+k_2^-+k_3^-) \cr & ~-{1 \over 2} (k_1^-+k_2^-+k_3^-)^{-1}\; (k_1^-+k_2^--k_4^-)^{-1}\;\Theta_{low}(k_3^-+k_4^-) \;\Theta_{high}(k_1^-+k_2^-+k_3^-)\Biggr\} \;.} \eqno(7.8)$$ \noindent The first two of the four third-order terms in Eq. (4.14) combine to give the first term in the integrand. The third term in the integrand is zero because it is not possible for the first intermediate state in figure 5b to have higher energy than the second intermediate state. I do not display terms that vanish in this manner below. While it is possible to evaluate this integral, it is convenient to group it with the one-loop diagrams that contain all irrelevant counterterms that result from sub-diagrams ({\eighteenit e.g.}, nested loops) in figure 5b. In figure 6a I show the first diagram in figure 5b added to a new one-loop diagram in which there is a new four-boson vertex. The new vertex is a sum of irrelevant operators that must be added to the Hamiltonian to satisfy the coupling constant coherence conditions, and figure 6b shows the original diagram that led to their addition. The derivation of irrelevant operators is discussed in the last Section.
Using an integral representation for the appropriate irrelevant counterterms, the one-loop correction to the Hamiltonian shown in figure 6a is, $$\eqalign{\delta v_{4M} = &{g^3 \over 2} \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 d\tilde{k}_4 \;(16\pi^3) \delta^3(p_1-p_4-k_3-k_4) \; (k_3^-+k_4^-)^{-1}\; \Theta_{high}(k_3^-+k_4^-) \cr & \Biggl\{{1 \over 2} \; (16 \pi^3)\delta^3(p_2-k_1-k_2-k_3) \cr &\qquad\qquad \Bigl[(k_1^-+k_2^-+k_3^-)^{-1}+(k_1^-+k_2^--k_4^-)^{-1} \Bigr] \; \Theta_{super}(k_1^-+k_2^-+k_3^-) \cr &-(16 \pi^3)\delta^2({\bf k^\perp}_2+{\bf k^\perp}_3)\delta(p_2^+-k_1^+-k_2^+-k_3^+)\; (k_1^-+k_2^-)^{-1} \; \Theta_{super}(k_1^-+k_2^-) \Biggr\} \;.}\eqno(7.9)$$ \noindent The first two terms in the integrand have identical energy denominators to the second-order Hamiltonian diagram in figure 6b that leads to these counterterms, while the third term cancels the marginal part of the counterterm and insures that it is composed of irrelevant operators only. I have introduced a new cutoff function, $$\Theta_{super}(K^-)=\theta\Bigl(K^--{\Lambda_0^2 \over p^+}\Bigr) \;.\eqno(7.10)$$ \noindent This cutoff is not really associated with an intermediate state, because it is part of the integral representation of an irrelevant operator that is added to the Hamiltonian, not part of an operator that is induced by a transformation. However, one can think of an intermediate state in which the energy lies above all cutoffs; and say that the counterterm directly implements a complete set of irrelevant interactions that would have been provided by the exchange of particles whose energy is above the cutoffs placed on the Hamiltonian. I want to emphasize that the four-boson vertex nested in the one-loop diagram in figure 6a, while complicated, is uniquely determined by coupling coherence. If this vertex is not in the Hamiltonian, there are variables that explicitly run with the cutoff other than the coupling in Eq. (6.3). 
To proceed we need to define Jacobi coordinates that satisfy the delta function constraints, and then combine the integrals remaining in Eq. (7.8) with those in Eq. (7.9). The calculation from this point is complicated only because of tedious calculus, which I do not detail. Perhaps the most difficult part of the calculation is keeping track of the various step function cutoffs, and finding how to regroup terms at appropriate stages of the calculation so that nested integrals lead to simple analytical results. These problems make the calculation more difficult than a simple two-loop Feynman diagram calculation, but this should be no surprise. If we let $p_2^+=w(p_1^+-p_4^+)$, so that $p_3^+=(1-w)(p_1^+-p_4^+)$, the complete result from the diagrams in figure 6a is, $$\delta v_{4M} = {g^3 \over 2\;(16 \pi^2)^2}\; \Biggl\{ ln\Bigl( {1 \over \eta}\Bigr)\;(1-w)\;ln(1-w)+{1 \over 2}\Biggl(ln\Bigl( {1 \over \eta}\Bigr)\Biggr)^2\;w \Biggr\} \;.\eqno(7.11)$$ \noindent This result needs to be added to those from the remaining diagrams in figure 5b, with the one-loop corrections that are analogous to Eq. (7.9) being added. The second diagram in figure 5b leads to a correction that is identical in form to Eq. (7.11), but with $w$ and $1-w$ interchanged. Letting $p_2^+=x p_1^+$, $p_3^+=y p_1^+$, and $p_4^+=z p_1^+$; the complete sum of terms in figure 5b, with the appropriate one-loop counterterm diagrams added, is, $$\eqalign{ \delta v_{4M} = &{g^3 \over 2\;(16 \pi^2)^2}\; \Biggl\{ {3 \over 2} \Biggl(ln\Bigl( {1 \over \eta}\Bigr)\Biggr)^2 + \cr & \qquad ln\Bigl({1 \over \eta}\Bigr)\;\Biggl[ {x \over 1-y}\; ln\Bigl({x \over 1-y}\Bigr)+{x \over 1-z}\;ln\Bigl({x \over 1-z}\Bigr)+{y \over 1-z}\;ln\Bigl({y \over 1-z}\Bigr) \cr &\qquad\qquad\qquad + {y \over 1-x}\; ln\Bigl({y \over 1-x}\Bigr)+{z \over 1-x}\;ln\Bigl({z \over 1-x}\Bigr)+{z \over 1-y}\;ln\Bigl({z \over 1-y}\Bigr) \Biggr] \Biggr\} \;.
} \eqno(7.12)$$ The first diagram in figure 5c is redisplayed in figure 7a, with the one-loop correction that contains the appropriate irrelevant counterterm. Figure 7b displays the sub-diagram from which the irrelevant counterterm results, and one sees that in this case the counterterm is part of $u_6$. The sum of both diagrams is, $$\eqalign{\delta v_{4M} &= {g^3 \over 2} \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 d\tilde{k}_4 \;(16 \pi^3)\delta^3(p_1-k_1-k_2-k_3)\;(16 \pi^3)\delta^3(p_2-k_1-k_2-k_4) \cr & \Biggl\{ {}~(k_1^-+k_2^-+k_3^-)^{-1}\;(k_1^-+k_2^-+k_4^-)^{-1} \; \Theta_{high}(k_1^-+k_2^-+k_3^-) \;\Theta_{high}(k_1^-+k_2^-+k_4^-) \cr & ~-{1 \over 2} (k_1^-+k_2^-+k_4^-)^{-1} \; (k_4^--k_3^-)^{-1}\; \Theta_{low}(k_1^-+k_2^-+k_3^-) \;\Theta_{high}(k_1^-+k_2^-+k_4^-) \cr & ~+{1 \over 2} (k_1^-+k_2^-+k_3^-)^{-1}\;\Bigl[ (k_1^-+k_2^-+k_4^-)^{-1}+(k_4^--k_3^-)^{-1}\Bigr] \cr &\qquad\qquad\qquad\qquad\qquad \Theta_{high}(k_1^-+k_2^-+k_3^-) \;\Theta_{super}(k_1^-+k_2^-+k_4^-)\Biggr\} \;.} \eqno(7.13)$$ \noindent The first two terms in the integrand result from the third-order corrections to the Hamiltonian in Eq. (4.14), while the last two terms result from the second-order corrections in Eq. (4.14) with one of the interactions being an irrelevant operator. The steps required to evaluate this integral are identical to those required above, and the result is, $$\delta v_{4M} = - {g^3 \over 2\;(16 \pi^2)^2}\;ln\Bigl({1 \over \eta}\Bigr)\;ln(1-x) \;,\eqno(7.14)$$ \noindent where $p_2^+=x p_1^+$. The complete set of diagrams in figure 5c, with the accompanying one-loop corrections, yield, $$\delta v_{4M} = - {g^3 \over 2\;(16 \pi^2)^2}\;ln\Bigl({1 \over \eta}\Bigr)\Bigl[ln(1-x)+ln(1-y)+ln(1-z)\Bigr] \;, \eqno(7.15)$$ \noindent where I have again chosen $p_2^+=x p_1^+$, $p_3^+=y p_1^+$, and $p_4^+=z p_1^+$. The first diagram in figure 5d is redisplayed in figure 8a, along with two one-loop corrections with which it must be grouped. 
In this case there are two sub-diagrams that lead to irrelevant counterterms, as shown in figures 8b and 8c. After using coupling coherence to uniquely determine the irrelevant vertices in the one-loop diagrams, one finds that the diagrams in figure 8a yield, $$\delta v_{4M} = {g^3 \over 2\;(16 \pi^2)^2}\;ln\Bigl({1 \over \eta}\Bigr)\Biggl[ {1 \over 2}\;ln\Bigl({1 \over \eta}\Bigr) - {x \over 1-x}\; ln(x) \Biggr] \;.\eqno(7.16)$$ The complete set of diagrams in figure 5d, with the accompanying one-loop corrections, yield $$\delta v_{4M} = {g^3 \over 2\;(16 \pi^2)^2}\;ln\Bigl({1 \over \eta}\Bigr)\Bigl[ {3 \over 2}\; ln\Bigl({1 \over \eta}\Bigr) - {x \over 1-x}\; ln(x) - {y \over 1-y}\; ln(y) - {z \over 1-z}\; ln(z) \Bigr] \;. \eqno(7.17)$$ The final set of two-loop contributions to the marginal part of $\delta v_4$, along with the accompanying ${\cal O}(g^3)$ one-loop contributions, is shown in figure 9a. These diagrams yield $$\delta v_{4M} = {3 g^3 \over 4\;(16 \pi^2)^2} \; \Biggl(ln\Bigl( {1 \over \eta}\Bigr)\Biggr)^2 \;. \eqno(7.18)$$ At this point we have all ${\cal O}(g^3)$ contributions to the renormalization of the marginal one-boson to three-boson part of $u_4$, except for one-loop contributions that involve one of the ${\cal O}(g^2)$ marginal operators that we must find using coupling coherence. For the benefit of later calculations it is convenient to list the complete result using coordinates in which $p_i^+=x_i{\mit P}^+$.
The complete set of two-loop contributions to the marginal one-boson to three-boson part of $\delta u_4$, along with all ${\cal O}(g^3)$ one-loop contributions that result from the ${\cal O}(g^2)$ irrelevant operators determined by coupling coherence, yields: $$\eqalign{ \delta u_{4M} = &{g^3 \over 2\;(16 \pi^2)^2} \Biggl\{ -{1 \over 3}\;ln\Bigl({1 \over \eta}\Bigr) + {9 \over 2} \; \Biggl(ln\Bigl({1 \over \eta}\Bigr)\Biggr)^2 \cr &\qquad-2 ln\Bigl({1 \over \eta}\Bigr) \Bigl[ ln\Bigl(x_1-x_2\Bigr)+\;ln\Bigl(x_1-x_3\Bigr)+\;ln\Bigl(x_1-x_4\Bigr) -3\;ln\Bigl(x_1\Bigr)\Bigr] \cr &\qquad-ln\Bigl({1 \over \eta}\Bigr)\;\; \Bigl[ \Bigl({x_2 \over x_1-x_2}- {x_2 \over x_1-x_3}-{x_2 \over x_1-x_4}\Bigr)\;ln\Bigl({x_2 \over x_1}\Bigr) \cr &\qquad\qquad\qquad\qquad\qquad + \Bigl({x_3 \over x_1-x_3}- {x_3 \over x_1-x_2}-{x_3 \over x_1-x_4}\Bigr)\;ln\Bigl({x_3 \over x_1}\Bigr)\cr &\qquad\qquad\qquad\qquad\qquad\qquad +\Bigl({x_4 \over x_1-x_4}- {x_4 \over x_1-x_2}-{x_4 \over x_1-x_3}\Bigr)\;ln\Bigl({x_4 \over x_1}\Bigr) \Bigr]\Biggr\} \;.} \eqno(7.19)$$ This result includes four separate functions of longitudinal momenta that must be considered in the renormalization group analysis, each symmetric under the interchange of outgoing momenta. Any one or all of these may appear in $u_4$ at ${\cal O}(g_n^2)$. At least one of them must appear at this order if the Hamiltonian satisfies the coupling constant coherence conditions, as was demonstrated at the end of Section {\uppercase\expandafter{\romannumeral6 }}, so there are several possibilities that should be studied. A similar calculation that is slightly more tedious leads to the two-loop contributions to the two-boson to two-boson marginal part of $\delta u_4$. 
The complete set of two-loop contributions to this part of $\delta u_4$, along with all ${\cal O}(g^3)$ one-loop contributions that result from the ${\cal O}(g^2)$ irrelevant operators determined by coupling coherence, yields: $$\eqalign{ \delta u_{4M} = &{g^3 \over 2\;(16 \pi^2)^2} \Biggl\{ -{1 \over 3}\;ln\Bigl({1 \over \eta}\Bigr) + {9 \over 2} \; \Biggl(ln\Bigl({1 \over \eta}\Bigr)\Biggr)^2 \cr &\qquad -2\; ln\Bigl({1 \over \eta}\Bigr) \Bigl[ln(|x_1-x_3|)+ln(|x_1-x_4|) -2\; ln(x_1+x_2) \Bigr] \cr &\qquad+ ln\Bigl({1 \over \eta}\Bigr) \Bigl[ \Bigl( {x_1 \over x_1+x_2}+{x_1 \over x_1-x_3}+ {x_1 \over x_1-x_4}\Bigr)ln\Bigl({x_1 \over x_1+x_2}\Bigr) \cr &\qquad\qquad\qquad\qquad+\Bigl( {x_2 \over x_1+x_2}+{x_2 \over x_2-x_3}+{x_2 \over x_2-x_4}\Bigr) ln\Bigl({x_2 \over x_1+x_2}\Bigr) \cr &\qquad\qquad\qquad\qquad\qquad + \Bigl({x_3 \over x_3+x_4}+{x_3 \over x_3-x_1}+ {x_3 \over x_3-x_2}\Bigr)ln\Bigl({x_3 \over x_3+x_4}\Bigr) \cr &\qquad\qquad\qquad\qquad\qquad\qquad +\Bigl({x_4 \over x_3+x_4}+{x_4 \over x_4-x_1}+ {x_4 \over x_4-x_2}\Bigr)ln\Bigl({x_4 \over x_3+x_4}\Bigr) \Bigr] \Biggr\} \;.} \eqno(7.20)$$ \noindent Here I use the momenta in figure 3b, again choosing $p_i^+=x_i {\mit P}^+$. There are four functions of longitudinal momenta that appear in this correction to the marginal operator, all of which must be included in the renormalization group analysis. A complete renormalization group analysis of the marginal operator requires us to at least introduce all of the functions of longitudinal momentum fractions appearing above in the marginal part of $\delta u_4$, which I indicate as ${\mathaccent "7E g}(x_1,x_2,x_3,x_4)$. We should distinguish between the two-boson to two-boson and the one-boson to three-boson parts of this operator. The three-boson to one-boson interaction is not independent because the Hamiltonian is Hermitian. 
Thus, we are led to consider, $$\eqalign{ {\mathaccent "7E g}^{(2 \rightarrow 2)}&(x_i) = g+h^{(1)} \Bigl[ ln(|x_1-x_3|) +ln(|x_1-x_4|)\Bigr] +h^{(2)}\; ln(x_1+x_2) \cr &+ j^{(1)} \Bigl[ {x_1 \over x_1+x_2}\;ln\Bigl({x_1 \over x_1+x_2}\Bigr) +{x_2 \over x_1+x_2}\;ln\Bigl({x_2 \over x_1+x_2}\Bigr) \cr &\qquad\qquad + {x_3 \over x_3+x_4}\;ln\Bigl({x_3 \over x_3+x_4}\Bigr) + {x_4 \over x_3+x_4}\;ln\Bigl({x_4 \over x_3+x_4}\Bigr) \Bigr] \cr &+ j^{(2)} \Bigl[\Bigl( {x_1 \over x_1-x_3}+{x_1 \over x_1-x_4}\Bigr)ln\Bigl({x_1 \over x_1+x_2}\Bigr) + \Bigl( {x_2 \over x_2-x_3}+{x_2 \over x_2-x_4}\Bigr) ln\Bigl({x_2 \over x_1+x_2}\Bigr) \cr &+\qquad \Bigl({x_3 \over x_3-x_1}+{x_3 \over x_3-x_2}\Bigr)ln\Bigl({x_3 \over x_3+x_4}\Bigr) + \Bigl( {x_4 \over x_4-x_1}+ {x_4 \over x_4-x_2}\Bigr)ln\Bigl({x_4 \over x_3+x_4}\Bigr) \Bigr] \;,} \eqno(7.21)$$ $$\eqalign{ {\mathaccent "7E g}^{(1 \rightarrow 3)}&(x_i) = g+h^{(3)} \Bigl[ ln(x_1-x_2) + ln(x_1-x_3)+ln(x_1-x_4)\Bigr] + j^{(3)}\;ln(x_1) \cr &+j^{(4)} \Bigl[{x_2 \over x_1-x_2}\;ln\Bigl({x_2 \over x_1}\Bigr) + {x_3 \over x_1-x_3}\;ln\Bigl({x_3 \over x_1}\Bigr) + {x_4 \over x_1-x_4}\;ln\Bigl({x_4 \over x_1}\Bigr)\Bigr] \cr &-j^{(5)} \Bigl[ \Bigl({x_2 \over x_1-x_3}+{x_2 \over x_1-x_4}\Bigr)\;ln\Bigl({x_2 \over x_1}\Bigr) + \Bigl( {x_3 \over x_1-x_2} + {x_3 \over x_1-x_4}\Bigr)\;ln\Bigl({x_3 \over x_1}\Bigr) \cr &\qquad\qquad\qquad\qquad+ \Bigl( {x_4 \over x_1-x_2}+ {x_4 \over x_1-x_3}\Bigr)\;ln\Bigl({x_4 \over x_1}\Bigr) \Bigr] \;,} \eqno(7.22)$$ \noindent I have simply assumed that the constant, $g$, appearing in each of these terms is the same. This issue does not need to be resolved until one computes the ${\cal O}(g^3)$ corrections to the renormalization group equation for the running coupling, so I do not pursue it further. 
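Since the bosons are identical, ${\mathaccent "7E g}^{(2 \rightarrow 2)}$ must be symmetric under $x_1 \leftrightarrow x_2$ and under $x_3 \leftrightarrow x_4$; for the $h^{(1)}$ term this relies on longitudinal momentum conservation, $x_1+x_2=x_3+x_4$, which gives $|x_1-x_3|=|x_2-x_4|$ and $|x_1-x_4|=|x_2-x_3|$. A quick numerical check of Eq. (7.21), with illustrative values for the couplings:

```python
import math

def g22(x1, x2, x3, x4, g, h1, h2, j1, j2):
    """Eq. (7.21); the coupling values fed in below are illustrative only."""
    ln = math.log
    s12, s34 = x1 + x2, x3 + x4
    val = g + h1 * (ln(abs(x1 - x3)) + ln(abs(x1 - x4))) + h2 * ln(s12)
    val += j1 * (x1/s12 * ln(x1/s12) + x2/s12 * ln(x2/s12)
                 + x3/s34 * ln(x3/s34) + x4/s34 * ln(x4/s34))
    val += j2 * ((x1/(x1-x3) + x1/(x1-x4)) * ln(x1/s12)
                 + (x2/(x2-x3) + x2/(x2-x4)) * ln(x2/s12)
                 + (x3/(x3-x1) + x3/(x3-x2)) * ln(x3/s34)
                 + (x4/(x4-x1) + x4/(x4-x2)) * ln(x4/s34))
    return val

# momentum fractions with x1 + x2 = x3 + x4 (longitudinal conservation)
x1, x2, x3, x4 = 0.15, 0.45, 0.25, 0.35
args = (0.3, 0.02, 0.02, 0.005, 0.003)    # g, h1, h2, j1, j2 (illustrative)
a = g22(x1, x2, x3, x4, *args)
b = g22(x2, x1, x3, x4, *args)            # swap the incoming bosons
c = g22(x1, x2, x4, x3, *args)            # swap the outgoing bosons
print(a, b, c)                            # all three agree
```

The same swaps applied to the outgoing momenta of ${\mathaccent "7E g}^{(1 \rightarrow 3)}$ check the symmetry of Eq. (7.22) in the same way.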
I have also simply assumed that some of the operators appearing in ${\mathaccent "7E g}^{(2 \rightarrow 2)}$ and ${\mathaccent "7E g}^{(1 \rightarrow 3)}$ are ${\cal O}(g^2)$, with their couplings being $h^{(i)}$; while other operators are assumed to be at least ${\cal O}(g^3)$, with their couplings being $j^{(i)}$. These assumptions are justified {\it a posteriori}, and I do not try to prove that this solution is unique. I believe that the solution is unique, but a proof exceeds my patience. To proceed further we must generalize Eqs. (6.61) and (6.62) to allow for additional operators. The complete equations are, $$h^{(k)}_{n+1}=h^{(k)}_n-\sum_{l=1}^3\; c_h^{(k,l)}\;g_n\;h_n^{(l)} -d_h^{(k)}\;g_n^3+{\cal O}(g_n^4) \;,\eqno(7.23)$$ $$j^{(k)}_{n+1}=j^{(k)}_n-\sum_{l=1}^3\; c_j^{(k,l)}\;g_n\;h_n^{(l)} -d_j^{(k)}\;g_n^3+{\cal O}(g_n^4) \;.\eqno(7.24)$$ \noindent Eqs. (7.19) and (7.20) indicate, $$d_h^{(1)}=-{1 \over 2}\;d_h^{(2)}=d_h^{(3)}={ln(1/\eta) \over (16 \pi^2)^2} \;,\eqno(7.25)$$ $$d_j^{(1)}=d_j^{(2)}={1 \over 6}\;d_j^{(3)}=-d_j^{(4)}=-d_j^{(5)}= -{ln(1/\eta) \over 2\;(16 \pi^2)^2} \;.\eqno(7.26)$$ \noindent The calculation of the various coefficients $c_h^{(k,l)}$ and $c_j^{(k,l)}$ requires us to evaluate a set of second-order Hamiltonian diagrams in which one vertex comes from the constant $g$, and the second vertex comes from the appropriate marginal operator. Thus, to compute $c_j^{(1,2)}$, for example, we need to include the vertex corresponding to the operator whose strength is $h^{(2)}$ and determine the part of the resultant one-loop integral that leads to a function of longitudinal momenta that exactly matches the function in the operator with strength $j^{(1)}$. 
These calculations are straightforward, and the results are, $$c_h^{(1,3)}={1 \over 2}\;c_h^{(2,1)}=c_h^{(2,2)}= {1 \over 4}\;c_h^{(2,3)}=2\;c_h^{(3,2)}=2\;c_h^{(3,3)}={ln(1/\eta) \over 16\pi^2} \;,\eqno(7.27)$$ $$c_j^{(1,1)}=c_j^{(2,3)}={1 \over 3}\;c_j^{(3,1)}={1 \over 3}\; c_j^{(3,3)}=-c_j^{(4,3)}=-c_j^{(5,1)}= {ln(1/\eta) \over 16 \pi^2} \;.\eqno(7.28)$$ \noindent All others are zero. When one solves Eqs. (7.23) and (7.24) using these coefficients, it is found that each coupling $j^{(k)}$ is ${\cal O}(g_n^3)$ and that, $$h^{(1)}_n=h^{(2)}_n=h^{(3)}_n={1 \over 2\;(16\pi^2)}\;g_n^2 +{\cal O}(g_n^3)\;.\eqno(7.29)$$ \noindent These results correspond exactly with the counterterms required to restore Lorentz covariance and cluster decomposition to the boson-boson scattering amplitude and to the one-boson to three-boson Green's function. Thus, the coupling constant coherence conditions lead to a solution of the complete renormalization group equations that yields covariant results with cluster decomposition despite the fact that the cutoff violates these conditions. An exact renormalization group analysis apparently leads to cutoff-independent results with a cutoff Hamiltonian, and coupling constant coherence apparently allows one to find Hamiltonians that produce Lorentz covariant results. This is useless, however, unless one can make approximations with bounded errors. Moreover, if one simply wants to use renormalization group improved perturbation theory in powers of the canonical coupling, it should be clear from the above calculations that the light-front renormalization group is not the simplest tool at one's disposal. It is not obvious how one should estimate `errors'. One might consider fixing the initial Hamiltonian, and measuring the errors by computing differences in the results produced by the final Hamiltonian on a trajectory in comparison to those from the initial Hamiltonian. 
This is not a good measure, because in general we do not know the appropriate initial Hamiltonian nor can we compute with it, and one of the goals of the renormalization group is to formulate physical problems so that it is never necessary to explicitly deal with the initial Hamiltonian. In order to produce a meaningful discussion of errors, we should consider how a `typical' renormalization group calculation proceeds, and study how observables change as the approximation is systematically improved. This discussion was initiated at the end of Section {\uppercase\expandafter{\romannumeral2 }}, and some basic issues were discussed in Section {\uppercase\expandafter{\romannumeral6 }}; however, the results computed in this Section have a dramatic effect on the analysis of errors in the light-front renormalization group. In Wilson's perturbative Euclidean renormalization group calculations,\APrefmark{\rWILNINE} it is necessary to fix all relevant and marginal couplings at the lowest cutoff, and any irrelevant couplings at an upper cutoff. The upper cutoff is not chosen to be infinite, but should be sufficiently large that all final results are insensitive to the boundary values chosen for the irrelevant variables. It should not be so large that intolerable errors accumulate over the trajectory. One must approximate the marginal couplings over the entire trajectory, and use an iterative algorithm to compute the trajectory. The output is the irrelevant couplings at the lower cutoff, because all relevant and marginal couplings at this cutoff must be input. This is an extremely powerful procedure when there are a finite number of relevant and marginal couplings, because it allows one to obtain an accurate approximation to the endpoint of a complete renormalized trajectory by inputting a finite number of boundary values \APrefmark{\rWILTEN}. However, we have seen that the light-front renormalization group requires an infinite number of relevant and marginal operators. 
This means that a light-front renormalization group calculation requires a new type of approximation not considered in the Euclidean renormalization group. One must approximate the boundary conditions placed on the relevant and marginal variables at the lowest cutoff. There are three obvious approximations one should consider. First, one can approximate the transformation itself by dropping terms at a given order in $\delta H$. Second, one can approximate the trajectory by discarding specific operators; {\eighteenit e.g.}, all or some of the irrelevant operators. Third, one can employ coupling coherence to compute the Hamiltonian to a given order in the running canonical coupling and drop higher orders. One can also employ combinations of these approximations. I assume that Feynman perturbation theory is accurate, and estimate the errors from each approximation by identifying the order in Feynman perturbation theory at which an error first arises and then discussing the magnitude of this error. Such an analysis is of limited use if Feynman perturbation theory is not valid for the computation of low energy observables, which is exactly the case for which the light-front renormalization group is being developed; but this should give a rough guide to the problems we should study. Let us begin by approximating the transformation by dropping terms starting at some given order in $\delta H$. The simplest comparison we can make that is of any interest is between the ${\cal O}(\delta H)$ analysis and the ${\cal O}(\delta H^2)$ analysis. The linear analysis is trivial to complete. We set the irrelevant variables to arbitrary strengths, and find that they go to zero at the lower cutoff as the upper cutoff is taken to infinity. This leads to errors in Feynman perturbation theory that are ${\cal O}(g^2)$, at least in the irrelevant variables. This means for example, that the real part of the boson-boson scattering amplitude, computed in Eq. 
(6.53), contains errors of at least ${\cal O}(g^2\;M^2/\Lambda_{\cal N}^2)$. As long as one is studying scattering for states whose invariant mass is much less than the cutoff, these errors are small. This assumes that one inputs the correct marginal and relevant variables at the lower cutoff, $\Lambda_{\cal N}$. For example, if one does not input the marginal operators computed above to ${\cal O}(g^2)$ using coupling coherence, there are logarithmic errors shown in Eq. (6.59) that are arbitrarily large. As $w \rightarrow 0$, these errors diverge like $g^2 \;log(w)$; and as the longitudinal momenta of the outgoing bosons approach those of the incoming bosons, there are comparable logarithmic errors. For small $g$ these errors are suppressed relative to the leading term by one power of $g$, but the relative error can be arbitrarily large for scattering states of any invariant mass. The lesson here is that arbitrarily large errors arise when functions of longitudinal momenta diverge. Longitudinal divergences forced us to discard several candidate transformations, and we are seeing that the invariant-mass transformations do not completely control the spectrum of the longitudinal operators. This problem is so severe that its solution may require a completely different renormalization group than has been developed in this paper. If we improve the analysis by keeping ${\cal O}(\delta H^2)$ corrections to the trajectory, we automatically generate the correct irrelevant operators to ${\cal O}(g^2)$, even if we do not include the correct ${\cal O}(g^2)$ relevant and marginal counterterms at the lower cutoff. This means that we obtain Eq. (6.59) instead of Eq. (6.53) for the real part of the boson-boson scattering amplitude, for example. If the correct marginal operators are included in the Hamiltonian at the lower cutoff, we obtain Feynman results to ${\cal O}(g^2)$. 
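The insensitivity of the lower-cutoff Hamiltonian to the boundary values of the irrelevant variables can be seen in a minimal numeric sketch of a linearized flow. The contraction factor and the initial strengths below are illustrative assumptions, not values computed in the text; the only ingredient taken from the analysis is that an irrelevant variable is multiplied by a factor of magnitude less than one in each transformation step.

```python
# Minimal sketch of a linearized RG flow for a single irrelevant variable.
# Illustrative assumption: each transformation multiplies the strength by a
# fixed contraction factor 0 < lam < 1 (its linearized eigenvalue); a
# relevant variable would instead have lam > 1.
def run_irrelevant(w0, lam, steps):
    w = w0
    for _ in range(steps):
        w *= lam
    return w

# Two arbitrary boundary strengths at the upper cutoff flow toward the same
# value (zero) at the lower cutoff as the number of steps grows, so final
# results become insensitive to the boundary values chosen.
a = run_irrelevant(10.0, 0.5, 40)
b = run_irrelevant(-3.0, 0.5, 40)
```

This is the mechanism behind setting the irrelevant variables to arbitrary strengths at the upper cutoff: their contribution at the lower cutoff is suppressed by powers of the cutoff ratio, leaving the ${\cal O}(g^2\;M^2/\Lambda_{\cal N}^2)$ errors discussed above.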
Therefore, if we approximate the transformation at ${\cal O}(\delta H^2)$, and in addition we make a perturbative expansion in terms of the canonical coupling to ${\cal O}(g^2)$, and we impose the correct boundary conditions on the marginal and relevant operators at the lower cutoff, which means fixing functions of longitudinal momenta, we obtain the Feynman results to ${\cal O}(g^2)$. Of course, we do not need to make the additional perturbative expansion in powers of $g$, but we must make some additional approximations, because the second-order transformation generates an infinite number of vertices and each contains an entire function of momenta. When the transformation is approximated to ${\cal O}(\delta H^2)$, $u_2$ does not affect any of the other functions in the Hamiltonian, as discussed in Section {\uppercase\expandafter{\romannumeral6 }}. This means that the relevant operators produced by the transformation have no effect on the other operators and can be studied separately. There is a large host of additional approximations one might consider that are not perturbative in the canonical coupling and may be of interest. All of them produce `errors' at ${\cal O}(g^3)$; so if Feynman perturbation theory in powers of the canonical coupling is accurate, most of the additional approximations one can make offer little or no improvement to the perturbative approximation discussed in the last paragraph. One interesting approximation is to keep only the marginal part of $u_4$, and complete an analysis that is a generalization of the analysis leading up to Eq. (6.36) for the running canonical coupling. Since the boundary condition on this marginal operator includes functions of longitudinal momenta, the analysis in Section {\uppercase\expandafter{\romannumeral6 }}~ must be generalized to include such functions. 
This analysis reduces to the study of coupled one-dimensional integral equations, because the dependence of the marginal operator on transverse momentum is fixed, making all transverse integrals the same. Perhaps the most important question one can address with such an investigation is whether functions of longitudinal momenta with nonintegrable singularities are generated, since the boundary functions include logarithmic divergences. I believe that it is relatively easy to show that in the ${\cal O}(\delta H^2)$ analysis no such singularities arise. One can solve the coupled equations for the marginal part of $u_4$ `backwards', towards larger cutoffs, using the functions in Eqs. (7.21) and (7.22) as a boundary condition. In each step one simply convolves the functions produced in the previous step, which means that after any finite number of steps one effectively considers a multidimensional convolution of logarithms, and these are always finite. Any estimation of errors requires one to place bounds on the longitudinal functions. I have ignored this issue, except where the integrals over longitudinal momentum produce divergences. I have shown that this does not happen to lowest orders in the running canonical coupling when one uses the invariant-mass renormalization group; however, I have not shown that this does not happen in higher orders. Even if a perturbative expansion in powers of the canonical coupling is finite to every order, it is possible for some of the running variables in the invariant-mass renormalization group to diverge, as long as all such divergences cancel against one another when one re-expresses any physical result as a power series in terms of the canonical coupling. If one performs only an expansion in powers of the canonical coupling, and makes no further approximations, it is even possible to use the transformations that run cutoffs on the transverse and longitudinal momenta of individual particles. 
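The claim that convolutions of logarithms stay finite can be checked in a toy example. This is a minimal sketch assuming a multiplicative convolution over longitudinal momentum fractions, which is my illustrative choice of kernel rather than the precise convolution of the text: convolving $ln(1/x)$ with itself in this way gives $ln^3(1/x)/6$, which is finite for any $x>0$ even though each factor diverges as its argument vanishes.

```python
import math

def mult_convolve(x, n=200000):
    # Multiplicative convolution over momentum fractions,
    #   (f * f)(x) = integral_x^1 (dy/y) f(y) f(x/y),  with f(u) = ln(1/u),
    # evaluated by a midpoint rule after the substitution t = ln(1/y),
    # which turns the integrand into the smooth polynomial t (L - t).
    L = math.log(1.0 / x)
    h = L / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h          # t = ln(1/y) runs from 0 to L
        total += t * (L - t) * h   # ln(1/y) * ln(y/x)
    return total

x = 0.01
numeric = mult_convolve(x)
exact = math.log(1.0 / x) ** 3 / 6.0   # analytic result: ln^3(1/x)/6
```

The same substitution shows that each further convolution step produces a higher power of $ln(1/x)$, so any finite number of steps remains finite, in line with the argument above that the logarithmic singularities are integrable.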
One should be able to use coupling coherence with such single-particle cutoff transformations to again obtain the Feynman results to any order one desires. The problem is that these results depend on huge cancellations that must be precisely maintained. I have not found any reason to believe that nonintegrable singularities arise in the invariant-mass analysis; however, this is far from satisfactory. Suppose one uses coupling coherence and computes the Hamiltonian exactly to a given order in $g$, and then computes low energy results using this Hamiltonian. How large are the errors in perturbation theory? Obviously the results are exactly those of Feynman perturbation theory up to the order to which the Hamiltonian is computed, but beyond this order one encounters errors. For example, the canonical $\phi^4$ Hamiltonian is accurate to ${\cal O}(g)$, but if we compute the boson-boson scattering amplitude to ${\cal O}(g^2)$ we obtain the result in Eq. (6.53), with the same errors discussed above. The errors in perturbation theory are of ${\cal O}(g^2)$, but even when $g$ is small the errors can be arbitrarily large. This same type of error arises no matter how many orders in $g$ are included in the calculation of $H$. This problem may best be understood by thinking of the coupling as running with longitudinal momentum. The second order corrections to the Hamiltonian in Eqs. (7.21) and (7.22) show that the coupling decreases as the longitudinal momentum transferred through the vertex decreases. In fact, according to the perturbative analysis, the coupling becomes negative at sufficiently small longitudinal momentum transfer. The perturbative analysis breaks down when such logarithmic divergences arise, and one must find a method of re-summing these corrections. In this case, we are finding that even if the coupling is small for all longitudinal momentum transfers, we cannot expand the coupling for one momentum transfer in powers of the coupling at some drastically different momentum transfer. 
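The breakdown and its cure can be made concrete in a schematic model. In this minimal sketch the coefficient of the logarithm is an illustrative constant of one-loop size, not a value computed in the text: the second-order expression for a coupling running with longitudinal momentum transfer $w$ goes negative once the logarithm is large, while the re-summed (geometric-series) form stays positive and merely vanishes as $w \rightarrow 0$.

```python
import math

C = 1.0 / (16.0 * math.pi ** 2)   # illustrative one-loop-sized coefficient

def g_second_order(g0, w):
    # Second-order perturbative running: the coupling decreases with the
    # longitudinal momentum transfer w, and goes negative once
    # C * g0 * ln(1/w) exceeds 1, signaling the breakdown of the expansion.
    return g0 * (1.0 - C * g0 * math.log(1.0 / w))

def g_resummed(g0, w):
    # Re-summing the leading logarithms into a geometric series keeps the
    # coupling positive; it simply decreases toward zero as w -> 0.
    return g0 / (1.0 + C * g0 * math.log(1.0 / w))

g0 = 5.0
w_small = 1e-30
pert = g_second_order(g0, w_small)    # negative: perturbation theory fails
resum = g_resummed(g0, w_small)       # positive, smaller than g0
```

Expanding the re-summed form to second order in $g_0$ reproduces the perturbative expression; the two differ only by the higher-order logarithms whose re-summation is required, which illustrates why the coupling at one momentum transfer cannot be expanded in powers of the coupling at a drastically different transfer.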
This is the sort of problem the Euclidean renormalization group manages to avoid, but the invariant-mass renormalization group does not treat all components of the momentum on an equal footing, and this is a price that must be paid. \bigskip \noindent {\eighteenb {\uppercase\expandafter{\romannumeral8 }}. Conclusion} \medskip After defining several renormalization groups that might be of interest in the study of light-front field theory, I have found the Gaussian fixed points and completed the linear analysis about the Gaussian fixed point of interest for relativistic field theory. The subsequent second-order analyses of these transformations show that a perturbative expansion of a transformation about the Gaussian fixed point, in which some of the irrelevant operators are discarded, can only converge for transformations that remove states of higher `free' energy than all states retained, where the free energy is determined by the Gaussian fixed point. This constraint naturally leads to transformations that employ invariant-mass cutoffs. The linear analysis of invariant-mass transformations reveals that there are an infinite number of relevant and marginal variables, because functions of longitudinal momenta do not affect the invariant-mass scaling dimension of operators. While the linear analysis of a light-front transformation about the Gaussian fixed point is readily completed for arbitrary Hamiltonians, the second-order analysis has a complicated dependence on the interactions. One can easily write a general expression for the transformed Hamiltonian using Eq. (4.14). The relevant and marginal operators in $u_2$ do not affect other operators in the second-order light-front analysis when zero-modes are dropped, and this simplifies the analysis considerably. The only marginal interaction that directly enters the second-order analysis is the marginal part of $u_4$. 
If all irrelevant operators are dropped, only the relevant and marginal parts of $u_2$ and $u_4$ survive. Since the marginal part of $u_4$ contains an arbitrary function of longitudinal momenta, the second-order correction to the marginal part of $u_4$ involves a new function of longitudinal momenta and a complete analysis requires one to compute trajectories of such functions, in general. While an analysis that allows arbitrary functions is not extremely complicated, the simplest example is the trajectory generated when the initial value of the marginal part of $u_4$ is a constant, as suggested by canonical field theory. In this case the second-order correction leads to a new constant and not to a function of longitudinal momentum. This case was studied in detail, and it was shown that the canonical coupling decreases as the cutoff is lowered. This analysis is readily improved by retaining leading irrelevant operators, and allowing functions to appear in the marginal operator, but this was not done in this paper. The invariant-mass cutoff violates explicit Lorentz covariance and cluster decomposition, so the Hamiltonians one must investigate do not display these properties manifestly. If one uses the canonical $\phi^4$ Hamiltonian with an invariant-mass cutoff to compute Green's functions, it is not surprising to find that covariance and cluster decomposition are violated. If one arbitrarily chooses a more complicated renormalized Hamiltonian, there is no reason to expect that these properties are restored. One must select the correct functions of longitudinal momentum in the marginal and relevant operators at the lowest cutoff, and use the perturbative renormalization group to estimate the irrelevant operators at the lowest cutoff, to restore Lorentz covariance and cluster decomposition to observables. 
While it is possible to adjust these functions until correct results are obtained in perturbation theory, I showed that one can also use coupling constant coherence \APrefmark{\rPERWIL} to restore covariance and cluster decomposition. By insisting that the canonical coupling is the only variable that explicitly runs with the cutoff, and that all other variables depend on the cutoff through perturbative dependence on the canonical coupling, one uniquely fixes the relevant and irrelevant operators at the lowest cutoff to ${\cal O}(g^2)$. The marginal variables are not uniquely fixed until one completes a third-order calculation, and the correct results are not obtained unless ${\cal O}(\delta H^3)$ terms are kept in the transformation. When these third-order terms are retained, it is possible to use coupling coherence to fix the functions appearing in the marginal part of $u_4$ to ${\cal O}(g^2)$, and these functions are exactly those required to restore Lorentz covariance and cluster decomposition to observables computed in second-order perturbation theory. The appearance of entire functions in the relevant and marginal operators severely complicates the development of a perturbative light-front renormalization group. While it is encouraging to find that coupling coherence apparently fixes these functions without direct reference to Lorentz covariance and cluster decomposition in the output observables, the light-front renormalization group does not offer a convenient method for performing perturbative calculations. The Euclidean renormalization group is far more convenient for perturbative calculations, because there are a small number of relevant and marginal operators \APrefmark{\rWILNINE}. 
In gauge theories Euclidean cutoffs typically violate gauge invariance, and thereby covariance (with the lattice providing an example of how one can take advantage of irrelevant operators to maintain gauge-invariance \APrefmark{\rWILFORT}); but one should be able to use coupling coherence to restore these properties in a Euclidean renormalization group analysis, as was done for the light-front renormalization group in this paper. Thus, the appearance of functions in the relevant and marginal operators leads to computational complication of a perturbative light-front analysis in comparison to a Euclidean analysis at best. Many of the basic problems one encounters in trying to apply the light-front renormalization group to QCD have been discussed in this paper; however, several very important problems have been completely avoided. The most immediate problems result from divergences in QCD that are not regulated by the invariant-mass cutoff. In second-order perturbation theory, one finds that $u_2$ for both quarks and gluons diverges if one uses only an invariant-mass cutoff. There are logarithmic divergences that remain in one-loop longitudinal momentum integrals after the invariant-mass cutoff is imposed, and these require an additional regulator. These infrared longitudinal divergences do not appear in the one-loop corrections to the three-particle and four-particle interactions in QCD if one uses the canonical Hamiltonian, but they are avoided only because there are precise cancellations maintained by the canonical Hamiltonian \APrefmark{\rTHORN,\rPERQCD}. These cancellations require sectors of Fock space with different parton content to contribute with relative strengths given by perturbation theory, and there is no reason to believe that such cancellations are maintained nonperturbatively. 
In addition to new infrared divergences, a perturbative analysis using coupling coherence in QCD may reveal that the canonical coupling increases in strength for small longitudinal momentum transfer, rather than decreasing as found in the scalar theory. This would mean that one cannot simply re-sum the logarithmic corrections to the canonical coupling and produce a reasonable coupling that runs with longitudinal momentum transfer, because the coupling one obtains in this fashion diverges for a finite longitudinal momentum transfer. At this point it is not even obvious that these problems are unwelcome. When dealing with QCD one must balance seemingly contradictory goals. Because of asymptotic freedom a perturbative renormalization group analysis may enable one to lower the cutoff on invariant masses to a few GeV. At the same time, the theory should confine quarks and gluons, and confinement certainly cannot result from any interaction that can be treated perturbatively at all energies. Ideally only a few operators are required to accurately approximate confinement effects. If a large number of operators are required it is probably quite difficult to find them in an approximate analysis. Moreover, hopefully these interactions do not change particle number. If operators that change particle number diverge in strength, it is difficult to approximate the states. A renormalization group analysis that produces an accurate effective QCD Hamiltonian with a cutoff of a few GeV must either be able to treat the confining interactions nonperturbatively when eliminating high energy states, or these interactions must be perturbative in strength for high energy states and diverge in strength only for low energy states. Another problem that has been avoided in this paper is symmetry breaking. Unless the cutoff violates the symmetry of interest, one never finds symmetry breaking terms in a perturbative analysis. 
They must simply be added to the Hamiltonian by hand, and their strength must be tuned to reproduce an observable that is computed using the final Hamiltonian, or one must use coupling coherence. It is also possible for terms that do not violate any symmetry to arise in this fashion. Any effective interaction associated with a vacuum condensate, or more generally any effective interaction that does not depend on the canonical coupling analytically, must be introduced directly in the Hamiltonian if one performs a light-front calculation with the zero modes removed, because the vacuum is forced to be trivial in this case. Once again, there are two attitudes one can take concerning vacuum-induced interactions. One can emphasize the fact that the light-front offers little or no advantage to anyone interested in solving the QCD vacuum problem; or one can emphasize the fact that light-front field theory may offer anyone who is primarily interested in building hadrons a way to reformulate the vacuum problem \APrefmark{\rGLAZFOUR}. The best way to check whether a cutoff QCD Hamiltonian resulting from a perturbative light-front renormalization group analysis is even crudely accurate is to diagonalize it and determine whether it produces even a crude description of low-lying hadrons. One can use a trial wave function analysis once the Hamiltonian is selected, which is a powerful nonperturbative tool that is not usually available in a field theoretic analysis. It seems highly unlikely that a perturbative QCD Hamiltonian will produce reasonable results, and one knows that it will not produce the mass splittings associated with chiral symmetry breaking if zero modes are discarded. It is necessary to introduce new operators to produce symmetry breaking in the spectrum, and it is almost certainly necessary to introduce additional vacuum-induced interactions to produce a reasonable description of the mass splittings associated with confinement. 
Many of the problems one expects to encounter in such an endeavor are familiar from the histories of the constituent quark model \APrefmark{\rKOKK} and lattice QCD \APrefmark{\rREBBI}. The constituent quark model indicates that it should be possible to obtain accurate models of all low-lying hadrons with cutoff Hamiltonians, and a few degrees of freedom. A constituent picture may arise naturally from light-front QCD if the parton number-conserving interactions in the cutoff Hamiltonians produce mass gaps between sectors with different parton content that are reasonably large in comparison to the cutoff. The easiest Hamiltonians to use in producing simple constituent hadrons may employ fairly low invariant-mass cutoffs, making them analogous to strong-coupling lattice actions \APrefmark{\rWILFORT}. To obtain reasonable results with `strong-coupling' Hamiltonians, it may be necessary to simply tune the strength of various operators by hand, using phenomenology as a guide when selecting candidate operators. It is essential to show that if one makes such uncontrolled approximations it is possible to reproduce low energy hadronic observables. On the other hand, if one wants to work with large cutoffs for which the canonical coupling remains small, it may be possible to use somewhat simpler Hamiltonians; but then the Fock space required to diagonalize the Hamiltonian is large and there is no reason to expect that a few constituents will yield even crude approximations. Hopefully an intermediate ground exists in which low-lying hadrons are adequately approximated as few parton states, while the Hamiltonian is not forced to be absurdly complicated by the low cutoff. The light-front renormalization group may eventually lead to the solution of some of the most important and interesting problems encountered in the study of low-energy QCD. At this point it merely offers a new perspective on these problems. 
This perspective differs radically from those offered by Euclidean field theory, and it is not surprising that difficult new challenges appear. While the primary accomplishment of this article is to illustrate how coupling coherence may allow one to obtain unique answers in perturbation theory without maintaining manifest covariance and gauge invariance, I hope that this article also elucidates some of the challenges we must meet to use light-front field theory to solve QCD. \bigskip \noindent{\eighteenb Acknowledgments } \medskip \indent I want to thank Ken Wilson, without whose help I could not have completed this work. I also owe a great debt to Avaroth Harindranath, with whom I have discussed light-front field theory for several enjoyable years. I have profited from discussions of the renormalization group with Edsel Ammons, Richard Furnstahl and Tim Walhout, who made many useful comments on the text. In addition I have benefitted from discussions with Armen Ezekelian, Stanislaw G{\l}azek, Paul Griffin, Kent Hornbostel, Yizhang Mo, Steve Pinsky, Dieter Schuette, Junko Shigemitsu, Brett Van de Sande, and Wei-Min Zhang. This work was supported by the National Science Foundation under Grant No. PHY-9102922 and the Presidential Young Investigator Program through Grant PHY-8858250. \vfill \eject \bigskip \noindent {\eighteenb Appendix A: Canonical Light-Front Scalar Field Theory} \medskip Canonical light-front scalar field theory is discussed by Chang, Root and Yan \APrefmark{\rCHAONE,\rCHATWO}, who derive many of the results below. This Appendix is not intended as an introduction to light-front field theory, but merely collects some useful formulae and establishes notation. In this paper I choose the light-front time variable to be $$ x^{+}=x^{0}+x^{3}, \eqno(A.1)$$ and the light-front longitudinal space variable to be $$ x^{-} = x^{0}-x^{3} . 
\eqno(A.2)$$ \noindent With these choices the scalar product is $$a \cdot b = a^\mu b_\mu = {1 \over 2} a^+ b^- + {1 \over 2} a^- b^+ - {\eighteenb a}^\perp \cdot {\eighteenb b}^\perp \;,\eqno(A.3)$$ \noindent the derivative operators are $$\partial^\pm = 2 {\partial \over \partial x^\mp} \;,\eqno(A.4)$$ \noindent and the four-dimensional volume element is $$d^4x = {1 \over 2} dx^+ dx^- d^2x^\perp \;.\eqno(A.5)$$ The Lagrangian is independent of the choice of variables, $${\cal L} = {1 \over 2} \partial^\mu \phi \partial_\mu \phi - {1 \over 2} \mu^2 \phi^2 - {\lambda \over 4!} \phi^4 \;, \eqno(A.6)$$ \noindent for example. The commutation relation for the boson field is $$ [\phi(x^{+}, x^{-},\vec x^\perp), \partial^{+} \phi(x^{+}, y^{-}, \vec y^\perp) ] = i \delta^3(x-y). \eqno(A.7)$$ In order to derive the Hamiltonian and other Poincar\'e generators, one typically begins with the energy-momentum tensor \APrefmark{\rCHAONE,\rCHATWO}. While it is certainly possible to derive a formal, ill-defined expression for the complete tensor, I merely list the canonical Hamiltonian, $$\eqalign{ H = \int & dx^-d^2x^\perp \biggl[{1 \over 2} \;\;\phi(x) \biggl(-\partial^{\perp 2}+\mu^2\biggr) \phi(x) + {\lambda \over 4!} \phi^4(x) \biggr] .} \eqno(A.8)$$ Eq. (A.8) provides a formal definition of the Hamiltonian. The boson field can be expanded in terms of a free particle basis, $$ \phi(x) = \int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; \;\theta(q^+)\;\biggl[ a(q) e^{-iq \cdot x} + a^\dagger(q) e^{iq \cdot x}\biggr], \eqno(A.9)$$ \noindent with $$ [a(q),a^\dagger(q')] = 16 \pi^3 q^+ \delta^3(q-q'). \eqno(A.10) $$ \noindent If we use $$\phi(x)=\int{d^4k \over (2\pi)^3}\;\delta(k^2-\mu^2)\;\phi(k)\;e^{-ik\cdot x} ,\eqno(A.11)$$ \noindent we find that, $$\phi(q)=a(q)\;\;\;[q^+>0]\;\;\;\;,\;\;\;\; \phi(q)=-a^\dagger(-q)\;\;[q^+<0] \;. \eqno(A.12)$$ \noindent In the remainder of this Appendix I simply assume that all longitudinal momenta are greater than zero. 
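As a consistency check on these conventions, the scalar product (A.3) gives $q^2 = q^+ q^- - {\eighteenb q}^{\perp 2}$, so the on-shell condition $q^2 = \mu^2$ in Eq. (A.11) fixes the light-front energy to be $q^- = (q^{\perp 2} + \mu^2)/q^+$. A minimal numeric sketch (the sample momentum components are arbitrary):

```python
def dot(a, b):
    # Light-front scalar product of Eq. (A.3):
    #   a.b = (1/2) a+ b- + (1/2) a- b+ - aperp . bperp
    ap, am, a1, a2 = a
    bp, bm, b1, b2 = b
    return 0.5 * ap * bm + 0.5 * am * bp - (a1 * b1 + a2 * b2)

def on_shell(qplus, q1, q2, mu):
    # Light-front energy of a free particle of mass mu:
    #   q- = (qperp^2 + mu^2) / q+,  with q+ > 0.
    qminus = (q1 * q1 + q2 * q2 + mu * mu) / qplus
    return (qplus, qminus, q1, q2)

mu = 1.3
q = on_shell(2.7, 0.4, -1.1, mu)
qsq = dot(q, q)   # should equal mu^2
```

Note that for $q^+ > 0$ the energy $q^-$ is positive and grows both at large transverse momentum and at small longitudinal momentum, which is the origin of the `ultraviolet' and `infrared' divergences discussed below.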
The free part of the Hamiltonian in Eq. (A.8) is, $$H_0=\int dx^-d^2x^\perp {1 \over 2} \;\phi(x) \biggl(-\partial^{\perp 2}+\mu^2\biggr) \phi(x) = \int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; \; \Bigl({q^{\perp 2}+\mu^2 \over q^+} \Bigr) \;a^\dagger(q) a(q) \;.\eqno(A.13)$$ \noindent The interaction term is more complicated and I do not expand it in terms of creation and annihilation operators. Fock space eigenstates of the free Hamiltonian are, $$|q_1,q_2,...\rangle = a^\dagger(q_1) a^\dagger(q_2) \cdot\cdot\cdot |0\rangle \;,\eqno(A.14)$$ \noindent with the normalization being, $$\langle k|q\rangle = 16 \pi^3 q^+ \delta(k^+-q^+)\delta^2(k^\perp - q^\perp) \;.\eqno(A.15)$$ \noindent Completeness implies that $$1= |0\rangle \langle 0| \;+\;\int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; |q\rangle \langle q| \;+\; {1 \over 2!} \int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; \int {dk^+ d^2k^\perp \over 16 \pi^3 k^+}\; |q,k\rangle \langle q,k| \;+\; \cdot\cdot\cdot \;,\eqno(A.16)$$ \noindent so we can write the free Hamiltonian as $$\eqalign{ H_0 &= \int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; \Bigl({q^{\perp 2}+\mu^2 \over q^+} \Bigr) |q\rangle \langle q| \cr &+\; {1 \over 2!} \int {dq^+ d^2q^\perp \over 16 \pi^3 q^+}\; \int {dk^+ d^2k^\perp \over 16 \pi^3 k^+}\; \Bigl({q^{\perp 2}+\mu^2 \over q^+} + {k^{\perp 2}+\mu^2 \over k^+} \Bigr) |q,k\rangle \langle q,k| \cr &+\; \cdot\cdot\cdot \;.}\eqno(A.17)$$ In order to provide further orientation, let me consider the problem of computing the connected Green's functions for this theory. This problem in equal-time field theory is discussed in many textbooks, and has been reviewed for light-front field theory \APrefmark{\rLEPAGE-\rBROONE}. To find the Green's functions of a theory, consider the overlap of a state $\vert i(0) \rangle$ at light-front time $x^+=0$ with a second state $\vert f(\tau) \rangle$ at time $x^+=\tau$. 
We split the Hamiltonian into a free part $H_0$ and an interaction $V$, and find that $$\eqalign{ \langle f(\tau) \vert i(0) \rangle &= (16 \pi^3) \delta^3(P_f-P_i)\; G(f,i;\tau) = \langle f \vert e^{-i H \tau/2} \vert i \rangle \cr &= i \int {d\epsilon \over 2 \pi} e^{-i \epsilon \tau /2} (16 \pi^3) \delta^3(P_f-P_i)\; G(f,i;\epsilon) .}\eqno(A.18)$$ \noindent This definition differs slightly from that given in some other places. It is then straightforward to demonstrate that $$\eqalign{(16 \pi^3) \delta^3(P_f-P_i)\; G(f,i;\epsilon) &= \langle f \vert {1 \over \epsilon-H+i0_+} \vert i \rangle \cr &= \langle f \vert {1 \over \epsilon-H_0+i0_+} + {1 \over \epsilon-H_0+i0_+} V {1 \over \epsilon-H_0+i0_+} \cr &+ {1 \over \epsilon-H_0+i0_+} V {1 \over \epsilon-H_0+i0_+} V {1 \over \epsilon-H_0+i0_+} + \cdot \cdot \cdot \vert i \rangle .}\eqno(A.19)$$ \noindent Operator products are evaluated by inserting a complete set of eigenstates of $H_0$ between interactions, using Eq. (A.16), and using $$H_0 \vert q_1,q_2,...\rangle = \Bigl[\sum_i q_i^-\Bigr] \vert q_1,q_2,...\rangle \;, $$ $$q_i^- = {q_i^{\perp 2}+\mu^2 \over q_i^+} \;,\eqno(A.20)$$ \noindent to replace operators occurring in the denominators with c-numbers. Divergences arise from high energy states that are created and annihilated by adjacent $V$'s, for example. All divergences in perturbation theory come from intermediate states (internal lines) that have large free energy. The free energy is given by Eq. (A.20), so divergences occur in diagrams containing internal lines that carry large perpendicular momentum (`ultraviolet' divergences) and/or small longitudinal momentum (`infrared' divergences). Given a light-front Hamiltonian, $H=H_0+V$, one can determine the rules for constructing time-ordered perturbation theory diagrams. These diagrams are similar to the Hamiltonian diagrams used in the text, but there are important differences related to the energy denominators. 
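The operator expansion in Eq. (A.19) is a Neumann (geometric) series for the resolvent. A minimal sketch in a two-state toy model checks that the partial sums converge to the exact resolvent when the interaction is weak; the matrices are arbitrary illustrations rather than a light-front Hamiltonian, and $\epsilon$ is chosen away from the spectrum so the $+i0_+$ prescription can be dropped.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def inv2(A):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Free energies on the diagonal and a weak interaction.
eps = 5.0
H0 = [[1.0, 0.0], [0.0, 2.0]]
V = [[0.0, 0.3], [0.3, 0.1]]

# Exact resolvent 1/(eps - H0 - V).
H = mat_add(H0, V)
exact = inv2([[eps - H[0][0], -H[0][1]], [-H[1][0], eps - H[1][1]]])

# Partial sums of G0 + G0 V G0 + G0 V G0 V G0 + ..., with G0 = 1/(eps - H0)
# diagonal in the free basis, mirroring the insertion of complete sets of
# H0 eigenstates between interactions.
G0 = inv2([[eps - H0[0][0], 0.0], [0.0, eps - H0[1][1]]])
series = G0
term = G0
for _ in range(40):
    term = mat_mul(mat_mul(term, V), G0)
    series = mat_add(series, term)
```

In the field theory the role of the diagonal factors $1/(\epsilon - H_0)$ is played by the energy denominators of Eq. (A.20), and the series converges term by term only when no intermediate state makes a denominator large and negative without bound.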
The diagrammatic rules allow us to evaluate all terms that occur in the expansion for the Green's functions given in eq. (A.19). For $\phi^4$ canonical field theory the diagrammatic rules for time-ordered connected Green's functions are: \item{(1)} Draw all allowed time-ordered diagrams with the quantum numbers of the specified initial and final states on the external legs. Assign a separate momentum $k^\mu$ to each internal and external line, setting $k^-=(k_\perp^2+\mu^2)/k^+$ for each line. The momenta are directed in the direction time flows. \item{(2)} For each intermediate state there is a factor $\bigl(\epsilon-\sum_i k_i^-+i0_+\bigr)^{-1}$, where the sum is over all particles in the intermediate state. \item{(3)} Integrate $\int {dk^+ d^2k_\perp \over 16 \pi^3 k^+}\; \theta(k^+)$ for each internal momentum. \item{(4)} For each vertex associate a factor of $\lambda \; \delta(K^+_{in}-K^+_{out})\;\delta^2(K^\perp_{in}-K^\perp_{out})$, where $K^+_{in}$ is the sum of momenta entering a vertex, etc. \item{(5)} Multiply the contribution of each time-ordered diagram by a symmetry factor $1/S$, where $S$ is the order of the permutation group of the internal lines and vertices leaving the diagram unchanged with the external lines fixed. To obtain the Green's function defined in Eq. (A.19), propagators for the incoming and outgoing states must be added, and one must divide by an overall factor of $(16 \pi^3)\delta^3(P_f-P_i)$. \vfill \eject \bigskip \noindent {\eighteenb Appendix B: Example Calculation of $u_2$ in the Effective Hamiltonian} \medskip In this Appendix the second-order change in the effective Hamiltonian shown in figure 3a is computed. 
Let the first term in the Hamiltonian be $$ H = \int d\tilde{q}_1 \; d\tilde{q}_2 \; (16 \pi^3) \delta^3(q_1-q_2) \; u_2(-q_1,q_2) \;a^\dagger(q_1) a(q_2)\;+\cdot\cdot\cdot \;.\eqno(B.1)$$ \noindent Then the matrix element of this operator between single particle states is $$\langle p'|H|p\rangle = \langle 0|\;a(p')\; H\; a^\dagger(p)\;|0\rangle = (16 \pi^3) \delta^3(p'-p)\;u_2(-p,p) \;.\eqno(B.2)$$ \noindent Thus, we easily determine $u_2$ from the matrix element. It is easy to compute matrix elements between other states. Eq. (4.14) gives us the matrix elements of the effective Hamiltonian generated when the cutoff is lowered, and we are interested in a second-order term generated by the interactions in Eq. (3.6), $$\eqalign{v= &{1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3-q_4) \cr &\qquad\qquad\qquad\qquad u_4(-q_1,-q_2,-q_3,q_4)\; a^\dagger(q_1) a^\dagger(q_2) a^\dagger(q_3) a(q_4) \;,}\eqno(B.3)$$ $$\eqalign{v^\dagger= &{1 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1-q_2-q_3-q_4) \cr &\qquad\qquad\qquad\qquad\qquad u_4(-q_1,q_2,q_3,q_4)\; a^\dagger(q_1) a(q_2) a(q_3) a(q_4) \;.}\eqno(B.4)$$ Since we will find that the incoming and outgoing momenta must be the same, implying that $\epsilon_a$ and $\epsilon_b$ in Eq. (4.14) are the same in this case, I combine the two second-order terms and obtain, $$\eqalign{ &(16 \pi^3)\delta^3(p'-p)\;\delta v_2(-p,p)= \cr &\qquad\qquad\qquad\qquad {1 \over 3!} \langle p'|\; v^\dagger \;\; \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 \; \Theta(k_1,k_2,k_3)\; { |k_1,k_2,k_3\rangle \langle k_1,k_2,k_3| \over p^--k_1^--k_2^--k_3^-} \;\;v\;|p\rangle \;;}\eqno(B.5)$$ \noindent where $p^-={\bf p^\perp}^2/p^+$, etc. $\Theta(k_1,k_2,k_3)$ is the appropriate cutoff, which is determined by the transformation employed. 
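Eq. (B.5) has the familiar structure of second-order perturbation theory: a sum over intermediate states weighted by inverse energy denominators. A minimal numerical sketch (toy energies and coupling of my own choosing, not the field-theory matrix elements) shows how this structure reproduces an exact eigenvalue shift through ${\cal O}(v^2)$:

```python
import math

# Toy model of Eq. (B.5): one 'single-particle' state with free energy e_p,
# coupled by a matrix element v to a single 'three-particle' intermediate
# state with free energy e_k (hypothetical numbers, for illustration only).
e_p, e_k, v = 1.0, 5.0, 0.3

# Second-order effective shift: |<k1,k2,k3|v|p>|^2 / (p^- - k^-),
# the energy-denominator structure of Eq. (B.5).
delta_v2 = v ** 2 / (e_p - e_k)

# Exact eigenvalue of the 2x2 Hamiltonian [[e_p, v], [v, e_k]] that
# reduces to e_p as v -> 0:
mean, half = (e_p + e_k) / 2.0, (e_k - e_p) / 2.0
exact = mean - math.sqrt(half ** 2 + v ** 2)

# The second-order formula reproduces the exact shift up to O(v^4):
assert abs((e_p + delta_v2) - exact) < v ** 4
```

The field-theory calculation differs only in that the sum over intermediate states becomes the cutoff integral over three-particle phase space in Eq. (B.5).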
One can readily verify that $$\langle p'|v^\dagger|k_1,k_2,k_3\rangle = (16 \pi^3) \delta^3(p'-k_1-k_2-k_3) \; u_4(-p',k_1,k_2,k_3) \;, \eqno(B.6)$$ $$\langle k_1,k_2,k_3|v|p\rangle= (16 \pi^3) \delta^3(p-k_1-k_2-k_3) \; u_4(-k_1,-k_2,-k_3,p) \;. \eqno(B.7)$$ \noindent Substituting these results in Eq. (B.5) leads to the final result, $$\eqalign{ \delta v_2(-p,p)=&{1 \over 3!} \int d\tilde{k}_1 d\tilde{k}_2 d\tilde{k}_3 \; \Theta(k_1,k_2,k_3)\; (16 \pi^3)\delta^3(p-k_1-k_2-k_3) \cr &\qquad\qquad\qquad\qquad {u_4(-p,k_1,k_2,k_3)\; u_4(-k_1,-k_2,-k_3,p) \over p^--k_1^--k_2^--k_3^-} \;.}\eqno(B.8)$$ \noindent To obtain $\delta u_2$ from $\delta v_2$ we must complete the set of re-scalings appropriate to the transformation. If we want to study spectator dependence, we simply need to compute matrix elements between states containing additional particles, which directly affect the cutoff function $\Theta$ for some transformations. \bigskip \noindent {\eighteenb Appendix C: Physical Masses in the Light-Front Renormalization Group} \medskip What happens if the physical particles are massive? In this case one expects to find at least one mass parameter in the relevant mass operator that is an independent function of the cutoff. I assume that this is the mass term that produces the correct relativistic dispersion relation for free particles ({\eighteenit i.e.}, the part of $u_2$ that does not depend on either transverse or longitudinal momenta), and I call this the `physical mass', even though it is a running mass in the Hamiltonian that should not be confused with the mass of a physical particle. The complete `mass' operator contains an infinite number of relevant operators ({\eighteenit i.e.}, functions of longitudinal momentum fractions that produce a complicated dispersion relation) even in the critical theory, and one should expect new functions to appear in the massive theory. 
In the massive theory, one can again use the conditions Wilson and I have proposed, selecting a single coupling and a single mass to explicitly run with the cutoff, while all other operators depend on the cutoff only through their dependence on this coupling and mass. In this Appendix I briefly study a few of the consequences of adding a physical mass. This study is both incomplete and preliminary. I explicitly consider only the second-order behavior of the transformation, and I focus on the portion of the Hamiltonian trajectory near the critical Gaussian fixed point where the physical mass is exponentially small in comparison to the cutoff. The physical mass is proportional to some $\Lambda_{\cal N}$, which sets the scale at which the physical mass can no longer be regarded as small and must be treated nonperturbatively. Since $\Lambda_{\cal N}=2^{-{\cal N}}\Lambda_0$, as we construct a renormalized trajectory we expect the physical mass in $H^{\Lambda_0}_{\Lambda_0}$, $m_0$, to be exponentially small in comparison to $\Lambda_0$. In other words, on a renormalized trajectory we should find an infinite number of Hamiltonians near the critical fixed point in an asymptotically free theory. In a scalar theory, if we let $\Lambda_0 \rightarrow \infty$ while keeping the initial coupling small, we expect to find the trajectory approach the critical fixed point and then asymptotically approach the line between the critical and trivial Gaussian fixed points, with the strength of the interaction going to zero. For an infinitely long trajectory there are interactions only over an infinitesimal initial portion of the trajectory, after which the trajectory misses the critical fixed point by an infinitesimal amount and then follows infinitesimally close to the line of Gaussian Hamiltonians joining the critical and trivial Gaussian fixed points. After adding a physical mass to the Hamiltonian, the analysis that led to Eq.
(6.11) yields $$\eqalign{ \delta v_4 = {g^2 \over 4}\; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Bigl[{{\bf r^\perp}^2+m^2 \over y(1-y)} - {{\bf s^\perp}^2+m^2 \over x(1-x)}\Bigr]^{-1} \cr &\theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2 \over x(1-x)} - \Lambda_1^2\Bigr) \;.}\eqno(C.1)$$ \noindent Note that I have placed the physical mass in the denominator. If we are considering perturbations about the critical Gaussian fixed point, we should include the mass as an interaction; but we can find the results of such a treatment here by expanding in powers of the physical mass. When the physical mass is an independent parameter in the Hamiltonian, and it is small, we expect to find a well-defined perturbative expansion in powers of the running coupling and in powers of the running mass. Here I seek a perturbative renormalization group in which such expansions exist. In Eq. (C.1) I have used an invariant-mass cutoff that contains no mass, and the same problem that plagued $T^\perp$ is encountered; the denominator can change sign inside the integral. There is nothing to prevent $m^2/y/(1-y)-m^2/x/(1-x)$ from becoming large and positive for small $y$. If we expand the denominator in powers of ${\bf r^\perp}^2$, and complete the integrals, the coefficients in the expansion become arbitrarily large, producing divergences in subsequent transformations. The only simple solution that I have found is to abandon the first invariant-mass transformation and use the transformation that employs the cutoffs in Eq. (3.24). Some of the peculiarities of this transformation have already been discussed. Here I want to focus on the changes in the analysis of the massless theory wrought by the physical mass terms in energy denominators and cutoffs. 
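The sign problem described after Eq. (C.1) is easy to exhibit numerically. In the sketch below (hypothetical numbers of my own choosing), the intermediate state satisfies the massless invariant-mass cutoffs, yet the energy denominator changes sign as the external longitudinal fraction $y$ becomes small:

```python
def denominator(r_perp2, y, s_perp2, x, m2):
    """Energy denominator of Eq. (C.1)."""
    return (r_perp2 + m2) / (y * (1.0 - y)) - (s_perp2 + m2) / (x * (1.0 - x))

# Intermediate state allowed by the massless cutoffs:
# Lambda_1^2 = 0.25 < s_perp^2/(x(1-x)) = 0.3 < Lambda_0^2 = 1.
m2, s_perp2, x = 1e-4, 0.075, 0.5

d_typical = denominator(0.0, 0.5, s_perp2, x, m2)   # typical y
d_small_y = denominator(0.0, 1e-4, s_perp2, x, m2)  # small y

# The denominator is negative at typical y but becomes positive at small y,
# because m^2/(y(1-y)) grows without bound while the cutoffs on the
# intermediate state contain no mass term:
assert d_typical < 0.0 < d_small_y
```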
Perhaps the most important observation to make at this point is that the addition of a small mass {\eighteenit may} lead to small changes in the analysis, because all integrals are finite before the mass is added. To see that this is apparently the case, return to the correction to $u_4$ in Eq. (C.1), and use the new invariant mass cutoffs to get $$\eqalign{ \delta v_4 = 144 g^2\; \int & {d^2s^\perp dx \over 16 \pi^3 x(1-x)}\; \Bigl[{{\bf r^\perp}^2+m^2 \over y(1-y)} - {{\bf s^\perp}^2+m^2 \over x(1-x)}\Bigr]^{-1} \cr &\theta\Bigl(\Lambda_0^2-{{\bf s^\perp}^2+m^2 \over x(1-x)}\Bigr) \theta\Bigl({{\bf s^\perp}^2+m^2 \over x(1-x)} - \Lambda_1^2\Bigr) \theta\Bigl(\Lambda_1^2-{{\bf r^\perp}^2+m^2 \over y(1-y)}\Bigr) \;.}\eqno(C.2)$$ \noindent Here I display the cutoff associated with the incoming state, which prevents the energy denominator from going through zero. Let me show the analytic result and use it to analyze $\delta u_4$ when there is a physical mass. Defining $$E = {{\bf r^\perp}^2+m^2 \over y(1-y)} \;, \eqno(C.3)$$ \noindent we obtain, $$\eqalign{ & \qquad \qquad \delta v_4 = -{g^2 \over 64 \pi^2} \; \theta\Bigl(\Lambda_1^2-E\Bigr) \Biggl\{ \;ln \Biggl[ {1+\sqrt{1-4 m^2/\Lambda_0^2} \over 1-\sqrt{1-4 m^2/\Lambda_0^2}}\cdot{1-\sqrt{1-4 m^2/\Lambda_1^2} \over 1+\sqrt{1-4 m^2/\Lambda_1^2}} \Biggr] \cr &+ \; \sqrt{1-4 m^2/E} \; ln \Biggl[ {\sqrt{1-4 m^2/\Lambda_0^2}-\sqrt{1-4 m^2/E} \over \sqrt{1-4 m^2/\Lambda_0^2}+\sqrt{1-4 m^2/E}} \cdot {\sqrt{1-4 m^2/\Lambda_1^2}+\sqrt{1-4 m^2/E} \over \sqrt{1-4 m^2/\Lambda_1^2}-\sqrt{1-4 m^2/E}} \Biggr] \Biggr\} \;. } \eqno(C.4)$$ \noindent Without the cutoff on the external energy, we would encounter negative arguments in the logarithm. With the cutoff the argument can still go to zero; however, I have discussed this same basic issue following Eq. (6.15), where I argued that it is not necessarily a serious problem for a perturbative analysis. 
The marginal part of this correction is obtained by taking the limit ${\bf r^\perp} \rightarrow 0$ with $y$ and $m$ fixed, which leads to $$\eqalign{ & \qquad \qquad \delta v_4 \rightarrow -{g^2 \over 64 \pi^2} \; \Biggl\{ \;ln \Biggl[ {1+\sqrt{1-4 m^2/\Lambda_0^2} \over 1-\sqrt{1-4 m^2/\Lambda_0^2}}\cdot{1-\sqrt{1-4 m^2/\Lambda_1^2} \over 1+\sqrt{1-4 m^2/\Lambda_1^2}} \Biggr] \cr &+ \; \sqrt{1-4 y(1-y)} \; ln \Biggl[ {\sqrt{1-4 m^2/\Lambda_0^2}-\sqrt{1-4 y(1-y)} \over \sqrt{1-4 m^2/\Lambda_0^2}+\sqrt{1-4 y(1-y)}} \cdot \cr &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad {\sqrt{1-4 m^2/\Lambda_1^2}+\sqrt{1-4 y(1-y)} \over \sqrt{1-4 m^2/\Lambda_1^2}-\sqrt{1-4 y(1-y)}} \Biggr] \Biggr\} \;. } \eqno(C.5)$$ \noindent Unlike the ${\cal O}(g^2)$ correction encountered in the theory with no physical mass, this marginal operator has a complicated dependence on $y$. One can continue to compute transformations, using this entire marginal operator, or one can make an expansion of this operator in powers of $m^2$. Such an expansion should converge, as long as $m^2$ is exponentially small in comparison to $\Lambda_0^2$. After setting $\Lambda_1=\Lambda_0/2$, the expansion yields $$\eqalign{ & \qquad \qquad \delta v_4 \rightarrow -{9 g^2 \over \pi^2} \;\Biggl\{ \;ln(4) - {3 m^2 \over \Lambda_0^2} {1-2y(1-y) \over y(1-y)}+\cdot\cdot\cdot \Biggr\} \;. } \eqno(C.6)$$ The first term in the expansion is the result obtained for the massless theory, while the second term must be considered further. There are cutoffs on the remaining states that prevent $m^2/y/(1-y)$ from becoming larger than $\Lambda_0^2/4$, so the first correction to $ln(4)$ is at most $3/4$. Moreover, the cutoff actually involves ${\bf r^\perp}^2+m^2$, and not just $m^2$. Thus, no individual term in the expansion of $\delta v_4$ given in Eq. (C.5) diverges within the limits imposed by the cutoffs, although the entire sum can diverge. I do not repeat the discussion following Eq. (6.15). 
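As a numerical sanity check on Eq. (C.5) (a sketch, not part of the original), the curly-bracket factor can be evaluated directly. With $\Lambda_1=\Lambda_0/2$ and $m^2$ exponentially small in comparison to $\Lambda_0^2$, it approaches the massless result $ln(4)$ at fixed $y$, in agreement with the first term of the expansion in Eq. (C.6):

```python
import math

def bracket_C5(m2, L0sq, L1sq, y):
    """Curly-bracket factor of Eq. (C.5), without the coupling prefactor."""
    a0 = math.sqrt(1.0 - 4.0 * m2 / L0sq)
    a1 = math.sqrt(1.0 - 4.0 * m2 / L1sq)
    b = math.sqrt(1.0 - 4.0 * y * (1.0 - y))
    term1 = math.log((1.0 + a0) / (1.0 - a0) * (1.0 - a1) / (1.0 + a1))
    term2 = b * math.log((a0 - b) / (a0 + b) * (a1 + b) / (a1 - b))
    return term1 + term2

# With Lambda_1 = Lambda_0/2 and m^2 << Lambda_0^2, the massless result
# ln(4) is recovered at fixed y:
value = bracket_C5(1e-10, 1.0, 0.25, 0.3)
assert abs(value - math.log(4.0)) < 1e-4
```

The second term vanishes in this limit because the argument of its logarithm goes to one, so the entire $y$ dependence resides in the corrections suppressed by powers of $m^2/\Lambda_0^2$.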
As long as the mass remains small, we can make a perturbative expansion of every term in powers of the mass divided by the cutoff. In this case, the mass is treated as if it were a transverse momentum. This must be done with some care. As we have already seen in the critical theory, the step function cutoffs sometimes lead to singular distributions when limits involving their arguments are taken. Just as taking transverse momenta to zero in the critical theory led to delta functions involving longitudinal momentum fractions, taking the mass to zero may produce such singular distributions, and it is essential that this limit be taken exactly. Thus, as long as the mass is small, we can continue to expand the Hamiltonian about the critical fixed point provided we carefully evaluate distributions. We expand every operator in powers of transverse momenta, because the linear analysis reveals that such powers lead to increasingly irrelevant eigenoperators of the linearized transformation. While powers of the physical mass do not lead to increasingly irrelevant operators, they do lead to increasingly large powers of $m^2/\Lambda_0^2$. The subsequent rescaling replaces each power with $4\;m^2/\Lambda_0^2$; however, the expansion should remain reasonable until we run the effective cutoff down to the point where $m^2/\Lambda_n^2$ is not small. To this point I have ignored spectators in the discussion of $\delta v_4$. Spectators have a drastic, potentially disastrous, effect on the analysis. When we studied the marginal part of $\delta u_4$ in the massless theory, spectators had no effect, because their transverse momenta were set to zero to find the marginal operator and they dropped out of the cutoffs as a result. However, if the spectators are massive, they do not drop out of the cutoffs when their transverse momenta are zero. Their presence effectively lowers the cutoff. 
If there are $n$ spectators, the cutoffs are effectively shifted, $$\Lambda_0^2 \rightarrow \Lambda_0^2 - \sum_{i=1}^n { m^2 \over x_i} \; \;\;,\;\;\; \Lambda_1^2 \rightarrow \Lambda_1^2 - \sum_{i=1}^n { m^2 \over x_i} \; , \eqno(C.7)$$ \noindent where $x_i$ are the longitudinal momentum fractions of the spectators. If the total longitudinal momentum fraction of the spectators is $1-w$, then each $x_i \le 1-w$, so this sum shifts the cutoffs by at least $n m^2/(1-w)$. For sufficiently large $n$, this shift always becomes larger than $\Lambda_0^2$, and we find that the invariant-mass cutoff is also a cutoff on particle number. Moreover, in the highest sectors of Fock space that survive this cutoff, the effective cutoff is arbitrarily small and we must expect that any change in this cutoff produces nonperturbative effects in the exact results. I see no way around this conclusion; however, the error made by treating the highest sectors of Fock space perturbatively as the cutoff is changed may still be small if one is interested only in states of much lower energy than the states removed, which is always the case in a renormalization group analysis. When the effective cutoff becomes very small, it is because the spectator state is a high energy state. Thus, the nonperturbative problem occurs when we need to accurately approximate the part of the Hamiltonian that governs states of high energy. In this case, the intermediate state has high energy because there are a large number of massive particles. If our ultimate interest is to study states with such high energy, we do not expect to be able to lower the cutoff to this scale without encountering a nonperturbative problem. However, if our ultimate interest is to study low energy states, we need only concern ourselves with the error made in the matrix elements that ultimately govern the mixing of these many-body high energy intermediate states with the few-body low energy states of interest.
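The size of the shift in Eq. (C.7) can be checked numerically. The sketch below (hypothetical numbers) draws random spectator fractions summing to $1-w$ and verifies that the shift always exceeds $n m^2/(1-w)$; the minimum, attained at equal fractions, is $n^2 m^2/(1-w)$ by the inequality between the arithmetic and harmonic means, so the shift indeed grows with the number of spectators:

```python
import random

def cutoff_shift(fractions, m2):
    """Shift of Eq. (C.7): sum over spectators of m^2/x_i."""
    return sum(m2 / x for x in fractions)

def random_fractions(n, total, rng):
    """n random positive longitudinal fractions summing to total."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    return [total * (points[i + 1] - points[i]) for i in range(n)]

rng = random.Random(7)
n, one_minus_w, m2 = 5, 0.4, 1e-3
lower_bound = n * m2 / one_minus_w           # bound used in the text
minimum = n * n * m2 / one_minus_w           # attained at equal fractions

for _ in range(1000):
    xs = random_fractions(n, one_minus_w, rng)
    assert cutoff_shift(xs, m2) >= minimum >= lower_bound

# Equal fractions saturate the minimum:
equal = [one_minus_w / n] * n
assert abs(cutoff_shift(equal, m2) - minimum) < 1e-12
```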
It may be possible to accurately approximate these matrix elements using perturbation theory even when large errors are made in the complete wave function for the many-body state. Since it is far easier to study such spectator effects quantitatively when computing $\delta u_2$, I turn to this. In order to compute $\delta u_2$ for the massive theory, we can follow the same steps that led to Eq. (6.25) and simply add a mass to obtain $$\eqalign{ \delta v_2 = {g^2 \over 3!} \;w \Lambda_0^2 \; \int & {d^2{q^\perp}' dx' \over 16 \pi^3 x'}\; {d^2{r^\perp}' dy' \over 16 \pi^3 y'}\; {d^2{s^\perp}' dz' \over 16 \pi^3 z'}\; \Bigl[{{\bf t^\perp}^2 \over w \Lambda_0^2} - {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} - {2 m^2 \over w \Lambda_0^2} \Bigr]^{-1} \cr &(16 \pi^3) \delta(1-x'-y'-z') \delta^2({{\bf t^\perp} \over \sqrt{w} \Lambda_0} - {\bf q^\perp}'-{\bf r^\perp}'-{\bf s^\perp}') \cr &\theta\Bigl(1- {{\bf t^\perp}^2+M^2 \over (1-w)\Lambda_0^2}- {3 m^2 \over w \Lambda_0^2} - {{\bf q^\perp}'^2 \over x'} - {{\bf r^\perp}'^2 \over y'} - {{\bf s^\perp}'^2 \over z'} \Bigr) \cr &\theta\Bigl({{\bf q^\perp}'^2 \over x'} + {{\bf r^\perp}'^2 \over y'} + {{\bf s^\perp}'^2 \over z'} + {{\bf t^\perp}^2+M^2 \over (1-w)\Lambda_0^2}+ {3 m^2 \over w \Lambda_0^2} - {1 \over 4} \Bigr) \cr &\theta\Bigl(1/4-{{\bf t^\perp}^2+m^2 \over w \Lambda_0^2} - {{\bf t^\perp}^2+M^2 \over (1-w)\Lambda_0^2}\Bigr) \;.}\eqno(C.8)$$ \noindent Here the invariant-mass-squared of the spectators, $M^2$, is dependent on the individual longitudinal momentum fractions of the spectators, and it no longer vanishes when all spectator transverse momenta vanish. I have displayed the cutoff on the external state to make it clear that an expansion in powers of $m^2$ may be reasonable even though one might fear that $M^2$, $1/(1-w)$, or $1/w$ could become large. 
When $\delta v_2$ is expanded in powers of $m^2$, an infinite number of functions of longitudinal momentum fractions appear. Since this happens even when all external transverse momenta are taken to zero, this means that an infinite number of relevant operators appear. Relevant operators usually must be precisely controlled because they grow at an exponential rate. The exception is when their initial strength is exponentially small, and remains exponentially small over all but the final part of a trajectory of Hamiltonians. In this case, a small error in the coefficient of a relevant operator remains small over the entire trajectory. This error becomes exponentially large only when the mass becomes exponentially large, and I am interested in approximating the portion of the renormalization group trajectory over which the mass is small. There is a separate important question of how precisely one must approximate this trajectory if one wants accurate predictions for low energy observables to result from an exact diagonalization of the final Hamiltonian on the trajectory. I do not address this issue here. If we expand every term in the transformed Hamiltonian in powers of $m^2$, we find that the second-order transformation is almost identical to the massless case. We can compute a trajectory of massive Hamiltonians, using the second-order transformation and expanding every term in powers of $m^2/\Lambda_0^2$. 
We can choose $H^{\Lambda_0}_{\Lambda_0}$ to be $$\eqalign{ H^{\Lambda_0}_{\Lambda_0} =& \int {d^2{\mit P}^\perp d{\mit P}^+ \over 16\pi^3 {\mit P}^+}\;\Biggl\{ \cr &\qquad \; \int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1) \delta(1-x_1) \;\Biggl[ {{\mit P}^{\perp 2} \over {\mit P}^+}+ (1+\xi_0) {{\bf r^\perp}^2_1 + m_0^2 \over x_1 {\mit P}^+}+\mu_0^2 \Biggr]\; |q_1 \rangle \langle q_1 | \cr &\qquad + \;\int {d^2r^\perp_1 dx_1 \over 16 \pi^3 x_1 } \; \int {d^2r^\perp_2 dx_2 \over 16 \pi^3 x_2 } \; (16 \pi^3) \delta^2({\bf r^\perp}_1+{\bf r^\perp}_2) \delta(1-x_1-x_2) \cr & \qquad \Biggl[{{\mit P}^{\perp 2} \over {\mit P}^+}+ (1+\xi_0) {{\bf r^\perp}^2_1 +m_0^2 \over x_1 {\mit P}^+} + (1+\xi_0) {{\bf r^\perp}^2_2+m_0^2 \over x_2 {\mit P}^+}+2\mu_0^2\Biggr] \;|q_1,q_2 \rangle \langle q_1,q_2 | \;\;+\;\;\cdot\cdot\cdot \;\Biggr\} \cr +&{g_0 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2+q_3-q_4) \; a^\dagger(q_1) a^\dagger(q_2) a^\dagger(q_3) a(q_4) \cr +&{g_0 \over 4} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1+q_2-q_3-q_4) \; a^\dagger(q_1) a^\dagger(q_2) a(q_3) a(q_4) \cr +&{g_0 \over 6} \int d\tilde{q}_1\; d\tilde{q}_2\; d\tilde{q}_3\; d\tilde{q}_4 \; (16 \pi^3) \delta^3(q_1-q_2-q_3-q_4) \; a^\dagger(q_1) a(q_2) a(q_3) a(q_4) \;.}\eqno(C.9) $$ \noindent This Hamiltonian contains a complete set of relevant and marginal operators through ${\cal O}(m^2/\Lambda_0^2)$. When the second-order invariant-mass transformation is applied to the Hamiltonian, and all irrelevant operators are dropped, the resultant Hamiltonian contains no new relevant or marginal operators to these orders, and the only effect is to change the constants $m_0$, $\xi_0$, $\mu_0$, and $g_0$. Thus we can write the relevant and marginal operators in $H^{\Lambda_0}_{\Lambda_n}$ as in Eq. (6.35), using the constants $m_n$, $\xi_n$, $\mu_n$, and $g_n$. 
The complete second-order equations for the evolution of $\xi_n$, $\mu_n$, and $g_n$ are identical to Eqs. (6.36)-(6.38). Moreover the equation for the evolution of $m$ to this order is trivial, $$m_{n+1}^2 = 4 m_n^2 \;. \eqno(C.10)$$ \noindent It is clear that the presence of a physical mass severely complicates the higher-order analyses, but the qualitative features of the analysis for the massless theory should survive when an additional expansion in powers of $m^2/\Lambda_0^2$ is made. \vfill \eject \bigskip \noindent{\eighteenb References} \medskip \refitem {1.} \obeyendofline \frenchspacing E. C. G. Stueckelberg and A. Peterman, {\eighteenit Helv. Phys. Acta} {\eighteenb 26} (1953), 499. \ignoreendofline \refitem {2.} \obeyendofline \frenchspacing M. Gell-Mann and F. E. Low, {\eighteenit Phys. Rev.} {\eighteenb 95} (1954), 1300. \ignoreendofline \refitem {3.} \obeyendofline \frenchspacing N. N. Bogoliubov and D.V. Shirkov, ``Introduction to the Theory of Quantized Fields'', Interscience, New York, 1959. \ignoreendofline \refitem {4.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb 140} (1965), B445. \ignoreendofline \refitem {5.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb D2} (1970), 1438. \ignoreendofline \refitem {6.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb D3} (1971), 1818. \ignoreendofline \refitem {7.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb D6} (1972), 419. \ignoreendofline \refitem {8.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb B4} (1971), 3174. \ignoreendofline \refitem {9.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev.} {\eighteenb B4} (1971), 3184. \ignoreendofline \refitem {10.} \obeyendofline \frenchspacing K. G. Wilson and M. E. Fisher, {\eighteenit Phys. Rev. Lett.} {\eighteenb 28} (1972), 240. 
\ignoreendofline \refitem {11.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Phys. Rev. Lett.} {\eighteenb 28} (1972), 548. \ignoreendofline \refitem {12.} \obeyendofline \frenchspacing K. G. Wilson and J. B. Kogut, {\eighteenit Phys. Rep.} {\eighteenb 12C} (1974), 75. \ignoreendofline \refitem {13.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Rev. Mod. Phys.} {\eighteenb 47} (1975), 773. \ignoreendofline \refitem {14.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Rev. Mod. Phys.} {\eighteenb 55} (1983), 583. \ignoreendofline \refitem {15.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Adv. Math.} {\eighteenb 16} (1975), 170. \ignoreendofline \refitem {16.} \obeyendofline \frenchspacing K. G. Wilson, {\eighteenit Scientific American} {\eighteenb 241} (1979), 158. \ignoreendofline \refitem {17.} \obeyendofline \frenchspacing F. J. Wegner, {\eighteenit Phys. Rev.} {\eighteenb B5} (1972), 4529. \ignoreendofline \refitem {18.} \obeyendofline \frenchspacing F. J. Wegner, {\eighteenit Phys. Rev.} {\eighteenb B6} (1972), 1891. \ignoreendofline \refitem {19.} \obeyendofline F. J. Wegner, {\eighteenit in} ``Phase Transitions and Critical Phenomena'' (C. Domb and M. S. Green, Eds.), Vol. 6, Academic Press, London, 1976. \ignoreendofline \refitem {20.} \obeyendofline \frenchspacing L. P. Kadanoff, {\eighteenit Physica} {\eighteenb 2} (1965), 263. \ignoreendofline \refitem {21.} \obeyendofline \frenchspacing C. Rebbi, ``Lattice Gauge Theories and Monte Carlo Simulations'', World Scientific, Singapore, 1983. \ignoreendofline \refitem {22.} \obeyendofline \frenchspacing C. G. Callan, {\eighteenit Phys. Rev.} {\eighteenb D2} (1970), 1541. \ignoreendofline \refitem {23.} \obeyendofline \frenchspacing K. Symanzik, {\eighteenit Comm. Math. Phys.} {\eighteenb 18} (1970), 227. \ignoreendofline \refitem {24.} \obeyendofline \frenchspacing R. P. Feynman, {\eighteenit Rev. Mod. Phys.} {\eighteenb 20} (1948), 367. 
\ignoreendofline \refitem {25.} \obeyendofline \frenchspacing I. Newton, ``Philosophiae Naturalis Principia Mathematica'', S. Pepys, London, 1686. \ignoreendofline \refitem {26.} \obeyendofline \frenchspacing S. K. Ma, {\eighteenit Rev. Mod. Phys.} {\eighteenb 45} (1973), 589. \ignoreendofline \refitem {27.} \obeyendofline \frenchspacing G. Toulouse and P. Pfeuty, ``Introduction to the Renormalization Group and to Critical Phenomena'', Wiley, Chichester, 1977. \ignoreendofline \refitem {28.} \obeyendofline \frenchspacing S. K. Ma, ``Modern Theory of Critical Phenomena'', Benjamin, New York, 1976. \ignoreendofline \refitem {29.} \obeyendofline \frenchspacing D. Amit, ``Field Theory, the Renormalization Group, and Critical Phenomena'', McGraw-Hill, New York, 1978. \ignoreendofline \refitem {30.} \obeyendofline \frenchspacing J. Zinn-Justin, ``Quantum Field Theory and Critical Phenomena'', Oxford, Oxford, 1989. \ignoreendofline \refitem {31.} \obeyendofline \frenchspacing N. Goldenfeld, ``Lectures on Phase Transitions and the Renormalization Group'', Addison-Wesley, Reading, Mass., 1992. \ignoreendofline \refitem {32.} \obeyendofline \frenchspacing P. A. M. Dirac, {\eighteenit Rev. Mod. Phys.} {\eighteenb 21} (1949), 392. \ignoreendofline \refitem {33.} \obeyendofline \frenchspacing P. A. M. Dirac, ``Lectures on Quantum Field Theory'', Academic Press, New York, 1966. \ignoreendofline \refitem {34.} \obeyendofline An extensive list of references on light-front physics ({\eighteenit light.tex}) is available via anonymous ftp from public.mps.ohio-state.edu in the subdirectory pub/infolight. \ignoreendofline \refitem {35.} \obeyendofline \frenchspacing S. Weinberg, {\eighteenit Phys. Rev.} {\eighteenb 150} (1966), 1313. \ignoreendofline \refitem {36.} \obeyendofline \frenchspacing A. Harindranath and J. P. Vary, {\eighteenit Phys. Rev.} {\eighteenb D36} (1987), 1141. \ignoreendofline \refitem {37.} \obeyendofline \frenchspacing J. R. Hiller, {\eighteenit Phys.
Rev.} {\eighteenb D44} (1991), 2504. \ignoreendofline \refitem {38.} \obeyendofline \frenchspacing J. B. Swenson and J. R. Hiller, {\eighteenit Phys. Rev.} {\eighteenb D48} (1993), 1774. \ignoreendofline \refitem {39.} \obeyendofline \frenchspacing R. J. Perry and K. G. Wilson, {\eighteenit Nucl. Phys.} {\eighteenb B403} (1993), 587. \ignoreendofline \refitem {40.} \obeyendofline \frenchspacing S. Fubini and G. Furlan, {\eighteenit Physics} {\eighteenb 1} (1964), 229. \ignoreendofline \refitem {41.} \obeyendofline \frenchspacing R. Dashen and M. Gell-Mann, {\eighteenit Phys. Rev. Lett} {\eighteenb 17} (1966), 340. \ignoreendofline \refitem {42.} \obeyendofline \frenchspacing J. D. Bjorken, {\eighteenit Phys. Rev.} {\eighteenb 179} (1969), 1547. \ignoreendofline \refitem {43.} \obeyendofline \frenchspacing R. P. Feynman, ``Photon-Hadron Interactions'', Benjamin, Reading, Massachusetts, 1972. \ignoreendofline \refitem {44.} \obeyendofline \frenchspacing J. B. Kogut and L. Susskind, {\eighteenit Phys. Rep.} {\eighteenb C8} (1973), 75. \ignoreendofline \refitem {45.} \obeyendofline \frenchspacing S.-J. Chang and S.-K. Ma, {\eighteenit Phys. Rev.} {\eighteenb 180} (1969), 1506. \ignoreendofline \refitem {46.} \obeyendofline \frenchspacing J. B. Kogut and D. E. Soper, {\eighteenit Phys. Rev.} {\eighteenb D1} (1970), 2901. \ignoreendofline \refitem {47.} \obeyendofline \frenchspacing J. D. Bjorken, J. B. Kogut, and D. E. Soper, {\eighteenit Phys. Rev.} {\eighteenb D3} (1971), 1382. \ignoreendofline \refitem {48.} \obeyendofline \frenchspacing S.-J. Chang, R. G. Root and T.-M. Yan, {\eighteenit Phys. Rev. } {\eighteenb D7} (1973), 1133. \ignoreendofline \refitem {49.} \obeyendofline \frenchspacing S.-J. Chang and T.-M. Yan, {\eighteenit Phys. Rev.} {\eighteenb D7} (1973), 1147. \ignoreendofline \refitem {50.} \obeyendofline \frenchspacing T.-M. Yan, {\eighteenit Phys. Rev.} {\eighteenb D7} (1973), 1760. \ignoreendofline \refitem {51.} \obeyendofline \frenchspacing T.-M. 
\vfill \eject \noindent {\eighteenb Figure Captions} \medskip
\item{1.} Wilson's triangle of renormalization. The units are chosen so that $\Lambda_{\cal N}=1$.
\item{2.} Examples of Hamiltonian diagrams, (a) in which a single energy denominator appears, and (b) in which two energy denominators appear. The arrows indicate the energy differences found in these denominators.
\item{3.} Second-order corrections to: (a) $u_2$, (b) $u_4$, (c) $u_6$, and (d) $u_8$.
\item{4.} Second-order correction to $u_2$ with spectators.
\item{5.} Third-order corrections to the one-boson to three-boson part of $u_4$.
\item{6.} (a) A two-loop correction to $u_4$ paired with the appropriate counterterm insertion in a one-loop correction to $u_4$. (b) The source of the counterterm.
\item{7.} (a) A two-loop correction to $u_4$ paired with the appropriate counterterm insertion in a one-loop correction to $u_4$. (b) The source of the counterterm.
\item{8.} (a) A two-loop correction to $u_4$ paired with two appropriate counterterm insertions in one-loop corrections to $u_4$. The sources of these counterterms are shown in (b) and (c).
\item{9.} (a) A two-loop correction to $u_4$ paired with two appropriate counterterm insertions in one-loop corrections to $u_4$. The sources of these counterterms are shown in (b) and (c).
\item{10.} One-loop corrections to the two-boson to two-boson part of $u_4$ with one marginal counterterm vertex.
\item{11.} One-loop corrections to the one-boson to three-boson part of $u_4$ with one marginal counterterm vertex.
\end
\section{INTRODUCTION} In the last few years, approximately 40 new Galactic sources of very-high-energy (VHE) $\gamma$-ray emission (100\,GeV$-$100\,TeV) have been discovered using the High Energy Stereoscopic System (HESS\footnote{See http://www.mpi-hd.mpg.de/hfm/HESS/public/HESS\_catalog.htm for a catalog of HESS-detected sources.}) Cherenkov telescope array \citep[e.g.,][]{aaa+05a}. These are an exciting new population of sources, which give new insight into non-thermal particle acceleration in Galactic objects such as neutron stars, supernova remnants, and X-ray binaries. Thus far, close to half of these sources have been established or suggested as being associated with the pulsar wind nebulae (PWNe) of young pulsars, either through direct detection of a PWN or positional coincidence with a young pulsar which is presumed to have a PWN (Table~\ref{tab:assoc}). Clearly, PWNe are now an important and well-established Galactic source of VHE $\gamma$-rays. Since both young pulsars and their PWNe can be very dim, many of the Galactic HESS sources without identified counterparts \citep[e.g.,][]{aab+08a} may be PWNe, potentially detectable via deep radio or X-ray observations. In general, radio/X-ray PWNe are associated with extended HESS sources, presumably TeV PWNe, whose peak is offset by several arcminutes from the pulsar\footnote{Note however that this situation does not appear to hold in the case of the youngest pulsars, e.g. the Crab, where there is no observed offset and the TeV emission is consistent with a point source. This possibly demonstrates the evolution of TeV PWNe with pulsar age.} (Table~\ref{tab:assoc}). In some cases the offset, if any, cannot be measured because the position of the pulsar is not known \citep[see also][]{gal07}. 
These offsets have been explained by the hypothesis that the VHE emission is from inverse Compton scattering of ``old'' electrons, which were produced during an earlier epoch in the pulsar's life, off the ambient photon field (e.g., cosmic microwave background radiation, starlight, and infrared emission from dust; see Aharonian et al. 2005d\nocite{aab+05d}, de Jager \& Djannati-Ata\"i 2008\nocite{dd08}). An alternative model has the $\gamma$-rays produced by the decay of $\pi^0$ mesons, which are created by the interaction of accelerated hadrons with nuclei in the interstellar medium \citep{has+06,hahs07}. In both cases, asymmetric crushing of the pulsar wind by the reverse shock of the expanding supernova remnant, in systems where the remnant has expanded into an asymmetric density distribution in the ambient medium, is then responsible for the offset between the pulsar and the peak of the VHE emission \citep{dd08}. A high pulsar proper motion may also play a role in the offset. The VHE emission could be a key to the earlier energetic history of these young pulsars, as well as complete a broadband (radio to TeV $\gamma$-rays) picture of the shocked pulsar wind, whose emission would be dominated by synchrotron-emitting electrons below $\sim 1$\,GeV and inverse Compton emission from the scattering of these same particles above this energy. In this Letter, we present the discovery and subsequent timing of the ``Vela-like'' pulsar \citep[i.e. pulsars having characteristic age $\tau_c = 10-30$\,kyr and spin-down luminosity $\dot{E} \sim 10^{36}$\,ergs s$^{-1}$; e.g.,][]{kbm+03} PSR J1856+0245\ in the Arecibo PALFA survey of the Galactic Plane. This young, energetic pulsar is positionally coincident with, and energetically capable of creating, the VHE emission observed from the hitherto unidentified TeV source HESS J1857+026\ \citep{aab+08a}.
This association suggests that some of the other currently unidentified, extended HESS sources in the Galactic plane may also be related to faint radio pulsars, rather than some new source class. An archival {\it ASCA}\ image of the area around PSR J1856+0245\ and HESS J1857+026\ shows a possible X-ray counterpart, cataloged as AX J185651+0245\ by \citet{smk+01}, at the pulsar position; this is possibly a synchrotron counterpart to the TeV PWN\footnote{ Furthermore, in a recent reanalysis of {\it EGRET}\ $\gamma$-ray data, \citet{cg08} identified the source EGR J1856+0235, which is analogous to 3EG J1856+0114 in the 3rd {\it EGRET}\ catalog. PSR J1856+0245\ is well within the 95\% confidence region of EGR J1856+0235, suggesting the pulsar and/or its PWN are also visible in the MeV-GeV range. We will investigate this further in a follow-up paper.}. \section{OBSERVATIONS AND ANALYSIS} PSR J1856+0245\ was discovered in the Arecibo PALFA survey for pulsars and radio transients (Cordes et al. 2006\nocite{cfl+06}; see also Hessels 2007\nocite{hes07}). PALFA uses the 1.4-GHz Arecibo L-band Feed Array (ALFA) 7-beam receiver to survey the Galactic Plane at longitudes of $32^{\circ} < l < 77^{\circ}$ and $168^{\circ} < l < 214^{\circ}$, out to latitudes $|b| \leq 5^{\circ}$. The relatively high observing frequency and unequaled raw sensitivity of the Arecibo telescope make the PALFA survey sensitive to distant, faint, and scattered pulsars that were missed in previous surveys. The larger goals, design, and observational setup of the PALFA survey are presented in detail by \citet{cfl+06}. We found PSR J1856+0245\ in a 268-s survey observation made on 2006 April 16. The pulsar was identified at a signal-to-noise ratio of 35 within a few minutes of the discovery observation itself through a ``real time'' processing pipeline, operating on data with reduced time and spectral resolution, which is automatically run on the survey data as it is being collected \citep{cfl+06}.
PSR J1856+0245\ has a spin period of 81\,ms and a large dispersion measure (DM), 622\,cm$^{-3}$ pc. We estimate that the flux density at 1400\,MHz is $S_{1400} = 0.55 \pm 0.15$\,mJy. We also note that PSR J1856+0245\ shows a significant scattering tail at 1170\,MHz ($\tau_{\rm sc} = 10 \pm 4$\,ms at this frequency, where $\tau_{\rm sc}$ is the time constant of a one-sided exponential fitted to the pulse profile) and would be difficult to detect below $\sim 800$\,MHz because the scattering time scale would be greater than the pulse period. Immediately following the discovery of PSR J1856+0245, we began regular timing observations with Arecibo and the Jodrell Bank Observatory's 76-m Lovell Telescope in order to derive a precise rotational and astrometric ephemeris. Between the two telescopes, timing observations were made on 148 separate days between 2006 April 24 and 2008 April 18. At Arecibo, observations were made using both the center pixel of the ALFA receiver at 1440\,MHz (identical setup to the standard PALFA survey mode) and the L-Wide receiver with multiple Wide-band Arecibo Pulsar Processor (WAPP) correlators centered at 1170, 1370, 1470, and 1570\,MHz, each with 256 lags over 100\,MHz of bandwidth, sampled every 256\,$\mu$s. The typical integration time was 3 minutes. The Jodrell Bank observations were made at 1402/1418\,MHz with a $2\times 64\times 1$\,MHz incoherent filterbank system and 202\,$\mu$s sampling time. The typical integration time was 20 minutes. The resulting phase-connected timing solution for PSR J1856+0245\ is presented in Table~\ref{tab:pulsar} and combines data from both Arecibo and Jodrell Bank. This solution shows that PSR J1856+0245\ is a Vela-like pulsar with a characteristic age of 21\,kyr and a spin-down luminosity of $\dot{E} = 4.6 \times 10^{36}$\,ergs s$^{-1}$. The solution was obtained using the {\tt TEMPO} pulsar timing package\footnote{See http://www.atnf.csiro.au/research/pulsar/tempo.}. 
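As a sanity check, the quoted spin-down parameters follow from the standard relations $\tau_c = P/(2\dot{P})$ and $\dot{E} = 4\pi^2 I \dot{P}/P^3$. The short Python sketch below (not part of the paper; $\dot{P}$ is inferred here from the quoted characteristic age, and the canonical neutron-star moment of inertia $I = 10^{45}$\,g\,cm$^2$ is assumed) reproduces the quoted luminosity:

```python
import math

P = 0.081                      # spin period [s] (81 ms, from the text)
tau_c = 21e3 * 3.156e7         # characteristic age [s] (21 kyr)
Pdot = P / (2.0 * tau_c)       # period derivative inferred from tau_c

I = 1e45                       # assumed moment of inertia [g cm^2]
Edot = 4.0 * math.pi**2 * I * Pdot / P**3
print(f"Pdot ~ {Pdot:.1e} s/s, Edot ~ {Edot:.1e} erg/s")  # ~4.5e36 erg/s
```

The result agrees with the quoted $\dot{E} = 4.6 \times 10^{36}$\,ergs s$^{-1}$ to within the rounding of the input values.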
The times of arrival (TOAs) can be phase-connected by fitting only for position, DM, spin period $P$, period derivative $\dot{P}$, and period second derivative $\ddot{P}$, though further higher-order period derivatives are required to remove all trends in the TOAs (i.e., to ``whiten'' the residuals). These higher-order period derivatives, including $\ddot{P}$, are non-deterministic and are likely the result of timing noise, which is common in young pulsars. Timing noise affects the measured position of PSR J1856+0245\ at a level exceeding the formal {\tt TEMPO} uncertainties by an order of magnitude. Separately fitting the TOAs from the first and second years of timing gives positions which differ by $\sim 9^{\prime \prime}$. We take this as a rough measure of the true uncertainty on the timing-derived position of PSR J1856+0245. The identification of PSR J1856+0245\ as a young, energetic pulsar raises the likelihood that it powers a radio and/or X-ray PWN \citep[for a review, see][]{krh06}. Accordingly, we have checked source catalogs and archival, multi-wavelength data for other sources at the radio timing position of the pulsar. These searches revealed that PSR J1856+0245\ is spatially coincident with the VHE $\gamma$-ray source HESS J1857+026\ and the faint X-ray source AX J185651+0245. We discuss these possible associations in \S 3. \section{DISCUSSION} PSR J1856+0245\ is spatially coincident with the unidentified VHE $\gamma$-ray source HESS J1857+026\ discovered by \citet[][see Fig.~\ref{fig:xray}]{aab+08a}. Here we show that PSR J1856+0245\ is energetically capable of powering HESS J1857+026\ and that this association has similar characteristics to the other pulsar/VHE associations in the literature (Table~\ref{tab:assoc}). The estimated distance to PSR J1856+0245, based on its DM and the NE2001 electron density model of the Galaxy \citep{cl02}, is $\sim 9$\,kpc.
The uncertainty on this distance is not well-defined, but in some cases the model can be off by as much as a factor of $\sim 2-3$. Adopting the DM-distance of 9\,kpc gives a large spin-down flux of $\dot{E}/d^2 = 5.7/d^2_9 \times 10^{34}$\,ergs s$^{-1}$ kpc$^{-2}$, where $d_9$ is the true distance scaled to 9\,kpc. \citet{chh+07} find that statistically 70\,\% of pulsars with $\dot{E}/d^2 \gtrsim 10^{35}$\,ergs s$^{-1}$ kpc$^{-2}$ are visible as VHE $\gamma$-ray sources. PSR J1856+0245\ is very close to this limit and may exceed it if its distance is overestimated. Hence, based on its energetics alone, it is likely to be visible as a VHE $\gamma$-ray source. HESS J1857+026\ has a photon index $\Gamma = 2.2$ and $1-10$\,TeV flux $F_{\rm VHE} = 1.5 \times 10^{-11}$\,ergs cm$^{-2}$ s$^{-1}$ (about 15\,\% of the Crab's flux in this energy range); these spectral parameters are similar to those measured for the other HESS sources identified with PWNe. Given the spin-down luminosity of PSR J1856+0245, this suggests an efficiency $\epsilon_{\gamma} = L_{\gamma}/\dot{E} = 3.1d^2_9$\,\% ($1-10$\,TeV), comparable to what is seen in other proposed associations (Table~\ref{tab:assoc}). PSR J1856+0245\ is offset from the centroid of HESS J1857+026, $\alpha = 18^{\rm{h}}57^{\rm{m}}11^{\rm{s}}$, $\delta = +02^{\circ}40^{\prime}00^{\prime \prime}$ (J2000, there is a $3^{\prime}$ statistical uncertainty on this position), by $8^{\prime}$ (Fig.~\ref{fig:xray}). As discussed in \S 1, this is most likely explained by asymmetric confinement of the pulsar wind. This interpretation is supported by PSR J1856+0245's offset location on the side of HESS J1857+026\ that appears somewhat compressed (i.e., there is a steep gradient in the $\gamma$-rays) compared with the rest of the nebula. If, however, the offset of the VHE emission is due primarily to the proper motion of PSR J1856+0245, then the direction and rough magnitude of this motion are predictable. 
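The quoted spin-down flux and TeV efficiency follow directly from these numbers; a minimal Python check (the kpc-to-cm conversion is the only ingredient added here):

```python
import math

KPC_CM = 3.086e21               # cm per kpc
d = 9.0 * KPC_CM                # NE2001 DM distance (~9 kpc)
Edot = 4.6e36                   # spin-down luminosity [erg/s]
F_vhe = 1.5e-11                 # 1-10 TeV flux [erg cm^-2 s^-1]

flux_sd = Edot / 9.0**2         # spin-down flux [erg s^-1 kpc^-2]
L_gamma = 4.0 * math.pi * d**2 * F_vhe
eff = L_gamma / Edot            # 1-10 TeV efficiency

print(f"Edot/d^2 ~ {flux_sd:.1e} erg/s/kpc^2")  # ~5.7e34
print(f"epsilon_gamma ~ {100.0 * eff:.1f} %")   # ~3.2 %
```

Both numbers reproduce the values quoted in the text to within rounding.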
Based on its characteristic age and offset from the centroid of HESS J1857+026, PSR J1856+0245's proper motion should be roughly 23\,mas yr$^{-1}$ (transverse velocity $v_{\rm t} = 970d_9$\,km s$^{-1}$), to the north-west, assuming that the centroid of HESS J1857+026\ marks the birthplace of the pulsar. The velocity is very large, but not unprecedented for a pulsar \citep{cvb+05}. Of course, the velocity will be smaller if the distance to the source is overestimated, or if the offset is at least partially due to an asymmetrically confined pulsar wind. Detecting this proper motion via timing or interferometry would elucidate this further and may be possible in the coming years, though timing noise and the low flux of the pulsar will make this difficult. PSR J1856+0245\ and HESS J1857+026\ are also coincident with the faint {\it ASCA}\ X-ray source AX J185651+0245\ reported by \citet[][see Fig.~\ref{fig:xray}]{smk+01}. AX J185651+0245\ was found $3^{\prime}$ off axis in observations from 1998 April 6 (sequence number 56003000). It was detected only in the hard band ($2-10$\,keV) of the Gas Imaging Spectrometers (GISs), with a count rate of 2.6\,ks$^{-1}$ GIS$^{-1}$ and a significance of 4.3\,$\sigma$. The exposure was $\sim 13$\,ks for each of GIS2 and GIS3. PSR J1856+0245\ and AX J185651+0245\ (J2000 position: $\alpha = 18^{\rm{h}}56^{\rm{m}}50^{\rm{s}}$, $\delta = +02^{\circ}46^{\prime}$) are spatially coincident to within the $1^{\prime}$ positional uncertainty of sources in the \citet{smk+01} catalog. Although the signal-to-noise ratio of the detection of AX J185651+0245\ is modest, its exact spatial coincidence with a young pulsar of relatively high spin-down flux argues that this source is real. Most of the previously established associations of HESS sources with young pulsars also have known X-ray synchrotron PWNe. AX J185651+0245\ could be an X-ray PWN created by PSR J1856+0245. 
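The implied proper motion is simply the angular offset divided by the characteristic age, and the transverse velocity follows from the usual $v_{\rm t} = 4.74\,\mu\,d$ conversion ($\mu$ in arcsec\,yr$^{-1}$, $d$ in pc); a rough Python check:

```python
offset_arcsec = 8.0 * 60.0      # 8' offset from the HESS centroid, in arcsec
age_yr = 21e3                   # characteristic age [yr]
mu = offset_arcsec / age_yr     # proper motion [arcsec/yr]

d_pc = 9000.0                   # DM distance [pc]
v_t = 4.74 * mu * d_pc          # transverse velocity [km/s]

print(f"mu ~ {1e3 * mu:.0f} mas/yr, v_t ~ {v_t:.0f} km/s")  # ~23 mas/yr, ~975 km/s
```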
Using CXCPIMMS\footnote{Available at http://cxc.harvard.edu/toolkit/pimms.jsp.}, with an assumed absorbed power law spectrum with column density $N_{\rm H} = 1 \times 10^{22}$\,cm$^{-2}$ (roughly the total Galactic contribution in this direction) and photon index $\Gamma = 2$ (typical for X-ray PWNe), we find that AX J185651+0245\ has an unabsorbed flux ($2-10$\,keV) of $1.6 \times 10^{-13}$\,ergs s$^{-1}$ cm$^{-2}$. This corresponds to an efficiency for the conversion of spin-down energy into X-rays $\epsilon_{\rm X} = L_{\rm X}/\dot{E} = 0.03d^2_9$\,\% ($2-10$\,keV) that falls into the observed range for Vela-like pulsars \citep{pccm02}. There are two additional nearby sources detected by \citet{smk+01}: AX J185721+0247 and AX J185750+0240. It is possible that these are part of some extended X-ray emission related to HESS J1857+026, though deeper X-ray observations are needed to investigate this (see below). There are four short {\it Swift}\ observations of the region containing PSR J1856+0245, including two observations specifically targeting AX J185651+0245. The deepest of these, from 2007 March 13 (observation ID 36183002), is a 4.1-ks on-axis exposure with the {\it Swift}\ X-ray Telescope (XRT). This observation does not show any significant emission at the pulsar position; within a circular extraction region of radius $1^{\prime}$ around the pulsar, the background-subtracted number of counts is $1.1^{+4.0}_{-2.8}$. Using the aforementioned count rate of AX J185651+0245, along with the same assumed spectrum, the predicted $0.2-10$\,keV count rate for XRT from CXCPIMMS is $\sim 2$\,counts/ks. Thus, in 4.1\,ks, there should have been $\sim 8$\,counts, consistent with the 2-$\sigma$ upper limit derived from the data. Hence, if AX J185651+0245\ is predominantly a point source, it was only at the limit of detectability in this observation. If it is predominantly an extended nebula, then it would not have been detectable in such a short exposure.
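The same distance converts the unabsorbed X-ray flux into an efficiency, and the quoted count rate into the expected number of XRT counts; a quick Python check using the numbers quoted above:

```python
import math

KPC_CM = 3.086e21               # cm per kpc
d = 9.0 * KPC_CM                # DM distance [cm]
F_x = 1.6e-13                   # unabsorbed 2-10 keV flux [erg cm^-2 s^-1]
Edot = 4.6e36                   # spin-down luminosity [erg/s]

L_x = 4.0 * math.pi * d**2 * F_x
eff_x = L_x / Edot
print(f"epsilon_X ~ {100.0 * eff_x:.3f} %")     # ~0.034 %

rate_per_ks = 2.0               # predicted XRT rate [counts/ks], from the text
exposure_ks = 4.1
expected = rate_per_ks * exposure_ks
print(f"expected counts ~ {expected:.0f}")      # ~8
```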
Clearly, future, deeper observations, like those that recently discovered a likely X-ray PWN associated with HESS J1718$-$385/PSR J1718$-$3825 \citep{hfc+07}, will be needed to determine the nature of this candidate X-ray PWN. We have been granted a $\sim 60$-ks {\it XMM-Newton} observation of PSR J1856+0245\ and will present an analysis of those data in a follow-up paper. At least half of the known HESS/pulsar associations are accompanied by radio emission classified as a PWN, a notable exception being HESS J1825$-$137/PSR B1823$-$13. Only a few of the extended HESS sources, e.g. HESS J0835$-$455, HESS J1813$-$178, and HESS J1640$-$465, are known to be accompanied by a supernova remnant (SNR). We have checked available radio imaging data for signs of a PWN or SNR. There is some faint, extended emission in the vicinity of PSR J1856+0245\ visible in 1.4-GHz VLA Galactic Plane Survey data \citep[VGPS,][]{std+06}, though nothing that is clearly indicative of a PWN or SNR. The surface brightness limit from VGPS is $\sim 1.8 \times 10^{5}$\,Jy sr$^{-1}$ at 1.4\,GHz. It is certainly not uncommon for Vela-like pulsars to have faint or no known radio nebula \citep[e.g. PSR B1823$-$13,][]{bgl89,gsk+03}. Roughly a third of the cataloged Galactic SNRs \citep{gre04} are fainter than the surface brightness limit achieved by the VGPS. Deep, dedicated radio imaging observations of PSR J1856+0245\ are necessary to investigate this further. Higher resolution 1.4-GHz data from the MAGPIS survey (Helfand et al. 2006\nocite{hbw+06}; in the vicinity of PSR J1856+0245, this survey has a sensitivity of 0.2\,mJy/beam at an angular resolution of $6^{\prime \prime}$) reveal no point source which can be definitively associated with PSR J1856+0245, as expected given the flux density and positional uncertainty of the pulsar. \acknowledgements The Arecibo observatory, a facility of the NAIC, is operated by Cornell University in a cooperative agreement with the National Science Foundation. 
We thank Karl Kosack and the HESS collaboration for providing the $\gamma$-ray image of HESS J1857+026. This work was supported by NSERC (CGS-D, PDF, and Discovery grants), the Canadian Space Agency, the Australian Research Council, FQRNT, the Canadian Institute for Advanced Research, the Canada Research Chairs Program, the McGill University Lorne Trottier Chair in Astrophysics and Cosmology, and NSF grants AST-0647820 and AST-0545837. \bibliographystyle{apj}
\section{Introduction} The discovery of instanton solutions to the Yang-Mills field equations in the four-dimensional Euclidean space has led to an intensive study of this theory and the search for multidimensional generalizations of the self-duality equations. In Refs.~\cite{corr83,ward84}, such equations were found and classified. These were first-order equations that satisfy the Yang-Mills field equations as a consequence of the Bianchi identity. Later, solutions to these equations were found, see Refs.~\cite{fair84,fubi85,corr85,ivan92,logi04,logi05,duna12}, and then used to construct classical solitonic solutions of the low energy effective theory of the heterotic string. \par Another approach to the construction of self-duality equations was proposed in Ref.~\cite{tchr80}. In that work, self-duality relations between higher-order terms of the field strength were considered. An example of instantons satisfying such self-duality relations was obtained in Ref.~\cite{gros84}, see also Refs.~\cite{naka16,logi20}. As it turned out, these instantons play a role in smoothing out the singularity of heterotic string soliton solutions by incorporating one-loop corrections. Therefore, these exotic solutions were used to construct various string and membrane solutions, see Refs.~\cite{duff91,olsen00,mina01,pedd08,bill09,bill09a,bill21}, and to study the higher dimensional quantum Hall effect, see Refs.~\cite{bern03,hase14,inou21}. \par In this paper, we study the (anti-)self-duality equations in the eight-dimensional Euclidean space with the flat metric. We use properties of the Clifford algebra $Cl_{0,8}(\mathbb{R})$ to find new solutions of these equations. \section{The self-duality equations} In this section, we give a brief summary of Clifford algebras and related constructions. We list the features of the mathematical structure as far as they are relevant to our work.
\par We recall that the Clifford algebra $Cl_{0,8}(\mathbb{R})$ is a real associative algebra generated by the elements $\Gamma_1,\Gamma_2,\dots,\Gamma_8$ and defined by the relations \begin{equation} \Gamma_i\Gamma_j+\Gamma_j\Gamma_i=-2\delta_{ij}. \end{equation} Its subalgebra $Cl_{0,8}^0(\mathbb{R})$ generated by the elements $\Gamma_{ij}=(\Gamma_i\Gamma_j-\Gamma_j\Gamma_i)/2$ is called its even subalgebra. It can be shown that the even subalgebra of $Cl_{0,8}(\mathbb{R})$ is isomorphic to $Cl_{0,7}(\mathbb{R})$. The element $\Gamma_9=\Gamma_1\Gamma_2\dots \Gamma_8$ commutes with all other elements of $Cl_{0,8}^0(\mathbb{R})$, and its square $\Gamma_9\Gamma_9=1$. Therefore the pair $\Gamma^{\pm}=\frac{1}{2}(1\pm\Gamma_9)$ form a complete system of mutually orthogonal central idempotents, and hence the subalgebra decomposes into the direct sum of two ideals. It can be shown that these ideals are isomorphic to the algebra $M_{8}(\mathbb{R})$ of all real matrices of size $8\times 8$. \par Suppose $\phi: Cl_{0,8}^0(\mathbb{R})\to M_{8}(\mathbb{R})$, $\Gamma_{ij}\to R_{ij}$ is the homomorphism of algebras whose kernel $\text{Ker}\,\phi$ is the ideal generated by $1-\Gamma_9$. In turn, this homomorphism of algebras induces the homomorphism $Spin(8)\to SO(8)$ of the groups. Therefore the matrices $R_{ij}$ are generators of $SO(8)$. Now, note (see, e.g.,~\cite{kenn81}) that the identity \begin{equation}\label{01} \Gamma_{i_1\dots i_k}=\frac{1}{(8-k)!}\varepsilon_{i_1\dots i_8}\Gamma_9\Gamma^{i_8\dots i_{k+1}}, \end{equation} holds in the Clifford algebra $Cl_{0,8}(\mathbb{R})$. Here $\varepsilon_{i_1\dots i_8}$ is the Levi-Civita symbol in eight dimensions and $\Gamma_{i_1\dots i_k}=\Gamma_{[i_1\dots i_k]}$, where the square bracket stands for the anti-symmetrization of indices with the weight $1/k!$. Choosing $k=4$ in (\ref{01}), we obtain the self-duality equations \begin{equation} R_{[mn}R_{ps]}=\frac{1}{24}\varepsilon_{mnpsijkl}R_{[ij}R_{kl]}.
\end{equation} \par Any totally antisymmetric eight-dimensional tensor of fourth rank can be written as the sum of self-dual and anti-self-dual parts $F_{mnps}=F^{+}_{mnps}+F^{-}_{mnps}$, where \begin{equation} F_{mnps}^{\pm}=\frac{1}{2}\left(\delta^i_{[m}\delta^j_n\delta^k_p\delta^l_{s]}\pm\frac{1}{24}\varepsilon_{mnpsijkl}\right)F_{ijkl}. \end{equation} If we now use the identity \begin{equation}\label{02} \Gamma_{p}\Gamma_{s_1\dots s_k}=\Gamma_{ps_1\dots s_k}+\sum^k_{i=1}(-1)^i\delta_{ps_i}\Gamma_{s_1\dots\hat{s}_{i}\dots s_k}, \end{equation} then we obtain the following expression for the self-dual tensor \begin{equation} F_{mnps}^{+}=\frac{1}{24}\text{Tr}\,(R_{mn}R_{ps}R_{ij}R_{kl})F_{ijkl}. \end{equation} Thus, the tensor $F_{mnps}$ is anti-self-dual if $R_{mn}R_{ps}F_{mnps}=0$. In particular, the tensor $F_{mnps}=F_{[mn}F_{ps]}$ is anti-self-dual if $R_{mn}F_{mn}=0$. Note that the last equality is a sufficient condition for anti-self-duality, but not a necessary one. Note also that previously known solutions do not satisfy this condition. \section{Instantons in eight dimensions} In this section, we find solutions of the anti-self-duality equations in the eight-dimensional Euclidean space. This equation is given by the formula \begin{equation}\label{04} F_{[mn}F_{ps]}=-\frac{1}{24}\varepsilon_{mnpsijkl}F_{[ij}F_{kl]}, \end{equation} where the gauge field strength \begin{equation}\label{30} F_{mn}=\partial_mA_n-\partial_nA_m+[A_m,A_n] \end{equation} and the potential $A_m$ takes values in the Lie algebra $so(8)$. \par We choose the ansatz \begin{equation}\label{05} A_m=-\frac{1}{2}R_{mp}\partial_{p}\varphi, \end{equation} where $\varphi$ is a function of $x^2=x_nx^n$.
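The decomposition into $F^{\pm}$ relies on the duality map $F_{mnps}\mapsto\frac{1}{24}\varepsilon_{mnpsijkl}F_{ijkl}$ squaring to the identity on totally antisymmetric 4-tensors in eight dimensions. This can be checked numerically; the Python/NumPy sketch below (not from the paper; the Levi-Civita symbol is stored sparsely as a dictionary over the $8!$ permutations) verifies the projector properties on a random antisymmetric tensor:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Totally antisymmetric rank-4 tensor in 8 dimensions:
# antisymmetrize a random tensor over its four indices.
T = rng.standard_normal((8, 8, 8, 8))
F = np.zeros_like(T)
for perm in itertools.permutations(range(4)):
    sign = np.linalg.det(np.eye(4)[list(perm)])   # parity of the permutation
    F += sign * np.transpose(T, perm)
F /= 24.0

# Sparse 8-dimensional Levi-Civita symbol: {permutation: sign}.
eps = {perm: np.linalg.det(np.eye(8)[list(perm)])
       for perm in itertools.permutations(range(8))}

def dual(G):
    """(1/4!) * eps_{mnpsijkl} G_{ijkl}."""
    D = np.zeros((8, 8, 8, 8))
    for (m, n, p, s, i, j, k, l), sign in eps.items():
        D[m, n, p, s] += sign * G[i, j, k, l] / 24.0
    return D

Fp = 0.5 * (F + dual(F))    # self-dual part
Fm = 0.5 * (F - dual(F))    # anti-self-dual part

assert np.allclose(dual(Fp), Fp)    # dual(F+) = +F+
assert np.allclose(dual(Fm), -Fm)   # dual(F-) = -F-
assert np.allclose(Fp + Fm, F)      # the parts sum back to F
```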
To find the gauge field strength, we substitute the potential (\ref{05}) into (\ref{30}) and use the identity \begin{equation}\label{06} R_{mp}R_{ns}=R_{[mp}R_{ns]}+\delta_{mn}R_{ps}-\delta_{ms}R_{pn}-\delta_{pn}R_{ms}+\delta_{ps}R_{mn} +\delta_{ms}\delta_{pn}-\delta_{mn}\delta_{ps}, \end{equation} which is a consequence of (\ref{02}). As a result, we get \begin{equation}\label{08} F_{mn}=\frac{1}{2}[R_{ms}(\partial_n\partial_s\varphi-\partial_n\varphi\partial_s\varphi) -R_{ns}(\partial_m\partial_s\varphi-\partial_m\varphi\partial_s\varphi)+R_{mn}(\partial_s\varphi)^2]. \end{equation} Now we impose the anti-self-duality condition $R_{mn}F_{mn}=0$ and use the identities \begin{equation}\label{32} R_{mn}R_{sn}=\delta_{nn}(R_{ms}-\delta_{ms}),\quad R_{mn}R_{mn}=-\delta_{mm}\delta_{nn}, \end{equation} which are consequences of (\ref{06}). As a result, we obtain the equation \begin{equation} \partial_s\partial_s\varphi+3\partial_s\varphi\partial_s\varphi=0. \end{equation} Since $\varphi=\varphi(x^2)$, this equation is equivalent to the ordinary differential equation \begin{equation} x^2\varphi''+4\varphi'+3x^2(\varphi')^2=0, \end{equation} where $\varphi'=\partial\varphi/\partial (x^2)$. Solving this equation, we find \begin{equation}\label{07} \varphi=\frac{1}{3}\ln\left(c_1+\frac{c_2}{x^6}\right). \end{equation} This is a solution of the anti-self-duality equations (\ref{04}). \par Interestingly, the ansatz (\ref{05}) is a solution to the self-duality equations for $\varphi=\ln(\lambda^2+x^2)$. To show this, we represent the matrices $R_{mn}$ in the following form \begin{equation} R_{mn}=\frac{1}{2}(e^t_me_n-e^t_ne_m), \end{equation} where $e_8$ is the unit $8\times8$ matrix, $e_m$ is an image of $\Gamma_m\in Cl_{0,7}(\mathbb{R})$ for $m\ne8$, and $e_m^t$ signifies the transposition of the matrix $e_m$.
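The solution (\ref{07}) can be verified symbolically: writing $u=x^2$, the ordinary differential equation becomes $u\varphi''+4\varphi'+3u(\varphi')^2=0$. A short check with Python's sympy (an added dependency, not part of the paper):

```python
import sympy as sp

u, c1, c2 = sp.symbols('u c1 c2', positive=True)   # u = x^2

# Candidate solution: phi = (1/3) ln(c1 + c2/x^6) = (1/3) ln(c1 + c2/u^3)
phi = sp.log(c1 + c2 / u**3) / 3

# ODE in u:  u*phi'' + 4*phi' + 3*u*(phi')^2 = 0
ode = u * sp.diff(phi, u, 2) + 4 * sp.diff(phi, u) + 3 * u * sp.diff(phi, u)**2
assert sp.simplify(ode) == 0    # the candidate indeed solves the equation
```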
In this case, the potential is \begin{equation} \tilde{A}_m=-\frac{R_{mn}x_n}{\lambda^2+x^2} \end{equation} and the gauge field strength is \begin{equation}\label{09} \tilde{F}_{mn}=\frac{2\lambda^2R_{mn}}{(\lambda^2+x^2)^2}. \end{equation} This is exactly a solution of the self-duality equations that was obtained in Ref.~\cite{gros84}. \par Let us return again to the obtained solution of the anti-self-duality equations. If we substitute the solution (\ref{07}) into (\ref{08}), then we get the gauge field strength \begin{equation}\label{10} F_{mn}=\frac{2\lambda^2x^2}{(\lambda^2+x^6)^2}(4R_{mp}x_nx_p-4R_{np}x_mx_p-R_{mn}x^2), \end{equation} where $\lambda^2=c_2/c_1$. It is not difficult to see that the solutions (\ref{09}) and (\ref{10}) retain their form when $x_n$ is replaced by $x_n-b_n$, where $b_n\in\mathbb{R}$. Further, the gauge transformations of $A(x)$ and $\tilde{A}(x)$ induce the transformations $R_{mn}\to U^{-1}R_{mn}U$, where $U\in Spin(8)$, which only lead to a change in the basis of $so(8)$ and therefore leave the solutions unchanged. Consequently, the solutions (\ref{09}) and (\ref{10}) have the same number of free parameters and the same gauge group. At the same time, the potentials $A(x)$ and $\tilde{A}(x)$ are gauge nonequivalent. To show this, it suffices to note that \begin{equation} \text{tr}F^2_{mn}=-56^2\frac{4\lambda^4x^8}{(\lambda^2+x^6)^4}\ne\text{tr}\tilde{F}^2_{mn}. \end{equation} Note that the field strength (\ref{10}) is not a function of $x^2$ alone, and therefore the solution found is not rotationally invariant. This fundamentally distinguishes it from the known solutions to the anti-self-duality equations in eight dimensions. \par Thus, the formula (\ref{07}) indeed gives a new solution of the anti-self-duality equations (\ref{04}).
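The passage from (\ref{08}) to (\ref{10}) can be verified symbolically. For a radial profile $\varphi(x^2)$ one has $\partial_s\varphi=2x_s\varphi'$ and $\partial_n\partial_s\varphi=2\delta_{ns}\varphi'+4x_nx_s\varphi''$, so (\ref{08}) collapses to $F_{mn}=a(u)R_{mn}+b(u)(R_{ms}x_sx_n-R_{ns}x_sx_m)$ with $u=x^2$, $a=2\varphi'(1+u\varphi')$ and $b=2(\varphi''-(\varphi')^2)$; this intermediate bookkeeping is ours. The sympy sketch below checks that, for the solution (\ref{07}) with $\lambda^2=c_2/c_1$, these coefficients reproduce exactly those appearing in (\ref{10}):

```python
import sympy as sp

# u = x^2 and lam2 = lambda^2 = c2/c1; the overall constant c1 only shifts phi.
u, lam2 = sp.symbols('u lam2', positive=True)
phi = sp.log(1 + lam2 / u**3) / 3                   # solution (07) up to a constant
dphi, d2phi = sp.diff(phi, u), sp.diff(phi, u, 2)

a = 2 * dphi * (1 + u * dphi)                       # coefficient of R_mn
b = 2 * (d2phi - dphi**2)                           # coefficient of R_ms x_s x_n - R_ns x_s x_m

# compare with (10): F_mn = (2 lam2 u / (lam2 + u^3)^2) (4 R_mp x_n x_p - 4 R_np x_m x_p - R_mn u)
assert sp.simplify(a + 2 * lam2 * u**2 / (lam2 + u**3)**2) == 0
assert sp.simplify(b - 8 * lam2 * u / (lam2 + u**3)**2) == 0
```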
Note also that the resulting anti-self-dual solution becomes self-dual and the self-dual solution becomes anti-self-dual if, instead of $\phi$, we use the homomorphism $\phi':Cl_{0,8}^0(\mathbb{R})\to M_{8}(\mathbb{R})$ with the kernel $\text{Ker}\,\phi'=\{\alpha(1+\Gamma_9)\mid \alpha\in\mathbb{R}\}$. \section{D7-brane effective action} We now consider the Euclidean $D7$-brane in Type IIB string theory. On the world-volume of this $D7$-brane, there is an eight-dimensional Yang-Mills theory which is naturally realized as the low-energy effective field theory. In order to see the (anti-)self-dual instanton effects of the obtained solutions, we consider the $\alpha'$ corrections to the gauge theory. The gauge part of the effective action can be written as \begin{equation} S_{D}=S_2+S_4+\dots, \end{equation} where the first term is the quadratic Yang-Mills action in eight dimensions \begin{equation} S_2=\frac{1}{2g_{YM}^2}\int d^8x\,\text{tr}(F^2), \end{equation} while the second part is a quartic action of the form \begin{equation}\label{31} S_4=\frac{(4\pi\alpha')^2}{4!g_{YM}^2}\int d^8x\,\text{tr}(t_8F^4)-2\pi iC_0k. \end{equation} Here $t_8$ is the ten-dimensional extension of the eight-dimensional light-cone gauge ``zero-mode'' tensor, i.e. \begin{align} t_8F^4&=F^{MN}F_{PN}F_{MS}F^{PS}+\frac{1}{2}F^{MN}F_{PN}F^{PS}F_{MS}\notag\\ &-\frac{1}{4}F^{MN}F_{MN}F^{PS}F_{PS}-\frac{1}{8}F^{MN}F^{PS}F_{MN}F_{PS}. \end{align} Moreover, $k$ is the fourth Chern number \begin{equation}\label{24} k=\frac{1}{4!(2\pi)^4}\int\text{tr}(F\wedge F\wedge F\wedge F) \end{equation} and $C_0$ is a scalar field of the closed string RR sector. \par Following~\cite{bill09}, we will interpret the eight-dimensional instantons as $D$-instantons, i.e. as instantons embedded in $D7$-branes. Such instantons are sources for the RR 0-form $C_0$.
When the (anti-)self-duality condition holds, the trace \begin{equation} \text{tr}(t_8F^4)=\pm\frac{1}{2}\text{tr}(F\wedge F\wedge F\wedge F), \end{equation} and hence the quartic action $S_4$ becomes \begin{equation} S_4=-2\pi i(C_0\pm\frac{i}{g_s})k. \end{equation} This precisely matches the action of $k$ $D$-instantons. Thus, the eight-dimensional (anti-)self-dual instantons become $D$-instantons when $S_2=0$, $S_4\ne0$, and all the $O(\alpha'^4/g_{YM}^2)$ terms vanish. The second condition is fulfilled in the zero-slope limit $\alpha'\to0$ with fixed $\alpha'^2/g_{YM}^2$. \par On the other hand, it follows from the identities (\ref{30}) that $\text{tr}F_{mnps}^2=0$ and therefore \begin{equation} \frac{1}{12}\text{tr}(t_8F^4)=\frac{1}{4!\cdot 2^4}\text{tr}(\epsilon^{mnpsijkl}F_{mn}F_{ps}F_{ij}F_{kl})=0. \end{equation} Hence the fourth Chern number and the quartic action (\ref{31}) are equal to zero. Thus, the solution to the anti-self-duality equations found in the previous section is not a $D$-instanton embedded in the $D7$-brane.
\section{Introduction}\label{section1} \vspace{-1em} ~~Recently, the distributed cooperative control of multi-agent systems has attracted extensive attention in multiple fields, including wireless communication, robotics and distributed computation, as shown in \cite{c1}-\cite{c8}. As an important research topic in distributed control, formation control refers to designing a control strategy with neighboring information such that a group of autonomous agents reaches an expected geometric shape. In the past two decades, several classical formation control methodologies were investigated, such as the leader-follower method \cite{c9}, the virtual-structure-based strategy \cite{c10} and the behavioral approach \cite{c11}, among many others. Beard {\it et al}. \cite{c12} showed that each of the above-mentioned classical methodologies has its corresponding weakness. Ren \cite{c13} addressed formation control problems by implementing the consensus-based approach and showed that the above-mentioned classical methodologies could be unified in the framework of consensus-based formation control. Inspired by the development of consensus theory in the control community (e.g., \cite{c14}-\cite{c19}), newly developed consensus-based formation control strategies were reported in many application fields including mobile robots, intelligent ground vehicles and unmanned aerial vehicles (see \cite{c20}-\cite{c25} and the references therein). \par In many practical circumstances, multi-agent systems may suffer external disturbances due to environmental uncertainties, which can drive these systems to oscillation or divergence. For example, in the formation flying of multiple quadrotors, atmospheric disturbances can be regarded as additional forces and may cause instabilities in both the attitude and position dynamics.
It is significant to address disturbance rejection problems such that multi-agent systems can achieve asymptotic disturbance rejection while preserving closed-loop stability. Jafarian {\it et al}. \cite{c26} studied formation keeping control for a group of nonholonomic wheeled robots with matched input disturbances, where the disturbances were compensated by internal-model-based controllers. In \cite{c27}, time-invariant formation tracking control for a group of quadrotors with unknown bounded disturbances was achieved by designing an ${H_\infty }$ controller, where the disturbance cannot be rejected by the proposed method in the whole desired frequency range. Liu {\it et al}. \cite{c28} proposed a robust compensating filter to handle time-invariant formation control problems of multiple quadrotors, with disturbance rejection in the whole frequency domain to the desired extent. \par Note that the desired formation was time-invariant in \cite{c20}-\cite{c28}. However, time-varying formation configurations are required in many applications due to complex external environments and/or variable mission situations. For example, the formation shape should be changed during obstacle avoidance for multiple mobile robots. Several significant results on time-varying formation control were obtained in \cite{c29}-\cite{c33}. Cooperative time-varying formation control methods were studied in \cite{c29}, where the formation was characterized by time-varying external parameters. A time-varying formation of collaborative unmanned aerial vehicles and unmanned ground vehicles was achieved in \cite{c30}. Time-varying formation tracking control was achieved under the influence of switching topologies in \cite{c31}. Dong {\it et al}.
\cite{c32} investigated the time-varying formation analysis and design problems for second-order multi-agent systems with directed topologies, where a formation feasibility condition was proposed to show that not all expected formations can be achieved. Time-varying group formation control for multi-agent systems with directed topologies was shown in \cite{c33}. However, further investigation of disturbance rejection for time-varying formations under the influence of external disturbances was not considered in \cite{c29}-\cite{c33}, and the disturbance rejection methods for time-invariant formations in \cite{c26}-\cite{c28} cannot be implemented since the expected formation is time-varying. To the best of our knowledge, robust time-varying formation design problems for second-order multi-agent systems with unknown external disturbances have not been investigated extensively. \par Motivated by the above-mentioned facts, the current paper develops an extended-state-observer method to tackle the robust time-varying formation control problem for multi-agent systems subject to external disturbances. With the disturbance compensation, a novel robust time-varying formation control protocol is proposed, using only relative neighboring information. By regarding external disturbances as additional states, an extended state observer (ESO) is constructed to determine the disturbance compensation. Then, the closed-loop dynamics of the whole multi-agent system is divided into two parts. The first one is the formation agreement dynamics, which is utilized to derive an explicit expression of the formation center function. The second part, called the disagreement dynamics, describes the relative motion among agents. Sufficient conditions for the robust time-varying formation design are determined via algebraic Riccati equation techniques, together with the formation feasibility conditions.
Moreover, the tracking performance and the robust stability of the closed-loop system are analyzed. \par Compared with the existing results on the time-varying formation control of multi-agent systems, the current paper contains the following three novel features. Firstly, to achieve the disturbance rejection control objective, a robust time-varying formation control protocol is proposed with the robust disturbance compensation. Time-varying formation protocols in \cite{c29}-\cite{c33} cannot deal with the robust time-varying formation control problems when the influence of external disturbances is considered. Secondly, an ESO is constructed to determine the robust disturbance compensation, which can actively compensate the external disturbance in real time. The tracking performance and robustness properties are analyzed with the ESO and the formation feasibility condition. However, disturbance compensations and robustness properties were not considered in \cite{c29}-\cite{c33}. Thirdly, an explicit expression of the formation center function is deduced to show the macroscopic motion of the whole formation under the influence of external disturbances. It is revealed that the disturbance compensation has effects on formation center functions. However, \cite{c29}-\cite{c31} did not give the formation center function, and the formation center functions in \cite{c32} and \cite{c33} could not reflect the impact of the disturbance compensation. \par An outline of the current paper is presented as follows. Section \ref{section2} gives the problem description. In Section \ref{section3}, an explicit expression of the formation center functions is determined. In Section \ref{section4}, sufficient conditions of the robust time-varying formation design are shown and the tracking performance and the robust stability are analyzed. Section \ref{section5} illustrates the effectiveness of theoretical results via a numerical simulation.
Conclusions are stated in Section \ref{section6}.\par Notations: Let ${\mathbb{R}^n}$ and ${\mathbb{R}^{n \times m}}$ denote the set of $n$-dimensional real column vectors and the set of $n \times m$ real matrices, respectively. For simplicity, $0$ uniformly represents the zero number, zero vectors and zero matrices. ${{\mathbf{1}}_N}$ stands for an $N$-dimensional column vector with each entry being $1$. ${P^{ - 1}}$, ${P^H}$ and ${P^T}$ denote the inverse matrix, the Hermitian adjoint matrix and the transpose matrix of $P$, respectively. The norms used here are defined as ${\left\| {h(t)} \right\|_1} = {\max _i}\left( {\sum\nolimits_j {\int_0^\infty {\left| {{h_{ij}}(t)} \right|dt} } } \right)$, $\left\| {p(t)} \right\| = {\left\| {p(t)} \right\|_2} = {\left( {\sum\nolimits_{i = 1}^n {{{\left| {{p_i}(t)} \right|}^2}} } \right)^{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}}$ and ${\left\| {p(t)} \right\|_\infty } = {\max _i}{\sup _{t \geqslant 0}}\left| {{p_i}(t)} \right|$, where $\left| \cdot \right|$ is the absolute value, $h(t) = \left[ {{h_{ij}}(t)} \right] \in {\mathbb{R}^{m \times n}}$ and $p(t) = \left[ {{p_i}(t)} \right] \in {\mathbb{R}^n}$. The Kronecker product is represented by the notation $ \otimes $.\par \section{Problem description }\label{section2} \vspace{-0.5em} ~~~~Consider a group of $N$ identical agents with the dynamics of the $i$th agent described by: \begin{eqnarray}\label{1} \left\{ {\begin{array}{*{20}{c}} {{{\dot p}_i}(t) = {v_i}(t),} \hfill \\ {{{\dot v}_i}(t) = {\alpha _p}{p_i}(t) + {\alpha _v}{v_i}(t) + {u_i}(t) + {\omega _i}(t),} \hfill \\ \end{array} } \right. \end{eqnarray} where $i = 1,2, \cdots ,N $, ${p_i}(t)$, ${v_i}(t)$ and ${u_i}(t) \in {\mathbb{R}^n}$ represent the position, the velocity and the control input, respectively, ${\omega _i}(t) \in {\mathbb{R}^n}$ is the unknown bounded external disturbance and ${\alpha _p}$ and ${\alpha _v}$ are the damping constants.
The interaction topology among agents is described by a digraph $G$, where agent $i$ is represented by the $i$th node, the interaction channel among nodes is denoted by an edge and the interaction strength is depicted by the edge weight ${w_{ij}}$. Note that ${w_{ij}} > 0$ if agent $j$ belongs to the neighbor set ${N_i}$ of agent $i$ and ${w_{ij}} = 0$ otherwise. For the digraph $G$, the weighted adjacency matrix is $W = {[{w_{ij}}]_{N \times N}}$ and $D = {\text{diag}}\{ {d_1},{d_2}, \cdots ,{d_N}\} $ stands for the in-degree matrix. Define the Laplacian matrix of $G$ as $L = D - W$. A directed path from node $i$ to node $j$ is a finite ordered sequence of edges described as $\left\{ {({v_i},{v_m}),({v_m},{v_n}), \cdots ,({v_l},{v_j})} \right\}$. A digraph is said to have a spanning tree if there exists a root node $i$ that has a directed path to every other node. \begin{lem} [\cite{c34}]\label{lemma1} If $G$ has a spanning tree, then $0$ is a simple eigenvalue of its Laplacian matrix $L$ with ${\mathbf{1}_N}$ being the associated eigenvector, and the other $N - 1$ eigenvalues have positive real parts; that is, $0 = {\lambda _1} < \operatorname{Re} ({\lambda _2}) \leqslant \cdots \leqslant \operatorname{Re} ({\lambda _N})$. \end{lem} Let ${f_i}(t) = {[f_{ip}^T(t),f_{iv}^T(t)]^T} \in {\mathbb{R}^{2n}}$ $(i \in \{ 1,2, \cdots ,N\} )$ be a piecewise continuously differentiable vector, then the expected time-varying formation is specified by a vector $f(t) = {[f_1^T(t),f_2^T(t), \cdots ,f_N^T(t)]^T}$.
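Lemma \ref{lemma1} is easy to exercise numerically. The sketch below (our own illustrative example, a directed chain on four nodes, which has a spanning tree rooted at the first node) checks the spectral properties of $L = D - W$:

```python
import numpy as np

# Directed chain 1 -> 2 -> 3 -> 4; w_ij > 0 iff agent i receives information
# from agent j, so row i of W lists the in-neighbors of agent i.
W = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W                 # Laplacian L = D - W
eig = np.sort_complex(np.linalg.eigvals(L))

assert abs(eig[0]) < 1e-9                      # 0 is an eigenvalue ...
assert np.all(eig[1:].real > 1e-9)             # ... simple, rest in the open RHP
assert np.allclose(L @ np.ones(4), 0)          # 1_N is the associated eigenvector
```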
By considering the disturbance compensation, a robust time-varying formation control protocol is proposed as follows: \begin{eqnarray}\label{2} {u_i}(t) = {K_u}\sum\limits_{j \in {N_i}} {{w_{ij}}\left( {{x_j}(t) - {x_i}(t) - {f_j}(t) + {f_i}(t)} \right)} - \alpha {f_i}(t) + {\dot f_{iv}}(t) - {z_i}(t), \end{eqnarray} where $i \in \{ 1,2, \cdots ,N\} $, ${x_i}(t) = {[p_i^T(t),v_i^T(t)]^T}$, $\alpha = [{\alpha _p},{\alpha _v}] \otimes {I_n}$, ${K_u} \in {\mathbb{R}^{n \times 2n}}$ is the gain matrix and ${z_i}(t)$ is the robust disturbance compensation, which is determined by the following ESO: \begin{eqnarray}\label{3} \left\{ \begin{gathered} {{\dot g}_i}(t) = {z_i}(t) + {u_i}(t) + {\alpha _p}{p_i}(t) + {\alpha _v}{v_i}(t) - {\beta _{ig}}\left( {{g_i}(t) - {v_i}(t)} \right), \hfill \\ {{\dot z}_i}(t) = - {\beta _{iz}}\left( {{g_i}(t) - {v_i}(t)} \right), \hfill \\ \end{gathered} \right. \end{eqnarray} where ${\beta _{ig}}$ and ${\beta _{iz}}$ are bandwidth constants, and ${g_i}(t)$ is the intermediate variable of the ESO.\par Let $x(t) = {[x_1^T(t),x_2^T(t), \cdots ,x_N^T(t)]^T}$, ${\theta _1} = {[1,0]^T} \otimes {I_n}$ and ${\theta _2} = {[0,1]^T} \otimes {I_n}$, then multi-agent system (\ref{1}) with protocol (\ref{2}) can be rewritten as a global closed-loop system with the following dynamics: \[ \dot x(t) = \left( {{I_N} \otimes \left( {{\theta _1}\theta _2^T + {\theta _2}\alpha } \right) - L \otimes {\theta _2}{K_u}} \right){x}(t) - \left( {{I_N} \otimes {\theta _2}\alpha - L \otimes {\theta _2}{K_u}} \right)f(t)\] \vspace{-3em} \begin{eqnarray}\label{4} \hspace{6em} + \left( {{I_N} \otimes {\theta _2}\theta _2^T} \right)\dot f(t) + \left( {{I_N} \otimes {\theta _2}} \right)\left( {\omega (t) - z(t)} \right). 
\end{eqnarray} \begin{df} \label{definition1} For any given positive constant $\varepsilon $ and bounded initial states $x(0)$, multi-agent system (\ref{1}) is said to be robust time-varying formation-reachable by protocol (\ref{2}) if all states involved in the global closed-loop system (\ref{4}) are bounded and there exist a gain matrix ${K_u}$, a vector-valued function $c(t)$ and a finite constant ${t_\varepsilon }$ such that $\left\| {{x_i}(t) - {f_i}(t) - c(t)} \right\| \leqslant \varepsilon $ $(\forall i \in \{ 1,2, \cdots ,N\} )$, $\forall t \geqslant {t_\varepsilon}$, where $c(t)$ and $\varepsilon $ are called the formation center function and the time-varying formation error bound, respectively.\par \end{df} The control objective of the current paper is to design the robust time-varying formation control protocol such that second-order multi-agent systems with external disturbances can reach the expected robust time-varying formation. The focus is on the following three problems: (i) determining an explicit expression of formation center functions; (ii) designing the gain matrix ${K_u}$ of protocol (\ref{2}); (iii) analyzing the tracking performance and the robustness property of the global closed-loop system.\par \section{Formation center functions}\label{section3}\vspace{-0.5em} ~~~This section gives an explicit expression of formation center functions and shows the impacts of the time-varying formation and the disturbance compensation on the formation center function, respectively.
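Before it is used in the closed loop, the ESO (\ref{3}) can be exercised on a single agent in isolation. The following minimal forward-Euler simulation uses our own illustrative parameters ($n=1$, $\alpha_p=0$, $\alpha_v=-1$, $u\equiv0$, observer gains $\beta_{ig}=2\sigma$ and $\beta_{iz}=\sigma^2$ with $\sigma=100$, disturbance $\omega(t)=\sin 2t+0.5$) and checks that the compensation $z(t)$ tracks the unknown disturbance:

```python
import numpy as np

# Single-agent illustration of the ESO (3); all parameter values are ours.
alpha_p, alpha_v = 0.0, -1.0
sigma = 100.0
beta_g, beta_z = 2.0 * sigma, sigma**2
dt, steps = 1e-3, 3000

p = v = g = z = 0.0
err = []
for k in range(steps):
    t = k * dt
    w = np.sin(2.0 * t) + 0.5                   # unknown bounded disturbance
    u = 0.0                                     # open loop: only test the observer
    dp = v
    dv = alpha_p * p + alpha_v * v + u + w      # agent dynamics (1)
    dg = z + u + alpha_p * p + alpha_v * v - beta_g * (g - v)   # ESO, first eq.
    dz = -beta_z * (g - v)                                      # ESO, second eq.
    p, v, g, z = p + dt * dp, v + dt * dv, g + dt * dg, z + dt * dz
    err.append(abs(z - w))

# after the O(1/sigma) transient, z(t) tracks the unknown disturbance closely
assert max(err[-500:]) < 0.1
```

Raising the bandwidth $\sigma$ shrinks the residual estimation error, at the price of faster observer dynamics.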
\par Let ${\xi _i}(t) = {x_i}(t) - {f_i}(t)$ $(i \in \{ 1,2, \cdots ,N\} )$ and $\xi (t) = {[\xi _1^T(t),\xi _2^T(t), \cdots ,\xi _N^T(t)]^T}$, then global closed-loop system (\ref{4}) can be transformed into \[ \dot \xi (t) = \left( {{I_N} \otimes \left( {{\theta _1}\theta _2^T + {\theta _2}\alpha } \right) - L \otimes {\theta _2}{K_u}} \right)\xi (t) + \left( {{I_N} \otimes {\theta _2}} \right)\left( {{\omega}(t) - {z}(t)} \right) \] \vspace{-3em} \begin{eqnarray}\label{5} \hspace{6em} + \left( {{I_N} \otimes {\theta _1}\theta _2^T} \right)f(t) - \left( {{I_N} \otimes {\theta _1}\theta _1^T} \right)\dot f(t). \end{eqnarray} Let $U = [{{\mathbf{1}}_N},\tilde u] \in {\mathbb{R}^{N \times N}}$ be a nonsingular matrix with $\tilde u = [{\tilde u_2},{\tilde u_3}, \cdots ,{\tilde u_N}] \in {\mathbb{R}^{N \times (N - 1)}}$ such that ${U^{ - 1}}LU = J$, where ${U^{ - 1}} = {[\bar u_1^H,{\bar u^H}]^H}$ with $\bar u = {[\bar u_2^H,\bar u_3^H, \cdots ,\bar u_N^H]^H} \in {\mathbb{R}^{(N - 1) \times N}}$ and $J$ is the Jordan canonical form of $L$.\par According to Lemma \ref{lemma1} and the structure of $U$, one can obtain that $J = \text{diag}\{ 0,\tilde J\} $, where $\tilde J$ consists of the corresponding Jordan blocks of ${\lambda _i}$ $(i = 2,3, \cdots ,N)$. 
Let $\tilde \xi (t) = ({U^{ - 1}} \otimes {I_{2n}})\xi (t) = {[{\kappa ^T}(t),{\varphi ^T}(t)]^T}$, in which $\kappa (t) = {\tilde \xi _1}(t)$ and $\varphi (t) = {[\tilde \xi _2^T(t),\tilde \xi _3^T(t), \cdots ,\tilde \xi _N^T(t)]^T}$, then multi-agent system (\ref{5}) can be transformed into \begin{eqnarray}\label{6} \dot \kappa (t) = \left( {{\theta _1}\theta _2^T + {\theta _2}\alpha } \right)\kappa (t) + \left( {{{\bar u}_1} \otimes {\theta _2}} \right)\left( {\omega (t) - z(t)} \right) + \left( {{{\bar u}_1} \otimes {\theta _1}} \right)\left( {{f_v}(t) - {{\dot f}_p}(t)} \right), \end{eqnarray} \[ \dot \varphi (t) = \left( {{I_{N - 1}} \otimes \left( {{\theta _1}\theta _2^T + {\theta _2}\alpha } \right) - \tilde J \otimes {\theta _2}{K_u}} \right)\varphi (t) + (\bar u \otimes {\theta _2})\left( {\omega (t) - z(t)} \right) \] \vspace{-3em} \begin{eqnarray}\label{7} \hspace{8em} + \left( {\bar u \otimes {\theta _1}} \right)\left( {{f_v}(t) - {{\dot f}_p}(t)} \right), \end{eqnarray} where ${f_v}(t) = {[f_{1v}^T(t),f_{2v}^T(t), \cdots ,f_{Nv}^T(t)]^T}$ and ${\dot f_p}(t) = {[\dot f_{1p}^T(t),\dot f_{2p}^T(t), \cdots ,\dot f_{Np}^T(t)]^T}$. \par Subsystems (\ref{6}) and (\ref{7}) depict the formation agreement and disagreement dynamics of multi-agent system (\ref{1}), which describe the absolute movement of the whole system and the relative movement among agents, respectively. 
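The agreement/disagreement splitting can be exercised end-to-end. The simulation below is our own illustrative setup, not from the analysis: $N=4$ scalar agents ($n=1$, $\alpha_p=\alpha_v=0$) on a directed cycle (which contains a spanning tree and has $\operatorname{Re}(\lambda_2)=1$), the feasible time-varying formation $f_{ip}=\sin(t+2\pi i/N)$ with $f_{iv}=\dot f_{ip}$, disturbances $\omega_i=\sin(t+i)$, protocol (\ref{2}) with the assumed gain $K_u=[1,\sqrt3]$ (the Riccati-based gain for a pure double integrator) and one ESO (\ref{3}) per agent; the check is that the disagreement part becomes small:

```python
import numpy as np

N, dt, steps = 4, 1e-3, 30000
W = np.roll(np.eye(N), 1, axis=1)          # directed cycle: agent i listens to i+1
Ku = np.array([1.0, np.sqrt(3.0)])         # assumed gain for alpha_p = alpha_v = 0
sigma = 100.0
bg, bz = 2.0 * sigma, sigma**2             # ESO gains beta_ig, beta_iz
phase = 2.0 * np.pi * np.arange(N) / N
p, v, g, z = (np.zeros(N) for _ in range(4))

for k in range(steps):
    t = k * dt
    fp, fv, dfv = np.sin(t + phase), np.cos(t + phase), -np.sin(t + phase)
    w = np.sin(t + np.arange(N))
    ep, ev = p - fp, v - fv                                        # xi_i = x_i - f_i
    u = Ku[0] * (W @ ep - ep) + Ku[1] * (W @ ev - ev) + dfv - z    # protocol (2)
    dg = z + u - bg * (g - v)                                      # ESO (3)
    dz = -bz * (g - v)
    p, v, g, z = p + dt * v, v + dt * (u + w), g + dt * dg, z + dt * dz

t_end = steps * dt
xi_p = p - np.sin(t_end + phase)
xi_v = v - np.cos(t_end + phase)
assert np.max(np.abs(xi_p - xi_p[0])) < 0.2    # formation reached up to f_i(t)
assert np.max(np.abs(xi_v - xi_v[0])) < 0.2
assert np.max(np.abs(z - np.sin(t_end + np.arange(N)))) < 0.2   # z_i tracks w_i
```

The final spread of $\xi_i=x_i-f_i$ across agents is the disagreement part $\varphi(t)$, while the common residual motion is the agreement part $\kappa(t)$.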
According to subsystem (\ref{6}), the following theorem determines the impact of the disturbance compensation on the formation center function and shows an explicit expression of the formation center function, which describes the macroscopic motion of the whole formation.\par \begin{thm} \label{theorem1} For any given $\varepsilon > 0$, if multi-agent system (\ref{1}) reaches the expected robust time-varying formation $f(t)$, then the formation center function $c(t)$ satisfies that \[\left\| {c(t) - {c_0}(t) - {c_z}(t) - {c_f}(t)} \right\| \leqslant \varepsilon ,{\text{ }}\forall t \geqslant {{t_\varepsilon }},\] where ${t_\varepsilon }$ is a finite constant and \vspace{-5pt} \[{c_0}(t) = {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}({\bar u_1} \otimes {I_{2n}})x(0),\] \[{c_z}(t) = \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \varsigma )}}\left( {{{\bar u}_1} \otimes {\theta _2}} \right)\left( {\omega (\varsigma ) - z(\varsigma )} \right)} d\varsigma ,\] \[{c_f}(t) = \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _2}} \right)\left( {{{\dot f}_v}(\tau ) - {\alpha _p}{f_p}(\tau ) - {\alpha _v}{f_v}(\tau )} \right)} d\tau - \left( {{{\bar u}_1} \otimes {I_{2n}}} \right)f(t).\] \end{thm}\vspace{-10pt} \begin{proof} Let ${e_1} \in {\mathbb{R}^N}$ denote a unit vector with its first element being $1$. Define the following auxiliary functions: \begin{eqnarray}\label{8} {\xi _a}(t) = (U \otimes {I_{2n}}){[{\kappa ^T}(t),0]^T}, \end{eqnarray} \begin{eqnarray}\label{9} {\xi _d}(t) = (U \otimes {I_{2n}}){[0,{\varphi ^T}(t)]^T}, \end{eqnarray} with $\left\| {U \otimes {I_{2n}}} \right\| = {\varepsilon _N}$. Due to $({U^{ - 1}} \otimes {I_{2n}})\xi (t) = {[{\kappa ^T}(t),{\varphi ^T}(t)]^T}$, it can be obtained from (\ref{8}) and (\ref{9}) that \begin{eqnarray}\label{10} \xi (t) = {\xi _a}(t) + {\xi _d}(t).
\end{eqnarray} Since $U \otimes {I_{2n}}$ is nonsingular, one can conclude that ${\xi _a}(t)$ and ${\xi _d}(t)$ are linearly independent. It follows from (\ref{8}) and the fact ${[{\kappa ^T}(t),0]^T} = {e_1} \otimes \kappa (t)$ that \begin{eqnarray}\label{11} {\xi _a}(t) = \left( {U \otimes {I_{2n}}} \right)\left( {{e_1} \otimes \kappa (t)} \right) = U{e_1} \otimes \kappa (t) = {\mathbf{1}_N} \otimes \kappa (t). \end{eqnarray} From (\ref{10}) and (\ref{11}), one can show that \begin{eqnarray}\label{12} {\xi _d}(t) = \xi (t) - {\mathbf{1}_N} \otimes \kappa (t). \end{eqnarray} From (\ref{9}), (\ref{10}) and (\ref{12}), one can find that for any given positive constant $\varepsilon $, there exists a finite constant ${t_{\varepsilon }}$ such that $\left\| {{x_i}(t) - {f_i}(t) - \kappa (t)} \right\| \leqslant \varepsilon $ $(i \in \{ 1,2, \cdots ,N\} )$, $\forall t \geqslant {t_\varepsilon}$, if $\left\| {\varphi (t)} \right\| \leqslant {\varepsilon \mathord{\left/ {\vphantom {\varepsilon {{\varepsilon _N}}}} \right. \kern-\nulldelimiterspace} {{\varepsilon _N}}} = {\varepsilon _\varphi }$, $\forall t \geqslant {t_\varepsilon}$, which means that $\varphi (t)$ represents the time-varying formation error and $\kappa (t)$ is a candidate formation center function.\par From (\ref{8}), one can obtain that \vspace{-5pt} \begin{eqnarray}\label{13} \kappa (0) = ({\bar u_1} \otimes {I_{2n}})(x(0) - f(0)).
\end{eqnarray} One can show that \[\begin{gathered} \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _1}} \right)\left( {{f_v}(\tau ) - {{\dot f}_p}(\tau )} \right)} d\tau \hfill \\ = \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _1}} \right){f_v}(\tau )} d\tau + {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}\left( {{{\bar u}_1} \otimes {\theta _1}} \right){f_p}(0) \hfill \\ \end{gathered} \] \vspace{-2em} \begin{eqnarray}\label{14} \hspace{5em} - \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes ({\theta _1}\theta _2^T + {\theta _2}\alpha ){\theta _1}} \right){f_p}(\tau )} d\tau - \left( {{{\bar u}_1} \otimes {\theta _1}} \right){f_p}(t), \end{eqnarray} and \[\begin{gathered} \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _2}} \right){{\dot f}_v}(\tau )} d\tau \hfill \\ = \left( {{{\bar u}_1} \otimes {\theta _2}} \right){f_v}(t) - {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}\left( {{{\bar u}_1} \otimes {\theta _2}} \right){f_v}(0) \hfill \\ \end{gathered} \] \vspace{-2em} \begin{eqnarray}\label{15} \hspace{7em} + \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes ({\theta _1}\theta _2^T + {\theta _2}\alpha ){\theta _2}} \right){f_v}(\tau )} d\tau . 
\end{eqnarray} By the structure of $f(t)$, it can be found that \begin{eqnarray}\label{16} \left( {{{\bar u}_1} \otimes {\theta _1}} \right){f_p}(t) + \left( {{{\bar u}_1} \otimes {\theta _2}} \right){f_v}(t) = \left( {{{\bar u}_1} \otimes {I_{2n}}} \right)f(t), \end{eqnarray} \begin{eqnarray}\label{17} {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}\left( {\left( {{{\bar u}_1} \otimes {\theta _1}} \right){f_p}(0) + \left( {{{\bar u}_1} \otimes {\theta _2}} \right){f_v}(0)} \right) = {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}\left( {{{\bar u}_1} \otimes {I_{2n}}} \right)f(0). \end{eqnarray} Then, it follows from (\ref{14})-(\ref{17}) that \[\begin{gathered} \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _1}} \right)\left( {{f_v}(\tau ) - {{\dot f}_p}(\tau )} \right)} d\tau \hfill \\ = \int_0^t {{e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )(t - \tau )}}\left( {{{\bar u}_1} \otimes {\theta _2}} \right)\left( {{{\dot f}_v}(\tau ) - {\alpha _p}{f_p}(\tau ) - {\alpha _v}{f_v}(\tau )} \right)} d\tau \hfill \\ \end{gathered} \] \vspace{-2em} \begin{eqnarray}\label{18} \hspace{-2em} + {e^{({\theta _1}\theta _2^T + {\theta _2}\alpha )t}}\left( {{{\bar u}_1} \otimes {I_{2n}}} \right)f(0) - \left( {{{\bar u}_1} \otimes {I_{2n}}} \right)f(t). \end{eqnarray} In virtue of (\ref{6}), (\ref{13}) and (\ref{18}), the conclusion of Theorem \ref{theorem1} can be obtained. \end{proof} \section{Robust time-varying formation design }\label{section4}\vspace{-0.5em} ~~~~In this section, firstly, an algorithm is presented to show the procedure of designing the robust time-varying formation control protocol. 
Then, sufficient conditions of the robust time-varying formation design are shown and the tracking performance and the robust stability of multi-agent systems are analyzed, respectively.\par The core idea of designing robust time-varying formation control protocol (\ref{2}) is to determine the gain matrix and the robust disturbance compensation. The following algorithm with four steps is presented to design protocol (\ref{2}).\vspace{10pt} \hspace{-10pt}{\it Robust Time-Varying Formation Design Algorithm}\par Step 1: Check the following formation feasibility condition for the expected time-varying formation. \vspace{-10pt} \begin{eqnarray}\label{19} {\left\| {{f_{iv}}(t) - {{\dot f}_{ip}}(t)} \right\|_\infty } \leqslant {\varepsilon _f},{\text{ }}\forall t \geqslant {t_\varepsilon },{\text{ }}\forall i \in \{ 1,2, \cdots ,N\} . \end{eqnarray} If condition (\ref{19}) is satisfied, then go to Step 2; else the expected time-varying formation cannot be reached by multi-agent system (\ref{1}) with protocol (\ref{2}) and the algorithm stops.\par Step 2: Solve the following algebraic Riccati equation for a positive definite matrix $P$ \vspace{-10pt} \begin{eqnarray}\label{20} P({\theta _1}\theta _2^T + {\theta _2}\alpha ) + {({\theta _1}\theta _2^T + {\theta _2}\alpha )^T}P - P{\theta _2}\theta _2^TP + I = 0.
\end{eqnarray}\par Step 3: Set the gain matrix $K_u$ as ${K_u} = {\operatorname{Re} ^{ - 1}}({\lambda _2})\theta _2^TP$.\par Step 4: Choose sufficiently large bandwidth constants ${\beta _{ig}}$ and ${\beta _{iz}}$ $(i \in \{ 1,2, \cdots ,N\} )$ of ESO (\ref{3}) to effectively estimate the robust disturbance compensation.\par With the robust time-varying formation design algorithm, the tracking performance and robust stability properties are analyzed in the following theorem.\par \begin{thm} \label{theorem2} For any given bounded initial states, if formation feasibility condition (\ref{19}) is satisfied, then multi-agent system (\ref{1}) reaches the robust time-varying formation by protocol (\ref{2}) designed in the robust time-varying formation design algorithm. \end{thm}\vspace{-10pt} \begin{proof} Firstly, consider the stability of the following subsystem: \begin{eqnarray}\label{21} {\dot \eta _k}(t) = \left( {{\theta _1}\theta _2^T + {\theta _2}\alpha - {\lambda _k}{\theta _2}{K_u}} \right){\eta _k}(t), \end{eqnarray} where $\forall k \in \{ 2,3, \cdots ,N\} $. Then construct the Lyapunov function candidate as follows: \begin{eqnarray}\label{22} {V_k}(t) = \eta _k^H(t)P{\eta _k}(t). \end{eqnarray} Let ${K_u} = {\operatorname{Re} ^{ - 1}}({\lambda _2})\theta _2^TP$, then differentiating ${V_k}(t)$ along the trajectories of (\ref{21}) yields \begin{eqnarray}\label{23} {\dot V_k}(t) = \eta _k^H(t)\left( {{{({\theta _1}\theta _2^T + {\theta _2}\alpha )}^T}P + P({\theta _1}\theta _2^T + {\theta _2}\alpha ) - 2\operatorname{Re} ({\lambda _k}){{\operatorname{Re} }^{ - 1}}({\lambda _2})P{\theta _2}\theta _2^TP} \right){\eta _k}(t).
\end{eqnarray} Substituting $P({\theta _1}\theta _2^T + {\theta _2}\alpha ) + {({\theta _1}\theta _2^T + {\theta _2}\alpha )^T}P = P{\theta _2}\theta _2^TP - I$ into (\ref{23}) gives \vspace{-5pt} \begin{eqnarray}\label{24} {\dot V_k}(t) = \eta _k^H(t)\left( {\left( {1 - 2\operatorname{Re} ({\lambda _k}){{\operatorname{Re} }^{ - 1}}({\lambda _2})} \right)P{\theta _2}\theta _2^TP - I} \right){\eta _k}(t). \end{eqnarray} Due to $0 < \operatorname{Re} ({\lambda _2}) \leqslant \cdots \leqslant \operatorname{Re} ({\lambda _N})$, one can derive from (\ref{24}) that ${\dot V_k}(t) \leqslant - \eta _k^H(t){\eta _k}(t)$ ($\forall k \in \{ 2,3, \cdots ,N\} $). Therefore, ${\eta _k}(t)$ converges to $0$ asymptotically, which means that ${\theta _1}\theta _2^T + {\theta _2}\alpha - {\lambda _k}{\theta _2}{K_u}$ is Hurwitz. By the structure of $\tilde J$, one can conclude that ${I_{N - 1}} \otimes ({\theta _1}\theta _2^T + {\theta _2}\alpha ) - \tilde J \otimes {\theta _2}{K_u}$ is Hurwitz.\par Then, the tracking performance and the robust stability are analyzed. Let $A = {I_{N - 1}} \otimes ({\theta _1}\theta _2^T + {\theta _2}\alpha ) - \tilde J \otimes {\theta _2}{K_u}$, then subsystem (\ref{7}) can be rewritten as \begin{eqnarray}\label{25} \dot \varphi (t) = A\varphi (t) + (\bar u \otimes {\theta _2})\left( {\omega (t) - z(t)} \right) + \left( {\bar u \otimes {\theta _1}} \right)\left( {{f_v}(t) - {{\dot f}_p}(t)} \right). \end{eqnarray} By Laplace transform, (\ref{3}) can be converted to \begin{eqnarray}\label{26} \left\{ \begin{gathered} {z_i}(s) - {\omega _i}(s) + ({\beta _{ig}} + s)\left( {{v_i}(s) - {g_i}(s)} \right) = 0, \hfill \\ s{z_i}(s) + {\beta _{iz}}{g_i}(s) - {\beta _{iz}}{v_i}(s) = 0, \hfill \\ \end{gathered} \right. \end{eqnarray} where $i \in \{ 1,2, \cdots ,N\} $.
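Steps 2 and 3 of the design algorithm can be reproduced numerically. The sketch below uses our own illustrative values ($n=1$, $\alpha_p=1$, $\alpha_v=0.5$, $\operatorname{Re}(\lambda_2)=1$); SciPy's continuous-time ARE solver handles exactly the equation (\ref{20}) with $B=\theta_2$ and $Q=R=I$, and the resulting ${K_u}$ renders ${\theta _1}\theta _2^T + {\theta _2}\alpha - {\lambda _k}{\theta _2}{K_u}$ Hurwitz for sample eigenvalues with $\operatorname{Re}({\lambda _k}) \geqslant \operatorname{Re}({\lambda _2})$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# For n = 1: A = theta1 theta2^T + theta2 alpha, B = theta2.
alpha_p, alpha_v = 1.0, 0.5                 # illustrative damping constants
A = np.array([[0.0, 1.0], [alpha_p, alpha_v]])
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))

# Residual of the ARE (20): P A + A^T P - P theta2 theta2^T P + I = 0.
assert np.allclose(P @ A + A.T @ P - P @ B @ B.T @ P + np.eye(2), 0, atol=1e-8)

# Step 3: K_u = Re(lambda_2)^{-1} theta2^T P; test Hurwitzness for sample
# Laplacian eigenvalues with Re(lambda_k) >= Re(lambda_2) = 1.
re_l2 = 1.0
Ku = (1.0 / re_l2) * B.T @ P
for lam in [1.0, 1.5 + 0.8j, 3.0 - 2.0j]:
    eigs = np.linalg.eigvals(A - lam * B @ Ku)
    assert np.all(eigs.real < 0)
```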
From (\ref{26}), it can be shown that \begin{eqnarray}\label{27} {z_i}(s) = {G_i}(s){\omega _i}(s), \end{eqnarray} where ${G_i}(s) = {{{\beta _{iz}}} \mathord{\left/ {\vphantom {{{\beta _{iz}}} {({s^2} + {\beta _{ig}}s + {\beta _{iz}})}}} \right. \kern-\nulldelimiterspace} {({s^2} + {\beta _{ig}}s + {\beta _{iz}})}}$, $i \in \{ 1,2, \cdots ,N\} $. Let ${\beta _{ig}} = 2{\sigma _i}$ and ${\beta _{iz}} = \sigma _i^2$, then one can obtain that \begin{eqnarray}\label{28} {G_i}(s) = \frac{{\sigma _i^2}}{{{{(s + {\sigma _i})}^2}}}. \end{eqnarray} From (\ref{27}), it follows that \begin{eqnarray}\label{29} \omega (s) - z(s) = {\text{diag}}\{ 1 - {G_1}(s),1 - {G_2}(s), \cdots ,1 - {G_N}(s)\} \omega (s) = {\Phi _N}(s)\omega (s). \end{eqnarray} Define \begin{eqnarray}\label{30} \left\{ \begin{gathered} {\rho _\omega } = {\left\| {{{(s{I_{2n(N - 1)}} - A)}^{ - 1}}\left( {\bar u{\Phi _N}(s) \otimes {\theta _2}} \right)} \right\|_1}, \hfill \\ {\rho _f} = {\left\| {{{(s{I_{2n(N - 1)}} - A)}^{ - 1}}\left( {\bar u \otimes {\theta _1}} \right)} \right\|_1}, \hfill \\ {\upsilon _{ \varphi (0)}} = {\left\| {{e^{At}}\varphi (0)} \right\|_\infty }. \hfill \\ \end{gathered} \right. \end{eqnarray} From (\ref{25}), (\ref{27}), (\ref{29}) and (\ref{30}), it can be derived that \begin{eqnarray}\label{31} {\left\| {\varphi (t)} \right\|_\infty } \leqslant {\upsilon _{\varphi (0)}} + {\rho _\omega }{\left\| {\omega (t)} \right\|_\infty } + {\rho _f}{\left\| {{f_v}(t) - {{\dot f}_p}(t)} \right\|_\infty }. \end{eqnarray} For agent $i$ $(i \in \{ 1,2, \cdots ,N\} )$, since $\omega (t)$ is bounded, there exist two positive constants ${\gamma _{ \varphi i}}$ and ${\delta _{\omega \varphi i}}$ such that \begin{eqnarray}\label{32} {\left\| {{\omega _i}(t)} \right\|_\infty } \leqslant {\gamma _{ \varphi i}}{\left\| {{u_i}(t)} \right\|_\infty } + {\delta _{\omega \varphi i}},{\text{ }}i \in \{ 1,2, \cdots ,N\} . 
\end{eqnarray} It follows from (\ref{32}) that positive constants ${\gamma _{ \varphi }}$ and ${\delta _{\omega \varphi }}$ exist such that \begin{eqnarray}\label{33} {\left\| {\omega (t)} \right\|_\infty } \leqslant {\gamma _{ \varphi}}{\left\| {u(t)} \right\|_\infty } + {\delta _{\omega \varphi }}. \end{eqnarray} By (\ref{2}), (\ref{3}) and (\ref{29}), one can show that \begin{eqnarray}\label{34} {\left\| {u(t)} \right\|_\infty } = {\delta _{u \varphi 1}}{\left\| {\varphi (t)} \right\|_\infty } + {\delta _{u \varphi 2}}{\left\| {\omega (t)} \right\|_\infty } + {\delta _{u \varphi 3}}, \end{eqnarray} where ${\delta _{u \varphi 1}}$, ${\delta _{u \varphi 2}}$ and ${\delta _{u \varphi 3}}$ are positive constants. Substituting (\ref{34}) into (\ref{33}), one can obtain that ${\upsilon _{\varphi }}$ and ${\upsilon _e}$ exist such that \begin{eqnarray}\label{35} {\left\| {\omega (t)} \right\|_\infty } \leqslant {\upsilon _{\varphi }}{\left\| {\varphi (t)} \right\|_\infty } + {\upsilon _e}. \end{eqnarray} If ${\left\| {\bar u \otimes {\theta _2}} \right\|_\infty }$ is bounded and ${\sigma _i}$ $(i = 1,2, \cdots ,N)$ are sufficiently large, then it can be deduced from (\ref{31}) and (\ref{35}) that \begin{eqnarray}\label{36} \left\{ \begin{gathered} {\left\| {\omega (t)} \right\|_\infty } \leqslant \frac{{{\upsilon _{\varphi }}{\upsilon _{\varphi (0)}} + {\upsilon _e}}} {{1 - {\upsilon _{\varphi }}{\rho _\omega }}}, \hfill \\ {\left\| {\varphi (t)} \right\|_\infty } \leqslant \frac{{{\upsilon _{ \varphi (0)}} + {\upsilon _e}{\rho _\omega }}} {{1 - {\upsilon _{ \varphi }}{\rho _\omega }}}. \hfill \\ \end{gathered} \right. \end{eqnarray} It follows from (\ref{36}) that \begin{eqnarray}\label{37} \left\{ \begin{gathered} {\left\| {\omega (t)} \right\|_\infty } \leqslant {{\tilde \upsilon }_\omega }, \hfill \\ {\left\| {\varphi (t)} \right\|_\infty } \leqslant {{\tilde \upsilon }_{\varphi }}, \hfill \\ \end{gathered} \right. 
\end{eqnarray} where ${\tilde \upsilon _\omega }$ and ${\tilde \upsilon _{ \varphi }}$ are positive constants. According to the formation feasibility condition (\ref{19}), one has that \begin{eqnarray}\label{38} {\left\| {{f_v}(t) - {{\dot f}_p}(t)} \right\|_\infty } \leqslant {\varepsilon _f},{\text{ }}\forall t \geqslant {t_f}. \end{eqnarray} From (\ref{31}), (\ref{37}) and (\ref{38}), one can obtain that \begin{eqnarray}\label{39} \mathop {\max }\limits_i \left| {{\varphi _i}(t)} \right| \leqslant \mathop {\max }\limits_i \left| {c_{_{2n(N - 1),i}}^T{e^{At}}\varphi (0)} \right| + {\rho _\omega }{\tilde \upsilon _\omega } + {\rho _f}{\varepsilon _f},{\text{ }}\forall t \geqslant {t_f}, \end{eqnarray} where $i \in \{ 2,3, \cdots ,N\} $, ${c_{2n(N - 1),i}}$ is a $2n(N - 1)$-dimensional unit column vector with the $i$th element $1$ and other elements $0$. For the bounded initial states ${ \varphi _i}(0)$ $(i \in \{ 2,3, \cdots ,N\} )$, one can find that ${\varphi _i}(t)$ is bounded. It can also be obtained that the states of the robust disturbance compensation ${z_i}(t)$ and the control protocol ${u_i}(t)$ are bounded. It follows that all states involved in the closed-loop system (\ref{4}) are bounded. Furthermore, since $A$ is Hurwitz, there exists a finite constant ${t_\varepsilon } \geqslant {t_f}$ such that $\left\| {\varphi (t)} \right\| \leqslant {\varepsilon _\varphi }$, $\forall t \geqslant {t_\varepsilon }$ for any given positive constant ${\varepsilon _\varphi }$, which means that multi-agent system (\ref{1}) is robust time-varying formation-reachable by protocol (\ref{2}). This completes the proof. \end{proof} \vspace{-10pt} \section{Numerical simulations}\label{section5} \vspace{-5pt} ~~~~In this section, a simulation example is provided to demonstrate the effectiveness of the theoretical results obtained in previous sections. 
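Before the full formation example, the disturbance-estimation property (\ref{27})-(\ref{28}) used in the proof can be checked in isolation. The sketch below is our own minimal reconstruction: it integrates only the estimation-error dynamics $\dot e = \omega - z - {\beta _{ig}}e$, $\dot z = {\beta _{iz}}e$ implied by (\ref{26}) (with $e = v_i - g_i$), not the full ESO (\ref{3}) or the formation dynamics, and confirms that a larger bandwidth constant $\sigma_i$ yields a smaller steady-state estimation error for a sinusoidal disturbance.

```python
import numpy as np

def eso_estimation_error(sigma, omega_fun, t_end=20.0, dt=1e-3):
    """Integrate e' = w - z - beta_g*e, z' = beta_z*e (the error dynamics
    implied by (26), with e = v_i - g_i), using beta_g = 2*sigma and
    beta_z = sigma**2 so that z(s)/w(s) = sigma^2/(s + sigma)^2 as in (28)."""
    beta_g, beta_z = 2.0 * sigma, sigma**2
    e, z = 0.0, 0.0
    ts = np.arange(0.0, t_end, dt)
    z_hist = np.empty_like(ts)
    for k, t in enumerate(ts):
        de = omega_fun(t) - z - beta_g * e   # explicit Euler step
        dz = beta_z * e
        e, z = e + dt * de, z + dt * dz
        z_hist[k] = z
    return ts, omega_fun(ts), z_hist

omega = lambda t: np.sin(t)                  # a sinusoidal disturbance component
for sigma in (2.0, 10.0):
    ts, w, z = eso_estimation_error(sigma, omega)
    err = np.max(np.abs(z[ts > 10.0] - w[ts > 10.0]))
    print(f"sigma = {sigma:4.1f}: peak estimation error after transient = {err:.3f}")
```

For a unit sinusoid the residual error amplitude is $|1 - G_i(j)| = \sqrt{1+4\sigma_i^2}/(\sigma_i^2+1)$, roughly $0.2$ for $\sigma_i = 10$, which illustrates why Step 4 of the algorithm asks for sufficiently large bandwidth constants.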
\par Consider a second-order multi-agent system containing six agents in the $XYZ$ space ($n = 3$), where the interaction topology among agents is described by a 0-1 weighted digraph in Figure 1. The dynamics of each agent can be described by (\ref{1}) with ${\alpha _p} = - 0.01$ and ${\alpha _v} = 0$. Let ${x_i}(t) = {\left[ {{p_{iX}}(t),{p_{iY}}(t),{p_{iZ}}(t),{v_{iX}}(t),{v_{iY}}(t),{v_{iZ}}(t)} \right]^T}$ $(i \in \{ 1,2, \cdots ,6\} )$, where ${p_{iX}}(t)$, ${p_{iY}}(t)$, ${p_{iZ}}(t)$ and ${v_{iX}}(t)$, ${v_{iY}}(t)$, ${v_{iZ}}(t)$ are positions and velocities along the $X$ axis, $Y$ axis and $Z$ axis, respectively. The initial states of the agents are set as ${x_1}(0) = {\left[ {0.6,1.2,0.5, - 1.2, - 0.3,0.8} \right]^T}$, ${x_2}(0) = {\left[ { - 1.5, - 0.3,1.8, - 1.6,2.3,1.1} \right]^T}$, ${x_3}(0) = {\left[ {2.1,0.8, - 1.6,0.3, - 1.9,2.5} \right]^T}$, ${x_4}(0) = {\left[ {3.8,1.7, - 2.6,1.8, - 3.3,1.5} \right]^T}$, ${x_5}(0) = {\left[ {4.5,1.9, - 1.2, - 2.9,3.5, - 1.4} \right]^T}$ and ${x_6}(0) = {\left[ { - 4.2,2.9,3.8, - 5.1, - 3.5,2.7} \right]^T}$.\par \begin{figure}[!htb] \begin{center} \scalebox{1.1}[1.1]{\includegraphics{./fig1.eps}} \vspace{0em} \caption{Interaction topology $G$.} \end{center}\vspace{0em} \end{figure} The six agents are required to follow a time-varying formation in the form of \[{f_i}(t) = 3\left[ \begin{gathered} \sin (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right. \kern-\nulldelimiterspace} 3}) \hfill \\ \cos (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right. \kern-\nulldelimiterspace} 3}) \hfill \\ - \sin (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right. \kern-\nulldelimiterspace} 3}) \hfill \\ \cos (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right. \kern-\nulldelimiterspace} 3}) \hfill \\ - \sin (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right.
\kern-\nulldelimiterspace} 3}) \hfill \\ - \cos (t + {{(i - 1)\pi } \mathord{\left/ {\vphantom {{(i - 1)\pi } 3}} \right. \kern-\nulldelimiterspace} 3}) \hfill \\ \end{gathered} \right],{\text{ }}(i = 1,2, \cdots ,6).\] It can be found from ${f_i}(t)$ that the positions and velocities of the six agents form regular hexagons with time-varying edge lengths. One can see that formation feasibility condition (\ref{19}) is satisfied. The external disturbances are generated by \[{\omega _i}(t) = \left[ \begin{gathered} (2.5 + 0.2(i - 1))\sin t + 1.5 + 1.2(i - 1) \hfill \\ (1.5 + 0.2(i - 1))\sin t + 2.5 + 1.2(i - 1) \hfill \\ (2 + 0.2(i - 1))\sin (t + 0.4\pi ) + 3 + 0.2(i - 1) \hfill \\ \end{gathered} \right],{\text{ }}(i = 1,2, \cdots ,6).\] Choose the bandwidth constants of ESO (\ref{3}) as ${\beta _{ig}} = 2{\sigma _i}$ and ${\beta _{iz}} = \sigma _i^2$ with ${\sigma _i} = 10$ $(i \in \{ 1,2, \cdots ,6\} )$. From Theorem \ref{theorem2}, one can determine that ${K_u} = [1.0654,1.8576] \otimes {I_3}$.\par Figures 2 and 3 describe the position and velocity trajectories of the six agents and the formation center at $t = 0$s, $t = 10$s, $t = 15$s and $t = 20$s, respectively, where the position and velocity states of the agents are represented by asterisks, plus signs, circles, x marks, pentagrams and squares, and the formation centers are denoted by hexagrams. Figure 4 presents the curves of the formation center for positions and velocities within $t = 20$s, where the initial and final states are depicted by circles and squares, respectively. Figure 5 shows the trajectory of the time-varying formation error $\left\| {\varphi (t)} \right\|$ within $t = 20$s.\par From Figures 2(a)-(b) and 3(a)-(b), one can find that the multi-agent system achieves the regular hexagon formation in both position and velocity states. Figures 2(b)-(d) and 3(b)-(d) show that the formation keeps rotating in position and velocity states, respectively; that is, the formation is time-varying.
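The gain ${K_u}$ above can be reproduced with a standard Riccati solver: Step 2 of the design algorithm solves $P({\theta _1}\theta _2^T + {\theta _2}\alpha ) + {({\theta _1}\theta _2^T + {\theta _2}\alpha )^T}P = P{\theta _2}\theta _2^TP - I$, which is the standard continuous algebraic Riccati equation with $Q = R = I$. The sketch below is illustrative only: it assumes the per-axis matrices reduce to the double integrator with ${\alpha _p} = -0.01$, ${\alpha _v} = 0$, and uses a placeholder value for $\operatorname{Re}({\lambda _2})$, which in this example is fixed by the digraph of Figure 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed per-axis realization: theta_1*theta_2^T + theta_2*alpha -> A, theta_2 -> B,
# for a second-order agent with alpha_p = -0.01, alpha_v = 0.
A = np.array([[0.0, 1.0], [-0.01, 0.0]])
B = np.array([[0.0], [1.0]])

# Step 2: P*A + A^T*P - P*B*B^T*P + I = 0  (CARE with Q = I, R = 1)
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))

# Step 3: K_u = Re(lambda_2)^{-1} * theta_2^T * P  (Re(lambda_2) = 1 is a placeholder)
re_lambda2 = 1.0
K_u = (B.T @ P) / re_lambda2
print("K_u =", K_u.ravel())

# Consistent with the proof: A - lambda*B*K_u is Hurwitz for all Re(lambda) >= Re(lambda_2)
for lam in (1.0, 2.5, 5.0):
    assert np.all(np.linalg.eigvals(A - lam * B @ K_u).real < 0)
```

With the actual $\operatorname{Re}({\lambda _2})$ of the digraph, the same two calls reproduce the quoted per-axis gain $[1.0654, 1.8576]$.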
From the simulation results shown in Figures 2-5, one can conclude that second-order multi-agent system (\ref{1}) with external disturbance achieves the robust time-varying formation by protocol (\ref{2}).\par \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig2-1.eps}} \scalebox{0.4}[0.4]{\includegraphics{./fig2-2.eps}} \put (-340, 50){\rotatebox{90} {{\scriptsize ${p_{iZ}}(t)$}}} \put (-273, 0){ {{\scriptsize ${p_{iY}}(t)$}}} \put (-193, 10){ {{\scriptsize ${p_{iX}}(t)$}}} \put (-276, -12) {{ \scriptsize (a)~{\it t} = 0s}} \put (-173, 50){\rotatebox{90} {{\scriptsize ${p_{iZ}}(t)$}}} \put (-106, 0){ {{\scriptsize ${p_{iY}}(t)$}}} \put (-16, 10){ {{\scriptsize ${p_{iX}}(t)$}}} \put (-109, -12) {{ \scriptsize (b)~{\it t} = 10s}} \end{center}\vspace{-2em} \end{figure} \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig2-3.eps}} \scalebox{0.4}[0.4]{\includegraphics{./fig2-4.eps}} \put (-340, 50){\rotatebox{90} {{\scriptsize ${p_{iZ}}(t)$}}} \put (-273, 0){ {{\scriptsize ${p_{iY}}(t)$}}} \put (-193, 10){ {{\scriptsize ${p_{iX}}(t)$}}} \put (-276, -12) {{ \scriptsize (c)~{\it t} = 15s}} \put (-173, 50){\rotatebox{90} {{\scriptsize ${p_{iZ}}(t)$}}} \put (-106, 0){ {{\scriptsize ${p_{iY}}(t)$}}} \put (-16, 10){ {{\scriptsize ${p_{iX}}(t)$}}} \put (-109, -12) {{ \scriptsize (d)~{\it t} = 20s}} \vspace{0em} \caption{Position curves of six agents and the formation center at different time.} \end{center}\vspace{-2em} \end{figure} \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig3-1.eps}} \scalebox{0.4}[0.4]{\includegraphics{./fig3-2.eps}} \put (-340, 50){\rotatebox{90} {{\scriptsize ${v_{iZ}}(t)$}}} \put (-273, 0){ {{\scriptsize ${v_{iY}}(t)$}}} \put (-193, 10){ {{\scriptsize ${v_{iX}}(t)$}}} \put (-276, -12) {{ \scriptsize (a)~{\it t} = 0s}} \put (-173, 50){\rotatebox{90} {{\scriptsize ${v_{iZ}}(t)$}}} \put (-106, 0){ {{\scriptsize ${v_{iY}}(t)$}}} \put (-16, 10){ {{\scriptsize 
${v_{iX}}(t)$}}} \put (-109, -12) {{ \scriptsize (b)~{\it t} = 10s}} \end{center}\vspace{-2em} \end{figure} \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig3-3.eps}} \scalebox{0.4}[0.4]{\includegraphics{./fig3-4.eps}} \put (-340, 50){\rotatebox{90} {{\scriptsize ${v_{iZ}}(t)$}}} \put (-273, 0){ {{\scriptsize ${v_{iY}}(t)$}}} \put (-193, 10){ {{\scriptsize ${v_{iX}}(t)$}}} \put (-276, -12) {{ \scriptsize (c)~{\it t} = 15s}} \put (-173, 50){\rotatebox{90} {{\scriptsize ${v_{iZ}}(t)$}}} \put (-106, 0){ {{\scriptsize ${v_{iY}}(t)$}}} \put (-16, 10){ {{\scriptsize ${v_{iX}}(t)$}}} \put (-109, -12) {{ \scriptsize (d)~{\it t} = 20s}} \vspace{0em} \caption{Velocity curves of six agents and the formation center at different time.} \end{center}\vspace{-2em} \end{figure} \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig4-1.eps}} \scalebox{0.4}[0.4]{\includegraphics{./fig4-2.eps}} \put (-342, 50){\rotatebox{90} {{\scriptsize ${c_{pZ}}(t)$}}} \put (-283, -5){ {{\scriptsize ${c_{pY}}(t)$}}} \put (-193, 10){ {{\scriptsize ${c_{pX}}(t)$}}} \put (-316, -20) {{ \scriptsize (a)~Formation center for positions}} \put (-168, 50){\rotatebox{90} {{\scriptsize ${c_{vZ}}(t)$}}} \put (-116, -5){ {{\scriptsize ${c_{vY}}(t)$}}} \put (-16, 10){ {{\scriptsize ${c_{vX}}(t)$}}} \put (-149, -20) {{ \scriptsize (b)~Formation center for velocities}} \vspace{0em} \caption{Curves of the formation center for positions and velocities.} \end{center}\vspace{-2em} \end{figure} \begin{figure}[!htb] \begin{center} \scalebox{0.4}[0.4]{\includegraphics{./fig5.eps}} \put (-165, 52){\rotatebox{90} {{\scriptsize $\left\| {\varphi (t)} \right\|$}}} \put (-90, -5){ {{\scriptsize {\it t}~/~\it s}}} \caption{Trajectory of the time-varying formation error.} \end{center}\vspace{0em} \end{figure} \section{Conclusions}\label{section6} ~~In the current paper, robust time-varying formation design problems for second-order multi-agent systems with external 
disturbances and directed topologies were studied. A new robust time-varying formation control protocol using only relative neighboring information was proposed, and an ESO was designed to estimate and compensate the external disturbances. An explicit expression of the formation center function was derived, in which the impacts of the disturbance compensation and the time-varying formation on the motion mode of the whole formation were determined. Sufficient conditions for the robust time-varying formation design were presented via the algebraic Riccati equation technique together with the formation feasibility conditions. The tracking performance and robust stability of the multi-agent systems were analyzed. It was proven that multi-agent systems can reach the expected robust time-varying formation if the gain matrix is designed and the bandwidth constants of the ESO are selected properly.
\section{Introduction} \label{int} A nagging question in contemporary physics concerns the nature of dark matter (DM) and its feasible non-gravitational interaction with the standard model (SM) particles. This problem in fact straddles particle physics and cosmology. On the cosmology side, precise measurements of the Cosmic Microwave Background (CMB) anisotropy not only demonstrate the existence of dark matter but also provide us with the current dark matter abundance in the universe \cite{Ade:2013zuv,Hinshaw:2012aka}. On the particle physics side, the dedicated search is for direct detection (DD) of the DM interaction with ordinary matter via Spin Independent (SI) or Spin Dependent (SD) scattering of DM off nucleons in underground experiments like LUX \cite{Akerib:2016vxi}, XENON1T \cite{Aprile:2017iyp} and PandaX-II \cite{Cui:2017nnn}. Although no signal has shown up in these experiments so far, upper limits on the DM-matter interaction strength are provided for a wide range of the DM mass. Among various candidates for particle DM, the most sought-after one is the Weakly Interacting Massive Particle (WIMP). Within the WIMP paradigm there exists a class of models where the SI scattering cross section is suppressed significantly at leading order in perturbation theory, hence the model eludes the experimental upper limits in a large region of the parameter space. The WIMP-nucleon interaction in these models is of pseudoscalar or axial vector type at tree level, resulting in a momentum- or velocity-suppressed cross section \cite{Fitzpatrick:2012ix}. The focus here is on models with a pseudoscalar interaction between the DM particles and the SM quarks. In this case there is both SI and SD elastic scattering of the DM off the nucleon at tree level. Both types of interaction are momentum dependent, while the SD cross section gets suppressed much more strongly than the SI cross section due to an extra momentum transfer factor, $q^2$.
Thus, in these models it is essential to take into account beyond-tree-level contributions, which could be leading loop effects or full one-loop effects. We recall several earlier works done in this direction with emphasis on DM models with a pseudoscalar interaction. The leading loop effect on the DD cross section is studied in an extended two Higgs doublet model in \cite{Arcadi:2017wqi,Sanderson:2018lmj,Abe:2018emu}. Within various DM simplified models in \cite{Li:2018qip,Herrero-Garcia:2018koq,Hisano:2018bpz} and in a singlet-doublet dark matter model in \cite{Han:2018gej} the loop-induced DD cross sections are investigated. Full one-loop contributions to the DM-nucleon scattering cross section in a Higgs-portal complex scalar DM model can be found in \cite{Azevedo:2018exj}. In \cite{Ishiwata:2018sdi} direct detection of pseudoscalar dark matter is studied by taking into account higher order corrections in both the QCD and non-QCD parts. In this work we consider a model with a fermionic DM candidate, $\psi$, which interacts with a pseudoscalar mediator $P$ as $P \bar \psi \gamma^5 \psi$. The pseudoscalar mediator is connected to the SM particles via mixing with the SM Higgs through an interaction term $P H^{\dagger}H$. In this model the DM-nucleon interaction at tree level is of pseudoscalar type and thus its scattering cross section is highly suppressed over the entire parameter space. The leading loop contribution to the DD scattering cross section, being spin independent, is computed and viable regions are found against the direct detection bounds. Besides constraints from the observed relic density, the invisible Higgs decay limit is imposed when it is relevant. The outline of this article is as follows. In Sec.~\ref{model} we recapitulate the pseudoscalar DM model. We then present our main results concerning the direct detection of the DM, including an analytical formula for the DD cross section and a numerical analysis, in Sec.~\ref{DD}. Finally we finish with a conclusion.
\section{The Pseudoscalar Model} \label{model} The model we consider in this research, a renormalizable extension of the SM, consists of a new gauge singlet Dirac fermion as the DM candidate and a new singlet scalar acting as a mediator, which connects the fermionic DM to the SM particles via the Higgs portal. The new physics Lagrangian comes in two parts, \begin{equation} {\cal L} = {\cal L}_{\text{DM}} +{\cal L}_{\text{scalar}} \,. \end{equation} The first part, ${\cal L}_{\text{DM}}$, introduces a pseudoscalar interaction term as \begin{equation} \label{DM-lag} {\cal L}_{\text{DM}} = \bar \psi (i {\not}\partial-m_{\text{dm}})\psi -i g_{d}~ P \bar{\psi} \gamma^5 \psi \,, \end{equation} and the second part, ${\cal L}_{\text{scalar}}$, incorporates the singlet pseudoscalar and the SM Higgs doublet as \begin{equation} \begin{aligned} {\cal L}_{\text{scalar}} {}& = \frac{1}{2} (\partial_{\mu} P)^2 - \frac{m^{2}}{2} P^2 - \frac{g_3}{6} P^3 -\frac{g_4}{24} P^4 + \mu^{2}_{H} H^{\dagger}H \\ & - \lambda (H^{\dagger}H)^2 + g_0 P - g_1 P H^{\dagger}H - g_2 P^2 H^{\dagger}H \,. \end{aligned} \end{equation} The pseudoscalar field is assumed to acquire a zero vacuum expectation value ({\it vev}), $\braket{P} = 0$, while the SM Higgs develops a non-zero {\it vev}, $\braket{H} = v_h = 246$ GeV. Having chosen $\braket{P} = 0$, the tadpole coupling $g_0$ is fixed appropriately. After expanding the Higgs doublet in unitary gauge as $H = ( 0 ~~ v_h+h')^T$, we write the scalar fields in the basis of mass eigenstates $h$ and $s$ as \begin{equation} h' = h \cos\theta - s \sin\theta \,, ~~~~~ P = h \sin\theta + s \cos\theta \,. \end{equation} The mixing angle, $\theta$, is induced by the interaction term $P H^{\dagger}H$ and is obtained from the relation $\sin2\theta = 2g_1 v_h/(m_h^2-m_s^2)$, in which $m_h = 125$ GeV and $m_s$ are the physical masses of the Higgs and the singlet scalar, respectively.
The quartic Higgs coupling is now modified and is given in terms of the mixing angle and the physical masses of the scalars as $\lambda = (m_h^2 \cos^2\theta + m_s^2 \sin^2\theta)/(2v_h^2)$. We pick as independent free parameters $\theta$, $g_d$, $g_2$, $g_3$, $g_4$ and $m_s$. The coupling $g_1$ is then fixed by the relation $g_1 = \sin2\theta (m_h^2-m_s^2)/(2v_h)$. Recent studies on the DM and LHC phenomenology of this model can be found in \cite{Ghorbani:2014qpa,Baek:2017vzd}, and its electroweak baryogenesis is examined in \cite{Ghorbani:2017jls}. For DM masses in the range $m_{\text{dm}} < m_h/2$, one can impose constraints on the parameters $g_d$, $\theta$ and $m_{\text{dm}}$ from invisible Higgs decay measurements with Br($h\to$ invisible) $\lesssim 0.24$ \cite{Khachatryan:2016whc}. Given the invisible Higgs decay process, $h \to \bar \psi \psi$, we find for small mixing angle the condition $g_d \sin \theta \lesssim 0.16~\text{GeV}^{1/2}/(m_h^2-4m_{\text{dm}}^2)^{1/4}$ \cite{Kgh-MonoHiggs2017}. We compute the DM relic density numerically over the model parameter space by applying the program {\tt micrOMEGAs} \cite{Barducci:2016pcb}. The observed value of the DM relic density used in our numerical computations is $\Omega h^2 = 0.1198\pm 0.0015$ \cite{Ade:2015xua}. DM production in this model proceeds via the popular freeze-out mechanism \cite{Lee:1977ua}, in which it is assumed that DM particles have been in thermal equilibrium with the SM particles in the early universe. We find the viable region in the parameter space respecting the constraints from the observed relic density and invisible Higgs decay in Fig.~\ref{relic-space}. The parameters chosen in this computation are $\sin \theta = 0.02$, $g_{3} = 200$ GeV and $g_2 = 0.1$. It is evident in the plot that regions with $m_{\text{dm}} < m_h/2$ are excluded by the invisible Higgs decay constraints.
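For concreteness, the dependent quantities above can be evaluated numerically. The short sketch below is ours (the function names and the benchmark values of $m_s$ and $m_{\text{dm}}$ are illustrative, not taken from the text); it implements the quoted relations for $g_1$, $\lambda$ and the approximate invisible-decay bound on $g_d$.

```python
import math

v_h, m_h = 246.0, 125.0   # Higgs vev and mass in GeV

def dependent_couplings(sin_theta, m_s):
    """g_1 and lambda from the free parameters (theta, m_s), per the relations above."""
    cos_theta = math.sqrt(1.0 - sin_theta**2)
    # sin(2*theta) * (m_h^2 - m_s^2) / (2 v_h)
    g1 = 2.0 * sin_theta * cos_theta * (m_h**2 - m_s**2) / (2.0 * v_h)
    lam = (m_h**2 * cos_theta**2 + m_s**2 * sin_theta**2) / (2.0 * v_h**2)
    return g1, lam

def gd_bound_invisible(sin_theta, m_dm):
    """Approximate bound g_d <= 0.16 GeV^(1/2) / [sin(theta) (m_h^2 - 4 m_dm^2)^(1/4)],
    valid for m_dm < m_h/2 and small mixing."""
    assert m_dm < m_h / 2.0
    return 0.16 / (sin_theta * (m_h**2 - 4.0 * m_dm**2) ** 0.25)

g1, lam = dependent_couplings(0.02, 300.0)   # illustrative point: sin(theta)=0.02, m_s=300 GeV
print(f"g1 = {g1:.2f} GeV, lambda = {lam:.3f}")
print(f"g_d bound at m_dm = 50 GeV: {gd_bound_invisible(0.02, 50.0):.2f}")
```

Note that for $m_s > m_h$ the coupling $g_1$ comes out negative, as the relation $\sin2\theta = 2g_1 v_h/(m_h^2-m_s^2)$ requires for a positive mixing angle.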
The analytical formulas for the DM annihilation cross sections are given in appendix \ref{AnnCS}. \begin{figure} \hspace{1.9cm} \includegraphics[width=.5\textwidth,angle =-90]{relic-space.eps} \caption{The viable region shown in the $m_s-m_{\text{dm}}$ plane respects the restrictions from the observed relic density and the measurements of the invisible Higgs decay.} \label{relic-space} \end{figure} \section{Direct Detection} \label{DD} In the model we study here, the DM interaction with the SM particles is of pseudoscalar type, and at tree level its Spin Independent cross section is given by \begin{equation} \sigma^p_{\text{SI}} = \frac{2}{\pi} \frac{\mu^4 A^2}{m_{\text{dm}}^2} v_{\text{dm}}^2 \,, \end{equation} where $\mu$ is the reduced mass of the DM and the proton, $v_{\text{dm}} \sim 10^{-3}$ is the DM velocity, and $A$ is given by \begin{equation} A = \frac{g_d \sin 2\theta}{2v_h}(\frac{1}{m_h^2}-\frac{1}{m_s^2}) \times 0.28~m_p \,, \end{equation} where the number $0.28$ incorporates the hadronic form factor and $m_p$ denotes the proton mass. Therefore, the DM-nucleon scattering cross section is velocity suppressed at tree level. In other words, the entire parameter space of this model resides well below the reach of the direct detection experiments. The current underground DD experiments like LUX \cite{Akerib:2016vxi} and XENON1T \cite{Aprile:2017iyp} provide the strongest exclusion limits for DM masses in the range $\sim 10$ GeV up to $\sim 10$ TeV. Future DD experiments can only probe the direct interaction of the DM with nucleons down to cross sections comparable with that of the neutrino background (NB), $\sigma_{\text{NB}} \sim {\cal O}(10^{-13})$ pb \cite{Billard:2013qya}. In the present model, as we will see in our numerical results, the tree level DM-nucleon DD cross section is orders of magnitude smaller than the NB cross section.
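To see how small the tree-level rate actually is, the two formulas above can be evaluated directly. The sketch below is ours; the benchmark point ($g_d = 1$, $\sin\theta = 0.02$, $m_{\text{dm}} = 100$ GeV, $m_s = 300$ GeV) is illustrative and not taken from the text, and the conversion $1~\text{GeV}^{-2} \simeq 3.894\times 10^{8}$ pb is used.

```python
import math

GEV2_TO_PB = 3.894e8                  # 1 GeV^-2 = 0.3894 mb = 3.894e8 pb
v_h, m_h, m_p = 246.0, 125.0, 0.938   # GeV

def sigma_si_tree_pb(g_d, sin_theta, m_dm, m_s, v_dm=1e-3):
    """Velocity-suppressed tree-level SI cross section, per the two formulas above."""
    sin2t = 2.0 * sin_theta * math.sqrt(1.0 - sin_theta**2)
    A = (g_d * sin2t / (2.0 * v_h)) * (1.0 / m_h**2 - 1.0 / m_s**2) * 0.28 * m_p
    mu = m_dm * m_p / (m_dm + m_p)    # DM-proton reduced mass
    sigma = (2.0 / math.pi) * mu**4 * A**2 / m_dm**2 * v_dm**2   # in GeV^-2
    return sigma * GEV2_TO_PB

sigma = sigma_si_tree_pb(1.0, 0.02, 100.0, 300.0)
print(f"tree-level sigma_SI ~ {sigma:.1e} pb (neutrino floor ~ 1e-13 pb)")
```

At this benchmark the result is many orders of magnitude below the neutrino floor, consistent with the statement above.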
For such a model, with the DM-nucleon cross section being velocity suppressed at tree level, it is mandatory to go beyond tree level to find the SI cross section. The leading diagrams (triangle diagrams) contributing to the SI cross section are drawn in Fig.~\ref{triangle}. There are also box diagrams contributing to the DM-nucleon scattering process. The box diagrams bring in a factor of $m_q^3$ ($q$ stands for light quarks) as shown in \cite{Ipek:2014gua}, while the triangle diagrams are proportional to $m_q$. Thus, we consider the box diagrams to have sub-leading effects. We then move on to compute the leading loop effects on the SI scattering cross section. \begin{figure} \begin{center} \includegraphics[width=.7\textwidth,angle =0]{pseudo-loop.eps} \end{center} \caption{The leading loop diagrams for DM Spin Independent elastic scattering off the SM quarks.} \label{triangle} \end{figure} In the following we write out the full expression for the DM-quark scattering amplitude when the scalars in the triangle loop have masses $m_i$ and $m_j$ and the one coupled to the quarks has mass $m_k$, \begin{equation} i{\cal M}^{ijk} = \Big[\frac{C_k}{(p_1-p_2)^2-m_{k}^2} \Big] \bar q q~ \times \int \frac{d^4q}{(2\pi)^4} \frac{g_d^2 ~~\bar \psi(p_{2}) \gamma^5 (\slashed{q}+m_{\text{dm}}) \gamma^5 \psi(p_1) } {[(p_2-q)^2-m_{i}^2][(p_1-q)^2-m_{j}^2][q^2-m_{\text{dm}}^2]} \,. \end{equation} In the above, the indices $i,j$ and $k$ stand for the Higgs ($h$) or the singlet scalar ($s$), and we have $C_h= -m_q/v_h \cos \theta$ and $C_s= m_q/v_h \sin \theta$.
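The scalar (spin independent) structure of this amplitude can be made explicit; since $\{\gamma^5,\gamma^\mu\}=0$ and $(\gamma^5)^2 = 1$, the numerator simplifies as
\begin{equation}
\gamma^5 (\slashed{q}+m_{\text{dm}}) \gamma^5 = -\slashed{q}+m_{\text{dm}} \,.
\end{equation}
After Feynman parametrization, the part of $\slashed{q}$ odd in the shifted loop momentum integrates to zero, while the remaining piece, proportional to $\slashed{p}_1$ and $\slashed{p}_2$, reduces to $m_{\text{dm}} \bar \psi \psi$ by the on-shell conditions $\slashed{p}_1 \psi(p_1) = m_{\text{dm}} \psi(p_1)$ and $\bar \psi(p_2) \slashed{p}_2 = m_{\text{dm}} \bar \psi(p_2)$; this is the origin of the overall $m_{\text{dm}}$ factor in the effective amplitude below.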
The corresponding effective scattering amplitude, in the limit that the momentum transferred to the nucleon is $q^2 \sim 0$, reads \begin{equation} i{\cal M}^{ijk}_{\text{eff}} = -i \frac{m_{\text{dm}} g_d^2}{16\pi^2} C_{ijk} F(\beta_i, \beta_j) \frac{C_k}{m_k^2} ~(\bar q q)(\bar \psi \psi) \,, \end{equation} in which $\beta_i = m_i^2/m_{\text{dm}}^2$ and $\beta_j = m_j^2/m_{\text{dm}}^2$, and the loop function $F(\beta_i, \beta_j)$ is given in appendix \ref{DDcs}. When the two scalar masses in the triangle loop are identical, i.e. $m_i = m_j$, we set $\beta_i = \beta_j$ and denote $F(\beta_i, \beta_j)$ by $F(\beta_i)$, which is also provided in appendix \ref{DDcs}. The validity of these loop functions is verified by performing numerical integration of the Feynman integrals and making comparisons for a few distinct input parameters. $C_{ijk}$ is the trilinear scalar coupling; there are four of them, corresponding to the vertices $hhh$, $hhs$, $ssh$ and $sss$ appearing in Fig.~\ref{triangle}.
Putting together all six triangle diagrams, we end up with the following expression for the total effective SI scattering amplitude, \begin{equation} \begin{aligned} {\cal M}_{\text{eff}} {} & = \frac{m_q}{v_h} \frac{m_{\text{dm}} g_d^2}{16\pi^2} \Big[ \frac{\cos \theta}{m_h^2} C_{hhh} F(\beta_h) +\frac{\cos \theta}{m_h^2} C_{hsh} F(\beta_h, \beta_s) +\frac{\cos \theta}{m_h^2} C_{ssh} F(\beta_s) \\ & -\frac{\sin \theta}{m_s^2} C_{hhs} F(\beta_h) -\frac{\sin \theta}{m_s^2} C_{hss} F(\beta_h, \beta_s) -\frac{\sin \theta}{m_s^2} C_{sss} F(\beta_s) \Big] ~(\bar q q)(\bar \psi \psi) \\ & \equiv m_q ~\alpha ~(\bar q q)(\bar \psi \psi) \,. \end{aligned} \end{equation} The Spin Independent DM-proton scattering cross section is \begin{equation} \sigma^p_{\text{SI}} = \frac{4\alpha_p^2\mu^2}{\pi} \,, \end{equation} in which $\mu$ is the reduced mass of the DM and the proton, and \begin{equation} \alpha_p = m_{p} \alpha \Big( \sum_{q = u,d,s} F^{p}_{T_q} + \frac{2}{9} F^{p}_{T_g} \Big) \sim 0.28~ m_{p} \alpha \,, \end{equation} where $m_p$ is the proton mass and the quantities $F^{p}_{T_q}$ and $F^{p}_{T_g}$ define the scalar couplings for the strong interaction at low energy. The trilinear couplings in terms of the mixing angle and the relevant Lagrangian couplings, together with the DD cross sections at tree and loop level, are given in appendix \ref{DDcs}. The scalar form factors used in our numerical computations are $F^{p}_{u} = 0.0153$, $F^{p}_{d} = 0.0191$ and $F^{p}_{s} = 0.0447$ \cite{Belanger:2013oya}. To obtain the scalar form factors, the central values of the following sigma-terms are used: $\sigma_{\pi N} = 34\pm 2$ MeV and $\sigma_{s} = 42\pm 5$ MeV. We computed the correction to the DD cross section at loop level including the uncertainty on the two sigma-terms and found that the corresponding uncertainty on the DD cross section is not big enough to be seen in the plots.
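As a cross-check of the $\sim 0.28$ factor, the combination in parentheses can be evaluated from the quoted form factors. The sketch below assumes the standard relation $F^{p}_{T_g} = 1 - \sum_q F^{p}_{T_q}$ for the gluon piece, which is not stated explicitly in the text.

```python
# Proton scalar form factors quoted above
F_Tq = {"u": 0.0153, "d": 0.0191, "s": 0.0447}
F_Tg = 1.0 - sum(F_Tq.values())   # assumed trace-anomaly sum rule for the gluon piece

f_N = sum(F_Tq.values()) + (2.0 / 9.0) * F_Tg
print(f"sum_q F_Tq + (2/9) F_Tg = {f_N:.4f}")   # close to the 0.28 quoted for alpha_p
```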
However, we estimated the uncertainty for a given benchmark point with $m_{\text{dm}} \sim 732$ GeV, $g_{d} \sim 2.17$, $g_3 = 10$ GeV and $\sin \theta = 0.02$. The result is $\sigma^{p}_{loop} = (3.084 \pm 0.12)\times 10^{-10}$ pb. \begin{figure} \centering \begin{subfigure}[b]{0.55\textwidth} \hspace{-1.8cm} \includegraphics[width=\textwidth,angle =-90]{landa3-10-gd.eps} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \hspace{-1.8cm} \includegraphics[width=\textwidth,angle =-90]{landa3-200-gd.eps} \end{subfigure} \caption{Shown is the DM-proton scattering cross section against the DM mass. In the upper panel $g_3 = 10$ GeV and in the lower panel $g_3 = 200$ GeV. The mixing angle is such that $\sin \theta = 0.02$. The vertical color spectrum indicates the range of the dark coupling $g_d$. Here, the observed relic density constraint is applied. The upper limits from LUX and XENON1T, as well as the XENONnT projection, are shown.} \label{direc-gs} \end{figure} In the first part of our scan over the parameter space we compare the DM-proton SI cross section at tree level with the SI cross section stemming from the leading loop effects. To this end, we let the DM mass take values in the range $10~ \text{GeV} < m_{\text{dm}} < 2~ \text{TeV}$, and the scalar mass in the range $20~ \text{GeV} < m_s < 500~\text{GeV}$. The dark coupling varies such that $0 < g_d < 3$. The mixing angle in these computations is chosen to be small, $\sin \theta = 0.02$. Reasonable values are chosen for the couplings, $g_2 = 0.1$ and $g_4 = 0.1$. Taking into account the constraints from Planck/WMAP on the DM relic density, we show the viable parameter space in terms of the DM mass and $g_d$ in Fig.~\ref{direc-gs} for two distinct values of the coupling $g_3$, fixed at $10$ GeV and $200$ GeV. Regions excluded by the invisible Higgs decay measurements are also shown in Fig.~\ref{direc-gs}.
As expected, the tree level SI cross section is about 10 orders of magnitude below the neutrino background. On the other hand, for both values of $g_3$, the leading loop effects are sizable in a large portion of the parameter space. A general feature apparent in the plots is that for $g_d \gtrsim 2.5$, DM masses smaller than $600$ GeV get excluded by the direct detection bounds. In addition, with the same values of the input parameters, we show the viable regions in terms of the DM mass and the singlet scalar mass in Fig.~\ref{direc-ms}. It is found that in both cases of the coupling $g_3$, a wide range of the scalar mass, i.e., $10~\text{GeV} < m_s < 500~\text{GeV}$, leads to SI cross sections above the neutrino floor. It is also evident from the results in Fig.~\ref{direc-ms} that the viable region with $m_s \sim 10$ GeV located at $m_{\text{dm}} \lesssim 100$ GeV in the case that $g_3 = 10$ GeV is shifted to regions with $m_{\text{dm}} \gtrsim 250$ GeV in the case that $g_3 = 200$ GeV. \begin{figure} \centering \begin{subfigure}[b]{0.55\textwidth} \hspace{-1.8cm} \includegraphics[width=\textwidth,angle =-90]{landa3-10-ms.eps} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \hspace{-1.8cm} \includegraphics[width=\textwidth,angle =-90]{landa3-200-ms.eps} \end{subfigure} \caption{Shown is the DM-proton scattering cross section against the DM mass. In the upper panel $g_3 = 10$ GeV and in the lower panel $g_3 = 200$ GeV. The mixing angle is such that $\sin \theta = 0.02$. The vertical color spectrum indicates the range of the singlet scalar mass, $m_s$.
Here, the observed relic density constraint is applied. The upper limits from LUX and XENON1T, as well as the XENONnT projection, are also shown.} \label{direc-ms} \end{figure} In the last part of our computations we perform an exploratory scan in order to find the regions of interest, namely the points with SI cross sections above the neutrino floor and below the DD upper limits, with the other constraints imposed, including the observed DM relic density and the invisible Higgs decay. The scan is done with these input parameters: $ 10~\text{GeV} < m_{\text{dm}} < 2~\text{TeV}$, $ 20~\text{GeV} < m_s < 1~\text{TeV}$, $0< g_d < 3$, $g_1=g_4 = 0.1$ and $g_3$ fixed at $200$ GeV. Our results are shown in Fig.~\ref{viable-region}. The mixing angle is set to $\sin\theta = 0.02$ in the left panel and $\sin\theta = 0.07$ in the right panel. It can be seen that for the larger mixing angle the viable region is slightly broadened towards heavy pseudoscalar masses for DM masses $ 60~\text{GeV} < m_{\text{dm}} < 300$ GeV, and is also pushed towards regions with $m_{\text{dm}} \gtrsim 60$ GeV by the invisible Higgs decay constraint. We also find that if we confine ourselves to dark couplings $g_d \lesssim 1$, there are still regions with $m_{\text{dm}} \lesssim 400$ GeV which are within reach of future direct detection experiments. Concerning indirect detection of DM, the Fermi Large Area Telescope (Fermi-LAT) collected gamma-ray data from the Milky Way dwarf spheroidal galaxies for six years \cite{Ackermann:2015zua}. The data indicate no significant gamma-ray excess, but they provide exclusion limits on DM annihilation into $b\bar b$, $\tau \bar \tau$, $u\bar u$ and $W^+ W^-$ final states. As pointed out in \cite{Baek:2017vzd}, the Fermi-LAT data can exclude regions of the parameter space with $m_{\text{dm}} < 80$ GeV and also the resonant region with $m_{\text{dm}} \sim m_s/2$.
A few comments are in order on the LHC constraints besides the invisible Higgs decay measurements. Concerning the mono-jet search in this scenario, it is pointed out in \cite{Baek:2017vzd} that even in the region with $m_s > 2 m_{\text{dm}}$, which has the largest production rate, the signal rate is more than one order of magnitude beneath the current LHC reach, given the small mixing angle chosen. In the same study it is found that the bounds corresponding to di-Higgs production at the LHC via the process $pp \to s \to h h$, with different final states ($4b,2b2\gamma,2b2\tau$), are not strong enough to exclude the pseudoscalar mass in the relevant range for the small mixing angle chosen in this study. \begin{figure} \hspace{-1.5cm} \begin{minipage}{0.41\textwidth} \includegraphics[width=\textwidth,angle =-90]{exclusion-sin02.eps} \end{minipage} \hspace{2.7cm} \begin{minipage}{0.41\textwidth} \includegraphics[width=\textwidth,angle =-90]{exclusion-sin07.eps} \end{minipage} \caption{Viable regions in the parameter space residing above the neutrino floor and below the current direct detection exclusion bounds are shown. Constraints from the observed relic density and the invisible Higgs decay are applied as well. In the left plot, the mixing angle is such that $\sin \theta = 0.02$ and in the right plot $\sin \theta = 0.07$. In both plots, $g_3 = 200$ GeV. The vertical color spectrum indicates the range of the dark coupling, $g_d$. } \label{viable-region} \end{figure} \section{Conclusions} \label{conclusion} We revisited a DM model whose fermionic DM candidate has a pseudoscalar interaction with the SM quarks at tree level, leading to a suppressed SI direct detection elastic cross section. In the present model we computed analytically the leading loop diagrams contributing to the SI elastic scattering cross section.
Our numerical analysis, taking into account the limits from the observed relic density, suggests that regions with dark coupling $g_d \gtrsim 2.5$ and reasonable values for the other parameters get excluded by the DD upper bounds. It is also found that regions with $g_d \lesssim 0.25$ fall below the neutrino floor and are therefore out of reach of direct detection. However, a large portion of the parameter space stands above the neutrino floor, remaining accessible to future DD experiments such as XENONnT. We also found regions of the parameter space that lie above the neutrino floor while evading the current LUX/XENON1T DD upper limits, respecting the observed DM relic density and the invisible Higgs decay experimental bound. The viable region is slightly broadened for moderate DM masses when $\sin\theta =0.07$ in comparison with the case when $\sin\theta = 0.02$, both at $g_3 = 200$ GeV.
\section{Introduction} Many datasets in fields like ecology, epidemiology, remote sensing, sensor networks and demography appear naturally aggregated, that is, variables in these datasets are measured or collected in intervals, areas or supports of different shapes and sizes. For example, census data are usually collected in aggregated form at different administrative divisions, e.g. at the borough, town, postcode or city level. In sensor networks, correlated variables are measured at different resolutions or scales. In the near future, air pollution monitoring across cities and regions could be done using a combination of a few high-quality, low time-resolution sensors and several low-quality (low-cost), high time-resolution sensors. Joint modelling of the variables registered in the census data, or of the variables measured using different sensor configurations at different scales, can improve predictions at the point or support levels. In this paper, we are interested in providing a general framework for multi-task learning on these types of datasets. Our motivation is to use multi-task learning to jointly learn models for different tasks where each task is defined at (potentially) a different support of any shape and size and has a (potentially) different nature, i.e. it is a continuous, binary, categorical or count variable. We appeal to the flexibility of Gaussian processes (GPs) for developing a prior over such types of datasets and we also provide a scalable approach for variational Bayesian inference. Gaussian processes have been used before for aggregated data \citep{Smith:binned:2018, Law:variational:aggregate:neurips:2018, tanaka2019refining} and also in the context of the related field of \emph{multiple instance learning} \citep{Kim:MIL:GP:ICML:2010, Kandemir:MIL:GP:BMVC:2016,Haubmann:MIL:GP:CVPR:2017}. In multiple instance learning, each instance in the dataset consists of a set (or \emph{bag}) of inputs with only one output (or label) for that whole set.
The aim is to provide predictions at the level of individual inputs. \citet{Smith:binned:2018} provide a new kernel function to handle single regression tasks defined at different supports. They use cross-validation for hyperparameter selection. \citet{Law:variational:aggregate:neurips:2018} use the weighted sum of a latent function evaluated at different inputs as the prior for the rate of a Poisson likelihood. The latent function follows a GP prior. The authors use stochastic variational inference (SVI) for approximating the posterior distributions. \citet{tanaka2019refining} mainly use GPs for creating data from different auxiliary sources. Furthermore, they only consider Gaussian regression and they do not include inducing variables. While \citet{Smith:binned:2018} and \citet{Law:variational:aggregate:neurips:2018} perform the aggregation at the latent prior stage, \citet{Kim:MIL:GP:ICML:2010, Kandemir:MIL:GP:BMVC:2016} and \citet{Haubmann:MIL:GP:CVPR:2017} perform the aggregation at the likelihood level. These three approaches target a binary classification problem. Both \citet{Kim:MIL:GP:ICML:2010} and \citet{Haubmann:MIL:GP:CVPR:2017} focus on the case for which the label of the bag corresponds to the maximum of the (unobserved) individual labels of each input. \citet{Kim:MIL:GP:ICML:2010} approximate the maximum using a softmax function computed using a latent GP prior evaluated across the individual elements of the bag. They use the Laplace approximation for computing the approximate posterior \citep{Rasmussen2006}. \citet{Haubmann:MIL:GP:CVPR:2017}, on the other hand, approximate the maximum using the so-called \emph{bag label likelihood}, introduced by the authors, which is similar to a Bernoulli likelihood with soft labels given by a convex combination between the bag labels and the maximum of the (latent) individual labels. The latent individual labels in turn follow Bernoulli likelihoods with parameters given by a GP.
The authors provide a variational bound and include inducing inputs for scalable Bayesian inference. \citet{Kandemir:MIL:GP:BMVC:2016} follow a similar approach to \citet{Law:variational:aggregate:neurips:2018}, equivalent to setting all the weights in \citeauthor{Law:variational:aggregate:neurips:2018}'s model to one. The sum is then used to modulate the parameter of a Bernoulli likelihood that models the bag labels. They use a Fully Independent Training Conditional approximation for the latent GP prior \citep{Snelson:FITC:2006}. In contrast to these previous works, we provide a multi-task learning model for aggregated data that scales to large datasets and allows for heterogeneous outputs. At the time of submission of this paper, the idea of using multi-task learning for aggregated datasets was simultaneously proposed by \citet{hamelijnck2019multi} and \citet{tanaka2019spatially}, two additional models to the one we propose in this paper. In our work, we allow heterogeneous likelihoods, in contrast to both \citet{hamelijnck2019multi} and \citet{tanaka2019spatially}. We also allow an exact solution to the integration of the latent function through the kernel in \citet{Smith:binned:2018}, which differs from \citet{hamelijnck2019multi}. We also use inducing inputs to reduce the computational complexity, another difference from the work in \citet{tanaka2019spatially}. Other relevant work is described in Section \ref{sec:related:work}. For building the multi-task learning model we appeal to the linear model of coregionalisation \citep{Journel:miningBook78, Goovaerts:book97} that has gained popularity in the multi-task GP literature in recent years \citep{Bonilla:Multitask:2008, alvarez2012kernels}. We also allow different likelihood functions \citep{MorenoMunoz:HetMOGP:2018} and different input supports per individual task.
Moreover, we introduce inducing inputs at the level of the underlying common set of latent functions, an idea initially proposed in \citet{Alvarez:NeurIPS:2008}. We then use stochastic variational inference for GPs \citep{hensman2013gaussian}, leading to an approximation similar to the one obtained in \citet{MorenoMunoz:HetMOGP:2018}. Empirical results show that the multi-task learning approach developed here provides accurate predictions in different challenging datasets where tasks have different supports. \section{Multi-task learning for aggregated data at different scales} In this section we first define the basic model in the single-task setting. We then extend the model to the multi-task setting and finally provide details for the stochastic variational formulation for approximate Bayesian inference. \subsection{Change of support using Gaussian processes}\label{sec:change:support} Change of support has been studied in geostatistics before \citep{Gotway:Incompatible:2002}. In this paper, we use a formulation similar to \citet{Kyriakidis:AreaToPoint:2004}. We start by defining a stochastic process over the input interval $(x_a, x_b)$ using \begin{align*} f(x_a, x_b) = \frac{1}{\Delta_x}\int_{x_a}^{x_b} u(z)dz, \end{align*} where $u(z)$ is a latent stochastic process that we assume follows a Gaussian process with zero mean and covariance $k(z,z')$, and $\Delta_x=|x_b-x_a|$. Dividing by $\Delta_x$ helps to keep the proportionality between the length of the interval and the area under $u(z)$ in the interval. In other words, the process $f(\cdot)$ is modeled as a density, meaning that inputs with widely differing supports will behave in a similar way. The first two moments for $f(x_a, x_b)$ are given as $\mathbb{E}[f(x_a, x_b)]= 0$ and $\mathbb{E}[f(x_a, x_b) f(x'_a, x'_b)] = \frac{1}{\Delta_x \Delta_{x'}}\int_{x_a}^{x_b} \int_{x'_a}^{x'_b} \mathbb{E}[u(z)u(z')]dz'dz$.
The covariance for $f(x_a, x_b)$ follows as $\cov[f(x_a, x_b), f(x'_a, x'_b)]= \frac{1}{\Delta_x \Delta_{x'}}\int_{x_a}^{x_b} \int_{x'_a}^{x'_b} k(z,z')dz'dz$ since $\mathbb{E}[u(z)]=0$. Let us use $k(x_a, x_b, x'_a, x'_b)$ to refer to $\cov[f(x_a, x_b), f(x'_a, x'_b)]$. We can now use these mean and covariance functions for representing the Gaussian process prior for $f(x_a, x_b) \sim \mathcal{GP}(0, k(x_a, x_b, x'_a, x'_b))$. For some forms of $k(z,z')$ it is possible to obtain an analytical expression for $k(x_a, x_b, x'_a, x'_b)$. For example, if $k(z,z')$ follows an Exponentiated-Quadratic (EQ) covariance form, $k(z,z')=\sigma^2\exp\{-\frac{(z-z')^2}{\ell^2}\}$, where $\sigma^2$ is the variance of the kernel and $\ell$ is the length-scale, it can be shown that $k(x_a, x_b, x'_a, x'_b)$ follows as \begin{align*} k(x_a, x_b, x'_a, x'_b) & = \frac{\sigma^2\ell^2}{2 \Delta_x \Delta_{x'}} \left[h\left(\frac{x_b-x'_a}{\ell}\right) + h\left(\frac{x_a-x'_b}{\ell}\right) - h\left(\frac{x_a-x'_a}{\ell}\right) - h\left(\frac{x_b-x'_b}{\ell}\right) \right], \end{align*} where $h(z) = \sqrt{\pi}z\text{erf}(z) +e^{-z^2}$, with $\text{erf}(z)$ the error function defined as $\text{erf}(z)=\frac{2}{\sqrt{\pi}}\int_0^ze^{-r^2}dr$. Other kernels for $k(z,z')$ also lead to analytical expressions for $k(x_a, x_b, x'_a, x'_b)$. See for example \citet{Smith:binned:2018}. So far, we have restricted the exposition to one-dimensional intervals. However, we can define the stochastic process $f$ over a general support $\upsilon$, with measure $|\upsilon|$, using \begin{align*} f(\upsilon) = \frac{1}{|\upsilon|} \int_{\boldz\in \upsilon} u(\boldz)d\boldz. \end{align*} The support $\upsilon$ generally refers to an area or volume of any shape or size.
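To make the closed-form expression above concrete, the following is a minimal sketch of the integrated EQ kernel in code (the function names are ours, not from any particular library), together with the helper $h(\cdot)$:

```python
import math

def h(z):
    # h(z) = sqrt(pi) * z * erf(z) + exp(-z^2), as defined in the text
    return math.sqrt(math.pi) * z * math.erf(z) + math.exp(-z * z)

def integrated_eq_kernel(xa, xb, xpa, xpb, variance=1.0, lengthscale=1.0):
    """Covariance between the interval averages f(xa, xb) and f(x'a, x'b)
    when u(z) has an EQ kernel with the given variance and length-scale."""
    dx, dxp = abs(xb - xa), abs(xpb - xpa)
    l = lengthscale
    return (variance * l ** 2 / (2.0 * dx * dxp)) * (
        h((xb - xpa) / l) + h((xa - xpb) / l)
        - h((xa - xpa) / l) - h((xb - xpb) / l))
```

A useful sanity check is to compare the output against a midpoint-rule approximation of the normalised double integral $\frac{1}{\Delta_x \Delta_{x'}}\int_{x_a}^{x_b}\int_{x'_a}^{x'_b} k(z,z')\,dz'\,dz$; the two agree to several decimal places.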
Following similar assumptions to the ones we used for $f(x_a, x_b)$, we can build a GP prior to represent $f(\upsilon)$ with covariance $k(\upsilon, \upsilon')$ defined as $k(\upsilon, \upsilon')= \frac{1}{|\upsilon||\upsilon'|}\int_{\boldz\in\upsilon} \int_{\boldz'\in\upsilon'} k(\boldz,\boldz')d\boldz' d\boldz$. Let $\boldz\in\mathbb{R}^p$. If the support $\upsilon$ has a regular shape, e.g. a hyperrectangle, then assumptions on $u(\boldz)$ such as additivity or factorization across input dimensions will lead to kernels that can be expressed as additions or products of kernels acting over a single dimension. For example, let $u(\boldz)=\prod_{i=1}^pu_i(z_i)$, where $\boldz=[z_1,\cdots, z_p]^{\top}$, and place a GP over each $u_i(z_i)\sim \mathcal{GP}(0, k(z_i, z'_i))$. If each $k(z_i, z'_i)$ is an EQ kernel, then $k(\upsilon, \upsilon')=\prod_{i=1}^pk(x_{i,a}, x_{i,b}, x'_{i,a}, x'_{i,b})$, where $(x_{i,a}, x_{i,b})$ and $(x'_{i,a}, x'_{i,b})$ are the intervals across each input dimension. If the support $\upsilon$ does not follow a regular shape, i.e.\ it is a polytope, then we can approximate the double integration by numerical integration inside the support. \subsection{Multi-task learning setting} Our inspiration for multi-task learning is the linear model of coregionalisation (LMC) \citep{Journel:miningBook78}. This model has connections with other multi-task learning approaches that use kernel methods. For example, \citet{seeger2005semiparametric} and \citet{Bonilla:Multitask:2008} are particular cases of the LMC. A detailed review is available in \citet{alvarez2012kernels}. In the LMC, each output (or task in our case) is represented as a linear combination of a common set of latent Gaussian processes. Let $\{u_{q}(\boldz)\}_{q =1}^{Q}$ be a set of $Q$ GPs with zero means and covariance functions $k_q(\boldz, \boldz')$.
Each GP $u_{q}(\boldz)$ is sampled independently and identically $R_q$ times to produce $\{u^i_{q}(\boldz)\}_{i=1,q=1}^{R_q, Q}$ realizations that are used to represent the outputs. Let $\{f_d(\upsilon)\}_{d=1}^D$ be a set of tasks where each task is defined at a different support $\upsilon$. We use the set of realizations $u^i_{q}(\boldz)$ to represent each task $f_d(\upsilon)$ as \begin{align} f_d(\upsilon) & = \sum_{q=1}^Q\sum_{i=1}^{R_q}\frac{a^i_{d,q}}{|\upsilon|} \int_{\boldz\in \upsilon} u_q^i(\boldz)d\boldz,\label{eq:lmc:area} \end{align} where the coefficients $a^i_{d,q}$ weight the contribution of each integral term to $f_d(\upsilon)$. Since $\cov[u_q^i(\boldz), u_{q'}^{i'}(\boldz')]= k_q(\boldz, \boldz')\delta_{q,q'}\delta_{i,i'}$, with $\delta_{\alpha,\beta}$ the Kronecker delta between $\alpha$ and $\beta$, the cross-covariance $k_{f_d,f_{d'}}(\upsilon, \upsilon')$ between $f_d(\upsilon)$ and $f_{d'}(\upsilon')$ is then given as \begin{align*} k_{f_d,f_{d'}}(\upsilon, \upsilon') = \sum_{q=1}^Q \frac{b^q_{d,d'}}{|\upsilon||\upsilon'|}\int_{\boldz\in \upsilon} \int_{\boldz'\in \upsilon'} k_q(\boldz,\boldz')d\boldz' d\boldz, \end{align*} where $b^q_{d,d'} = \sum_{i=1}^{R_q} a^i_{d,q} a^i_{d',q}$. Following the discussion in Section \ref{sec:change:support}, the double integral can be solved analytically for some choices of $\upsilon$, $\upsilon'$ and $k_q(\boldz,\boldz')$. Generally, a numerical approximation can be obtained. It is also worth mentioning at this point that the model does not require all the tasks to be defined at the area level. Some of the tasks could also be defined at the point level. Say for example that $f_d$ is defined at the support level $\upsilon$, $f_{d}(\upsilon)$, whereas $f_{d'}$ is defined at the point level, say $\mathbf{x}\in \mathbb{R}^p$, $f_{d'}(\mathbf{x})$. In this case, $f_{d'}(\mathbf{x}) = \sum_{q=1}^Q\sum_{i=1}^{R_q}a^i_{d',q} u^i_q(\mathbf{x})$.
We can still compute the cross-covariance between $f_{d}(\upsilon)$ and $f_{d'}(\mathbf{x})$, $k_{f_d,f_{d'}}(\upsilon, \mathbf{x})$, leading to $k_{f_d,f_{d'}}(\upsilon, \mathbf{x}) = \sum_{q=1}^Q \frac{b^q_{d,d'}}{|\upsilon|}\int_{\boldz\in \upsilon} k_q(\boldz,\mathbf{x})d\boldz$. For the case $Q=1$ and $p=1$ (i.e.\ the dimensionality of the input space), that is, $z,z',x\in\mathbb{R}$, $\upsilon=(x_a, x_b)$ and an EQ kernel for $k(z,z')$, we get $k_{f_d,f_{d'}}(\upsilon, x) = \frac{b_{d,d'}}{\Delta_x}\int_{x_a}^{x_b} k(z,x)dz = \frac{b_{d,d'}\sqrt{\pi}\ell}{2\Delta_x}\left[\text{erf}\left(\frac{x_b-x}{\ell}\right)+\text{erf}\left(\frac{x-x_a}{\ell}\right)\right]$ (we used $\sigma^2=1$ to avoid an overparameterization of the variance). Again, if $\upsilon$ does not have a regular shape, we can approximate the integral numerically. Let us define the vector-valued function $\mathbf{f}(\upsilon) = [f_1(\upsilon), \cdots, f_D(\upsilon)]^\top$. A GP prior over $\mathbf{f}(\upsilon)$ can use the kernel defined above so that \begin{align*} \mathbf{f}(\upsilon)\sim \mathcal{GP}\left(\mathbf{0}, \sum_{q=1}^Q\frac{1}{|\upsilon||\upsilon'|}\mathbf{B}_q \int_{\boldz\in \upsilon} \int_{\boldz'\in \upsilon'} k_q(\boldz,\boldz')d\boldz' d\boldz\right), \end{align*} where each $\mathbf{B}_q\in\mathbb{R}^{D\times D}$ is known as a coregionalisation matrix. The scalar term $\int_{\boldz\in \upsilon} \int_{\boldz'\in \upsilon'} k_q(\boldz,\boldz')d\boldz' d\boldz$ modulates $\mathbf{B}_q$ as a function of $\upsilon$ and $\upsilon'$. The prior above can be used for modulating the parameters of likelihood functions that model the observed data. The simplest case corresponds to the multi-task regression problem, which can be modeled using a multivariate Gaussian distribution. Let $\mathbf{y}(\upsilon) = [y_1(\upsilon), \cdots, y_D(\upsilon)]^\top$ be a random vector modeling the observed data as a function of $\upsilon$.
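For the $Q=1$, $p=1$ area-to-point case just described, the cross-covariance takes only a few lines of code; the sketch below (illustrative names, unit variance) checks the erf expression against a direct numerical integration of the EQ kernel over the interval:

```python
import math

def area_to_point_cov(xa, xb, x, b=1.0, lengthscale=1.0):
    """Cross-covariance between a task averaged over (xa, xb) and a
    task observed at the point x, for Q = 1 and a unit-variance EQ kernel."""
    l = lengthscale
    dx = abs(xb - xa)
    return (b * math.sqrt(math.pi) * l / (2.0 * dx)) * (
        math.erf((xb - x) / l) + math.erf((x - xa) / l))

def area_to_point_numeric(xa, xb, x, n=4000):
    # Midpoint rule for (1 / dx) * integral of exp(-(z - x)^2) over (xa, xb)
    dz = (xb - xa) / n
    total = sum(math.exp(-((xa + (i + 0.5) * dz) - x) ** 2) for i in range(n))
    return total * dz / (xb - xa)
```

The two functions agree to several decimal places, which also makes explicit the $\sqrt{\pi}$ factor that arises from integrating the Gaussian form of the EQ kernel.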
In the multi-task regression problem $\mathbf{y}(\upsilon)\sim \mathcal{N}(\bm{\mu}(\upsilon), \bm{\Sigma})$, where $\bm{\mu}(\upsilon) = [\mu_1(\upsilon), \cdots, \mu_D(\upsilon)]^\top$ is the mean vector and $\bm{\Sigma}$ is a diagonal matrix with entries $\{\sigma_{y_d}^2 \}_{d=1}^D$. We can use the vector-valued GP $\mathbf{f}(\upsilon)$ as the prior over the mean vector, setting $\bm{\mu}(\upsilon) = \mathbf{f}(\upsilon)$. Since both the likelihood and the prior are Gaussian, both the marginal distribution for $\mathbf{y}(\upsilon)$ and the posterior distribution of $\mathbf{f}(\upsilon)$ given $\mathbf{y}(\upsilon)$ can be computed analytically. For example, the marginal distribution for $\mathbf{y}(\upsilon)$ is given as $\mathbf{y}(\upsilon) \sim \mathcal{N}(\mathbf{0}, \sum_{q=1}^Q\frac{1}{|\upsilon||\upsilon'|}\mathbf{B}_q \int_{\boldz\in \upsilon} \int_{\boldz'\in \upsilon'} k_q(\boldz,\boldz')d\boldz' d\boldz+\bm{\Sigma})$. \citet{MorenoMunoz:HetMOGP:2018} introduced the idea of allowing each task to have a different likelihood function and modulated the parameters of that likelihood function using one or more elements in the vector-valued GP prior. For that general case, the marginal likelihood and the posterior distribution cannot be computed in closed form. \subsection{Stochastic variational inference} Let $\mathcal{D} = \{\bm{\Upsilon}, \mathbf{y}\}$ be a dataset of multiple tasks with potentially different supports per task, where $\bm{\Upsilon} = \{\bm{\upsilon}_d\}_{d=1}^D$, with $\bm{\upsilon}_d=[\upsilon_{d,1},\cdots, \upsilon_{d,N_d}]^{\top}$, and $\mathbf{y}=[\mathbf{y}_1, \cdots, \mathbf{y}_D]^{\top}$, with $\mathbf{y}_d=[y_{d,1}, \cdots, y_{d,N_d}]^{\top}$ and $y_{d, j}=y_d(\upsilon_{d,j})$. Notice that $\mathbf{y}$ without $\upsilon$ refers to the output vector for the dataset.
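To build intuition for the prior used in the rest of the paper, its coregionalisation structure can be illustrated at the point level (skipping the support integrals for brevity): the covariance $\sum_q \mathbf{B}_q \otimes \mathbf{K}_q$ is a valid covariance whenever each $\mathbf{B}_q$ is positive semi-definite, and can be sampled directly. A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
D, Q, N = 2, 2, 20                       # tasks, latent processes, inputs
X = np.linspace(0.0, 5.0, N)[:, None]    # shared point-level inputs

def eq_kernel(X1, X2, lengthscale):
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / lengthscale ** 2)

# Coregionalisation matrices B_q = L_q L_q^T, positive semi-definite
# by construction for any real-valued L_q.
L_q = [rng.standard_normal((D, D)) for _ in range(Q)]
B_q = [L @ L.T for L in L_q]

# Full multi-task covariance over the stacked vector [f_1; f_2].
lengthscales = [0.5, 2.0]
K = sum(np.kron(B, eq_kernel(X, X, ls)) for B, ls in zip(B_q, lengthscales))

# Sample correlated tasks from the LMC prior (jitter for stability).
Lchol = np.linalg.cholesky(K + 1e-8 * np.eye(D * N))
f_sample = Lchol @ rng.standard_normal(D * N)
```

The sample `f_sample` stacks one draw per task; the off-diagonal blocks of $K$ are what allow one task to inform predictions for another.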
We are interested in computing the posterior distribution $p(\mathbf{f}|\mathbf{y})= p(\mathbf{y}|\mathbf{f})p(\mathbf{f})/p(\mathbf{y})$, where $\mathbf{f}=[\mathbf{f}_1, \cdots, \mathbf{f}_D]^{\top}$, with $\mathbf{f}_d=[f_{d,1}, \cdots, f_{d,N_d}]^{\top}$ and $f_{d, j}=f_d(\upsilon_{d,j})$. In this paper, we will use stochastic variational inference to compute a deterministic approximation of the posterior distribution, $p(\mathbf{f}|\mathbf{y})\approx q(\mathbf{f})$, by means of the well-known idea of \textit{inducing variables}. In contrast to the use of SVI for traditional Gaussian processes, where the inducing variables are defined at the level of the process $\mathbf{f}$, we follow \citet{alvarez2010efficient} and \citet{MorenoMunoz:HetMOGP:2018}, and define the inducing variables at the level of the latent processes $u_q(\boldz)$. For simplicity in the notation, we assume $R_q=1$. Let $\mathbf{u}=\{\mathbf{u}_q\}_{q=1}^{Q}$ be the set of inducing variables, where $\mathbf{u}_q = [u_q(\boldz_1),\cdots, u_q(\boldz_{M})]^{\top}$, with $\boldZ=\{\boldz_m\}_{m=1}^M$ the inducing inputs. Notice also that we have used a common set of inducing inputs $\boldZ$ for all latent functions, but we can easily define a set $\boldZ_q$ per inducing vector $\mathbf{u}_q$. For the multi-task regression case, it is possible to compute an analytical expression for the Gaussian posterior distribution over the inducing variables $\mathbf{u}$, $q(\mathbf{u})$, following a similar approach to \citet{alvarez2010efficient}. However, such an approximation is only valid for the case in which the likelihood model $p(\mathbf{y}|\mathbf{f})$ is Gaussian, and the variational bound obtained is not amenable to stochastic optimisation.
An alternative for finding $q(\mathbf{u})$ also establishes a lower-bound for the log-marginal likelihood $\log p(\mathbf{y})$, but uses numerical optimisation for maximising the bound with respect to the mean parameters, $\bm{\mu}$, and the covariance parameters, $\mathbf{S}$, of the Gaussian distribution $q(\mathbf{u})\sim\mathcal{N}(\bm{\mu},\mathbf{S})$ \citep{MorenoMunoz:HetMOGP:2018}. Such a numerical procedure can be used for any likelihood model $p(\mathbf{y}|\mathbf{f})$ and the optimisation can be performed using mini-batches. We follow this approach. \paragraph{Lower-bound} The lower bound for the log-marginal likelihood follows as \begin{align*} \log p(\mathbf{y}) \ge \int \int q(\mathbf{f}, \mathbf{u}) \log\frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\mathbf{u})p(\mathbf{u})}{q(\mathbf{f}, \mathbf{u})}d\mathbf{f} d\mathbf{u} =\mathcal{L}, \end{align*} where $q(\mathbf{f}, \mathbf{u})=p(\mathbf{f}|\mathbf{u})q(\mathbf{u})$, $p(\mathbf{f}|\mathbf{u})\sim \mathcal{N}(\mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}^{-1}_{\mathbf{u}\boldu} \mathbf{u},\mathbf{K}_{\mathbf{f}\boldf} -\mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}^{-1}_{\mathbf{u}\boldu}\mathbf{K}_{\mathbf{f}\mathbf{u}}^{\top})$, and $p(\mathbf{u}) \sim\mathcal{N}(\mathbf{0}, \mathbf{K}_{\mathbf{u}\boldu})$ is the prior over the inducing variables. Here $\mathbf{K}_{\mathbf{f}\mathbf{u}}$ is a blockwise matrix with blocks $\mathbf{K}_{\mathbf{f}_d, \mathbf{u}_q}$. In turn, each of these matrices has entries given by $k_{f_d, u_q}(\upsilon, \boldz')= \frac{a_{d,q}}{|\upsilon|}\int_{\boldz\in\upsilon} k_q(\boldz, \boldz')d\boldz$. Similarly, $\mathbf{K}_{\mathbf{u}\boldu}$ is a block-diagonal matrix with blocks given by $\mathbf{K}_{q}$ with entries computed using $k_{q}(\boldz, \boldz')$. The optimal $q(\mathbf{u})$ is chosen by numerically maximizing $\mathcal{L}$ with respect to the parameters $\bm{\mu}$ and $\mathbf{S}$.
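The resulting approximate posterior over the latent tasks is the usual sparse-GP marginal $q(\mathbf{f})=\int p(\mathbf{f}|\mathbf{u})q(\mathbf{u})d\mathbf{u}$, which is Gaussian with mean $\mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}^{-1}_{\mathbf{u}\mathbf{u}}\bm{\mu}$ and covariance $\mathbf{K}_{\mathbf{f}\mathbf{f}}-\mathbf{K}_{\mathbf{f}\mathbf{u}}\mathbf{K}^{-1}_{\mathbf{u}\mathbf{u}}(\mathbf{K}_{\mathbf{u}\mathbf{u}}-\mathbf{S})\mathbf{K}^{-1}_{\mathbf{u}\mathbf{u}}\mathbf{K}^{\top}_{\mathbf{f}\mathbf{u}}$. A single-latent-function, point-level sketch (illustrative names; a practical implementation would use Cholesky solves rather than explicit inverses):

```python
import numpy as np

def eq_kernel(X1, X2, lengthscale=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / lengthscale ** 2)

def q_f(Xstar, Z, mu, S, lengthscale=1.0, jitter=1e-8):
    """Gaussian marginal q(f*) implied by q(u) = N(mu, S) at inducing
    inputs Z, for a single latent function with an EQ kernel."""
    Kuu = eq_kernel(Z, Z, lengthscale) + jitter * np.eye(len(Z))
    Kfu = eq_kernel(Xstar, Z, lengthscale)
    Kff = eq_kernel(Xstar, Xstar, lengthscale)
    A = Kfu @ np.linalg.inv(Kuu)          # K_fu K_uu^{-1}
    mean = A @ mu
    cov = Kff - A @ (Kuu - S) @ A.T
    return mean, cov
```

A quick correctness check: if $q(\mathbf{u})$ equals the prior ($\bm{\mu}=\mathbf{0}$, $\mathbf{S}=\mathbf{K}_{\mathbf{u}\mathbf{u}}$), the marginal collapses back to the GP prior.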
To ensure a valid covariance matrix $\mathbf{S}$, we optimise the Cholesky factor $\mathbf{L}$ for $\mathbf{S}=\mathbf{L}\mathbf{L}^{\top}$. See Appendix \ref{sec:appendix:inference:more} for more details on the lower bound. The computational complexity is similar to the one for the model in \citet{MorenoMunoz:HetMOGP:2018}, $\mathcal{O}(QM^3+JNQM^2)$, where $J$ depends on the types of likelihoods used for the different tasks. For example, if we model all the outputs using Gaussian likelihoods, then $J=D$. For details, see \citet{MorenoMunoz:HetMOGP:2018}. \paragraph{Hyperparameter learning} When using the multi-task learning method, we need to optimise the hyperparameters associated with the LMC, namely: the coregionalisation matrices $\mathbf{B}_q$, the hyperparameters of the kernels $k_q(\boldz, \boldz')$, and any other hyperparameter associated with the likelihood functions $p(\mathbf{y}|\mathbf{f})$ that has not been considered as a member of the latent vector $\mathbf{f}(\upsilon)$. Hyperparameter optimisation is done using the lower bound $\mathcal{L}$ as the objective function. First $\mathcal{L}$ is maximised with respect to the variational distribution $q(\mathbf{u})$ and then with respect to the hyperparameters. The two steps are repeated one after the other until convergence. Such a style of optimisation is known as variational EM (Expectation-Maximization) when using the full dataset \citep{beal2003variational}, or stochastic variational EM when employing mini-batches \citep{hoffman2013stochastic}. In the Expectation step we compute a variational posterior distribution and in the Maximization step we use the variational lower bound to find point estimates of the hyperparameters. For optimising the hyperparameters in $\mathbf{B}_q$, we also use a Cholesky decomposition for each matrix to ensure positive definiteness. So instead of optimising $\mathbf{B}_q$ directly, we optimise $\mathbf{L}_q$, where $\mathbf{B}_q= \mathbf{L}_q \mathbf{L}_q^{\top}$.
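The Cholesky trick used for both $\mathbf{S}$ and $\mathbf{B}_q$ is easy to verify: for any real matrix $\mathbf{L}_q$, the product $\mathbf{L}_q\mathbf{L}_q^{\top}$ is symmetric positive semi-definite, so the entries of $\mathbf{L}_q$ can be optimised without constraints. A two-line sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((4, 4))   # unconstrained optimisation variables
B = L @ L.T                       # valid coregionalisation matrix

# B is symmetric and its eigenvalues are non-negative, for any L.
eigvals = np.linalg.eigvalsh(B)
```

In practice a gradient-based optimiser updates the entries of `L` freely, and positive (semi-)definiteness of the reconstructed matrix is preserved automatically.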
For the experimental section, we use the EQ kernel for $k_q(\boldz, \boldz')$, so we fix the variance of $k_q(\boldz, \boldz')$ to one (the variance per output is already contained in the matrices $\mathbf{B}_q$) and optimise the length-scales $\ell_q$. \paragraph{Predictive distribution} Given a new set of test inputs $\bm{\Upsilon}_*$, the predictive distribution for $p(\mathbf{y}_*|\mathbf{y}, \bm{\Upsilon}_*)$ is computed using $p(\mathbf{y}_*|\mathbf{y}, \bm{\Upsilon}_*)=\int_{\mathbf{f}_*}p(\mathbf{y}_*|\mathbf{f}_*)q(\mathbf{f}_*)d\mathbf{f}_*$, where $\mathbf{y}_*$ and $\mathbf{f}_*$ refer to the vector-valued functions $\mathbf{y}$ and $\mathbf{f}$ evaluated at $\bm{\Upsilon}_*$. Notice that $q(\mathbf{f}_*)\approx p(\mathbf{f}_*|\mathbf{y})$. Even though $\mathbf{y}$ does not appear explicitly in the expression for $q(\mathbf{f}_*)$, it has been used to compute the posterior $q(\mathbf{u})$ through the optimisation of $\mathcal{L}$, where $\mathbf{y}$ is explicitly taken into account. We are usually interested in the mean prediction $\mathbb{E}[\mathbf{y}_*]$ and the predictive variance $\operatorname{var}[\mathbf{y}_*]$. Both can be computed by exchanging the order of integration over $\mathbf{y}_*$ and $\mathbf{f}_*$. See Appendix \ref{sec:appendix:inference:more} for more details on this. \section{Related work}\label{sec:related:work} Machine learning methods for different forms of aggregated datasets are also known under the names of \emph{multiple instance learning}, \emph{learning from label proportions} or \emph{weakly supervised learning on aggregate outputs} \citep{Kuck:Individuals:ti:Groups:2005, Musicant:Sup:learning:aggregate:2007, Quadrianto:JMLR:2009, Patrini:NoLabelNoCry:2014, Kotzias:Individual:Group:Deep:Features:2015, Bhowmik:Aggregated:AISTATS:2015}. \citet{Law:variational:aggregate:neurips:2018} provided a summary of these different approaches.
Typically these methods start with the following setting: each instance in the dataset is in the form of a set of inputs for which there is only one corresponding output (e.g. the proportion of class labels, an average or a sample statistic). The prediction problem then usually consists of predicting the individual outputs for the individual inputs in the set. The setting we present in this paper is slightly different in the sense that, in general, for each instance, the input corresponds to a support of any shape and size and the output corresponds to a vector-valued quantity. Moreover, each task can have its own support. Similarly, while most of these ML approaches have been developed for either regression or classification, our model is built on top of \citet{MorenoMunoz:HetMOGP:2018}, allowing each task to have a potentially different likelihood. As mentioned in the introduction, Gaussian processes have also been used for multiple instance learning or aggregated data \citep{Kim:MIL:GP:ICML:2010, Kandemir:MIL:GP:BMVC:2016, Haubmann:MIL:GP:CVPR:2017, Smith:binned:2018, Law:variational:aggregate:neurips:2018, tanaka2019refining, hamelijnck2019multi, tanaka2019spatially}. Compared to most of these previous approaches, our model goes beyond the single-task problem and allows learning multiple tasks simultaneously. Each task can have its own support at training and test time. Compared to other multi-task approaches, we allow for heterogeneous outputs. Although our model was formulated for a continuous support $\mathbf{x}\in \upsilon_{d,j}$, we can also define it in terms of a finite set of (previously defined) inputs in the support, e.g. a set $\{\mathbf{x}_{d, j, 1}, \cdots, \mathbf{x}_{d, j, K_{d,j}}\}\in \upsilon_{d,j}$, which is more akin to the bag formulations in these previous works.
This would require replacing the integral $\frac{1}{|\upsilon_{d,j}|}\int_{\boldz\in\upsilon_{d,j}}u_q^i(\boldz)d\boldz$ in \eqref{eq:lmc:area} with the sum $\frac{1}{K_{d,j}}\sum_{k=1}^{K_{d,j}}u_q^i(\mathbf{x}_{d,j,k})$. In geostatistics, a similar problem has been studied under the names of \emph{downscaling} or \emph{spatial disaggregation} \citep{zhang2014scale}, particularly using different forms of \emph{kriging} \citep{Goovaerts:book97}. It is also closely related to the problem of \emph{change of support} described in detail in \citet{Gotway:Incompatible:2002}. Block-to-point kriging (or area-to-point kriging if the support is defined on a surface) is a common method for downscaling, that is, providing predictions at the point level given data at the block level \citep{Kyriakidis:AreaToPoint:2004, GoovaertsSupport2010}. We extend the approach introduced in \citet{Kyriakidis:AreaToPoint:2004}, later revisited by \citet{GoovaertsSupport2010} for count data, to the multi-task setting, including also a stochastic variational EM algorithm for scalable inference. If we consider the high-resolution outputs as high-fidelity outputs and the low-resolution outputs as low-fidelity outputs, our work also falls under the umbrella of \emph{multi-fidelity models}, where co-kriging using the linear model of coregionalisation has also been used as an alternative \citep{Peherstorfer:Multifidelity:2018, fernndezgodino2016review}. \section{Experiments} In this section, we apply the multi-task learning model for prediction in three different datasets: a synthetic example with two tasks that each have a Poisson likelihood, a two-dimensional input dataset of fertility rates aggregated by year of conception and ages in Canada, and an air-pollution sensor network where one task corresponds to a high-accuracy, low-frequency particulate matter sensor and another task corresponds to a low-cost, low-accuracy, high-resolution sensor.
In these examples, we use $k$-means clustering over the input data, with $k=M$, to initialise the values of the inducing inputs, $\mathbf{Z}$, which are also kept fixed during optimisation. We assume the inducing inputs are points, but they could be defined as intervals or supports. For standard optimisation we used the L-BFGS-B algorithm; when SVI was needed, the Adam optimiser included in the \textit{climin} library was used for the optimisation of the variational distribution (variational E-step) and the hyperparameters (variational M-step). The implementation is based on the GPy framework and is available on GitHub: \url{https://github.com/frb-yousefi/aggregated-multitask-gp}. \paragraph{Synthetic data} \input{synthetic} \paragraph{Fertility rates from a Canadian census} \input{fertility} \paragraph{Air pollution monitoring network} \input{airpollution} \section{Conclusion} In this paper, we have introduced a powerful framework for working with aggregated datasets that allows the user to combine observations from disparate data types, with varied support. This allows us to produce both finely resolved and accurate predictions by exploiting the accuracy of low-resolution data and the fidelity of high-resolution side information. We chose our inducing points to lie in the latent space, a distinction which allows us to estimate multiple tasks with different likelihoods. SVI and variational EM with mini-batches make the framework scalable and tractable for potentially very large problems. A potential extension would be to consider how the ``mixing'' achieved through coregionalisation could vary across the domain by extending, for example, the Gaussian Process Regression Network model \citep{wilson2011gaussian} to be able to deal with aggregated data. Such a model would allow latent functions with different lengthscales to be relevant at different locations in the domain.
In summary, this framework provides a vital toolkit, allowing a mixture of likelihoods, kernels and tasks, and paves the way to the analysis of a very common and widely used data structure: that of values over a variety of supports on the domain. \section{Acknowledgement} MTS and MAA have been financed by the Engineering and Physical Sciences Research Council (EPSRC) Research Project EP/N014162/1. MAA has also been financed by the EPSRC Research Project EP/R034303/1. \small \bibliographystyle{plainnat}
\section{Introduction} The process of isospectrally reducing a matrix was first considered in \cite{Bunimovich:2012:IGT}, where it was shown that a weighted digraph could be reduced while maintaining the eigenvalues of the graph's weighted adjacency matrix, up to a known set. The motivation in this setting was to allow one to simplify the structure of a complicated network (graph) while preserving its spectral information. One of the main results of this paper is that any weighted digraph can be uniquely reduced to a graph on any subset of its nodes via some sequence of isospectral reductions. Later it was shown in \cite{Bunimovich:2011:IGR} that such matrix reductions could be used to improve the classical eigenvalue estimates of Gershgorin, Brauer, Brualdi, and Varga \cite{Gershgorin:1931:UDA,Brauer:1947:LCR,Brualdi:1982:MED,Varga:2004:GC}. Specifically, the eigenvalue estimates associated with Gershgorin and Brauer both improve for any matrix reduction and can be successively improved by further matrix reductions. The eigenvalue estimates of Brualdi and Varga are more complicated but can be shown to improve for specific types of matrix reductions. In this paper we generalize this previous work by first showing that a matrix can be isospectrally reduced over any of its principal submatrices, under mild conditions. This is an improvement over the isospectral reduction method presented in \cite{Bunimovich:2011:IGR,Bunimovich:2012:IGT,Bunimovich:2012:IC}, since in these three papers a submatrix is required to have a particular form in order for the reduction to be defined. This fundamental improvement allows us to avoid the sequence of reductions that were previously necessary for certain matrix reductions. We prove in a more general setting that a sequence of reductions still leads to a uniquely defined matrix that depends only on the final reduction (see theorem \ref{theorem2}). 
As defined in \cite{Bunimovich:2011:IGR} a matrix with rational function entries has both a spectrum and an inverse spectrum. When a square matrix is isospectrally reduced, the result is a smaller matrix that again has a spectrum and an inverse spectrum. The relation between the spectrum and inverse spectrum of the reduced and unreduced matrices is dictated by the specific submatrix over which the matrix is reduced (see theorem \ref{maintheorem}). Expanding on the work done in \cite{Bunimovich:2011:IGR}, we show that it is possible to not only use the eigenvalue estimates associated with Gershgorin to estimate the eigenvalues of a matrix, but to estimate its inverse eigenvalues. This is done by introducing the concept of the spectral inverse of a matrix, i.e. the matrix in which the eigenvalues are the inverse eigenvalues of the original matrix and vice versa. Therefore, the results found in \cite{Bunimovich:2011:IGR} allow us to give estimates of the inverse eigenvalues of a matrix and use matrix reductions to improve them (see theorem \ref{theorem4}). Another reason we care about isospectral reductions is that they naturally arise in network models when we do not have access to all the network nodes. We use a mass spring network to illustrate this: the isospectral reduction amounts to the response of the network where we only have access to some terminal nodes (see example \ref{ex:spring0}). In the case where all nodes are accessible (i.e. all nodes are terminal nodes), the eigenvalues correspond to frequencies for which there is a non-zero node displacement that results in zero forces. For the reduced matrix, the eigenvalues indicate frequencies for which a non-zero displacement of the terminal nodes generates zero forces at the terminals. The inverse eigenvalues of the reduced matrix correspond to resonance frequencies, i.e. frequencies for which there is an extremely large force generated by a finite displacement of the terminals. 
The pseudospectrum of a matrix gives us the scalars that behave like eigenvalues to within a certain tolerance. This concept is particularly useful in analyzing the properties of matrices that are non-normal, i.e. do not have an orthogonal eigenbasis. The pseudospectrum of a complex valued matrix has been introduced independently many times (see \cite{Trefethen:2005:SP} for details). It has also been studied in the case of matrix polynomials \cite{Lancaster:2005:PMP,Boulton:2008:PMP}. Here, we extend the definition of pseudospectrum to matrices with rational function entries. By use of the spectral inverse we also define the inverse pseudospectrum of a matrix in Section \ref{sec:psire}. As with complex valued matrices, the pseudospectra we define for matrices with rational function entries are subsets of the complex plane whose elements behave, within some tolerance, as eigenvalues. Similarly, the inverse pseudospectra of a matrix are the sets of scalars that act as inverse eigenvalues for a given tolerance. For the mass spring network we consider, the pseudospectra of the stiffness matrix are the values for which there are node displacements that generate forces that are {\em small} relative to the displacement. The same is true of inverse pseudospectra, except these correspond to forces that are {\em large} relative to the displacement. A given tolerance determines how ``large'' and ``small'' these forces are. We show that the pseudospectra of a reduced matrix are always contained in the pseudospectra of the original matrix for a given tolerance. This implies that the eigenvalues of a reduced matrix are less susceptible to perturbations than those of the original matrix. The paper is organized as follows. Section \ref{sec:matred} introduces and extends the theory of isospectral matrix reductions. This section also includes the spectral inverse of a matrix along with the Gershgorin-type estimates of a matrix's inverse eigenvalues.
In Section \ref{sec:psi} we define the pseudospectrum and inverse pseudospectrum of a matrix with rational function entries and show that the pseudospectrum of a matrix shrinks in size as the matrix is reduced. Throughout the paper we consider numerous examples, including the mass spring network mentioned above, which is used to give a physical interpretation to the concepts introduced in this paper. \section{Isospectral Matrix Reductions}\label{sec:matred} In the first part of this paper we introduce the class of matrices we wish to consider; namely, matrices with rational function entries. The reason we consider this class of matrices, as mentioned in the introduction, is that such matrices arise naturally if we wish or need to reduce the size of a matrix (or system) we are considering while maintaining its spectral properties. This procedure of isospectrally reducing a matrix and describing the spectrum of such matrices is the main focus of this section. \subsection{Matrices with Rational Function Entries}\label{sec:mat} The class of matrices we consider consists of those square matrices whose entries are rational functions of $\lambda$. Specifically, let $\complex[\lambda]$ be the set of polynomials in the complex variable $\lambda$ with complex coefficients. We denote by $\BW$ the set of rational functions of the form $$w(\lambda)=p(\lambda)/q(\lambda)$$ where $p(\lambda),q(\lambda)\in\complex[\lambda]$ are polynomials having no common linear factors and where $q(\lambda)$ is not identically zero. More generally, each rational function $w(\lambda)\in\mathbb{W}$ is expressible in the form $$w(\lambda)=\frac{a_i\lambda^i+a_{i-1}\lambda^{i-1}+\dots+a_0}{b_j\lambda^j+b_{j-1}\lambda^{j-1}+\dots+b_0}$$ where, without loss of generality, we can take $b_j=1$. The domain of $w(\lambda)$ consists of all complex numbers except the finitely many for which $q(\lambda)=b_j\lambda^j+b_{j-1}\lambda^{j-1}+\dots+b_0$ is zero.
Addition and multiplication on the set $\mathbb{W}$ are defined as follows. For $p(\lambda)/q(\lambda)$ and $r(\lambda)/s(\lambda)$ in $\mathbb{W}$ let \begin{align} \label{eq:add}\left(\frac{p}{q}+\frac{r}{s}\right)(\lambda)=& \frac{p(\lambda)s(\lambda)+q(\lambda)r(\lambda)}{q(\lambda)s(\lambda)}; \ \text{and}\\ \label{eq:mult}\left(\frac{p}{q}\cdot\frac{r}{s}\right)(\lambda)=&\frac{p(\lambda)r(\lambda)}{q(\lambda)s(\lambda)} \end{align} where the common linear factors in the right hand side of equations (\ref{eq:add}) and (\ref{eq:mult}) are canceled. The set $\mathbb{W}$ is then a field under addition and multiplication. Because we are primarily concerned with the eigenvalues of a matrix, which is a set that includes multiplicities, we note the following. The element $\alpha$ of the set $A$ that includes multiplicities has \textit{multiplicity} $m$ if there are $m$ elements of $A$ equal to $\alpha$. If $\alpha\in A$ with multiplicity $m$ and $\alpha\in B$ with multiplicity $n$ then\\ \indent (i) the \textit{union} $A\cup B$ is the set in which $\alpha$ has multiplicity $m+n$; and\\ \indent (ii) the \textit{difference} $A-B$ is the set in which $\alpha$ has multiplicity $m-n$ if $m-n>0$ and where $\alpha\notin A-B$ otherwise. \begin{definition}\label{def1.1} Let $\BW^{n\times n}$ denote the set of $n\times n$ matrices with entries in $\BW$. For a matrix $M(\lambda)\in\BW^{n\times n}$ the determinant $$\det\big(M(\lambda)-\lambda I\big) = p(\lambda)/q(\lambda)$$ for some $p(\lambda)/q(\lambda)\in\mathbb{W}$. The {\em spectrum} (or eigenvalues) of $M(\lambda)$ is defined as \[ \sigma\M{M} = \{\lambda\in\complex:p(\lambda)=0\}. \] The {\em inverse spectrum} (or \emph{resonances}) of $M(\lambda)$ is defined as \[ \sigma^{-1}\M{M} = \{\lambda\in\complex:q(\lambda)=0\}. \] \end{definition} Both $\sigma(M)$ and $\sigma^{-1}(M)$ are understood to be sets that include multiplicities. 
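Definition \ref{def1.1} lends itself to direct computation with a computer algebra system. The following sketch (sympy assumed; the function name is illustrative, not from the paper) writes $\det(M(\lambda)-\lambda I)$ in lowest terms as $p(\lambda)/q(\lambda)$ and reads off the spectrum and inverse spectrum, with multiplicities, from the roots of $p$ and $q$:

```python
import sympy as sp

lam = sp.symbols('lambda')

def spectrum_and_inverse(M):
    """Spectrum and inverse spectrum of M(lambda): reduce
    det(M - lambda*I) = p/q to lowest terms; the spectrum is the
    roots of p, the inverse spectrum the roots of q (with multiplicity)."""
    d = sp.cancel(sp.det(M - lam * sp.eye(M.rows)))  # p/q in lowest terms
    p, q = sp.fraction(sp.together(d))
    sigma = sp.roots(sp.Poly(p, lam))  # dict: root -> multiplicity
    q_poly = sp.Poly(q, lam)
    sigma_inv = sp.roots(q_poly) if q_poly.degree() > 0 else {}
    return sigma, sigma_inv

# A small matrix in W^{2x2} with a 1/lambda entry:
M = sp.Matrix([[0, 1 / lam],
               [0, 0]])
sig, sig_inv = spectrum_and_inverse(M)
```

Here $\det(M-\lambda I)=\lambda^2$, so the spectrum is $\{0,0\}$ and the inverse spectrum is empty.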
For example, if the polynomial $p(\lambda)\in\mathbb{C}[\lambda]$ factors as $$p(\lambda)=\prod_{i=1}^m(\lambda-\alpha_i)^{n_i} \ \ \text{for} \ \alpha_i\in\mathbb{C} \ \text{and} \ n_i\in\mathbb{N}$$ then $\{\lambda\in\mathbb{C}:p(\lambda)=0\}$ is the set in which $\alpha_i$ has multiplicity $n_i$. \begin{remark} Since $\mathbb{C}\subset\mathbb{W}$, definition \ref{def1.1} is an extension of the standard definition of the eigenvalues of a matrix to the larger class of matrices $\mathbb{W}^{n\times n}$. In particular, if $M\in\mathbb{C}^{n\times n}$ then $\sigma(M)$ are the standard eigenvalues of $M$. \end{remark} In what follows we may, for convenience, suppress the dependence of the matrix $M(\lambda)\in\mathbb{W}^{n\times n}$ on $\lambda$ and simply write $M$. One reason for this is that for much of what we do in this paper we do not evaluate $M(\lambda)$ at any particular point $\lambda\in\mathbb{C}$. Rather, we consider $M$ formally as a matrix with rational function entries. However, when we do consider the matrix $M(\lambda)\in\mathbb{W}^{n\times n}$ to be a function of $\lambda$ we mean $M$ is the function $$M:\dom(M)\rightarrow\mathbb{C}^{n\times n},$$ where $\dom(M)$ are the complex numbers $\lambda$ for which every entry of $M(\lambda)$ is defined. Surprisingly, it may be the case that $\sigma(M)\nsubseteq \dom(M)$ as the following example shows. \begin{example}\label{ex:0} Consider the matrix $M(\lambda)\in\mathbb{W}^{2\times 2}$ given by $$M(\lambda)=\left[\begin{array}{cc} 0&\frac{1}{\lambda}\\ 0&0 \end{array}\right].$$ As one can compute, $\det(M(\lambda)-\lambda I)=\lambda^2$ implying $\sigma(M)=\{0,0\}$. Therefore, $\sigma(M)\nsubseteq \dom(M)$. \end{example} \subsection{Isospectral Matrix Reductions}\label{sec:imr} We can now describe an {\em isospectral reduction} of a matrix $M\in\mathbb{W}^{n\times n}$. We then compare the spectrum of $M$ to the spectrum of its isospectral reduction. Let $M\in\mathbb{W}^{n\times n}$ and $N=\{1,\ldots,n\}$. 
If the sets $\mathcal{R},\mathcal{C}\subseteq N$ are non-empty we denote by $M_{\mathcal{R}\mathcal{C}}$ the $|\mathcal{R}| \times |\mathcal{C}|$ \emph{submatrix} of $M$ with rows indexed by $\mathcal{R}$ and columns by $\mathcal{C}$. Suppose the non-empty sets $\mathcal{B}$ and $\mathcal{I}$ form a partition of $N$. The {\em Schur complement} of $M_{\mathcal{II}}$ in $M$ is the matrix \begin{equation} M/M_{\mathcal{II}} = M_{\mathcal{BB}} - M_{\mathcal{BI}} M_{\mathcal{II}}^{-1} M_{\mathcal{IB}}, \label{eq:schur} \end{equation} assuming $M_{\mathcal{II}}$ is invertible. The Schur complement arises in many applications. For example, if the matrix $M$ is the Kirchhoff matrix of a network of resistors with $n$ nodes then its Schur complement is the Dirichlet-to-Neumann (or voltage-to-currents) map of the network given by considering the nodes in $\mathcal{B}$ as terminal or \emph{boundary} nodes and the nodes in $\mathcal{I}$ as \emph{interior} nodes (see e.g. \cite{Curtis:1998:cpg}). A physical interpretation of an isospectral reduction is given in example~\ref{ex:spring0}. We are now ready to define the isospectral reduction of a matrix $M\in\BW^{n\times n}$. \begin{definition}\label{def1} For $M(\lambda) \in\BW^{n\times n}$ let $\mathcal{B}$ and $\mathcal{I}$ form a non-empty partition of $N$. The {\em isospectral reduction} of $M$ over the set $\mathcal{B}$ is the matrix \begin{equation} R_{\lambda}(M;\mathcal{B})=M_{\mathcal{BB}} - M_{\mathcal{BI}}(M_{\mathcal{II}}-\lambda I)^{-1} M_{\mathcal{IB}} \in \BW^{|\mathcal{B}|\times|\mathcal{B}|}, \label{eq:isred} \end{equation} provided the matrix $M_{\mathcal{II}}-\lambda I$ is invertible. \end{definition} Note that the reduced matrix $R_{\lambda}(M;\mathcal{B})$ is a Schur complement plus a multiple of the identity: \begin{equation}\label{eq:sch} R_{\lambda}(M;\mathcal{B})=(M-\lambda I)/(M_{\mathcal{II}}-\lambda I)+\lambda I.
\end{equation} More often than not we suppress the dependence of $R_{\lambda}(M;\mathcal{B})$ on $\lambda$ and instead write it as $R(M;\mathcal{B})$. \begin{example}\label{ex:2} Consider the matrix $M\in\BW^{6\times 6}$ with $(0,1)$-entries given by $$M=\left[ \begin{array}{cccccc} 0&0&1&1&0&0\\ 0&1&0&0&1&1\\ 1&0&1&0&0&0\\ 0&1&0&1&0&0\\ 1&0&0&0&0&0\\ 0&1&0&0&0&0 \end{array} \right].$$ For $\mathcal{B}=\{1,2\}$ and $\mathcal{I}=\{3,4,5,6\}$ we have $$(M_{\cI\cI}-\lambda I)^{-1}=\left[ \begin{array}{cccc} \frac{1}{1-\lambda}&0&0&0\\ 0&\frac{1}{1-\lambda}&0&0\\ 0&0&-\frac{1}{\lambda}&0\\ 0&0&0&-\frac{1}{\lambda} \end{array} \right].$$ The isospectral reduction of $M$ over $\mathcal{B}=\{1,2\}$ is then given by \[ \begin{aligned} R(M;\mathcal{B})&= \left[ \begin{array}{cc} 0&0\\ 0&1 \end{array} \right]- \left[ \begin{array}{cccc} 1&1&0&0\\ 0&0&1&1 \end{array} \right] \left[ \begin{array}{cccc} \frac{1}{1-\lambda}&0&0&0\\ 0&\frac{1}{1-\lambda}&0&0\\ 0&0&-\frac{1}{\lambda}&0\\ 0&0&0&-\frac{1}{\lambda} \end{array} \right] \left[ \begin{array}{cc} 1&0\\ 0&1\\ 1&0\\ 0&1 \end{array} \right]\\ &=\left[ \begin{array}{cc} \frac{1}{\lambda-1}&\frac{1}{\lambda-1}\\ \frac{1}{\lambda}&\frac{\lambda+1}{\lambda} \end{array} \right]\in\BW^{2\times 2}. \end{aligned} \] \end{example} If a matrix has an isospectral reduction, the spectrum and inverse spectrum of the isospectral reduction and the original matrix are related as follows. \begin{theorem}\label{maintheorem}\textbf{(Spectrum and Inverse Spectrum of Isospectral Reductions)}\\ For $M(\lambda) \in\BW^{n\times n}$ let $\mathcal{B}$ and $\mathcal{I}$ form a non-empty partition of $N$.
If $R_{\lambda}(M;\mathcal{B})$ exists then its spectrum and inverse spectrum are given by \begin{align*} \sigma\big(R(M;\mathcal{B})\big)&= \M{\sigma(M)\cup\sigma^{-1}(M_{\mathcal{II}})}- \M{\sigma (M_{\mathcal{II}})\cup\sigma^{-1}(M)}; ~\text{and}\\ \sigma^{-1}\big(R(M;\mathcal{B})\big)&=\M{\sigma(M_{\mathcal{II}})\cup\sigma^{-1}(M)}-\M{\sigma(M)\cup\sigma^{-1} (M_{\mathcal{II}})}. \end{align*} \end{theorem} \begin{proof} For $M\in\BW^{n\times n}$, we may assume without loss of generality that $M$ has the block matrix form \begin{equation}\label{eq3.1} M=\begin{bmatrix} M_{\mathcal{II}} & M_{\mathcal{IB}}\\ M_{\mathcal{BI}} & M_{\mathcal{BB}} \end{bmatrix} \end{equation} where $M_{\mathcal{II}}-\lambda I$ is invertible. Note that the determinant of a matrix and that of its Schur complement are related by the identity \begin{equation}\label{eq:detschur} \det\begin{bmatrix} A & B\\ C & D \end{bmatrix} =\det(A)\cdot\det(D-CA^{-1}B), \end{equation} provided the submatrix $A$ is invertible. Using this identity on the matrix $M-\lambda I$ yields \[ \det(M-\lambda I)=\det(M_{\mathcal{II}}-\lambda I) \cdot\det\M{(M_{\mathcal{BB}}-\lambda I)-M_{\mathcal{BI}} (M_{\mathcal{II}}-\lambda I)^{-1}M_{\mathcal{IB}}}. \] Therefore, \begin{equation*} \det\big(R(M;\mathcal{B})-\lambda I\big)=\frac{\det(M-\lambda I)}{\det(M_{\mathcal{II}}-\lambda I)}. \end{equation*} To compare the eigenvalues of $R(M;\mathcal{B})$, $M$, and $M_{\mathcal{II}}$ write \[ \det(M-\lambda I)=\frac{p(\lambda)}{q(\lambda)}\ \ \text{and} \ \ \det(M_{\cI\cI}-\lambda I)=\frac{t(\lambda)}{u(\lambda)}, \] for some $p/q,t/u\in\mathbb{W}$. Hence, \[ \det\big(R(M;\mathcal{B})-\lambda I\big)=\frac{p(\lambda)u(\lambda)}{q(\lambda)t(\lambda)}. \] Let $P=\{\lambda\in\complex:p(\lambda)=0\}$, $Q=\{\lambda\in\complex:q(\lambda)=0\}$, $T=\{\lambda\in\complex:t(\lambda)=0\}$, and $U=\{\lambda\in\complex:u(\lambda)=0\}$, with multiplicities. 
By canceling common linear factors, definition \ref{def1.1} implies \begin{align*} \sigma\big(R(M;\mathcal{B})\big)=&\{\lambda\in\complex:p(\lambda)u(\lambda)=0\}-\{\lambda\in\complex: q(\lambda)t(\lambda)=0\}\\ =&(P\cup U)-(Q\cup T); \ \text{and}\\ \sigma^{-1}\big(R(M;\mathcal{B})\big)=&\{\lambda\in\complex: q(\lambda)t(\lambda)=0\}-\{\lambda\in\complex:p(\lambda)u(\lambda)=0\}\\ =&(Q\cup T)-(P\cup U). \end{align*} Since $P=\sigma(M)$, $Q=\sigma^{-1}(M)$, $T=\sigma(M_{\mathcal{II}})$, and $U=\sigma^{-1}(M_{\mathcal{II}})$, the result follows. \end{proof} Since a matrix $M\in\mathbb{C}^{n\times n}$ has no inverse spectrum (i.e. $\sigma^{-1}(M) = \emptyset$), theorem~\ref{maintheorem} applied to complex valued matrices has the following corollary. \begin{corollary}\label{cor1} For $M(\lambda) \in\complex^{n\times n}$ let $\mathcal{B}$ and $\mathcal{I}$ form a non-empty partition of $N$. Then \[ \sigma\big(R(M;\mathcal{B})\big)=\sigma(M)-\sigma(M_{\cI\cI}) \ \ \text{and} \ \ \sigma^{-1}\big(R(M;\mathcal{B})\big) =\sigma(M_{\cI\cI})-\sigma(M). \] \end{corollary} \begin{example}\label{ex:3} Let $M$, $\mathcal{B}$ and $\mathcal{I}$ be as in example~\ref{ex:2}. As one can compute, $\sigma(M)=\{2,-1,1,1,0,0\}$ and $\sigma(M_{\mathcal{II}})=\{1,1,0,0\}$. By corollary~\ref{cor1} we then have \[ \begin{aligned} \sigma\big(R(M;\mathcal{B})\big)& = \{2,-1,1,1,0,0\}-\{1,1,0,0\} = \{2,-1\}; \ \text{and}\\ \sigma^{-1}\big(R(M;\mathcal{B})\big)& = \{1,1,0,0\}-\{2,-1,1,1,0,0\} = \emptyset. \end{aligned} \] Observe that, by reducing $M$ over $\mathcal{B}$ we lose the eigenvalues corresponding to the ``interior'' degrees of freedom $\sigma(M_{\mathcal{II}})=\{1,1,0,0\}$. That is, if we knew both $\sigma(M_{\mathcal{II}})$ and $\sigma(R(M;\mathcal{B}))$ but not $\sigma(M)$, then corollary~\ref{cor1} states that the set $\sigma(M_{\mathcal{II}})$ is the most by which $\sigma(R(M;\mathcal{B}))$ and $\sigma(M)$ can differ.
\end{example} Theorem \ref{maintheorem} therefore describes exactly which eigenvalues we may gain from an isospectral reduction and which we may lose. In this way an isospectral reduction of a matrix preserves the spectral information of the original matrix. However, it may not always be possible to find an isospectral reduction of a matrix $M\in\mathbb{W}^{n\times n}$. For instance, consider the matrix $M\in\mathbb{W}^{2\times 2}$ given by \begin{equation}\label{mat1} M=\left[\begin{array}{cc} 1&1\\ 1&\lambda \end{array}\right]. \end{equation} For $\mathcal{B}=\{1\}$ and $\mathcal{I}=\{2\}$ note that $M_{\mathcal{II}}-\lambda I=[0]$, which is not invertible. Therefore, $M$ cannot be isospectrally reduced over $\mathcal{B}$. In general there is no way to know beforehand if the isospectral reduction $R(M;\mathcal{B})$ exists without computing $(M_{\mathcal{II}}-\lambda I)^{-1}$. However, the following subset of $\mathbb{W}^{n\times n}$ can always be reduced over any nonempty subset $\mathcal{B}\subset N$. For any polynomial $p(\lambda)\in\complex[\lambda]$, let $\deg(p)$ denote its degree. If $w(\lambda)=p(\lambda)/q(\lambda)$ where both $p(\lambda),q(\lambda)\in\complex[\lambda]$ are not identically zero we define the degree of the rational function $w(\lambda)$ by $$\pi(w)=\deg(p)-\deg(q).$$ In the case where $p(\lambda)=0$ we let $\pi(w)=0$. \begin{definition} We denote by $\BW_{\pi}$ the set of rational functions $$\BW_{\pi}=\{w\in\BW:\pi(w)\leq0\}$$ and let $\BW^{n\times n}_{\pi}$ be the set of $n\times n$ matrices with entries in $\BW_{\pi}$. \end{definition} \begin{lemma}\label{lemma1} If $M(\lambda) \in\BW^{n\times n}_{\pi}$ and $\mathcal{B}\subset N$ is non-empty then $R_{\lambda}(M;\mathcal{B})\in\mathbb{W}_{\pi}^{|\mathcal{B}|\times |\mathcal{B}|}$. \end{lemma} \begin{proof} Let $M\in\BW^{n\times n}_{\pi}$. 
The inverse of the matrix $M-\lambda I$ is given by \begin{equation}\label{eq1} (M-\lambda I)^{-1}=\frac{1}{\det(M-\lambda I)}\adj(M-\lambda I) \end{equation} where $\adj(M-\lambda I)$ is the adjugate matrix of $M-\lambda I$, i.e. the matrix with entries \begin{equation} \adj(M-\lambda I)_{ij} = (-1)^{i+j} \det(\cM_{ji}), ~ 1 \leq i,j \leq n, \end{equation} where $\cM_{ij} \in \BW^{(n-1)\times(n-1)}$ is obtained by deleting the $i-$th row and $j-$th column of $M-\lambda I$. Note that \begin{equation} \det(M-\lambda I)=\sum_{\rho\in \mathcal{P}_n}\Big(\sgn(\rho)\prod_{i=1}^n(M-\lambda I)_{i,\rho(i)}\Big) \label{eq:detsum} \end{equation} where the sum is taken over the set $\mathcal{P}_n$ of permutations on $N$. The sign $\sgn(\rho)$ of the permutation $\rho\in\mathcal{P}_n$ is 1 (resp. $-1$) if $\rho$ is the composition of an even (resp. odd) number of permutations of two elements. Using equations (\ref{eq:prod}) and (\ref{eq:lambda}) in \ref{app:degree}, the term in \eqref{eq:detsum} corresponding to the identity permutation $\rho=\id\in\mathcal{P}_n$ has degree $n$ while for $\rho\neq\id$ the other terms have degree strictly smaller than $n$. Equation (\ref{eq:sum}) then implies \begin{equation}\label{eq:deg1} \pi\big(\det(M-\lambda I)\big)=n. \end{equation} Therefore $\det(M-\lambda I)$ is not identically zero, implying via equation \eqref{eq1} that the inverse $(M-\lambda I)^{-1}$ exists. Similarly, for $i\in N$ the matrix $\mathcal{M}_{ii}$ is equal to $\widetilde{\mathcal{M}}_{ii}-\lambda I$ for some $\widetilde{\mathcal{M}}\in\mathbb{W}_{\pi}^{(n-1)\times(n-1)}$. Hence, \begin{equation}\label{eq:deg2} \pi\big(\det(\cM_{ii})\big) =n-1, ~ \text{for} ~ i\in N. \end{equation} For $i\neq j$ the matrices $\cM_{ij}\in\mathbb{W}^{(n-1)\times(n-1)}$ contain $n-2$ entries of the form $M_{k \ell} - \lambda$ where all other entries of $\cM_{ij}$ belong to the set $\mathbb{W}_{\pi}$. 
Hence, equations (\ref{eq:prod}) and (\ref{eq:lambda}) imply that for $i\neq j$ \begin{equation}\label{eq:deg3} \pi\big(\det(\cM_{ij})\big)\leq n-2, ~ \text{for} ~ i,j\in N \end{equation} since for $\rho\in\mathcal{P}_{n-1}$ at most $n-2$ terms in the product $\prod_{k=1}^{n-1}(\mathcal{M}_{ij})_{k,\rho(k)}$ have the form $M_{k \ell}-\lambda$. Given that the degree of $\det(\cM_{ij})$ in (\ref{eq:deg3}) may be zero, equations \eqref{eq:deg1}--\eqref{eq:deg3} together with (\ref{eq:ratio}) imply that $\pi( (M-\lambda I)^{-1}_{ij}) \leq 0$ for all $1\leq i,j\leq n$. Hence, $(M-\lambda I)^{-1} \in \BW_\pi^{n \times n}$. Therefore, if $\mathcal{B}$ and $\mathcal{I}$ form a nonempty partition of $N$ then $$\left[(M-\lambda I)^{-1}\right]_{\mathcal{II}}\in\mathbb{W}^{|\mathcal{I}|\times|\mathcal{I}|}_{\pi}.$$ Definition \ref{def1} along with equations (\ref{eq:prod}) and (\ref{eq:lambda}) then imply that $R(M;\mathcal{B})$ has entries in $\mathbb{W}_{\pi}$. \end{proof} Note that lemma \ref{lemma1} implies the existence of any isospectral reduction $R(M;\mathcal{B})$ if $M\in\mathbb{W}^{n\times n}_{\pi}$ and $\mathcal{B}\subset N$. In particular, any complex valued matrix can be reduced over any index set. Since the matrix $M$ given in (\ref{mat1}) does not belong to $\mathbb{W}_{\pi}^{2\times 2}$ lemma \ref{lemma1} does not apply in this particular case. \begin{remark} Because a matrix $M\in\BW^{n\times n}_{\pi}$ can be reduced over any nonempty index set $\mathcal{B} \subset N$, the isospectral reductions presented here are more general than those given in \cite{Bunimovich:2011:IGR,Bunimovich:2012:IGT,Bunimovich:2012:IC}. In these three papers, for $M$ to be reduced over the index set $\mathcal{B}$ the matrix $M_{\mathcal{II}}$ was required to be similar to an upper triangular matrix. Here, we have no such restriction. 
\end{remark} In the following example we demonstrate how one can use an isospectral reduction to study the dynamics of a mass-spring network in which access is limited. \begin{example}\label{ex:spring0} Consider the mass-spring network illustrated in figure~\ref{fig:spring}, with nodes at locations $x_i$, $i=1,2,3,4$ lying on a line and edges representing springs between nodes. For simplicity we assume that all the springs have the same spring constant ($k=1$) and that all the nodes have unit mass. (The precise position of the nodes on the line does not matter for this discussion.) Suppose each node $x_i$ is subject to a time harmonic displacement $u_i(\omega) e^{j\omega t}$ with frequency $\omega$ in the direction of the line and $j = \sqrt{-1}$. Then the resulting force at node $x_i$ is also time harmonic in the direction of the line and is of the form $f_i(\omega) e^{j \omega t}$. Writing the balance of forces acting on each node with the laws of motion, one can show that the vector of forces $\bbf(\omega) = [ f_1(\omega), \ldots, f_4(\omega)]^T$ is linearly related to the vector of displacements $\bu(\omega) = [ u_1(\omega), \ldots, u_4(\omega)]^T$ by the equation \begin{equation} \bbf(\omega) = (K - \omega^2 I)\bu(\omega). \end{equation} Here the matrix $K$ is the {\em stiffness} matrix \[ K = \begin{bmatrix} 1 & -1\\ -1 & 2 & -1 \\ & -1 & 2 & -1\\ && -1 & 1 \end{bmatrix}. \] If we let $\lambda \equiv \omega^2$, we see that the eigenmodes of the stiffness matrix $K$ correspond to non-zero displacements that do not generate forces. For instance, the eigenmode corresponding to the zero frequency is $\bu = [1,1,1,1]^T$, i.e. by displacing all nodes by the same amount, there are no net forces at the nodes. Suppose we only have access to certain terminal (or boundary) nodes of this network, say $\cB = \{1,4\}$. 
Then we can write the equilibrium of forces at the interior nodes $\cI = \{2,3\}$ and conclude that the net forces $\bbf_\cB$ at the terminal nodes depend linearly on the displacements $\bu_\cB$ at the terminal nodes according to the equation \begin{equation} \bbf_\cB(\omega) = ( R_{\omega^2} ( K; \cB) - \omega^2 I) \bu_\cB(\omega). \end{equation} The spectrum and inverse spectrum of the response are $$\sigma(R_{\omega^2} ( K; \cB))=\{2\pm\sqrt{2},2,0\} \ \ \text{and} \ \ \sigma^{-1}(R_{\omega^2} ( K; \cB))=\{3,1\}.$$ The eigenvalues of $R_{\omega^2} ( K; \cB)$ correspond to frequencies for which there is a displacement of the boundary nodes $\cB$ that generates no forces at these nodes. Conversely, the resonances (or inverse eigenvalues) of $R_{\omega^2} ( K; \cB)$ correspond to frequencies at which there is a displacement of the boundary nodes for which the resulting forces are infinitely large. \end{example} \begin{figure} \begin{center} \includegraphics[width=8cm]{spring} \end{center} \caption{The mass spring network of example~\ref{ex:spring0} with boundary nodes $\mathcal{B}=\{1,4\}$ and interior nodes $\mathcal{I}=\{2,3\}$.} \label{fig:spring} \end{figure} \subsection{Sequential Reductions} In the previous section we observed that the isospectral reduction $R(M;\mathcal{B})$ of $M\in\BW^{n\times n}_{\pi}$ is again a matrix in $\BW^{m\times m}_{\pi}$. It is therefore possible to reduce the matrix $R(M;\mathcal{B})$ again over some subset of $\mathcal{B}$. That is, we may sequentially reduce the matrix $M$. However, a natural question is to what extent a sequentially reduced matrix depends on the particular sequence of index sets over which it has been reduced. As it turns out, if a matrix has been reduced over the index set $\mathcal{B}_1$, then $\mathcal{B}_2$, and so on up to the index set $\mathcal{B}_m$, then the resulting matrix depends only on the index set $\mathcal{B}_m$.
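The spectra stated in example~\ref{ex:spring0} can also be checked symbolically. The sketch below (sympy assumed; indices are 0-based, and the variable names are illustrative) forms $R_{\lambda}(K;\mathcal{B})$ directly from definition~\ref{def1} and extracts its spectrum and inverse spectrum as in definition~\ref{def1.1}:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Stiffness matrix of the 4-node mass-spring chain (unit masses and springs).
K = sp.Matrix([[1, -1, 0, 0],
               [-1, 2, -1, 0],
               [0, -1, 2, -1],
               [0, 0, -1, 1]])

B = [0, 3]  # terminal (boundary) nodes, 0-based
I = [1, 2]  # interior nodes

K_BB = K[B, B]
K_BI = K[B, I]
K_IB = K[I, B]
K_II = K[I, I]

# Isospectral reduction R_lambda(K; B) = K_BB - K_BI (K_II - lambda I)^{-1} K_IB
R = sp.simplify(K_BB - K_BI * (K_II - lam * sp.eye(2)).inv() * K_IB)

# det(R - lambda I) = p/q in lowest terms; spectrum = roots of p,
# inverse spectrum = roots of q.
d = sp.cancel(sp.det(R - lam * sp.eye(2)))
p, q = sp.fraction(sp.together(d))
sigma = sp.roots(sp.Poly(p, lam))      # eigenvalues of K: {0, 2, 2 +- sqrt(2)}
sigma_inv = sp.roots(sp.Poly(q, lam))  # eigenvalues of K_II: {1, 3}
```

In agreement with theorem~\ref{maintheorem}, the inverse spectrum $\{1,3\}$ is exactly the spectrum of the interior block $K_{\mathcal{II}}$.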
To formalize this, let $M\in\mathbb{W}^{n\times n}_{\pi}$ and suppose there are non-empty sets $\mathcal{B}_1,\dots,\mathcal{B}_m$ such that $N\supset\mathcal{B}_1\supset\dots\supset\mathcal{B}_m$. Then $M$ can be \emph{sequentially reduced} over the sets $\mathcal{B}_1,\dots,\mathcal{B}_m$ where we write $$R_{\lambda}(M;\mathcal{B}_1,\dots,\mathcal{B}_m)= R_{\lambda}\big(\dots R_{\lambda}(R_{\lambda}(M;\mathcal{B}_1);\mathcal{B}_2)\dots;\mathcal{B}_m\big).$$ If $M$ is sequentially reduced over the index sets $\mathcal{B}_1,\dots,\mathcal{B}_m$ we call $\mathcal{B}_m$ the \emph{final index set} of this sequence of reductions. \begin{theorem}\label{theorem2}\textbf{(Uniqueness of Sequential Reductions)} For $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ suppose $N\supset\mathcal{B}_1\supset\dots\supset\mathcal{B}_m$ where $\mathcal{B}_m$ is non-empty. Then $$R_{\lambda}(M;\mathcal{B}_1,\dots,\mathcal{B}_m)=R_{\lambda}(M;\mathcal{B}_m).$$ \end{theorem} That is, in a sequence of reductions the resulting matrix is completely specified by the final index set. To prove theorem \ref{theorem2} we first require the following lemma. \begin{lemma}\label{lemma2} Let the non-empty sets $\mathcal{B}$, $\mathcal{I}$, and $\mathcal{J}$ partition $N$. If $M(\lambda) \in\BW^{n\times n}_{\pi}$ then $R_{\lambda}(M;\mathcal{B}\cup\mathcal{I},\mathcal{B})=R_{\lambda}(M;\mathcal{B})$. \end{lemma} \begin{proof} Assume without loss of generality that $M \in \BW^{n\times n}_{\pi}$ can be written as \[ M(\lambda) = \begin{bmatrix} M_{\mathcal{BB}} & M_{\mathcal{BI}} & M_{\mathcal{BJ}}\\ M_{\mathcal{IB}} & M_{\mathcal{II}} & M_{\mathcal{IJ}}\\ M_{\mathcal{JB}} & M_{\mathcal{JI}} & M_{\mathcal{JJ}} \end{bmatrix}.
\] Using the definition of isospectral reduction we have \begin{equation}\label{eq:smb} R_{\lambda}(M;\mathcal{B}) = M_{\mathcal{BB}} - \begin{bmatrix} M_{\mathcal{BI}} & M_{\mathcal{BJ}} \end{bmatrix} \begin{bmatrix} M_{\mathcal{II}} - \lambda I & M_{\mathcal{IJ}}\\ M_{\mathcal{JI}} & M_{\mathcal{JJ}} - \lambda I \end{bmatrix}^{-1} \begin{bmatrix} M_{\mathcal{IB}} \\ M_{\mathcal{JB}} \end{bmatrix} \ \ \text{and} \end{equation} \begin{equation}\label{eq:smbi} R_{\lambda}(M;\mathcal{B}\cup\mathcal{I}) = \begin{bmatrix} M_{\mathcal{BB}} & M_{\mathcal{BI}} \\ M_{\mathcal{IB}} & M_{\mathcal{II}} \end{bmatrix} - \begin{bmatrix} M_{\mathcal{BJ}} \\ M_{\mathcal{IJ}} \end{bmatrix} (M_{\mathcal{JJ}} - \lambda I)^{-1} \begin{bmatrix} M_{\mathcal{JB}} & M_{\mathcal{JI}} \end{bmatrix}. \end{equation} Taking the isospectral reduction of $R_{\lambda}(M;\mathcal{B}\cup\mathcal{I})$ over $\mathcal{B}$ in \eqref{eq:smbi} we have \begin{multline}\label{eq:ssmbib} R_{\lambda}(M;\mathcal{B}\cup\mathcal{I},\mathcal{B}) = M_{\mathcal{BB}} - M_{\mathcal{BJ}} K(\lambda)^{-1} M_{\mathcal{JB}}\\ -\left[ (M_{\mathcal{BI}} - M_{\mathcal{BJ}} K(\lambda)^{-1} M_{\mathcal{JI}}) T(\lambda)^{-1} (M_{\mathcal{IB}} - M_{\mathcal{IJ}} K(\lambda)^{-1} M_{\mathcal{JB}}) \right], \end{multline} where $K(\lambda) \equiv M_{\mathcal{JJ}} - \lambda I$ and $T(\lambda) \equiv M_{\mathcal{II}} - \lambda I - M_{\mathcal{IJ}} K(\lambda)^{-1} M_{\mathcal{JI}}$. Note that both $K(\lambda)^{-1}$ and $T(\lambda)^{-1}$ exist following the proof of lemma \ref{lemma1}. To show the desired result we need to verify that expressions \eqref{eq:smb} and \eqref{eq:ssmbib} are equal. 
Recall the following identity for the inverse of a square matrix $M$ with $2\times 2$ blocks: \begin{equation}\label{eq:2x2inv} M^{-1} = \begin{bmatrix} A & B\\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} E^{-1} & -E^{-1} B D^{-1}\\ -D^{-1} C E^{-1} & D^{-1} + D^{-1} C E^{-1} B D^{-1} \end{bmatrix}, \end{equation} where $E = A - B D^{-1} C$ is the Schur complement of $D$ in $M$. The determinantal identity \eqref{eq:detschur} implies that $M$ is invertible if and only if $D$ and $E$ are invertible. Using \eqref{eq:2x2inv} to find the inverse of the $2 \times 2$ block matrix appearing in \eqref{eq:smb} we get \begin{equation} \label{eq:IJblock} \begin{bmatrix} M_{\mathcal{II}} - \lambda I & M_{\mathcal{IJ}}\\ M_{\mathcal{JI}} & M_{\mathcal{JJ}} - \lambda I \end{bmatrix}^{-1}= \end{equation} \begin{equation*} \begin{bmatrix} T(\lambda)^{-1} & - T(\lambda)^{-1} M_{\mathcal{IJ}} K(\lambda)^{-1}\\ - K(\lambda)^{-1} M_{\mathcal{JI}} T(\lambda)^{-1} & K(\lambda)^{-1} + K(\lambda)^{-1} M_{\mathcal{JI}} T(\lambda)^{-1} M_{\mathcal{IJ}} K(\lambda)^{-1} \end{bmatrix}. \end{equation*} Using \eqref{eq:IJblock} in \eqref{eq:smb} we get \eqref{eq:ssmbib}, completing the proof. \end{proof} We now give a proof of theorem \ref{theorem2}. \begin{proof} For $M\in\mathbb{W}^{n\times n}_{\pi}$ suppose $N\supset\mathcal{B}_1\supset\dots\supset\mathcal{B}_m$ where $\mathcal{B}_m\neq\emptyset.$ If $m=2$ then lemma \ref{lemma2} directly implies that $R_{\lambda}(M;\mathcal{B}_1,\mathcal{B}_2)=R_{\lambda}(M;\mathcal{B}_2)$. For $2\leq k<m$ suppose $R_{\lambda}(M;\mathcal{B}_1,\dots,\mathcal{B}_k)=R_{\lambda}(M;\mathcal{B}_k)$. Then $$R_{\lambda}(M;\mathcal{B}_1,\dots,\mathcal{B}_k,\mathcal{B}_{k+1})=R_{\lambda}(M;\mathcal{B}_k,\mathcal{B}_{k+1}) =R_{\lambda}(M;\mathcal{B}_{k+1})$$ where the second equality follows from lemma \ref{lemma2}. By induction it then follows that $R_{\lambda}(M;\mathcal{B}_1,\dots,\mathcal{B}_m)=R_{\lambda}(M;\mathcal{B}_m)$.
\end{proof} \begin{example} Let $M\in\complex^{4\times 4}$ be the matrix given by $$M=\left[\begin{array}{cccc} 1&0&1&0\\ 0&1&0&1\\ 0&1&1&1\\ 1&0&1&1 \end{array}\right]$$ and let $\mathcal{B}=\{1,2\}$. Our goal in this example is to illustrate that $$R_{\lambda}(M;\mathcal{B}) = R_{\lambda}(M;\mathcal{B}\cup \{3\},\mathcal{B})= R_{\lambda}(M;\mathcal{B}\cup \{4\},\mathcal{B}).$$ As one can compute $$R_{\lambda}(M;\mathcal{B}\cup \{3\})= \begin{bmatrix} 1&0&1\\ \frac{1}{\lambda-1}&1&\frac{1}{\lambda-1}\\ \frac{1}{\lambda-1}&1&\frac{\lambda}{\lambda-1} \end{bmatrix} \ \ \text{and} \ \ R_{\lambda}(M;\mathcal{B}\cup \{4\})=\left[\begin{array}{ccc} 1&\frac{1}{\lambda-1}&\frac{1}{\lambda-1}\\ 0&1&1\\ 1&\frac{1}{\lambda-1}&\frac{\lambda}{\lambda-1} \end{array}\right].$$ Although $R_{\lambda}(M;\mathcal{B}\cup \{3\})\neq R_{\lambda}(M;\mathcal{B}\cup \{4\})$, note that by reducing both of these matrices over $\mathcal{B}=\{1,2\}$ one has $$R_{\lambda}(M;\mathcal{B}) = R_{\lambda}(M;\mathcal{B}\cup \{3\},\mathcal{B})= R_{\lambda}(M;\mathcal{B}\cup \{4\},\mathcal{B})= \begin{bmatrix} \frac{\lambda^2-2\lambda+1}{\lambda^2-2\lambda}&\frac{\lambda-1}{\lambda^2-2\lambda}\\ \frac{\lambda-1}{\lambda^2-2\lambda}&\frac{\lambda^2-2\lambda+1}{\lambda^2-2\lambda} \end{bmatrix}.$$ As a final observation, we note that $\sigma(M)=\{\frac{1}{2}(3\pm\sqrt{5}),\frac{1}{2}(1\pm i\sqrt{3})\}$ and $\sigma(M_{\mathcal{II}})=\{0,2\}$ for $\mathcal{I} = \{3,4\}$. Hence the matrix $M$ and the reduced matrix $R(M;\mathcal{B})$ have the same eigenvalues by corollary~\ref{cor1}. That is, an isospectral reduction need not have any effect on the spectrum of a matrix. (In this example the inverse spectrum does change with the reduction).
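The three reductions in this example can be reproduced with a short sympy sketch (our own check, not part of the text). The helper reduce_iso implements $R_\lambda(\cdot\,;\cdot)$ as a Schur complement, and index lists are 0-based, so $\mathcal{B}=\{1,2\}$ becomes [0, 1]:

```python
import sympy as sp

lam = sp.symbols('lam')

def reduce_iso(M, keep):
    """Isospectral reduction R_lam(M; keep) via a Schur complement."""
    drop = [i for i in range(M.shape[0]) if i not in keep]
    S = (M[drop, drop] - lam*sp.eye(len(drop))).inv()
    return sp.simplify(M[keep, keep] - M[keep, drop]*S*M[drop, keep])

M = sp.Matrix([[1, 0, 1, 0], [0, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 1]])

R_direct = reduce_iso(M, [0, 1])                       # R(M; B)
R_via3 = reduce_iso(reduce_iso(M, [0, 1, 2]), [0, 1])  # via B u {3}
R_via4 = reduce_iso(reduce_iso(M, [0, 1, 3]), [0, 1])  # via B u {4}
print(R_direct)
```

All three routes agree with the $2\times 2$ matrix displayed in the example.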
\end{example} \subsection{Spectral Inverse} \label{sec:spinv} Although a matrix $M\in\mathbb{W}(\lambda)^{n\times n}$ has both a spectrum and an inverse spectrum, the techniques that have been developed to analyze its spectral properties have been restricted to its spectrum \cite{Bunimovich:2011:IGR,Bunimovich:2012:IGT,Bunimovich:2012:IC}. The goal in this section is to introduce a new matrix transformation that exchanges a matrix's spectrum and inverse spectrum. This transformation allows us to investigate the inverse spectrum of a matrix with tools meant to study its spectrum. Additionally, we use this transformation to define the inverse pseudospectrum (or pseudoresonances) of a matrix from the pseudospectrum of its spectral inverse (Section \ref{sec:psi}). \begin{definition} For $M(\lambda)\in\mathbb{W}^{n\times n}$ let $S_{\lambda}(M)\in\mathbb{W}^{n\times n}$ be the matrix $$S_{\lambda}(M)=(M(\lambda)-\lambda I)^{-1}+\lambda I\in\mathbb{W}^{n\times n},$$ if the inverse $(M(\lambda)-\lambda I)^{-1}$ exists. The matrix $S_{\lambda}(M)$ is called the \emph{spectral inverse} of the matrix $M(\lambda)$. \end{definition} We typically write the spectral inverse of $M\in\mathbb{W}^{n\times n}$ as $S(M)$ unless otherwise needed. We also observe that not every matrix $M\in\BW^{n\times n}$ has a spectral inverse. For instance, the matrix $$M=\left[ \begin{array}{cc} \lambda&0\\ 0&\lambda \end{array} \right]$$ cannot be spectrally inverted. However, if $M$ has a spectral inverse then the following holds. \begin{theorem}\label{theorem3} Suppose $M(\lambda)\in\BW^{n\times n}$ has a spectral inverse $S(M)$. Then $$\sigma\big(S(M)\big)=\sigma^{-1}(M) \ \ \text{and} \ \ \sigma^{-1}\big(S(M)\big)=\sigma(M).$$ \end{theorem} \begin{proof} Let $M(\lambda)\in\BW^{n\times n}$ with spectral inverse $S(M)$.
Note that $$\det\big((S(M)-\lambda I)(M-\lambda I)\big)=\det\big((M-\lambda I)^{-1}(M-\lambda I)\big)=\det(I)=1.$$ Since the determinant is multiplicative, $$\det(S(M)-\lambda I)=\det(M-\lambda I)^{-1},$$ and the result follows. \end{proof} A matrix $M\in\mathbb{W}^{n\times n}$ may or may not have a spectral inverse. However, if $M\in\mathbb{W}^{n\times n}_{\pi}$ then the proof of lemma \ref{lemma1} implies that $M-\lambda I$ is invertible. Therefore, $S(M)$ exists. This result is stated in the following lemma. \begin{lemma}\label{lemma3} If $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$, then $M(\lambda)$ has a spectral inverse. \end{lemma} \begin{example}\label{ex:5} Let $M\in\BW^{4\times 4}_{\pi}$ be the matrix given by $$M=\left[ \begin{array}{cccc} \frac{1}{\lambda}&\frac{1}{\lambda}&0&0\\ 0&\frac{1}{\lambda}&1&0\\ 0&0&\frac{1}{\lambda}&1\\ 0&0&0&\frac{1}{\lambda} \end{array} \right]$$ for which we have $$\det\big(M(\lambda)-\lambda I\big)=\frac{\lambda^8-4\lambda^6+6\lambda^4-4\lambda^2+1}{\lambda^4}.$$ As one can calculate, the spectral inverse $S(M)$ is the matrix $$S(M)=\left[ \begin{array}{cccc} \frac{-\lambda}{\lambda^2-1}&\frac{-\lambda}{(\lambda^2-1)^2}&\frac{-\lambda^2}{(\lambda^2-1)^3}&\frac{-\lambda^3}{(\lambda^2-1)^4}\\ 0&\frac{-\lambda}{\lambda^2-1}&\frac{-\lambda^2}{(\lambda^2-1)^2}&\frac{-\lambda^3}{(\lambda^2-1)^3}\\ 0&0&\frac{-\lambda}{\lambda^2-1}&\frac{-\lambda^2}{(\lambda^2-1)^2}\\ 0&0&0&\frac{-\lambda}{\lambda^2-1} \end{array} \right]+\lambda I.$$ Taking the determinant of $S(M)-\lambda I$ one has $$\det\big(S(M)-\lambda I\big)=\frac{\lambda^4}{\lambda^8-4\lambda^6+6\lambda^4-4\lambda^2+1}.$$ That is, $\det\big(S(M)-\lambda I\big)=\det(M(\lambda)-\lambda I)^{-1}$. \end{example} Observe that for any $M\in\mathbb{W}_{\pi}^{n\times n}$ the spectral inverse $S(M)\notin\mathbb{W}_{\pi}^{n\times n}$. Therefore, we have no guarantee that $S(M)$ can be isospectrally reduced. However, the following holds.
\begin{theorem}\label{theorem4}\textbf{(Reductions of the Spectral Inverse)} For $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ suppose $N\supset\mathcal{B}_1\supset\dots\supset\mathcal{B}_m$ where $\mathcal{B}_m$ is non-empty. Then \begin{enumerate}[(i)] \item $R_{\lambda}\big(S(M);\mathcal{B}_m\big)$ exists; \item $R_{\lambda}(S(M);\mathcal{B}_1,\dots,\mathcal{B}_m)=R_{\lambda}(S(M);\mathcal{B}_m)$; and \item $R_{\lambda}(S(M);\mathcal{B}_m)=(M-\lambda I)^{-1}/\left[(M-\lambda I)^{-1}\right]_{\mathcal{II}}+\lambda I$ where $\mathcal{I}=N-\mathcal{B}_m$. \end{enumerate} \end{theorem} \begin{proof} For $M \in\BW^{n\times n}_{\pi}$ suppose $\mathcal{B}$ and $\mathcal{I}$ form a non-empty partition of $N$. By lemmas~\ref{lemma1} and \ref{lemma3}, the matrix $S(M)$ exists and \[ S(M) - \lambda I = (M-\lambda I)^{-1} \in \BW_\pi^{n \times n}. \] Equating blocks in the previous equation gives that the matrices $[S(M)]_{\cB\cB} - \lambda I$, $[S(M)]_{\cB\cI}$, $[S(M)]_{\cI\cB}$ and $[S(M)]_{\cI\cI} - \lambda I$ all have entries in $\BW_\pi$. Moreover, the determinant of $[S(M)]_{\cI\cI} - \lambda I$ is not identically zero, so this matrix is invertible. We deduce that the reduction of $S(M)$ exists and is \[ \begin{aligned} R_\lambda(S(M); \cB) -\lambda I & = ([S(M)]_{\cB\cB} - \lambda I) - [S(M)]_{\cB\cI} \left([S(M)]_{\cI\cI} - \lambda I \right)^{-1} [S(M)]_{\cI\cB} \\ &\in \BW_\pi^{|\cB| \times |\cB|}. \end{aligned} \] To prove (iii), simply notice that $[S(M)]_{\cB\cB} - \lambda I = \left[(M-\lambda I)^{-1} \right]_{\cB\cB}$, $[S(M)]_{\cI\cB} = [S(M) - \lambda I]_{\cI \cB} = \left[(M-\lambda I)^{-1} \right]_{\cI\cB}$, $[S(M)]_{\cB\cI} =\left[(M-\lambda I)^{-1} \right]_{\cB\cI}$ and $[S(M)]_{\cI\cI} - \lambda I = \left[(M-\lambda I)^{-1} \right]_{\cI\cI}$. These relations imply (iii).
Substituting each submatrix $M_{\mathcal{RC}}$ in the proof of lemma \ref{lemma2} by the matrix $$S(M)_{\mathcal{RC}}= \begin{cases} (M-\lambda I)^{-1}_{\mathcal{RC}}+\lambda I \ &\text{if} \ \mathcal{R}=\mathcal{C},\\ (M-\lambda I)^{-1}_{\mathcal{RC}} \ &\text{otherwise} \end{cases}$$ and then following the proof of theorem \ref{theorem2} using $S(M)$ instead of $M$ yields a proof of part (ii). \end{proof} Theorem \ref{theorem4} states that any matrix $M\in\mathbb{W}^{n\times n}_{\pi}$ has a spectral inverse and that this inverse can be reduced over any index set. Observe the similarity between equation (\ref{eq:sch}) and part (iii) of theorem \ref{theorem4}. \subsection{Gershgorin-Type Estimates} If $M\in\mathbb{W}^{n\times n}$ then its inverse spectrum $\sigma^{-1}(M)$ consists of the complex numbers at which the determinant $\det(M-\lambda I)$ is undefined. Since the determinant of a matrix is composed of sums and products of its entries, equations (\ref{eq:add}) and (\ref{eq:mult}) imply the following proposition. Hereinafter for $A \subset \complex$, the set $\overline{A}$ is the complement of $A$ in $\complex$. \begin{proposition}\label{prop0} If $M(\lambda)\in\mathbb{W}^{n\times n}$ then $\sigma^{-1}(M)\subseteq \overline{\dom(M)}$. \end{proposition} Phrased another way, the inverse eigenvalues of a matrix $M\in\mathbb{W}^{n\times n}$ are complex numbers at which the matrix $M$ is undefined, i.e. in the complement of $\dom(M)$. However, it is not always the case that the converse holds, as the following example demonstrates. \begin{example} Consider the reduced matrix $$R(M;\mathcal{B})=\left[ \begin{array}{cc} \frac{1}{\lambda-1}&\frac{1}{\lambda-1}\\ \frac{1}{\lambda}&\frac{\lambda+1}{\lambda} \end{array} \right]\in\BW^{2\times 2} $$ found in example \ref{ex:2}. As computed in example \ref{ex:3}, we have $\sigma^{-1}(R(M;\mathcal{B}))=\emptyset$ and yet $\overline{\dom(R(M;\mathcal{B}))} = \{ 0,1 \}$.
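This can be confirmed symbolically (our own check, using the matrix quoted above): the poles at $\lambda=0$ and $\lambda=1$ cancel in the determinant, so $\det(R(M;\mathcal{B})-\lambda I)$ is a polynomial and the inverse spectrum is empty, even though $R(M;\mathcal{B})$ itself is undefined at $0$ and $1$.

```python
import sympy as sp

lam = sp.symbols('lam')

# The reduced matrix R(M; B) quoted above.
R = sp.Matrix([[1/(lam - 1), 1/(lam - 1)],
               [1/lam, (lam + 1)/lam]])

# cancel() clears the poles: the determinant is a polynomial in lam,
# so det(R - lam*I) has no poles and the inverse spectrum is empty.
d = sp.cancel((R - lam*sp.eye(2)).det())
print(sp.factor(d))
```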
\end{example} To improve upon proposition \ref{prop0} we look for methods of estimating the inverse spectrum of a matrix. The following well-known theorem due to Gershgorin gives a simple method for approximating the eigenvalues of a square matrix with complex valued entries. \begin{theorem}{\textbf{(Gershgorin \cite{Gershgorin:1931:UDA})}}\label{theorem:gersh} Let $M\in\mathbb{C}^{n\times n}$. Then all eigenvalues of $M$ are contained in the set $$\Gamma(M)=\bigcup^n_{i=1}\big\{\lambda\in \mathbb{C}:|\lambda-M_{ii}|\leq \sum_{j=1,j\neq i}^n |M_{ij}|\big\}.$$ \end{theorem} In \cite{Bunimovich:2011:IGR} it was shown that Gershgorin's theorem can be extended to matrices $M\in\mathbb{W}^{n\times n}$. Our goal in this section is to further extend this result by using the spectral inverse introduced in Section~\ref{sec:spinv} to estimate the inverse spectrum (or resonances) of a matrix $M\in\mathbb{W}^{n\times n}$. To do so, we first define the notion of a \textit{polynomial extension} of the matrix $M$. \begin{definition} For $M(\lambda)\in\mathbb{W}^{n\times n}$ with entries $M_{ij}=p_{ij}/q_{ij}$ let $L_i=L_i(M)=\prod_{j=1}^{n}q_{ij}$ for $1\leq i\leq n$. We call the matrix $\overline{M}$ given by $$\overline{M}_{ij}=\begin{cases} L_iM_{ij} \hspace{.85in} i\neq j\\ L_i\big(M_{ij}-\lambda\big)+\lambda \hspace{0.2in} i=j \end{cases}, \ \ 1\leq i,j\leq n $$ the \textit{polynomial extension} of $M$. \end{definition} Note that for any $M\in\mathbb{W}^{n\times n}$ the matrix $\overline{M}\in\mathbb{C}[\lambda]^{n\times n}$. The following theorem extends Gershgorin's original theorem to matrices in $\mathbb{W}^{n\times n}$ (see theorem 3.4 in \cite{Bunimovich:2011:IGR}). \begin{theorem}\label{theorem5} Let $M(\lambda)\in\mathbb{W}^{n\times n}$.
Then $\sigma(M)$ is contained in the set $$\Gamma(M)=\bigcup_{i=1}^n\big\{\lambda\in\mathbb{C}:|\lambda-\overline{M}_{ii}|\leq\sum_{j=1,j\neq i}^n |\overline{M}_{ij}|\big\}.$$ \end{theorem} We call the set $\Gamma(M)$ the \emph{Gershgorin-type region} of the matrix $M$ or simply its Gershgorin region. (The notation in \cite{Bunimovich:2011:IGR} is $\mathcal{BW}_{\Gamma}(M)$). An immediate corollary to theorem \ref{theorem3} and theorem \ref{theorem5} is the following. \begin{corollary}\label{cor2} Let $M(\lambda)\in\mathbb{W}^{n\times n}$. Then $\sigma^{-1}(M)$ is contained in the set $$\Gamma\big(S(M)\big)=\bigcup_{i=1}^n\big\{\lambda\in\mathbb{C}: |\lambda-\overline{S(M)}_{ii}|\leq\sum_{j=1,j\neq i}^n |\overline{S(M)}_{ij}|\big\}.$$ \end{corollary} \begin{example}\label{ex:6} Let $M\in\mathbb{W}^{4\times 4}$ be the matrix considered in example \ref{ex:5}. Then $$\overline{S(M)}=\left[ \begin{array}{cccc} -\lambda(\lambda^2-1)^9&-\lambda(\lambda^2-1)^8&-\lambda^2(\lambda^2-1)^7&-\lambda^3(\lambda^2-1)^6\\ 0&-\lambda(\lambda^2-1)^5&-\lambda^2(\lambda^2-1)^4&-\lambda^3(\lambda^2-1)^3\\ 0&0&-\lambda(\lambda^2-1)^2&-\lambda^2(\lambda^2-1)^1\\ 0&0&0&-\lambda \end{array} \right]+\lambda I.$$ The region $\Gamma\big(S(M)\big)$ is shown in figure \ref{fig1} (left). \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{gersh1} & \includegraphics[width=0.49\textwidth]{gersh2} \end{tabular} \end{center} \caption{Left: $\Gamma(S(M))$. Right: $\Gamma(R(S(M);\mathcal{B}))$ where the inverse spectrum $\sigma^{-1}(M)=\{0,0,0,0\}$ is indicated by a ``$\times$''.}\label{fig1} \end{figure} We note that the Gershgorin-type region $\Gamma(S(M))$ is the union of the sets $$\Gamma(S(M))_i=\big\{\lambda\in\mathbb{C}: |\lambda-\overline{S(M)}_{ii}|\leq\sum_{j=1,j\neq i}^n |\overline{S(M)}_{ij}|\big\},$$ for $i=1,2,3,4$. The regions $\Gamma(S(M))_1$, $\Gamma(S(M))_2$, and $\Gamma(S(M))_3$ in figure \ref{fig1} are shown in blue, green, and red respectively. 
Transparency is used to highlight the intersections. The same strategy is used in Section \ref{sec:psi} to display pseudospectra (or inverse pseudospectra) of a matrix. The set $\Gamma(S(M))_4=\{0\}$ is contained in the inverse spectrum $\sigma^{-1}(M)=\{0,0,0,0\}$, which is indicated in the figure. \end{example} One of the main results of \cite{Bunimovich:2011:IGR} is that the Gershgorin region of a reduced matrix $R(M;\mathcal{B})$ is a subset of the Gershgorin region of the unreduced matrix $M\in\mathbb{W}^{n\times n}$ (see theorem 5.1 in \cite{Bunimovich:2011:IGR}). In the same way the inverse eigenvalue estimates given in corollary \ref{cor2} can be improved via the process of isospectral matrix reduction. \begin{theorem}{\textbf{(Improved Inverse Eigenvalue Estimates)}}\label{impgersh} Let $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ where $\mathcal{B}$ is any nonempty subset of $N$. Then $$\Gamma\big(R(S(M);\mathcal{B})\big)\subseteq\Gamma\big(S(M)\big).$$ \end{theorem} A proof of theorem \ref{impgersh} can be obtained by following the proof of theorem 5.1 in \cite{Bunimovich:2011:IGR} and by using theorem \ref{theorem4}(ii). \begin{example}\label{ex:7} Let $M\in\mathbb{W}^{4\times 4}$ be the matrix given in example \ref{ex:5}. For the index set $\mathcal{B}=\{1,2,3\}$, the reduction of the spectral inverse of $M$ is $$R(S(M);\mathcal{B})=\left[ \begin{array}{ccc} \frac{-\lambda}{\lambda^2-1}&\frac{-\lambda}{(\lambda^2-1)^2}&\frac{-\lambda^2}{(\lambda^2-1)^3}\\ 0&\frac{-\lambda}{\lambda^2-1}&\frac{-\lambda^2}{(\lambda^2-1)^2}\\ 0&0&\frac{-\lambda}{\lambda^2-1}
\end{array} \right]+\lambda I.$$ Its polynomial extension is $$\overline{R(S(M);\mathcal{B})}=\left[ \begin{array}{ccc} -\lambda(\lambda^2-1)^9&-\lambda(\lambda^2-1)^8&-\lambda^2(\lambda^2-1)^7\\ 0&-\lambda(\lambda^2-1)^5&-\lambda^2(\lambda^2-1)^4\\ 0&0&-\lambda(\lambda^2-1)^2 \end{array} \right]+\lambda I.$$ The Gershgorin-type region of the reduced matrix $R(S(M);\mathcal{B})$ is shown in figure \ref{fig1} (right) where one can see that $\sigma^{-1}(M)\subset\Gamma\big(R(S(M);\mathcal{B})\big)\subset\Gamma\big(S(M)\big).$ The regions $\Gamma(R(S(M);\cB))_1$ and $\Gamma(R(S(M);\cB))_2$ are in blue and red, respectively. \end{example} \begin{remark} In this section we have considered how Gershgorin-type estimates can be used to estimate the inverse spectrum of a matrix $M\in\mathbb{W}^{n\times n}$. We note that the same is true of the eigenvalue estimates associated with Brauer, Brualdi, and Varga (see \cite{Bunimovich:2011:IGR} for details). \end{remark} \section{Pseudospectra and pseudoresonances}\label{sec:psi} A pseudospectrum of a matrix $M\in\complex^{n \times n}$ is essentially the collection of scalars that behave, to within a given tolerance, as an eigenvalue of $M$. These values indicate to what extent the eigenvalues of the matrix $M$ are stable under perturbation of the matrix entries. See e.g. \cite{Trefethen:2005:SP} for a review of pseudospectra including their history and applications. We first extend the notion of pseudospectra to matrices in $\BW^{n\times n}_\pi$. Then we show that the spectral inverse of a matrix can be used to define {\em inverse pseudospectra} for matrices in $\BW^{n\times n}_\pi$. The inverse pseudospectra or \emph{pseudoresonances} of $M$ are the scalars that behave, to within a certain tolerance, as inverse eigenvalues or resonances of $M$. We study pseudoresonances and their relation to pseudospectra in Section~\ref{sec:psire}.
In Section~\ref{sec:redpsi} we show that an isospectral reduction shrinks the pseudospectrum of a matrix for a given tolerance. Throughout this discussion we consider the simple mass-spring network introduced in Section \ref{sec:imr} to give a physical interpretation to these concepts. Before formally extending the notion of pseudospectra to matrices in $\mathbb{W}^{n\times n}$ we note that pseudospectra have been previously generalized to matrix polynomials in \cite{Lancaster:2005:PMP,Boulton:2008:PMP}. \subsection{Pseudospectra} \label{sec:psisp} For a matrix $A\in\mathbb{C}^{n\times n}$, if $\lambda\in\sigma(A)$ then there is always at least one eigenvector $\textbf{v}\in\mathbb{C}^n$ of $A$ associated with $\lambda$. However, recall from Section \ref{sec:mat} that a matrix $M(\lambda)\in\mathbb{W}_{\pi}^{n\times n}$ may have an eigenvalue $\lambda_0$ for which $M(\lambda_0)$ is undefined. This may seem problematic, especially if we would like to find an eigenvector associated with $\lambda_0$. In fact, it is still possible to do so. Assuming $\lambda_0$ is a solution to the equation $\det(M(\lambda)-\lambda I)=0$, the standard theory of linear algebra implies that there is a vector $\textbf{v}$ such that when the product $(M(\lambda)-\lambda I)\textbf{v}$ is evaluated at $\lambda=\lambda_0$, the result is the zero vector. With this order of operations in mind, we define the product of a matrix and vector as follows. For any $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ and $\textbf{v}\in\mathbb{C}^n$ we let the product $$(M(\lambda)-\lambda I)\textbf{v}\equiv (M(s) - s I) \bv|_{s=\lambda}.$$ This definition allows us to associate an eigenvector with each eigenvalue of a matrix $M(\lambda)\in\mathbb{W}_{\pi}^{n\times n}$. To demonstrate this idea we give the following example. \begin{example} Consider the matrix $M(\lambda)\in\mathbb{W}^{2\times 2}$ given by $$M(\lambda)= \left[ \begin{array}{cc} 1&\frac{1}{\lambda-1}\\ 0&1 \end{array} \right].
$$ Here, one can readily see that $\sigma(M)=\{1,1\}$. Although $M(1)$ is undefined, the vector $\mathbf{v}=[1 \ 0]^T$ has the property $$(M(1)- 1I)\mathbf{v}= \left[ \begin{array}{cc} 1-s&\frac{1}{s-1}\\ 0&1-s \end{array} \right] \left[ \begin{array}{c} 1\\ 0 \end{array} \right]\Big|_{s=1}= \left[ \begin{array}{c} 0\\ 0 \end{array} \right]. $$ By definition the vector $\mathbf{v}$ is an eigenvector associated with the eigenvalue $1$ despite the fact that $M(\lambda)$ is not defined for $\lambda=1$. Importantly, for the vector norm $||\cdot||$ we have $$||(M(\lambda)- \lambda I)\mathbf{v}||= \left\|\left[ \begin{array}{c} 1-\lambda\\ 0 \end{array} \right]\right\|. $$ Hence, the size of $(M(\lambda)-\lambda I)\mathbf{v}$ varies continuously with respect to $\lambda$ even where $M(\lambda)$ is undefined. This is useful since we study values of $\lambda$ that act almost like eigenvalues of $M(\lambda)$. \end{example} Suppose that for a given tolerance $\epsilon>0$, there is a scalar $\lambda \in \complex$ and a unit vector $\bv\in\complex^n$ for which $\| (M(\lambda) - \lambda I) \bv\| < \eps$. If this is the case then the vector $\bv$ is said to be an {\em $\eps$-pseudoeigenvector} of the matrix $M(\lambda)$ corresponding to the {\em $\eps$-pseudoeigenvalue} $\lambda$. The $\epsilon$-pseudospectrum of $M(\lambda)$ is defined as the set of all such $\lambda$. We state this and two other equivalent definitions of the $\epsilon$-pseudospectrum below. For $\Omega\subset\mathbb{C}$, let $cl(\Omega)$ be the closure of $\Omega$ in $\mathbb{C}$. \begin{definition}\label{def:psisp} Let $\epsilon>0$. The {\em $\epsilon$-pseudospectrum} of $M(\lambda) \in\mathbb{W}^{n\times n}_\pi$ is defined equivalently by: \begin{enumerate}[(a)] \item {\em Eigenvalue perturbation}: \[ \sigma_\epsilon(M)=cl\big(\{\lambda \in \complex : \| (M(\lambda) - \lambda I) \bv\| < \eps ~\text{for some $\bv \in \complex^n$ with $\|\bv\|=1$}\}\big).
\] \item {\em The resolvent}: \[ \sigma_\epsilon(M) = cl\big(\{ \lambda \in \complex : || (M(\lambda)-\lambda I)^{-1} || > \epsilon ^{-1}\}\big). \] \item {\em Perturbation of the matrix}: \[ \sigma_\epsilon(M) =cl\big(\{ \lambda \in \complex : \lambda \in \sigma(M(\lambda)+E) ~\text{for some}~E\in \complex^{n \times n}~\text{with}~||E||<\eps \}\big). \] \end{enumerate} \end{definition} As a consequence of definition \ref{def:psisp}, the eigenvalues of a matrix $M\in\mathbb{W}_{\pi}^{n\times n}$ belong to all its pseudospectra: $$\sigma(M)\subset\sigma_\epsilon(M) \ \text{for each} \ \epsilon>0.$$ The proof that definitions \ref{def:psisp}(a)--(c) are equivalent (provided the vector norm in (a) and the operator norm in (b)--(c) are consistent) relies on the corresponding equivalence for matrices with scalar entries. For completeness, the proofs are included in \ref{app:psi}. We now compare the pseudospectra of a matrix and its reduction. \begin{example}\label{ex:ps1} Consider the matrices $M$ and $R(M;\mathcal{B})$ given in example~\ref{ex:2} where $\mathcal{B}=\{1,2\}$. The pseudospectra of both matrices are displayed in figure~\ref{fig:ps1} for $\epsilon=1$, $10^{-1/2}$, $10^{-1}$ using the matrix 2-norm. Notice that although $0,1\in\sigma(M)$ these values do not belong to $\sigma( R(M;\mathcal{B}))$ because of cancellations resulting from the matrix reduction, i.e. $\sigma(M_{\mathcal{II}})=\{0,0,1,1\}$. However, for the $\epsilon$ we consider, $0,1\in\sigma_{\eps}(R(M;\mathcal{B}))$, meaning that these eigenvalues remain as pseudoeigenvalues of the reduced matrix.
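This persistence can be seen numerically (our own sketch, not part of the text; the entries of $R(M;\mathcal{B})$ are those quoted from example \ref{ex:2}). By the resolvent characterization, a point belongs to the $\eps$-pseudospectrum when the resolvent norm exceeds $\eps^{-1}$:

```python
import numpy as np

def resolvent_norm(lam):
    """2-norm of (R(lam) - lam*I)^{-1} for the reduced matrix R(M; B)."""
    R = np.array([[1.0/(lam - 1.0), 1.0/(lam - 1.0)],
                  [1.0/lam, (lam + 1.0)/lam]])
    return np.linalg.norm(np.linalg.inv(R - lam*np.eye(2)), 2)

# Near the cancelled interior eigenvalues 0 and 1 the resolvent norm is
# large, so these points survive as eps-pseudoeigenvalues of the reduced
# matrix; far from the spectrum the norm is small.
for lam0 in (0.03, 0.97, 5.0):
    print(lam0, resolvent_norm(lam0))
```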
\begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{e1_ps0_0} & \includegraphics[width=0.49\textwidth]{e1_ps1_0}\\ $\sigma_{\epsilon}(M)$ & $\sigma_{\epsilon}(R(M;\{1,2\}))$ \end{tabular} \end{center} \caption{Pseudospectra of the matrices given in example~\ref{ex:2} for $\eps =1$ (blue), $\eps=10^{-1/2}$ (green) and $\eps=10^{-1}$ (red), obtained with the matrix 2-norm. The respective spectra are indicated by ``$\times$''.}\label{fig:ps1} \end{figure} \end{example} To give a possible physical interpretation of pseudospectra we again consider a mass-spring network. \begin{example}\label{ex:spring1} For the mass-spring network considered in example \ref{ex:spring0} recall that the eigenvalues of $K$ correspond to frequencies for which there exists a non-zero displacement that generates no forces at the nodes. The pseudoeigenvalues of this system have a similar physical interpretation. Namely, the pseudospectra indicate the frequencies for which there is a displacement that generates ``small'' forces relative to the (norm of the) displacement. For example, since the frequency $\omega^2=2.1$ in figure~\ref{fig:spring2}(right) is within the green tolerance region, there is a non-zero vector of displacements such that the forces generated by this displacement have norm less than $\eps=10^{-1/2}$ times the norm of this displacement vector. That is, if we only have access to the boundary nodes $\cB = \{1,4\}$ then the pseudoeigenvalues of $R_{\omega^2} ( K; \cB)$ correspond to frequencies for which there is a displacement at the boundary nodes $\cB$ that generates very small forces on these nodes. The pseudospectra regions of $R_{\lambda} ( K; \cB)$ are shown in figure~\ref{fig:spring2}(right) for $\eps=1$, $10^{-1/2}$, $10^{-1}$. Observe that the pseudospectra of $R_{\lambda} ( K; \cB)$ are included in the pseudospectra of $K$ for a given tolerance $\eps$.
That is, less access to network nodes means there are fewer frequencies for which displacements generate relatively small forces. Phrased less formally, the more a network is reduced, the less susceptible to perturbations its eigenvalues are. \end{example} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{spring1} & \includegraphics[width=0.49\textwidth]{spring2}\\ $\sigma_{\epsilon}(K)$ & $\sigma_{\epsilon}(R(K;\{1,4\}))$ \end{tabular} \end{center} \caption{Pseudospectra of the stiffness matrix $K$ for the mass spring system in example~\ref{ex:spring0} and of its reduction $R_\lambda(K;\{1,4\})$. The latter corresponds to the effective stiffness of the mass-spring system when we only have access to nodes $\{1,4\}$. The tolerances shown are $\eps=1$ (blue), $\eps=10^{-1/2}$ (green) and $\eps=10^{-1}$ (red), using the matrix 2-norm. The ``$\times$'' correspond to spectra of the respective matrices.}\label{fig:spring2} \end{figure} Note that in both examples \ref{ex:ps1} and \ref{ex:spring1} we have $\sigma(M)\subset\sigma_{\eps}(R(M;\mathcal{B}))$ for the $\epsilon$ we consider. It seems that even under reduction, the $\epsilon$-pseudospectrum remembers where the eigenvalues of the original matrix are. However, this is not always the case, as the following example shows. \begin{example}\label{ex:non} Consider the matrix $M\in\mathbb{C}^{3\times 3}$ given by $$M=\left[\begin{array}{ccc} 0&1&0\\ 1&0&0\\ 0&1&0 \end{array}\right],$$ with $\sigma(M)=\{0,\pm 1\}$. By reducing $M$ over $\mathcal{B}=\{1\}$ we obtain the matrix $R(M;\mathcal{B})=[1/\lambda]$ for which $$\|(R(M;\mathcal{B})-\lambda I)^{-1}\|=\Big|\frac{\lambda}{1-\lambda^2}\Big|.$$ Hence, $0\notin\sigma_{\epsilon}(R(M;\mathcal{B}))$ for any $\epsilon$. Moreover, as $\sigma(M_{\mathcal{II}})=\{0,0\}$ for $\mathcal{I}=\{2,3\}$ it is not always the case that either $\sigma(M)$ or $\sigma(M_{\mathcal{II}})$ is contained in $\sigma_{\epsilon}(R(M;\mathcal{B}))$.
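The reduction and the resolvent formula in this example can be verified symbolically (our own sketch; 0-based indices, so $\mathcal{B}=\{1\}$ is index 0):

```python
import sympy as sp

lam = sp.symbols('lam')
M = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 1, 0]])

# Reduce over B = {1}: Schur complement of M_II - lam*I for I = {2,3}.
MII = M[1:, 1:] - lam*sp.eye(2)
R = sp.Matrix([[M[0, 0]]]) - M[0, 1:]*MII.inv()*M[1:, 0]
print(sp.simplify(R[0, 0]))  # 1/lam

# The resolvent of the reduction vanishes at lam = 0, so 0 is not an
# eps-pseudoeigenvalue of R(M; B) for any eps, even though 0 is an
# eigenvalue of both M and M_II.
res = sp.simplify(1/(sp.simplify(R[0, 0]) - lam))
print(res)
```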
\end{example} \subsection{Pseudoresonances} \label{sec:psire} Recall that the resonances of a matrix $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ are the eigenvalues of its spectral inverse. Thus we may think of ``almost resonances'' or pseudoresonances of $M(\lambda)$ as pseudoeigenvalues of $S(M)$. The precise definition is below, together with other equivalent definitions. These are analogous to the pseudospectra definitions~\ref{def:psisp}(a)--(c). \begin{definition}\label{def:psir} Let $\epsilon>0$. The set of $\epsilon$-pseudoresonances of a matrix $M(\lambda) \in \BW^{n \times n}_\pi$ is defined equivalently by: \begin{enumerate}[(a)] \item {\em Resonance perturbation:} \[ \sigma_{\eps}^{-1}(M) = cl\big(\{ \lambda \in \complex : \| (M(\lambda) - \lambda I)^{-1} \bv \| < \epsilon ~\text{for some}~ \bv\in\complex^n ~\text{with}~ \|\bv\| = 1\}\big). \] \item {\em The inverse resolvent:} \[ \sigma_{\eps}^{-1}(M) = cl\big(\{ \lambda \in \complex : || M(\lambda) - \lambda I || > \epsilon^{-1} \}\big). \] \item {\em Perturbation of the spectral inverse:} \[ \sigma_{\eps}^{-1}(M) = cl\big(\{ \lambda \in \complex : \lambda \in \sigma(S(M) + E) ~\text{for some}~E \in \complex^{n\times n}~\text{with}~||E||<\eps \}\big). \] \end{enumerate} \end{definition} Note that definition \ref{def:psir} is simply definition \ref{def:psisp} in which $M(\lambda)$ is replaced by the matrix $S(M)$ on the right hand side of parts (a)--(c). Hence, the equivalence of definitions \ref{def:psir}(a)--(c) follows from arguments similar to those in \ref{app:psi}. Moreover, if $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ then $$\sigma^{-1}(M)\subset \sigma^{-1}_{\epsilon}(M) \ \text{for each} \ \epsilon>0.$$ Observe that if $w(\lambda)=p(\lambda)/q(\lambda)\in\mathbb{W}_{\pi}$ then by definition $\pi(p)\leq\pi(q)$. Hence we have the limit $$\lim_{|\lambda|\rightarrow\infty}|w(\lambda)|=c,$$ for some constant $c\geq 0$.
Therefore, for matrices $M \in \BW_\pi^{n\times n}$, $||M(\lambda)-\lambda I||=\mathcal{O}(|\lambda|)$ as $|\lambda|\rightarrow\infty$. This leads to the following remark. \begin{remark} If $M\in\mathbb{W}_{\pi}^{n\times n}$ then the value $\lambda = \infty$ is always a pseudoresonance. This means that for each $\eps>0$ the set $\sigma^{-1}_\eps(M)$ contains the complement of a ball centered at the origin with sufficiently large radius. (See figure \ref{fig:pr1} for example.) \end{remark} \begin{example}\label{ex:non2} In figure~\ref{fig:pr1} we show the pseudoresonance regions of the matrix $R(M;\{1,2\})$ from example~\ref{ex:2} for $\eps=1$, $10^{-1/2}$, $10^{-1}$. As shown in example~\ref{ex:3}, the inverse spectrum of $R(M;\{1,2\})$ is empty. However, the pseudoresonance regions reveal that the eigenvalues $\sigma(M_{\mathcal{II}})=\{0,1\}$ act as resonances. Specifically, $\sigma(M_{\mathcal{II}})\subset\sigma^{-1}_{\eps}(R(M;\{1,2\}))$. In figure \ref{fig:ps1} (right) and figure \ref{fig:pr1} note that, for the $\epsilon$ we consider, $$\sigma_{\eps}(R(M;\{1,2\}))\cap\sigma_{\eps}^{-1}(R(M;\{1,2\}))\neq\emptyset.$$ That is, values near the set $\sigma(M_{\mathcal{II}})=\{0,1\}$ are both $\eps$-pseudoeigenvalues and $\eps$-pseudoresonances of $R(M;\{1,2\})$. \end{example} As it turns out, the situation in example \ref{ex:non2} does not hold for every matrix reduction. Similar to example \ref{ex:non}, if $$M= \left[ \begin{array}{cc} 1&1\\ 0&0 \end{array} \right],$$ and we consider the sets $\mathcal{B}=\{1\}$ and $\mathcal{I}=\{2\}$, then one can show that the set $\sigma(M_{\mathcal{II}})=\{0\}$ is not contained in $\sigma_\eps^{-1}(R(M;\mathcal{B}))$ for small $\epsilon>0$. That is, the eigenvalues $\sigma(M_{\mathcal{II}})$ do not always act as resonances of $R(M;\mathcal{B})$.
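This last claim can be checked directly (our own sketch; 0-based indices, so $\mathcal{B}=\{1\}$ is index 0):

```python
import sympy as sp

lam = sp.symbols('lam')
M = sp.Matrix([[1, 1], [0, 0]])

# Reduce over B = {1}: since M_IB = 0, the Schur-complement correction
# vanishes and R(M; B) is the constant 1.
R = sp.Matrix([[M[0, 0]]]) - M[0, 1:]*(M[1:, 1:] - lam*sp.eye(1)).inv()*M[1:, 0]
print(sp.simplify(R[0, 0]))  # 1

# |R - lam| equals 1 at lam = 0, so 0 is an eps-pseudoresonance only when
# eps^{-1} < 1, i.e. eps > 1: for small eps the interior eigenvalue 0 of
# M_II does not act as a resonance of the reduced matrix.
print(abs((sp.simplify(R[0, 0]) - lam).subs(lam, 0)))  # 1
```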
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{e1_pr1_0} \end{center} \caption{Pseudoresonance regions for the matrix $R(M;\{1,2\})$ given in example~\ref{ex:2}, with $\eps = 10^{-1/2}$ (red), and $\eps = 10^{-1}$ (blue). All the points in the display region belong to the pseudoresonance region for $\eps = 1$.}\label{fig:pr1} \end{figure} As with the pseudospectra studied in section~\ref{sec:psisp}, we give a physical interpretation of pseudoresonances using a mass-spring system. \begin{example}\label{ex:spring2} The mass-spring system considered in example~\ref{ex:spring0} has resonances when restricted to a set of boundary nodes $\cB\subset \{1,4\}$. The pseudoresonances of the reduced system correspond to frequencies for which there is a displacement on the boundary that generates relatively large forces at these nodes. In figure~\ref{fig:spring3} we display some pseudoresonance regions of the mass-spring system restricted to the set $\mathcal{B}=\{1,4\}$. \end{example} As we allow $\eps$ to be any positive value, there is nothing preventing an eigenvalue of a matrix $M$ from also being an $\eps$-pseudoresonance of $M$ (or a resonance from being an $\eps$-pseudoeigenvalue). In other words, we could have an $\eps>0$ for which \[ \sigma^{-1}(M) \cap \sigma_\eps(M) \neq \emptyset ~~\text{or}~~ \sigma(M) \cap \sigma_\eps^{-1}(M) \neq \emptyset \] as the following example shows. \begin{example}\label{ex:nondis} Consider the matrix $M(\lambda)\in\BW^{2 \times 2}_\pi$ given by \[ M(\lambda) = \begin{bmatrix} \frac{1}{\lambda-1} & 0\\ 0 & 0 \end{bmatrix}. \] The spectrum and inverse spectrum of $M(\lambda)$ are respectively \[ \sigma(M) = \{ 0 , (1\pm \sqrt{5})/{2} \} ~\text{and}~ \sigma^{-1}(M) = \{1\}. \] Now notice that for $0 \in \sigma(M)$ we have \[ \| M(0) - 0 I \| = 1, \] which implies that $0 \in \sigma^{-1}_\eps(M)$ for all $\eps\geq1$.
The resolvent of $M$ is \[ (M(\lambda) - \lambda I)^{-1} = \begin{bmatrix} \frac{\lambda -1}{-\lambda^2+\lambda+1} & 0 \\ 0 & -\frac{1}{\lambda} \end{bmatrix}. \] Hence, for $\lambda=1$ we have \[ \| (M(1) - I)^{-1} \| = 1, \] which means that $1 \in \sigma_\eps(M)$ for all $\eps\geq1$. \end{example} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{spring3} \end{center} \caption{Pseudoresonance regions of the matrix $R_\lambda(K;\{1,4\})$ given in example~\ref{ex:spring0}, for $\eps = 1$ (blue), $\eps = 10^{-1/2}$ (green) and $\eps = 10^{-1}$ (red). Resonances are shown with $\times$. All the points in the display region, except the white region, belong to the pseudoresonance region for $\eps=1$.}\label{fig:spring3} \end{figure} As the pseudoresonances of a matrix $M\in\mathbb{W}^{n\times n}_{\pi}$ can be defined in terms of the pseudoeigenvalues of the spectral inverse $S(M)$, we can generalize theorem~\ref{theorem3} as follows. \begin{theorem}\label{thm:psir} Suppose $M(\lambda) \in \BW^{n \times n}_\pi$ and $\epsilon>0$. Then \begin{equation*} \sigma_\eps^{-1} ( M ) = \sigma_{\eps} (S(M)) \ \ \text{and} \ \ \sigma_{\eps} (M) = \sigma_\eps^{-1} (S (M) ). \end{equation*} \end{theorem} \begin{proof} Let $M(\lambda) \in \BW^{n \times n}_\pi$ and $\epsilon>0$. Observe that \begin{align*} \sigma_{\eps}(M) &= cl\big(\{ \lambda \in \complex : || (M(\lambda) - \lambda I)^{-1} || > \epsilon^{-1} \}\big); \ \text{and}\\ \sigma_{\eps}^{-1}(S(M)) &= cl\big(\{ \lambda \in \complex : || S(M) - \lambda I || > \epsilon^{-1} \}\big) \end{align*} from definitions \ref{def:psisp}(b) and \ref{def:psir}(b) respectively. Since $S(M)-\lambda I=(M(\lambda) - \lambda I)^{-1}$, it follows that $\sigma_{\eps} (M) = \sigma_\eps^{-1} (S (M) )$. The equality $\sigma_\eps^{-1} ( M ) = \sigma_{\eps} (S(M))$ follows similarly.
\end{proof} Because of the seemingly invertible relationship between pseudospectra and inverse pseudospectra in theorem \ref{thm:psir}, it is tempting to think that the $\eps$-pseudoresonances of a matrix are the complement of its $\eps^{-1}$-pseudoeigenvalues. In general, however, the two are not equal, as can be seen in the next theorem. \begin{theorem} For $M(\lambda) \in \BW^{n \times n}_\pi$ let $\eps>0$. Then $cl\big(\overline{ \sigma_{1/\eps} ( M )}\big) \subseteq \sigma_\eps^{-1} (M)$. However, the reverse inclusion does not hold in general. \end{theorem} This theorem means that, in general, there is not enough information in the pseudospectra of a matrix to reconstruct its pseudoresonances. We now proceed with the proof of the theorem. \begin{proof} For $M(\lambda) \in \BW^{n \times n}_\pi$ and a matrix norm $||\cdot||$, the inequality \begin{equation}\label{eq:lst} ||M(\lambda)-\lambda I||^{-1}\leq ||(M(\lambda)-\lambda I)^{-1}|| \end{equation} holds for any $\lambda\in \dom(M)-\sigma(M)$. Let $\interior(\Omega)$ denote the \emph{interior} of the set $\Omega\subseteq\mathbb{C}$, i.e. the largest open subset of $\Omega$. For $\epsilon>0$, using definition \ref{def:psisp}(b), \begin{align*} cl\big(\overline{ \sigma_{1/\eps} ( M )}\big)&=cl\big(\overline{cl(\{\lambda\in\mathbb{C}:||(M(\lambda)-\lambda I)^{-1}||> \epsilon\})}\big)\\ &=cl\big(\interior(\{\lambda\in\mathbb{C}:||(M(\lambda)-\lambda I)^{-1}||\leq \epsilon\})\big)\\ &=cl\big(\{\lambda\in\mathbb{C}:||(M(\lambda)-\lambda I)^{-1}||\leq \epsilon\}\big). \end{align*} Similarly, it follows from definition \ref{def:psir}(b) that \begin{align*} \sigma_\eps^{-1} (M)&=cl\big(\{\lambda\in\mathbb{C}:||M(\lambda)-\lambda I||>\epsilon^{-1}\}\big)\\ &=cl\big(\{\lambda\in\mathbb{C}:||M(\lambda)-\lambda I||^{-1}\leq \epsilon\}\big).
\end{align*} By inequality \eqref{eq:lst}, $$\{\lambda\in\mathbb{C}:||(M(\lambda)-\lambda I)^{-1}||\leq \epsilon\}\subseteq\{\lambda\in\mathbb{C}:||M(\lambda)-\lambda I||^{-1}\leq \epsilon\},$$ which implies the first half of the result. To show that the reverse inclusion does not hold in general, take for instance the matrix $M(\lambda)$ from example~\ref{ex:nondis}. It is easy to compute $\| M(2) - 2I \| = 2$ and $\| (M(2) - 2I)^{-1} \| = 1$. Taking $\eps = 2/3$, we clearly have $2 \in \sigma_{2/3}^{-1}(M) \cap \sigma_{3/2}(M)$. \end{proof} \subsection{Pseudospectra Under Isospectral Reduction} \label{sec:redpsi} One of the major goals of this paper is to understand how the pseudospectrum of a matrix $M\in\BW^{n \times n}_\pi$ is affected by an isospectral reduction. In order to study this change in pseudospectra, we need to consider two vector norms. Specifically, we need one norm $\|\cdot\|$ defined on $\complex^n$ for the pseudospectrum of $M$ and another norm $\|\cdot\|'$ defined on $\complex^m$ ($m<n$) for the pseudospectrum of $R(M;\mathcal{B})$. Our comparison of the pseudospectra of the original and reduced matrices assumes that for $\bv = (\bv_\cB^T, \bv_\cI^T)^T \in \complex^n$ these two norms are related by \begin{align} \label{eq:normprop} \norm{\bv} = \norm{ \begin{bmatrix} \bv_\cB \\ \bv_\cI \end{bmatrix} } \geq \norm{ \begin{bmatrix} \bv_\cB \\ 0 \end{bmatrix} } = \norm{\bv_\cB}'. \end{align} Examples of norms satisfying property \eqref{eq:normprop} are the $p$-norms for $1\leq p \leq \infty$. For the sake of simplicity, we use the same notation for both the $\complex^n$ and $\complex^{m}$ norms. The following theorem describes how the $\epsilon$-pseudospectrum of a matrix $M(\lambda)$ is related to the $\epsilon$-pseudospectrum of the isospectral reduction $R_\lambda(M;\cB)$. It says that the $\eps$-pseudospectrum of the reduced matrix is contained in the $\eps$-pseudospectrum of the original matrix for each $\eps>0$.
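Property \eqref{eq:normprop} for the $p$-norms can be sanity-checked numerically. The following pure-Python sketch (the test vectors are arbitrary) verifies that dropping the interior block can only decrease a $p$-norm:

```python
# Norm compatibility check: for the p-norms, ||(v_B, v_I)|| >= ||(v_B, 0)||
# = ||v_B||', since zeroing the interior block can only remove mass from
# the sum (or from the maximum, for p = infinity).
def p_norm(v, p):
    if p == float('inf'):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

v_B = [1.0 + 2.0j, -0.5]   # boundary block (arbitrary test data)
v_I = [3.0j, 0.25, -1.0]   # interior block (arbitrary test data)
for p in (1, 2, 3, float('inf')):
    # v_B + v_I is list concatenation, i.e. the stacked vector (v_B^T, v_I^T)^T
    assert p_norm(v_B, p) <= p_norm(v_B + v_I, p) + 1e-12
print("norm property holds for the sampled p-norms")
```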
\begin{theorem} \label{thm:red} For $M(\lambda) \in\BW^{n\times n}_{\pi}$ let $\mathcal{B}\subset N$. Then $\sigma_\eps(R(M;\cB)) \subseteq \sigma_\eps(M)$ for any $\epsilon>0$ provided the $\complex^n$ and $\complex^{|\cB|}$ norms in the pseudospectra definitions satisfy \eqref{eq:normprop}. \end{theorem} \begin{proof} For $M(\lambda)\in\mathbb{W}^{n\times n}_{\pi}$ let $\mathcal{B}$ and $\mathcal{I}$ form a non-empty partition of $N$. We assume, without loss of generality, that for a vector $\bv\in\complex^n$ we have $\bv = (\bv_\cB^T,\bv_\cI^T)^T$. For $\widetilde{\lambda}_0 \in\mathbb{C}$ and $\epsilon>0$, suppose there is a unit vector $\mathbf{v}_{\mathcal{B}}\in\mathbb{C}^{|\mathcal{B}|}$ such that \begin{equation}\label{ineq:last} ||(R(M;\mathcal{B})-\widetilde{\lambda}_0 I)\mathbf{v}_{\mathcal{B}}||<\epsilon. \end{equation} As $\sigma(M_{\mathcal{II}})$ and $\overline{\dom(M)}$ are finite sets, by continuity there is a neighborhood $U$ of $\widetilde{\lambda}_0$ such that \begin{enumerate}[(i)] \item $M(\lambda)\in\mathbb{C}^{n\times n}$ for $\lambda\in U-\{\widetilde{\lambda}_0\}$; \item $\sigma(M_{\mathcal{II}})\cap(U-\{\widetilde{\lambda}_0\})=\emptyset$; and \item $||(R(M;\mathcal{B})-\lambda I)\mathbf{v}_{\mathcal{B}}||<\epsilon$ for $\lambda\in U-\{\widetilde{\lambda}_0\}$. \end{enumerate} Observe that for each $\lambda_0\in U-\{\widetilde{\lambda}_0\}$ the vector $$\mathbf{v}_{\mathcal{I}}=-(M(\lambda_0)_{\mathcal{II}}-\lambda_0 I)^{-1}M(\lambda_0)_{\mathcal{IB}}\mathbf{v}_{\mathcal{B}}$$ is well defined.
Let $\bv = (\bv_\cB^T,\bv_\cI^T)^T$ and note that \begin{align*} (M(\lambda_0)-\lambda_0 I)\mathbf{v}&= \left[\begin{array}{c} (M-\lambda I)_{\mathcal{BB}}\mathbf{v}_{\mathcal{B}}+(M-\lambda I)_{\mathcal{BI}}\mathbf{v}_{\mathcal{I}}\\ (M-\lambda I)_{\mathcal{IB}}\mathbf{v}_{\mathcal{B}}+(M-\lambda I)_{\mathcal{II}}\mathbf{v}_{\mathcal{I}} \end{array}\right]\Big|_{\lambda=\lambda_0}\\ &= \left[\begin{array}{c} (M_{\mathcal{BB}}-\lambda I)\mathbf{v}_{\mathcal{B}}-M_{\mathcal{BI}}(M_{\mathcal{II}}-\lambda I)^{-1}M_{\mathcal{IB}}\mathbf{v}_{\mathcal{B}}\\ M_{\mathcal{IB}}\mathbf{v}_{\mathcal{B}}-(M_{\mathcal{II}}-\lambda I)(M_{\mathcal{II}}-\lambda I)^{-1}M_{\mathcal{IB}}\mathbf{v}_{\mathcal{B}} \end{array}\right]\Big|_{\lambda=\lambda_0}\\ &=\left[\begin{array}{c} (R(M;\mathcal{B})-\lambda I)\mathbf{v}_{\mathcal{B}}\\ 0 \end{array}\right]\Big|_{\lambda=\lambda_0}. \end{align*} By the property \eqref{eq:normprop} of the norms in $\complex^n$ and $\complex^{|\cB|}$ we must have \begin{equation}\label{eq:basicineq} \| (M(\lambda_0)-\lambda_0 I)\bv \| = \| (R(M(\lambda_0);\cB) - \lambda_0 I) \bv_\cB \| < \eps. \end{equation} As $\mathbf{v}_{\mathcal{B}}\neq\mathbf{0}$, consider the unit vector $\bu = \bv / \| \bv \| \in \complex^n$. Again by \eqref{eq:normprop} we have $\| \bv \| \geq \| \bv_\cB \| = 1$. Hence, we get the bound \[ \| (M(\lambda_0) - \lambda_0 I) \bu \| = \frac{ \| (M(\lambda_0) - \lambda_0 I) \bv \|} { \| \bv \| } \leq \| (M(\lambda_0)-\lambda_0 I) \bv \| < \eps, \] where the last inequality comes from \eqref{eq:basicineq}. This implies $\lambda_0 \in \sigma_\eps(M)$. As this holds for any $\lambda_0\in U-\{\widetilde{\lambda}_0\}$, we have $\widetilde{\lambda}_0\in cl(\sigma_\eps(M))$. Since $\sigma_\eps(M)$ is a closed set, in fact $\widetilde{\lambda}_0\in\sigma_\eps(M)$. Since $\widetilde{\lambda}_0$ is an arbitrary point in $\sigma_\eps(R(M;\mathcal{B}))$, the result follows by inequality (\ref{ineq:last}).
\end{proof} \begin{remark} Theorem \ref{thm:red} states that the $\eps$-pseudospectrum of a matrix can only shrink under isospectral reduction: the $\eps$-pseudospectrum of the reduced matrix is a subset of that of the original matrix. However, for the $\eps$-pseudoresonances of a matrix there is no such inclusion result. \end{remark} \begin{example}\label{ex:spring3} In the mass-spring system of example~\ref{ex:spring0}, we consider four different sets of boundary nodes $\{1,2,3,4\} \supset \{1,2,4\} \supset \{1,4\} \supset \{1\}$. Note that theorem~\ref{thm:red} implies that the corresponding pseudospectra for a given $\epsilon$ obey the same inclusions. This is shown in figure \ref{fig:spring4} for $\eps = 1$, $10^{-1/2}$, and $10^{-1}$. In physical terms, this means that as we increase the number of internal degrees of freedom (or decrease the number of boundary nodes), it becomes harder to find frequencies for which there is a displacement that generates forces of magnitude below a certain fixed level. Hence the fewer boundary nodes we have, the more robust are the frequencies that generate small forces. \end{example} \begin{figure} \begin{center} \begin{tabular}{c@{}c} \raisebox{5em}{\rotatebox{90}{$\eps = 1$}} & \includegraphics[width=0.8\textwidth]{spring4}\\ \raisebox{4em}{\rotatebox{90}{$\eps=10^{-1/2}$}} & \includegraphics[width=0.8\textwidth]{spring5}\\ \raisebox{4em}{\rotatebox{90}{$\eps=10^{-1}$}} & \includegraphics[width=0.8\textwidth]{spring6} \end{tabular} \end{center} \caption{Pseudospectra of the matrix $K$ from the mass-spring system of example~\ref{ex:spring0} (blue) together with the pseudospectra for the reduced matrices where the terminal nodes are $\cB =\{1,2,4\}$ (cyan), $\cB = \{1,4\}$ (green) and $\cB = \{1\}$ (red). Note how the pseudospectra shrink as the number of boundary nodes decreases.} \label{fig:spring4} \end{figure} Notice that the inclusion given in theorem \ref{thm:red} is not a strict inclusion.
In fact, it may be the case that a matrix $M$ and its reduction $R(M;\mathcal{B})$ have the same pseudospectra, as the following example demonstrates. \begin{example} Consider the matrix $M\in\mathbb{C}^{4\times 4}$ given by \[ M= \begin{bmatrix} 1&1&0&0\\ 1&1&0&0\\ 0&0&1&1\\ 0&0&1&1 \end{bmatrix} ~~\text{and its reduction}~~ R(M;\cB) = \begin{bmatrix} \frac{\lambda}{\lambda-1}&0&0\\ 0&1&1\\ 0&1&1 \end{bmatrix}, \] where $\cB = \{2,3,4\}$. Computing the Euclidean induced matrix norm of the resolvents, we get \[ \begin{aligned} \| (M-\lambda I)^{-1} \| &= \max( |\lambda|^{-1}, |\lambda-2|^{-1} ) ~~\text{and}\\ \| (R(M;\cB) - \lambda I)^{-1} \| & = \max ( |\lambda|^{-1}, |\lambda-2|^{-1}, |\lambda-1||\lambda|^{-1}|\lambda-2|^{-1} ). \end{aligned} \] To show that the pseudospectra of $M$ and $R(M;\mathcal{B})$ are the same, we only need to demonstrate that the norms above are equal. This happens if we can show the inequality \begin{equation}\label{eq:ni1} |\lambda-1||\lambda|^{-1}|\lambda-2|^{-1} \leq \max( |\lambda|^{-1}, |\lambda-2|^{-1} ). \end{equation} Notice that the triangle inequality implies \begin{equation}\label{eq:ni2} | \lambda - 1 | \leq \frac{1}{2} |\lambda - 2| + \frac{1}{2} |\lambda| \leq \max(|\lambda|,|\lambda-2|). \end{equation} Inequality \eqref{eq:ni1} follows for $\lambda \notin \{0,2\}$ by dividing \eqref{eq:ni2} by $|\lambda||\lambda-2|$. As $\{0,2\} \subset \sigma(M) \cap \sigma(R(M;\mathcal{B}))$, both $0$ and $2$ are included in the pseudospectra of these matrices. We conclude that $\sigma_{\eps}( M ) = \sigma_{\eps}(R(M;\cB))$ for all $\eps>0$. \end{example} \section{Conclusion} Isospectral graph reductions allow one to reduce the size of a matrix while maintaining its set of eigenvalues up to a known set. Prior to this paper it was known that a matrix could be isospectrally reduced over any principal submatrix of a particular form. One of our main results removes this restriction.
This new, more general method of isospectral reduction allows one to reduce a matrix over any principal submatrix without any other consideration (other than existence). Consequently, we are able to study matrix reductions in a simpler and computationally more efficient way compared with those used in \cite{Bunimovich:2011:IGR,Bunimovich:2012:IGT,Bunimovich:2012:IC}. An additional improvement over previous work is the introduction of a spectral inverse. The spectral inverse of a matrix, which interchanges a matrix's spectrum and inverse spectrum, allows one to use the previous results found in \cite{Bunimovich:2011:IGR,Bunimovich:2012:IGT,Bunimovich:2012:IC} to analyze the inverse spectrum of a matrix. In particular, we show that the Gershgorin-type estimates in \cite{Bunimovich:2011:IGR} can also be used to estimate a matrix's inverse spectrum. One of our main goals here is determining whether the notion of pseudospectra can be extended to the class of matrices we consider. In fact, because a matrix with rational function entries has both a spectrum and an inverse spectrum, we are able to extend the notion of pseudospectrum to such matrices and also introduce the notion of inverse pseudospectrum. Moreover, we are able to show that the pseudospectrum of a matrix shrinks under reduction. Therefore, the eigenvalues of a reduced matrix are less susceptible to perturbations. This fact has implications for systems modeled by reduced matrices. For instance, the mass-spring network we consider throughout this paper is modeled using a matrix with integer entries. However, if we have access to only some terminal nodes, the frequency response at the terminals is a matrix with rational function entries, which can be obtained by reducing the stiffness matrix in which all nodes are terminal nodes.
Our result shows that having fewer terminal nodes means the eigenvalues of the frequency response are less susceptible to perturbations than those of the matrix in which all the nodes are terminal nodes. \section*{Acknowledgements} The work of F. Guevara Vasquez was partially supported by the National Science Foundation grant DMS-0934664.
\section*{Introduction} The increasing adoption of electronic technologies is widely recognized as a critical strategy for making health care more cost-effective. Smartphone-based m-health applications have the potential to change many of the modern-day techniques of how healthcare services are delivered by enabling remote diagnosis [1], but they are yet to realize their fullest potential. There has been a paradigm shift in the research on medical sciences, and technologies like point-of-care diagnosis and analysis have developed with more custom-designed smartphone applications coming into prominence. Due to the high rate of infection, with the total number of confirmed cases exceeding twenty million since its recent outbreak, COVID-19 was chosen as the initial disease target for us to study. With studies confirming that chest X-rays are irreplaceable in a preliminary screening of COVID-19, we started with chest X-rays as the tool to detect the presence of coronavirus (COVID-19) in patients [2]. Chest X-ray is the primary imaging technique and plays a pivotal role in disease diagnosis using medical imaging for any pulmonary disease. Classic machine learning models have been previously used for the auto-classification of digital chest images [3][4]. Reclaiming the advances of those fields to the benefit of clinical decision making and computer-aided systems using deep learning is becoming increasingly nontrivial as new data emerge [5][6][7], with Convolutional Neural Networks (CNNs) spearheading the medical imaging domain [8]. A key factor for the success of CNNs is their ability to learn distinct features automatically from domain-specific images, and the concept has been reinforced by transfer learning [9]. However, the process of learning distinct features by standard supervised learning using Convolutional Neural Networks can be computationally inefficient and data-expensive. The above methods are further incapacitated by a shortage of data.
Our approach represents a substantial conceptual advance over all other published methods by overcoming the problem of data scarcity using a one-shot learning approach with the implementation of a Siamese Neural Network. In contrast to its counterparts, our method has the added advantage of being more generalizable and handles extreme class imbalance with ease. We leverage open chest X-ray datasets of COVID-19 and various other diseases that were publicly available (see the datasets section) [10]. Once a Siamese network has been tuned, it can capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions [11][12]. Using a convolutional architecture, we can achieve reliable results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks. The world is being crippled by COVID-19, an acute resolved disease whose onset might result in death due to massive alveolar damage and progressive respiratory failure [13]. A robust and accurate automatic diagnosis of COVID-19 is vital for countries to prompt timely referral of the patient to quarantine, rapid intubation of severe cases in specialized hospitals, and ultimately curb the spread. The definitive test for SARS-CoV-2 is the real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test. However, with sensitivity reported as low as 60-70\% [14] and as high as 95-97\% [15], a meta-analysis concluded the pooled sensitivity of RT-PCR to be 89\% [16]. These numbers show that false negatives are a real clinical problem, and several negative tests might be required in a single case to be confident about excluding the disease [17].
A resource-constrained environment demands imaging for medical triage to be restricted to suspected COVID-19 patients who present moderate to severe clinical features and a high pretest probability of disease, and medical imaging done in an early phase might be feature deficient [18][19]. Although the cause of COVID-19 was quickly identified to be the SARS-CoV-2 virus, scientists are still working around the clock to fully understand the biology of the mutating virus and how it infects human cells [20]. All this calls for a robust pre-diagnosis method that provides higher generalization, works efficiently with insufficient feature data, and tackles the problem of data scarcity. This is where our proposed method, a Data Augmentation Generative Adversarial Network (DAGAN) exploited by a Convolutional Siamese Neural Network with an attention mechanism, comes into the picture, exhibiting state-of-the-art accuracy and sensitivity. \subsection{Generative Adversarial Networks} Generative Adversarial Networks (GANs) are deep-learning-based generative models rooted in a game-theoretic scenario in which two networks compete against each other as adversaries. The constituent network models, a Generative Network and a Discriminative Network, play a zero-sum game. The GAN architecture paved the way for sophisticated domain-specific data augmentation by treating an unsupervised problem as a supervised one, thus automatically training the generative model. \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{gan.jpg} \caption{Figurative representation of data flow in a Generative Adversarial Network} \label{a1} \end{figure} The Generative Network utilizes the latent space, a projection or compression of the data distribution, to generate plausible training examples. The latent space is the result of mapping points in the multidimensional vector space to points in the problem domain.
Random vectors drawn from data distributions like the Gaussian distribution are used to seed the generative process. The Discriminative Network has the primary objective of classifying real instances (training set samples) and fake instances (generator samples). The discriminative component of a GAN is usually a standard binary classification model, with real instances as positive examples and fake instances as negative examples. It connects to two loss functions, the generator loss and the discriminator loss. When the discriminator is trained, the generator's weights are held constant: the discriminator receives generated samples and real examples as input and updates its weights to minimize the discriminator loss, ignoring the generator loss. Similarly, the generator is trained to create plausible samples, modifying its weights based on the generator loss. \subsubsection{Data Augmentation Generative Adversarial Networks (DAGAN)} Data augmentation is crucial in the training of a deep learning model, as it has repeatedly proven to be an effective way of tackling overfitting: suitable augmentation strategies make the training data more general. In the case of images, augmentation plays a crucial role, since correctly identifying and recognising specific features requires a diverse set of considerably different images. Image augmentation techniques therefore come in many forms, ranging from simple transforms (such as rotation) to adversarial generative methods such as the one we use for our purposes, the Data Augmentation Generative Adversarial Network (DAGAN). \begin{figure}[h!]
\includegraphics[width=12cm, height=6cm]{dagan_n.png} \caption{The DAGAN architecture} \label{a2} \end{figure} What distinguishes the DAGAN from other types of GANs is its ability to generate distinctive augmented images for any given image sample while keeping the distinctive class features intact. A general network architecture is provided in Figure 2. \textbf{Generator}: The generator component of the DAGAN contains an encoder, which provides a unique latent space representation for a given image, and a decoder, which generates an image given a latent representation. Any given image is first passed through the encoder to obtain the corresponding latent representation, to which scaled noise (usually sampled from a Gaussian distribution) is added to obtain a modified latent vector. This is then passed through the decoder to obtain the corresponding augmented image. \textbf{Discriminator}: The discriminator component of the DAGAN is similar to that of other GANs; its basic purpose is to perform a binary classification to tell apart the generated and real images. The discriminator takes as input a fake distribution (generated images) and a real distribution (images belonging to the same class). \subsection{One Shot Learning and Siamese Networks} Forming an ideal dataset for a typical multi-class classification task using standard supervised learning methods is quite difficult. In addition to class imbalance issues, data for certain tasks such as medical image analysis can rarely be collected to meet ideal standards. One-shot learning methods help tackle these issues effectively. In the Deep Learning literature, Siamese Neural Networks are typically used to perform one-shot learning.
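The encode-perturb-decode augmentation step described above can be sketched with toy linear maps standing in for the trained encoder and decoder (all shapes, scales and function bodies here are illustrative placeholders, not the trained DAGAN components):

```python
import random

# Toy stand-ins for the trained encoder/decoder (illustrative only; the
# real DAGAN components are deep networks learned from data).
def encoder(x):
    return [0.5 * x[0] + 0.5 * x[1], x[2] - x[3]]

def decoder(z):
    return [z[0], z[0], z[1], -z[1]]

def dagan_augment(x, noise_scale=0.1, rng=None):
    """Encode the sample, perturb the latent vector with scaled Gaussian
    noise, and decode, yielding a new plausible sample of the same class."""
    rng = rng or random.Random(0)
    z = encoder(x)
    z_noisy = [zi + noise_scale * rng.gauss(0.0, 1.0) for zi in z]
    return decoder(z_noisy)

x = [1.0, 1.0, 2.0, 1.0]
print(dagan_augment(x))  # a perturbed variant close to decoder(encoder(x))
```

Because the noise is injected in the latent space rather than in pixel space, the decoded sample remains close to the class manifold learned by the decoder.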
The Siamese Neural Network is a pair of neural networks trying to learn discriminative features from a pair of data points from two different classes. In our case the Siamese Network consists of two twin Convolutional Neural Networks which accept distinct inputs but are joined by an energy function. The latent vector is the output of either twin network; it is a unique and meaningful representation of the individual image passed in. In one-shot learning the overall training objective is to obtain a vector-valued function (a neural network) which provides a meaningful latent representation vector for each image passed. As in any machine learning task, one-shot learning has a loss function whose value conveys how close the network is to attaining optimal parameter values. In the case of Siamese Networks, the loss is a similarity measure between the latent vector outputs, enforced by a binary class label (like or unlike). The energy function takes as input the latent vectors formed by the CNNs at their last dense layer (for each input pair passed) and outputs an energy value. The overall goal during the training process (optimization) can now be conveyed in terms of the energy function: the energy of a like pair is minimised, while that of an unlike pair is maximised. The typical energy functions used range from a simple Euclidean norm to fairly advanced functions such as the contrastive energy function. A typical example of the contrastive energy is explained in brief below. The contrastive energy function takes in two vectors as input and in general performs the following computation.
\begin{equation} \mathcal{L}(W) = \sum_{i=1}^{P}L(W, (Y, \vec{X_1}, \vec{X_2})^i) \label{eq1} \end{equation} where \begin{equation} L(W, (Y, \vec{X_1}, \vec{X_2})^i) = (1 - Y)L_S(D_W^i) + Y L_D(D_W^i). \end{equation} \indent $D_W$ is the parameterized distance function defined below: \begin{equation} D_W(\vec{X_1}, \vec{X_2}) = \left\|G_W(\vec{X_1}) - G_W(\vec{X_2}) \right\|_2 \end{equation} \indent $Y$ is the binary class label. \\ \indent $L_S$ and $L_D$ are functions chosen as per the task. \\ \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{siamese.jpg} \caption{Feature comparison methods represented by a Siamese Neural Network architecture} \label{a3} \end{figure} \subsection{Attention Mechanism} Data, no matter how clean, will have irrelevant features; many predictive or analytic tasks do not rely on all of the features present in raw data. One of the factors that sets us humans apart from computers is our instinct for contextual relevance in our day-to-day activities. Our brains are adept at this, which lets us perform complex tasks quite easily. Attention is a deep learning technique designed to mimic this very property of our brain. Attention, as the name suggests, is a methodology by which a neural network learns to selectively focus on relevant features and ignore the rest. Attention was first introduced in the branch of natural language processing (NLP) [21], where it enabled contextual understanding for sequence-to-sequence models (e.g., machine translation), which led to better performance. The attention mechanism in NLP solved the problem of vanishing gradients for Recurrent Neural Networks and at the same time brought in an understanding of feature relevance, which boosts performance.
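Returning to the Siamese training objective, the contrastive computation above can be sketched in a few lines of Python. The particular choices $L_S(d)=d^2/2$ and $L_D(d)=\max(0, m-d)^2/2$ with margin $m$ are common illustrative choices; the text leaves $L_S$ and $L_D$ task-dependent:

```python
def euclidean(u, v):
    # D_W: Euclidean distance between the two latent vectors G_W(X1), G_W(X2).
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def contrastive_loss(z1, z2, y, margin=1.0):
    """y = 0 for a like pair, y = 1 for an unlike pair.
    Like pairs are pulled together (L_S = d^2 / 2); unlike pairs are pushed
    at least `margin` apart (L_D = max(0, margin - d)^2 / 2)."""
    d = euclidean(z1, z2)
    if y == 0:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Like pair close together -> small loss (~0.005); unlike pair equally close
# -> penalized (~0.405); unlike pair already separated -> zero loss.
print(contrastive_loss([0.0, 0.0], [0.1, 0.0], y=0))
print(contrastive_loss([0.0, 0.0], [0.1, 0.0], y=1))
print(contrastive_loss([0.0, 0.0], [2.0, 0.0], y=1))
```

The margin keeps the unlike-pair term bounded: once an unlike pair is farther apart than the margin, it contributes nothing, so the network spends its capacity on the pairs that are still confusable.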
The revolutionary impact of deep learning paved the way for the creation of more efficient network architectures such as the Transformer (BERT) [22], which are widely applied these days. Moreover, attention has been applied to other fields related to deep learning, such as those focusing on signal and visual processing. \subsubsection{Visual Attention Mechanism} Images are a very abstract form of data; they contain numerous patterns (features) which can be analysed using the latest computational tools to gain understanding of them. For many machine learning tasks such as regression or classification, identifying features of contextual relevance improves model performance and simplifies the task. The same is the case for machine learning applied to images. For most images, the regions can be broadly classified as background and objects, where the objects are of prime focus and the background does not contribute to inference. Consequently, knowing where to look and what regions to focus on while making an inference from images helps boost the performance of the model. Convolutional neural networks (CNNs) are among the best feature extraction tools for images in today's deep learning literature; attention applied to convolutional features helps pick out relevant features of interest from the large pool of features extracted by a CNN. \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{xray_att.jpg} \caption{A visualization of the attention mechanism applied to Convolutional features overlaid on X-ray images} \label{a4} \end{figure} \section{Related Work} The outbreak of the COVID-19 [23] pandemic and the increasing count of the number of deaths have captured the attention of most researchers across the world. Several works have been published which aim either to study this virus or to curb its spread.
Owing to the supremacy of computer vision and deep learning in the field of medical imaging, most researchers are using these tools as means to diagnose COVID-19. Chest X-ray (CXR) and Computed Tomography (CT) are the imaging techniques that play an important role in the detection of COVID-19 [24], [25]. As inferred from the literature, the Convolutional Neural Network (CNN) remains the preferred choice of researchers for tackling COVID-19 from digitized images, and several reviews have been carried out to highlight its recent contributions to COVID-19 detection [26]-[28]. For example, in [29] a CNN based on the Inception network was applied to detect COVID-19 disease within computed tomography (CT). They achieved a total accuracy of 89.5\% with specificity of 0.88 and sensitivity of 0.87 on their internal validation, and a total accuracy of 79.3\% with specificity of 0.83 and sensitivity of 0.67 on the external testing dataset. In [30] a modified version of the ResNet-50 pre-trained network was used to classify CT images into three classes: healthy, COVID-19 and bacterial pneumonia. Their results showed that the architecture could accurately identify the COVID-19 patients from others with an AUC of 0.99 and sensitivity of 0.93. Their model could also discriminate between COVID-19 infected patients and bacterial pneumonia-infected patients with an AUC of 0.95 and recall (sensitivity) of 0.96. In [31] a CNN architecture called COVID-Net based on transfer learning was applied to classify Chest X-ray (CXR) images into four classes: normal, bacterial infection, non-COVID and COVID-19 viral infection. The architecture attained a best accuracy of 93.3\% on their test dataset. In [32] the authors proposed a deep learning model with 4 convolutional layers and 2 dense layers in addition to classical image augmentation and achieved 93.73\% testing accuracy. In [33] the authors presented a transfer learning method with a deep residual network for pediatric pneumonia diagnosis.
The authors proposed a deep learning model with 49 convolutional layers and 2 dense layers and achieved 96.70\% testing accuracy. In [9] the authors proposed a modified CNN based on class decomposition, termed the Decompose, Transfer, and Compose (DeTraC) model, to improve the performance of pre-trained models on the detection of COVID-19 cases from chest X-ray images. An ImageNet pre-trained ResNet model was used for transfer learning, and data augmentation and a histogram-modification technique were used to enhance the contrast of each image. Their proposed DeTraC-ResNet18 model achieved an accuracy of 95.12\%. In [34] the authors proposed pneumonia chest X-ray detection based on generative adversarial networks (GAN) with fine-tuned deep transfer learning for a limited dataset. AlexNet, GoogLeNet, SqueezeNet, and ResNet18 were selected as the deep transfer learning models. The distinctive feature of this work was the use of a GAN to generate similar examples for the dataset while also tackling the problem of overfitting: their work used 10\% of the original dataset while generating the other 90\% using the GAN. In [35] the authors presented a method to generate synthetic chest X-ray (CXR) images by developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) based model. Utilizing three publicly available datasets (the IEEE Covid Chest X-ray dataset [10], the COVID-19 Radiography Database [36], and the COVID-19 Chest X-ray Dataset [37]), the authors demonstrated that synthetic images produced by the ACGAN-based model could improve the performance of a CNN (VGG-16 in their case) for COVID-19 detection. The classification results showed an accuracy of 85\% with the CNN alone, and with the addition of the synthetically generated images via the ACGAN the accuracy increased to 95\%.
Thus, having understood the advantages that a GAN offers when training models on relatively small datasets, in our research we implemented the DAGAN combined with an attention-based Siamese Neural Network to get optimal results out of the relatively small dataset used for training our model [10]. \section{Methods} The application was built using Android Studio. The MVVM (Model-View-ViewModel) architecture is used in the app, which helps with proper state management, and the UI follows Material Design guidelines. For storage of the app data (such as local X-ray samples) and for authentication, Firebase is used. The deep learning model was trained using publicly available datasets to test its robustness. Some of the major issues with such datasets are lack of data, inherent noise features, and class imbalance. In the proposed methodology, all three of these issues are tackled effectively. The smartphone application paves the way for improvement of the existing model and provides easy access to state-of-the-art diagnosis of common pulmonary diseases for everyone. A semi-live training scenario was built on the cloud, which enables the model to improve gradually over time without intervention. \subsection{Application Usage Overview} The Android application acts as an accessible platform which assists doctors or patients in uploading X-ray samples to be inferred by the deep learning model and in obtaining the corresponding diagnosis results, as seen in Figure 5. The application acts as a cloud-user interface which enables wider accessibility and helps the model improve through the cloud-based semi-live training scenario. The algorithm is deployed as a FaaS (Function as a Service), which is triggered when a user uploads a sample. There are primarily two categories of users: doctors and patients. Users under the doctor category are verified and can act as a potential source of labeled training data.
Under the patient category, the inference mechanism is triggered, enabling the backend model to provide the user with a diagnosis result for the uploaded sample. \begin{figure}[h!] \includegraphics[width=12cm, height=10cm]{app_workflow.jpg} \caption{Abstract workflow of our smartphone application utilizing an AI cloud platform} \label{c1} \end{figure} \subsection{Backend Model} The backend deep learning architecture mainly consists of two deep learning models: the DAGAN for robust and effective data augmentation, followed by the Convolutional Siamese Network with an attention mechanism. The Siamese Network is shown to be data efficient through our experiments. Both of these networks are pretrained on publicly available datasets. To obtain the pretrained DAGAN model, suitably processed X-ray images were provided with corresponding class labels. For the pretrained Siamese Network, visually variant augmented samples with in-class features preserved were generated using the DAGAN model. These generated samples were then paired up in all possible combinations. Each pair was assigned a binary label based on the classes to which the two images in the pair belong: 0 if both images are from the same class of pulmonary diseases and 1 otherwise. The resulting dataset was then used to train the Siamese Network. A set of well-labelled and noise-free images is selected to be the standard dataset for comparison. During inference, one twin of the Siamese Network generates a latent vector for the uploaded image by a forward pass. The second twin generates a latent vector for an image in the standard dataset. The obtained latent vectors are then compared using an energy function. The energy values for all classes in the standard dataset are obtained using a similar procedure, and the class with the lowest average value is selected. The class thus selected becomes the diagnosis for the particular uploaded image.
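This inference procedure can be sketched in a few lines; note that the concrete energy function is not fixed by the description above, so Euclidean distance between latent vectors is assumed here, and all vectors and class names are toy illustrations:

```python
import numpy as np

def energy(p, q):
    """Dissimilarity between two latent vectors (assumed Euclidean here)."""
    return float(np.linalg.norm(p - q))

def diagnose(uploaded_vec, standard_vecs_by_class):
    """Pick the class whose standard images are, on average, least dissimilar.

    uploaded_vec: latent vector of the uploaded X-ray (one twin's forward pass).
    standard_vecs_by_class: dict mapping class name -> list of latent vectors
    of standard-dataset images (the other twin's forward passes).
    """
    avg_energy = {
        cls: float(np.mean([energy(uploaded_vec, v) for v in vecs]))
        for cls, vecs in standard_vecs_by_class.items()
    }
    return min(avg_energy, key=avg_energy.get)

# Toy latent vectors (hypothetical):
standard = {
    "covid":     [np.array([0.9, 0.1]), np.array([0.8, 0.2])],
    "pneumonia": [np.array([0.1, 0.9]), np.array([0.2, 0.8])],
}
print(diagnose(np.array([0.85, 0.15]), standard))  # closest class wins
```

The real system would obtain the latent vectors from forward passes of the trained twins rather than from hard-coded arrays.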
The diagnosis made is conveyed back to the user through an online database. \begin{figure}[h!] \includegraphics[width=13cm, height=15cm]{Image_infererence_workflow.png} \caption{General inference process for images} \label{c2} \end{figure} \subsection{Semi-live training} X-ray images used by the backend model can show large variance for a variety of reasons, including the lighting conditions when the picture is taken, the X-ray machine specifications, or the camera quality of the user's smartphone. Since such data-variation challenges must be accounted for in a real-world scenario, a semi-live training scenario was introduced, which enables the parameters of the pretrained model to further adapt to new or variant data. The scenario is triggered when a sufficient amount of data has been obtained. \begin{figure}[h!] \includegraphics[width=12cm, height=13cm]{live_train_flow.png} \caption{Overview of the training process of the Siamese Network during the semi-live training scenario} \label{c3} \end{figure} \section{Experiments and Results} Two different datasets are used to obtain a variety of comparison results for proper model evaluation. On the first and second datasets the tasks are formulated as binary and multiclass classification, respectively. The dataset descriptions and corresponding comparison results are given below. For the selection of the standard dataset, expert advice and third-party help were utilized. \subsection{Datasets} \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{dataset1.jpg} \caption{Class distribution of dataset 1 [10] (graphical representation)} \label{d1} \end{figure} \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{dataset2.jpg} \caption{Class distribution of dataset 2 [38] (graphical representation)} \label{d2} \end{figure} Both datasets used in this study are publicly available. Apart from the selection of the standard dataset, no specific dataset cleansing was done.
Training was carried out on data including even those images with inherent noise features present, which helps confirm the robustness of the proposed model. Dataset 1 was published in 2018; the X-ray images were obtained as part of routine clinical care at the Guangzhou Medical Center from 5,863 different patients. Dataset 2 was published as an effort to provide relevant data for the widespread studies conducted to tackle the COVID-19 pandemic. The test sets of datasets 1 and 2 were selected as 20\% of the images from each class. The training set was further enlarged using the DAGAN model to ensure generalized training for the proposed model. \subsection{Comparative Study} A substantial number of images was selected as the testing set so as to robustly test and evaluate the proposed method. As per the 20\% split, data from each class of the two datasets were randomly selected for inclusion in the test set. For dataset 1, used for the binary classification task, the test set consisted of 1170 of the 5860 images; for dataset 2, used for the multiclass classification task, the test set consisted of 180 of the 905 images. No image generation was performed on the testing set, as it is important to conduct model evaluation on real-world data samples. Since the testing sets in both experiments are large, the confidence interval for the testing accuracy of the proposed model was calculated by assuming a Gaussian distribution for the test-set proportion.
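The interval calculation can be sketched with the standard normal-approximation (Wald) formula for a proportion; the exact critical value used by the reported intervals is not stated, so $z = 1.96$ (95\% confidence) is an assumption here:

```python
import math

def accuracy_interval(p_hat, n, z=1.96):
    """Normal-approximation confidence interval for a test accuracy.

    p_hat: observed accuracy (proportion correct); n: test-set size;
    z: standard-normal critical value (1.96 for 95% confidence).
    """
    margin = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Binary task: 99.3% accuracy on a 1170-image test set.
lo, hi = accuracy_interval(0.993, 1170)
```

The approximation is reasonable here because both test sets are large enough for the binomial proportion to be roughly Gaussian.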
\begin{table}[h] \centering \begin{tabular}{llll} \hline \textbf{Method} & \textbf{Year} & \textbf{Description} & \textbf{\begin{tabular}[c]{@{}l@{}}Testing \\ Accuracy\end{tabular}} \\ \hline {[}39{]} & 2018 & Convolutional Neural Network (CNN) & 92.80\% \\ \hline {[}40{]} & 2019 & \begin{tabular}[c]{@{}l@{}}Deep learning model with 4 convolutional\\ layers and 2 dense layers + classical\\ augmentation\end{tabular} & 93.73\% \\ \hline {[}41{]} & 2019 & \begin{tabular}[c]{@{}l@{}}Deep learning model with 7 convolutional\\ layers and 3 dense layers\end{tabular} & 95.30\% \\ \hline {[}42{]} & 2019 & \begin{tabular}[c]{@{}l@{}}Deep learning model with 49 convolutional\\ layers and 2 dense layers\end{tabular} & 96.70\% \\ \hline {[}43{]} & 2020 & \begin{tabular}[c]{@{}l@{}}Convolutional Neural Network (CNN) \\ + Random forest\end{tabular} & 97.00\% \\ \hline {[}34{]} & 2020 & GAN + ResNet18 & 99.00\% \\ \hline \begin{tabular}[c]{@{}l@{}}Proposed \\ Method\end{tabular} & 2020 & DAGAN + Attention Siamese Net & 99.30 $\pm$ 0.63\% \\ \hline \end{tabular} \caption{Comparison of testing accuracy of the proposed model with related works conducted on dataset 1} \label{tab:my-table} \end{table} \begin{table}[h] \centering \begin{tabular}{llll} \hline \textbf{Method} & \textbf{Year} & \textbf{Description} & \textbf{\begin{tabular}[c]{@{}l@{}}Testing \\ Accuracy\end{tabular}} \\ \hline {[}44{]} & 2020 & Using the pre-trained CheXNet model & 90.50\% \\ \hline {[}45{]} & 2020 & \begin{tabular}[c]{@{}l@{}}Extracts features from chest X-ray \\ images using FrMEMs\end{tabular} & 96.09\% \\ \hline {[}46{]} & 2020 & \begin{tabular}[c]{@{}l@{}}Two-level Hierarchical Deep Neural \\ Network and transfer learning\end{tabular} & 97.80\% \\ \hline \begin{tabular}[c]{@{}l@{}}Proposed \\ Method\end{tabular} & 2020 & DAGAN + Attention Siamese Net & 98.40 $\pm$ 2.18\% \\ \hline \end{tabular} \caption{Comparison of testing accuracy of the proposed model with related works conducted on dataset 2}
\label{tab:my-table 1} \end{table} The tables illustrate the robustness of the proposed model and how effective it is in scenarios where training data availability is limited. \subsection{Final Model Analysis} In order to obtain a robust and deployable model, we combine both datasets and train a multiclass classification model which is robustly evaluated for performance. The testing set is selected to be 20\% of the images from each class, at random. The model thus obtained achieved a testing accuracy of 97.8\%. The validation set is selected to be 20\% of the images in each class, taken from the training set. \begin{figure}[h!] \includegraphics[width=12cm, height=6cm]{conve.png} \caption{The validation loss vs. epoch curve} \label{co1} \end{figure} \begin{figure}[h!] \includegraphics[width=12cm, height=7cm]{diss_range.png} \caption{Range of variation of the dissimilarity index for like and unlike classes} \label{co2} \end{figure} A detailed analysis was carried out to determine the class separation boundary of the latent space; Figure 11 illustrates the large class separation found. The illustrations (Figures 12-14) show how effective the learned latent space is at representing lower-dimensional projections of the images. \begin{figure}[h!] \includegraphics[width=12cm, height=7cm]{com_cov.png} \caption{Comparing a chest X-ray image of a COVID-19 positive patient with test set images in the COVID-19 class} \label{co3} \end{figure} \newpage \begin{figure}[h!] \includegraphics[width=12cm, height=7cm]{com_pne.png} \caption{Comparing a chest X-ray image of a COVID-19 positive patient with test set images in the Pneumocystis class} \label{co4} \end{figure} \begin{figure}[h!]
\includegraphics[width=12cm, height=7cm]{com_norm.png} \caption{Comparing a chest X-ray image of a COVID-19 positive patient with test set images in the Normal class} \label{co5} \end{figure} The dissimilarity index values are high when unlike classes are compared, while a low dissimilarity is obtained when like classes are compared. \section*{Conclusion} In the wake of the global pandemic, preventive and therapeutic solutions are in the limelight as doctors and healthcare professionals work to tackle the threat, with diagnostic methods of extensive capability being the need of the hour. The COVID-19 outbreak has adversely affected all walks of day-to-day life worldwide. The fact remains that the spread of such a disease could have been limited during the early stages with the help of accurate methods of diagnosis. Medical images such as X-rays and CT scans are of great use when it comes to disease diagnosis, with chest X-rays in particular being pivotal in the diagnosis of many common and dangerous respiratory diseases. Radiologists can infer many crucial facts from a chest X-ray which can be put to use in diagnosing several diseases. Modern AI methods that mimic the diagnostic process of radiologists can, for specific pattern-recognition tasks, match or even outperform human readers, owing to strong pattern recognition capabilities and the absence of human error or fatigue, which in turn paves the way for extensive research in this area. A common and efficient approach is to use a Convolutional Neural Network (CNN) based classifier, which can accurately recognise patterns in images to make the necessary predictions. The limitation of this approach is the requirement of huge amounts of data to obtain a classifier model with sufficient generalizability and accuracy.
Improving the metrics of an existing model is a hard task, since the retraining process for a large deep learning model is expensive in terms of time and computation and vulnerable to scalability issues. Hence we adopted feature-comparison-based methods, which are superior to feature-recognition methods in these respects, and exploited a deep generative network for data augmentation. The model exhibited strong comparison metrics with very distinguishable dissimilarity indices: similar classes showed remarkably low indices ranging from 0.05 to 0.6, while different classes had higher values lying between 1.98 and 2.67. These dissimilarity indices, and the large gap between the two ranges, consolidate the fact that our model is able to clearly demarcate and classify diseases with state-of-the-art efficiency. The limitations of this study include the inevitable noise in the dataset used; to tackle this, a cloud-based live training method has been employed which uses properly annotated and identified data from medical practitioners worldwide. The underlying method could be employed to detect several other diseases if necessary, with the modification required to the current model being minimal compared with other deep-learning-based backend systems. Doctors and radiologists can leverage our application to make a reliable remote diagnosis, thereby saving considerable time which can be devoted to medication or prescriptive measures. Due to the high generalisability and data efficiency of the method, the application could prove to be a great tool not only for accurately diagnosing diseases of interest, but also for conducting crucial studies on emerging or rare respiratory conditions. \section*{Code Availability} The custom Python code and Android app used in this study are available from the corresponding author upon reasonable request and are to be used only for educational and research purposes.
\section{Introduction}\label{sec:introduction} Suppose that you move to a new city and are interested in exploring the local music scene. Typically, you might pick up the arts section of the local newspaper or go online to find a community notice board. Either way, you would likely come across a long listing of music events where each event description provides a small amount of contextual information: the names of the artists, the name and location of the venue, the date and start time of the event, the price of the tickets, and perhaps a few genre labels or a sentence fragment that reflects the kind of music you would expect to hear at the event. While this ``public list of events'' model has been successful at getting fans to music events for many decades, we can use modern recommender systems to make music event discovery more efficient and effective. For example, companies like BandsInTown\footnote{https://www.bandsintown.com} and SongKick\footnote{https://www.songkick.com/} help users \emph{track} artists so that the user can be notified when a favorite artist will be playing nearby. They also recommend upcoming events with artists who are similar to one or more of the artists that the user has selected to track. These services have been successful in growing both the number of users and the number of artists and events covered by their service. For example, BandsInTown claims to have 38 million users and lists events for over 430,000 artists\footnote{According to https://en.wikipedia.org/wiki/Bandsintown on March 28, 2018.}. Event listings are added by aggregating information from ticket sellers (e.g., Ticketmaster\footnote{https://www.ticketmaster.com/}, TicketFly\footnote{https://www.ticketfly.com/}) and by artist managers and booking agents who have the ability to directly upload tour dates for their touring artists to these services.
While this coverage is impressive, a large percentage of the events found in local newspapers are not listed on these commercial music event recommendation services. Many talented artists play at small venues (e.g., neighborhood pubs, coffee shops, and DIY shows) and are often not represented by (pro-active, tech-savvy) managers. Yet many music fans enjoy the intimacy of a small venue and a personal connection with local artists, and may have a hard time discovering these events. As such, our goal is to develop a locally-focused music event recommendation system to help foster music discovery within a local music community. Here we define \emph{local} as all music events within a small geographic region (e.g., 10 square miles). This includes national and regional touring acts who may pass through town, but it also includes non-touring artists (e.g., a high school punk band, a barbershop quartet, a jazz trio from the nearby music conservatory, or a neighborhood hip hop collective). What makes this problem technically challenging is that a large percentage of our local artists have a small \emph{digital footprint} or no digital footprint at all. That is, we may not be able to find these artists on sites that typically provide useful music information \cite{turnbull2008five} (e.g., Spotify\footnote{https://developer.spotify.com/}, Last.fm\footnote{https://www.last.fm/api}, AllMusic\footnote{https://www.allmusic.com/}). Similarly, we often do not have music recordings from these artists, so we will not be able to make use of content-based methods for automatic tagging \cite{turnbull2008semantic} or acoustic similarity \cite{mcfee2012learning}. Rather, we will rely on the small amount of contextual information that can be scraped from the event listings in the local newspaper or community notice board. We will first introduce the concept of a \emph{Music Event Graph} as a 4-partite graph that connects genre tags to popular artists to event artists to events.
We then use latent semantic analysis (LSA) \cite{deerwester1990indexing} to embed tags and artists into a latent feature space. We show that LSA is particularly advantageous when considering new or not well-known (long-tail) artists who have small digital footprints. This approach also allows us to independently control the \emph{popularity bias} \cite{turnbull2008five} of our event recommendation algorithm so that events with popular artists are no more or less likely to be recommended than events featuring more obscure local artists. \section{Related Work}\label{sec:related-work} We have been unable to find previous research on the specific task of music event recommendation, though there is a significant amount of work on both music recommendation \cite{celma2010, schedl2017} (i.e., recommending songs and artists) and event recommendation \cite{macedo15,dooms11, minkov2010} (i.e., events posted on social networking sites). In both cases, it is common to explore content-based (i.e., based on the substance of an item), collaborative filtering-based (i.e., based on usage patterns from a large group of users), and hybrid approaches. We consider our approach to be a hybrid approach since we make use of both social tags (content) and artist similarity (collaborative filtering\footnote{While the details of the Last.fm algorithm for computing artist similarity remain a corporate trade secret, it would be reasonable to expect that these scores are computed using some form of collaborative filtering based on the large quantities of user listening histories that they collect \cite{barrington2009}.}) scores from Last.fm. As with many successful recommender systems, we make use of matrix factorization to embed data into a low-dimensional space \cite{koren2009}.
In particular, we use Latent Semantic Analysis (LSA) \cite{deerwester1990indexing}, which is a common approach in both text information retrieval \cite{manning2008} and music information retrieval systems (e.g., \cite{levy2007semantic, laurier2009, oramas2015semantic}). LSA is relatively easy to implement\footnote{For example, see http://scikit-learn.org/stable/modules/generated/ sklearn.decomposition.TruncatedSVD.html}, can improve recommendation accuracy, provides a compact representation of the data, works well with sparse input data, and can help alleviate problems caused by synonymy and polysemy \cite{manning2008}. We note that other embedding techniques, such as probabilistic LSA \cite{hofmann1999probabilistic} and latent Dirichlet allocation (LDA) \cite{blei2003latent}, could also be used as alternatives to LSA. \section{Event Recommendation} When developing an event recommendation system, we consider an interactive experience with three steps: \begin{enumerate} \item \textbf{User selects genre tags}: Ask the user to select one or more tags from a list of broad genres (``rock'', ``hip hop'', ``reggae'') based on the most common genres of the artists who are playing at upcoming local events. \item \textbf{User selects preferred popular artists}: Ask the user to select one or more artists from a list of recognizable mainstream artists (The Beatles, Jay-Z, Bob Marley) based on the selected genres and related to the artists who are playing an upcoming event. \item \textbf{Display of recommended event list}: Show recommended events (with justification) to the user based on the selected genre tag and popular artist preferences. \end{enumerate} This is a common \emph{onboarding} process for both commercial music event services (e.g., BandsInTown) and music streaming services (e.g., Apple Music), since it quickly gives recommender systems a small but sufficient amount of music preference information for new users.
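The three-step flow can be sketched as a toy program; all artist names, edge weights, and the additive scoring rule here are hypothetical illustrations rather than the exact production algorithm:

```python
# Edges of a toy 4-partite graph: tag -> popular artist -> event artist -> event.
# All weights are hypothetical similarity/affinity scores.
tag_to_popular = {"rock": {"The Beatles": 0.9}, "reggae": {"Bob Marley": 0.95}}
popular_to_event_artist = {"The Beatles": {"Local Band A": 0.7},
                           "Bob Marley": {"Local Band B": 0.8}}
event_artist_to_event = {"Local Band A": ["e1"], "Local Band B": ["e2"]}

def recommend(selected_tags, selected_popular):
    """Accumulate a relevance score per event by walking left to right."""
    scores = {}
    for tag in selected_tags:
        for pop, w_tp in tag_to_popular.get(tag, {}).items():
            if pop not in selected_popular:
                continue  # only follow popular artists the user picked
            for ea, w_pe in popular_to_event_artist.get(pop, {}).items():
                for event in event_artist_to_event.get(ea, []):
                    scores[event] = scores.get(event, 0.0) + w_tp * w_pe
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"rock", "reggae"}, {"Bob Marley"}))  # -> ['e2']
```

Events reachable along several weighted paths accumulate a higher score and so rank earlier in the returned list.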
After onboarding, a user can drill down into specific artists or events, as well as listen to related music, explore a map of venues, etc. In this section, we describe the concept of a Music Event Graph and show how we can use it to efficiently recommend local music events based on the music preference information collected during user onboarding. \subsection{Music Event Graphs} When considering event recommendation, there are two phases to consider: offline computation of relevance information for all upcoming events, and real-time personalized event recommendation. We use a Music Event Graph to structure our event recommendation system. The music event graph is a k-partite graph with $k=4$ levels. Our four levels represent common genre tags, popular artists, event artists, and events, as shown in Figure \ref{fig:event-graph-creation}. \begin{figure} \centering \includegraphics[width=3.3in]{CreatingTheEventGraph.png} \caption{Constructing a Music Event Graph: First we collect \emph{events} and the \emph{event artists} who will be performing at these events (right nodes and edges). Then, we select \emph{genre tags} and related \emph{popular artists} (left nodes and edges). Finally, we connect event artists to popular artists based on our artist similarity calculation (middle edges).} \label{fig:event-graph-creation} \end{figure} To construct the graph, we perform the following steps: \begin{enumerate} \item Collect a set of upcoming local \emph{events}. \item Construct the set of \emph{event artists} from all of the local events. \item Find the most frequently used \emph{genre tags} (e.g., ``rock'', ``jazz'', ``hip hop'') associated with the event artists. \item Using the genre tags, create a set of \emph{popular artists} by selecting the most well-known artists that are strongly associated with each genre. \item For each event artist, find the most similar artists from the set of popular artists.
\end{enumerate} In Section \ref{sec:artist-similarity}, we describe how we use harvested tags and artist similarity information to compute similarity between pairs of artists, as well as between artists and tags. These similarities are represented as real-valued weights, and as such, the event graph contains weighted edges. Based on the interactive design described above, we can efficiently recommend events using a Music Event Graph. The user selects one or more preferred genres and then a set of relevant popular artists. Next, our algorithm selects the event artists, and their related events, that are connected to the user's selected genres and popular artists. This graph traversal algorithm is depicted in Figure \ref{fig:event-graph-recommendation}. \begin{figure} \centering \includegraphics[width=3.3in]{EventGraphRecommendation.png} \caption{Recommendation with the Music Event Graph: a user selects genre tags $\{t_1,t_3\}$. She is then shown popular artists $\{pa_1, pa_2, pa_5, pa_6\}$ and selects $\{pa_2, pa_6\}$. We then use the graph to strongly recommend event $e_4$ with artists $ea_3$ and $ea_5$ based on multiple connections. We would also recommend events $e_1$ and $e_2$ based on their connections through $ea_1$ and $ea_3$, respectively. } \label{fig:event-graph-recommendation} \end{figure} We note that our algorithm uses the weighted edges to compute user-specific relevance scores for each event as we move from left to right in the graph structure. In addition, we can use the graph structure to provide \emph{recommendation transparency} \cite{sinha2002role} by keeping track of the paths that are used to get from the user's genre and popular artist selections to the recommended event artists and events. \section{Artist Similarity and Tag Affinity}\label{sec:artist-similarity} At the core of the event recommendation system, we use Latent Semantic Analysis (LSA) to calculate artist similarity and artist-tag affinity.
That is, we use truncated singular value decomposition (SVD) to transform a large, sparse data matrix of artist similarity and tag information into a lower-dimensional matrix such that each artist and tag is embedded into a dense, k-dimensional \emph{latent} feature space. Note that $k$ is a hyperparameter that is set based on empirical evaluation. We can then calculate artist-artist or artist-tag similarity using the cosine similarity between pairs of vectors in this latent space. Before we describe LSA, we start with some useful notation for our problem setup: \begin{itemize} \item [$\mathcal{A}$]: set of artists. \item [$\mathcal{T}$]: set of tags. Tags are any free-text tokens that can be used to describe music. This may include genres, emotions, instruments, usages, etc. \item [$\mathcal{T}^G$]: a small subset of \emph{genre} tags (e.g., ``rock'', ``country'', ``blues'') that are frequently used to categorize music. \item [$\mathcal{A}^P$]: set of \emph{popular} artists where each artist in the set is one of the most recognizable artists associated with at least one of the genre tags in $\mathcal{T}^G$. \item [$\mathcal{E}$]: set of local music events. \item [$\mathcal{A}^{\mathcal{E}}$]: set of \emph{event} artists where each artist has one or more upcoming events in $\mathcal{E}$. \item [$\mathcal{F}$]: set of features where $\mathcal{F}= \mathcal{A} \cup \mathcal{T}$. That is, we will describe each artist $a$ as a (sparse) feature vector of artist similarity and tag affinity values in $\mathbb{R}^{|\mathcal{A}|+|\mathcal{T}|}$. \item [$X$]: (sparse) raw data matrix with $X \in \mathbb{R}^{|\mathcal{A}| \times |\mathcal{F}|}$, where $x_{i,j} \in [0,1]$ represents the affinity between the $i$-th artist (row) and the $j$-th feature (column). A value of 0 represents either no affinity or unknown affinity. Note that all artists are self-similar so that $x_{i,i} = 1$.
In terms of practical implementation, we can construct $X$ by stacking our $|\mathcal{A}| \times |\mathcal{A}|$ artist similarity matrix next to our $|\mathcal{A}| \times |\mathcal{T}|$ artist-tag affinity matrix. \end{itemize} LSA uses the truncated SVD algorithm to decompose the raw data matrix $X$ as follows: \begin{equation} X \approx X_k = U_k \Sigma_k V_k^{T} \end{equation} such that the matrix $X_k$ is a rank-$k$ approximation of $X$, $U_k$ is an $|\mathcal{A}| \times k$ matrix, $\Sigma_k$ is a diagonal $k \times k$ matrix of singular values, and $V_k^{T}$ is a $k \times |\mathcal{F}|$ matrix. We then project each artist and tag into a $k$-dimensional latent feature space: \begin{equation} X_{SVD} = \Sigma_k V_k^{T} \end{equation} where $X_{SVD} \in \mathbb{R}^{k \times |\mathcal{F}|}$, or equivalently $\mathbb{R}^{k \times (|\mathcal{A}|+|\mathcal{T}|)}$, by construction. That is, the first $|\mathcal{A}|$ columns of $X_{SVD}$ represent artists and the last $|\mathcal{T}|$ columns represent tags, all embedded into the same $k$-dimensional space. We can also embed a new artist with raw feature vector $\mathbf{x} \in \mathbb{R}^{1 \times |\mathcal{F}|}$ by computing \begin{equation} \mathbf{x}_{SVD} = \mathbf{x} V_k \Sigma_k^{-1} \end{equation} so that $\mathbf{x}_{SVD} \in \mathbb{R}^{1 \times k}$ is projected into the same latent feature space. Finally, we can compute artist-artist, artist-tag, or tag-tag similarity in the embedded space by comparing the respective (column) vectors in $X_{SVD}$. For example, given two latent feature vectors $p$ and $q$, we can compute their cosine similarity: \begin{equation} \cos(p, q) = \frac{p \cdot q}{||p||~||q||} \end{equation} where $p$ and $q$ are $k$-dimensional vectors and $||x|| = \sqrt{\sum_{i=1}^{k} x_i^2}$ is the l2-norm of a vector $x$. One nice property of cosine similarity is that it tends to remove popularity bias.
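The decomposition and similarity computations above can be sketched directly in NumPy; the matrix below is a toy stand-in for $X$ (4 artists, 4 artist-similarity columns plus 2 tag columns) and all values are hypothetical:

```python
import numpy as np

# Toy raw data matrix X: rows are artists; columns are features
# (artist-similarity columns 0-3 followed by tag-affinity columns 4-5).
X = np.array([
    [1.0, 0.8, 0.0, 0.0, 0.9, 0.0],   # artist 0
    [0.8, 1.0, 0.0, 0.0, 0.7, 0.0],   # artist 1
    [0.0, 0.0, 1.0, 0.7, 0.0, 0.9],   # artist 2
    [0.0, 0.0, 0.7, 1.0, 0.0, 0.8],   # artist 3
])

k = 2                                  # latent dimension (hyperparameter)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_svd = np.diag(s[:k]) @ Vt[:k]        # Sigma_k V_k^T: one column per feature

def cos(p, q):
    """Cosine similarity between two latent vectors."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

# Artist-tag affinity in the latent space: artist 0 vs. tag column 4.
sim = cos(X_svd[:, 0], X_svd[:, 4])

# Folding in a new artist from its raw feature vector x:
x_new = np.array([0.9, 0.7, 0.0, 0.0, 1.0, 0.0])
x_svd = x_new @ Vt[:k].T @ np.diag(1.0 / s[:k])
```

Note that the `cos` function length-normalizes each vector, which is the normalization behind the popularity-bias property discussed in the text.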
That is, cosine similarity normalizes the feature vectors by their length (l2-norm) so that each artist (and tag) vector has unit length. Without this normalization, popular artists, which tend to have a bigger digital footprint (and thus a denser raw feature vector with a larger l2-norm), would produce larger similarity scores on average than more obscure artists. \section{Event and Artist Data}\label{sec:data} The data for our experiments is constructed by scraping local events from both Ticketfly\footnote{https://www.ticketfly.com scraped February 15, 2018.} and the web-based public event calendar from a local newspaper\footnote{Details omitted during anonymous review process.}. We collected a total of $|\mathcal{E}| = 96$ events: 66 events from Ticketfly, 36 events from the local newspaper, and 6 overlapping events between both websites. These events produced a set of $|\mathcal{A}^{\mathcal{E}}| = 154$ event artists. We were also able to download short biographies for almost all of the event artists for events obtained from Ticketfly. The local newspaper only provides us with 1 to 3 genre tags for about half of the events we obtained from their site. We then used the Last.fm API\footnote{https://www.last.fm/api} to collect music information (popularity, biography text, artist similarity scores, and tag affinity scores) for each of our event artists. We then used snowball sampling on the similar artists to obtain this same Last.fm music information for non-event artists, continuing until we had a set of 10,000 artists (i.e., $|\mathcal{A}| = 10,000$). We define our set of tags $\mathcal{T}$ as the 1585 tags which are associated with 20 or more artists. Our set of genre tags $\mathcal{T}^{\mathcal{G}}$ are the 20 tags most frequently associated with our event artists $\mathcal{A}^{\mathcal{E}}$. These include tags like ``rock'', ``jazz'', and ``reggae''.
However, we manually pruned tags which are obviously not genres, like ``seen live'' and ``favorites''. Finally, for each artist, we concatenate all available biographies (Last.fm, Ticketfly, local newspaper) and attempt to find each of our tags in the combined biography text. If a tag is found, we label the artist with that tag. This is especially important since, otherwise, many of our event artists would not be labeled with any tags. In the end, we have 977,270 artist similarities and 456,867 artist-tag affinities. \section{Exploring Artist Similarity in the Long Tail} The core of our local event recommendation algorithm is our artist similarity calculation based on Latent Semantic Analysis (LSA). In this section, we show that most local event artists are relatively obscure \emph{long-tail} artists and that they tend to have small digital footprints. We also explore the relationship between digital footprint size and the accuracy of our artist similarity calculation. \subsection{Long-tail Event Artists} \begin{figure} \centering \includegraphics[width=3.3in]{LongTail.png} \caption{(Top) Plot of Last.fm listener count vs. normalized popularity rank (rank divided by the total number of artists) for the 10,000 artists in our data set. (Bottom) Histogram of popularity for the 154 event artists placed into 10 deciles of overall artist popularity. Note that most event artists reside in the long tail of this popularity distribution.} \label{fig:long-tail} \end{figure} In the top plot of Figure \ref{fig:long-tail}, we rank all 10,000 of our artists by their Last.fm listener counts. This shows a typical long-tail (power-law) distribution in which a small number of popular artists in the short head (left) receive much more attention than the vast majority of other artists in the long tail (right) \cite{celma2010,anderson2004long}. For example, the most popular 16.3\% of artists account for 80\% of the listener counts.
In the bottom plot, we show a histogram of the event artists' Last.fm listener counts broken down into deciles. We note that a disproportionate number of local event artists reside in the long tail of this popularity distribution. In particular, 99 of the 154 event artists (64.2\%) are in the lowest three deciles of the ranking. \subsection{The Digital Footprint of Event Artists} \begin{figure} \centering \includegraphics[width=3.3in]{FootprintDistribution.png} \caption{Cumulative distributions of digital footprint size (i.e., the number of nonzero artist similarity and tag affinity scores for each artist) for event artists and for all artists.} \label{fig:footprint-dist} \end{figure} As we discussed in the Introduction, obscure artists tend to have small digital footprints. To show this, we consider the digital footprint of an artist to be the number of artist similarities plus the number of tag affinities for that artist. Equivalently, it is the number of nonzero values in the row of our raw data matrix $X$ that is associated with the artist. We note that digital footprint size is correlated with popularity rank ($r = -0.56$) such that popular artists tend to have larger digital footprints. In Figure \ref{fig:footprint-dist}, we plot the empirical cumulative distribution of digital footprint size for both event artists and all artists. We see that about 27.2\% of the event artists have a digital footprint of 15 or fewer nonzero features, whereas only 2.8\% of all artists have footprints this small. This suggests that it will be important for us to design an artist similarity algorithm that works well in this \emph{small digital footprint} setting.
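The footprint statistics used here are simple to compute from the raw data matrix; the following minimal sketch (our own helper names, dense arrays for clarity) counts nonzero features per row and evaluates the empirical cumulative distribution:

```python
import numpy as np

def footprint_sizes(X):
    """Digital footprint of each artist: the number of nonzero artist
    similarity and tag affinity values in its row of X."""
    return np.count_nonzero(X, axis=1)

def frac_at_or_below(sizes, threshold):
    """Empirical CDF: fraction of artists with footprint <= threshold."""
    return float((np.asarray(sizes) <= threshold).mean())
```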
\subsection{Artist Similarity with Latent Semantic Analysis} \begin{figure} \centering \includegraphics[width=3.3in]{ReducedDigitalFootprint.png} \caption{Plot of artist similarity ranking performance as a function of the (artificially reduced) digital footprint size for LSA embeddings with ranks of 32, 64, 128, and 256 dimensions. The \emph{Raw} approach represents computing cosine distances without first applying LSA to the raw artist data vectors.} \label{fig:reduced-footprint} \end{figure} In Section \ref{sec:artist-similarity}, we introduced LSA as an algorithm for computing artist similarity. However, as we observed in the previous subsection, we are particularly interested in the case where an artist is represented by a small number of artist similarities and tag affinities (i.e., a small digital footprint). To explore this, we artificially reduce the digital footprint of artists to a fixed size and measure how accurately LSA is able to compute artist similarity. To do this, we randomly split our data set of $|\mathcal{A}| = 10,000$ artists into a training set of $|\mathcal{A}_{train}| = 9,000$ artists and a test set of $|\mathcal{A}_{test}| = 1,000$ artists. Note that this involves removing 1,000 rows \emph{and} 1,000 columns from our raw data matrix $X$ since artists are also features. The training data is used to calculate our matrix decomposition $X_{train} \approx U_k \Sigma_k V_k^{T}$ for a given embedding dimension $k$, where $X_{train}$ is the submatrix of $X$ restricted to the training artists. Before projecting the test artists into the latent feature space, we limit the digital footprint size of each artist by randomly selecting artist similarity and tag affinity features to zero out. We then project the test artists into the latent feature space and calculate the cosine distance between each pair of test set artists. Finally, we calculate the Area Under the ROC Curve (AUC) \cite{manning2008} for each artist, where the original artist similarities serve as the ground truth.
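The footprint-reduction protocol can be sketched as follows (a simplified illustration with our own helper names; AUC is computed with the Mann-Whitney rank-sum formula, ignoring ties):

```python
import numpy as np

def reduce_footprint(x, size, rng):
    """Artificially limit a raw feature vector to `size` randomly chosen
    nonzero features, zeroing out the rest."""
    x = x.copy()
    nonzero = np.flatnonzero(x)
    if len(nonzero) > size:
        drop = rng.choice(nonzero, size=len(nonzero) - size, replace=False)
        x[drop] = 0.0
    return x

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum formula."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```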
Figure \ref{fig:reduced-footprint} shows a plot of artificially reduced digital footprint size versus average AUC over the 1,000 test set artists for various LSA embedding dimensions. We also plot the curve for when we compute cosine distances between the \emph{raw} test artist vectors without projecting into a latent feature space. Here we note that LSA shows an improvement over raw cosine distance in the small-footprint setting of between 1 and 16 nonzero features. Once the digital footprint is larger than 128 nonzero features, the raw cosine approach slightly outperforms the LSA-based approach. However, the compactness of representing each artist with 32 or 64 floating point numbers may be advantageous in terms of storage size and computation time when we consider a much larger set of artists and tags. As such, we use 64-dimensional LSA embeddings for the remaining experiments in this paper. \section{Exploring Event Recommendation} To explore the performance of event recommendation using event graphs and LSA-based artist similarity, we conducted a small user study with a short two-phase survey. We recruited 51 participants who were very familiar with the local music scene and attend live events in the area on a weekly basis. In the first phase of our survey, we asked participants to select between 1 and 3 genres from a set of 20 common genres. For each selected genre, the test subject was then asked to select between 1 and 3 artists from a set of 16 popular artists that were representative of the genre (i.e., having a high cosine similarity score between the 64-dimensional latent feature vectors of the genre and the artist). In the second phase, participants were shown a list of the 154 event artists in our data set. They were asked to select all artists that they would like to see at a live event in the local area and were required to select 5 or more event artists.
To evaluate our system, we use each test subject's selected genres or popular artists from phase 1 of the survey to rank order the 154 event artists using one of the approaches described below. In all cases, we embedded artists and tags into a 64-dimensional latent feature space using LSA with the data set described in Section \ref{sec:data}. We then calculate the area under the ROC curve (AUC) for each user, where ground-truth relevance is determined from phase 2 of the survey. Each test subject provides multiple genre and multiple popular artist preferences. We explore a number of ways to combine these preferences to produce one ranking of the event artists for each test subject, considering \emph{early fusion} and \emph{late fusion} steps for a number of approaches. In early fusion, we start with a set of latent feature vectors, each associated with one of the user's genre or artist preferences. We consider three approaches: \begin{itemize} \item \textbf{average} the latent feature vectors into one vector \item \textbf{cluster} the latent feature vectors and use the cluster centroid vectors \item \textbf{none} use all of the latent feature vectors \end{itemize} When clustering, we use the $k$-means clustering algorithm\footnote{http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html} with the number of clusters equal to the rounded natural log of the number of user preferences. For \emph{late fusion}, we must output one ranking of the event artists for each user. We consider three approaches: \begin{itemize} \item \textbf{average cosine} ranks event artists by the average of the cosine similarity scores between the event artist vector and each vector in the set of user preference vectors. \item \textbf{average rank} creates one ranking of event artists for each user preference vector, calculates the average rank for each event artist over this set of rankings, and then ranks event artists by this average rank.
\item \textbf{interleave} creates one ranking of the event artists for each user preference vector, and then constructs a final ranking by alternating between these ranking lists and picking the top remaining artist that has not already been added to the final ranking. \end{itemize} \begin{table}[] \centering \caption{Event artist recommendation performance. The mean and standard deviation of AUC for our 8 expert test subjects when considering popular artist preferences, genre preferences, and both preferences together. See text for details on the seven approaches and the two baselines. } \vspace{3mm} \label{tab:event-artist-recommend} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{1}{|c|}{Approach} & \multicolumn{3}{|c|}{ User Preferences } \\ \hline \multicolumn{1}{|c|}{Early / Late Fusion} & Artists & Genres & Both \\ \hline none / avg. cosine & \textbf{.79} (.09) & .69 (.16) & .74 (.12) \\ none / avg. rank & .75 (.11) & .66 (.15) & .76 (.09) \\ none / interleave & \textbf{.79} (.11) & .69 (.15) & .71 (.15) \\ average / cosine & \textbf{.79} (.09) & .69 (.16) & .74 (.12) \\ cluster / avg. cosine & .78 (.09) & .69 (.18) & .74 (.11) \\ cluster / avg. rank & .74 (.13) & .66 (.20) & .68 (.17) \\ cluster / interleave & .78 (.10) & .69 (.16) & .75 (.09) \\ \hline \multicolumn{1}{|c|}{Baseline} & \multicolumn{3}{|c|}{~} \\ \hline random & \multicolumn{3}{|c|}{.50 (.12)}\\ popularity & \multicolumn{3}{|c|}{.53 (-)}\\ \hline \end{tabular} \end{table} Table \ref{tab:event-artist-recommend} shows average AUCs (and standard deviations) for our seven early/late fusion approaches when we use each user's popular artist preferences, genre tag preferences, and both sets of preferences together. We also include a popularity baseline that ranks all event artists by their Last.fm listener count, as well as a random shuffle baseline.
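For concreteness, the three late-fusion strategies can be sketched over latent feature vectors as follows (our own minimal implementation; ties and unequal ranking lengths are ignored):

```python
import numpy as np

def cosine_sim(P, Q):
    """Pairwise cosine similarities between rows of P and rows of Q."""
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    return Pn @ Qn.T

def rank_avg_cosine(prefs, events):
    """Rank event artists by mean cosine similarity to the user
    preference vectors (best first)."""
    return list(np.argsort(-cosine_sim(prefs, events).mean(axis=0)))

def rank_avg_rank(prefs, events):
    """One ranking per preference vector; rank events by average rank."""
    sims = cosine_sim(prefs, events)
    ranks = (-sims).argsort(axis=1).argsort(axis=1)  # rank of each event
    return list(ranks.mean(axis=0).argsort())

def interleave(rankings):
    """Alternate between per-preference rankings, keeping the first
    occurrence of each event artist."""
    out, seen = [], set()
    for round_ in zip(*rankings):
        for artist in round_:
            if artist not in seen:
                seen.add(artist)
                out.append(artist)
    return out
```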
We observe that artist preferences alone result in the best performance and that a number of our proposed early/late fusion approaches produce similar results. We should also mention that we collected survey data from individuals who attended local shows on a less frequent (monthly) basis. The results for these test subjects were significantly lower (average AUC of 0.61) and more variable (AUC standard deviation of 0.15) for our best performing approach (Genre Preferences / None / Interleave). In an error analysis of many of these less regular attendees, we often found that they selected a very eclectic set of event artists which did not match their stated preferences. As such, it would have been difficult for any recommender system to make accurate recommendations for many of these test subjects. This suggests that test subjects need to have a high level of familiarity with the local music community in order to provide useful ground truth for our experiment. \section{Discussion} In this paper, we explored the understudied task of local music event recommendation. This is an exciting task for the research community because it involves many interesting problems: long-tail recommendation, the new user \& new artist cold start problems, multiple types of music information (artist similarity, tags), and user preference modeling. It is also an interesting problem outside of the academic research community since music event recommender systems can be used to help grow and support the local arts community. By promoting the work of talented local musicians, such systems can help fans discover new artists and help musicians reach new audiences. These audiences in turn attend more events, which helps sustain concert venues, music festivals, and other (local) businesses that benefit from direct ticket sales and other forms of indirect support (e.g., food, drinks, merchandise).
While we were able to evaluate our system using a survey of local music experts, a more natural way to evaluate music event recommendation would be to build an interactive application that collects user feedback over a longer period of time. We plan to develop such an app in the coming months and hope that it will be useful for expanding on the research presented in this paper.
\section{Introduction} Recent work has shown that spatial structures with density fluctuations weaker at long wavelengths than those of a typical random point set may have desirable physical properties, and such structures are said to be {\em hyperuniform}~\cite{To03a}. Crystals and quasicrystals are hyperuniform, as are a variety of disordered systems, including certain equilibrium structures, products of nonequilibrium self-assembly protocols, and fabricated metamaterials. (For examples, see Refs.~\cite{Man2013,Haberko2013,Dre15,Torquato2015,Hex15,Castro-Lopez2017,Torquato2018}.) One approach to generating point sets with nontrivial spatial fluctuations is to use substitution tilings as templates. Our aim in this paper is to characterize the degree of hyperuniformity in such systems and thereby provide design principles for creating hyperuniform (or anti-hyperuniform) point sets with desired scaling properties. Substitution tilings are self-similar, space-filling tilings generated by repeated application of a rule that replaces each of a finite set of tile types with scaled copies of some or all of the tiles in the set~\cite{Frank2008}. We are interested in the properties of point sets formed by decorating each tile of the same type in the same way. We consider here only one-dimensional (1D) tilings. Although generalization to higher dimensions would be of great interest, the 1D case already reveals important conceptual features. Substitution rules are known to produce a variety of structures with qualitatively different types of structure factors $S(k)$. Some rules generate periodic or quasiperiodic tilings, in which case $S(k)$ consists of Bragg peaks on a reciprocal space lattice supported at sums and differences of a (small) set of basis wavevectors, which in the quasiperiodic case form a dense set. 
Others produce limit-periodic structures consisting of Bragg peaks located on a different type of dense set consisting of wavenumbers of the form $\pm k_0 n/p^m$, where $n$, $m$, and $p$ are positive integers~\cite{Godreche1989,Baake2011}. Still others produce structures for which $S(k)$ is singular at a dense set of points but does not consist of Bragg peaks~\cite{Bom86,Godreche1992,Baake2017}, for which $S(k)$ is absolutely continuous~\cite{Baake2012}, or for which the nature of the spectrum has not been clearly described. The precise natures of the spectra are in many cases not fully understood. In this paper, we present a simple ansatz that predicts the scaling properties relevant for assessing the hyperuniformity (or anti-hyperuniformity) of 1D substitution tilings. We illustrate the validity of the ansatz via numerical computations for a variety of example tilings that fall in different classes with respect to hyperuniformity measures. We also delineate the full range of behaviors that can be obtained using the substitution construction method, including a novel class in which $Z(k)$ decays faster than any power. Section~\ref{sec:alpha} reviews the definition of the scaling exponent $\alpha$ associated with both the integrated Fourier intensity $Z(k)$ and the variance $\sigma^2(R)$ in the number of points covered by a randomly placed interval of length $2R$. We then review the classification of tilings based on the value of $\alpha$. Section~\ref{sec:substitution} reviews the substitution method for creating tilings, using the well-known Fibonacci tiling as an illustrative example. The substitution matrix ${\bm M}$ is defined and straightforward results for tile densities are derived. Section~\ref{sec:scaling} presents a heuristic discussion of the link between density fluctuations in the tilings and the behaviors of $S(k)$ and $Z(k)$, which leads to a prediction for $\alpha$.
The prediction is shown to be accurate for example tilings of three qualitatively distinct types~\cite{Torquato2018}: strongly hyperuniform (Class~I), weakly hyperuniform (Class~III), and anti-hyperuniform. Section~\ref{sec:alpharange} shows, based on the heuristic theory, that the range of possible values of $\alpha$ produced by 1D substitution rules is $[-1,3]$ and that this interval is densely filled. Section~\ref{sec:lp} considers substitutions that produce limit-periodic tilings. Examples are presented of four distinct classes: logarithmic hyperuniform (Class~II), weakly hyperuniform (Class~III), anti-hyperuniform, and an anomalous class in which $Z(k)$ approaches zero faster than any power law. Finally, Section~\ref{sec:discussion} provides a summary of the key results, including a table showing which types of tilings can exhibit the various classes of (anti-)hyperuniformity. \section{Classes of hyperuniformity} \label{sec:alpha} For hyperuniform systems having a structure factor $S({\bm k})$ that is a smooth function of the wavenumber $k$, $S({\bm k})$ tends to zero as $k$ tends to zero \cite{To03a}, typically scaling as a power law \begin{equation} S({\bm k}) \sim k^{\alpha}. \label{eqn:hyper} \end{equation} In one dimension, a unified treatment of standard cases with smooth $S(k)$ and quasicrystals with dense but discontinuous $S(k)$ is obtained by defining $\alpha$ in terms of the scaling of the integrated Fourier intensity \begin{equation} Z(k) = 2 \int_0^k S(q) \, dq\,. \label{Z} \end{equation} In both cases, $\alpha$ may be defined by the relation~\cite{Oguz2016} \begin{equation} Z(k) \sim k^{1+\alpha} \quad {\rm as} \; k \rightarrow 0\,. \label{eqn:Z2} \end{equation} Systems with $\alpha>0$ have long-wavelength spatial fluctuations that are suppressed compared to those of Poisson point sets and are said to be hyperuniform~\cite{To03a}. Prototypical strongly hyperuniform systems (with $\alpha>1$) include crystals and quasicrystals.
We refer to systems with $\alpha<0$ as anti-hyperuniform~\cite{Torquato2018}. Prototypical examples of anti-hyperuniformity include systems at thermal critical points. An alternate measure of hyperuniformity is based on the local number variance of particles within a spherical observation window of radius $R$ (an interval of length $2R$ in the 1D case), denoted by $\sigma^2(R)$. If $\sigma^2(R)$ grows more slowly than the window volume (proportional to $R$ in 1D) in the large-$R$ limit, the system is hyperuniform. The scaling behavior of $\sigma^2(R)$ is closely related to the behavior of $Z(k)$ for small $k$~\cite{To03a,Oguz2016}. For a general point configuration in one dimension with a well-defined average number density $\rho$, $\sigma^2(R)$ can be expressed in terms of $S(k)$ and the Fourier transform ${\tilde \mu}(k;R)$ of a uniform density interval of length $2R$: \begin{equation} \sigma^2(R)= 2R\rho \Big[\frac{1}{2\pi} \int_{-\infty}^{\infty} S({k}) {\tilde \mu}(k;R) d{k}\Big] \label{eqn:local} \end{equation} with \begin{equation} {\tilde \mu}(k;R)= 2\frac{\sin^2(k R)}{k^2 R}\,, \label{eqn:alpha-k} \end{equation} where $\rho$ is the density. (See Ref.~\cite{To03a} for the generalization to higher dimensions.) One can express the number variance alternatively in terms of the integrated intensity~\cite{Oguz2016}: \begin{equation} \sigma^2(R)= - 2R\rho\Bigg[\frac{1}{2\pi} \int_0^\infty Z(k) \frac{\partial {\tilde{\mu}(k;R)}}{\partial k} dk \Bigg] \,. \label{eqn:local-1} \end{equation} For any 1D system with a smooth or quasicrystalline structure factor, the scaling of $\sigma^2(R)$ for large $R$ is determined by $\alpha$ as follows~\cite{To03a,Za09,Torquato2018}: \begin{equation} \label{eqn:alphanu} \sigma^2(R) \sim \left\{\begin{array}{ll} R^0, & \alpha > 1 \quad {\rm (Class\ I)} \\ \ln R, & \alpha = 1 \quad {\rm (Class\ II)} \\ R^{1-\alpha}, & \alpha < 1 \quad {\rm (Class\ III)} \end{array}\right.\,. 
\end{equation} For hyperuniform systems, we have $\alpha>0$, and the distinct behaviors of $\sigma^2(R)$ define the three classes, which we refer to as strongly hyperuniform (Class~I), logarithmic hyperuniform (Class~II), and weakly hyperuniform (Class~III). The bounded number fluctuations of Class~I occur trivially for one-dimensional periodic point sets (crystals) and are also known to occur for certain quasicrystals, including the canonical Fibonacci tiling described below~\cite{Oguz2016}. Other quasiperiodic point sets (not obtainable by substitution) are known to belong to Class~II~\cite{Kesten1966,Aubry1987,Oguz2016}. \section{Substitution tilings and the substitution matrix} \label{sec:substitution} A classic example of a substitution tiling is the one-dimensional Fibonacci tiling composed of two intervals (tiles) of lengths $L$ and $S$. The tiling is generated by the rule \begin{equation} \label{eqn:fibsub} L\rightarrow LS; \quad S\rightarrow L\,, \end{equation} which leads to a quasiperiodic sequence of $L$ and $S$ intervals. An important construct for characterizing the properties of the tiling is the {\em substitution matrix} \begin{equation} {\bm M} = \left( \begin{array}{cc} 0 & 1 \\ 1 & 1 \end{array}\right), \end{equation} which acts on the column vector $(N_S,N_L)$ to give the numbers of $S$ and $L$ tiles resulting from the substitution operation. If the lengths $L$ and $S$ are chosen such that the ratio $L/S$ remains fixed under substitution, which in the present case requires $L/S = (1+\sqrt{5})/2\equiv\tau$, the substitution operation can be viewed as an affine stretching of the original tiling by a factor of $\tau$ followed by the division of each stretched $L$ tile into an $LS$ pair, as illustrated in Fig.~\ref{fig:substitution}. \begin{figure} \includegraphics[width=\columnwidth]{figsubstitution.pdf} \caption{The Fibonacci substitution rule.
The tiling on the upper line is uniformly stretched, then additional points are added to form tiles congruent to the originals.} \label{fig:substitution} \end{figure} Given a finite sequence with $N_S$ tiles of length $S$ and $N_L$ tiles of length $L$, the numbers of $S$'s and $L$'s after one iteration of the substitution rule are given by the action of the substitution matrix on the column vector $(N_S,N_L)$. More generally, substitution rules can be defined for systems with more than two tile types, leading to substitution matrices with dimension $D$ greater than 2. We present explicit reasoning here only for the $D=2$ case. A substitution rule for two tile types is characterized by a substitution matrix \begin{equation} {\bm M} = \left( \begin{array}{cc} a & b \\ c & d \end{array}\right)\,. \end{equation} The associated rule may be the following: \begin{equation} S\rightarrow \underbrace{SS\ldots S}_{a}\underbrace{LL\ldots L}_{c}, \quad L\rightarrow \underbrace{SS\ldots S}_{b}\underbrace{LL\ldots L}_{d}\,, \end{equation} but different orderings of the tiles in the substituted strings are possible, and the choice can have dramatic effects. Note, for example, that the rule \begin{equation} S\rightarrow SL, \quad L\rightarrow SLSL \end{equation} produces the periodic tiling $\ldots SLSLSL \ldots$, while the rule \begin{equation} S\rightarrow SL, \quad L\rightarrow SLLS \end{equation} produces the more complicated sequence discussed below in Section~\ref{sec:lp}. Defining the substitution tiling requires assigning finite lengths to $S$ and $L$. We let $\xi$ denote the length ratio $L/S$, and we consider only cases where the substitution rule preserves this ratio (i.e., $(bS + dL)/(aS + cL) = L/S$) so that the rule can be realized by affine stretching followed by subdivision. This requires \begin{equation} \xi = \frac{d-a+\sqrt{(a-d)^2 + 4bc}}{2c}\,.
\end{equation} For all discussions and plots below, we measure lengths in units of the short tile length, $S$. The $SL$ sequence generated by a substitution rule is obtained by repeated application of the rule to some seed, which we take to be a string containing $n_S$ short intervals and $n_L$ long ones. We are interested in point sets formed by decorating each $L$ tile with $\ell$ points and each $S$ tile with $s$ points. The total number of points at the $m^{th}$ iteration is \begin{equation} {\cal N}_m = (s,\ell)\cdot {\bm M}^m \cdot (n_S,n_L), \end{equation} and the length of the tiling at the same step is \begin{equation} {\cal X}_m = (1,\xi) \cdot {\bm M}^m \cdot (n_S,n_L). \end{equation} Let $\lambda_1$ and $\lambda_2$ be the eigenvalues of ${\bm M}$, with $\lambda_1$ being the larger in magnitude, and let ${{\bm v}}_1$ and ${{\bm v}}_2$ be the associated eigenvectors. We have \begin{align} \lambda_1 = a + c\,\xi\,; & \quad \lambda_2 = d - c\,\xi\,; \\ {{\bm v}}_1 = \left(b/c,\, \xi\right)\,; & \quad {{\bm v}}_2 = \left(-\xi,\, 1\right)\,. \end{align} The unit vectors $(1,0)$ and $(0,1)$ may be expressed as follows: \begin{align} (1,0) & = u(c\,{{\bm v}}_1 - c\,\xi {{\bm v}}_2)\,, \\ (0,1) & = u(c\,\xi {{\bm v}}_1 + b\, {{\bm v}}_2)\,, \end{align} where $u = 1/(b+c\,\xi^2)$. We then have \begin{eqnarray} {\bm M}^m \cdot(n_S,n_L) &=& {\bm M}^m \cdot\big(n_S(1,0)+n_L(0,1)\big) \nonumber \\ &=& u \bigg( \lambda_1^m \left(c\,n_S+ c\,\xi n_L\right){{\bm v}}_1 \\ &\ & \quad + \lambda_2^m \left(-c\,\xi\,n_S+ b\,n_L\right){{\bm v}}_2 \bigg)\,.
\nonumber \end{eqnarray} The density of tile vertices after $m$ iterations, $\rho_m = {\cal N}_m/{\cal X}_m$, is thus \begin{equation} \label{eqn:rhom} \rho_m = \overline{\rho} + \left(\frac{\xi(s\xi-\ell)}{b+c\,\xi^2}\right)\left(\frac{c\,\xi n_S - b\, n_L}{n_S+\xi n_L }\right) \left( \dfrac{\lambda_2}{\lambda_1} \right)^m, \end{equation} with $\overline{\rho} = (b s+c\ell\,\xi)/(b+c\,\xi^2)$, where we have used the fact that $(1,\xi)\cdot{{\bm v}}_2 = 0$. \section{Scaling properties of 1D substitution tilings} \label{sec:scaling} As long as the coefficient of $(\lambda_2/\lambda_1)^m$ in Eq.~(\ref{eqn:rhom}) does not vanish, the deviations of $\rho$ from $\overline{\rho}$ for portions of the tiling that are mapped into each other by substitution are related by \begin{equation} \delta\rho_{m+1} = \dfrac{\lambda_2}{\lambda_1}\delta\rho_m \,. \label{eqn:deltarho} \end{equation} If the coefficient does vanish, which requires that $\xi$ be rational, the tiling may be periodic, but the ordering of the intervals in the seed becomes important. We will revisit this point below. For now we assume that the tiling is not periodic. We make three conjectures regarding nonperiodic substitution tilings, supported, as we shall see, by numerical experiments. The results are closely related to recently derived rigorous results~\cite{Baake2018b}. \begin{description} \item[Conjecture 1] We take Eq.~(\ref{eqn:deltarho}) to be the dominant behavior of density fluctuations throughout the system, not just for the special intervals that are directly related by substitution. That is, we assume that there exists a characteristic amplitude of the density fluctuations at a given length scale after averaging over all intervals of that length, and that the $\delta\rho$ in Eq.~(\ref{eqn:deltarho}) can be interpreted as that characteristic amplitude. 
\item[Conjecture 2] We assume that the Fourier amplitudes $A(k)$ scale the same way as the density fluctuations at the corresponding length scale: \begin{equation} A(k/\lambda_1) = \frac{\lambda_2}{\lambda_1}A(k)\,. \end{equation} This implies the form \begin{equation} \label{eqn:Ak} A(k) \sim k^{(-\ln |\lambda_2/\lambda_1 |/ \ln |\lambda_1 |)} = k^{1-(\ln |\lambda_2 |/ \ln |\lambda_1 |)} \,. \end{equation} Squaring to get $S(k)$, we have \begin{equation} \label{eqn:sofk} S(k) \sim k^{(2 - 2 \ln |\lambda_2 |/ \ln |\lambda_1 |)}\,. \end{equation} This conjecture may not hold when interference effects are important, as in the case discussed in Sec.~\ref{sec:lp} below. \item[Conjecture 3] While $Z(k)$ is an integral of $S(k)$, the exponent must be calculated carefully when $S(k)$ consists of singular peaks. In the Fibonacci projection cases, the scaling of peak positions and intensities conspire to make $Z(k)$ scale with the same exponent as the envelope of $S(k)$~\cite{Oguz2016}. We assume that this property carries over to substitution tilings with more than one eigenvalue greater than unity. Though the diffraction pattern is not made up of Bragg peaks~\cite{Bom86,Godreche1990}, we conjecture that it remains sufficiently singular for the relation to hold. Thus we immediately obtain \begin{equation} \label{eqn:alpha} \alpha = 1 - 2\left( \frac{\ln |\lambda_2 |}{\ln \lambda_1}\right)\,. \end{equation} \end{description} Note that this calculation of the scaling exponent makes no reference to the distinction between substitutions with $|\lambda_2|<1$ and those with $|\lambda_2|>1$. In the former case, $\lambda_1$ is a Pisot-Vijayaraghavan (PV) number, $S(k)$ consists of Bragg peaks, and $\sigma^2(R)$ remains bounded for all $R$. 
In the latter case, the form of $S(k)$ is more complex~\cite{Bom86}, and quantities closely related to $\sigma^2(R)$, including the ``wandering exponent'' associated with lifts of the sequence onto a higher dimensional hypercubic lattice, are known to show nontrivial scaling exponents~\cite{Godreche1990}. From Eq.~(\ref{eqn:alpha}), we see that the hyperuniformity condition $\alpha > 0$ requires $|\lambda_2| < \sqrt{\lambda_1}$. Though the result was obtained for substitutions with only $D=2$ tile types, it holds for $D>2$ as well, so long as all ratios of tile lengths are preserved by the substitution rules; i.e., the dominant contribution to the long-wavelength fluctuations still scales like $|\lambda_2| / \lambda_1$. This distinction between hyperuniform and anti-hyperuniform substitution tilings thus divides the non-PV numbers into two classes that, to our knowledge, have not previously been identified as significantly different. We note, for example, that the analysis presented in Ref.~\cite{Baake2018}, which treats substitution matrices of the form $(0,n,1,1)$ and shows that they have singular continuous spectra (having no Bragg component or absolutely continuous component) for $n>2$, does not detect any qualitative difference between the cases $n=3$ and $n=5$. The former case is hyperuniform, with $\lambda = (1/2)(1\pm\sqrt{13})$ and $\alpha \approx 0.37$, while the latter is anti-hyperuniform, with $\lambda = (1/2)(1\pm\sqrt{21})$ and $\alpha \approx -0.14$. For the Fibonacci case, we have $\lambda_1 = \tau$ and $\lambda_2 = -1/\tau$, yielding $\alpha=3$, which agrees with the explicit calculation in Ref.~\cite{Oguz2016}. Considering $(a,b,c,d)$ of the form $(0,n,n,n)$ for arbitrary $n$, we find cases that allow explicit checks of our predictions for $\alpha$ for both hyperuniform and anti-hyperuniform systems.
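Since Eq.~(\ref{eqn:alpha}) depends only on the two leading eigenvalues of ${\bm M}$, the examples above can be checked directly. The following short sketch (an illustration added here, not part of the original analysis) evaluates $\alpha$ from a substitution matrix written as $[[a,b],[c,d]]$:

```python
import numpy as np

# Sketch: evaluate Eq. (eqn:alpha), alpha = 1 - 2 ln|lambda_2| / ln(lambda_1),
# directly from the substitution matrix.
def alpha_exponent(M):
    lam = np.sort(np.abs(np.linalg.eigvals(np.asarray(M, dtype=float))))
    return 1.0 - 2.0 * np.log(lam[-2]) / np.log(lam[-1])

print(alpha_exponent([[0, 3], [1, 1]]))  # (0,3,1,1): ~0.37, hyperuniform
print(alpha_exponent([[0, 5], [1, 1]]))  # (0,5,1,1): ~-0.14, anti-hyperuniform
print(alpha_exponent([[0, 1], [1, 1]]))  # Fibonacci (0,1,1,1): exactly 3
```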
We have $\lambda_1 = n\tau$ and $\lambda_2 = -n/\tau$, yielding \begin{equation} \alpha = 1-2\left( \frac{\ln n - \ln \tau} {\ln n + \ln \tau}\right)\,. \label{eq:exp_n} \end{equation} For $n\ge 2$, the presence of more than one eigenvalue with magnitude greater than unity gives rise to more complex spectral features, possibly including a singular continuous component. For $2 \le n \le 4$, our calculation predicts $0 < \alpha < 1$ and hence $\sigma^2(R) \sim R^{1-\alpha}$. We numerically verify the latter result for $n=2$ using a set of 954,369 points generated by 12 iterations of the substitution tiling, where the decoration consists of placing one point at the rightmost edge of each tile (with $s=\ell=1$). Fig.~\ref{fig:numvar_n02} shows the computed number variance. For each point, a window of length $2R$ is moved continuously along the sequence and averages are computed by weighting the number of points in the window by the interval length over which that number does not change. A regression analysis yields $\sigma^2 (R) \sim R^{0.36}$, in close agreement with the predicted exponent from Eq.~(\ref{eq:exp_n}): $1-\alpha = 2 (\ln 2 - \ln \tau) / (\ln 2 + \ln \tau) \approx 0.36094$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fib2s2loglog.png} \caption{Log-log plot of the number variance (black dots) for a non-PV substitution tiling corresponding to $(a,b,c,d)=(0,2,2,2)$ decorated with points of equal weight at each tile boundary. The variance was computed numerically for the tiling created by 11 iterations of the substitution on the initial seed $SL$. The red dashed line has the predicted slope $1-\alpha \approx 0.36$.} \label{fig:numvar_n02} \end{figure} For $n \ge 5$, the calculated value of $\alpha$ is negative, approaching $-1$ as $n$ approaches infinity. The point set is therefore anti-hyperuniform; it contains density fluctuations at long wavelengths that are stronger than those of a Poisson point set.
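The window-averaging procedure just described can be approximated by a short Monte Carlo sketch (randomly sampled window positions rather than the exact length-weighted average, and fewer iterations; all parameter choices here are illustrative):

```python
import numpy as np

# Sketch: build the non-PV tiling for (a,b,c,d) = (0,2,2,2), i.e. S -> LL,
# L -> SSLL (one ordering choice; others work too), with tile lengths S = 1
# and L = tau, place one point at the right edge of each tile, and estimate
# sigma^2(R) from randomly placed windows of length 2R.
tau = (1 + 5**0.5) / 2
sub = {"S": "LL", "L": "SSLL"}
seq = "SL"
for _ in range(10):
    seq = "".join(sub[c] for c in seq)

lengths = np.where(np.frombuffer(seq.encode(), np.uint8) == ord("S"), 1.0, tau)
points = np.cumsum(lengths)              # right edge of each tile

rng = np.random.default_rng(0)
radii = np.logspace(0.5, 3, 8)
var = []
for R in radii:
    starts = rng.uniform(0, points[-1] - 2 * R, 4000)
    counts = np.searchsorted(points, starts + 2 * R) - np.searchsorted(points, starts)
    var.append(counts.var())

slope, _ = np.polyfit(np.log(radii), np.log(var), 1)
print(f"sigma^2(R) ~ R^{slope:.2f}")     # expected exponent 1 - alpha ~ 0.36
```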
For $n=5$, we have $\alpha = -0.0793\ldots$. Fig.~\ref{fig:numvar_n05} shows a log-log plot of the computed number variance along with the line corresponding to $\sigma^2 (R) \sim R^{1-\alpha}$. Again the agreement between the numerical result and the predicted value is quite good. \begin{figure} \centering \includegraphics[width=\columnwidth]{fib5s2loglog.png} \caption{Log-log plot of the number variance (black dots) for an anti-hyperuniform substitution tiling corresponding to $(a,b,c,d)=(0,5,5,5)$ decorated with points of equal weight at each tile boundary. The variance was computed numerically for the tiling created by 6 iterations of the substitution on the initial seed $SL$. The red dashed line has the predicted slope $1-\alpha \approx 1.08$.} \label{fig:numvar_n05} \end{figure} Intuition derived from theories based on nonsingular forms of $S(k)$ suggests that a negative value of $\alpha$ should be associated with a divergence in $S(k)$ for small $k$, though it remains true that $Z(k)$ converges to zero for $\alpha > -1$. For singular spectra, the envelope of $S(k)$ scales like $Z(k)$, and we do not expect any dramatic change in the behavior of $S(k)$ as $\alpha$ crosses from positive (hyperuniform) to negative (anti-hyperuniform). The theories presented in Refs.~\cite{Baake2017} and~\cite{Godreche1990} may provide a path to the computation of scaling properties of $S(k)$ in these cases. It is worth noting, however, that the various classes of behavior can be realized by substitutions that produce limit-periodic tilings with $S(k)$ consisting entirely of Bragg peaks with no singular-continuous component, as shown in Section~\ref{sec:lp} below. For rules that yield rational values of the length ratio $\xi$, the coefficient of $(\lambda_2/\lambda_1)^m$ in Eq.~(\ref{eqn:rhom}) can vanish for appropriate choices of $n_S$ and $n_L$, suggesting that there are no fluctuations about the average density that scale with wavelength.
This reflects the fact that the sequence of intervals associated with the substitutions can be chosen to generate a periodic pattern. (A simple example is $S\rightarrow L$ and $L\rightarrow SLS$, which generates the periodic sequence $\ldots SLSLSL\ldots$, with $\xi = 2$, $\lambda_1=2$, and $\lambda_2=-1$.) For such cases, $S(k)$ is identically 0 for all $k$ smaller than the reciprocal lattice basis vector. For other interval sequence choices corresponding to the same ${\bm M}$, the tiling can be limit-periodic, and we would expect the scaling to be given by applying the above considerations with generic choices of the ordering, which would yield $\alpha=1$ and therefore a logarithmic scaling of $\sigma^2(R)$. This case is presented in more detail in Section~\ref{sec:lp} below, and the logarithmic scaling is confirmed. \section{Achievable values of {\boldmath\large $\alpha$}} \label{sec:alpharange} Beyond establishing that substitution tilings exist for each hyperuniformity class, it is natural to ask whether any desired value of $\alpha$ can be realized by this construction method. Here we show that if ${\bm M}$ is full rank, $\alpha$ always lies between $-1$ and $3$. First, note that the maximum value of $|\lambda_2/\lambda_1|$ is 1, by definition, which sets the lower bound on $\alpha$ via Eq.~(\ref{eqn:alpha}). The upper bound on $\alpha$ is obtained when $|\lambda_2|$ is as small as possible, but there is a limit on how small this can be. The product of the eigenvalues of ${\bm M}$ is equal to $\det {\bm M}$, so $|\lambda_2|$ cannot be smaller than $(|\det{\bm M}| / \lambda_1)^{\frac{1}{(D-1)}}$. But $|\det{\bm M}|$ is an integer, and the smallest nonzero value it can take is $1$. (The case $\lambda_2 = 0$ is discussed in Section~\ref{sec:lp} below. For $D\ge 3$, one can have $\det{\bm M} = 0$ with nonzero $\lambda_2$. The analysis of such cases is beyond our present scope.)
Hence we have \begin{equation} |\lambda_2| \ge \lambda_1^{-1/(D-1)} \,, \end{equation} implying \begin{equation} \alpha = 1 - 2 \frac{\ln|\lambda_2|}{\ln\lambda_1} \le \frac{D+1}{D-1}\,. \end{equation} Thus the maximum value of $\alpha$ obtainable by this construction method is $3$, which can occur for $D=2$, as in the Fibonacci case. The family of substitutions considered in Section~\ref{sec:scaling} above produces a discrete set of values of $\alpha$ ranging from $-1$ to $3$. By considering two additional families, we can show that the possible values of $\alpha$ densely fill this interval. For \begin{equation} {\bm M} = \left(\begin{array}{cc} a & 0 \\ c & d\end{array}\right) \end{equation} with $d>a+1$ and $2c<(d-a)$, we have $\lambda_1 = d$ and $\lambda_2 = a$. Note that $\xi = (d-a)/c$ is rational here; we assume that the substitution sequences for the two tiles are chosen so as to avoid periodicity. We have \begin{equation} \alpha = 1 - 2 \frac{\ln a}{\ln d}\,. \end{equation} For fixed $a$, $d$ can range from $a+2$ to $\infty$. As $d$ approaches infinity, $\alpha$ approaches $1$. For $d = a+2$, as $a$ approaches infinity, $\alpha$ approaches $-1$. For sufficiently large $d$, the values of $a$ between $1$ and $d-2$ yield an arbitrarily dense set of $\alpha$'s between $-1$ and $1$. Another class of ${\bm M}$'s produces $\alpha$'s between $1$ and $3$. For \begin{equation} {\bm M} = \left(\begin{array}{cc} 0 & b \\ b & n\,b \end{array}\right) \end{equation} with $n>b$, we have \begin{align} \xi & = \frac{1}{2}(n + \sqrt{n^2+4})\,,\\ \lambda_{1,2} & = \frac{b}{2}(n \pm \sqrt{n^2+4})\,. \end{align} We thus obtain \begin{equation} \alpha = 1 - 2 \frac{\ln b - \ln 2 + \ln (\sqrt{n^2+4}-n)}{\ln b - \ln 2 + \ln (\sqrt{n^2+4}+n)}\,. \end{equation} For large $n$, we have \begin{equation} \alpha \approx 1 - 2 \frac{\ln b - \ln n}{\ln b + \ln n}\,, \end{equation} which approaches $3$ for $b \ll n$ and approaches $1$ for $b=n$.
By making $n$ as large as desired, the values of $b$ between $1$ and $n$ give $\alpha$'s that fill the interval between $1$ and $3$ with arbitrarily high density. \section{Limit-periodic tilings} \label{sec:lp} For a limit-periodic tiling, the set of tiles is a union of periodic patterns with ever increasing lattice constants of the form $a p^n$, where $p$ is an integer and $n$ runs over all positive integers~\cite{Godreche1989,Baake2011,Socolar2011}. We show here that there exist limit-periodic tilings in four hyperuniformity classes: logarithmic (Class II), weakly hyperuniform (Class III), anti-hyperuniform, and an anomalous case in which $Z(k)$ decays to zero faster than any power law as $k$ goes to zero. The latter corresponds to a rule for which $\det{\bm M} = 0$ (and $\lambda_2 = 0$), in which case $\alpha$ is not well defined. The existence of anti-hyperuniform limit-periodic tilings shows that anti-hyperuniformity does not require exotic singularities in $S(k)$ for small $k$. Generally, it requires only that $Z(k)$ grows sub-linearly with $k$. \subsection{The logarithmic case ($\alpha = 1$)} The rule $L\rightarrow LSS$, $S\rightarrow L$ with $S=1$ and $L=2$ yields the well-known ``period doubling'' limit-periodic tiling. The eigenvalues of the substitution matrix are $\lambda_1 = 2$ and $\lambda_2 = -1$, leading to the prediction $\alpha=1$ and therefore quadratic scaling of $Z(k)$ and logarithmic scaling of $\sigma^2(R)$. Numerical results for $\sigma^2(R)$ are in good agreement with this prediction~\cite{Torquato2018b}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figlpdoublingsigma.png} \caption{The $\alpha=1$ (period doubling) limit-periodic tiling. Top: the tile boundaries with each point plotted at a height corresponding to the value of $n$ for the sublattice it belongs to. Bottom: Plot of the number variance.
The horizontal dotted line marks $\sigma^2=2/9$, which is obtained for every $R$ of the form $2^n$ with integer $n\geq-1$. The dashed lines indicate upper bounds, and the open circles are analytically calculated values for $R = 2^n/3$. See text for details.} \label{fig:numvar_lpa1} \end{figure} In fact, one can show explicitly via direct calculation of $\sigma^2(R)$ that the scaling is logarithmic. The calculation outlined in the appendix shows that \begin{equation} \label{eqn:s2sum} \sigma^2(R) = \frac{1}{3}\sum_{n=0}^\infty \left\{\frac{w}{2^n}\right\}\left(1-\left\{\frac{w}{2^n}\right\}\right)\,, \end{equation} where $w = 2R$ and $\{x\}$ denotes the fractional part of $x$. From this it follows that for $R = 2^{n-1}/3$ with $n\geq 1$ we have \begin{equation} \sigma^2(R) = \frac{2}{27} \left( \frac{13}{3} + n \right)\,, \end{equation} demonstrating clear logarithmic growth for this special sequence of $R$ values. One can also derive an upper bound over the interval $2^{n-1}<R\leq 2^n$ by assuming that the summand in Eq.~(\ref{eqn:s2sum}) takes its maximum possible value on the intervals $(0,1/2], (1/2,1], (2^{m-1},2^{m}]$, for $m\leq n$, and maximizing the possible sum of the exponentially decaying remaining contributions. The result is \begin{equation} \sigma^2(R) < \frac{1}{4} \left(1 + \frac{n}{3}\right) \quad {\rm for\ } 2^{n-1}<R\leq 2^n\,. \end{equation} This upper bound also grows logarithmically and is shown as a series of dashed lines in Fig.~\ref{fig:numvar_lpa1}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figperioddoublingS.png} \includegraphics[width=\columnwidth]{figperioddoublingZ.png} \caption{The $\alpha=1$ limit-periodic tiling. Top: a logarithmic plot of the analytically computed $S(k)$ (arbitrarily scaled) including $k_{mn}$ with $n\leq 3$. Bottom: a log-log plot of $Z(k)$ computed numerically from $S(k)$. 
The dashed red line shows the expected quadratic scaling law.} \label{fig:lpdoubling} \end{figure} It is instructive to carry out a more detailed analysis of $S(k)$ for this particularly simple case as well. (See also Ref.~\cite{Torquato2018b}.) The tiling generated by applying the substitution rule repeatedly to a single $L$ with its left edge at $x=1$ consists of points located at positions $4^{\ell} (2j+1)$, where $\ell$ and $j$ range over all nonnegative integers. The structure factor therefore consists of peaks at $k_{mn} = 2\pi m/(a p^n)$, with $a=2$ and $p=4$, for arbitrarily large $n$ and all integer $m$. For $m$ not a multiple of $4^{n-1}$, the peak at $k_{mn}$ gets nonzero contributions only from the lattices with $\ell \geq n$. These can be summed as follows: \begin{eqnarray} S(k_{mn}) & = & \lim_{N\rightarrow\infty} \bigg| \sum_{\ell = n}^{\infty} \left( \frac{1}{2\times 4^{\ell}}\right) \nonumber \\ \ & \ & \times \frac{1}{N} \sum_{j=0}^{N-1} \exp\left(\frac{2\pi i m 4^{\ell}(2j+1)}{2\times4^n}\right) \bigg|^2 \\ \ & = & \left(\frac{1}{9 \times 4^{2n}}\right) 4^{\rm{mod}_2(m+1)}\,, \end{eqnarray} where the factor of $1/(2\times 4^{\ell})$ in the first line is the density of the sublattice with that lattice constant. Applying this reasoning to each value of $n$ gives a result that can be compactly expressed as \begin{equation} S(k_{m\nu}) = \left[\frac{{\rm GCD}\left(2^{\nu},m\right)}{3 \times 4^{\nu}} \right]^2\,, \end{equation} where $\nu$ is an arbitrarily large integer, ${\rm GCD}()$ is the greatest common divisor function, and $m$ can now take any positive integer value. Figure~\ref{fig:lpdoubling} shows plots of $S(k)$ and $Z(k)$ for this tiling. (See also Ref.~\cite{Torquato2018b} for an explicit expression for $Z(k)$ and proof of the quadratic scaling.) Note that the apparent repeating unit in the plot of $Z(k)$ spans only a factor of 2, even though the scaling factor for the lattice constants is 4.
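Eq.~(\ref{eqn:s2sum}) is cheap to evaluate numerically. The short sketch below (an illustration added here, not part of the original derivation; the truncation depth is an arbitrary choice) reproduces two features visible in Fig.~\ref{fig:numvar_lpa1}: the value $\sigma^2 = 2/9$ at $R = 2^n$ and equal increments of $2/27$ between successive values $R = 2^{n-1}/3$:

```python
import numpy as np

def sigma2(R, terms=64):
    # Truncated evaluation of Eq. (eqn:s2sum); the summand decays like w/2^n,
    # so 64 terms are ample for moderate R.
    w = 2.0 * R
    x = (w / 2.0 ** np.arange(terms)) % 1.0   # fractional parts {w / 2^n}
    return np.sum(x * (1.0 - x)) / 3.0

print(sigma2(1.0))                        # 2/9 = 0.2222... at R = 2^n
print(sigma2(2.0 / 3) - sigma2(1.0 / 3))  # 2/27 per step along R = 2^(n-1)/3
```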
A similar effect occurs in the Poisson and anti-hyperuniform cases below. In the present case, the construction in the appendix showing that the density can be expressed using lattice constants $1/2^n$ explains the origin of the effect. \subsection{A Poisson scaling example ($\alpha = 0$) \\ and weak hyperuniformity ($0<\alpha<1$)} The substitution rule \begin{equation} S\rightarrow LL, \quad L\rightarrow LLSSSS \label{eqn:lprulepoisson} \end{equation} with $S=1$ and $L=2$ produces a limit-periodic tiling with $a=2$ and $p = 16$. Equation~(\ref{eqn:alpha}) yields $\alpha = 0$, which is the value corresponding to a Poisson system. Figure~\ref{fig:lppoisson} shows the result of direct computations of $Z(k)$ including all of the Bragg peaks at $k = 2\pi n/ (a p^3)$ and of $\sigma^2(R)$. Values of $\sigma^2$ were computed from a sequence of $21,889$ points obtained by seven iterations of the substitution rule on an initial $L$ tile. For each point, a window of length $2R$ is moved continuously along the sequence for the computation of the averages. \begin{figure} \centering \includegraphics[width=\columnwidth]{figlppoissonZ.png} \includegraphics[width=\columnwidth]{figlppoissonsigma.png} \caption{Comparison of direct computation of $Z(k)$ and $\sigma^2(R)$ with the predicted scaling laws for a limit-periodic tiling with $\alpha = 0$. The dashed red lines show the expected linear scaling laws. The inset shows the piecewise parabolic behavior of $\sigma^2(R)$ over a small span of $R$ values.} \label{fig:lppoisson} \end{figure} Limit-periodic examples of weak hyperuniformity (Class~III) are afforded by substitutions of the form \begin{equation} {\bm M} = \left( \begin{array}{cc} 0 & 2n \\ 2 & 2(n-1) \end{array}\right)\,, \end{equation} with $n\geq 3$ and $L/S=n$, which yields \begin{eqnarray} \alpha & = & \frac{\ln n - \ln 2}{\ln n + \ln 2} \\ \ & = & \{0.226294, 1/3, 0.39794, 0.442114, \ldots\}\,.
\end{eqnarray} \subsection{Anti-hyperuniformity ($\alpha < 0$)} The substitution rule \begin{equation} S\rightarrow LLL, \quad L\rightarrow LLLSSSSSS \label{eqn:lpruleanti} \end{equation} with $S=1$ and $L=2$ produces a limit-periodic tiling with $a=2$ and $p = 36$. Equation~(\ref{eqn:alpha}) yields \begin{equation} \alpha = 1-2\frac{\ln 3}{\ln 6} = -0.226294\ldots\,, \end{equation} which indicates anti-hyperuniform fluctuations. Figure~\ref{fig:lpantihyper} shows the result of a direct computation of $Z(k)$ including all of the Bragg peaks at $k = 2\pi n/ (a p^3)$. \begin{figure} \centering \includegraphics[width=\columnwidth]{figlpantihyperZ.png} \caption{Comparison of direct computation of $Z(k)$ with the predicted scaling law for a limit-periodic tiling with $\alpha = -0.226294\ldots$. The dashed red line shows the expected scaling law with slope $1+\alpha$.} \label{fig:lpantihyper} \end{figure} More generally, substitution matrices of the form \begin{equation} {\bm M} = \left( \begin{array}{cc} 0 & 2 n \\ n & n \end{array}\right) \end{equation} with $n\geq 3$ and $L/S=2$ yield anti-hyperuniform limit-periodic tilings with \begin{eqnarray} \alpha & = & 1 - 2\frac{\ln n}{\ln 2n} = \frac{\ln 2 - \ln n}{\ln 2 + \ln n}\\ \ & = &\{-0.226294, -1/3, -0.39794, \ldots\}\,. \end{eqnarray} \subsection{A $\lambda_2 = 0$ case ($\alpha$ undefined)} A special class of tilings is derived from substitution matrices of dimension $D=2$ that have $\lambda_2 = 0$ (and hence $\det{\bm M} = 0$). Such rules can produce periodic tilings, limit-periodic ones, or more complex structures. The criteria for limit-periodicity can be obtained by analyzing constant-length substitution rules in which each $L$ is considered to be made up of two tiles of unit length: $L = AB$. If the induced substitution rule on $S$, $A$, and $B$ exhibits appropriate coincidences, the tiling is limit-periodic~\cite{Dekking1978,Queffelec1995}.
For the substitution matrix $(1,1,2,2)$, the rule $[S\rightarrow SL$; $L\rightarrow SLSL]$ produces a periodic tiling, and $[S\rightarrow SL$; $L\rightarrow SLLS]$, for example, produces a limit-periodic tiling. For the limit-periodic cases, the analysis above would suggest $\alpha \rightarrow \infty$, or, more properly, that $\alpha$ is not well defined. We present here an analysis of a particular case for which the convergence of $Z(k)$ to zero is indeed observed to be faster than any power law. The substitution rule \begin{equation} S\rightarrow SL, \quad L\rightarrow SLLS \label{eqn:lprule} \end{equation} with $S=1$ and $L=2$ produces a limit-periodic tiling with $a=1$ and $p = 3$. Inspection of the point set (displayed in Fig.~\ref{fig:lp0}) reveals that the number of points in the basis of each periodic subset for $n\ge 2$ is $2^{n-2}$. The density of points in subset $n\ge 2$ is $(1/4)(2/3)^n$. The substitution matrix ${\bm M} = (1,1,2,2)$ has eigenvalues $\lambda_1 = 3$ and $\lambda_2 = 0$. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figlppoints.png} \includegraphics[width=\columnwidth]{figformfac.png} \includegraphics[width=\columnwidth]{figlp0Z.png} \includegraphics[width=\columnwidth]{figlp0s2.png} \caption{Top: Periodic sublattices of the limit-periodic point set generated by Eq.~(\ref{eqn:lprule}). Each point is plotted at a height $n$ corresponding to the subset that contains it. Points of the same color form a periodic pattern with period $3^n$. Second: Deviation of $|A(k_n)|$ from $1/3^n$. Third: The integrated structure factor for the limit-periodic tiling with $\lambda_2 = 0$, computed from subsets with $n\le 8$. The straight red (dashed) line of slope 5 is a guide to the eye for observing the concavity of the curve.
Bottom: Plot of the number variance for the limit-periodic tiling with $\lambda_2 = 0$.} \label{fig:lp0} \end{figure} The unusual scaling in this case arises from interference effects associated with the form factors of the different periodic subsets. Let $k_n = 2\pi/3^n$ be the fundamental wavenumber for the $n$th subset, and let $X_n$ denote the set of points in a single unit cell of the $n$th subset. $S(k_n)$ has contributions coming from all subsets of order $n$ and higher. (Subsets of lower order do not contribute, as their fundamental wavenumber is larger than $k_n$.) After some algebra, we find \begin{align} A(k_n) = & \frac{1}{3^n} \sum_{x\in X_n} e^{2\pi i x/3^n} \\ \ & + \frac{1}{3^{n+1}}\bigg[ \sum_{x\in X_{n+1}}\!\!\! e^{2\pi i x/3^n} + \sum_{x\in X_{n+2}}\!\!\! e^{2\pi i x/3^n} \bigg].\nonumber \end{align} Numerical evaluation of the sums over the unit cell bases reveals that $A(k_n)$ is suppressed by the interference from subsets of higher order. Figure~\ref{fig:lp0} shows the behavior of the quantity $F_n=3^n |A(k_n)|$, revealing a rapid decay for small $k_n$. The red (dashed) line shows the curve $F_x = (1/3) (3x)^{-\ln(9x)/2}$, which appears to fit the points well. An analytic calculation of $Z(k_n)$ is beyond our present reach. The middle panel of Fig.~\ref{fig:lp0} shows the results of a numerical computation that includes all peaks $k = 2\pi m/p^6$, with $p=3$.
It is clear that $Z(k)$ is concave downward on the log-log plot, consistent with the expectation that $Z(k)$ goes to zero faster \onecolumngrid \begin{table*}[h] \begin{tabular}{|c|c|c|c|c|c|c|} \hline \ & Anti- & Weakly & Logarithmically & \multicolumn{3}{c|}{Strongly hyperuniform} \\ \ & hyperuniform & hyperuniform & hyperuniform & \multicolumn{3}{l|}{\ } \\ \ & \ & (Class III) & (Class II) & (Class I) & Anomalous & Gapped \\ \hline \ & $-1\leq\alpha \leq 0$ & $0 < \alpha < 1$ & $\alpha = 1$ & $1<\alpha \leq 3$ & $\alpha \rightarrow \infty$ & $\alpha$ irrelevant \\ \hline Periodic & $-$ & $-$ & $-$ & $-$ & $-$ & $\bullet$ \\ Quasiperiodic & ? & ? & $\bullet$ & $\bullet$ & ? & $-$ \\ Non-PV & $\bullet$ & $\bullet$ & $-$ & $-$ & $-$ & $-$ \\ Limit-periodic & $\bullet$ & $\bullet$ & $\bullet$ & ? & $\bullet$ & $-$ \\ \hline \end{tabular} \caption{Types of 1D tilings and their possible hyperuniformity classes. A bullet indicates that tilings of the given type exist, a dash that there are no such tilings, and a question mark that we are not sure whether such tilings exist. } \label{tab:types} \end{table*} \twocolumngrid than any power of $k$. Note that the curve is not reliable for the smallest values of $k$ due to the cutoff on the resolution of $k$ values that are included. The deviation from power law scaling is most easily seen in the increase with $n$ of the step sizes of the large jumps at $k = 2\pi/3^n$. (Compare to the constant step sizes in Figs.~\ref{fig:lpdoubling}, \ref{fig:lppoisson}, and~\ref{fig:lpantihyper}.) For completeness, the bottom panel of Fig.~\ref{fig:lp0} also shows a plot of the number variance for this tiling. As expected, $\sigma^2(R)$ is bounded from above. We note that the curve appears to be piecewise parabolic, which is also the case for the standard Fibonacci quasicrystal~\cite{Oguz2016}, though the technique for calculating $\sigma^2(R)$ based on projecting the tiling vertices from a 2D lattice is not applicable here.
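As a closing consistency check on the limit-periodic families of Section~\ref{sec:lp}, the quoted exponents follow from the eigenvalue pairs $(\lambda_1,\lambda_2) = (2n,-2)$ for the weakly hyperuniform family and $(2n,-n)$ for the anti-hyperuniform one, and can be reproduced numerically (an illustrative sketch added here; note that the two families give exponents of equal magnitude and opposite sign):

```python
import numpy as np

# Sketch: alpha = 1 - 2 ln|lambda_2| / ln(lambda_1) for the two
# limit-periodic families discussed in the text.
def alpha_exponent(M):
    lam = np.sort(np.abs(np.linalg.eigvals(np.asarray(M, dtype=float))))
    return 1.0 - 2.0 * np.log(lam[-2]) / np.log(lam[-1])

for n in (3, 4, 5, 6):
    weak = alpha_exponent([[0, 2 * n], [2, 2 * (n - 1)]])  # Class III family
    anti = alpha_exponent([[0, 2 * n], [n, n]])            # anti-hyperuniform
    print(n, round(weak, 6), round(anti, 6))               # anti = -weak
```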
\section{Discussion} \label{sec:discussion} We have presented a heuristic method for calculating the hyperuniformity exponent $\alpha$ characterizing point sets generated by substitution rules that preserve the length ratios of the intervals between points. The calculation relies only on the relevant substitution matrix and an assumption that the tile order under substitution does not lead to a periodic tiling. The method performs well in that it yields a value of $\alpha$ consistent with direct measurements of the scaling of $\sigma^2(R)$ in several representative cases. This allows for a straightforward construction of point sets with any value of $\alpha$ between $-1$ and $3$. It is well known that substitution rules can be divided into two distinct classes: those whose substitution matrices have $|\lambda_2| > 1$ (so that $\lambda_1$ is not a PV number) lead to structure factors $S(k)$ that are singular continuous~\cite{Bom86,Baake2017}, while substitution rules for which $|\lambda_2| < 1$ yield Bragg peaks. Our analysis shows that this distinction corresponds to $\alpha$ less than or greater than unity, respectively. From the perspective of hyperuniformity, on the other hand, the critical value of $\alpha$ is zero, which corresponds to $|\lambda_2|=\sqrt{\lambda_1}$. To achieve $\alpha<0$, a naive comparison to scaling theories for systems with continuous spectra would suggest that $S(k)$ must diverge for small $k$. We find, however, that anti-hyperuniformity, which does require sub-linear scaling of $Z(k)$, can occur without any divergence both in cases where the spectrum is singular continuous, as for non-PV substitutions, and in cases where the spectrum consists of a dense set of Bragg peaks, as in some limit-periodic systems. Finally, our investigations led us to consider the results of applying substitution rules for which $\lambda_2 = 0$, which turned up a novel case of a limit-periodic tiling for which $Z(k)$ approaches zero faster than any power law.
The physical implications of this type of scaling have yet to be explored. The different tiling types and their hyperuniformity properties are summarized in Table~\ref{tab:types}. Examples of quasiperiodic tilings in Classes~I and~II are presented in Ref.~\cite{Oguz2016}. Note, however, that the Class~II case is not a substitution tiling. We do not know whether some other construction methods might yield quasiperiodic tilings that are in Class~III, anti-hyperuniform, or even anomalous. For non-PV tilings (which are substitution tilings by definition), at least two eigenvalues of the substitution matrix must be greater than unity in magnitude, which rules out Class~II and Class~I. We conjecture that there are no limit-periodic tilings in Class~I. We can prove this for $D=2$ substitutions based on the fact that limit-periodicity requires the two eigenvalues to be rational and the fact that, since ${\bm M}$ has only integer elements, their sum and product must be integers, but we do not have a proof for $D > 2$. \section{Acknowledgements} J.E.S.S.~thanks Michael Baake, Franz G{\"a}hler, Uwe Grimm and Lorenzo Sadun for helpful conversations at a workshop sponsored by the International Centre for Mathematical Sciences in Edinburgh. P.J.S. thanks the Simons Foundation for its support and New York University for their generous hospitality during his leave at the Center for Cosmology and Particle Physics where this work was completed. S.T. was supported by the National Science Foundation under Award No.~DMR-1714722.
\section{Introduction} The noble gas radioisotope \Ar{39} with a half-life of \SI{268\pm8}{years} \cite{Stoenner1965,Chen2018} has long been identified as an ideal dating isotope for water and ice in the age range 50-\SI{1800}{years} due to its chemical inertness and uniform distribution in the atmosphere \cite{Lal1963, Loosli1968}. However, its extremely low isotopic abundances of $10^{-17}-10^{-15}$ in the environment have posed a major challenge in the analysis of \Ar{39}. In the past, it could only be measured by Low-Level Counting, which requires several tons of water or ice \cite{Loosli1983}.\\ \indent In recent years, the sample size for \Ar{39} dating has been drastically reduced by the emerging method Atom Trap Trace Analysis (ATTA), which detects individual atoms via their fluorescence in a magneto-optical trap (MOT). This laser-based technique was originally developed for \Kr{81} and \Kr{85} \cite{Chen1999, Jiang2012, Lu2014, Tian2019} and was later adapted to \Ar{39}, enabling the dating of groundwater, ocean water and glacier ice \cite{Jiang2011,Ritterbusch2014,Ebser2018, Feng2019}. The latest state-of-the-art system reaches an \Ar{39} loading rate of $\sim$\SI{10}{atoms/h} for modern samples and an \Ar{39} background of $\sim$\SI{0.1}{atoms/h} \cite{Tong2021,Gu2021}. Still, its use in applications like ocean circulation studies and dating of alpine glaciers is hampered by the low count rate, which determines the measurement time, precision and sample size.\\ \indent Laser cooling and trapping of argon atoms in the ground level is not feasible due to the lack of suitable lasers at the required vacuum ultraviolet (VUV) wavelength.
As is the case for all noble gas elements, argon atoms need to be excited to the metastable level $1s_5$ where the $1s_5-2p_9$ cycling transition at \SI{811}{nm} can be employed for laser cooling and trapping (Paschen notation \cite{Paschen1919} is used here; the corresponding levels in Racah notation \cite{Racah1942} can be found in Fig. \ref{transition} in Appendix \ref{app:ar_transitions}). The $1s_5$ level is $\sim$\SI{10}{eV} above the ground level and, in operational ATTA instruments, is populated by electron-impact excitation in an RF-driven discharge with an efficiency of only $ 10^{-4}-10^{-3} $. Increasing this efficiency would raise the loading rate of \Ar{39} accordingly. \\ \indent Since the discharge excites atoms into not only the metastable $1s_5$ but also many other excited levels, the metastable $1s_5$ population can be enhanced by transferring atoms from these other excited levels to the metastable $1s_5$ via optical pumping (Fig. \ref{fig:scheme}). This mechanism has been demonstrated in a spectroscopy cell for argon with an increase of 81\% \cite{Hans2014,Frolian2015} and for xenon with an increase by a factor of 11 \cite{Hickman2016, Lamsal2020}. It has also been observed in an argon beam with an increase of 21\% \cite{Hans2014}. While these experiments were done on stable and abundant isotopes, a \SI{60}{\%} increase in loading rate has recently been observed for the rare isotopes \Kr{81} and \Kr{85} \cite{Zhang2020}.\\ \indent In this work, we theoretically and experimentally examine the enhancement of metastable production by optical pumping for the rare \Ar{39} as well as the abundant argon isotopes. We identify the $1s_4-2p_6$ transition at \SI{801}{nm} and the $1s_2-2p_6$ transition at \SI{923}{nm} as the most suitable candidates. Implementing the enhancement scheme for \Ar{39} on these transitions requires knowing the respective frequency shifts, which we calculate and experimentally confirm.
Moreover, loading rate measurements support the theoretically predicted transfer process between $1s_2$ and $1s_4$ levels when driving the 923-nm and 801-nm transitions simultaneously. \begin{figure*} \centering \noindent \includegraphics[width=17cm]{OP_Ar_scheme.png} \caption{ (a) Scheme for enhancing the population in the metastable level $1s_{5}$ by driving the $1s-2p$ levels with Rabi frequencies $\Omega_{24}$, $\Omega_{34}$ and detunings $\delta_{24}$, $\delta_{34}$. $\Gamma_{ij}$ denotes the spontaneous emission rate from level $i$ to $j$. (b) The optical pumping scheme chosen in this work on the $1s_4-2p_6$ transition at \SI{801}{nm} and $1s_2-2p_6$ transition at \SI{923}{nm}, shown for \Ar{39} which has a nuclear spin $I = 7/2$. } \label{fig:scheme} \end{figure*} \subsection{Transfer efficiency}\label{theory_transfer_efficiency} We solve the Lindblad master equation (see details in Appendix \ref{Lindblad_master_equations}) for the 6-level system shown in Fig. \ref{fig:scheme}(a) which corresponds to the even argon isotopes without hyperfine structure. The resulting steady-state solution $\widetilde{\rho}_{55}(t\to+\infty)$ for the final population in the metastable level can be obtained analytically as a function of the initial populations in $ {\left |2 \right\rangle} $ and $ {\left |3 \right\rangle} $, using the initial condition \begin{equation} \widetilde{\rho}_{ij}(t=0)=0\text{\ \ for \ }(i,j)\not=(2,2) \ \text{and} \ (i,j)\not=(3,3). \end{equation} If only one transition is driven, e.g. $\Omega_{34}=0$, then $\widetilde{\rho}_{55}(t\to+\infty)$ simplifies to the expressions given in \cite{Zhang2020}. We use these expressions to calculate the transfer efficiency $ \widetilde{\rho}_{55}(t\to+\infty) $ for the different $ 1s-2p $ transitions in even argon isotopes as a function of laser power. 
The transitions with the highest transfer efficiencies are shown in Table \ref{tab:transfer_efficiencies} (see Table \ref{tab:trans_frac} in Appendix \ref{argon_transition_scheme_trans_frac} for all transitions). \begin{table}[H] \caption{ Argon transitions with the highest transfer efficiencies from each $ 1s $ level, calculated for a laser beam of 9-mm diameter and different powers $P$. For driving both transitions simultaneously (bottom row) an equal population in $1s_2$ and $1s_4$ and equal power $P$ for each laser beam is assumed. } \centering \renewcommand{\arraystretch}{1.3} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}ccccc@{}} \hline\hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Lower\\ level\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Upper\\ level\end{tabular}} & \multirow{2}{*}{$\lambda$ (nm)} & \multicolumn{2}{c}{$ \widetilde{\rho}_{55}(t\to+\infty) $} \\ \cline{4-5} & & & $ P=\SI{0.5}{W} $ & $ P\rightarrow +\infty $ \\ \hline $ 1s_{4} $ & $ 2p_{6} $ & 801 & 0.03 & 0.05 \\ $ 1s_{3} $ & $ 2p_{10} $ & 1047 & 0.77 & 0.77 \\ $ 1s_{2} $ & $ 2p_{6} $ & 923 & 0.15 & 0.17 \\ \hline $ 1s_{2}+1s_{4} $ & $ 2p_{6} $ & 801+923 & 0.12 & 0.08 \\ \hline \hline \end{tabular*} \label{tab:transfer_efficiencies} \end{table} From the metastable level $1s_3$ (see Fig. \ref{transition} in Appendix \ref{argon_transition_scheme_trans_frac}), the $1s_3-2p_{10}$ transition at \SI{1047}{nm} has the highest transfer efficiency of \SI{77}{\%}. Since $1s_3$ is also metastable, only a few mW of laser power are needed to saturate the transition. However, experimentally we only achieve an increase in the metastable $1s_5$ population of $\sim$\SI{10}{\%} by pumping this transition. Since the increase in the population of the metastable $1s_5$ is the product of the transfer efficiency (=0.77, Table \ref{tab:transfer_efficiencies}) and the initial population in the $1s_3$, it follows that the latter is only $10\%/0.77\approx13\%$ of that in the metastable $1s_5$.
Given this limitation, optical pumping on $1s_3$ is not investigated further in this work.\\ \indent The transfer efficiency from $1s_2$ is the highest for the 923-nm transition to $2p_{6}$, reaching a high-power limit of \SI{17}{\%}. From $1s_4$ the transfer efficiency is the highest for the 801-nm transition to $2p_{6}$, reaching a high-power limit of \SI{5}{\%}. Since the populations of these levels in the argon discharge are not known, the actual increase in the metastable population needs to be determined experimentally. In the following, we focus on these two transitions as illustrated in Fig. \ref{fig:scheme}(b) for the odd isotope \Ar{39}.\\ \begin{figure}[h!] \centering \vspace{0.3cm} \noindent \includegraphics[width=0.4\textwidth]{OP_Ar_setup.png} \caption{RF-driven discharge source of metastable argon atoms in the ATTA setup. The optical pumping laser beams are sent into the source counter-propagating to the atomic beam. } \label{fig:setup} \end{figure} \begin{figure*}[t!] \centering \subfloat{ \label{801freq.1} \includegraphics[width=0.5\textwidth]{Ar36_Ar38_Ar40_vs_800_freq.jpg}} \subfloat{ \label{922freq.2} \includegraphics[width=0.5\textwidth]{Ar36_Ar38_Ar40_vs_922_freq.jpg}} \captionsetup{justification=raggedright} \caption{Gain in the MOT loading rate for the abundant argon isotopes vs frequency of the (a) 801-nm and (b) 923-nm optical pumping light, measured in the enriched sample. For each transition, $f_{40}$ denotes the resonance frequency of \Ar{40} at rest as monitored in a spectroscopy cell.} \label{fig:even isotopes} \end{figure*} \indent Interestingly, when these two transitions are driven simultaneously (i.e. $\Omega_{24}\neq0$, $\Omega_{34}\neq0$), the final population in the metastable level $\rho_{55}$ is smaller than the sum of the final populations obtained when each transition is driven individually (see bottom row of Table \ref{tab:transfer_efficiencies}).
This effect is the consequence of stimulated emission from $2p_6$ to $1s_4$ by the 801-nm light, together with the 923-nm light effectively transferring atoms from $1s_2$ to $1s_4$. In the same way, atoms are also transferred from $1s_4$ to $1s_2$. However, since the decay rate to the ground level from $1s_4$ is three times higher than from $1s_2$ (see Fig. \ref{transition} in Appendix \ref{app:ar_transitions}), the total increase in the metastable level is lower than the sum of the individually driven transitions. As the laser power increases, the stimulated emission increases as well, leading to a further decrease in the combined transfer efficiency to the metastable level. \subsection{Isotope shifts and hyperfine splittings for \Ar{39}} \label{iso_hyper} The total frequency shifts of \Ar{39} for the 801-nm and 923-nm transitions consist of the isotope shifts and the hyperfine splittings. The hyperfine coefficients of \Ar{39} for $1s_2$ and $1s_4$ were measured in \cite{Traub1967}, whereas for $2p_6$ they can be calculated from the corresponding hyperfine coefficients measured for \Ar{37} \cite{Klein1996}, using the measured nuclear magnetic dipole moments and electric quadrupole moments of \Ar{39} and \Ar{37} \cite{Armstrong1971,Zhang2020,Stone2015,Klein1996}. The resulting hyperfine coefficients are shown in Table \ref{tab:A_B} in Appendix \ref{A_B_ar_cal}. Isotope shifts of the 801-nm and 923-nm transitions have not been reported in the literature for any argon isotope. The isotope shifts for \Ar{36} and \Ar{38} have therefore been measured in this work (see below), allowing us to calculate the isotope shifts for \Ar{39} \cite{King1963, Heilig1974, Zhang2020}. The resulting isotope shifts and hyperfine splittings for \Ar{39} relative to \Ar{40} are given in Table \ref{tab:ar_cal} in Appendix \ref{A_B_ar_cal}.
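The King-plot extrapolation used to obtain such unknown shifts can be sketched as follows: mass-modified shifts $\delta\nu^{\mathrm{mod}} = \delta\nu\, A A'/(A'-A)$ of two transitions are linearly related across isotope pairs, so the line through the measured pairs can be evaluated at the known modified shift of a reference transition for \Ar{39}--\Ar{40}. All shift values below are hypothetical placeholders, not the measured data:

```python
import numpy as np

A_ref = 40.0  # reference isotope 40Ar
masses = {"36": 36.0, "38": 38.0, "39": 39.0}

def modified(shift_mhz, A):
    """Mass-modified isotope shift: delta_nu * A * A_ref / (A_ref - A)."""
    return shift_mhz * A * A_ref / (A_ref - A)

# HYPOTHETICAL isotope shifts (MHz) relative to 40Ar:
ref_shifts = {"36": -430.0, "38": -200.0, "39": -90.0}  # reference line, 39Ar shift known
new_shifts = {"36": -520.0, "38": -240.0}               # transition under study, 39Ar unknown

x = np.array([modified(ref_shifts[i], masses[i]) for i in ("36", "38")])
y = np.array([modified(new_shifts[i], masses[i]) for i in ("36", "38")])

slope, intercept = np.polyfit(x, y, 1)  # King line through the two isotope pairs
x39 = modified(ref_shifts["39"], masses["39"])
# Evaluate the line at the 39Ar point and undo the mass modification
shift_39 = (slope * x39 + intercept) * (A_ref - masses["39"]) / (masses["39"] * A_ref)
print(f"predicted 39Ar-40Ar shift: {shift_39:.1f} MHz")
```

With only two measured pairs the King line is exactly determined; with more pairs the same fit would also yield an uncertainty estimate.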
\section{Experimental Setup} For measuring the metastable population increase by optical pumping in \Ar{39} as well as in the stable argon isotopes, we use an ATTA system as described in \cite{Tong2021}. Metastable argon atoms are generated in an RF-driven discharge by electron impact (Fig. \ref{fig:setup}) and are subsequently laser cooled and detected in a magneto-optical trap. Single \Ar{39} atoms are detected via their 811-nm fluorescence in the MOT using an electron-multiplying charge-coupled device (EMCCD) camera. During a measurement of \Ar{39} (\Ar{39}/Ar=$8\times 10^{-16}$ in modern air), the stable and abundant \Ar{38} (\Ar{38}/Ar=\SI{0.06}{\%} in air) is measured as well to account for drifts in the trap loading efficiency. The loading rate of \Ar{38} for this normalization purpose is measured by depopulating the MOT with a quenching transition and detecting the emitted fluorescence \cite{Jiang2012, Tong2021}. For testing optical pumping on \Ar{38} and the other stable argon isotopes, the loading rate is measured by first clearing the MOT with a quenching transition and then measuring the initial linear part of the rising slope of the MOT fluorescence \cite{Cheng2013}. \\ \indent For optical pumping, we send the \SI{923}{nm} and \SI{801}{nm} laser beams into the source, counter-propagating to the atomic beam (Fig. \ref{fig:setup}). The laser beams are weakly focused and slightly larger than the inner diameter of the source tube (\diameter \SI{10}{mm}). \begin{figure*}[t!] \subfloat{ \label{801hyperfine.1} \hspace{-0.6cm} \includegraphics[width=0.55\textwidth]{Ar39_vs_801nm.jpg}} \subfloat{ \label{923hyperfine.2} \hspace{-0.8cm} \includegraphics[width=0.55\textwidth]{Ar39_vs_922nm.jpg}} \captionsetup{justification=raggedright} \caption{\Ar{39} loading rate gain versus frequency of the (a) 801-nm and (b) 923-nm light, measured in the enriched sample. $f_{40} $ denotes the resonance frequency of \Ar{40} at rest as monitored in a spectroscopy cell.
The Doppler shift obtained for \Ar{40} has been subtracted from the frequency to obtain the \Ar{39} frequency spectrum at rest. The error of each \Ar{39} data point is $\sim$\SI{5}{\%}. The dashed green lines indicate the calculated frequencies of the hyperfine transitions.} \label{fighyperfine} \end{figure*} The optical pumping light is generated by tapered amplifiers seeded with diode lasers, providing up to \SI{1.0}{W} of usable laser power at \SI{801}{nm} and \SI{1.6}{W} at \SI{923}{nm}. For measuring the different argon isotopes, the laser frequency needs to be tuned and stabilized over several GHz. For this purpose, the two lasers are locked by a scanning transfer cavity lock \cite{Zhao1998, Subhankar2019}, using a diode laser locked to the 811-nm cooling transition of metastable \Ar{40} as the master. In order to increase counting statistics for \Ar{39}, we use an enriched sample prepared by an electromagnetic mass separation system \cite{Jia2020}. In the enriched sample, \Ar{40} is largely and \Ar{36} partially removed, so that \Ar{39} and \Ar{38} are enriched by a factor of $\sim200$. The ratio of \Ar{39} to \Ar{38} is not changed in the enrichment process \cite{Tong2022}, which is important for the normalization described above. The \Ar{40}, \Ar{36} and \Ar{38} abundances in the enriched sample are \SI{60}{\%}, \SI{30}{\%} and \SI{10}{\%}, respectively. \section{Results and Discussion} The loading rate of the stable argon isotopes is measured versus the frequencies of the 801-nm and 923-nm light (Fig. \ref{fig:even isotopes}). A clear increase in the loading rate is observed for all isotopes. For \Ar{40} we obtain most probable Doppler shifts of around \SI{-230}{MHz}, in agreement with the expected temperature of the liquid-nitrogen-cooled atomic beam. The small \Ar{40} feature mirrored on the positive detuning side is likely caused by the optical pumping light reflected at the window behind the source.
The window is partially coated by metal which has been sputtered by argon ions produced in the discharge. From the observed resonances for \Ar{36} and \Ar{38} we obtain the isotope shifts with respect to \Ar{40} for the 801-nm as well as the 923-nm transition, shown in Table \ref{tab:isotope shifts}. Based on these measured isotope shifts, we calculate the isotope shifts for \Ar{39} (Table \ref{tab:isotope shifts}) using King plots \cite{King1963}. Interestingly, the loading rate of \Ar{36} shows a pronounced increase also at the \Ar{40} resonance for both the 801-nm and the 923-nm transitions. Looking closely, an increase in loading rate is visible for each isotope at the resonances of the other two isotopes. This additional increase is likely caused by metastable exchange collisions, e.g. transferring an increase in the metastable population of \Ar{40} to that of \Ar{36}. The maximum loading rate gain is lower for \Ar{40} than for the less abundant \Ar{36} and \Ar{38}. This difference is discussed in more detail below.\\ \begin{figure*}[t!] \centering \subfloat{ \label{801power.1} \hspace{-0.8cm} \includegraphics[width=0.56\textwidth]{Ar36_Ar38_Ar40_Ar39_vs_801_power.jpg}} \subfloat{ \label{922power.2} \hspace{-0.8cm} \includegraphics[width=0.56\textwidth]{Ar36_Ar38_Ar40_Ar39_vs_922_power.jpg}} \captionsetup{justification=raggedright} \caption{Loading rate gain of the argon isotopes vs laser power of the (a) 801-nm light and (b) 923-nm light, measured with the enriched argon sample. The lines are saturation fits according to the expressions given in \cite{Zhang2020}. } \label{fig.power} \end{figure*} \indent Fig. \ref{fighyperfine} shows the \Ar{39} loading rate gain vs frequency of the 801-nm and 923-nm light. For both transitions, a clear increase in the loading rate is observed. For \SI{923}{nm}, the $F=9/2\rightarrow11/2$ transition is the strongest, as expected from the multiplicity and transition strength \cite{Axner2004}.
Moreover, the measurements confirm the other calculated hyperfine transitions. For \SI{801}{nm}, the overlap of the $F=9/2\rightarrow11/2 $ and $ F=7/2\rightarrow7/2 $ transitions is the strongest. The loading rate increase is lower compared to that achieved with the 923-nm light. Accordingly, the different hyperfine transitions are resolved less clearly. Nevertheless, the measurements are in good agreement with the calculated hyperfine transitions. In order to address not only one but two hyperfine levels of \Ar{39}, we add sidebands to the 801-nm and 923-nm light. At \SI{801}{nm} no increase is detectable by adding a sideband resonant with the overlap of the $F=7/2\rightarrow9/2 $ and $ F=5/2\rightarrow5/2 $ transitions. At \SI{923}{nm} we observe a maximum increase of only $\sim$\SI{10}{\%}, although according to Fig. \ref{923hyperfine.2} an increase of \SI{40}{\%} appears possible. The increase from adding a sideband is likely compensated by the decrease due to the lower laser power at the carrier frequency. \\ \indent The loading rate gain as a function of laser power is shown in Fig. \ref{fig.power}. As already observed in Fig. \ref{801freq.1}, the maximum loading rate gain is lower for \Ar{40} than for the less abundant \Ar{36} and \Ar{38}. This may be caused by the higher density of \Ar{40} leading to stronger trapping of the \SI{764}{nm} fluorescence (see Fig. \ref{fig:scheme}), which can quench other metastable atoms. Moreover, the saturation intensity is significantly lower for \Ar{40} than for \Ar{36} and \Ar{38}. This may also be caused by the higher density of \Ar{40}, leading to trapping of the re-emitted 801-nm and 923-nm light. The saturation intensity for \Ar{39} is difficult to assess due to the large measurement uncertainties and the contribution from neighbouring hyperfine levels.
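Gain-vs-power data of this kind can be summarized by fitting a simple saturation model. The functional form and all numbers below are purely illustrative placeholders (the actual fit expressions are those of \cite{Zhang2020}); the grid search avoids any dependence beyond numpy:

```python
import numpy as np

def sat_gain(P, g_inf, P_sat):
    """Generic saturation model for the loading-rate gain vs laser power P:
    gain -> g_inf as P -> infinity, with saturation power P_sat.
    Illustrative form only; not the expression used in the paper."""
    return 1.0 + (g_inf - 1.0) * P / (P + P_sat)

# Synthetic "measured" data with known ground truth (g_inf=2.5, P_sat=0.3 W)
rng = np.random.default_rng(0)
P = np.linspace(0.05, 1.6, 20)
y = sat_gain(P, 2.5, 0.3) + rng.normal(0.0, 0.02, P.size)

# Coarse grid-search least-squares fit over (g_inf, P_sat)
g_grid = np.linspace(1.5, 3.5, 201)
s_grid = np.linspace(0.05, 1.0, 191)
G, Ps = np.meshgrid(g_grid, s_grid, indexing="ij")
sse = ((y[None, None, :] - sat_gain(P[None, None, :],
                                    G[..., None], Ps[..., None])) ** 2).sum(-1)
i, j = np.unravel_index(sse.argmin(), sse.shape)
g_fit, s_fit = g_grid[i], s_grid[j]
print(f"fitted asymptotic gain {g_fit:.2f}, saturation power {s_fit:.3f} W")
```

A proper analysis would use a nonlinear least-squares routine and propagate parameter uncertainties; the grid search merely demonstrates the extraction of an asymptotic gain and a saturation power from noisy data.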
\\ \indent Table \ref{tab:loading_rate} lists the maximum loading rate gains of the different argon isotopes for the 801-nm and the 923-nm transitions, as well as for both transitions driven together. As predicted by the calculation in Section \ref{theory_transfer_efficiency} and Appendix \ref{Lindblad_master_equations}, driving \begin{table}[h] \caption{Loading rate gains obtained for different argon isotopes and different transitions, measured in the enriched argon sample.} \centering \renewcommand{\arraystretch}{1.3} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \hline\hline Isotope & \SI{801}{nm} & \SI{923}{nm} & \SI{801}{nm} + \SI{923}{nm} \\ \hline \Ar{40} & 1.4 & 1.7 & 1.8 \\ \Ar{38} & 1.6 & 2.5 & 2.8 \\ \Ar{36} & 1.6 & 2.2 & 2.6 \\ \Ar{39} & 1.4 & 1.8 & 2.0 \\ \hline\hline \end{tabular*} \label{tab:loading_rate} \end{table} both transitions simultaneously results in a lower gain than the addition of the individual gains. This result confirms the transfer due to stimulated emission between $1s_2$ and $1s_4$ via the intermediate $2p_6$, driven by the 923-nm and 801-nm light. For \Ar{39}, a two-fold gain in the loading rate is obtained by optical pumping when simultaneously using 801-nm and 923-nm light and addressing the $F=9/2$ level. According to the loading rate gains obtained for the other hyperfine levels (Fig. \ref{fighyperfine}), a near three-fold gain in the loading rate should be possible if sidebands with additional laser power are introduced. \\ \indent As mentioned above and observed in Fig. \ref{fig.power}, the loading rate gain varies for different isotopes. Moreover, we observe that the loading rate gain depends on density and sample composition. In order to examine these dependences, we measure the \Ar{36} and \Ar{40} loading rate gains vs.
argon pressure in the chamber at the outlet of the source tube (Fig. \ref{fig:gain_vs_pressure}). In this measurement, atmospheric argon (abundances of \Ar{40}, \Ar{36} and \Ar{38} are \SI{99.6}{\%}, \SI{0.33}{\%} and \SI{0.06}{\%}, respectively) is used instead of the enriched sample. \begin{figure}[h!] \centering \noindent \includegraphics[width=0.5\textwidth]{OP_Ar_gain_vs_pressure.jpg} \caption{Loading rate gain vs argon pressure in the chamber at the outlet of the source tube (Fig. \ref{fig:setup}) for the 923-nm transition, measured with atmospheric argon. The pressure inside the source tube is considerably higher than at the outlet of the source tube. The lines are guides to the eye. } \label{fig:gain_vs_pressure} \end{figure} The loading rate gains of the two isotopes differ significantly. For \Ar{36} the loading rate gain increases with the argon pressure, whereas for \Ar{40} the loading rate gain decreases beyond a maximum. Moreover, the loading rate gain for \Ar{36} in this measurement with atmospheric argon reaches 3.3, whereas it is only 2.2 when measured with the enriched sample (\Ar{36} abundance=\SI{30}{\%}) as in Fig. \ref{fig.power}. These findings indicate that the populations of the $1s$ levels in the discharge depend on pressure and composition. These dependences might be caused by various mechanisms such as trapping of light from the VUV ground-level transitions, which together with the optical pumping light can produce metastable argon atoms. \section{Conclusion and Outlook} We have realized a two-fold increase of the \Ar{39} loading rate in an atom trap system via optical pumping in the discharge source. A three-fold increase is expected by adding sidebands with additional laser power that cover all the hyperfine levels of \Ar{39}. Similarly, we obtain an increase of the MOT loading rate by a factor of 2-3 for the stable argon isotopes \Ar{36}, \Ar{38} and \Ar{40}.
We observe that the loading rate gain varies for different isotopes and that it depends on the argon pressure in the discharge as well as on the abundance of the respective isotope. We attribute these dependences to the complex population dynamics of the $1s$ levels in the discharge via mechanisms such as radiation trapping and metastable exchange collisions. Consequently, using the method presented here for practical \Ar{39} analysis requires stable control of the pressure so that the loading rate gain due to optical pumping for both \Ar{39} and \Ar{38} stays constant during measurements. \\ \indent The hitherto unknown isotope shifts of \Ar{36} and \Ar{38} as well as the \Ar{39} spectra for the 801-nm and 923-nm transitions have been measured in this work. They constitute an important contribution to efforts toward optically generating metastable argon via resonant two-photon excitation \cite{Wang2021, Dong2022}. For a more precise measurement of the hyperfine coefficients and the isotope shifts, spectroscopy on samples highly enriched in \Ar{39} will be necessary \cite{Welte2009, Williams2011}. \\ \indent The presented method for enhanced production of metastable argon can be directly implemented in existing ATTA setups to increase the \Ar{39} loading rate by a factor of 2-3. For state-of-the-art ATTA systems, the \Ar{39} loading rate is $\sim$\SI{10}{atoms/h}. For \Ar{39} analysis at a precision level of \SI{5}{\%}, this loading rate leads to a measuring time of $\sim$\SI{50}{h}, during which the \Ar{39} background in the ATTA system increases linearly with time. Therefore, the two-fold increase in the \Ar{39} loading rate realized in this work constitutes a significant advance in measuring time, precision and sample size for \Ar{39} analysis in environmental applications such as dating of alpine ice cores and large-scale ocean surveys.
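The quoted measuring time follows directly from Poisson counting statistics: a relative precision of $1/\sqrt{N}$ requires $N=(1/0.05)^2=400$ detected atoms, i.e. about \SI{40}{h} at \SI{10}{atoms/h}, with the $\sim$\SI{50}{h} figure additionally covering background and overheads. A quick check:

```python
# Counting-statistics estimate for 39Ar analysis (values from the text;
# the ~50 h quoted in the paper additionally includes background/overheads).
precision = 0.05                        # target relative precision (5%)
rate = 10.0                             # 39Ar loading rate (atoms/h)
atoms_needed = (1.0 / precision) ** 2   # 1/sqrt(N) = precision  =>  N = 400
hours = atoms_needed / rate             # ~40 h of pure counting time
gain = 2.0                              # loading-rate gain realized in this work
print(atoms_needed, hours, hours / gain)
```

The last value shows how the two-fold gain realized here roughly halves the counting time for the same precision.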
\begin{acknowledgments}This work is funded by the National Natural Science Foundation of China (41727901, 41961144027, 41861224007), National Key Research and Development Program of China (2016YFA0302200), Anhui Initiative in Quantum Information Technologies (AHY110000). \\ \\ \indent Y.-Q. Chu and Z.-F. Wan contributed equally to this work.\\ \\ \indent \textit{ An edited version of this paper was published by APS \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.105.063108}{Physical Review A 105, 063108 (2022)}. Copyright 2022 American Physical Society.} \end{acknowledgments} \clearpage
\section{Introduction} Automatic speaker verification (ASV) technology is ubiquitous nowadays, being applied to user authentication in an increasingly diverse array of applications from smart-home technology to telephone banking to health-sector applications~\cite{hansen2015speaker,kinnunen2010overview}. Despite its success, there are justified concerns surrounding the vulnerabilities of ASV to spoofing, namely manipulation from inputs specially crafted to deceive the system and provoke false acceptances. The community has responded to this threat with spoofing countermeasures that aim to detect and deflect such attacks. The effort has been spearheaded through the ASVspoof initiative which emerged from the first special event on the topic held at INTERSPEECH in 2013~\cite{EURECOM+4018,evans2013spoofing}. The first edition of ASVspoof, held in 2015~\cite{wu2015asvspoof}, showed that some voice conversion and speech synthesis spoofing attacks can be detected with ease. The infamous S10 unit selection speech synthesis algorithm,\footnote{\url{http://mary.dfki.de/}} however, was shown to present substantial difficulties. The constant Q cepstral coefficient (CQCC) front-end~\cite{todisco2016new,todisco2017constant}, introduced subsequently in 2016, was the first solution capable of detecting the S10 attack, reducing equal error rates (EERs) by some $72\%$ relative to the state of the art at the time. While better performing solutions have since emerged, the CQCC front-end remains popular. Over half of ASVspoof 2019 participants (26 out of 48) used CQCC representations as a component of their submission. This includes a number of top-performing systems, e.g.~\cite{chettri2019ensemble, alluri2019iiit,yang2019sjtu,lavrentyeva2019stc}. CQCC features are, however, not a \emph{silver bullet}. Even if near-to-zero EERs can be obtained for some attacks, for others results are poor. 
Among those in the ASVspoof 2015 database, for the S8 tensor-based approach to voice conversion~\cite{saito2011one}, EERs obtained using CQCCs are substantially higher than those obtained with more conventional linear frequency cepstral coefficient (LFCC) features. These observations have led us to ask ourselves \emph{why}. What could account for these performance differences and what can be learned from the explanation? The answer lies in the artefacts that characterise the spoofing attacks for which each front-end excels. Artefacts are signatures or telltale signs in spoofed speech that are left behind by voice conversion and speech synthesis algorithms. They hence provide the means to distinguish between bona fide and spoofed speech. While end-to-end deep learning architectures, which operate directly upon the signal and which have the capacity to identify such artefacts automatically, may diminish the need to understand what makes one representation outperform another, explainability is \emph{always} an issue of scientific interest. While we have learned a great deal about the vulnerabilities of ASV to spoofing and the pattern recognition architectures that function well in detecting spoofing attacks, we have probably learned surprisingly little about what characterises spoofing attacks at the signal, spectrum or feature level. An understanding at this level would surely place us in a better position to design better performing countermeasures capable of detecting a broader range of attacks with greater reliability. The same understanding might also provide some assurance that the artefacts being detected are in fact indicative of spoofing attacks rather than of database collection and pre-processing procedures. The work presented in this paper hence revisits CQCC features and attempts to show why they are effective in detecting some attacks but less effective in detecting others.
In particular, we are interested in shedding light upon what is being detected in the signal, i.e.\ the artefacts that characterise a spoofing attack. The paper is organised as follows. Section~2 presents a brief review of the CQCC front-end. Section~3 describes a sub-band approach to examine where information relevant to spoofing detection is located in the spectrum. Section~4 presents countermeasure performance and sub-band analysis results for the ASVspoof 2019 database. Section~5 describes the impact of variable spectro-temporal resolution, whereas Section~6 presents additional experiments designed to validate our findings. A discussion of these findings is presented in Section~7, together with some directions for future work. \section{Constant Q cepstral coefficients} This section describes the motivation behind the development of constant Q cepstral coefficients (CQCCs). It shows how CQCC features are derived starting with the constant Q transform (CQT), describes differences to spectro-temporal decomposition with the discrete Fourier transform (DFT) and presents comparative results for the ASVspoof 2015 database. \subsection{Motivation} \begin{table*}[!t] \centering \caption{EERs (\%) for the ASVspoof 2015 database, evaluation partition. Results for GMM-CQCC and GMM-LFCC baseline systems, reproduced from~\cite{todisco2016new}.
Results for two attacks (S8 and S10) show stark differences in performance for each system.} \setlength\tabcolsep{6pt} \begin{tabular}{ *{11}{c}} \hline System &S1 &S2 & S3 & S4 & S5 &S6 &S7 &\textbf{S8} &S9 &\textbf{S10}\\ \hline\hline GMM-CQCC& 0.005& 0.106& 0.000& 0.000& 0.130 &0.098& 0.064 &\color{purple}\textbf{1.033}& 0.053 &\color{blue}\textbf{1.065}\\ \hline GMM-LFCC&0.027& 0.408& 0.000& 0.000& 0.114& 0.149& 0.011& \color{blue}\textbf{0.074}& 0.027& \color{purple}\textbf{8.185}\\ \hline \end{tabular} \label{Tab:2015 ASV spoof challenge results} \end{table*} One strategy to spoof an ASV system would be to generate and present speech signals whose features match well those corresponding to genuine, bona fide speech produced by the target speaker (the identity being claimed by the attacker). In this sense, the characteristics of a speech signal that distinguish different speakers are unlikely to be the same as those that distinguish between bona fide and spoofed speech. These assumptions provided the motivation behind the development of CQCCs; instead of using features designed specifically for ASV, a popular choice in the early years of anti-spoofing research, we sought to use features that were fundamentally \emph{different} to those designed for ASV. The fundamental assumption was that, if the spoofer is trying to produce speech signals that \emph{replicate} the same features used for ASV, then features optimised for ASV cannot be the best approach to detect spoofed speech signals; features not optimised for ASV would have better potential for spoofing detection since the spoofer was not trying to produce speech signals which \emph{replicate} these features. Although it was discovered later that CQCCs were also useful for other speech tasks including speaker diarization, ASV and utterance verification~\cite{todisco2017constant,kinnunen2016utterance}, they were not \emph{designed} for ASV. 
The inspiration for CQCCs came from prior experience of using the constant Q transform (CQT) for music processing tasks~\cite{schorkhuber2014matlab}. \subsection{From the CQT to CQCCs} \label{section:CQCC} The perceptually motivated CQT~\cite{Youngberg78,Brown91} approach to the spectro-temporal analysis of a discrete signal $x(n)$ is defined by: \begin{equation} X^{CQ}(k,n)=\sum_{j=n-\left \lfloor N_k/2 \right \rfloor}^{n+\left \lfloor N_k/2 \right \rfloor}x(j)a_{k}^{*}(j-n+N_{k}/2) \label{eq:CQT} \end{equation} where $n$ is the sample index, $k = 1, 2,..., K$ is the frequency bin index, $a_k(n)$ are the basis functions, $*$ is the complex conjugate and $N_k$ is the frame length. The basis functions $a_k(n)$ are defined by: \begin{equation} a_k(n)= g_k(n) e^{j 2\pi n {f_{k}}/{f_{s}}}, n \in \mathbb{Z} \label{eq:atoms} \end{equation} where $g_k(n)$ is a zero-centred window function, $f_{k}$ is the center of each frequency bin and $f_{s}$ is the sampling rate. Further details are available in \cite{schorkhuber2014matlab}. The center of each frequency bin $f_{k}$ is defined according to \begin{equation} f_k = f_1 2^{\frac{k-1}{B}} \label{eq:fk} \end{equation} where $f_{1}$ is the center of the lowest frequency bin and $B$ is the number of bins per octave. $B$ determines the trade-off between spectral and temporal resolutions. The frame length $N_k \in \mathbb{R}$ in Eqs.~\ref{eq:CQT} and \ref{eq:atoms} is real-valued and inversely proportional to $f_k$. As a result, the summation in Eq.~\ref{eq:CQT} is over a number of samples that is dependent upon the frequency. The spectro-temporal resolution is hence also dependent upon the frequency. The quality (Q) factor, given by $Q=f_k/\delta f$, where $\delta f$ is the bandwidth, reflects the selectivity of each filter in the filterbank. For the CQT, Q is constant for all frequency bins $k$; the filters are logarithmically spaced. The CQT hence gives a geometrically spaced spectrum $X^{CQ}(k,n)$.
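The geometric bin spacing and the constant-Q property can be verified numerically. The parameter values below ($f_1$, $B$, $K$) are arbitrary examples, not those used in the paper:

```python
import numpy as np

def cqt_center_freqs(f1, B, K):
    """Geometrically spaced CQT bin centers: f_k = f1 * 2**((k-1)/B)."""
    k = np.arange(1, K + 1)
    return f1 * 2.0 ** ((k - 1) / B)

# Example: f1 ~ 32.7 Hz, 12 bins per octave, 7 octaves (illustrative values)
f = cqt_center_freqs(f1=32.7, B=12, K=84)

# With the spacing between adjacent centers as a bandwidth proxy,
# Q = f_k / delta_f = 1 / (2**(1/B) - 1) is the same for every bin.
Q = f[:-1] / np.diff(f)
print(f"Q factor: {Q[0]:.2f} (constant across all bins)")
```

Doubling of the center frequency after every $B$ bins is what makes the filterbank logarithmically spaced, in contrast to the linear spacing of the DFT.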
This is resampled using a spline interpolation method to a uniform, linear scale, giving $\bar{X}^{CQ}(l,n)$ which attributes equal weighting to information across the full spectrum~\cite{todisco2016new}. CQCCs~\cite{todisco2016new} are obtained from the discrete cosine transformation (DCT) of the logarithm of the squared-magnitude CQT as follows: \begin{equation} CQCC(p,n)= \sum_{l=1}^{L}\mathrm{log}\left |\bar{X}^{CQ}(l,n) \right |^{2} \mathrm{cos}\left [ \frac{p\left ( l-\frac{1}{2} \right )\pi}{L} \right ] \label{eq:cqcc} \end{equation} where $p = 0, 1,..., L-1$ and where $l$ is now the linear-scale frequency bin index. Full details of the CQCC extraction algorithm are available in~\cite{todisco2016new}. Efficient implementations of the CQT can be found in~\cite{schorkhuber2014matlab} and~\cite{Velasco11}. \subsection{Differences to the DFT and LFCC} The principal differences between the CQT and the discrete Fourier transform (DFT) relate to the spectro-temporal resolution. Whatever the algorithm, spectral decompositions essentially act as a filterbank. For the DFT, the filterbank frequencies are linearly distributed and the bandwidth of each filter is constant. Q is hence not constant, but increases linearly with frequency; the series of filters is linearly, rather than logarithmically, spaced. This ensures that the DFT exhibits constant spectral resolution, which is not the case for the CQT. Compared to the DFT, and as a direct consequence of the constant Q property, the CQT-derived spectrum has greater frequency resolution at lower frequencies than at higher frequencies. The CQT-derived spectrogram will then exhibit greater temporal resolution at higher frequencies than at lower frequencies. \subsection{Performance} Both CQCC and LFCC front-ends are typically used with simple Gaussian mixture model (GMM) classifiers.
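The final resampling and DCT stages of CQCC extraction can be sketched for a single frame as follows. Linear interpolation (np.interp) stands in for the spline interpolation used in practice, and all parameter values are illustrative:

```python
import numpy as np

def cqcc_frame(log_power_geo, f_geo, L):
    """Toy version of the final CQCC stages for one frame: resample a
    geometrically spaced log-power spectrum onto a uniform frequency grid,
    then apply a DCT with a half-sample offset, sum over l = 1..L of
    log|X(l)|^2 * cos(p(l - 1/2)pi/L). Linear interpolation replaces
    the spline interpolation used in practice."""
    f_lin = np.linspace(f_geo[0], f_geo[-1], L)
    s = np.interp(f_lin, f_geo, log_power_geo)
    l = np.arange(1, L + 1)
    p = np.arange(L)[:, None]
    return (np.cos(p * (l - 0.5) * np.pi / L) * s).sum(axis=1)

# Sanity check: a flat log spectrum has energy only in the 0th coefficient
f_geo = 32.7 * 2.0 ** (np.arange(84) / 12.0)  # geometric CQT bin centers
cqcc = cqcc_frame(np.ones(84), f_geo, L=64)
```

The half-sample offset in the cosine argument makes this the standard DCT-II up to scaling, so the coefficients decorrelate the log-spectral values in the usual cepstral sense.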
The performance obtained with the GMM-CQCC and GMM-LFCC systems in terms of EER for the evaluation partition of the ASVspoof 2015 database is illustrated in Table~\ref{Tab:2015 ASV spoof challenge results}, which reproduces results from~\cite{todisco2016new}. Results are shown independently for each of the 10 different attacks. Results for two attacks show marked differences for the CQCC and LFCC front-ends. These are S8, for which LFCCs outperform CQCCs by 93\% relative, and S10, for which CQCCs outperform LFCCs by 87\% relative. While the motivation for our work stems from these two differences for the ASVspoof 2015 database, results reported later in this paper relate to the more recent ASVspoof 2019 database. \section{Sub-band analysis} The differences in the spectro-temporal resolution of the DFT and CQT lead to one potential explanation for the differences in the performance of the GMM-LFCC and GMM-CQCC systems. This relates to differences in the spectral resolution at lower frequencies and the temporal resolution at higher frequencies. Put differently, it suggests that the artefacts that distinguish spoofed speech from bona fide speech might reside in specific sub-bands, rather than in the full-band signal. This hypothesis is supported by~\cite{quatieri2006discrete,deller1993discrete,sriskandaraja2016investigation,yang2019significance}, all of which found this to be the case for synthetic speech. Accordingly, we set out to determine whether differences at the sub-band level could explain the differences in the performance of LFCC and CQCC representations. Given the stark difference in temporal resolution at high frequencies, our hypothesis has always been that this is where the artefacts are located for attacks such as S10. The assumption is that these artefacts are then only captured by using a spectro-temporal decomposition with a higher temporal resolution.
\subsection{Experiments} The set of experiments designed to test this hypothesis consists of an extensive sub-band analysis whereby the GMM-LFCC and GMM-CQCC classifiers are applied to the ASVspoof 2019 logical access database at a sub-band level. In any single experiment, the entire database is processed with a low-pass and/or high-pass filter. With both low-pass and high-pass filters, the result is a band-pass filter with cut-in $f_{min}$ and cut-off $f_{max}$. Corresponding GMM models are retrained each time. Cut-in and cut-off frequencies are varied in steps of 400~Hz between 0~Hz and the Nyquist frequency of 8~kHz. The EER is then determined in the usual manner~\cite{brummer2013bosaris}. \begin{figure}[!t] \centering \input{figures/triangle_test.tex} \caption{A 2-dimensional heatmap visualisation of sub-band analysis results for an arbitrary spoofing attack. The horizontal axis depicts the high-pass cut-in frequency $f_{\text{min}}$ whereas the vertical axis depicts the low-pass cut-off frequency $f_{\text{max}}$. The colour bar to the right of the plot depicts the EER obtained for each band-pass filter configuration (pair of $f_{\text{min}}$ and $f_{\text{max}}$).} \label{Figure. triangle explaination} \end{figure} \subsection{Heatmap visualisation} Results are visualised in the form of a 2-dimensional heatmap, an example of which is illustrated in Fig.~\ref{Figure. triangle explaination}. The horizontal and vertical axes of the heatmap signify the cut-in frequency $f_{\text{min}}$ and cut-off frequency $f_{\text{max}}$ of the high-pass and low-pass filter, respectively. For band-pass filters, $f_{\text{min}}<f_{\text{max}}$, hence the triangular form. The EER corresponding to each band-pass filter configuration is indicated with the colour bar to the right of Fig.~\ref{Figure. triangle explaination}, with blue colours indicating low EERs and red colours indicating higher EERs.
The left-most column of the heatmap shows the EER for a decreasingly aggressive low-pass filter, moving from bottom-to-top. The top-most row of the heatmap shows the EER for an increasingly aggressive high-pass filter, moving from left-to-right. Everywhere else corresponds to a band-pass filter, with the full-band configuration being at the top-left. EERs along the diagonal of Fig.~\ref{Figure. triangle explaination} illustrate the benefit of spectrum information at the sub-band level. The heatmap serves as a crude means to evaluate which sub-bands are the least and most informative in distinguishing between spoofed and bona fide speech. For the arbitrary example illustrated in Fig.~\ref{Figure. triangle explaination}, EERs along the diagonal suggest that information at lower frequencies is discriminative, whereas use of that at higher frequencies is not sufficiently discriminative to be used alone. EERs in the left-most column show that the most discriminative information of all is contained at low frequencies. So long as this information is used, the EER is low. EERs in the top-most row show that as soon as low-frequency information is discarded, the EER increases. Information between~1 and 3~kHz and above 7~kHz is non-discriminative. That between~3 and 7~kHz is less discriminative, with reasonable EERs being obtained only when the information in a sufficient number of sub-bands is combined. \section{Experiments with the \\ASVspoof 2019 database} Described here is our use of the ASVspoof 2019 database, the specific configurations of GMM-LFCC and GMM-CQCC spoofing countermeasures, the subset of spoofing attacks with which this work was performed, and the results of sub-band analysis. \subsection{Database and protocols} The ASVspoof 2019 database consists of two different protocols based on logical access (LA) and physical access (PA) use case scenarios~\cite{todisco2019asvspoof}.
The LA scenario relates to spoofing attacks performed with voice conversion and speech synthesis algorithms whereas the PA scenario involves exclusively replay spoofing attacks. Since the work reported in this paper stems from observations made for spoofing attacks performed with voice conversion and synthetic speech algorithms, our experiments were performed with the LA dataset only. All experiments were performed with the standard ASVspoof 2019 protocols, which consist of independent train, development and evaluation partitions. Spoofed speech in each partition is generated using a set of voice conversion and speech synthesis algorithms~\cite{wang2019asvspoof}. There are 19 different spoofing attacks. Attacks in the training and development set were created with a set of 6 algorithms (A01-A06), whereas those in the evaluation set were created with a set of 13 algorithms (A07-A19). All experiments reported in this paper were performed with the evaluation set only. \begin{table*}[!t] \centering \caption{EERs (\%) for the ASVspoof 2019 logical access database, evaluation partition. Results are shown for the two baseline systems, GMM-CQCC (linear scale) and GMM-LFCC, and for the GMM-CQCC (geometric scale) system. Results for GMM-CQCC (linear) and GMM-LFCC reproduced from~\cite{todisco2019asvspoof}.
Results for six attacks (highlighted) show stark differences in performance for each system.} \setlength\tabcolsep{4.2pt} \begin{tabular}{ *{14}{c}} \hline System &\textbf{A07} &A08&A09&A10&A11&A12&\textbf{A13}&\textbf{A14}&A15&\textbf{A16}&\textbf{A17}&A18 &\textbf{A19} \\ \hline\hline GMM-CQCC (linear)&\color{blue}{\textbf{0.00}}&0.04&0.14&15.16&0.08&4.74& \color{purple}{\textbf{26.15}}&\color{purple}{\textbf{10.85}}&1.26&\color{blue}{\textbf{0.00}}&\textbf{19.62}&3.81&\color{blue}{\textbf{0.04}} \\ \hline GMM-LFCC&\color{purple}{\textbf{12.86}}&0.37&0.00&18.97&0.12&4.92&\textbf{9.57}&\textbf{1.22}&2.22&\color{purple}{\textbf{6.31}}&\color{blue}{\textbf{7.71}}&3.58&\textbf{13.94}\\ \hline GMM-CQCC (geometric)&\textbf{3.39}&0.34&0.46&6.86&4.62&3.58& \color{blue}{\textbf{4.23}}&\color{blue}{\textbf{0.67}}&1.52&\textbf{4.00}&\color{purple}{\textbf{25.04}}&19.63&\color{purple}{\textbf{29.46}}\\ \hline \end{tabular} \label{Tab:2019 ASV spoof results} \end{table*} \begin{table*}[!t] \centering \caption{A summary of the six spoofing attack algorithms (voice conversion and speech synthesis) from the ASVspoof 2019 logical access database used in this work. * indicates neural networks. Full details for each algorithm can be found in~\cite{wang2019asvspoof}.} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Attack & Input & Input processor & Duration & Conversion & Speaker represent. & Outputs & Waveform generator \\ \hline A07~\cite{wavecyclegan,wu2016merlin} & Text & NLP & RNN* & RNN* & One hot embed. & MCC, F0, BA & WORLD \\ \hline A13~\cite{li2015generative,kobayashi2018intra} & Speech (TTS) & WORLD & DTW & Moment matching* & - & MCC & Waveform filtering \\ \hline A14~\cite{ljliu2018wav,Kawahara1999Restructuring} & Speech (TTS) & ASR* & - & RNN* & - & MCC, F0, BAP & STRAIGHT \\ \hline A16~\cite{McAuliffe/etal:2017:IS} & Text & NLP & - & CART & - & MFCC, F0 & Waveform concat. 
\\ \hline A17~\cite{2019arXiv190711898H,kobayashi2018intra} & Speech (human) & WORLD & - & VAE* & One hot embed. & MCC, F0 & Waveform filtering \\ \hline A19 & Speech (human) & LPCC/MFCC & - & GMM-UBM & - & LPC & Spectral filtering + OLA \\ \hline \end{tabular}% } \label{tab:asvspoof_2019_attack_details} \end{table*} \subsection{Countermeasures} Experiments were performed using the two spoofing countermeasures that were distributed with the ASVspoof 2019 challenge database. They are the GMM-LFCC and GMM-CQCC baseline systems\footnote{\url{https://www.asvspoof.org/asvspoof2019/asvspoof2019_evaluation_plan.pdf}}~\cite{todisco2019asvspoof}. The front-end representation used by the GMM-LFCC system comprises 20 static, velocity ($\Delta$) and acceleration ($\Delta \Delta$) LFCC coefficients, thereby giving 60-dimensional feature vectors. The 90-dimensional GMM-CQCC representation comprises 29 static coefficients plus the zeroth (energy) coefficient, together with $\Delta$ and $\Delta\Delta$ coefficients. The back-end is common to both systems and consists of a traditional GMM classifier with two models, one for bona fide speech and one for spoofed speech, both with 512 components. Scores are conventional log-likelihood ratios. Neither countermeasure system was re-optimised; this work is not concerned with optimisation. Results reported here are those obtained with the original ASVspoof 2019 baseline countermeasures as reported in~\cite{todisco2019asvspoof}.
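The scoring stage of this GMM back-end can be illustrated with a minimal numpy sketch. The parameters below are toy values rather than the 512-component models of the baselines, and only the log-likelihood-ratio computation is shown; training (e.g.~by expectation-maximisation) is omitted.

```python
import numpy as np

def gmm_logpdf(X, weights, means, variances):
    """Per-frame log-likelihood of the rows of X under a
    diagonal-covariance GMM with component weights (k,),
    means (k, d) and variances (k, d)."""
    X = np.atleast_2d(X)
    diff = X[:, None, :] - means[None, :, :]                      # (n, k, d)
    log_comp = -0.5 * np.sum(diff**2 / variances
                             + np.log(2.0 * np.pi * variances), axis=2)
    log_comp += np.log(weights)                                   # (n, k)
    m = log_comp.max(axis=1, keepdims=True)                       # log-sum-exp
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

def cm_score(X, gmm_bona, gmm_spoof):
    """Countermeasure score: mean log-likelihood ratio over frames,
    positive scores favouring the bona fide model."""
    return float(np.mean(gmm_logpdf(X, *gmm_bona) - gmm_logpdf(X, *gmm_spoof)))
```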
\begin{figure*}[!t] \centering {\textbf{GMM-CQCC (linear)}} \subfloat[\label{fig:triangles:A07_lin_cqt}]{\input{figures/A07_linear_CQT_clean}} \subfloat[\label{fig:triangles:A16_lin_cqt}]{\input{figures/A16_linear_CQT_clean}} \subfloat[\label{fig:triangles:A19_lin_cqt}]{\input{figures/A19_linear_CQT_clean}} \subfloat[\label{fig:triangles:A13_lin_cqt}]{\input{figures/A13_linear_CQT_clean}} \subfloat[\label{fig:triangles:A14_lin_cqt}]{\input{figures/A14_linear_CQT_clean}} \subfloat[\label{fig:triangles:A17_lin_cqt}\protect\hphantom{MAKESPACE}]{\input{figures/A17_linear_CQT_clean}} \\ \vspace{0.2cm} {\textbf{GMM-LFCC}} \subfloat[\label{fig:triangles:A07_lin_fft}]{\input{figures/A07_linear_FFT_clean}} \subfloat[\label{fig:triangles:A16_lin_fft}]{\input{figures/A16_linear_FFT_clean}} \subfloat[\label{fig:triangles:A19_lin_fft}]{\input{figures/A19_linear_FFT_clean}} \subfloat[\label{fig:triangles:A13_lin_fft}]{\input{figures/A13_linear_FFT_clean}} \subfloat[\label{fig:triangles:A14_lin_fft}]{\input{figures/A14_linear_FFT_clean}} \subfloat[\label{fig:triangles:A17_lin_fft}\protect\hphantom{MAKESPACE}]{\input{figures/A17_linear_FFT_clean}} \\ \vspace{0.2cm} {\textbf{GMM-CQCC (geometric)}} \subfloat[\label{fig:triangles:A07_geo_cqt}]{\input{figures/A07_geometric_CQT_clean.tex}} \subfloat[\label{fig:triangles:A16_geo_cqt}]{\input{figures/A16_geometric_CQT_clean.tex}} \subfloat[\label{fig:triangles:A19_geo_cqt}]{\input{figures/A19_geometric_CQT_clean.tex}} \subfloat[\label{fig:triangles:A13_geo_cqt}]{\input{figures/A13_geometric_CQT_clean.tex}} \subfloat[\label{fig:triangles:A14_geo_cqt}]{\input{figures/A14_geometric_CQT_clean.tex}} \subfloat[\label{fig:triangles:A17_geo_cqt}\protect\hphantom{MAKESPACE}]{\input{figures/A17_geometric_CQT_clean.tex}} \caption{2-D heatmap visualisations of sub-band analysis results for the six different spoof attacks. The first row (a-f) shows results for the GMM-CQCC (linear) system. The second row (g-l) shows corresponding results for the GMM-LFCC system.
The third row (m-r) shows results for the GMM-CQCC (geometric) system.} \label{fig:triangles} \end{figure*} \subsection{Metric and performance} Even though it is not the primary metric for the ASVspoof 2019 database, performance is assessed in terms of the EER, which is determined according to a convex hull approach \cite{brummer2013bosaris}. While the community has moved towards use of the tandem detection cost function (t-DCF) metric~\cite{kinnunen2018t}, the primary metric for the ASVspoof 2019 database, the work reported in this paper is concerned exclusively with countermeasure performance rather than the impact of spoofing and countermeasures upon ASV performance. The latter is a related, but different issue. Use of the EER is then appropriate and sufficient for the purposes of the work presented in this paper. EER results for the two countermeasures and the evaluation partition of the ASVspoof 2019 LA database are illustrated in Table~\ref{Tab:2019 ASV spoof results}. Just as is the case for the 2015 database, results for the 2019 database show substantial variations in performance for the GMM-LFCC and GMM-CQCC countermeasures. For attacks A07, A16 and A19, the GMM-CQCC system outperforms the GMM-LFCC system whereas, for attacks A13, A14 and A17, it is the GMM-LFCC system that performs best, albeit still with relatively high error rates. Subsequent experiments were performed separately for this subset of 6 specific spoofing attacks, all being examples of where one front-end representation leads to substantially better results than the other. Brief details of the specific algorithms used in creating each attack are illustrated in Table~\ref{tab:asvspoof_2019_attack_details}. There are four voice conversion algorithms (A13, A14, A17 and A19) and two speech synthesis algorithms (A07 and A16), even if the input to two of the voice conversion algorithms is also synthetic speech (A13 and A14). Further details of each algorithm are available in~\cite{wang2019asvspoof}.
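The EER itself can be approximated from a set of countermeasure scores with a simple threshold sweep. The sketch below is an illustration only; results in this paper are computed with the convex hull approach of the BOSARIS toolkit~\cite{brummer2013bosaris}.

```python
import numpy as np

def compute_eer(bona_scores, spoof_scores):
    """Approximate equal error rate, with higher scores meaning
    'more bona fide'. Sweeps all observed score thresholds and
    returns the smallest max(miss rate, false-alarm rate)."""
    eer = 1.0
    for t in np.sort(np.concatenate([bona_scores, spoof_scores])):
        p_miss = np.mean(bona_scores < t)     # bona fide rejected
        p_fa = np.mean(spoof_scores >= t)     # spoofed speech accepted
        eer = min(eer, max(p_miss, p_fa))
    return float(eer)
```

With perfectly separated score distributions the function returns 0; with fully overlapping distributions it approaches 0.5 (chance performance).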
\subsection{Sub-band analyses} We sought to explain differences in the performance of GMM-LFCC and GMM-CQCC systems using what we could learn from sub-band analysis, results for which are shown in Fig.~\ref{fig:triangles}. Heatmaps are shown for each of the 6 spoofing attacks: A07, A16, A19 for which the GMM-CQCC system performs best; A13, A14 and A17 for which the GMM-LFCC system performs best. Results for the GMM-CQCC and GMM-LFCC systems are illustrated in rows one and two respectively. In each case, the EER for the baseline system is denoted by the top-left-most point in each 2D heatmap, i.e.~the full-band case. Figs.~\ref{fig:triangles} (a)-(c) show that the baseline GMM-CQCC system (top-left-most point) gives low EERs whereas Figs.~\ref{fig:triangles} (g)-(i) show that the GMM-LFCC system gives consistently worse results. Figs.~\ref{fig:triangles} (d)-(f) and (j)-(l) show the opposite, even if performance for A17 (l) is still poor for both baseline systems. Upon inspection of the heatmaps for both LFCC and CQCC front-ends, we see that the critical information for the detection of attacks A07, A16 and A19 lies at high frequencies. While it is not visible in the plots at this scale, the discriminative information lies above 7.6~kHz; near-to-zero EERs can be obtained using information between~7.6 and 8~kHz only, using either front-end. The situation for attacks A13, A14 and A17 is a little different. The diagonals of the heatmaps in Figs.~\ref{fig:triangles} (d), (e), (j) and (k) all suggest that the most discriminative information is at the lower frequencies. The left-most columns of Figs.~\ref{fig:triangles} (d), (e) and (j) suggest that the most critical information is located at the very lowest sub-bands. For A17, neither front-end performs especially well. The CQCC front-end fails completely, whereas the LFCC front-end succeeds in capturing information across the majority of the full band.
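The band-limiting that underpins these sub-band analyses can be approximated with an ideal, brick-wall filter applied in the frequency domain. This is an illustrative assumption; it is not necessarily the filter design used for the experiments reported above.

```python
import numpy as np

def bandpass_brickwall(x, fs, f_min, f_max):
    """Zero all spectral components of x outside [f_min, f_max] Hz.
    An idealised stand-in for the low-pass/high-pass filtering used
    to band-limit the data in each sub-band experiment."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_min) | (freqs > f_max)] = 0.0
    return np.fft.irfft(X, n=len(x))

# example: keep only the 2-4 kHz band of a two-tone signal at fs = 16 kHz
fs, n = 16000, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)
y = bandpass_brickwall(x, fs, 2000.0, 4000.0)
```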
\begin{figure*}[!t] \centering \includegraphics{CQCC_geometric_linear_scale.pdf}\\ \includegraphics[width=\textwidth,trim={0cm 0cm 0cm 7.5cm},clip]{CQCC_geometric_log_scale.pdf}\\ \includegraphics{CQCC_linear_linear_scale.pdf}\\ \includegraphics{DFT_linear_scale.pdf}\\ \caption{Illustrations of spectra for an arbitrary speech frame derived using the CQT and DFT (blue) and the second basis function of the discrete cosine transform used in cepstral analysis (black). The top plot shows the CQT-derived spectrum without resampling, hence the logarithmic sampling (compression of vertical blue lines) in the linear frequency domain. The second plot shows the same spectrum but with a warped, logarithmic frequency scale. The third plot shows the CQT-derived spectrum after resampling to a uniform, linear frequency scale (Section 2; thus more vertical lines). The fourth plot shows the corresponding DFT-derived spectrum.} \label{Figure.resample} \end{figure*} \section{Spectro-temporal resolution} Results presented above somewhat dispel the hypothesis that CQCCs produce better results than LFCCs for some attacks on account of the higher temporal resolution at high frequencies. If this were true, reliable performance would not have been obtained with the LFCC front-end using high-frequency sub-bands alone. The explanation lies elsewhere. The same results show that the discriminative information simply lies in the highest sub-bands; with appropriate band-pass filtering, both front-ends perform well for A07, A16 and A19 attacks. It is more than this, though, since such a straightforward explanation does not account for why the original, \emph{full-band} CQCC front-end performs well, i.e.~\emph{without} band-pass filtering. The explanation for this observation requires us to revisit the issue of spectro-temporal resolution. Fig.~\ref{Figure.resample} illustrates a set of DFT and CQT-derived spectra for an arbitrary speech frame.
Each plot also shows the second basis function of the DCT used in cepstral analysis. The vertical bars serve to illustrate the \emph{spectral} resolution across the spectrum and give some indication of the \emph{temporal} resolution, which is inversely proportional to the spectral resolution. The top plot in Fig.~\ref{Figure.resample} shows the CQT-derived spectrum (without re-sampling)~\cite{brown1999computer}. It shows clearly that the spectral resolution is higher at lower frequencies than at higher frequencies. The DCT basis function in this plot is \emph{warped} to the same, non-linear frequency scale. The second plot shows exactly the same data, now plotted with a logarithmic frequency scale, hence the regularity of the vertical bars and DCT basis function. These plots show how, without resampling, the DCT will attribute greater \emph{weight} to the low frequency components than to high frequency components. The third plot shows the CQT-derived spectrum after resampling. The vertical bars show how resampling acts to linearise the rate of sampling in the frequency domain. Note the difference in frequency scales for the second and third plots. The higher spectral resolution (and hence lower temporal resolution) for lower frequencies is still clearly apparent; the spectrum is much smoother at higher frequencies. A comparison of the cosine basis functions to the CQT-derived spectra in the second and third plots shows that resampling acts to dilute (weaken) information at lower frequencies but to distil (emphasise) information at higher frequencies. Hence, spoofing artefacts at high frequencies are emphasised by the CQCC front-end. The fourth plot of Fig.~\ref{Figure.resample} shows the DFT-derived spectrum and the linear sampling in the frequency domain. Here, the DCT acts to weight all frequency components uniformly. Spoofing artefacts at high frequencies are not emphasised; reliable performance is then obtained only by band-pass filtering.
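The dilution and distillation effects described above are easy to verify numerically: most bins of a geometric frequency grid lie at low frequencies, whereas after resampling to a uniform grid only a small fraction of samples, and hence of the weight seen by the DCT, covers the same region. The bin count and frequency range below are illustrative assumptions.

```python
import numpy as np

K, f_min, f_max = 863, 15.0, 8000.0            # illustrative CQT geometry
# geometric bin centre frequencies, as in the CQT before resampling
f_geo = f_min * (f_max / f_min) ** (np.arange(K) / (K - 1))
# uniform grid produced by resampling to a linear frequency scale
f_lin = np.linspace(f_min, f_max, K)

frac_geo_low = float(np.mean(f_geo < 1000.0))  # share of geometric bins < 1 kHz
frac_lin_low = float(np.mean(f_lin < 1000.0))  # share of linear samples < 1 kHz
```

For these values roughly two thirds of the geometric bins lie below 1 kHz, against little more than a tenth of the linearly resampled points, consistent with the weighting argument made in the text.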
The explanation for why the LFCC front-end outperforms the CQCC front-end for attacks A13, A14 and A17 is now straightforward. The artefacts for these attacks are at lower and mid-range frequencies where the CQT acts to dilute information, hence the GMM-CQCC system performs poorly for these attacks. Since the DFT gives uniform weighting to information at all frequencies, the GMM-LFCC system gives better, though still somewhat poor performance. This is because information at low frequencies is not \emph{emphasised}. \section{Validation} We designed one final experiment to validate our findings. The CQT-derived spectra in the top two plots of Fig.~\ref{Figure.resample} show dense sampling of the spectrum at lower frequencies but sparse sampling at higher frequencies. Hence, without resampling, the application of cepstral analysis should result in the emphasis of information situated at lower frequencies. In this case, a CQCC front-end \emph{without} linear resampling should produce better detection performance in the case of attacks A13 and A14. Performance for attack A17 might not be improved, since the artefacts appear not to be localised at low frequencies. We repeated the sub-band analysis described in Section~3 using the geometrically-scaled CQCC front-end. Results are illustrated in Figs.~\ref{fig:triangles} (m)-(r) for the same set of 6 attacks for which results are described in Section~4. The comparison of the right-most columns in the heatmaps of Figs.~\ref{fig:triangles} (d) and (p) and those in (e) and (q) clearly shows that lower EERs are obtained using the geometrically-scaled rather than the linearly-scaled CQCC front-end. Low EERs are even obtained when the front-end is applied without any band-pass filtering. This is because use of the original CQT geometric frequency scale results in the emphasis of information at lower frequencies where the artefacts are located. As expected, attack A17 remains troublesome.
EERs for the geometrically-scaled CQCC front-end are presented in the last row of Table~\ref{Tab:2019 ASV spoof results}. For some attacks, the EER for the geometrically-scaled CQCC front-end is slightly higher. For some others, it is slightly lower. Once again, performance for A17 is poor, and even worse than for both the linearly-scaled CQCC and LFCC front-ends. Those for A13 and A14 clearly confirm our findings, with substantially lower EERs than those obtained with both the linearly-scaled CQCC and LFCC front-ends. Use of the original geometric scale of the CQT clearly results in the emphasis of information at lower frequencies where the artefacts are localised. \section{Discussion} This paper reports an explainability study of constant Q cepstral coefficients (CQCCs), a spoofing countermeasure front-end for automatic speaker verification~\cite{todisco2016new,todisco2017constant}. The work aims to explain why the CQCC front-end works so reliably in detecting some forms of spoofing attack, but performs poorly in detecting others. It confirms that different spoofing attacks exhibit artefacts at different frequencies, artefacts that are better captured with specific front-ends. The standard constant Q transform (CQT)~\cite{Youngberg78,Brown91} exhibits a dense sampling of the spectrum at lower frequencies and a sparse sampling at higher frequencies. Hence, geometrically-sampled CQCCs perform well in detecting spoofing artefacts when they are located at low frequencies. Linear sampling shifts the emphasis to higher frequencies so that the detection of spoofing artefacts at high frequencies are emphasised and captured reliably. Taken together, the results presented in this paper show that no single CQCC front-end configuration can perform well for all spoofing attacks; different spoofing attacks produce artefacts at different parts of the spectrum and these can only be detected reliably when the front-end emphasises information in the relevant frequency bands. 
This finding may explain why classifier fusion has proven to be so important to generalisation, i.e.~reliable performance in the face of varying spoofing attacks. Given that spoofing artefacts can be highly localised in the spectrum, the same finding suggests that we may need to rethink the use of cepstral analysis, which acts to smooth information located across the full spectrum. Approaches to detect localised artefacts need further investigation for multiple applications (the EER represents just one operating point). We may then need to rethink the approach to classifier fusion too. One natural extension of the work would be to look deeper into the artefacts themselves. Preliminary investigations show that those at high frequencies appear to be nothing more than band-limiting effects introduced by the spoofing algorithms, or the data used in their training. The source of those at lower frequencies is less clear. A clearer understanding of the artefacts at the signal level, rather than just the feature or spectrum level, is certainly needed. This could help us to design better spoofing countermeasures, even if the same understanding might also help the fraudsters to design better spoofing attacks. Lastly, while it was not the objective of this work to investigate the generalisation capabilities of the CQCC front-end, it is clear that none of the countermeasures for which results are reported in this paper is successful in detecting \emph{all} of the spoofing attacks used in the creation of the ASVspoof 2019 database. Attacks A10, A12, A13, A17 and A18 are all examples where \emph{neither} configuration of the CQCC front-end, \emph{nor} the LFCC front-end is especially effective. The work presented in this paper focused on countermeasure performance exclusively. It did not consider the effect of spoofing and countermeasures upon automatic speaker verification (ASV) performance.
Some attacks, e.g.~A17, are known not to be especially effective in fooling ASV, and in this sense they pose no threat; that countermeasures fail to detect such attacks is hence of no serious concern. Other attacks, though, are effective in manipulating ASV and are also difficult to detect reliably. Other work reported by the community has shown far better results than those reported in this paper~\cite{chettri2019ensemble, alluri2019iiit,yang2019sjtu,lavrentyeva2019stc}. Still, even the very best approaches can be vulnerable to certain, specific attacks. The \emph{silver bullet} remains elusive for the time being. \section{Acknowledgements} {\small The work was partially supported by the Voice Personae and RESPECT projects, both funded by the French Agence Nationale de la Recherche~(ANR). } \balance \bibliographystyle{IEEEbib}
\section{Introduction} Aluthge transform of a bounded operator $T$, introduced by Aluthge in \cite{al}, is given by the formula $\widetilde{T}=|T|^{\frac12}U|T|^{\frac12}$, where $T=U|T|$ is the polar decomposition of $T$. It turned out to have many applications, e.g. in the invariant subspace problem (cf. \cite{jkp1}). One of the most important properties of the Aluthge transform is that it transforms a $p$-hyponormal operator into a $(p+\frac12)$-hyponormal one, preserving its spectrum (cf. \cite{al}, \cite{hu}). Moreover, under some conditions, the sequence $\{\widetilde{T}^{(n)}\}$ of consecutive iterations of Aluthge transform is convergent to a normal operator (cf. \cite{jkp2}). Aluthge transforms of operators were studied also in \cite{cjl}, \cite{fjkp}, \cite{jjs3}, \cite{jkp3}, \cite{rion}. A natural question is which of the above mentioned properties remain true if one considers a closed, densely defined operator $T$, which is not necessarily bounded. In this paper it is shown that Aluthge transform of such an operator may have trivial domain and need not be closed or even closable. Thus the sequence $\{\widetilde{T}^{(n)}\}$ cannot be defined. Interestingly, $\widetilde{T}$ may have trivial domain even if $T$ is a hyponormal operator, which implies in particular that Aluthge transform does not preserve hyponormality in the unbounded case. An example of such a hyponormal operator is given in this paper in the class of weighted shifts on directed trees. The construction of the example is preceded by a discussion on Aluthge transform for this class of operators. Importantly, the directed tree used in the construction is rootless and therefore the operator in question is unitarily equivalent to a composition operator. In turn, an example of an operator whose Aluthge transform is not closable can be realized as the adjoint of a composition operator.
Since most of the properties of the Aluthge transform are preserved if one replaces $\frac12$ in its definition by any pair of exponents that sum up to 1 (cf. \cite{al2}, \cite{hu}), in this paper $t$-Aluthge transform is considered for any $t\in(0,1]$, according to the following definition: \begin{df} Let $T$ be a closed, densely defined operator in a Hilbert space $\mathcal{H}$, let $T=U|T|$ be its polar decomposition and let $t\in(0,1]$. Then the \emph{$t$-Aluthge transform} of $T$ is given by the formula $\Delta_t(T)=|T|^tU|T|^{1-t}.$ \end{df} \section{Preliminaries} In what follows $\mathbb{Z}$ will denote the set of all integers and $\mathbb{Z}_+ = \{0,1,2,\ldots\}$. For any set $A$ the cardinality of $A$ will be denoted by $\# A$. Let $T$ be any operator in a complex Hilbert space $\mathcal{H}$. Then $\mathcal{D}(T)$, $\mathcal{N}(T)$, $\mathcal{R}(T)$ denote the domain, the null space and the range of $T$, respectively. For any linear subspace $W$ of $\mathcal{D}(T)$ we denote by $T\restriction_W$ the restriction of $T$ to the subspace $W$. Let $\Gamma(T)\subset \mathcal{H}\times\mathcal{H}$ be the graph of $T$. If the closure of $\Gamma(T)$ in the product topology is a graph of an operator, then $T$ is called \emph{closable} and we call this operator the closure of $T$, denoting it by $\overline{T}$. A densely defined operator $T$ is called \emph{hyponormal}, if $\mathcal{D}(T)\subset\mathcal{D}(T^*)$ and $\|Tf\|\geq\|T^*f\|$ for every $f\in\mathcal{D}(T)$. \bigskip Let $\mathfrak{T}=(V,E)$ be a directed tree (i.e. $V$ and $E$ are the sets of vertices and edges, respectively). If $\mathfrak{T}$ has a root, we denote it by $\mathtt{root}$ and we set $V^\circ=V\setminus\{\mathtt{root}\}$. Otherwise, we set $V^\circ=V$. For any vertex $u\in V$ we put $\mathtt{Chi}(u)=\{v\in V\,:\,(u,v)\in E\}$. If $v\in V^\circ$, then by $\mathtt{par}(v)$ we denote the only vertex $u\in V$ such that $v\in\mathtt{Chi}(u)$.
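\bigskip By way of illustration, consider the following standard example of the above definition (it is not taken from the works cited here): let $W$ be the bounded weighted shift on $\ell^2(\mathbb{Z}_+)$ given by $We_n=\lambda_n e_{n+1}$, where $\lambda_n>0$ and $\{e_n\}_{n\in\mathbb{Z}_+}$ is the canonical orthonormal basis. Then $|W|e_n=\lambda_n e_n$ and the partial isometry in the polar decomposition $W=U|W|$ satisfies $Ue_n=e_{n+1}$, so that for every $t\in(0,1]$ \[ \Delta_t(W)e_n=|W|^tU|W|^{1-t}e_n=\lambda_n^{1-t}\lambda_{n+1}^t\,e_{n+1},\qquad n\in\mathbb{Z}_+. \] Thus $\Delta_t(W)$ is again a weighted shift, whose weights are weighted geometric means of consecutive weights of $W$; for $t=\frac12$ this gives the well-known formula $\widetilde{W}e_n=\sqrt{\lambda_n\lambda_{n+1}}\,e_{n+1}$.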
\bigskip By $\ell^2(V)$ we understand the complex Hilbert space of functions $f:V\rightarrow \mathbb{C}$ such that $\sum_{v\in V} |f(v)|^2<\infty$, with inner product $\left<f,g\right> = \sum_{v\in V} f(v)\overline{g(v)}$, $f,g\in \ell^2(V)$. For any $u\in V$ we define $e_u\in\ell^2(V)$ as follows: $$e_u(v) = \begin{cases} 1, & u=v\\ 0, & u\neq v \end{cases}.$$ Obviously, $\{e_u\}_{u\in V}$ is an orthonormal basis of $\ell^2(V)$. We denote by $\mathcal{E}_V$ the linear span of $\{e_u\}_{u\in V}$. \bigskip For any system $\boldsymbol{\lambda}=\{\lambda_v\}_{v\in V^\circ}\subset\mathbb{C}$ we define the operator $S_{\boldsymbol{\lambda}}$ in $\ell^2(V)$ by \begin{equation} \mathcal{D}(S_{\boldsymbol{\lambda}}) = \left\{ f\in\ell^2(V)\,:\,\sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)}|\lambda_v|^2\right)|f(u)|^2 <\infty\right\}, \label{eq:DSl} \end{equation} \begin{equation} (S_{\boldsymbol{\lambda}} f)(v) = \begin{cases} \lambda_v f(\mathtt{par}(v)), & v\in V^\circ\\ 0 , & v=\mathtt{root} \end{cases},\qquad f\in\mathcal{D}(S_{\boldsymbol{\lambda}}). \label{eq:Sl} \end{equation} The operator $S_{\boldsymbol{\lambda}}$ is called the \emph{weighted shift} on the directed tree $\mathfrak{T}$ with the system of weights $\boldsymbol{\lambda}$. For any $\boldsymbol{\lambda}=\{\lambda_u\}_{u\in V^\circ}$ we will use the following notations: $V_{\boldsymbol{\lambda}}^+ :=\{u\in V\,:\,S_{\boldsymbol{\lambda}} e_u \neq 0\}$, $\mathtt{Chi}_{\boldsymbol{\lambda}}^+(u) := \mathtt{Chi}(u)\cap V_{\boldsymbol{\lambda}}^+$ for any $u\in V$ and, if $U$ is any subset of $V$, then $\mathtt{Chi}(U):= \bigcup_{u\in U} \mathtt{Chi}(u)$. We also use the notations $\mathtt{Chi}^2(u):=\mathtt{Chi}(\mathtt{Chi}(u))$ and $\mathtt{par}^2(u):=\mathtt{par}(\mathtt{par}(u))$. Let us recall some useful properties of weighted shifts. \begin{prop}[cf. {\cite[Propositions 3.1.2 and 3.1.3]{jjs}}] Let $S_{\boldsymbol{\lambda}}$ be a weighted shift on a directed tree $\mathfrak{T}=(V,E)$.
Then the following assertions hold: \begin{itemize} \item[(i)] $S_{\boldsymbol{\lambda}}$ is a closed operator, \item[(ii)] $e_u\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ if and only if $\sum_{v\in\mathtt{Chi}(u)}|\lambda_v|^2<\infty$ and in this case \begin{equation} S_{\boldsymbol{\lambda}} e_u = \sum_{v\in\mathtt{Chi}(u)} \lambda_v e_v,\qquad \|S_{\boldsymbol{\lambda}} e_u\|^2 = \sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2, \label{eq:Sleu} \end{equation} \item[(iii)] $S_{\boldsymbol{\lambda}}$ is densely defined if and only if $e_u\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ for every $u\in V$. \end{itemize} \label{prop:Sl} \end{prop} \begin{lem} Let $S_{\boldsymbol{\lambda}}$ be a weighted shift on $\mathfrak{T}=(V,E)$. Then $\mathcal{E}:=\mathcal{E}_V\cap\mathcal{D}(S_{\boldsymbol{\lambda}})$ is a core for $S_{\boldsymbol{\lambda}}$, i.e. $\overline{S_{\boldsymbol{\lambda}}\restriction_{\mathcal{E}}}=S_{\boldsymbol{\lambda}}$. \label{lem:core} \end{lem} \begin{proof} Let $f\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ and let $U:=\{u\in V\,:\, f(u)\neq 0\}$. Since $f\in\ell^2(V)$, the set $U$ is at most countable. If $U$ is finite, then $f\in \mathcal{E}_V$. Otherwise, set $U=\{u_1,u_2,\ldots\}$ and \begin{equation} f_n := \sum_{j=1}^n f(u_j) e_{u_j}, \qquad n=1,2,\ldots \label{eq:fn} \end{equation} Obviously, $f_n\in \mathcal{E}_V$. Since $f\in\mathcal{D}(S_{\boldsymbol{\lambda}})$, for every $n$ we have \begin{equation*} \sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2 \right) |f_n(u)|^2 \leq \sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2 \right) |f(u)|^2<\infty \end{equation*} and therefore $f_n\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ and hence also $f_n\in\mathcal{E}$. 
Using Parseval's identity, we get \begin{align*} \|f-f_n\|^2 = \sum_{u\in V} |\left< f-f_n,e_u\right>|^2= \sum_{u\in V} |f(u)-f_n(u)|^2 =\\ = \sum_{u\in U} |f(u)-f_n(u)|^2 = \sum_{j=n+1}^\infty |f(u_j)|^2 \stackrel{n\to\infty}{\longrightarrow} 0, \end{align*} because the series $\sum_{j=1}^\infty |f(u_j)|^2$ is convergent. It remains only to show that $S_{\boldsymbol{\lambda}} f_n \to S_{\boldsymbol{\lambda}} f$. Using (\ref{eq:fn}) and (\ref{eq:Sleu}), we have \begin{equation*} S_{\boldsymbol{\lambda}} f_n = S_{\boldsymbol{\lambda}} \sum_{j=1}^n f(u_j)e_{u_j} = \sum_{j=1}^n f(u_j) \left(\sum_{v\in\mathtt{Chi}(u_j)} \lambda_v e_v\right) . \end{equation*} Using Parseval's identity again, from (\ref{eq:Sl}) we get \begin{align*} \allowdisplaybreaks \|S_{\boldsymbol{\lambda}} (f-f_n)\|^2 &= \sum_{v\in V} |\left<S_{\boldsymbol{\lambda}} (f-f_n),e_v\right>|^2=\\ &= \sum_{v\in V} |(S_{\boldsymbol{\lambda}} f)(v)-(S_{\boldsymbol{\lambda}} f_n)(v)|^2 =\\ &= \sum_{v\in V^\circ} |\lambda_v|^2 |f(\mathtt{par}(v))-f_n(\mathtt{par}(v))|^2 =\\ &= \sum_{u\in V} \sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2 |f(u)-f_n(u)|^2 =\\ &= \sum_{u\in U} |f(u)-f_n(u)|^2 \left(\sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2\right) = \\ &= \sum_{j=n+1}^\infty |f(u_j)|^2 \left(\sum_{v\in\mathtt{Chi}(u_j)} |\lambda_v|^2\right) \stackrel{n\to\infty}{\longrightarrow} 0, \end{align*} which completes the proof. \end{proof} Let us recall a criterion for hyponormality which will be used in the sequel. \begin{thm}[cf. {\cite[Theorem 5.1.2, Remark 5.1.5]{jjs}}] Let $S_{\boldsymbol{\lambda}}$ be a weighted shift with weights $\boldsymbol{\lambda}$ on a directed tree $\mathfrak{T}=(V,E)$. 
Then $S_{\boldsymbol{\lambda}}$ is hyponormal if and only if for every $u\in V$ the following conditions hold: \begin{equation} \text{if }v\in\mathtt{Chi}(u)\text{ and }\|S_{\boldsymbol{\lambda}} e_v\|=0\text{, then }\lambda_v=0, \label{eq:hyp1} \end{equation} \begin{equation} \sum_{v\in\mathtt{Chi}_{\boldsymbol{\lambda}}^+(u)} \frac{|\lambda_v|^2}{\|S_{\boldsymbol{\lambda}} e_v\|^2} \leq 1. \label{eq:hyp} \end{equation} \label{thm:hyp} \end{thm} \section{Polar decompositions of $S_{\boldsymbol{\lambda}}$ and $S_{\boldsymbol{\lambda}}^*$} We begin by recalling the description of the polar decomposition of a weighted shift. \begin{prop}[cf. {\cite[Proposition 3.4.3]{jjs}}] Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on $\mathfrak{T}=(V,E)$ with weights $\boldsymbol{\lambda}$ and let $\alpha>0$. Then: \begin{itemize} \item[(i)] $\mathcal{D}(|S_{\boldsymbol{\lambda}}|^\alpha) = \{ f\in\ell^2(V)\,:\,\sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{2\alpha}|f(u)|^2 <\infty \},$ \item[(ii)] for every $u\in V$ we have $e_u\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^\alpha)$ and $|S_{\boldsymbol{\lambda}}|^\alpha e_u = \|S_{\boldsymbol{\lambda}} e_u\|^\alpha e_u$, \item[(iii)] if $f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^\alpha)$, then \begin{equation} (|S_{\boldsymbol{\lambda}}|^\alpha f)(u) = \|S_{\boldsymbol{\lambda}} e_u\|^\alpha f(u),\qquad u\in V. \label{eq:mod} \end{equation} \end{itemize} \label{prop:mod} \end{prop} \begin{prop}[cf. {\cite[Proposition 3.5.1]{jjs}}] Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on $\mathfrak{T}=(V,E)$ with weights $\boldsymbol{\lambda}$ and let $S_{\boldsymbol{\lambda}} = U|S_{\boldsymbol{\lambda}}|$ be its polar decomposition. 
Then $U=S_{\boldsymbol{\pi}}$, where \begin{equation} \pi_v = \begin{cases}\displaystyle \frac{ \lambda_v}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|}, & \text{if }\mathtt{par}(v)\in V_{\boldsymbol{\lambda}}^+\\[2ex] 0, & \text{otherwise} \end{cases} ,\quad v\in V^\circ. \label{eq:pi} \end{equation} \label{prop:Spi} \end{prop} The following proposition contains a formula for $S_{\boldsymbol{\lambda}}^*$. \begin{prop}[cf. {\cite[Proposition 3.4.1]{jjs}}] Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on a directed tree $\mathfrak{T}=(V,E)$. Then \begin{itemize} \item[(i)] $\mathcal{E}_V\subseteq\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$ and \begin{equation*} S_{\boldsymbol{\lambda}}^* e_u=\begin{cases} \overline{\lambda_u}e_{\mathtt{par}(u)}, &u\in V^\circ\\ 0, &u=\mathtt{root} \end{cases} , \end{equation*} \item[(ii)] $\mathcal{D}(S_{\boldsymbol{\lambda}}^*) = \left\{f\in\ell^2(V)\,:\,\sum_{u\in V}\left|\sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v)\right|^2<\infty\right\}$, \item[(iii)] $(S_{\boldsymbol{\lambda}}^* f)(u) = \sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v)$ for every $u\in V$ and $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$. \end{itemize} \label{prop:Sl*} \end{prop} Let $S_{\boldsymbol{\lambda}}^*=W|S_{\boldsymbol{\lambda}}^*|$ be the polar decomposition of $S_{\boldsymbol{\lambda}}^*$. From Proposition \ref{prop:Spi} it follows that $W=S_{\boldsymbol{\pi}}^*$ with $\boldsymbol{\pi}$ given by \eqref{eq:pi}. The exact formula for $S_{\boldsymbol{\pi}}^*$ can be easily derived from Proposition \ref{prop:Sl*}. The following theorem gives a formula for the powers of the modulus of $S_{\boldsymbol{\lambda}}^*$. \begin{thm} Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on a directed tree $\mathfrak{T}=(V,E)$ and let $\alpha>0$.
Then the following assertions hold: \begin{itemize} \item[(i)] $\mathcal{D}(|S_{\boldsymbol{\lambda}}^*|^\alpha) = \left\{f\in\ell^2(V)\,:\, \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{2\alpha-2} \left|\sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v) \right|^2 <\infty\right\}$. \item[(ii)] for every $f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}^*|^\alpha)$ and $u\in V$ the following formula holds: $$(|S_{\boldsymbol{\lambda}}^*|^\alpha f)(u) = \begin{cases} \|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(u)}\|^{\alpha-2}\lambda_u\sum_{v\in\mathtt{Chi}(\mathtt{par}(u))}\overline{\lambda_v}f(v), &\text{if }u\in \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+)\\ 0,& \text{if }u\in V\setminus \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+) \end{cases},$$ \item [(iii)] the formula \begin{equation} |S_{\boldsymbol{\lambda}}^*|^\alpha = \bigoplus_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^\alpha P_u \label{eq:sum} \end{equation} holds, where $P_u$ is the orthogonal projection from $\ell^2(V)$ onto the linear span of $S_{\boldsymbol{\lambda}} e_u$ for all $u\in V$ (if $S_{\boldsymbol{\lambda}} e_u=0$, then $P_u=0$). \end{itemize} \label{thm:S*a} \end{thm} \begin{proof} We start by proving all assertions for $\alpha=1$. It is known that $\mathcal{D}(|S_{\boldsymbol{\lambda}}^*|)=\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$, which, together with Proposition \ref{prop:Sl*} (ii), implies part (i). Moreover, since $S_{\boldsymbol{\lambda}}^*=S_{\boldsymbol{\pi}}^*|S_{\boldsymbol{\lambda}}^*|$ is the polar decomposition of $S_{\boldsymbol{\lambda}}^*$, where $\boldsymbol{\pi}$ is given by (\ref{eq:pi}), we conclude that $|S_{\boldsymbol{\lambda}}^*|=S_{\boldsymbol{\pi}} S_{\boldsymbol{\lambda}}^*$.
Hence for all $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$ and $u\in V$ we obtain \begin{align*} (|S_{\boldsymbol{\lambda}}^*|f)(u) &= \begin{cases} \pi_u (S_{\boldsymbol{\lambda}}^*f)(\mathtt{par}(u)), &\text{if }u\in V^\circ\\[1ex] 0,&\text{if }u=\mathtt{root} \end{cases} =\notag\\[1ex] &\stackrel{(\ref{eq:pi})}{=} \begin{cases} \displaystyle\frac{\lambda_u}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(u)}\|} \sum_{v\in\mathtt{Chi}(\mathtt{par}(u))} \overline{\lambda_v} f(v),&\text{if }u\in \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+)\\[1ex] 0,&\text{if }u\in V\setminus \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+) \end{cases}, \end{align*} so assertion (ii) holds for $\alpha=1$. Let now $P_u$ be as in (iii). Then for every $f\in\ell^2(V)$ and $u\in V_{\boldsymbol{\lambda}}^+$ we have \begin{align} P_u f &= \frac{ \left<f,S_{\boldsymbol{\lambda}} e_u\right> S_{\boldsymbol{\lambda}} e_u }{\|S_{\boldsymbol{\lambda}} e_u\|^2} = \frac 1{\|S_{\boldsymbol{\lambda}} e_u\|^2} \left< f,\sum_{v\in\mathtt{Chi}(u)}\lambda_v e_v \right> \sum_{w\in\mathtt{Chi}(u)} \lambda_w e_w = \nonumber\\ &= \frac 1{\|S_{\boldsymbol{\lambda}} e_u\|^2} \left(\sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v} f(v) \right) \sum_{w\in\mathtt{Chi}(u)} \lambda_w e_w. \label{eq:Puf} \end{align} Observe that if $\left<P_u f, P_v g\right> \neq 0$ for some $u,v\in V$ and $f,g\in\ell^2(V)$, then from (\ref{eq:Puf}) it follows that there exists $w\in\mathtt{Chi}(u)\cap\mathtt{Chi}(v)$. This implies that $u=\mathtt{par}(w)=v$. Hence the orthogonality of the sum in (iii) follows.
For any $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$, $u\in V_{\boldsymbol{\lambda}}^+$ and $w\in V$ we infer from (\ref{eq:Puf}) that \begin{align} \allowdisplaybreaks (P_u f)(w) &= \begin{cases}\displaystyle \frac{\lambda_w}{\|S_{\boldsymbol{\lambda}} e_u\|^2} \sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v),&\text{if }w\in\mathtt{Chi}(u)\\ 0,&\text{if }w\in V\setminus\mathtt{Chi}(u) \end{cases} = \nonumber\\ &= \begin{cases}\displaystyle \frac{\lambda_w}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|^2} \sum_{v\in\mathtt{Chi}(\mathtt{par}(w))} \overline{\lambda_v}f(v),&\text{if }w\in\mathtt{Chi}(u)\\ 0,&\text{if }w\in V\setminus\mathtt{Chi}(u) \end{cases} = \nonumber\\ &= \begin{cases}\displaystyle \frac { 1}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|} (|S_{\boldsymbol{\lambda}}^*|f)(w),&\text{if }w\in\mathtt{Chi}(u)\\ 0,&\text{if }w\in V\setminus\mathtt{Chi}(u) \end{cases} . \label{eq:Pufw} \end{align} Hence for all $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$ and $w\in V^\circ$ we obtain \begin{equation*} (|S_{\boldsymbol{\lambda}}^*|f)(w) = \|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\| (P_{\mathtt{par}(w)}f)(w) = \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\| (P_u f)(w). \end{equation*} Therefore $|S_{\boldsymbol{\lambda}}^*|f = \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\| P_u f$ for every $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)$. To show that \begin{equation} |S_{\boldsymbol{\lambda}}^*| = \bigoplus_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\| P_u, \label{eq:|Sl*|pu} \end{equation} it now suffices to check the inclusion \begin{equation*} \mathcal{D}\left(\bigoplus_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\| P_u\right) \subseteq \mathcal{D}(S_{\boldsymbol{\lambda}}^*). \end{equation*} Let $f$ belong to the left-hand side. 
This means that \begin{eqnarray} \infty &>& \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^2 \|P_u f\|^2 \stackrel{(\ref{eq:Puf})}{=} \nonumber\\ &=& \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^2 \left(\frac1{\|S_{\boldsymbol{\lambda}} e_u\|^4} \left| \sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v) \right|^2 \left\| S_{\boldsymbol{\lambda}} e_u\right\|^2 \right) =\nonumber\\ &=& \sum_{u\in V} \left| \sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v}f(v) \right|^2, \label{eq:DOpl} \end{eqnarray} which implies that $f\in\mathcal{D}(S_{\boldsymbol{\lambda}}^*)= \mathcal{D}(|S_{\boldsymbol{\lambda}}^*|)$. This completes the proof of (\ref{eq:|Sl*|pu}). Let now $\alpha>0$ be arbitrary. Assertion (iii) of the theorem follows immediately from (\ref{eq:|Sl*|pu}). To prove (i) it now suffices to observe that, by calculations similar to (\ref{eq:DOpl}), $f\in \mathcal{D}(|S_{\boldsymbol{\lambda}}^*|^\alpha) = \mathcal{D}(\bigoplus_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^\alpha P_u)$ if and only if \begin{equation*} \infty > \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u \|^{2\alpha} \|P_u f\|^2 = \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{2\alpha-2} \left|\sum_{v\in\mathtt{Chi}(u)} \overline{\lambda_v} f(v) \right|^2. \end{equation*} Finally, let $f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}^*|^\alpha)$ and $w\in V$.
Then \begin{align*} \allowdisplaybreaks (|S_{\boldsymbol{\lambda}}^*|^\alpha f)(w) &\stackrel{(iii)}{=} \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{\alpha} (P_u f)(w) =\\ &=\sum_{u\in V_{\boldsymbol{\lambda}}^+} \|S_{\boldsymbol{\lambda}} e_u\|^{\alpha} (P_u f)(w) =\\[1ex] &=\begin{cases} \|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|^\alpha (P_{\mathtt{par}(w)} f)(w), &\text{if }w\in \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+)\\ 0, &\text{if } w\in V\setminus \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+) \end{cases} =\\[2ex] &\stackrel{(\ref{eq:Pufw})}{=} \begin{cases} \displaystyle\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|^{\alpha-2}\lambda_w \sum_{v\in\mathtt{Chi}(\mathtt{par}(w))} \overline{\lambda_v}f(v),&\text{if }w\in\mathtt{Chi}(V_{\boldsymbol{\lambda}}^+)\\ 0,&\text{if }w\in V\setminus\mathtt{Chi}(V_{\boldsymbol{\lambda}}^+) \end{cases}, \end{align*} which is exactly the claim of (ii). Thus the proof is complete. \end{proof} \section{Aluthge transform of a weighted shift} In this section we give a description of the $t$-Aluthge transform of a weighted shift on a directed tree. It turns out that its closure is again a weighted shift on the same tree. \begin{thm} Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on $\mathfrak{T}=(V,E)$ with $\boldsymbol{\lambda}:V^\circ\to\mathbb{C}$ and let $t\in(0,1]$. Then \begin{itemize} \item[(i)] $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}})) = \mathcal{D}(S_{\boldsymbol{\mu}})\cap\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$, where \begin{equation} \mu_v = \begin{cases}\displaystyle \frac{\|S_{\boldsymbol{\lambda}} e_v\|^t}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t} \lambda_v, & \text{if }\mathtt{par}(v)\in V_{\boldsymbol{\lambda}}^+\\[2ex] 0, & \text{otherwise} \end{cases} ,\quad v\in V^\circ, \label{eq:mu} \end{equation} \item[(ii)] $\Delta_t(S_{\boldsymbol{\lambda}})$ is closable and $\overline{\Delta_t(S_{\boldsymbol{\lambda}})} = S_{\boldsymbol{\mu}}$.
\end{itemize} \label{thm:DtSl} \end{thm} \begin{proof} Since $\Delta_t(S_{\boldsymbol{\lambda}}) = |S_{\boldsymbol{\lambda}}|^t S_{\boldsymbol{\pi}} |S_{\boldsymbol{\lambda}}|^{1-t}$, where $\boldsymbol{\pi}$ is given by (\ref{eq:pi}), for any \mbox{$f\in\ell^2(V)$} we have \begin{equation} f\in\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}})) \iff \left(f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})\text{ and }S_{\boldsymbol{\pi}}|S_{\boldsymbol{\lambda}}|^{1-t}f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^t)\right). \label{eq:DiffDD} \end{equation} Let $f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$. Then, using (\ref{eq:mod}) and (\ref{eq:pi}), we obtain for every $v\in V^\circ$ \begin{align} (S_{\boldsymbol{\pi}}|S_{\boldsymbol{\lambda}}|^{1-t}f)(v) &= \pi_v (|S_{\boldsymbol{\lambda}}|^{1-t}f)(\mathtt{par}(v)) =\notag \\[1ex] &= \begin{cases} \displaystyle\frac{\lambda_v}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|} \|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{1-t} f(\mathtt{par}(v)), & \text{if }\mathtt{par}(v)\in V_{\boldsymbol{\lambda}}^+\\[1ex] 0, & \text{otherwise} \end{cases} = \notag\\[1ex] &= \begin{cases}\displaystyle \frac{\lambda_v}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t} f(\mathtt{par}(v)), & \text{if }\mathtt{par}(v)\in V_{\boldsymbol{\lambda}}^+\\[1ex] 0, & \text{otherwise} \end{cases} .
\label{eq:SpSl} \end{align} From the above equation and Proposition \ref{prop:mod} it follows that $f\in\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))$ if and only if \begin{align*} \infty &> \sum_{v\in V} \|S_{\boldsymbol{\lambda}} e_v\|^{2t}|(S_{\boldsymbol{\pi}}|S_{\boldsymbol{\lambda}}|^{1-t}f)(v)|^2 =\\ &= \sum_{v\in \mathtt{Chi}(V_{\boldsymbol{\lambda}}^+)} \|S_{\boldsymbol{\lambda}} e_v\|^{2t} \left|\frac{\lambda_v}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t} f(\mathtt{par}(v))\right|^2 =\\[1ex] &= \sum_{u\in V_{\boldsymbol{\lambda}}^+} |f(u)|^2 \sum_{v\in\mathtt{Chi}(u)} \left| \frac{\|S_{\boldsymbol{\lambda}} e_v\|^t}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t} \lambda_v\right|^2 =\\[1ex] &= \sum_{u\in V} |f(u)|^2 \sum_{v\in\mathtt{Chi}(u)} |\mu_v|^2, \end{align*} which is equivalent to $f\in\mathcal{D}(S_{\boldsymbol{\mu}})$. This, due to (\ref{eq:DiffDD}), proves (i). Let now $f\in\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))$. Then, using (\ref{eq:mod}) and (\ref{eq:SpSl}), we obtain \begin{align*} (\Delta_t(S_{\boldsymbol{\lambda}})f)(v) &= (|S_{\boldsymbol{\lambda}}|^t S_{\boldsymbol{\pi}} |S_{\boldsymbol{\lambda}}|^{1-t} f)(v) =\\[1ex] &= \|S_{\boldsymbol{\lambda}} e_v\|^t (S_{\boldsymbol{\pi}} |S_{\boldsymbol{\lambda}}|^{1-t}f)(v) =\\[1ex] &= \begin{cases}\displaystyle \frac{\|S_{\boldsymbol{\lambda}} e_v\|^t}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t} \lambda_v f(\mathtt{par}(v)), & \text{if }\mathtt{par}(v)\in V_{\boldsymbol{\lambda}}^+\\[1ex] 0, & \text{otherwise} \end{cases} =\\[1ex] &= \mu_v f(\mathtt{par}(v)) = (S_{\boldsymbol{\mu}} f)(v), \end{align*} which proves that $\Delta_t(S_{\boldsymbol{\lambda}})\subseteq S_{\boldsymbol{\mu}}$. Hence $\Delta_t(S_{\boldsymbol{\lambda}})$ is closable and $\overline{\Delta_t(S_{\boldsymbol{\lambda}})}\subseteq S_{\boldsymbol{\mu}}$. But from Proposition \ref{prop:mod} we know that $\mathcal{E}_V\subseteq\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$. Part (ii) follows now from (i) and Lemma \ref{lem:core}.
\end{proof} \begin{cor} Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on a directed tree $\mathfrak{T}=(V,E)$ and let $t\in(0,1]$. Suppose there exists a constant $\alpha>0$ such that $\|S_{\boldsymbol{\lambda}} e_u\|\geq\alpha$ for every $u\in V$. Then $\Delta_t(S_{\boldsymbol{\lambda}})=S_{\boldsymbol{\mu}}$, where $\boldsymbol{\mu}$ is given by \eqref{eq:mu}. \end{cor} \begin{proof} According to Theorem \ref{thm:DtSl}, it suffices to show that $\mathcal{D}(S_{\boldsymbol{\mu}})\subseteq\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$. Let $f\in\mathcal{D}(S_{\boldsymbol{\mu}})$. Then \begin{align*} \infty &> \sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)} |\mu_v|^2\right) |f(u)|^2 =\\ &= \sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)} \frac{\|S_{\boldsymbol{\lambda}} e_v\|^{2t}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{2t}}|\lambda_v|^2\right) |f(u)|^2 \geq\\ &\geq \sum_{u\in V} \left(\sum_{v\in\mathtt{Chi}(u)} \frac{\alpha^{2t}}{\|S_{\boldsymbol{\lambda}} e_u\|^{2t}}|\lambda_v|^2\right) |f(u)|^2 =\\[1ex] &= \alpha^{2t} \sum_{u\in V} \frac{\|S_{\boldsymbol{\lambda}} e_u\|^2}{\|S_{\boldsymbol{\lambda}} e_u\|^{2t}} |f(u)|^2 =\alpha^{2t} \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{2-2t} |f(u)|^2, \end{align*} thus $f\in\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$. \end{proof} \begin{rem} If $t=1$, then $\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})=\mathcal{D}(I) = \ell^2(V)$. Hence $\Delta_1(S_{\boldsymbol{\lambda}}) = S_{\boldsymbol{\mu}}$, where $\boldsymbol{\mu}$ is given by (\ref{eq:mu}). This is in general not true for $t\in(0,1)$, as the following example shows. \end{rem} \begin{ex} Let $t\in(0,1)$ and $\mathfrak{T}=(V,E)$, where $V=\mathbb{N}=\{0,1,\ldots\}$ and $E=\{(n,n+1)\,:\,n\in\mathbb{N}\}$. Fix $f\in\ell^2(\mathbb{N})$ such that $f(2k)\neq 0$ for each $k\in\mathbb{N}$, and let \begin{equation*} \begin{array}{rcl} \lambda_{2k} &=& 0,\\ \lambda_{2k+1} &=& |f(2k)|^{\frac1{t-1}} \end{array} \end{equation*} for all $k\in\mathbb{N}$.
Then $\mu_n=0$ for every $n\in V^\circ$ and therefore $\mathcal{D}(S_{\boldsymbol{\mu}})=\ell^2(V)$. But $f\notin\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$, because \begin{multline*} \sum_{u\in V} \|S_{\boldsymbol{\lambda}} e_u\|^{2-2t}|f(u)|^2 = \sum_{n=0}^\infty |\lambda_{n+1}|^{2-2t}|f(n)|^2 =\\ =\sum_{k=0}^\infty |\lambda_{2k+1}|^{2-2t}|f(2k)|^2 = \sum_{k=0}^\infty |f(2k)|^{\frac{2-2t}{t-1}}|f(2k)|^2 = \sum_{k=0}^\infty 1 = \infty. \end{multline*} Hence $\Delta_t(S_{\boldsymbol{\lambda}}) \subsetneq S_{\boldsymbol{\mu}}$. This example shows in particular that $\Delta_t(S_{\boldsymbol{\lambda}})$ may not be closed. \end{ex} \section{Aluthge transform of $S_{\boldsymbol{\lambda}}^*$} The following theorem provides a formula for the $t$-Aluthge transform of the adjoint of a weighted shift. \begin{thm} Let $S_{\boldsymbol{\lambda}}$ be a densely defined weighted shift on a directed tree $\mathfrak{T}=(V,E)$ and let $t\in(0,1]$. Then $\mathcal{E}_V\subseteq \mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*))$ and \begin{equation*} \Delta_t(S_{\boldsymbol{\lambda}}^*)e_v = \begin{cases} \displaystyle\overline{\lambda_v}\frac {|\pi_{\mathtt{par}(v)}|^2}{\mu_{\mathtt{par}(v)}} S_{\boldsymbol{\lambda}} e_{\mathtt{par}^2(v)},& \text{if }v\in\mathtt{Chi}^2(V_{\boldsymbol{\lambda}}^+)\\[1ex] 0,&\text{if }v\in V\setminus\mathtt{Chi}^2(V_{\boldsymbol{\lambda}}^+) \end{cases}, \end{equation*} where $\boldsymbol{\mu}=\{\mu_w\}_{w\in V^\circ}$ and $\boldsymbol{\pi}=\{\pi_w\}_{w\in V^\circ}$ are given by \eqref{eq:mu} and \eqref{eq:pi} respectively. \label{thm:DSl*} \end{thm} \begin{proof} Let $u,v\in V$ be any vertices and let $P_u$ be as in Theorem \ref{thm:S*a}. Then, by \eqref{eq:Puf}, \begin{equation*} P_u e_v = \begin{cases}\displaystyle \frac{\overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_u\|^2} \displaystyle\sum_{w\in\mathtt{Chi}(u)}\lambda_we_w,&\text{if }v\in\mathtt{Chi}(u)\\ 0,&\text{if }v\in V\setminus\mathtt{Chi}(u). \end{cases}
\end{equation*} Hence, from Theorem \ref{thm:S*a} (iii) it follows for every $\alpha>0$ that $e_v\in\mathcal{D}(|S_{\boldsymbol{\lambda}}^*|^\alpha)$ and the following equality holds: \begin{equation} |S_{\boldsymbol{\lambda}}^*|^\alpha e_v = \begin{cases}\displaystyle \frac{\overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{2-\alpha}} \sum_{w\in\mathtt{Chi}(\mathtt{par}(v))}\lambda_we_w, &\text{if }v\in V^\circ,\\ 0,&\text{if }v=\mathtt{root}. \end{cases} \label{eq:Sl*aev} \end{equation} Let now $S_{\boldsymbol{\lambda}}^* = U|S_{\boldsymbol{\lambda}}^*|$ be the polar decomposition of $S_{\boldsymbol{\lambda}}^*$. Then, by Proposition \ref{prop:Spi}, $U=S_{\boldsymbol{\pi}}^*$, where $\boldsymbol{\pi}=\{\pi_u\}_{u\in V^\circ}$ is given by \eqref{eq:pi}. From Proposition \ref{prop:Sl*} it follows that for every $w\in V^\circ$ \begin{equation} S_{\boldsymbol{\pi}}^* e_w = \overline{\pi_w} e_{\mathtt{par}(w)} = \frac{\overline{\lambda_w}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|}e_{\mathtt{par}(w)}. \label{eq:Spew} \end{equation} Take $u\in V_{\boldsymbol{\lambda}}^+$.
From \eqref{eq:Spew} we obtain \begin{align*} \sum_{w\in\mathtt{Chi}(u)} \|\lambda_w S_{\boldsymbol{\pi}}^*e_w\|^2 &= \sum_{w\in\mathtt{Chi}(u)} \frac{|\lambda_w|^4}{\|S_{\boldsymbol{\lambda}} e_u\|^2} \leq \left(\sum_{w\in\mathtt{Chi}(u)} \frac{|\lambda_w|^2}{\|S_{\boldsymbol{\lambda}} e_u\|} \right)^2 =\nonumber\\ &= \left(\frac {\|S_{\boldsymbol{\lambda}} e_u\|^2}{\|S_{\boldsymbol{\lambda}} e_u\|}\right)^2 = \|S_{\boldsymbol{\lambda}} e_u\|^2. \end{align*} Hence the series $\sum_{w\in\mathtt{Chi}(u)} \lambda_w S_{\boldsymbol{\pi}}^*e_w$ is convergent in $\ell^2(V)$ and by (\ref{eq:Spew}) \begin{eqnarray} \sum_{w\in\mathtt{Chi}(u)} \lambda_w S_{\boldsymbol{\pi}}^*e_w &=& \sum_{w\in\mathtt{Chi}(u)} \frac {|\lambda_w|^2}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(w)}\|} e_{\mathtt{par}(w)}=\notag\\ &=&\sum_{w\in\mathtt{Chi}(u)} \frac {|\lambda_w|^2}{\|S_{\boldsymbol{\lambda}} e_u\|} e_u = \|S_{\boldsymbol{\lambda}} e_u\| e_u. \label{eq:szereg} \end{eqnarray} Since the series $\sum_{w\in\mathtt{Chi}(u)} \lambda_w e_w = S_{\boldsymbol{\lambda}} e_u$ is also convergent and $S_{\boldsymbol{\pi}}^*$ is a closed operator, it follows from (\ref{eq:Sl*aev}) and (\ref{eq:szereg}) that $|S_{\boldsymbol{\lambda}}^*|^{1-t}e_v\in\mathcal{D}(S_{\boldsymbol{\pi}}^*)$ for any $v\in V^\circ$ and \begin{align*} S_{\boldsymbol{\pi}}^*|S_{\boldsymbol{\lambda}}^*|^{1-t} e_v &= \frac{\overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{2-(1-t)}} S_{\boldsymbol{\pi}}^*\left(\sum_{w\in\mathtt{Chi}(\mathtt{par}(v))}\lambda_we_w \right) =\\ &=\frac{\overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{1+t}} \sum_{w\in\mathtt{Chi}(\mathtt{par}(v))}\lambda_w S_{\boldsymbol{\pi}}^* e_w =\\ &=\frac{\overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{t}} e_{\mathtt{par}(v)}. \end{align*} Using (\ref{eq:Sl*aev}) again, we get for any $v\in\mathtt{Chi}^2(V_{\boldsymbol{\lambda}}^+)$ \begin{multline*}
|S_{\boldsymbol{\lambda}}^*|^tS_{\boldsymbol{\pi}}^*|S_{\boldsymbol{\lambda}}^*|^{1-t} e_v =\\ = \frac{ \overline{\lambda_v}}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^{t}}\cdot \frac{\overline{\lambda_{\mathtt{par}(v)}}}{ \|S_{\boldsymbol{\lambda}} e_{\mathtt{par}^2(v)}\|^{2-t}} \sum_{w\in\mathtt{Chi}(\mathtt{par}^2(v))} \lambda_w e_w =\\ = \overline{\lambda_v}\frac {|\pi_{\mathtt{par}(v)}|^2}{\mu_{\mathtt{par}(v)}} S_{\boldsymbol{\lambda}} e_{\mathtt{par}^2(v)} \end{multline*} and clearly $|S_{\boldsymbol{\lambda}}^*|^{t}S_{\boldsymbol{\pi}}^*|S_{\boldsymbol{\lambda}}^*|^{1-t} e_v=0$ for every $v\in \mathtt{Chi}(\mathtt{root})\cup\{\mathtt{root}\}$. \end{proof} \section{An example of an operator with trivial Aluthge transform} In this section we construct a weighted shift $S_{\boldsymbol{\lambda}}$ with the following properties: $S_{\boldsymbol{\lambda}}$ is densely defined, injective and hyponormal, while $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))=\{0\}$ for every $t\in(0,1]$ and $\Delta_t(S_{\boldsymbol{\lambda}}^*)$ is not closable for any $t\in(0,1)$. We also show that such an example can be constructed in the class of composition operators. \bigskip For any sequence $v: \mathbb{Z}\to\mathbb{Z}_+\cup\{-1\}$ we define \begin{align*} m(v)&:= \inf\{n\in\mathbb{Z}\,:\,v_n\neq0\}-2, \\ M(v)&:= \sup\{n\in\mathbb{Z}\,:\,v_n>-1\}. \end{align*} Let \begin{multline} V=\{ v:\mathbb{Z}\to\mathbb{Z}_+\cup\{-1\}\,:\,m(v)>-\infty, M(v)<\infty\\ \text{ and }v_n>-1\text{ for }n\leq M(v)\}, \label{eq:V} \end{multline} \begin{equation} E=\{ (u,v)\in V\times V\,:\,M(v)=M(u)+1\text{ and }u_n=v_n\text{ for }n\leq M(u)\}. \label{eq:E} \end{equation} Then $\mathfrak{T}=(V,E)$ is a rootless directed tree such that for every $u\in V$ the set $\mathtt{Chi}(u)$ is countable.
The elements of $V$ are sequences of the form $$u=(\ldots,0,0,u_{m(u)},\ldots,u_{M(u)-1},u_{M(u)},-1,-1,\ldots)$$ and for a vertex $u$ given by the above formula we have \begin{align*} \mathtt{par}(u) &= (\ldots,0,u_{m(u)},\ldots,u_{M(u)-1},-1,-1,-1,\ldots),\\ \mathtt{Chi}(u) &= \{ (\ldots,0,u_{m(u)},\ldots,u_{M(u)-1},u_{M(u)},n,-1,\ldots)\,:\,n\in\mathbb{Z}_+ \}. \end{align*} For any $v=(\ldots,0,v_{m(v)},\ldots,v_{M(v)},-1,\ldots)\in V$ let \begin{equation} \lambda_v= \frac{ 2^{v_{m(v)}+\ldots+v_{M(v)-1}} }{v_{M(v)}+1}. \label{eq:lambda} \end{equation} In this section $S_{\boldsymbol{\lambda}}$ will always stand for the weighted shift on $\mathfrak{T}$ with weights given by (\ref{eq:lambda}). \bigskip We start by proving that $S_{\boldsymbol{\lambda}}$ is densely defined. This follows from Proposition~\ref{prop:Sl} and the following: \begin{prop} For every $u\in V$, $e_u\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ and \begin{equation*} \|S_{\boldsymbol{\lambda}} e_u\| = 2^{u_{m(u)}+\ldots+u_{M(u)}}\gamma, \end{equation*} where $\gamma=\left(\sum_{n=1}^\infty n^{-2}\right)^{\frac12}$. \label{prop:Sleu} \end{prop} \begin{proof} From \eqref{eq:lambda} we get \begin{align*} \sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2 &= \sum_{v\in\mathtt{Chi}(u)} \frac {2^{2(v_{m(v)}+\ldots+v_{M(v)-1})}}{(v_{M(v)}+1)^2}=\\ &= \sum_{v\in\mathtt{Chi}(u)} \frac {2^{2(u_{m(u)}+\ldots+u_{M(u)})}}{(v_{M(v)}+1)^2} = 2^{2(u_{m(u)}+\ldots+u_{M(u)})}\gamma^2. \end{align*} The claim follows now from Proposition \ref{prop:Sl} (ii). \end{proof} To show the hyponormality of $S_{\boldsymbol{\lambda}}$, we use Theorem \ref{thm:hyp}. \begin{prop} The operator $S_{\boldsymbol{\lambda}}$ is hyponormal. \label{prop:Slhyp} \end{prop} \begin{proof} From Proposition \ref{prop:Sleu} it follows that $\|S_{\boldsymbol{\lambda}} e_v\|>0$ for every $v\in V$, so \eqref{eq:hyp1} is satisfied trivially.
As for \eqref{eq:hyp}, for any $u\in V$ we have $\mathtt{Chi}_{\boldsymbol{\lambda}}^+(u) = \mathtt{Chi}(u)$ and \begin{align*} \sum_{v\in\mathtt{Chi}(u)} \frac{|\lambda_v|^2}{\|S_{\boldsymbol{\lambda}} e_v\|^2} &= \sum_{v\in\mathtt{Chi}(u)} \frac{2^{2(v_{m(v)}+\ldots+v_{M(v)-1})}}{(v_{M(v)}+1)^2\cdot 2^{2(v_{m(v)}+\ldots+v_{M(v)})} \gamma^2} =\\ &= \frac1{\gamma^2} \sum_{v\in\mathtt{Chi}(u)} \frac1{(v_{M(v)}+1)^2 2^{2v_{M(v)}}} < 1, \end{align*} because \begin{equation*} \sum_{v\in\mathtt{Chi}(u)} \frac1{(v_{M(v)}+1)^2 2^{2v_{M(v)}}} < \sum_{v\in\mathtt{Chi}(u)} \frac1{(v_{M(v)}+1)^2} = \gamma^2. \end{equation*} This completes the proof. \end{proof} Next we show that the Aluthge transform of $S_{\boldsymbol{\lambda}}$ has trivial domain; in fact, the $t$-Aluthge transform of $S_{\boldsymbol{\lambda}}$ has trivial domain even for arbitrarily small $t$. \begin{prop} For any $t\in(0,1]$ the domain of $\Delta_t(S_{\boldsymbol{\lambda}})$ is $\{0\}$. \label{prop:trivial} \end{prop} \begin{proof} Let $t\in(0,1]$. From Theorem \ref{thm:DtSl} and Proposition \ref{prop:Sleu} we get \mbox{$\Delta_t(S_{\boldsymbol{\lambda}})\subseteq S_{\boldsymbol{\mu}}$}, where \begin{align} \mu_v &= \frac{\|S_{\boldsymbol{\lambda}} e_v\|^t}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}(v)}\|^t}\lambda_v =\notag\\ &= \frac{2^{t(v_{m(v)}+\ldots+v_{M(v)})}\gamma^t}{2^{t(v_{m(v)}+\ldots+v_{M(v)-1})}\gamma^t}\cdot \frac{2^{v_{m(v)}+\ldots+v_{M(v)-1}}}{v_{M(v)}+1}=\notag\\ &= \frac {2^{v_{m(v)}+\ldots+v_{M(v)-1}+tv_{M(v)}}}{v_{M(v)}+1}. \label{eq:muv} \end{align} Hence for any $u\in V$ we have \begin{align*} \sum_{v\in\mathtt{Chi}(u)} |\mu_v|^2 &= \sum_{v\in\mathtt{Chi}(u)} \frac{ 2^{2(v_{m(v)}+\ldots+v_{M(v)-1}+tv_{M(v)})}}{(v_{M(v)}+1)^2} =\\ &= 2^{2(u_{m(u)}+\ldots+u_{M(u)})} \sum_{v\in\mathtt{Chi}(u)} \frac{2^{2tv_{M(v)}}}{(v_{M(v)}+1)^2} = \infty, \end{align*} and therefore, by Proposition \ref{prop:Sl}, $e_u\notin\mathcal{D}(S_{\boldsymbol{\mu}})$. The claim follows now from Lemma \ref{lem:core}.
\end{proof} The fact that $\Delta_t(S_{\boldsymbol{\lambda}}^*)$ is not closable will follow from the lemma below: \begin{lem} For any $t\in(0,1)$ the operator $\Delta_t(S_{\boldsymbol{\lambda}}^*)$ is densely defined and \begin{equation*} \mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*) = \mathcal{N}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*) = \mathcal{N}(S_{\boldsymbol{\lambda}}^*). \end{equation*} \end{lem} \begin{proof} By Theorem \ref{thm:DSl*}, $\mathcal{E}_V\subseteq \mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*))$ and obviously $\mathcal{E}_V$ is dense in $\ell^2(V)$. Moreover, since $\mathtt{Chi}^2(V_{\boldsymbol{\lambda}}^+)=V$, we have for every $v\in V$ \begin{equation} \Delta_t(S_{\boldsymbol{\lambda}}^*) e_v = \overline{\lambda_v} \frac{|\pi_{\mathtt{par}(v)}|^2}{\mu_{\mathtt{par}(v)}}S_{\boldsymbol{\lambda}} e_{\mathtt{par}^2(v)}. \label{eq:DtSl*} \end{equation} Let $v=(\ldots,0,v_{m(v)},\ldots,v_{M(v)},-1,\ldots)$. From \eqref{eq:lambda} and \eqref{eq:muv} we obtain \begin{align} \frac{\overline{\lambda_v}}{\mu_{\mathtt{par}(v)}} &= \frac{2^{v_{m(v)}+\ldots+v_{M(v)-1}}}{v_{M(v)}+1} \frac{v_{M(v)-1}+1}{2^{v_{m(v)}+\ldots+v_{M(v)-2}+tv_{M(v)-1}}} =\notag\\ &= \frac{v_{M(v)-1}+1}{v_{M(v)}+1} 2^{(1-t)v_{M(v)-1}}. \label{eq:frac} \end{align} In turn, by \eqref{eq:pi} and \eqref{eq:Sleu} we have \begin{align} |\pi_{\mathtt{par}(v)}|^2 &= \frac{|\lambda_{\mathtt{par}(v)}|^2}{\|S_{\boldsymbol{\lambda}} e_{\mathtt{par}^2(v)}\|^2} =\notag\\ &= \frac {2^{2(v_{m(v)}+\ldots+v_{M(v)-2})}}{(v_{M(v)-1}+1)^2 2^{2(v_{m(v)}+\ldots+v_{M(v)-2})}\gamma^2}=\notag\\ &= \frac1 {(v_{M(v)-1}+1)^2\gamma^2}. \label{eq:piparv} \end{align} Combining \eqref{eq:DtSl*}, \eqref{eq:frac} and \eqref{eq:piparv} leads to the equality \begin{equation*} \Delta_t(S_{\boldsymbol{\lambda}}^*)e_v = \frac{2^{(1-t)v_{M(v)-1}}}{(v_{M(v)-1}+1)(v_{M(v)}+1)\gamma^2} \sum_{w\in\mathtt{Chi}(\mathtt{par}^2(v))} \lambda_w e_w. 
\end{equation*} Let $f\in\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*)$. Then for any $v\in V$ \begin{align} (\Delta_t(S_{\boldsymbol{\lambda}}^*)^*f)(v) &= \left< \Delta_t(S_{\boldsymbol{\lambda}}^*)^*f,e_v\right> = \left< f, \Delta_t(S_{\boldsymbol{\lambda}}^*)e_v\right> =\notag\\ &= \frac{2^{(1-t)v_{M(v)-1}}}{(v_{M(v)-1}+1)(v_{M(v)}+1)\gamma^2} \sum_{w\in\mathtt{Chi}(\mathtt{par}^2(v))} \overline{\lambda_w} f(w)=\notag\\ &= \frac{2^{(1-t)v_{M(v)-1}}}{(v_{M(v)-1}+1)(v_{M(v)}+1)\gamma^2} (S_{\boldsymbol{\lambda}}^* f)(\mathtt{par}^2(v)). \label{eq:DtSl**} \end{align} This gives the inclusion $\mathcal{N}(S_{\boldsymbol{\lambda}}^*) \subseteq \mathcal{N}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*)$. It suffices to show that $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*)\subseteq \mathcal{N}(S_{\boldsymbol{\lambda}}^*)$. Suppose there exists $f\in\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*)$ such that $S_{\boldsymbol{\lambda}}^*f\neq0$. Let $$u=(\ldots,0,u_{m(u)},\ldots,u_{M(u)},-1,\ldots)\in V$$ be such that $(S_{\boldsymbol{\lambda}}^*f)(u) \neq0$. Let $v^{(k)} = (\ldots,0,u_{m(u)},\ldots,u_{M(u)},k,0,-1,\ldots)$ for every $k\in\mathbb{Z}_+$. Then $\mathtt{par}^2(v^{(k)})=u$ and \begin{align*} \|\Delta_t(S_{\boldsymbol{\lambda}}^*)^*f\|^2 &\geq \sum_{k=0}^\infty |(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*f)(v^{(k)})|^2 =\\ &\stackrel{(\ref{eq:DtSl**})}{=} \sum_{k=0}^\infty \frac{2^{2(1-t)k}}{(k+1)^2\gamma^4} \left|(S_{\boldsymbol{\lambda}}^*f)(u)\right|^2 = \infty, \end{align*} because $t<1$. This is a contradiction. Thus $S_{\boldsymbol{\lambda}}^*f=0$, which completes the proof. \end{proof} \begin{cor} The operator $\Delta_t(S_{\boldsymbol{\lambda}}^*)$ is not closable for any $t\in(0,1)$. \label{cor:closable} \end{cor} \begin{proof} Since $S_{\boldsymbol{\lambda}}^*$ is a non-zero closed operator, $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}^*)^*)=\mathcal{N}(S_{\boldsymbol{\lambda}}^*)$ is not dense in $\ell^2(V)$, which completes the proof.
\end{proof} By \cite[Lemma 4.3.1]{jjs3}, every weighted shift on a rootless directed tree with nonzero weights is unitarily equivalent to a composition operator in an $L^2$-space over a $\sigma$-finite measure. From this, together with Propositions \ref{prop:Sleu}, \ref{prop:Slhyp}, \ref{prop:trivial} and Corollary \ref{cor:closable}, we obtain the following theorem. \begin{thm} There exists a hyponormal composition operator $C$ in an \mbox{$L^2$-space} over a $\sigma$-finite measure such that $\mathcal{D}(\Delta_t(C))=\{0\}$ for $t\in(0,1]$ and $\Delta_t(C^*)$ is not closable for $t\in(0,1)$. \end{thm} \begin{rem} For any $u\in V$ let $W:=\mathtt{Des}(u) = \bigcup_{n=0}^\infty \mathtt{Chi}^n (u)$ and let $\boldsymbol{\lambda}'=\{\lambda_v\}_{v\in W\setminus\{u\}}$. Then $S_{\boldsymbol{\lambda}'}$ is a weighted shift on a directed tree with root $u$. Moreover, $S_{\boldsymbol{\lambda}'}$ has all the properties claimed for $S_{\boldsymbol{\lambda}}$, i.e. $S_{\boldsymbol{\lambda}'}$ is densely defined, injective and hyponormal, its $t$-Aluthge transform has trivial domain for $t\in(0,1]$, and the $t$-Aluthge transform of $S_{\boldsymbol{\lambda}'}^*$ is not closable for $t\in (0,1)$. These assertions can be shown by repeating the proofs of all results from this section with appropriate changes. \label{rem:root} \end{rem} It turns out that the tree given by \eqref{eq:V} and \eqref{eq:E} and the one described by Remark \ref{rem:root} are the only directed trees on which such an example can be constructed. This fact is stated in the following proposition. \begin{prop} Let $\mathfrak{T}=(V,E)$ be a directed tree and let $\boldsymbol{\lambda}=\{\lambda_u\}_{u\in V^\circ}\subseteq\mathbb{C}\setminus\{0\}$. Suppose the weighted shift $S_{\boldsymbol{\lambda}}$ is densely defined and $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))=\{0\}$ for some $t\in(0,1]$. Then $\#\mathtt{Chi}(u) =\aleph_0$ for every $u\in V$. \end{prop} \begin{proof} Let $u\in V$.
By Proposition \ref{prop:Sl}, the equality $\overline{\mathcal{D}(S_{\boldsymbol{\lambda}})}=\ell^2(V)$ implies that $e_u\in\mathcal{D}(S_{\boldsymbol{\lambda}})$ and $$\|S_{\boldsymbol{\lambda}} e_u\|^2 = \sum_{v\in\mathtt{Chi}(u)} |\lambda_v|^2.$$ Since $|\lambda_v|^2>0$ for every $v\in\mathtt{Chi}(u)$ and the above series is convergent, it follows that $\#\mathtt{Chi}(u)\leq\aleph_0$. Let $t\in(0,1]$ be such that $e_u\notin \mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))$. By Theorem \ref{thm:DtSl}, $\mathcal{D}(\Delta_t(S_{\boldsymbol{\lambda}}))=\mathcal{D}(S_{\boldsymbol{\mu}})\cap\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$, where $\mu$ is given by \eqref{eq:mu}. Since $e_u\in \mathcal{D}(S_{\boldsymbol{\lambda}})\subseteq\mathcal{D}(|S_{\boldsymbol{\lambda}}|^{1-t})$, it follows that $e_u\notin \mathcal{D}(S_{\boldsymbol{\mu}})$. Hence \begin{equation*} \infty = \sum_{w\in V}\left(\sum_{v\in\mathtt{Chi}(w)} |\mu_v|^2\right)|e_u(w)|^2 = \sum_{v\in\mathtt{Chi}(u)} |\mu_v|^2, \end{equation*} which is possible only if $\#\mathtt{Chi}(u)\geq\aleph_0$. This completes the proof. \end{proof} A similar result with $S_{\boldsymbol{\lambda}}^2$ instead of $\Delta_t(S_{\boldsymbol{\lambda}})$ was obtained in \cite{jjs2}. \subsection*{Acknowledgements} I would like to thank my supervisor, prof. Jan Stochel, for his encouragement and motivation, as well as for the substantial help he provided while I was working on this paper.
\section{Introduction} In \cite{ArnADE} Arnol'd found a deep connection between simple function singularities and the Dynkin diagrams $A_k$, $D_k$ and $E_k$ and their corresponding Weyl groups. Later in \cite{ArnBCF} he expanded this classification to the diagrams of types $B_k$, $C_k$ and $F_4$, which turned out to be closely connected to simple singularities of functions on manifolds with boundary. The present work expands upon the work of V. A. Vassiliev (see \cite{VVA}) by considering simple boundary singularities of types $B_k$, $C_k$ and $F_4$. For a function singularity $f:({\mathbb R}^n, 0) \to ({\mathbb R}, 0)$, $df|_0 = 0$, and an arbitrary smooth deformation $F:({\mathbb R}^n \times {\mathbb R}^k, 0) \to ({\mathbb R},0)$ of it, which can be viewed as a family of functions $f_\lambda = F(-,\lambda):{\mathbb R}^n \to {\mathbb R}$, s.t. $f_0 = f$, we can define the set $\Sigma = \Sigma(F) \subset {\mathbb R}^k$, called the \textit{(real) discriminant} of $F$, to be the set of all parameters $\lambda$, s.t. $f_\lambda$ has a zero critical value. In the cases we will consider, this will be an algebraic subset of codimension one in ${\mathbb R}^k$, which divides a small neighborhood of the origin in ${\mathbb R}^k$ into several parts. The following theorem holds for standard versal deformations of real simple function singularities. \begin{theorem}[E. Looijenga, \cite{Lo}] All connected components of the complements of the real discriminant varieties of standard versal deformations of simple real function singularities are contractible. \end{theorem} This theorem implies that the topology of the set ${\mathbb R}^k \setminus \Sigma$ is completely defined by the number of connected components of this set. We will prove a similar result for the singularities $B_k$ and $C_k$. The author believes that this should also be true in the case $F_4$, but this is yet to be proven. The topology and combinatorics of such complements were described by V. D.
Sedykh in his works \cite{Sed,SedBook} for simple singularities with Milnor number $\leq 6$ (see \cite{Sed}, Theorems 2.8 and 2.9 for the numbers of local components for the singularities $D_4$, $D_5$, $D_6$ and $E_6$). Recently (see \cite{VVA}) Vassiliev fully described the topology of the complement ${\mathbb R}^k \setminus \Sigma$ for simple real function singularities and their versal deformations by listing the number of local components in each case and assigning a certain topological characteristic to each of them. We will enumerate the local components of the complements for the $B_k$, $C_k$ and $F_4$ boundary singularities and assign to them similar topological invariants. Any simple function singularity, up to stable equivalence, can be realized as a function $f:{\mathbb R}^2 \to {\mathbb R}$ in two variables and has a versal deformation of dimension $\mu$, where $\mu$ is the \textit{Milnor number} of $f$. For any parameter $\lambda \in {\mathbb R}^\mu$ of a versal deformation of a simple singularity, consider the set of lower values $W(\lambda) = \{x \in {\mathbb R}^2 | f_\lambda(x) \leq 0\}$. These sets can go to infinity along several \textit{asymptotic sectors}, the number of which stays the same for any $\lambda$ for a given versal deformation. We say that two sets of lower values $W(\lambda_1)$ and $W(\lambda_2)$ are \textit{topologically equivalent} if there exists an orientation-preserving homeomorphism of ${\mathbb R}^2$ which sends $W(\lambda_1)$ to $W(\lambda_2)$ but does not permute the asymptotic sectors. Naturally, if two parameters $\lambda_1$ and $\lambda_2$ lie in the same connected component of ${\mathbb R}^\mu \setminus \Sigma$, then the corresponding sets of lower values are topologically equivalent.
The main theorem of \cite{VVA} states that the converse is also true: \begin{theorem}[Vassiliev, \cite{VVA}] If $\lambda_1$ and $\lambda_2$ are non-discriminant points of the parameter space ${\mathbb R}^\mu$ of a versal deformation of a simple function singularity, and the corresponding sets $W(\lambda_1), W(\lambda_2)$ are topologically equivalent, then $\lambda_1$ and $\lambda_2$ belong to the same component of ${\mathbb R}^\mu \setminus \Sigma$. \end{theorem} Our main goal will be to prove a similar statement for simple boundary singularities, and to describe the connected components of the complement of the discriminant and their corresponding sets of lower values. \section{Notions and Definitions} Assuming the reader is familiar with the basic notions of singularity theory (for a classic reference see \cite{AVG1,AVG2}), we will only describe analogues of the usual constructions for the case of boundary function singularities. Another good reference is \cite{Lyash}. To avoid ambiguity, further in the text we will use the term \textit{ordinary singularity} for singularities of functions on manifolds without boundary. Consider the space ${\mathbb R}^n$ with a fixed hyperplane $\{x = (x_1, \ldots, x_n) \in {\mathbb R}^n|x_1 = 0\}$, which will act as a germ of an $n$-dimensional manifold with boundary. A \textit{(real) boundary function singularity} is a germ of a function $f: ({\mathbb R}^n,0) \to ({\mathbb R},0)$ on a manifold with boundary such that $0$ is a critical point of the restriction of $f$ onto the boundary, i.e. $$ \frac{\partial f}{\partial x_i} \bigg|_{x=0} = 0, \; i =2,\ldots,n. $$ We call two boundary singularities $f_i$, $i = 1,2$, \textit{equivalent} if there exists a local diffeomorphism $\varphi: {\mathbb R}^n \to {\mathbb R}^n$ preserving the boundary s.t. $\varphi^* f_2 = f_1$.
Hence the classification problem can be formulated in terms of describing the orbits of the action of the group $Loc_{B}({\mathbb R}^n)$ of local diffeomorphisms preserving the boundary on the space of function germs. The \textit{modality} of a singularity $f$ is the minimal number $m$ such that a sufficiently small neighborhood of $f$ in the space of germs can be covered by a finite number of no more than $m$-parametric orbits of the action of $Loc_B({\mathbb R}^n)$. A singularity of modality $0$ will be called \textit{simple}. As was mentioned earlier, simple boundary singularities are classified by the diagrams $B_k$, $C_k$ and $F_4$. The \textit{gradient ideal} of an ordinary singularity is the ideal generated by its partial derivatives: $$ I_f = (\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} ) $$ which in the case of boundary singularities is defined as $$ I_{f|x_1} = (x_1 \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}). $$ The \textit{local algebra} of a germ $f$ is defined as the factor algebra $$ Q_{f} = {\mathbb R}[[x_1,\ldots,x_n]]/I_f $$ and its boundary analog as $$ Q_{f|x_1} = {\mathbb R}[[x_1,\ldots,x_n]]/I_{f|x_1}. $$ It's easy to see that for equivalent singularities the corresponding local algebras are isomorphic, so this construction gives a powerful invariant of singularities. The \textit{Milnor number} of an ordinary singularity is defined as $\mu = \dim Q_f$, and in the case of a boundary singularity as $\mu = \dim Q_{f|x_1}$. A classic result is that a germ $f$ has a finite Milnor number iff $f$ is an isolated singularity. For boundary singularities we also define two additional numbers $$ \mu_0 = \dim {\mathbb R}[[x_1,\ldots,x_n]]/(\partial f/\partial x_2 |_{x_1 = 0}, \ldots, \partial f/\partial x_n |_{x_1 = 0}) $$ $$ \mu_1 = \dim {\mathbb R}[[x_1,\ldots,x_n]]/(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}).
$$ The number $\mu_0$ is the Milnor number of $f$ viewed as a function on the boundary $\{x_1 = 0\}$ and $\mu_1$ is the Milnor number of $f$ viewed as an ordinary germ in ${\mathbb R}^n$. It's easy to see that $\mu = \mu_1 + \mu_0$. Singularities for which $\mu_1 = 0$ will be called \textit{purely boundary}. All the above definitions also work if we take ${\mathbb C}$ as the base field and consider holomorphic functions instead of smooth ones. The classification of boundary singularities also includes ordinary ones: from a function $f:{\mathbb R}^n \to {\mathbb R}$ we can obtain a purely boundary singularity $\tilde{f}(x_0,x) = x_0 + f(x)$, for a manifold ${\mathbb R}^{n+1}$ with boundary $\{(x_0,x) | x_0 = 0\}$. Thus the simple boundary function singularities also include ones of types $A_k$, $D_k$ and $E_k$. The remaining types, specific to the boundary case, are listed in Table~\ref{t1}. \begin{table} \begin{center} \begin{tabular}{c c c} Notation & Normal form & Number of components of the complement\\ $B^+_{2k}, k \geq 1$ & $x^{2k} + y^2$ & $k^2$\\ $B^-_{2k}, k \geq 1$ & $x^{2k} - y^2$ & $k^2$\\ $\pm B_{2k + 1}, k \geq 1$ & $\pm(x^{2k +1} + y^2)$ & $(k+1)(k+2)$\\ $C^+_{2k}, k \geq 1$ & $xy + y^{2k}$ & $k^2$\\ $C^-_{2k}, k \geq 1$ & $xy - y^{2k}$ & $k^2$\\ $\pm C_{2k + 1}, k \geq 1$ & $\pm (xy + y^{2k + 1})$ & $(k+1)(k+2)$\\ $F_4$ & $\pm x^2 + y^3$ & 8 \end{tabular} \end{center} \caption{Normal forms of real simple boundary singularities in two variables $(x,y)$, with boundary given by $x = 0$.} \label{t1} \end{table} A \textit{deformation} of a germ $f(x)$ is a function $$ F(x,\lambda):({\mathbb R}^n \times {\mathbb R}^l, 0) \to ({\mathbb R},0) $$ s.t. $F(x,0) = f(x)$. The space ${\mathbb R}^l$ is called the base of the deformation $F$.
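The dimension count defining the Milnor number can be checked by computer algebra: $\mu = \dim Q_{f|x_1}$ equals the number of standard monomials of a Gr\"obner basis of the gradient ideal. A minimal sketch in Python (the use of sympy and the finite search box are illustration choices, not part of the text):

```python
from sympy import symbols, groebner

# A sketch (illustration only): mu for the B_3 normal form f = x^3 + y^2
# with boundary {x = 0}, computed from the ideal I_{f|x} = (x f_x, f_y).
x, y = symbols('x y')
f = x**3 + y**2
G = groebner([x * f.diff(x), f.diff(y)], x, y, order='grevlex')

# exponent vectors of the leading monomials of the Groebner basis
lead = [p.monoms(order='grevlex')[0] for p in G.polys]

def standard(m):
    """True if the monomial with exponents m is divisible by no leading term."""
    return not any(all(mi >= li for mi, li in zip(m, l)) for l in lead)

# the standard monomials form a basis of the local algebra Q_{f|x}; a 10x10
# search box suffices here because the ideal is zero-dimensional
mu = sum(1 for i in range(10) for j in range(10) if standard((i, j)))
print(mu)  # 3, matching mu(B_3) = 3
```

Here the three standard monomials $1, x, x^2$ represent a basis of $Q_{f|x_1}$, in agreement with $\mu(B_3) = 3$.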
We call a deformation $F$ \textit{versal} if any other deformation can be induced from it, meaning that for any $G:{\mathbb R}^n\times {\mathbb R}^k \to {\mathbb R}$, $G(x,0) = f(x)$, we can find \begin{enumerate} \item a germ of a smooth map $\psi:{\mathbb R}^k \to {\mathbb R}^l$ \item and a diffeomorphism $\eta(x,\xi)$ of ${\mathbb R}^n$, $\xi\in {\mathbb R}^k$, which depends smoothly on $\xi$ and s.t. $\eta(x,0) = x$, so that \end{enumerate} $$ G(x, \xi) = F(\eta(x,\xi),\psi(\xi)). $$ Geometrically a deformation is just a germ of a surface at the point $f$ in the space of germs, and it is versal iff this germ is transversal to the orbit of $f$ under the action of $Loc({\mathbb R}^n)$. The deformation is called \textit{miniversal} if the dimension of the parameter space is minimal. For a boundary function singularity $f$ with local algebra $Q_{f|x_1}$ and (finite) Milnor number $\mu$, a (mini)versal deformation can be obtained by setting $$ F(x,\lambda) = f(x) + \sum_{i = 1}^\mu \lambda_i f_i(x) $$ where the $f_i$ form a basis of the local algebra $Q_{f|x_1}$. Of course, the same construction holds in the ordinary case. Hence, the miniversal deformations for simple boundary singularities can be given as follows: \begin{eqnarray} B_\mu & \qquad & f_\lambda(x,y) = x^\mu \pm y^2 + \lambda_1 x^{\mu-1} + \ldots + \lambda_\mu \label{bvd} \\ C_\mu & \qquad & f_\lambda(x,y) = xy \pm y^\mu + \lambda_1 y^{\mu-1} + \ldots + \lambda_\mu \label{cvd} \\ F_4 & \qquad & f_\lambda(x,y) = \pm x^2 + y^3 + \lambda_1 x + \lambda_2 y + \lambda_3 xy + \lambda_4 \label{f4vd} \end{eqnarray} Given a (mini)versal deformation $F(x,\lambda) = f_\lambda(x)$ of a boundary singularity, we define the \textit{discriminant variety} as the subset $\Sigma \subset {\mathbb R}^\mu$ of the parameter space consisting of all singular parameter values.
A parameter $\lambda \in {\mathbb R}^\mu$ is called \textit{non-singular} if \begin{enumerate} \item $0$ is a regular value of the function $f_\lambda$ \item the zero level set $V(\lambda) = f^{-1}_\lambda(0)$ is transversal to the boundary. \end{enumerate} In the case of ordinary singularities only the first condition is required. For a boundary singularity the discriminant consists of two parts corresponding to these conditions. We will denote by $\Sigma_0$ the component corresponding to the first condition, and by $\Sigma_1$ the one corresponding to the second. The discriminant of an ordinary singularity is an irreducible affine variety, and it follows that in the boundary case the components $\Sigma_i$ are also irreducible. In her note \cite{Shcher} I. G. Shcherbak introduced the notion of \textit{decomposition} of a boundary singularity $f$. The decomposition is defined as a pair (type of $f$ as an ordinary singularity, type of the restriction $f|_{\{x_1=0\}})$. Naturally, any boundary singularity possesses such a decomposition; moreover, there exists an involution on the set of boundary singularities that swaps these types. The singularities that go into each other under this involution are called \textit{dual}. In particular, the singularity $B_\mu$ has the decomposition $(A_{\mu-1}, A_1)$, $C_\mu$ the decomposition $(A_1, A_{\mu-1})$ and $F_4$ the decomposition $(A_2, A_2)$. This means that the singularities $B_\mu$ and $C_\mu$ are dual (and $B_2 = C_2$ is self-dual), and $F_4$ is self-dual. For a boundary singularity $f$ that decomposes into $(f_0, f_1)$ the sets $\Sigma_0$ and $\Sigma_1$ are diffeomorphic to the discriminants $\Sigma_{f_0}$ and $\Sigma_{f_1}$ multiplied by Euclidean spaces of suitable dimensions. This will be useful later in the study of the discriminant set of $F_4$.
As it was mentioned earlier, the following statement holds: \begin{proposition} All connected components of the complements of the real discriminant varieties of the versal deformations \eqref{bvd} and \eqref{cvd} of the singularities $B_\mu$, $C_\mu$ are contractible. \end{proposition} Recall that by $W(\lambda) = f_\lambda^{-1}((-\infty, 0])$ we denote the set of lower values for $\lambda$. Given a deformation of a boundary singularity, we say that two sets of lower values $W(\lambda_1)$ and $W(\lambda_2)$ are topologically equivalent if there exists an orientation-preserving homeomorphism of ${\mathbb R}^n$ sending one set to the other, which preserves the boundary and does not permute the asymptotic sectors of $f$ and of its restriction to the boundary $f|_{x_1 = 0}$. The number of asymptotic sectors is equal to $0$ in the case $B^+_{2k}$, $1$ in the cases $B_{2k+1}$ and $F_4$, and $2$ in the cases $C_\mu$ and $B^-_{2k}$. Now we are ready to formulate the main theorem: \begin{theorem} The numbers of components of the complements of the real discriminant varieties of the deformations \eqref{bvd}--\eqref{f4vd} are listed in Table~\ref{t1}, and each of these components is uniquely defined by the topological type of the corresponding set of lower values. \end{theorem} The proofs of Theorems 3 and 4 in the cases $B_\mu$ and $C_\mu$ will be presented in the next section, and the proof in the case $F_4$ in Sections 4 and 5. \section{Cases $B_\mu$ and $C_\mu$} As it was mentioned earlier, the discriminant $\Sigma$ consists of two components $\Sigma_0$ and $\Sigma_1$, each defined by an appropriate condition. For $B_\mu$ singularities the first condition means that the polynomial $x^\mu + \lambda_1 x^{\mu-1} + \ldots + \lambda_\mu $ can only have simple roots, and the second condition means that $0$ is not a root of this polynomial. For $C_\mu$ these conditions interchange.
Set $$ h_\lambda(x) = \pm x^\mu + \lambda_1 x^{\mu-1} + \ldots + \lambda_\mu, $$ so that the deformation can be expressed as $f_\lambda(x,y) = h_\lambda(x) \pm y^2$, and the equation for the zero set is given by $y^2 = -h_\lambda(x)$. Hence the zero set is symmetric with respect to the line $\{y = 0\}$ and consists of some number of ovals, each intersecting the line $\{y=0\}$ at a neighboring pair of roots of $h_\lambda$, and no more than two non-compact components. Any connected component of the complement ${\mathbb R}^\mu \setminus \Sigma$ is completely defined by the configuration of roots of the polynomial $h_\lambda(x)$. If $\lambda$ is a non-discriminant parameter, then $h_\lambda$ has only simple non-zero roots; if it has $p$ negative and $q$ positive roots, then the parity of $p+q$ coincides with that of $\mu$, since the non-real roots come in conjugate pairs. For any $\lambda$ such that $h_\lambda$ has $p$ negative and $q$ positive roots there is an obvious path in the complement of the discriminant to any other $\lambda'$ with the same numbers of negative and positive roots. Moreover, we can construct a homotopy contracting the whole connected component of $\lambda$ to any of its points. It's also easy to see that the numbers $p, q$ uniquely define the topological type of $W(\lambda)$. In the case $C_\mu$ the zero set is given by the equation $$ xy = -h_\lambda(y), $$ hence, if $\lambda$ is non-discriminant, it consists of the graph of the function $x = -h_\lambda(y)/y$ (notice that by transforming the equation this way we do not lose any solutions with $y=0$: such a solution would force $h_\lambda(0) = 0$, making $\lambda$ a discriminant parameter). This function has a vertical asymptote $y=0$ and, as before, the topological type of $W(\lambda)$ and the connected component of $\lambda$ are completely defined by the numbers $p$ and $q$ of negative and positive roots of $h_\lambda$. The above description proves Proposition 1 and Theorem 3 in the cases $B_\mu$ and $C_\mu$.
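The invariant $(p,q)$ used in the argument above is easy to extract numerically. A small sketch (the sample polynomial is a hypothetical non-discriminant parameter of the $B_3$ deformation, chosen for illustration):

```python
import numpy as np

# h_lambda(x) = (x - 1)(x - 2)(x + 1) = x^3 - 2x^2 - x + 2: all roots are
# simple and non-zero, so this is a non-discriminant parameter for B_3
coeffs = [1, -2, -1, 2]
roots = np.roots(coeffs)
real = roots[np.abs(np.imag(roots)) < 1e-9].real

p = int(np.sum(real < 0))  # number of negative roots
q = int(np.sum(real > 0))  # number of positive roots
print(p, q)  # 1 2: the pair (p, q) labels the component containing lambda
```

For $\mu = 3$ the admissible pairs with $p+q$ odd and $p+q \leq 3$ are exactly the $(k+1)(k+2) = 6$ components listed in Table~\ref{t1}.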
\section{The case $F_4$} In this section we will obtain $6$ out of possible $8$ types of sets of lower values for $F_4$ and study a certain hyperplane section of the discriminant $\Sigma$. For brevity we will denote the versal deformation of $F_4$ by $$ f_\lambda(x,y) = x^2 + y^3 + a x + b y + c xy + d. $$ The non-transversality condition for points of $\Sigma_1$ means the polynomial $y^3 + b y + d$ has a non-simple root. A direct calculation gives the following equation: $$ 27d^2 + 4b^3 = 0, $$ meaning $\Sigma_1$ is a direct product of a cusp in the plane $(b, d)$ and a plane ${\mathbb R}^2$ spanned by coordinates $a, c$. \begin{figure} \begin{tikzpicture}[scale=0.8] \draw (0,0) -- (7,4); \draw (0,0) to[out=-11.3,in=190] (5,0); \draw (0,0) to[out=-11.3, in=155] (5,-2); \draw (5,0) -- (12,4); \draw (7,4) to[out=-11.3,in=190] (12,4); \draw (7,4) to[out=-11.3, in=155] (12,2); \draw (5,-2) -- (12,2); \draw (0,-4) to[out=-11.3,in=190] (5,-4); \draw (0,-4) to[out=-11.3, in=155] (5,-6); \draw (7,0) to[out=-11.3,in=190] (12,0); \draw (7,0) to[out=-11.3, in=155] (12,-2); \draw (0,-4) to[out=72, in=210] (3.5,2); \draw (3.5,2) to[out=30, in=120] (7,0); \draw (5,-4) to[out=72, in=210] (8.5,2); \draw (8.5,2) to[out=30, in=120] (12,0); \draw (5,-6) to[out=72, in=210] (8.5,0); \draw (8.5,0) to[out=30, in=120] (12,-2); \draw [dashed](3.5,2) to[out=-11.3,in=190] (8.5,2); \draw [dashed](3.5,2) to[out=-11.3, in=155] (8.5,0); \draw [dashed](5.77,-1.56) to[out=150,in=210] (3.5,2); \draw [dashed](3.5,2) to[out=20, in=160] (10.93,1.39); \draw [line width = 1.5pt] (0,5) circle (1.5); \draw (-1.5,6.5) node {1}; \draw [->] (0,5) -- (2.5,2.5); \fill [white] (0,5) circle (1.5); \draw [dashed] (0,3.5) -- (0,6.5); \draw [line width = 1pt] (-1.3,4.25) to[out=30,in=150] (1.3,4.25); \draw [line width = 1.5pt] (9,-4) circle (1.5); \draw (9 - 1.5, -4 + 1.5) node {6}; \draw [->] (9,-4) to[out = 90, in = -20] (8.5,0.5); \fill [white] (9,-4) circle (1.5); \draw [dashed] (9,-5.5) -- (9,-2.5); \draw 
[line width = 1pt] (9,-3.5) circle (0.5); \draw [line width = 1pt] (7.7,-4.75) to[out=30,in=150] (10.3,-4.75); \draw [line width = 1.5pt] (-2,-2) circle (1.5); \draw (-2-1.5, -2 + 1.5) node {2}; \draw [->] (-2,-2) to[out = 90, in = 210] (4,-1); \fill [white] (-2,-2) circle (1.5); \draw [dashed] ({-2 + 1.5*cos(110)},{-2 + 1.5*sin(110)}) -- ({-2 + 1.5*cos(250)},{-2 + 1.5*sin(250)}); \draw [line width = 1pt] (-0.7,-2.75) to[out=170,in=270] (-1.7,-2.25); \draw [line width = 1pt] (-3.3,-2.75) to[out=10,in=270] (-2.3,-2.25); \draw [line width = 1pt] (-1.7,-2.25) to[out=90,in=270] (-1.25,-1.5); \draw [line width = 1pt] (-2.3,-2.25) to[out=90,in=270] (-2.75,-1.5); \draw [line width = 1pt] (-1.25,-1.5) to[out=90, in=90] (-2.75,-1.5); \draw [line width = 1.5pt] (15,2) circle (1.5); \draw (15 - 1.5, 2 + 1.5) node {3}; \draw [->] (15,2) to[out = 170, in = 20] (10.5,2.5); \fill [white] (15,2) circle (1.5); \draw [dashed] ({15 + 1.5*cos(70)},{2 + 1.5*sin(70)}) -- ({15 + 1.5*cos(290)},{2 + 1.5*sin(290)}); \draw [line width = 1pt] (-0.7 + 17,-2.75 + 4) to[out=170,in=270] (-1.7 + 17,-2.25 + 4); \draw [line width = 1pt] (-3.3 + 17,-2.75 + 4) to[out=10,in=270] (-2.3 + 17,-2.25 + 4); \draw [line width = 1pt] (-1.7 + 17,-2.25 + 4) to[out=90,in=270] (-1.25 + 17,-1.5 + 4); \draw [line width = 1pt] (-2.3 + 17,-2.25 + 4) to[out=90,in=270] (-2.75 + 17,-1.5 + 4); \draw [line width = 1pt] (-1.25 + 17,-1.5 + 4) to[out=90, in=90] (-2.75 + 17,-1.5 + 4); \draw [line width = 1.5pt] (2,-7) circle (1.5); \draw (2 - 1.5, -7 + 1.5) node {4}; \draw [->] (2,-7) to[out = 90, in = 210] (4,-4.5); \fill [white] (2,-7) circle (1.5); \draw [dashed] (2,-5.5) -- (2,-8.5); \draw [line width = 1pt] (2.75,-6.5) circle (0.4); \draw [line width = 1pt] (0.7,-7.75) to[out=30,in=150] (3.3,-7.75); \draw [line width = 1.5pt] (14,-4) circle (1.5); \draw (14 - 1.5, -4 + 1.5) node {5}; \draw [->] (14,-4) to[out = 90, in = 300] (11,0.2); \fill [white] (14,-4) circle (1.5); \draw [dashed] (14,-5.5 + 3) -- (14,-8.5 + 3); \draw 
[line width = 1pt] (13.25,-3.5) circle (0.4); \draw [line width = 1pt] (0.7 + 12,-7.75 + 3) to[out=30,in=150] (3.3 + 12,-7.75 + 3); \end{tikzpicture} \caption{The $3$-dimensional section of the discriminant variety of $F_4$ and the corresponding zero sets} \label{F4ds} \end{figure} As we will see later, the equation for $\Sigma_0$ is also computable, but it turns out to be quite complex. However, if we restrict our deformation to dimension $3$ by setting $c = 0$, we will get a nice section of $\Sigma_0$ by the plane $\{c = 0\}$, in which the equation will have the following form: $$ 27 \left(d + \frac{a^2}{4}\right)^2 + 4b^3 = 0 $$ meaning $\Sigma'_0 = \Sigma_0 \cap \{c = 0\}$ is a cuspidal edge bent along the parabola given by the equations $d = -a^2/4, b = 0$. As depicted in Fig.~\ref{F4ds}, the edges of $\Sigma'_0$ and $\Sigma'_1$ are tangent at the origin, and these sets are tangent along the cusp lying in the plane $\{a = 0\}$. It's easy to see that the local components of the set ${\mathbb R}^3 \setminus \Sigma' = {\mathbb R}^3 \setminus (\Sigma'_0 \cup \Sigma'_1)$ (here ${\mathbb R}^3$ denotes the hyperplane $\{c=0\}$ of the parameter space) are all contractible, and the number of these components is equal to $6$. As also depicted in Fig.~\ref{F4ds}, the corresponding sets of lower values are all distinct and correspond to different topological types of the real elliptic curve with respect to the boundary $$ x^2 + y^3 + a x + b y + d = 0. $$ Concrete realizations of these sets are easy to obtain by taking values of the parameter $\lambda = (a,b,0,d)$ to lie in the corresponding component.
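The equation of $\Sigma_1$ given earlier is simply the vanishing of the discriminant of the depressed cubic $y^3 + by + d$, which can be double-checked with computer algebra (a sketch; the use of sympy is our own choice):

```python
from sympy import symbols, discriminant, expand

b, d, y = symbols('b d y')

# discriminant of the depressed cubic y^3 + b*y + d
disc = discriminant(y**3 + b*y + d, y)

# its vanishing is exactly the equation 27 d^2 + 4 b^3 = 0 defining Sigma_1
assert expand(disc + 4*b**3 + 27*d**2) == 0
print(disc)  # -4*b**3 - 27*d**2
```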
\begin{figure} \begin{tikzpicture}[scale=1] \draw (5,3) -- (7,3) -- (7,1) -- (5,1) -- (5,3); \draw[dashed] (5.75,3) -- (5.75,1); \draw (5,2) to[out=30,in=90] (6.25,2) to[out=270,in=90] (5.25,1.5) to[out=270,in=180] (7,1.5); \draw (6.65, 2.3) circle (0.25); \draw (6,0.5) node {7}; \draw (7.5,3) -- (9.5,3) -- (9.5,1) -- (7.5,1) -- (7.5,3); \draw[dashed] (8.75,3) -- (8.75,1); \draw (9.5,2) to[out=150,in=90] (8.25,2) to[out=270,in=90] (9.25,1.5) to[out=270,in=0] (7.5,1.5); \draw (7.85, 2.3) circle (0.25); \draw (8.5,0.5) node {8}; \draw (0,0) -- (2,0) -- (2,-2) -- (0,-2) -- (0,0); \draw[dashed] (1,0) -- (1,-2); \draw (0,-1) to[out=30,in=150] (2,-1); \draw (0.5, -1.5) circle (0.3); \draw (1,-2.5) node {9}; \draw (0 + 2.5,0) -- (2 + 2.5,0) -- (2 + 2.5,-2) -- (0 + 2.5,-2) -- (0 + 2.5,0); \draw[dashed] (1 + 2.5,0) -- (1 + 2.5,-2); \draw (0 + 2.5,-1) to[out=30,in=150] (2+ 2.5,-1); \draw (3.5, -1.5) circle (0.3); \draw (1 + 2.5,-2.5) node {10}; \draw (0 + 5,0) -- (2 + 5,0) -- (2 + 5,-2) -- (0 + 5,-2) -- (0 + 5,0); \draw[dashed] (1 + 5.25,0) -- (1 + 5.25,-2); \draw (5,-0.5) to[out=0,in=90] (6.5,-0.5) to[out=270,in=90] (5.25,-1.25) to[out=270,in=180] (7,-1.75); \draw (5.75, -1.3) circle (0.25); \draw (1 + 5,-2.5) node {11}; \draw (0 + 7.5,0) -- (2 + 7.5,0) -- (2 + 7.5,-2) -- (0 + 7.5,-2) -- (0 + 7.5,0); \draw[dashed] (8.25,0) -- (8.25,-2); \draw (7.5,-0.25) to[out=0,in=90] (9.25,-0.5) to[out=270,in=90] (7.75,-1.5) to[out=270,in=180] (9.5,-1.75); \draw (8.75, -0.5) circle (0.25); \draw (8.5,-2.5) node {12}; \draw (0 + 10,0) -- (2 + 10,0) -- (2 + 10,-2) -- (0 + 10,-2) -- (0 + 10,0); \draw[dashed] (11.25,0) -- (11.25,-2); \draw (10,-0.25) to[out=0,in=90] (11.75,-0.5) to[out=270,in=90] (10.25,-1.5) to[out=270,in=180] (12,-1.75); \draw (10.5, -0.615) circle (0.25); \draw (11,-2.5) node {13}; \draw (0 + 12.5,0) -- (2 + 12.5,0) -- (2 + 12.5,-2) -- (0 + 12.5,-2) -- (0 + 12.5,0); \draw[dashed] (13.25,0) -- (13.25,-2); \draw (12.5,-0.25) to[out=0,in=90] (14.25,-0.5) to[out=270,in=90] 
(12.75,-1) to[out=270,in=180] (14.5,-1); \draw (14, -1.5) circle (0.25); \draw (13.5,-2.5) node {14}; \end{tikzpicture} \caption{Possible remaining topological types of sets of lower values for $F_4$. Notice that for each set in the bottom row, except for no.~10, the one reflected through the boundary is also a possible type.} \label{F4rem} \end{figure} \section{Further calculations for $F_4$} In order to complete the proof of the main theorem, we need to describe the topology of the set ${\mathbb R}^4 \setminus \Sigma$ for $F_4$. In this section we will find out which remaining topological types of sets of lower values can be realized through the versal deformation and calculate the homology of the complement. \subsection{Remaining topological types of $W(\lambda)$ for $F_4$} For the versal deformation $f_\lambda(x,y)$ the corresponding zero set $f_\lambda^{-1}(0)$ is either a non-compact line going to infinity along the $x$-axis, or a union of such a line with a compact oval. Notice also that, since the polynomial $f_\lambda(0,y)$ has degree $3$ in $y$, the number of points of intersection of the zero set with the boundary $\{x=0\}$ is equal to $1$ or $3$ if $\lambda$ is a non-discriminant parameter. Thus the possible remaining topological types of sets of lower values look like the ones listed in Fig.~\ref{F4rem}. However, only the sets $7$ and $8$ can be realized as ones coming from the versal deformation of $F_4$, and, as we will see later, these are the only remaining possibilities. Concrete realizations of the types $7$ and $8$ can be obtained as follows. First, a direct computation shows that the component of the set $\Sigma_0 \cap \Sigma_1$ (which will be further denoted by $\Xi_0$) corresponding to functions $f_\lambda$ that have a Morse critical point with zero critical value lying on the boundary can be parametrized by $c$ and $d$ in the following way: $$ f_\lambda(x,y) = x^2 + y^3 - c \sqrt[3]{\frac{d}{2}} x - 3\sqrt[3]{\frac{d}{4}} y + cxy + d.
$$ We first take a sufficiently small $d > 0$ and $c \neq 0$ (the sign of $c$ dictates which type of set of lower values, $7$ or $8$, will be obtained), so that the root of multiplicity $2$ of the polynomial $f_\lambda(0,y)$ in $y$ is greater than the remaining root. This produces the zero set shown in the leftmost picture of Fig.~\ref{78}. Notice that a substitution of the form $x \mapsto x - \varepsilon$ can be realized through our deformation by an appropriate change of the parameters $a$, $b$ and $d$; hence we can move the boundary away from the crossing to obtain the middle picture of Fig.~\ref{78}. Finally, by subtracting a small constant $\delta > 0$ from our function we can remove the crossing so that the curve splits into two separate components; hence we obtain the sets $7$ and $8$. \begin{figure} \begin{tikzpicture}[scale=2] \draw[line width=1pt](0,0) -- (2,0) -- (2,2) -- (0,2) -- (0,0); \draw[dashed,line width=1pt] (1,0) -- (1,2); \draw[line width=0.9pt](0,0.5) to[out=20,in=220] (1,1) to[out=40,in=300] (1.75, 1.5) to[out=120,in=60] (1,1) to[out=240,in=90] (0.75,0.5) to[out=270,in=220] (2,0.5); \draw[->] (2.2,1) -- (2.8,1); \draw[line width=1pt] (3,0) -- (5,0) -- (5,2) -- (3,2) -- (3,0); \draw[dashed, line width=1pt] (3.87,0) -- (3.87,2); \draw[line width=0.9pt] (3,0.5) to[out=20,in=220] (4,1); \draw[line width=0.9pt] (4,1) to[out=40,in=300] (4.75, 1.5); \draw[line width=0.9pt] (4.75, 1.5) to[out=120,in=60] (4,1); \draw[line width=0.9pt] (4,1) to[out=240,in=90] (3.75,0.5); \draw[line width=0.9pt] (3.75,0.5) to[out=270,in=220] (5,0.5); \draw[->] (5.2,1) -- (5.8,1); \draw[line width=1pt] (6,0) -- (8,0) -- (8,2) -- (6,2) -- (6,0); \draw[dashed, line width=1pt] (6.87,0) -- (6.87,2); \draw[line width=0.9pt] (6,0.5) to[out=10,in=90] (7,1); \draw[line width=0.9pt] (7,1) to[out=270,in=90] (6.5, 0.5); \draw[line width=0.9pt] (6.5, 0.5) to[out=270,in=210] (8,0.5); \draw[line width=0.9pt] (7.5,1.4) circle (0.25); \end{tikzpicture} \caption{Construction of the remaining sets of lower
values} \label{78} \end{figure} \subsection{Homological calculations for the complement} One way to calculate the number of connected components of the set ${\mathbb R}^4 \setminus \Sigma$ is to study its reduced cohomology groups $\tilde{H}^i({\mathbb R}^4 \setminus \Sigma)$. By Alexander duality ($\tilde{H}^*$ denotes reduced cohomology and $\bar{H}_*$ denotes Borel--Moore homology) we have $$ \tilde{H}^i({\mathbb R}^4 \setminus \Sigma) \simeq \bar{H}_{3-i}(\Sigma). $$ We will prove the following theorem, which will imply the completeness of the lists of topological types given in this and the previous section. \begin{theorem} The (reduced) homology group $\bar{H}_i(\Sigma;{\mathbb Z}_2)$ is isomorphic to $({\mathbb Z}_2)^{7}$ if $i=3$ and is trivial otherwise. \end{theorem} \begin{remark} Such homology groups can be studied by the standard methods developed by Vassiliev (see \cite{V88, Vbook}), which were used in \cite{VVA} in order to compute the cohomology of the complements of the discriminant varieties of $D_\mu$ singularities. We will use a different approach. \end{remark} \begin{proof} The conditions on the parameters of $\Sigma_0$ yield the following system of polynomial equations: $$ \begin{cases} f_\lambda(x,y) = x^2 + y^3 + a x + b y + c xy + d = 0\\ \frac{\partial f_\lambda}{\partial x} = 2x + a + cy = 0 \\ \frac{\partial f_\lambda}{\partial y} = 3y^2 + cx + b = 0 \end{cases} $$ To get an equation on $\lambda$, we can use the second equation to express $x$ as a linear function of $y$ and substitute it into the other two. We obtain a system of two polynomial equations, for which we can then write down the resultant in $y$ to eliminate it. Notice that taking the resultant does not add any imaginary solutions for this system, as this would imply that the deformation $f_\lambda$, as a function of a complex variable, has two distinct complex conjugate critical points with critical value $0$, which is impossible for the singularity $A_2$, which is the type of $F_4$ as an ordinary singularity.
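The elimination just described can be reproduced with computer algebra. A minimal sketch (the test point $(a,b,c,d) = (-2,-3,0,3)$, which places a critical point with zero critical value at $(1,1)$, is our own choice for a sanity check, not taken from the text):

```python
from sympy import symbols, resultant

x, y, a, b, c, d = symbols('x y a b c d')
f = x**2 + y**3 + a*x + b*y + c*x*y + d

# eliminate x using f_x = 2x + a + c*y = 0, i.e. x = -(a + c*y)/2
xs = -(a + c*y) / 2
g1 = f.subs(x, xs)            # f restricted to the critical locus in x
g2 = f.diff(y).subs(x, xs)    # 3y^2 + c*x + b on the same locus

# the resultant in y eliminates y and gives (up to a constant factor)
# a polynomial equation on (a, b, c, d) cutting out Sigma_0
R = resultant(g1, g2, y)

# sanity check: for (a,b,c,d) = (-2,-3,0,3) the function f has the critical
# point (1,1) with critical value 0, so R must vanish at this parameter
print(R.subs({a: -2, b: -3, c: 0, d: 3}))  # 0
```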
As mentioned earlier, the equation for $\Sigma_1$ has the form $$ 27d^2 + 4b^3 = 0, $$ so we obtain a system of two polynomial equations for the intersection $\Sigma_0 \cap \Sigma_1$. The solution of this system consists of two $2$-dimensional irreducible components $\Xi_0$ and $\Xi_1$, each homeomorphic to ${\mathbb R}^2$. The component $\Xi_0$ consists of the parameters $\lambda$ whose deformations $f_\lambda$ have an ordinary Morse critical point with critical value $0$ lying on the boundary (together with the closure of the set of such points), while the component $\Xi_1$ corresponds to deformations for which the zero set is non-transversal to the boundary and which have an ordinary Morse critical point with critical value $0$ outside the boundary (and their closure). Once again, we can calculate the intersection $\Xi_0 \cap \Xi_1$. It turns out that $\Xi_0$ intersects $\Xi_1$ along two curves $\Psi_0$ and $\Psi_1$ which intersect transversely at the origin and are both homeomorphic to ${\mathbb R}^1$. The curve $\Psi_0$ corresponds to deformations which have a Morse critical point at the origin, such that one of the branches of the curve $f^{-1}_\lambda(0)$ is tangent to the boundary. The curve $\Psi_1$ consists of the parameters $\lambda$ for which the deformation has a critical point of type $A_2$ lying on the boundary. Recall that the two components of the discriminant $\Sigma$ of a boundary singularity can be obtained from the discriminants of the ordinary singularities into which it decomposes. As $F_4$ has the decomposition $(A_2,A_2)$, the components $\Sigma_0$ and $\Sigma_1$ are both diffeomorphic to a cusp multiplied by ${\mathbb R}^2$, so both $\Sigma_i$ are homeomorphic to ${\mathbb R}^3$.
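The equation quoted above for $\Sigma_1$ is, up to sign, the discriminant of the depressed cubic $y^3+by+d$, which can be confirmed in one line with sympy (our check):

```python
import sympy as sp

y, b, d = sp.symbols('y b d')

# the cubic y^3 + b*y + d has a repeated root iff its discriminant vanishes,
# i.e. iff 27*d**2 + 4*b**3 = 0
disc = sp.discriminant(y**3 + b*y + d, y)   # equals -(4*b**3 + 27*d**2)
assert sp.simplify(disc + 4*b**3 + 27*d**2) == 0
```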
Now we can calculate the Borel-Moore homology of $\Sigma$ by first applying the Mayer-Vietoris long exact sequence to the decomposition $\Sigma_0 \cap \Sigma_1 = \Xi_0 \cup \Xi_1$ and then to $\Sigma = \Sigma_0 \cup \Sigma_1$ (the ${\mathbb Z}_2$ coefficients are omitted): \begin{tikzcd} \ldots \rar & \bar{H}_j(\Psi_0 \cup \Psi_1) \rar & \bar{H}_j(\Xi_0) \oplus \bar{H}_j(\Xi_1) \rar & \bar{H}_j(\Xi_0 \cup \Xi_1) \rar & \ldots \end{tikzcd} Since the one-point compactification of $\Psi_0 \cup \Psi_1$ is homotopy equivalent to a bouquet of $3$ circles, we get that $\bar{H}_1(\Psi_0 \cup \Psi_1) \simeq ({\mathbb Z}_2)^3$ and $0$ in other dimensions. Hence, from the long exact sequence we get $$ \bar{H}_j(\Xi_0 \cup \Xi_1) = \begin{cases} ({\mathbb Z}_2)^5 & \text{for} \; j = 2, \\ 0 & \text{otherwise.} \end{cases} $$ Now applying this calculation to the exact sequence \begin{tikzcd} \ldots \rar & \bar{H}_j(\Sigma_0 \cap \Sigma_1) \rar & \bar{H}_j(\Sigma_0) \oplus \bar{H}_j(\Sigma_1) \rar & \bar{H}_j(\Sigma) \rar & \ldots \end{tikzcd} we get that $$ \bar{H}_j(\Sigma) = \begin{cases} ({\mathbb Z}_2)^7 & \text{for} \; j = 3, \\ 0 & \text{otherwise,} \end{cases} $$ which completes the proof. \end{proof}
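Since all the groups involved are ${\mathbb Z}_2$-vector spaces and the flanking terms in both sequences vanish, each long exact sequence collapses to a short exact sequence, so the ranks simply add. The bookkeeping can be recorded in a few lines (our sketch, not part of the proof):

```python
# Z2-ranks of Borel-Moore homology, read off from the homeomorphism types
rk_Psi_union_1 = 3   # compactification of Psi_0 u Psi_1 ~ bouquet of 3 circles
rk_Xi_2 = 1          # each Xi_i ~ R^2 contributes rank 1 in degree 2
rk_Sigma_i_3 = 1     # each Sigma_i ~ R^3 contributes rank 1 in degree 3

# 0 -> H_2(Xi_0) + H_2(Xi_1) -> H_2(Xi_0 u Xi_1) -> H_1(Psi_0 u Psi_1) -> 0
rk_Xi_union_2 = 2*rk_Xi_2 + rk_Psi_union_1       # = 5

# 0 -> H_3(Sigma_0) + H_3(Sigma_1) -> H_3(Sigma) -> H_2(Xi_0 u Xi_1) -> 0
rk_Sigma_3 = 2*rk_Sigma_i_3 + rk_Xi_union_2      # = 7

print(rk_Xi_union_2, rk_Sigma_3)
```

By Alexander duality this rank $7$ in degree $3$ gives $\tilde{H}^0({\mathbb R}^4\setminus\Sigma)\simeq({\mathbb Z}_2)^7$, i.e. eight connected components of the complement.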
\section{Introduction and preliminaries} Let $\mathbb{R}_{0,m}$ be the $2^m$-dimensional real Clifford algebra constructed over the orthonormal basis $(e_1,\ldots,e_m)$ of the Euclidean space $\mathbb R^m$ (see \cite{Cl}). The multiplication in $\mathbb{R}_{0,m}$ is determined by the relations \[e_je_k+e_ke_j=-2\delta_{jk},\quad j,k=1,\dots,m,\] where $\delta_{jk}$ is the Kronecker delta. A basis for the algebra $\mathbb{R}_{0,m}$ is then given by the elements \[e_A=e_{j_1}\cdots e_{j_k},\] where $A=\{j_1,\dots,j_k\}\subset\{1,\dots,m\}$ is such that $j_1<\dots<j_k$. For the empty set $\emptyset$, we put $e_{\emptyset}=1$, the latter being the identity element. Any Clifford number $a\in\mathbb{R}_{0,m}$ may thus be written as \[a=\sum_Aa_Ae_A,\quad a_A\in\mathbb R,\] and its conjugate $\overline a$ is given by \[\overline a=\sum_Aa_A\overline e_A,\quad\overline e_A=\overline e_{j_k}\dots\overline e_{j_1},\;\overline e_j=-e_j,\;j=1,\dots,m.\] Observe that $\mathbb R^{m+1}$ may be naturally embedded in $\mathbb R_{0,m}$ by associating to any element $(x_0,x_1,\ldots,x_m)\in\mathbb R^{m+1}$ the paravector $x=x_0+\underline x=x_0+\sum_{j=1}^mx_je_j$. Let us recall that an $\mathbb R_{0,m}$-valued function $f$ defined and continuously differentiable in an open set $\Omega$ of $\mathbb R^{m+1}$, is said to be (left) monogenic in $\Omega$ if $\partial_xf=0$ in $\Omega$, where \[\partial_x=\partial_{x_0}+\partial_{\underline x}\] is the generalized Cauchy-Riemann operator in $\mathbb R^{m+1}$ and \[\partial_{\underline x}=\sum_{j=1}^me_j\partial_{x_j}\] is the Dirac operator in $\mathbb R^m$. Null-solutions of $\partial_{\underline x}$ are also called monogenic functions. These functions are a fundamental object in Clifford analysis and may be considered as a natural generalization to higher dimension of the holomorphic functions in the complex plane (see e.g. \cite{BDS}). It is worth remarking that $\partial_{\underline x}$ and $\partial_x$ factorize the Laplacian, i.e. 
\begin{align*} \Delta_{\underline x}&=\sum_{j=1}^m\partial_{x_j}^2=-\partial_{\underline x}^2,\\ \Delta_x&=\partial_{x_0}^2+\Delta_{\underline x}=\partial_x\overline\partial_x=\overline\partial_x\partial_x, \end{align*} and therefore every monogenic function is also harmonic. An interesting fact about the monogenic functions was first observed by Fueter (see \cite{F}) in the setting of quaternionic analysis: it is possible to generate monogenic functions using holomorphic functions in the upper half of the complex plane. This fact, which is now known as Fueter's theorem, was later extended to the case of $\mathbb R_{0,m}$-valued functions by Sce \cite{Sce} ($m$ odd) and Qian \cite{Q} ($m$ even). For further works on this topic we refer the reader to \cite{KQS,LaLe,LaRa,D,DS,DS2,DS3,QS,S}. Let $f(z)=u(t_1,t_2)+iv(t_1,t_2)$ ($z=t_1+it_2$) be a holomorphic function in some open subset $\Xi\subset\mathbb C^+=\{z\in\mathbb C:\;t_2>0\}$; and let $P_k(\underline x)$ be a homogeneous monogenic polynomial of degree $k$ in $\mathbb R^m$, i.e. \begin{alignat*}{2} \partial_{\underline x}P_k(\underline x)&=0,&\quad\underline x&\in\mathbb R^m,\\ P_k(t\underline x)&=t^kP_k(\underline x),&\quad t&\in\mathbb R. \end{alignat*} Put $\underline\omega=\underline x/r$, with $r=\vert\underline x\vert=\sqrt{-\underline x^2}$. In this paper we will focus on the following generalization of Fueter's theorem obtained in \cite{S}. \begin{theorem}\label{Som-Fue-thm} If $m$ is odd, then the function \begin{equation*} \Delta_x^{k+\frac{m-1}{2}}\bigl[\bigl(u(x_0,r)+\underline\omega\,v(x_0,r)\bigr)P_k(\underline x)\bigr] \end{equation*} is monogenic in $\widetilde\Omega=\{x\in\mathbb R^{m+1}:\;(x_0,r)\in\Xi\}$. \end{theorem} \noindent \textbf{Remark:} The case $k=0$ corresponds to Sce's result. 
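The defining relations $e_je_k+e_ke_j=-2\delta_{jk}$ make the product of two basis elements $e_Ae_B$ a purely combinatorial matter of reordering indices; the following Python sketch (ours, purely illustrative) implements this blade product for $\mathbb{R}_{0,m}$:

```python
def blade_mul(A, B):
    """Product of basis blades of R_{0,m}.

    A blade is a strictly increasing tuple of indices, e.g. (1, 3) ~ e_1 e_3.
    Returns (sign, blade), using e_j e_k = -e_k e_j for j != k and e_j^2 = -1.
    """
    sign, idx = 1, list(A) + list(B)
    changed = True
    while changed:
        changed = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:        # anticommute adjacent factors
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign, changed = -sign, True
            elif idx[i] == idx[i + 1]:     # e_j^2 = -1
                del idx[i:i + 2]
                sign, changed = -sign, True
                break
    return sign, tuple(idx)

# the defining relations: e_1 e_2 = -e_2 e_1 and e_1^2 = -1
assert blade_mul((1,), (2,)) == (1, (1, 2))
assert blade_mul((2,), (1,)) == (-1, (1, 2))
assert blade_mul((1,), (1,)) == (-1, ())
```

Extending by linearity to sums $a=\sum_A a_Ae_A$ gives the full algebra product; conjugation then just reverses each blade and flips the sign of every generator.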
The purpose of this paper is to prove that Theorem \ref{Som-Fue-thm} is still valid if we replace $P_k(\underline x)$ by a homogeneous monogenic polynomial $P_k(x_0,\underline x)$ of degree $k$ in $\mathbb R^{m+1}$ (this problem arose from discussions between Qian and Sommen). In order to attain this goal, we will first prove an extension of Fueter's theorem that uses $\mathbb C$-valued functions satisfying the equation \[\partial_{\bar z}\Delta_z^pf(z)=0,\quad p\in\mathbb N_0,\] as initial functions instead of the usual holomorphic functions. Here and throughout $\partial_{\bar z}$ and $\Delta_z$ denote, respectively, the well-known Cauchy-Riemann operator \[\partial_{\bar z}=\frac{1}{2}(\partial_{t_1}+i\partial_{t_2})\] and the Laplacian in two dimensions \[\Delta_z=\partial_{t_1}^2+\partial_{t_2}^2.\] \section{A higher order version of the original\\Fueter's theorem} We begin this section with two essential lemmas. The proof of Lemma \ref{lemma1} may be found in \cite{D,DS}. \begin{lemma}\label{lemma1} Suppose that $f(t_1,\dots,t_d)$ is an $\mathbb R$-valued infinitely differentiable function in an open set of $\mathbb R^d$ and that $D_{t_j}(n)$ and $D^{t_j}(n)$, $n\in\mathbb N_0$, are differential operators defined by \begin{align*} D_{t_j}(n)\{f\}&=\left(\frac{1}{{t_j}}\,\partial_{t_j}\right)^n\{f\},\quad\quad\quad\;\;\,D_{t_j}(0)\{f\}=f,\\ D^{t_j}(n)\{f\}&=\partial_{t_j}\left(\frac{D^{t_j}(n-1)\{f\}}{{t_j}}\right),\;\,D^{t_j}(0)\{f\}=f, \end{align*} $j=1,\dots,d$. Then one has \begin{itemize} \item[{\rm(i)}] $\partial_{t_j}^2D_{t_j}(n)\{f\}=D_{t_j}(n)\{\partial_{t_j}^2f\}-2n D_{t_j}(n+1)\{f\}$, \item[{\rm(ii)}] $\partial_{t_j}^2D^{t_j}(n)\{f\}=D^{t_j}(n)\{\partial_{t_j}^2f\}-2nD^{t_j}(n+1)\{f\}$, \item[{\rm(iii)}] $D^{t_j}(n)\{\partial_{t_j}f\}=\partial_{t_j}D_{t_j}(n)\{f\}$, \item[{\rm(iv)}] $D_{t_j}(n)\{\partial_{t_j}f\}-\partial_{t_j}D^{t_j}(n)\{f\}=2n/t_j\,D^{t_j}(n)\{f\}$. 
\end{itemize} \end{lemma} \begin{lemma}\label{mainlemma} Suppose that $g$ is an $\mathbb R$-valued infinitely differentiable function in an open set of $\mathbb R^2_+=\{(t_1,t_2)\in\mathbb R^2:\;t_2>0\}$. Then for $n\in\mathbb N_0$, we have that \begin{itemize} \item[{\rm(i)}] $\displaystyle{\Delta_x^n\big(g(x_0,r)P_k(\underline x)\big)=\left(\sum_{j=0}^nd_{k,m}(j)\binom{n}{j}D_r(j)\Delta_z^{n-j}g(x_0,r)\right)P_k(\underline x)}$, \item[{\rm(ii)}] $\displaystyle{\Delta_x^n\big(g(x_0,r)\,\underline\omega\,P_k(\underline x)\big)=\left(\sum_{j=0}^nd_{k,m}(j)\binom{n}{j}D^r(j)\Delta_z^{n-j}g(x_0,r)\right)\underline\omega\,P_k(\underline x)}$, \end{itemize} where \begin{align*} d_{k,m}(j)&=(2k+m-1)(2k+m-3)\cdots(2k+m-(2j-1)),\\d_{k,m}(0)&=1. \end{align*} \end{lemma} \textit{Proof.} It is easily seen that \begin{align*} \partial_{\underline x}g&=\sum_{j=1}^me_j\partial_{x_j}g=\sum_{j=1}^me_j(\partial_rg)(\partial_{x_j}r)\\ &=\sum_{j=1}^me_j(\partial_rg)\frac{x_j}{r}=\underline\omega\partial_rg, \end{align*} \[\Delta_{\underline x}\,\underline\omega=-\partial_{\underline x}^2\,\underline\omega=(m-1)\partial_{\underline x}\left(\frac{1}{r}\right)=-\frac{(m-1)}{r^2}\,\underline\omega,\] \begin{align*} \Delta_xg&=\partial_{x_0}^2g+\Delta_{\underline x}g=\partial_{x_0}^2g-\partial_{\underline x}(\underline\omega\partial_rg)\\ &=\partial_{x_0}^2g+\partial_r^2g+\frac{m-1}{r}\,\partial_rg. 
\end{align*} Therefore \[\hspace{-1.4cm}\Delta_x(gP_k)=(\Delta_xg)P_k+2\sum_{j=1}^m(\partial_{x_j}g)(\partial_{x_j}P_k)+g(\Delta_{\underline x}P_k)\] \[\hspace{1.8cm}=\left(\partial_{x_0}^2g+\partial_r^2g+\frac{m-1}{r}\,\partial_rg\right)P_k+2\frac{\partial_rg}{r}\sum_{j=1}^mx_j\partial_{x_j}P_k\] \[\hspace{-0.7cm}=\left(\partial_{x_0}^2g+\partial_r^2g+\frac{2k+m-1}{r}\,\partial_rg\right)P_k\] \begin{equation}\label{eq1} \hspace{0.3cm}=\left(\partial_{x_0}^2g+\partial_r^2g+(2k+m-1)D_r(1)\{g\}\right)P_k \end{equation} and \[\hspace{-2.9cm}\Delta_x(g\underline\omega P_k)=(\Delta_{\underline x}\,\underline\omega)gP_k+2\sum_{j=1}^m(\partial_{x_j}\underline\omega)(\partial_{x_j}(gP_k))+\underline\omega\Delta_x(gP_k)\] \[\hspace{1.55cm}=-\frac{(m-1)}{r^2}\,g\underline\omega P_k+2\sum_{j=1}^m\left(\frac{e_j}{r}-\frac{x_j}{r^2}\,\underline\omega\right)\left(\frac{x_j}{r}\,(\partial_rg)P_k+g(\partial_{x_j}P_k)\right)\] \[\hspace{-1.5cm}+\left(\partial_{x_0}^2g+\partial_r^2g+\frac{2k+m-1}{r}\,\partial_rg\right)\underline\omega P_k\] \[\hspace{-0.99cm}=\left(\partial_{x_0}^2g+\partial_r^2g+(2k+m-1)\left(\frac{\partial_rg}{r}-\frac{g}{r^2}\right)\right)\underline\omega P_k\] \begin{equation}\label{eq2} \hspace{-1.75cm}=\left(\partial_{x_0}^2g+\partial_r^2g+(2k+m-1)D^r(1)\{g\}\right)\underline\omega P_k, \end{equation} where we have also used Euler's theorem for homogeneous functions. The proof now follows by induction using equalities (\ref{eq1})-(\ref{eq2}) together with statements (i)-(ii) of Lemma \ref{lemma1}.\hfill$\square$\vspace{0.22cm} The previous lemma allows us to obtain a method to generate functions on $\mathbb R^{m+1}$ that satisfy the equation \begin{equation}\label{pseudopharm2} \Delta_x^pF(x_0,\underline x)=0, \end{equation} using functions fulfilling an equation of the same type in $\mathbb R^2$ \begin{equation}\label{pseudopharm1} \Delta_z^pg(t_1,t_2)=0. \end{equation} \begin{proposition} Let $m$ be odd. 
If $g$ is an $\mathbb R$-valued solution of (\ref{pseudopharm1}) in $\Xi\subset\mathbb R^2_+$, then \[\Delta_x^{k+\frac{m-1}{2}}\big(g(x_0,r)P_k(\underline x)\big)\] and \[\Delta_x^{k+\frac{m-1}{2}}\big(g(x_0,r)\,\underline\omega\,P_k(\underline x)\big)\] satisfy the equation (\ref{pseudopharm2}) in $\widetilde\Omega=\{x\in\mathbb R^{m+1}:\;(x_0,r)\in\Xi\}$. \end{proposition} \textit{Proof.} We first observe that if $m$ is odd, then each factor which appears in the definition of $d_{k,m}(j)$ is even. Now, by Lemma \ref{mainlemma}, we have that \begin{multline*} \Delta_x^{p+k+\frac{m-1}{2}}\big(g(x_0,r)P_k(\underline x)\big)\\=\left(\sum_{j=0}^{p+k+\frac{m-1}{2}}d_{k,m}(j)\binom{p+k+\frac{m-1}{2}}{j}D_r(j)\Delta_z^{p+k+\frac{m-1}{2}-j}g(x_0,r)\right)P_k(\underline x), \end{multline*} \begin{multline*} \Delta_x^{p+k+\frac{m-1}{2}}\big(g(x_0,r)\,\underline\omega\,P_k(\underline x)\big)\\=\left(\sum_{j=0}^{p+k+\frac{m-1}{2}}d_{k,m}(j)\binom{p+k+\frac{m-1}{2}}{j}D^r(j)\Delta_z^{p+k+\frac{m-1}{2}-j}g(x_0,r)\right)\underline\omega\,P_k(\underline x). \end{multline*} Clearly, the first $k+(m+1)/2$ terms in the above equalities vanish since $g$ is by hypothesis a solution of (\ref{pseudopharm1}). Finally, note that $2k+m-(2j-1)\le0$ for $j\ge k+(m+1)/2$ and therefore $d_{k,m}(j)=0$ for $j\ge k+(m+1)/2$.\hfill$\square$ \begin{theorem}\label{msFThm} Let $f(z)=u(t_1,t_2)+iv(t_1,t_2)$ be a $\mathbb C$-valued function satisfying in some open subset $\Xi\subset\mathbb C^+$ the equation \[\partial_{\bar z}\Delta_z^pf(z)=0,\quad p\in\mathbb N_0.\] If $m$ is odd, then the function \begin{equation*} \Delta_x^{p+k+\frac{m-1}{2}}\bigl[\bigl(u(x_0,r)+\underline\omega\,v(x_0,r)\bigr)P_k(\underline x)\bigr] \end{equation*} is monogenic in $\widetilde\Omega=\{x\in\mathbb R^{m+1}:\;(x_0,r)\in\Xi\}$. 
\end{theorem} \textit{Proof.} By Lemma \ref{mainlemma}, we get that \begin{multline*} \Delta_x^{p+k+\frac{m-1}{2}}\bigl[\bigl(u(x_0,r)+\underline\omega\,v(x_0,r)\bigr)P_k(\underline x)\bigr]\\ =(2k+m-1)!!\binom{p+k+\frac{m-1}{2}}{k+\frac{m-1}{2}}\bigl(A(x_0,r)+\underline\omega\,B(x_0,r)\bigr)P_k(\underline x), \end{multline*} with \[A=D_r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pu\},\] \[B=D^r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pv\}.\] It thus remains to prove that $A$ and $B$ satisfy the Vekua-type system (see \cite{LB,D,S2,S3}) \begin{equation*} \left\{\begin{array}{ll}\partial_{x_0}A-\partial_rB&=\displaystyle{\frac{2k+m-1}{r}}\,B\\\partial_{x_0}B+\partial_rA&=0.\end{array}\right. \end{equation*} In order to do that, it will be necessary to use the fact that $u$ and $v$ satisfy in $\Xi$ the system \begin{equation*} \left\{\begin{array}{ll}\partial_{t_1}\Delta_z^pu-\partial_{t_2}\Delta_z^pv&=0\\\partial_{t_1}\Delta_z^pv+\partial_{t_2}\Delta_z^pu&=0,\end{array}\right. \end{equation*} and statements (iii)-(iv) of Lemma \ref{lemma1}. 
Indeed, \[\begin{split} \partial_{x_0}A-\partial_rB&=D_r\left(k+\frac{m-1}{2}\right)\{\partial_{x_0}\Delta_z^pu\}-\partial_rD^r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pv\}\\ &=D_r\left(k+\frac{m-1}{2}\right)\{\partial_r\Delta_z^pv\}-\partial_rD^r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pv\}\\ &=\frac{2k+m-1}{r}\,D^r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pv\}\\ &=\frac{2k+m-1}{r}\,B \end{split}\] and \[\begin{split} \partial_{x_0}B+\partial_rA&=D^r\left(k+\frac{m-1}{2}\right)\{\partial_{x_0}\Delta_z^pv\}+\partial_rD_r\left(k+\frac{m-1}{2}\right)\{\Delta_z^pu\}\\ &=D^r\left(k+\frac{m-1}{2}\right)\{\partial_{x_0}\Delta_z^pv\}+D^r\left(k+\frac{m-1}{2}\right)\{\partial_r\Delta_z^pu\}\\ &=D^r\left(k+\frac{m-1}{2}\right)\{\partial_{x_0}\Delta_z^pv+\partial_r\Delta_z^pu\}\\ &=0, \end{split}\] which completes the proof.\hfill$\square$ \section{Fueter's theorem with an extra monogenic factor $P_k(x_0,\underline x)$} Before starting the proof of the main theorem, we need to recall two basic results of Clifford analysis (see \cite{BDS,DSS,MR}). \begin{theorem}[CK-extension theorem]\label{CK} Every function $g(\underline x)$ analytic in $\mathbb R^m$ has a unique monogenic extension $\mathsf{CK}[g]$ to $\mathbb R^{m+1}$, which is given by \begin{equation*} \mathsf{CK}[g(\underline x)](x)=\sum_{j=0}^\infty\frac{(-x_0)^j}{j!}\,\partial_{\underline x}^jg(\underline x). \end{equation*} \end{theorem} Let us denote by $\mathsf{P}(k)$, $k\in\mathbb N$, the set of all $\mathbb R_{0,m}$-valued homogeneous polynomials of degree $k$ in $\mathbb R^m$. It contains the important subspace $\mathsf{M}^+(k)$ consisting of all homogeneous monogenic polynomials of degree $k$. \begin{theorem}[Almansi-Fischer decomposition]\label{Fischer} Let $k\in\mathbb N$. Then \[\mathsf{P}(k)=\bigoplus_{n=0}^k\underline x^n\mathsf{M}^+(k-n).\] \end{theorem} We are now ready to prove the final result. 
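As a quick illustration of Theorem \ref{CK}, consider the simplest case $m=1$, where $e_1$ plays the role of the imaginary unit and monogenicity reduces to $(\partial_{x_0}+i\partial_{x_1})f=0$. The following sympy sketch (our illustration, not part of the proofs) checks the CK-extension series directly for a cubic initial datum:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1', real=True)

g = x1**3   # analytic initial datum on R^1
# CK[g](x) = sum_j (-x0)^j/j! * (e_1 d/dx1)^j g, with e_1 ~ i for m = 1;
# the series terminates because g is a polynomial of degree 3
CK = sum((-x0)**j / sp.factorial(j) * sp.I**j * sp.diff(g, x1, j)
         for j in range(4))

# the extension restricts to g on {x0 = 0} and is monogenic
assert CK.subs(x0, 0) == g
assert sp.expand(sp.diff(CK, x0) + sp.I*sp.diff(CK, x1)) == 0
```

Here the extension is simply $(x_1-ix_0)^3$, the unique monogenic polynomial restricting to $x_1^3$.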
\begin{theorem} Let $f(z)=u(t_1,t_2)+iv(t_1,t_2)$ be a $\mathbb C$-valued holomorphic function in some open subset $\Xi\subset\mathbb C^+$ and assume that $P_k(x_0,\underline x)$ is a homogeneous monogenic polynomial of degree $k$ in $\mathbb R^{m+1}$. If $m$ is odd, then the function \begin{equation*} \Delta_x^{k+\frac{m-1}{2}}\bigl[\bigl(u(x_0,r)+\underline\omega\,v(x_0,r)\bigr)P_k(x_0,\underline x)\bigr] \end{equation*} is monogenic in $\widetilde\Omega=\{x\in\mathbb R^{m+1}:\;(x_0,r)\in\Xi\}$. \end{theorem} \textit{Proof.} It is clear from Theorem \ref{CK} that $P_k(x_0,\underline x)=\mathsf{CK}\left[P_k(0,\underline x)\right](x)$. By Theorem \ref{Fischer}, there exist unique $P_{k-n}(\underline x)\in\mathsf{M}^+(k-n)$ such that \[P_k(x_0,\underline x)=\sum_{n=0}^k\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x).\] Thus, it suffices to prove the monogenicity of \[\Delta_x^{k+\frac{m-1}{2}}\bigl[\bigl(u(x_0,r)+\underline\omega\,v(x_0,r)\bigr)\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x)\bigr],\quad n=0,\dots,k.\] As \[\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x)=\sum_{j=0}^k\frac{(-x_0)^j}{j!}\,\partial_{\underline x}^j\big(\underline x^nP_{k-n}(\underline x)\big)\] and on account of the equality \[\partial_{\underline x}\big(\underline x^nP_{k-n}(\underline x)\big)=\left\{\begin{array}{ll}-(2k+m-n-1)\underline x^{n-1}P_{k-n}(\underline x),&\text{if}\;n\;\text{odd},\\-n\underline x^{n-1}P_{k-n}(\underline x),&\text{if}\;n\;\text{even},\end{array}\right.\] we may conclude that $\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x)$ is of the form \[\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x)=\left(\sum_{j=0}^kc_jx_0^j\underline x^{n-j}\right)P_{k-n}(\underline x),\quad c_j\in\mathbb R.\] Therefore \[\mathsf{CK}\left[\underline x^nP_{k-n}(\underline x)\right](x)=\big(U(x_0,r)+\underline\omega\,V(x_0,r)\big)P_{k-n}(\underline x),\] where $U$ and $V$ are $\mathbb R$-valued homogeneous 
polynomials of degree $n$ in the variables $x_0$ and $r$. The corresponding $\mathbb C$-valued function $g(z)=U(t_1,t_2)+iV(t_1,t_2)$ clearly satisfies \[\partial_{\bar z}^{n+1}g(z)=0,\quad z\in\mathbb C,\] whence \[\partial_{\bar z}^{n+1}\big(f(z)g(z)\big)=0,\quad z\in\Xi,\] i.e. $f(z)g(z)$ is $(n+1)$-holomorphic in $\Xi$. It then follows that \[\partial_{\bar z}\Delta_z^n\big(f(z)g(z)\big)=0,\quad z\in\Xi.\] The proof now follows from Theorem \ref{msFThm}.\hfill$\square$ \subsection*{Acknowledgment} D. Pe\~na Pe\~na was supported by a Post-Doctoral Grant of \emph{Funda\c{c}\~ao para a Ci\^encia e a Tecnologia}, Portugal (grant number: SFRH/BPD/45260/2008).
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}} \newcommand{\setcounter{equation}{0} \section*{Appendix}}{\addtocounter{section}{1} \setcounter{equation}{0} \section*{Appendix \Alph{section}}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \textwidth 160mm \textheight 230mm \begin{document} \setcounter{page}{0} \topmargin 0pt \oddsidemargin 5mm \renewcommand{\thefootnote}{\arabic{footnote}} \newpage \setcounter{page}{0} \begin{titlepage} \begin{flushright} OUTP-95-385\\ ISAS/EP/95/78 \end{flushright} \vspace{0.5cm} \begin{center} {\large {\bf The Spin-Spin Correlation Function in the Two-Dimensional \\ Ising Model in a Magnetic Field at $T=T_{c}$}}\\ \vspace{1.8cm} {\large G. Delfino} \\ \vspace{0.5cm} {\em Theoretical Physics, University of Oxford}\\ {\em 1 Keble Road, Oxford OX1 3NP, United Kingdom}\\ \vspace{0.5cm} {\large and}\\ \vspace{0.5cm} {\large G. Mussardo}\\ \vspace{0.5cm} {\em International School for Advanced Studies,\\ and \\ Istituto Nazionale di Fisica Nucleare\\ 34014 Trieste, Italy}\\ \end{center} \vspace{1.2cm} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \begin{abstract} \noindent The form factor bootstrap approach is used to compute the exact contributions in the large distance expansion of the correlation function $<\sigma(x) \sigma(0)>$ of the two-dimensional Ising model in a magnetic field at $T=T_{c}$.
The matrix elements of the magnetization operator $\sigma(x)$ present a rich analytic structure induced by the (multi) scattering processes of the eight massive particles of the model. The spectral representation series has a fast rate of convergence and perfectly agrees with the numerical determination of the correlation function. \end{abstract} \vspace{.3cm} \end{titlepage} \newpage \noindent \resection{Introduction} Over the past few years, considerable progress has been made in the use of conformal invariance methods and scattering theory for the understanding of the critical points and the nearby scaling region of two-dimensional statistical models (see, for instance \cite{ISZ,GM}). At the critical points, the correlation functions of the statistical models fall into a scale-invariant regime and their computation may be achieved by solving the linear differential equations obtained by the representation theory of the infinite-dimensional conformal symmetry \cite{BPZ}. The situation is different away from criticality. The scaling region may be described in terms of the relevant deformations of the fixed point actions. These deformations destroy the long-range fluctuations of the critical point and the associated quantum field theories are usually massive. If an infinite number of conserved charges survive the deformation of the critical point action, the corresponding QFT can be efficiently characterized (on-shell) by the relativistic scattering processes of the massive excitations. In this case, the integrals of motion severely restrict the bound state structure of the theory and force the $S$-matrix to be elastic and factorizable \cite{Zam,ZZ}. Once the exact $S$-matrix of a model is known, one can proceed further and investigate the off-shell behaviour of the theory by means of the Form Factor approach \cite{KW,Smirnov}. 
This consists in computing matrix elements of the local fields on the set of asymptotic states and reconstructing their correlation functions in terms of the spectral density representation. As argued in \cite{CMpol}, and also confirmed by the explicit solution of several two-dimensional massive QFT \cite{YZ,JLG,ZamYL,DM,FMS}, a general property of the spectral series of massive integrable QFT is their very fast rate of convergence for {\em all} distance scales. This important quality of the spectral representation series may be regarded as the key to the success of the Form Factor approach. The aim of this paper is to apply the Form Factor approach and compute the spin-spin correlation function $G(x) = <\sigma(x) \sigma(0) >$ of the Ising model in a magnetic field $h$ at $T = T_c$ (in the sequel, this model will be referred to as IMMF). For the other integrable deformation of the critical Ising model, i.e. the thermal one, the spin-spin correlation function has been exactly determined in \cite{MW,McBook} and this result, together with its remarkable connection with the Painlev\'e functions \cite{Painleve,BB}, may be regarded as one of the main accomplishments in statistical mechanics. On the other hand, very little is known about the spin-spin correlation function for $h\neq 0$ at $T=T_c$, whose determination has been a long-standing problem of statistical mechanics. As we will show in this paper, the computation of $G(x)$ can now be approached analytically, thanks to the scattering formulation of the model proposed by Zamolodchikov \cite{Zam}. Apart from the importance of computing $G(x)$ itself, there are other related theoretical issues which render this calculation instructive. The first issue concerns the rich structure of bound states and higher-order poles of the $S$-matrix. Higher-order poles of the $S$-matrix of two-dimensional field theories can be naturally interpreted as singularities associated to multi-scattering processes \cite{ColT,BCDS,ChM}.
On this basis, one would expect a similar hierarchy of higher-order poles in the form factors as well. There is, however, an important difference between the pole structures of the $S$-matrix and those of the form factors. In fact, the $S$-matrix contains {\em simultaneously} information about the $s$-channel as well as the $u$-channel. Correspondingly, the poles of the $S$-matrix are always arranged in pairs, with their positions located in crossing symmetrical points. This is in concordance with the two different ways of looking at a scattering process, i.e. in the direct or in the crossed channel. On the contrary, the $u$-variable plays no role in the calculation of the form factors as these only depend on the $s$-variables of all subchannels of the asymptotic states on which the matrix elements are computed. The absence of $u$-channel singularities in the form factors implies that their analytic structure may be different from that of the $S$-matrix and it is therefore an interesting problem to understand how the higher-order poles enter the form factor calculation. The second theoretical issue that emerges from the computation of the spin-spin correlation function in the IMMF is a careful reconsideration of the so-called {\em minimality prescription}, which is usually invoked for computing the form factors. Briefly stated, this consists in assuming that the form factors have the minimal analytic structure, compatible with the nature of the operator and the bound state pattern of the scattering theory. So far, this prescription has been successfully applied to solve several two-dimensional QFT as, for instance, those of refs.\,\cite{YZ,JLG,ZamYL,DM,FMS}, despite the fact that the theoretical reason of its validity was lacking. One of the basic motivations why minimality is often adopted is because frequently the asymptotic behaviour of the matrix elements is not easy to determine. 
In this paper, we will present a simple argument which will allow us to place a quite restrictive upper bound on the high energy limit of form factors. Using this criterion, one can see that the minimality prescription is violated in the IMMF and therefore extra polynomials in the variable $s$ have to be included in the matrix elements of the field $\sigma(x)$. These polynomials, nevertheless, can be uniquely determined since the scattering theory always provides a sufficient number of constraints. As a matter of fact, the equations which fix the extra polynomials are usually overdetermined and this gives rise to self-consistency conditions which are indeed fulfilled by the IMMF form factors. The last point we would like to mention is the successful comparison of our theoretical determination of $<\sigma(x) \sigma(0)>$ with numerical simulations. These simulations have been carried out in the last few years by two different groups \cite{R,LR,D} and in particular, a collection of high-precision numerical estimates of the spin-spin correlation function of the IMMF, for different values of the magnetic field and different sizes of the lattice, can be found in \cite{LR}. Although there is no doubt about the actual existence of higher-mass particles, which can be easily extracted by transfer matrix diagonalization of the IMMF \cite{numspectrum} or directly from the lattice \cite{nienhuis}, it was quite difficult to see the presence of these higher-mass states from the analysis of the numerical data of the spin-spin correlation function. Namely, a best fit of the two-point function seemed to be compatible with the only contribution of the fundamental particle \cite{R}, a result which appears intriguing. In fact, from a theoretical point of view there is no reason why decoupling of higher-mass particles should occur in the spin-spin correlator since the IMMF has apparently no selection rules related to any symmetry. 
Indeed, as we will see, the magnetization field $\sigma(x)$ couples to {\em all} particles of the theory and it is only the small values of the relative couplings which could be responsible for a possible misleading interpretation of the numerical results. The paper is organized as follows. In Section 2, we briefly outline the main features of scattering theory of the IMMF, the exact mass spectrum of the model and some other quantities which will prove useful in the sequel of the paper. In Section 3, we analyse the Form Factor approach and the properties of the spectral representation series of the correlation functions. In Section 4, a general constraint on the asymptotic behaviour of the form factors is introduced and applied to the computation of the matrix elements of the magnetization operator $\sigma(x)$ of the IMMF. We also discuss the occurrence and the interpretation of higher order poles in the form factors. Comparison of our theoretical results versus numerical simulations, as well as the saturation of the sum-rules satisfied by the spin-spin correlation function, are presented in Section 5. The paper also contains three appendices. In Appendix A we discuss the general bootstrap approach to the computation of the form factors of the model. Appendix B gathers useful mathematical formulas to deal with the monodromy properties of the matrix elements. Appendix C presents specific examples of matrix elements with singularities associated to higher order poles in the scattering amplitudes. \resection{Scattering theory} In the continuum limit, the IMMF may be regarded as a perturbed CFT. At the critical point, ($T=T_c$ and $h=0$), the Ising model is described by the conformal minimal model ${\cal M}_{3,4}$ with central charge $C = \frac{1}{2}$ \cite{BPZ}. 
There are three conformal families in the model, those of the identity, magnetization and energy operators, denoted respectively as $[I = (0,0)]$, $\left[\sigma = \left(\frac{1}{16},\frac{1}{16}\right)\right]$ and $\left[\epsilon = \left(\frac{1}{2},\frac{1}{2}\right)\right]$, where the numbers $(\Delta_i,\Delta_i)$ in the round brackets are their conformal weights. We can move the system away from criticality by modifying the action ${\cal A}_0$ of the critical point as \begin{equation} {\cal A} = {\cal A}_0 + h \int d^2x \, \sigma(x) \,\,\,. \label{action} \end{equation} For small values of the magnetic field, the system is still at $T = T_c$. However, the coupling to the magnetic field induces a mass scale $M(h)$ in the problem and destroys the long-range fluctuations of the critical point. Using the action (\ref{action}), or the corresponding lattice Hamiltonian, one can in principle analyse the properties of the model in terms of a perturbative expansion in $h$ \cite{McCoy,Dotsenko}. There are, however, some important questions which cannot be easily addressed by using such perturbative formulation, as, for instance, the determination of the exact mass spectrum of the model or the computation of its correlation functions. In order to answer such questions, we have to rely on a non-perturbative approach that exploits the most important feature of the dynamics of the model, namely its integrability. The IMMF has, in fact, an infinite number of conserved charges of spin $s=1,7,11,13,17,19,23,29$ (mod $30$). As shown in a remarkable paper by Zamolodchikov \cite{Zam}, the existence of these conserved charges allows one to define a self-consistent scattering theory which provides an alternative route to the perturbative formulation. Let us briefly outline the main results discussed in \cite{Zam}. 
The scattering theory which describes the scaling limit of the IMMF contains eight different species of self-conjugated particles $A_{a}$, $a=1,\ldots,8$ with masses \begin{eqnarray} m_1 &=& M(h)\,, \nonumber \\ m_2 &=& 2 m_1 \cos\frac{\pi}{5} = (1.6180339887..) \,m_1\,,\nonumber\\ m_3 &=& 2 m_1 \cos\frac{\pi}{30} = (1.9890437907..) \,m_1\,,\nonumber\\ m_4 &=& 2 m_2 \cos\frac{7\pi}{30} = (2.4048671724..) \,m_1\,,\nonumber \\ m_5 &=& 2 m_2 \cos\frac{2\pi}{15} = (2.9562952015..) \,m_1\,,\\ m_6 &=& 2 m_2 \cos\frac{\pi}{30} = (3.2183404585..) \,m_1\,,\nonumber\\ m_7 &=& 4 m_2 \cos\frac{\pi}{5}\cos\frac{7\pi}{30} = (3.8911568233..) \,m_1\,, \nonumber\\ m_8 &=& 4 m_2 \cos\frac{\pi}{5}\cos\frac{2\pi}{15} = (4.7833861168..) \,m_1\,. \nonumber \end{eqnarray} Within the standard CFT normalization of the magnetization operator, fixed by the equation \begin{equation} <\sigma(x)\sigma(0)>=\frac{1}{\,\,|x|^{\frac{1}{4}}}\,\,,\hspace{1cm}|x|\rightarrow 0\label{uv} \end{equation} the overall mass scale $M(h)$ has been exactly determined in \cite{Fateev}, \begin{equation} M(h) \,=\, {\cal C} \, h^{\frac{8}{15}} \label{fat} \end{equation} where \begin{equation} {\cal C} \,=\, \frac{4 \sin\frac{\pi}{5} \Gamma\left(\frac{1}{5}\right)} {\Gamma\left(\frac{2}{3}\right) \Gamma\left(\frac{8}{15}\right)} \left(\frac{4 \pi^2 \Gamma\left(\frac{3}{4}\right) \Gamma^2\left(\frac{13}{16}\right)}{\Gamma\left(\frac{1}{4}\right) \Gamma^2\left(\frac{3}{16}\right)}\right)^{\frac{4}{15}} \, \,=\, 4.40490858... \end{equation} The scattering processes in which the eight particles are involved are completely elastic (the final state contains exactly the same particles as the initial one with unchanged momenta) and, due to the factorization of multiparticle scattering amplitudes induced by integrability, they are entirely characterised by the two-particle amplitudes $S_{ab}$ (Figure 1).
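As a quick consistency check, the mass ratios and the constant ${\cal C}$ quoted above are easy to reproduce numerically; the following is a minimal sketch using only the Python standard library (masses in units of $m_1$):

```python
# Numerical check of the E8 mass spectrum of the IMMF, in units of
# m1 = M(h), together with Fateev's amplitude C of M(h) = C h^{8/15}.
from math import cos, gamma, pi, sin

m1 = 1.0
m2 = 2 * m1 * cos(pi / 5)
m3 = 2 * m1 * cos(pi / 30)
m4 = 2 * m2 * cos(7 * pi / 30)
m5 = 2 * m2 * cos(2 * pi / 15)
m6 = 2 * m2 * cos(pi / 30)
m7 = 4 * m2 * cos(pi / 5) * cos(7 * pi / 30)
m8 = 4 * m2 * cos(pi / 5) * cos(2 * pi / 15)
masses = [m1, m2, m3, m4, m5, m6, m7, m8]

# Fateev's exact constant, evaluated from the Gamma-function formula
C = (4 * sin(pi / 5) * gamma(1 / 5)) / (gamma(2 / 3) * gamma(8 / 15)) * (
    (4 * pi ** 2 * gamma(3 / 4) * gamma(13 / 16) ** 2)
    / (gamma(1 / 4) * gamma(3 / 16) ** 2)
) ** (4 / 15)
```

Running it reproduces the decimal values quoted in the mass spectrum and the value ${\cal C}=4.40490858...$, and makes explicit that the spectrum is ordered, $m_a < m_{a+1}$.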
These are functions of the relativistic invariant Mandelstam variable $s=(p_a + p_b)^2$ or, equivalently, of $u=(p_a - p_b)^2$. $S_{ab}$ has a square root branch point in the variable $s$ at the threshold $s=(m_a + m_b)^2$. By crossing symmetry, an analogous branch point also appears at the threshold of the $u$-channel, namely at $s=(m_a - m_b)^2$ (Figure 2). These are the only branch cuts of the $S$-matrix, due to the elastic nature of the scattering processes. The other possible singularities of the scattering amplitudes $S_{ab}$ are simple and higher-order poles in the interval $ (m_a - m_b)^2 < s < (m_a + m_b)^2$, which are related to the bound state structure. An important simplification in the analysis of the analytic structure of the $S$-matrix comes from the parameterization of the external momenta in terms of the rapidity variable $\theta$, i.e. $p_{a}^{0}=m_{a}\cosh\theta_{a}$, $p_{a}^{1}=m_{a}\sinh\theta_{a}$. The mapping $ s(\theta_{ab}) = m_a^2 + m_b^2 + 2 m_a m_b \cosh\theta_{ab}$, where $\theta_{ab}=\theta_a-\theta_b$, or equivalently $ u(\theta_{ab}) = s(i \pi -\theta_{ab}) = m_a^2 + m_b^2 - 2 m_a m_b \cosh\theta_{ab}$, transforms the amplitudes $S_{ab}$ into meromorphic functions $S_{ab}(\theta_{ab})$, which satisfy the equations \begin{equation} S_{ab}(\theta)S_{ab}(-\theta)=1\,\,\,; \label{unitarity} \end{equation} \begin{equation} S_{ab}(\theta) = S_{ab}(i\pi-\theta)\,\, , \label{crossing} \end{equation} expressing the unitarity and the crossing symmetry of the theory, respectively. The simple poles of $S_{ab}(\theta)$ with positive residues are related to bound state propagation in the $s$-channel, as shown in Figure 3, whereas those with negative residues are associated to bound states in the $u$-channel.
Suppose that the particle $A_c$ with mass squared \begin{equation} m_c^2 = m_a^2 + m_b^2 + 2m_a m_b \cos u_{ab}^c, \hspace{1cm} u_{ab}^c\in(0,\pi) \label{triangle} \end{equation} is a stable bound state in the $s$-channel of the particles $A_{a}$ and $A_{b}$. In the vicinity of the resonance angle $\theta = i u_{ab}^c$, the amplitude becomes \begin{equation} S_{ab}(\theta\simeq iu_{ab}^c)\simeq\frac{i\left(\Gamma_{ab}^c\right)^2}{\theta-iu_{ab}^c}\,\,, \label{spole} \end{equation} with $\Gamma_{ab}^c$ denoting the three-particle coupling\footnote{Note that the $S$-matrix cannot determine the three-particle couplings but only their squares. This results in an ambiguity in the sign of the $\Gamma_{ab}^c$, which can be resolved by defining a consistent set of form factors.}. Since the bootstrap principle allows one to treat the bound states on the same footing as the asymptotic states, the amplitudes $S_{ab}$ are related to each other by the functional equations \cite{Zam,ZZ} \begin{equation} S_{il}(\theta) \,=\,S_{ij}(\theta + i \overline u_{jl}^k)\, S_{ik}(\theta - i \overline u_{lk}^j) \,\, , \label{bootstrap} \end{equation} ($\overline u_{ab}^c \equiv \pi - u_{ab}^c$). For the IMMF, the solution of eqs. (\ref{unitarity}), (\ref{crossing}) and (\ref{bootstrap}) is given by \begin{equation} S_{ab}(\theta) = \prod_{\alpha\in{\cal A}_{ab}} \left(f_{\alpha}(\theta)\right)^{p_\alpha} \label{smatr} \end{equation} where \begin{equation} f_{\alpha}(\theta) \equiv \frac{\tanh\frac{1}{2}\left(\theta+i\pi \alpha\right)} {\tanh\frac{1}{2}\left(\theta-i\pi \alpha\right)}\,\,\,. \label{elesmatrix} \end{equation} The sets of numbers ${\cal A}_{ab}$ and their multiplicities $p_\alpha$, specifying the amplitudes (\ref{smatr}), can be found in Table 1\footnote{Note that the numbers ${\cal A}_{ab}$ of Table 1 should be read in units of $\frac{1}{30}$.}.
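As an illustrative numerical sketch (standard library only), one can verify that the building blocks $f_\alpha(\theta)$, and hence the amplitude $S_{11}$ built from the set ${\cal A}_{11}=\left\{\frac{2}{3},\frac{2}{5},\frac{1}{15}\right\}$ of Table 1, satisfy the unitarity and crossing equations at an arbitrary sample point:

```python
# Unitarity S(theta) S(-theta) = 1 and crossing S(theta) = S(i pi - theta)
# for the amplitude S_11, built from the blocks f_alpha defined in the text.
import cmath

def f(alpha, theta):
    return (cmath.tanh((theta + 1j * cmath.pi * alpha) / 2)
            / cmath.tanh((theta - 1j * cmath.pi * alpha) / 2))

def S11(theta):
    return f(2 / 3, theta) * f(2 / 5, theta) * f(1 / 15, theta)

theta = 0.7 + 0.3j  # arbitrary point away from the poles on the imaginary axis
unitarity_defect = abs(S11(theta) * S11(-theta) - 1)
crossing_defect = abs(S11(theta) - S11(1j * cmath.pi - theta))
```

Both defects vanish to machine precision, as they must for any product of the blocks $f_\alpha$, since each block separately satisfies the two functional equations.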
In order to correctly interpret this collection of data, notice that the functions $f_{\alpha}(\theta)$ satisfy the equation $f_{\alpha}(\theta)=f_{1 -\alpha}(\theta)$ as well as $f_{\alpha}(\theta) = f_{\alpha}(i \pi -\theta)$. Hence, they have two poles located at the crossing symmetrical positions $\theta = i \pi \alpha$ and $\theta = i \pi (1-\alpha)$. Therefore, the poles of the $S$-matrix of the IMMF will always appear in pairs. Applying eq. (\ref{spole}) to the simple poles with positive residues at $\theta = \frac{2 \pi i}{3}$, $\theta = \frac{2 \pi i}{5}$ and $\theta = \frac{i\pi}{15}$ in $S_{11}(\theta)$ (related to the bound states $A_1$, $A_2$ and $A_3$ respectively), we can extract \[ \Gamma_{11}^1 \,=\,\sqrt{\frac{2 \tan \frac{2\pi}{3} \,\tan\frac{8\pi}{15} \,\tan\frac{11 \pi}{30}} {\tan\frac{2\pi}{15} \,\tan\frac{3\pi}{10}}} \,=\,10.990883.. \] \begin{equation} \Gamma_{11}^2 \,=\, \sqrt{\frac{2 \tan \frac{2\pi}{5} \,\tan\frac{8\pi}{15} \,\tan\frac{7 \pi}{30}} {\tan\frac{2\pi}{15} \,\tan\frac{\pi}{6}}} \,=\,14.322681.. \end{equation} \[ \Gamma_{11}^3 \,=\,\sqrt{\frac{2 \tan \frac{\pi}{15} \,\tan\frac{11\pi}{30} \,\tan\frac{7 \pi}{30}} {\tan\frac{\pi}{6} \,\tan\frac{3\pi}{10}}} \,=\,1.0401363.. \] Other three-particle couplings can be obtained similarly. In addition to simple poles, the $S$-matrix of the IMMF presents higher-order poles due to multi-scattering processes\footnote{A detailed analysis of the nature of higher-order poles in the $S$-matrix of the ($1 + 1$) integrable theories can be found in \cite{ColT,BCDS,ChM}.}. The odd-order poles correspond to bound states while the even-order ones do not. Their appearance is an unavoidable consequence of the iterative application of the bootstrap equations (\ref{bootstrap}). In relation to the calculation of the Form Factors, some of these poles will be considered in Section 4 and in Appendix C.
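The tangent formulas for the couplings above are straightforward to evaluate numerically; in this standard-library sketch the absolute value under the square root absorbs the sign ambiguity of the couplings mentioned in the footnote:

```python
# Numerical evaluation of the three-particle couplings extracted from the
# simple poles of S_11 (moduli only: the overall signs are not fixed by
# the S-matrix, so individual tangent factors may be negative).
from math import pi, sqrt, tan

def coupling(numerator_angles, denominator_angles):
    value = 2.0
    for a in numerator_angles:
        value *= tan(a)
    for a in denominator_angles:
        value /= tan(a)
    return sqrt(abs(value))

g11_1 = coupling([2 * pi / 3, 8 * pi / 15, 11 * pi / 30], [2 * pi / 15, 3 * pi / 10])
g11_2 = coupling([2 * pi / 5, 8 * pi / 15, 7 * pi / 30], [2 * pi / 15, pi / 6])
g11_3 = coupling([pi / 15, 11 * pi / 30, 7 * pi / 30], [pi / 6, 3 * pi / 10])
```

The three moduli reproduce the decimal values quoted above for $\Gamma_{11}^1$, $\Gamma_{11}^2$ and $\Gamma_{11}^3$.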
\resection{Correlation Functions and Form Factors} Once the spectrum and the $S$-matrix are known, we can investigate the off-shell behaviour of the theory. We can compute the two-point (as well as higher-point) correlation functions of the model through the unitarity sum (Figure 4) \begin{eqnarray} & & <\Phi(x) \Phi(0)> = \sum_{n=0}^{\infty} \int_{\theta_1 >\theta_2> \ldots >\theta_n} \frac{d\theta_1}{2\pi} \cdots \frac{d\theta_n}{2\pi} \label{spectral} \\ & & \,\,\,\,\,\,|<0|\Phi(0)|A_{a_1}(\theta_1) \cdots A_{a_n}(\theta_n)>|^2 e^{-|x| \sum_{k=1}^n m_k \cosh\theta_k} \nonumber \end{eqnarray} Basic quantities of this approach are the Form Factors (FF), i.e. the matrix elements of the local operators $\Phi$ on the asymptotic states (Figure 5), defined as \begin{equation} F^{\Phi}_{a_1,\ldots ,a_n}(\theta_1,\ldots,\theta_n) = <0| \Phi(0)|A_{a_1}(\theta_1),\ldots,A_{a_n}(\theta_n)>\,\,\,. \label{form} \end{equation} A detailed discussion of the form factor approach and the mathematical properties of the matrix elements (\ref{form}) can be found in \cite{KW,Smirnov}. Here we simply consider the basic equations we need for our discussion. For a scalar operator $\Phi(x)$, relativistic invariance requires that its form factors depend only on the rapidity differences $\theta_i-\theta_j$. The elasticity of the scattering processes, together with the crossing symmetry and the completeness relation of the asymptotic states, allows one to derive the following monodromy equations satisfied by the FF \begin{equation} \begin{array}{lll} F^{\Phi}_{a_1,..,a_i,a_{i+1},.. a_n}(\theta_1,..,\theta_i, \theta_{i+1}, .. , \theta_{n}) &=& \,S_{a_i a_{i+1}}(\theta_i-\theta_{i+1}) \, F^{\Phi}_{a_1,..a_{i+1},a_i,..a_n}(\theta_1,..,\theta_{i+1},\theta_i ,.., \theta_{n}) \,\, ,\\ F^{\Phi}_{a_1,a_2,\ldots a_n}(\theta_1+2 \pi i, \dots, \theta_{n-1}, \theta_{n} ) &=& F^{\Phi}_{a_2,a_3,\ldots,a_n,a_1}(\theta_2 ,\ldots,\theta_{n}, \theta_1) \,\, .
\end{array} \label{permu1}\\ \end{equation} In terms of the $s$-variable of the channel $(A_{a_i}\, A_{a_{j}})$, the first equation in (\ref{permu1}) implies that in the form factors there is a branch cut in the $s$-plane extending from $s = (m_{a_i} + m_{a_j})^2$ to $s = \infty$. This is similar to what happens in the $S$-matrix. However, a difference between the analytic structure of the FF and the $S$-matrix comes from the second equation, which shows instead that the FF do not have a $u$-channel cut extending from $s=-\infty$ to $s=(m_{a_i} -m_{a_j})^2$. Apart from these monodromy properties, the FF are expected to have poles induced by the singularities of the $S$-matrix. A particular role is played by the simple poles. Among them, we can select two different classes which admit a natural particle interpretation. The first type consists of the so-called kinematical poles, related to the annihilation processes of particle and anti-particle pairs. These singularities are located at $\theta_{a}-\theta_{\overline a} = i\pi$ and for the corresponding residue we have \begin{equation} -i\lim_{\tilde\theta \rightarrow \theta} (\tilde\theta - \theta) F^{\Phi}_{a,\overline a,a_1,\ldots,a_n} (\tilde\theta + i\pi,\theta,\theta_1,\ldots,\theta_n) = \left(1 - \prod_{j=1}^{n} S_{a,a_j}(\theta -\theta_j)\right)\, F^{\Phi}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_{n}) . \label{recursive} \end{equation} This equation can be graphically interpreted as an interference process due to the two different kinematical pictures drawn in Fig.\,6. A second class of simple poles is related to the presence of bound states appearing as simple poles in the $S$-matrix.
If $\theta = i u_{ab}^c$ and $\Gamma_{ab}^c$ are the resonance angle and the three-particle coupling of the fusion $A_a \times A_b \rightarrow A_c$ respectively, then the FF involving the particles $A_a$ and $A_b$ will also have a pole at $\theta = i u_{ab}^c$, with the residue given by (Fig.\,7) \begin{equation} -i \lim_{\theta_{ab} \rightarrow i u_{ab}^c} (\theta_{ab} - i u_{ab}^c) \, F^{\Phi}_{a,b,a_1,\ldots,a_n} \left(\theta_a, \theta_b ,\theta_1,\ldots,\theta_n\right) = \Gamma_{ab}^c \, F^{\Phi}_{c,a_1,\ldots,a_n}\left(\theta_c,\theta_1,\ldots,\theta_n\right) \label{recurb} \end{equation} where $\theta_c = (\theta_a \overline u_{bc}^a + \theta_b \overline u_{ac}^b)/u_{ab}^c$. In general, the FF may also present simple poles which do not fall into the two classes above. In addition, they may also have higher-order poles and, indeed, their analytic structure may be quite complicated. We will come back to this point in Section 4 where the specific example of the IMMF will be discussed. Although eqs.\,(\ref{recursive}) and (\ref{recurb}) do not exhaust all pole information, nevertheless they induce a recursive structure in the space of the FF, which may be useful for their determination\footnote{ An important general aspect of the Form Factor approach which is worth mentioning is that the validity of eqs. (\ref{permu1}), (\ref{recursive}) and (\ref{recurb}) does not rely on the choice of any specific local operator $\Phi(x)$. This observation, originally presented in \cite{JLG}, may be used to classify the operator content of the massive integrable QFT, as explicitly shown in refs. \cite{JLG,KM,K,Smirn}.}. Finding the general solution of eqs.\,(\ref{permu1}), (\ref{recursive}) and (\ref{recurb}) for the IMMF poses a mathematical problem of formidable complexity, as described in Appendix A.
Fortunately, because of an important property of the spectral series (\ref{spectral}), an accurate knowledge of the correlation functions can be reached with limited mathematical effort. This property consists of a very fast rate of convergence for {\em all} distance scales \cite{CMpol}. In view of this, the correlation functions can be determined with remarkable accuracy by truncating the series to the first few terms only. This statement appears to be obviously true in the infrared region (large $M r$), where additional terms of the series only add exponentially small contributions to the final result. The fast rate of convergence in the crossover and ultraviolet regions seems less obvious. In fact, for small values of the scaling variable $M r$, the correlators usually present power-law singularities and states with any number of particles are in principle expected to contribute significantly to the sum. However, this turns out not to be the case for integrable QFT with sufficiently mild singularities in the ultraviolet region, thanks to a ``threshold suppression effect'' discussed in \cite{CMpol}. Although this result was originally derived for QFT with only one species of particles in the spectrum, we expect that it also applies to the IMMF\footnote{As we will show in Sect.\,5, this will be indeed confirmed by: (a) a direct comparison with the numerical determination of the correlation function; (b) the saturation of the sum-rule related to the second derivative of the free-energy of the model (the zero-moment of the correlation function) and (c) the saturation of the sum-rule derived from the $c$-theorem (second moment of the correlation function).}, due to the mild singularity of $G(x)$ at the origin, i.e. $G(x) \sim x^{-1/4}$.
If this crucial property of the spectral series is taken for granted, the first terms of the series are expected to saturate the values of the correlation function with a high degree of precision and we can then concentrate on their analytic determination. The number of terms to be included in the series essentially depends on the accuracy we would like to reach in the ultraviolet region and, to this aim, it is convenient to order them according to the energy of the particle states. For the IMMF, the first seventeen states are collected in Table 2. The most important contributions to the sum come from the one-particle states $A_1$, $A_2$ and $A_3$, and for the correlation function we have correspondingly \begin{equation} G(r) \,=\,\left|<0|\sigma(0)|0>\right|^2 + \frac{\left|\Upsilon_1\right|^2}{\pi} K_0(m_1 r) + \frac{\left|\Upsilon_2\right|^2}{\pi} K_0(m_2 r) + \frac{\left|\Upsilon_3\right|^2}{\pi} K_0(m_3 r) + {\cal O}\left(e^{-2 m_1 r}\right) \label{threek} \end{equation} where $K_0(x)$ is the modified Bessel function and we have defined \begin{equation} \Upsilon_i \equiv <0|\sigma(0)|A_i> \,\,\,. \label{oneparticleff} \end{equation} These matrix elements will be exactly determined in the next section. Concerning the vacuum expectation value of $\sigma(x)$, it can be easily obtained from the relationship between this field and the trace $\Theta(x)$ of the stress-energy tensor.
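The relative size of the terms in the truncated expansion above is easy to visualize numerically. The sketch below (standard library only) evaluates the Bessel kernels $K_0(m_i r)$ through their integral representation; the one-particle amplitudes $\Upsilon_i$, computed in the next section, are deliberately left out, since only the exponential hierarchy of the kernels matters here:

```python
# Exponential hierarchy of the one-particle contributions K0(m_i r)
# entering the truncated spectral expansion of G(r).
from math import cos, cosh, exp, pi

def K0(x, n=4000, tmax=20.0):
    # K0(x) = \int_0^infty exp(-x cosh t) dt, composite trapezoid rule
    h = tmax / n
    total = 0.5 * (exp(-x) + exp(-x * cosh(tmax)))
    for k in range(1, n):
        total += exp(-x * cosh(k * h))
    return h * total

m1, m2, m3 = 1.0, 2 * cos(pi / 5), 2 * cos(pi / 30)
r = 5.0  # distance in units of 1/m1
kernels = [K0(m1 * r), K0(m2 * r), K0(m3 * r)]
```

Already at $m_1 r = 5$ the $A_2$ and $A_3$ kernels are suppressed by more than an order of magnitude with respect to the $A_1$ one, in line with the fast convergence discussed above.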
Since $\sigma(x)$ plays the role of the perturbing field in the theory under consideration, it is related to $\Theta(x)$ as \begin{equation} \Theta(x) \,=\, 2\pi \,h (2 - 2 \Delta_{\sigma})\,\sigma(x) \label{theta} \end{equation} On the other hand, the vacuum expectation of $\Theta(x)$ can be exactly determined by the Thermodynamic Bethe Ansatz and its value is given by \cite{Fateev} \begin{equation} <0|\Theta(0)|0> = \frac{\pi m_1^2}{\varphi_{11}} \,\, , \label{vacuum} \end{equation} where \begin{equation} \varphi_{11} = 2 \sum_{\alpha \in {\cal A}_{11}} \sin\pi \alpha = 2\,\left(\sin\frac{2\pi}{3} + \sin\frac{2\pi}{5} + \sin\frac{\pi}{15} \right) \,\,\,. \end{equation} Hence, using the above formulas and eq. (\ref{fat}), we have \begin{equation} <0|\sigma(0)|0> \,=\,\frac{4 {\cal C}^2}{15 \varphi_{11}} h^{1/15} \,=\, (1.27758..) \,\, h^{1/15} \label{vacuumev} \end{equation} Eq.\,(\ref{threek}) provides the first terms of the large-distance expansion of the correlation function $<\sigma(x) \sigma(0)>$. A more refined determination of $G(x)$ may be obtained by computing the FF of the higher-mass states, as discussed in the next section and in Appendix A. \resection{Form Factors of the IMMF} In the framework of the Form Factor bootstrap approach to integrable theories, the two-particle FF play a particularly important role, both from a theoretical and from a practical point of view. {}From a theoretical point of view, they provide the initial conditions which are needed for solving the recursive equations. Moreover, they also encode all the basic properties that the matrix elements with a higher number of particles inherit by factorization, namely the asymptotic behaviour and the analytic structure. In other words, once the two-particle FF of the considered operator have been given, the determination of all other matrix elements simply reduces to solving a well-defined mathematical problem.
From a practical point of view, the truncation of the spectral series at the two-particle level usually provides a very accurate approximation of the correlation function, which extends even beyond the crossover region. This section is mainly devoted to the discussion of the basic features of the two-particle FF in the IMMF. In the general case, the FF $F^{\Phi}_{ab}(\theta)$ must be a meromorphic function of the rapidity difference defined in the strip ${\it Im}\theta\in(0,2\pi)$. Its monodromy properties are dictated by the general equations (\ref{permu1}), once specialized to the case $n=2$ \begin{equation} F^{\Phi}_{ab}(\theta)=S_{ab}(\theta)F^{\Phi}_{ab}(-\theta)\,\,, \label{w1} \end{equation} \begin{equation} F^{\Phi}_{ab}(i\pi+\theta)=F^{\Phi}_{ab}(i\pi-\theta)\,\,\,. \label{w2} \end{equation} Thus, denoting by $F^{\it min}_{ab}(\theta)$ a solution of eqs.\,(\ref{w1}),(\ref{w2}) free of poles and zeros in the strip and also requiring power-bounded asymptotic behaviour in the momenta, we conclude that $F^{\Phi}_{ab}(\theta)$ must be equal to $F^{\it min}_{ab}(\theta)$ times a rational function of $\cosh\theta$. The poles of this extra function are determined by the singularity structure of the scattering amplitude $S_{ab}(\theta)$. A simple pole in $F^{\Phi}_{ab}(\theta)$ associated to the diagram of Fig.\,8 corresponds to a positive residue simple pole in $S_{ab}(\theta)$ (see eq.\,(\ref{spole})) and in this case we can write \begin{equation} F^{\Phi}_{ab}(\theta\simeq iu_{ab}^c)\simeq\frac{i \Gamma_{ab}^c}{\theta-iu_{ab}^c}F^{\Phi}_c\,\,\,. \label{pole} \end{equation} The single-particle FF $F^{\Phi}_c$ is a constant because of Lorentz invariance. Other poles induced by higher order singularities in the scattering amplitudes will be considered later in this section.
The kinematical poles discussed in Section 3 do not appear at the two-particle level if the operator ${\Phi}(x)$ is local with respect to the fields which create the particles, which is the case of interest for us. Summarizing, the two-particle FF can be parameterized as \begin{equation} F^{\Phi}_{ab}(\theta)=\frac{Q^{\Phi}_{ab}(\theta)}{D_{ab}(\theta)} F^{min}_{ab}(\theta)\,\,, \label{param} \end{equation} where $D_{ab}(\theta)$ and $Q^{\Phi}_{ab}(\theta)$ are polynomials in $\cosh\theta$: the former is fixed by the singularity structure of $S_{ab}(\theta)$ while the latter carries the whole information about the operator ${\Phi}(x)$. An upper bound on the asymptotic behaviour of FF and hence on the order of the polynomial $Q^{\Phi}_{ab}(\theta)$ in eq.\,(\ref{param}) comes from the following argument. Let $2\Delta_{\Phi}$ be the scaling dimension of the scalar operator ${\Phi}(x)$ in the ultraviolet limit, i.e. \begin{equation} <{\Phi}(x){\Phi}(0)>\,\sim\frac{1}{\,\,\,|x|^{4\Delta_{\Phi}}}\,\,, \hspace{1cm}|x|\rightarrow 0\,\,. \end{equation} Then in a massive theory \begin{equation} M_p\equiv\int d^2x\,|x|^p<{\Phi}(x){\Phi}(0)>_c \hspace{.5cm} < +\infty \hspace{1cm} {\rm if} \hspace{.8cm} p > 4\Delta_{\Phi}-2\,\,, \label{moment} \end{equation} where the subscript $c$ denotes the connected correlator.
The two-point correlator may be expressed in terms of its Euclidean Lehmann representation as \begin{equation} <\Phi(x)\Phi(0)>_c=\int d^2p\,\,e^{ipx}\int d\mu^2\,\,\frac{\rho(\mu^2)}{p^2+\mu^2}\,\,\,, \label{leh} \end{equation} where the spectral function $\rho$ is given by \[ \rho(\mu^2)=\frac{1}{2\pi}\sum_{n=1}^{\infty}\int_{\theta_1>\ldots>\theta_n} \frac{d\theta_1}{2\pi}\ldots\frac{d\theta_n}{2\pi}|F^{\Phi}_{a_1,\ldots,a_n} (\theta_1,\ldots,\theta_n)|^2\delta(\sum_{k=1}^{n}m_k\cosh\theta_k-\mu) \delta(\sum_{k=1}^nm_k\sinh\theta_k)\,\,\,. \] Substituting eq.\,(\ref{leh}) into the definition of $M_p$ and performing the integrations over $p$, $\mu$, and $x$, one finds \begin{equation} M_p\sim\sum_{n=1}^\infty\int_{\theta_1>\ldots>\theta_n}\,\, d\theta_1\ldots d\theta_n \frac{|F^{\Phi}_{a_1,\ldots,a_n} (\theta_1,\ldots,\theta_n)|^2} {\left(\sum_{k=1}^n m_k\cosh\theta_k\right)^{p+1}}\,\, \delta\left(\sum_{k=1}^nm_k\sinh\theta_k\right)\,\,\,. \label{aaa} \end{equation} Eq.\,(\ref{moment}) can now be used to derive an upper bound for the real quantity $y_{\Phi}$, defined by \begin{equation} \lim_{|\theta_i|\rightarrow\infty} F^{\Phi}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n) \sim \, e^{y_{\Phi}|\theta_i|}\,\,\,. \label{bound} \end{equation} This can be achieved by first noting that, in the limit $\theta_i\rightarrow+\infty$ of the integrand of eq.\,(\ref{aaa}), the delta-function forces some other rapidity $\theta_j$ to diverge to minus infinity as $-\theta_i$. Since the matrix element $F^\Phi_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n)$ depends on the rapidity differences, it will contribute to the integrand a factor $e^{2y_\Phi|\theta_i|}$ in the limit $|\theta_i|\rightarrow\infty$. Then eq.\,(\ref{moment}) leads to the constraint \begin{equation} y_{\Phi}\,\leq\,\Delta_\Phi\,\,\,.
\label{bbb} \end{equation} Note that this conclusion may not hold for non-unitary theories since not all the terms in the expansion over intermediate states are guaranteed to be positive in these cases. Let us see how the aforementioned considerations apply to the specific case of the IMMF. An appropriate solution of eqs.\,(\ref{w1}) and (\ref{w2}), corresponding to the scattering amplitudes reported in Table 1, can be written as \begin{equation} F^{min}_{ab}(\theta)=\left(-i\sinh\frac{\theta}{2}\right)^{\delta_{ab}} \prod_{\alpha\in{\cal A}_{ab}}\left(G_{\alpha}(\theta) \right)^{p_\alpha}\,\,, \label{fmin} \end{equation} where \begin{equation} G_{\alpha}(\theta)=\exp\left\{2\int_0^\infty\frac{dt}{t}\frac{\cosh\left( \alpha - \frac{1}{2}\right)t}{\cosh\frac{t}{2}\sinh t}\sin^2\frac{(i\pi-\theta)t}{2\pi}\right\}\,\,\,. \label{block} \end{equation} For large values of the rapidity \begin{equation} G_{\alpha} (\theta) \,\sim\, \exp(|\theta|/2)\,\,, \hspace{1cm} |\theta|\rightarrow\infty \,\,, \label{dec} \end{equation} independent of the index $\alpha$. Other properties of this function are discussed in Appendix B. An analysis of the two-particle FF singularities, which will be described later in this section, suggests that the pole terms appearing in the general parameterization eq.\,(\ref{param}) could be written as \begin{equation} D_{ab}(\theta)=\prod_{\alpha\in {\cal A}_{ab}} \left({\cal P}_\alpha(\theta)\right)^{i_\alpha} \left({\cal P}_{1-\alpha}(\theta)\right)^{j_\alpha} \,\,\,, \label{dab} \end{equation} where \begin{equation} \begin{array}{lll} i_{\alpha} = n+1\,\,\, , & j_{\alpha} = n \,\,\, , & {\rm if} \hspace{.5cm} p_\alpha=2n+1\,\,\,; \\ i_{\alpha} = n \,\,\, , & j_{\alpha} = n \,\,\, , & {\rm if} \hspace{.5cm} p_\alpha=2n\,\,\, , \end{array} \end{equation} and we have introduced the notation \begin{equation} {\cal P}_{\alpha}(\theta)\equiv \frac{\cos\pi\alpha-\cosh\theta}{2\cos^2\frac{\pi\alpha}{2}}\,\,\,.
\label{polo} \end{equation} Both $F^{min}_{ab}(\theta)$ and $D_{ab}(\theta)$ have been normalized to 1 at $\theta=i\pi$. Finally, let us turn our attention to the determination of the polynomials $Q^{\Phi}_{ab}(\theta)$ for the specific operator we are interested in, namely the magnetization field $\sigma(x)$. In view of the relation (\ref{theta}), this is the same as considering the analogous problem for the trace of the energy-momentum tensor $\Theta(x)$. For reasons which will become immediately clear, we will consider the latter operator in the remainder of this section. The conservation equation $\partial_\mu T^{\mu\nu}=0$ implies the following relations among the FF of the different components of the energy-momentum tensor \begin{eqnarray} F^{T^{++}}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n) &\sim& \frac{P^+}{P^-} F^{\Theta}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n)\,\,\,;\\ F^{T^{--}}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n) &\sim& \frac{P^-}{P^+} F^{\Theta}_{a_1,\ldots,a_n}(\theta_1,\ldots,\theta_n)\,\,, \label{mismatch} \end{eqnarray} where $x^\pm=x^0\pm x^1$ are the light-cone coordinates and $P^\pm\equiv\sum_{i=1}^n p_{a_i}^\pm$. The requirement that all the components of the energy-momentum tensor must exhibit the same singularity structure leads to the conclusion that the FF of $\Theta(x)$ must contain a factor $P^+P^-$. However, the case $n=2$ is special because, if the two particles have equal masses, the mismatch of the singularities disappears in eqs.\,(\ref{mismatch}) and no factorisation takes place. From this analysis, we conclude that for our model we can write \begin{equation} Q^\Theta_{ab}(\theta) = \left(\cosh\theta + \frac{m_a^2+m_b^2}{2m_am_b}\right)^{1-\delta_{ab}} P_{ab}(\theta)\,\,, \end{equation} where \begin{equation} P_{ab}(\theta)\equiv\sum_{k=0}^{N_{ab}} a^k_{ab}\,\cosh^k\theta\,\,\,. \end{equation} The degree $N_{ab}$ of the polynomials $P_{ab}(\theta)$ can be severely constrained by using eqs.\,(\ref{bound}) and (\ref{bbb}).
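The normalization and zeros of the pole factors are elementary to verify numerically. The following standard-library sketch checks that ${\cal P}_\alpha(i\pi)=1$ and that each factor ${\cal P}_\alpha$ vanishes at $\theta=i\pi\alpha$, so that the pair ${\cal P}_\alpha\,{\cal P}_{1-\alpha}$ entering $D_{ab}$ supplies form factor poles at both crossing-symmetric positions:

```python
# The pole factor P_alpha(theta) = (cos(pi alpha) - cosh(theta)) /
# (2 cos^2(pi alpha / 2)): unit normalization at theta = i pi and a zero
# at theta = i pi alpha (its partner P_{1-alpha} vanishes at i pi (1-alpha)).
import cmath

def P(alpha, theta):
    return ((cmath.cos(cmath.pi * alpha) - cmath.cosh(theta))
            / (2 * cmath.cos(cmath.pi * alpha / 2) ** 2))

alphas = [2 / 3, 2 / 5, 1 / 15]  # the set A_11 of Table 1
norms = [P(a, 1j * cmath.pi) for a in alphas]
zeros = ([P(a, 1j * cmath.pi * a) for a in alphas]
         + [P(1 - a, 1j * cmath.pi * (1 - a)) for a in alphas])
```

The unit normalization follows from the half-angle identity $1+\cos\pi\alpha = 2\cos^2\frac{\pi\alpha}{2}$, consistent with the statement that $D_{ab}$ equals one at $\theta=i\pi$.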
Additional conditions for these polynomials are provided by the normalization of the operator $\Theta(x)$, which, for the diagonal elements $F^\Theta_{aa}$, reads \begin{equation} F_{aa}^\Theta(i\pi)=<A_a(\theta_a)|\Theta(0)|A_a(\theta_a)>=2\pi m^2_a\,\,\,. \label{ipi} \end{equation} Using all the information above, we can now proceed with the computation of the IMMF form factors, starting from the simplest two-particle FF of the model, namely $F_{11}^{\Theta}(\theta)$ and $F_{12}^{\Theta}(\theta)$. First of all, by using eqs.\,(\ref{bbb}) and (\ref{dec}) one concludes that $N_{11}\leq 1$ and $N_{12}\leq 1$. In view of the normalization condition (\ref{ipi}), only one unknown parameter, say $a^1_{11}$, is necessary in order to have the complete expression of $F^\Theta_{11}(\theta)$. In contrast, we need two parameters, $a^0_{12}$ and $a^1_{12}$, to specify $F^\Theta_{12}(\theta)$. To determine all three unknown parameters, note that the scattering amplitude $S_{11}(\theta)$ possesses three positive residue poles at $\theta = i\frac{2\pi}{3}$, $\theta = i\frac{2\pi}{5}$ and $\theta = i\frac{\pi}{15}$ which correspond to the particles $A_1$, $A_2$ and $A_3$ respectively; on the other hand, $S_{12}(\theta)$ exhibits four positive residue poles at $\theta = i\frac{4\pi}{5}$, $\theta = i\frac{3\pi}{5}$, $\theta = i\frac{7\pi}{15}$ and $\theta = i\frac{4\pi}{15}$ associated to $A_1$, $A_2$, $A_3$ and $A_4$. Hence, since three poles are common to both amplitudes and no multiple poles appear in either of them, eq.\,(\ref{pole}) provides a system of three linear equations which uniquely determine the coefficients $a^1_{11}$, $a^0_{12}$ and $a^1_{12}$ \begin{equation} \frac{1}{\Gamma_{11}^c} {\rm Res}_{\theta=iu_{11}^c}F_{11}^{\Theta}(\theta)= \frac{1}{\Gamma_{12}^c} {\rm Res}_{\theta=iu_{12}^c}F_{12}^{\Theta}(\theta) \hspace{1cm}c=1,2,3\,\,.
\end{equation} The result of this calculation can be expressed in terms of the mass ratios $\hat m_i = m_i/m_1$ as \begin{equation} P_{11}(\theta) \,=\,\frac{2\pi m_1^2}{\hat m_3 \hat m_7} (2\cosh\theta + 2 + \hat m_3 \hat m_7) \end{equation} \begin{equation} P_{12}(\theta) \,=\,H_{12} \left(2 \hat m_2 \cosh\theta + \hat m_2^2 + \hat m_8^2\right) \end{equation} where \[ H_{12} = (1.912618..) \, m_1^2 \,\,\,. \] Eq.\,(\ref{pole}) can now be used to obtain the one-particle form factors $F^{\Theta}_a$ ($a=1,\ldots,4$), whose numerical values are reported in Table 3. In particular, \[ F^\Theta_1 \,=\, \frac{\pi m^2_1}{\Gamma_{11}^1 \hat m_3 \hat m_7} \frac{(1 + \hat m_3 \hat m_7)\, \left[G_{\frac{2}{3}} G_{\frac{2}{5}} G_{\frac{1}{15}} \left(\frac{2 \pi i}{3}\right) \right]\cos^2\frac{\pi}{5} \cos^2\frac{\pi}{30} } {\sin\frac{8\pi}{15} \,\sin\frac{2\pi}{15} \,\sin\frac{3\pi}{10}\, \sin\frac{11\pi}{30}} \] \[ F^{\Theta}_2 \,=\, -\frac{4 \pi m_1^2}{\Gamma_{11}^2 \hat m_3 \hat m_7} \frac{\sin\frac{\pi}{5}}{\sin\frac{2 \pi}{5}} \frac{ \left(2 \cos\frac{2\pi}{5} + 2 + \hat m_3 \hat m_7\right)\, \left[G_{\frac{2}{3}} G_{\frac{2}{5}} G_{\frac{1}{15}} \left(\frac{2\pi i}{5}\right)\right] \left(\cos\frac{\pi}{3}\,\cos\frac{\pi}{30}\,\cos\frac{\pi}{5}\right)^2 } {\sin\frac{2\pi}{5} \,\sin\frac{8\pi}{15}\, \sin\frac{\pi}{6} \,\sin\frac{7\pi}{30}} \] \[ F^{\Theta}_3 \,=\, \frac{4 \pi m_1^2}{\Gamma_{11}^3 \hat m_3 \hat m_7} \frac{\sin\frac{\pi}{30}}{\sin\frac{\pi}{15}} \frac{\left(2 \cos\frac{\pi}{15} + 2 + \hat m_3 \hat m_7\right)\, \left[G_{\frac{2}{3}} G_{\frac{2}{5}} G_{\frac{1}{15}} \left(\frac{i\pi}{15}\right) \right] \left(\cos\frac{\pi}{5} \cos\frac{\pi}{3} \cos^2\frac{\pi}{30}\right)^2 } {\sin\frac{3\pi}{10}\,\sin\frac{11\pi}{30}\, \sin\frac{\pi}{6}\,\sin\frac{7\pi}{30}} \] In order to continue the bootstrap procedure and compute the other one-particle and two-particle FF, we first have to consider the multiple poles of the scattering amplitudes.
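As a quick consistency check on the solution just obtained, the normalization condition on the diagonal form factor can be verified directly: at $\theta=i\pi$ both $F^{min}_{11}$ and $D_{11}$ equal one, so $F^{\Theta}_{11}(i\pi)=2\pi m_1^2$ reduces to $P_{11}(i\pi)=2\pi m_1^2$. A minimal standard-library sketch (in units $m_1=1$):

```python
# Check of the normalization F_11^Theta(i pi) = 2 pi m_1^2: since the
# minimal form factor and D_11 are normalized to 1 at theta = i pi, the
# condition reduces to P_11(i pi) = 2 pi, with cosh(i pi) = -1.
from math import cos, pi

m2 = 2 * cos(pi / 5)
m3 = 2 * cos(pi / 30)
m7 = 4 * m2 * cos(pi / 5) * cos(7 * pi / 30)

def P11(cosh_theta):
    return 2 * pi / (m3 * m7) * (2 * cosh_theta + 2 + m3 * m7)

value_at_ipi = P11(-1.0)  # cosh(i pi) = -1, so the bracket collapses to m3*m7
```

The constant piece $2+\hat m_3\hat m_7$ is precisely what cancels the $2\cosh(i\pi)=-2$ term, leaving $2\pi m_1^2$ as required.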
Such poles are known to represent the two-dimensional analog of anomalous thresholds associated to multi-scattering processes \cite{ColT}. These are processes in which the two ingoing particles decay into their ``constituents'', which interact and then recombine to give a two-particle final state. In the general framework of relativistic scattering theory, the location of these singularities is determined by the so-called Landau rules \cite{eden}. In the two-dimensional case, such rules admit the following simple formulation: singularities occur only for those values of the momenta for which a space-time diagram of the process can be drawn as a geometrical figure with all (internal and external) particles on mass-shell and energy-momentum conservation at the vertices. The simplified two-dimensional kinematics only selects discrete values of the external momenta for which such a construction is possible and this is the reason why in two dimensions the ``anomalous'' singularities appear as poles rather than branch cuts. The order of the pole and its residue can be determined using the Cutkosky rule \cite{eden}, which states that the discontinuity across the singularity associated to the above-mentioned diagram is obtained by evaluating the diagram as if it were a Feynman graph, but inserting the complete scattering amplitudes at the interaction points and replacing the internal propagators with mass-shell delta-functions $\theta(p^0)\,\delta(p^2-m^2)$. For a diagram containing $P$ propagators and $L$ loops, $P-2L$ delta-functions survive the $L$ two-dimensional integrations; since the singularity whose discontinuity is a single delta-function is a simple pole, the graph under consideration leads to a pole of order $P-2L$ in the amplitude \cite{BCDS}. Let us first consider the second-order poles.
A second order pole at $\theta=i\varphi$ occurs in the amplitude $S_{ab}(\theta)$ if one of the two diagrams in Figures 9.a and 9.b can actually be drawn, namely if \begin{equation} \eta \equiv \,\pi-u_{cd}^a-u_{de}^b \in[0,\pi)\,\,\, . \label{constraint} \end{equation} The quantity $i\eta$ is the rapidity difference between the intermediate propagating particles $A_c$ and $A_e$. From these figures, it is easy to see that \begin{equation} \varphi=u_{ad}^c+u_{db}^e-\pi\,\,\,. \end{equation} The crossing symmetry expressed by eq.\,(\ref{crossing}) obviously implies that, in addition to the double pole at $\theta=i\varphi$, an analogous pole must be present at $\theta=i(\pi-\varphi)$. Since the residues of the two poles are now equal, it is impossible to distinguish between a direct and a crossed channel, and the two poles must be treated on exactly the same footing. At the diagrammatic level, this fact is reflected in the possibility of finding a diagram satisfying eq.\,(\ref{constraint}) also for $\theta=i(\pi-\varphi)$. Hence, let us consider only one of these poles, the one located at $\theta=i\varphi$. In the vicinity of this pole, the scattering amplitude can be approximated as (see Fig.\,9.a) \begin{equation} S_{ab}(\theta) \,\simeq\, \frac{(\Gamma_{cd}^a \Gamma_{de}^b)^2S_{ce}(\eta)}{(\theta-i\varphi)^2}\,\,\,. \label{res2} \end{equation} Note that the expression of this residue, which is obtained for $\eta>0$, is also valid in the limiting situation $\eta=0$ (Fig.\,9.b), for which a residue $(\Gamma_{cd}^a \Gamma_{de}^b \Gamma_{cf}^b \Gamma_{fe}^a)$ is expected. In fact, the consistency of the theory requires \begin{equation} \Gamma_{cd}^a \Gamma_{de}^b \, =\, \Gamma_{cf}^b \Gamma_{fe}^a\,\, , \label{identity} \end{equation} an equation that is indeed satisfied by the three-point couplings of the IMMF.
Moreover, in the case $\eta=0$, the ``fermionic'' nature of the particles, expressed by the relations \begin{equation} S_{ab}(0) \,=\, \left\{ \begin{array}{cl} -1 & if\hspace{.5cm} a=b \,\,\, ; \\ 1 & if\hspace{.5cm} a\neq b\,\,\, , \end{array} \right. \end{equation} implies that the two particles $A_c$ and $A_e$ propagating with the same momentum in Fig.\,9.b cannot be of the same species. In this case, the factor $S_{ce}(\eta=0)$ in eq.(\ref{res2}) equals unity. The double pole at $\theta=i\varphi$ in $S_{ab}(\theta)$ induces a singularity at the same position in $F_{ab}^{\Phi}(\theta)$. For $\eta > 0$, this is associated with the diagram on the left hand side of Fig.\,10. Since the singularity is now determined by a single triangular loop, the form factor $F_{ab}^{\Phi}(\theta)$ has only a simple pole rather than a double pole. The residue is given by \begin{equation} \Gamma_{cd}^a \Gamma_{de}^b S_{ce}(\eta) F_{ce}^{\Phi}(-\eta)\,\,\,. \end{equation} Eq.\,(\ref{w1}) can now be used to write (see the right hand side of Fig.\,10) \begin{equation} F_{ab}^{\Phi}(\theta\simeq i\varphi) \simeq i \,\frac{\Gamma_{cd}^a \Gamma_{de}^b F_{ce}^{\Phi}(\eta)}{\theta-i\varphi}\,\,\,. \label{ffres2} \end{equation} As written, this result also holds for $\eta=0$. The poles of order $p>2$ in the scattering amplitudes and the corresponding singularities in the two-particle FF can be treated as a ``composition'' of the cases $p=1$ and $p=2$. This is, for instance, the case of a third order pole with positive residue at $\theta=i\varphi$ in $S_{ab}(\theta)$. In the $S$-matrix, a third order pole occurs if the scattering angle $\eta$ in Fig.\,9.a coincides with the resonance angle $u_{ce}^f$. The corresponding diagram is drawn in Figure 11 and in this case we have \begin{equation} S_{ab}(\theta\simeq i\varphi)\simeq i \, \frac{(\Gamma_{cd}^a \Gamma_{de}^b \Gamma_{ec}^f)^2}{(\theta-i\varphi)^3}\,\,\,.
\label{res3} \end{equation} The third-order pole at the crossing-symmetric position $\theta=i(\pi-\varphi)$ has negative residue since it corresponds to the crossed channel pole. With respect to the case $p=2$, the pole at $\theta=i\varphi$ in $F_{ab}^{\Phi}(\theta)$ becomes double (see Figure 12) \begin{equation} F_{ab}^{\Phi}(\theta\simeq i\varphi)\simeq -\, \frac{\Gamma_{cd}^a \Gamma_{de}^b \Gamma_{ec}^f}{(\theta-i\varphi)^2} F_f^{\Phi}\,\,\,, \label{ffres3} \end{equation} while the pole at $\theta=i(\pi-\varphi)$ stays simple. The above analysis suggests the validity of the following general pattern for the structure of the form-factor poles: a pole of order $2n$ at $\theta=i\varphi$ in the crossing-symmetric scattering amplitude $S_{ab}(\theta)$ will induce a pole of order $n$ both at $\theta=i\varphi$ and at $\theta=i(\pi-\varphi)$ in the two-particle form factor $F_{ab}^{\Phi}(\theta)$; vice versa, a positive residue pole of order $(2n+1)$ at $\theta = i\varphi$ in $S_{ab}(\theta)$ will induce a pole of order $(n+1)$ at $\theta = i \varphi$ and a pole of order $n$ at the crossing-symmetric position $\theta=i(\pi-\varphi)$ in $F_{ab}^{\Phi}(\theta)$. We have used this result to write the parameterization of eq.\,(\ref{dab}). Moreover, in integrable QFT, these arguments can be easily extended to the higher matrix elements $F_{a_1,\ldots,a_n}^\Phi(\theta_1,\ldots,\theta_n)$ with $n>2$, since the complete factorisation of multiparticle processes prevents the generation of new singularities. In other words, the singularity structure of the $n$-particle FF is completely determined by the product of the poles present in each two-particle sub-channel. We have used eqs.\,(\ref{pole}), (\ref{ffres2}), (\ref{ffres3}) to continue the bootstrap procedure for the two-particle FF of $\Theta(x)$ in the IMMF up to the level $A_3A_3$, and an illustration of the method through a specific example may be found in Appendix C.
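For bookkeeping purposes, the general pattern just stated can be written as a small helper function. This is only an illustrative sketch of the counting rule, not part of the bootstrap itself:

```python
def ff_pole_orders(p):
    """Given a (positive-residue) pole of order p at theta = i*phi in the
    crossing-symmetric amplitude S_ab, return the orders of the induced
    poles in the two-particle form factor F_ab at theta = i*phi and at
    the crossed point theta = i*(pi - phi), following the stated rule:
    even order 2n -> (n, n); odd order 2n+1 -> (n+1, n)."""
    n, odd = divmod(p, 2)
    return (n + 1, n) if odd else (n, n)

assert ff_pole_orders(1) == (1, 0)  # simple bound-state pole
assert ff_pole_orders(2) == (1, 1)  # double pole -> simple poles at both points
assert ff_pole_orders(3) == (2, 1)  # third-order pole -> double + simple
```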
The results obtained for the coefficients $a^k_{ab}$ (the only unknown quantities in the parameterization of eq.\,(\ref{param}) after the pole structure has been fixed) are summarized for convenience in Table 4; Table 3 contains the complete list of the one-particle matrix elements. Two important comments are in order here. The first is that in all the determinations of $F_{ab}^\Theta$ except $F_{11}^\Theta$ and $F_{12}^\Theta$, a number of equations larger than the number of unknown parameters is obtained. The fact that these overdetermined systems of equations always admit a solution\footnote{Solutions can only be found by choosing the three-point couplings $\Gamma_{ab}^c$ either all positive or all negative. Hence, this restricts the ambiguity of the three-point couplings to an overall $\pm $ sign only. We are not aware of any other explanation for this constraint on the $\Gamma_{ab}^c$.} provides a highly nontrivial check of the results of this section. The second point is that, since the pole structure has been identified, there is no obstacle, in principle, to continuing the bootstrap procedure further and achieving any desired precision in the determination of the correlation function in the ultraviolet region. Actually, we will show in the next section that the information contained in Tables 3 and 4 is more than enough for practical purposes. Nevertheless, from a purely theoretical point of view, it would obviously be desirable to have a complete solution of the recursive equations. A possible approach to this nontrivial mathematical problem is suggested in Appendix A. \resection{Comparison with Numerical Simulations} The data collected in Tables 3 and 4, together with the vacuum expectation value eq.\,(\ref{vacuum}) and the three-particle matrix element $F_{111}^\Theta(\theta_1,\theta_2,\theta_3)$ given in Appendix A, provide us with the complete large-distance expansion of the correlator $<\Theta(x)\Theta(0)>$ up to order $e^{-(m_2+m_3)|x|}$.
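To see which states enter the expansion below such a threshold, one can tabulate the one- and two-particle total masses. The sketch below assumes the standard $E_8$ mass ratios of the IMMF spectrum (quoted from the known spectrum, not listed in this section):

```python
from math import cos, pi

# E8 mass ratios m_i/m_1 of the IMMF (assumed values from the known spectrum)
m = {1: 1.0,
     2: 2 * cos(pi / 5),
     3: 2 * cos(pi / 30),
     4: 4 * cos(pi / 5) * cos(7 * pi / 30),
     5: 4 * cos(pi / 5) * cos(2 * pi / 15),
     6: 4 * cos(pi / 5) * cos(pi / 30),
     7: 8 * cos(pi / 5) ** 2 * cos(7 * pi / 30),
     8: 8 * cos(pi / 5) ** 2 * cos(2 * pi / 15)}

# Each state (a,) or (a, b) contributes a term decaying as exp(-M |x|),
# with M the total mass of the state; sort states by this threshold.
states = [((a,), m[a]) for a in m]
states += [((a, b), m[a] + m[b]) for a in m for b in m if a <= b]
states.sort(key=lambda s: s[1])
lightest = [s for s, M in states if M < m[2] + m[3]]  # below the (m2+m3) cutoff
```

With these ratios the lightest contributions come, in order, from $A_1$, $A_2$, $A_3$, $A_1A_1$, $A_4$, $A_1A_2$, $\ldots$, which is the ordering implicit in the truncated spectral series.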
A first check of the degree of convergence of the series, and hence of its practical utility, is obtained by exploiting the exact knowledge of the second and zeroth moments of the correlation function we are considering. Indeed, in a massive theory the $c$-theorem sum rule provides the relation \cite{Zamcth} \begin{equation} C = \frac{3}{4\pi}\int d^2x|x|^2<\Theta(x)\Theta(0)>_c\,\,\,, \label{cth} \end{equation} where $C$ is the central charge of the conformal theory describing the ultraviolet fixed point. For the Ising model $C=\frac{1}{2}$. In addition, if we write the singular part of the free energy per unit volume as $f_s\simeq-UM^2(h)$, a double differentiation with respect to $h$ leads to the identity \begin{equation} U = \frac{1}{\pi^2}\int d^2x<\Theta(x)\Theta(0)>_c\,\,\,. \label{bulk} \end{equation} On the other hand, the exact value of the universal amplitude $U$ is obtained by plugging eq.\,(\ref{vacuum}) into \begin{equation} U = \frac{4\pi}{M^2(h)}<0|\Theta|0> = 0.0617286.. \end{equation} The contributions to the sum rules (\ref{cth}) and (\ref{bulk}) from the first eight states in the spectral representation of the connected correlator are listed in Tables 5 and 6, together with their partial sums. The numerical data are remarkably close to their theoretical values. Notice that a very fast saturation is also observed in the case of the zeroth moment, despite the absence of any suppression of the ultraviolet singularity. Let us now directly compare the theoretical prediction of the connected correlation function $G_c(x) = <\sigma(x) \sigma(0)>_c$ with its numerical evaluation. A collection of high-precision numerical estimates of $G_c(x)$, for different values of the magnetic field $h$ and different sizes $L$ of the lattice, can be found in the reference \cite{LR}. We have decided to consider the set of data relative to $L=64$ and $h=0.075$, where the numerical values of $G(x)$ are known at $32$ lattice spacings (Table 7).
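Both sum rules involve radial moments of the long-distance kernels. For a one-particle term proportional to $K_0(mr)$ the relevant integrals are $\int_0^\infty r\,K_0(ar)\,dr = 1/a^2$ and $\int_0^\infty r^3 K_0(ar)\,dr = 4/a^4$, special cases of the standard formula $\int_0^\infty r^{\mu-1}K_0(ar)\,dr = 2^{\mu-2}\Gamma(\mu/2)^2/a^\mu$. A quick stdlib-only quadrature (a sketch; the overall form-factor normalizations are left out) confirms them:

```python
from math import cosh, exp

def k0(z, tmax=20.0, n=500):
    """Modified Bessel function K0 via its integral representation
    K0(z) = int_0^inf exp(-z cosh t) dt, by the trapezoidal rule (z > 0)."""
    h = tmax / n
    s = 0.5 * (exp(-z) + exp(-z * cosh(tmax)))
    for i in range(1, n):
        s += exp(-z * cosh(i * h))
    return s * h

def moment(mu, a, rmax=40.0, n=1000):
    """int_0^inf r^(mu-1) K0(a r) dr; the exact value is
    2^(mu-2) Gamma(mu/2)^2 / a^mu (so 1/a^2 for mu=2, 4/a^4 for mu=4)."""
    h = rmax / n
    return h * sum((i * h) ** (mu - 1) * k0(a * i * h) for i in range(1, n + 1))

# moments entering the zeroth-moment (bulk) and second-moment (c-theorem) rules
assert abs(moment(2, 1.0) - 1.0) < 0.01
assert abs(moment(4, 1.0) - 4.0) < 0.05
```

The $1/a^2$ and $4/a^4$ scalings make explicit why heavier states saturate both sums so quickly.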
Such a choice was dictated purely by the requirement of using data where the effects of numerical errors are presumed to be minimized. Errors can in fact be induced either by the finite size $L$ of the sample or by the residual fluctuations of the critical point, which may not be sufficiently suppressed for small values of the magnetic field $h$. In order to compare the numerical data with our theoretical determination, we only need to fix two quantities. The first consists in extracting the relationship between the inverse correlation length, expressed in lattice units, and the mass scale $M(h)$ entering the form factor expansion. The second quantity we need is the relative normalization of the operator $\sigma(x)$ defined on the lattice, denoted by $\sigma_{\rm lat}(x)$, with the operator $\sigma(x)$ entering our theoretical calculation in the continuum limit. Let us consider the two issues separately. The correlation length $\xi$ is easily extracted by using eq.\,(\ref{threek}) to analyse the exponential decay of the numerical data collected in Table 7. As a best fit of this quantity, we obtain \begin{equation} M(h =0.075) \,=\,\xi^{-1} \,=\,5.4(3)\,\,\,. \label{latcor} \end{equation} Let us turn our attention to the second problem. The easiest way to set the normalization of $\sigma(x)$ with respect to $\sigma_{\rm lat}(x)$ is to compare their vacuum expectation values. The lattice determination of this quantity can be found in \cite{D,LR} and, within the numerical precision, it is given by \begin{equation} <\sigma_{\rm lat}(0)> \,=\,1.000(1) \,h^{1/15} \,\,\, . \end{equation} On the other hand, the theoretical estimate of $<\sigma(0)>$ was given in eq. (\ref{vacuumev}). Hence, comparing the two results, the relative normalization is expressed by the constant ${\cal N}$ as \begin{equation} \sigma_{\rm lat}(x) \,=\,{\cal N}\,\sigma(x) \,=\,0.930(3)\, \,\sigma(x) \,\,\, .
\label{latcont} \end{equation} Once these two quantities are fixed, there are no further adjustable parameters in comparing the numerical data with the large-distance expansion of $G_c(x)$. The form factors of the field $\sigma(x)$ entering the series (\ref{spectral}) can be easily recovered from those of $\Theta(x)$ by using the relationship between these fields given by (\ref{theta}), and for the correlation function we have \begin{equation} <\sigma(x) \sigma(0)>_c \,=\,\left(\frac{4}{15 \pi h}\right)^2 \,<\Theta(x) \Theta(0)>_c\,\,\,. \end{equation} The comparison between the two determinations of $G_c(x)$ can be found in Figures 13 and 14. In Fig.\,13 we have only included the first three terms of $G_c(x)$ (those relating to the form factors of the one-particle states $A_1$, $A_2$, $A_3$). As shown in this figure, they correctly reproduce the behaviour of the correlation function over the whole infrared and crossover regions. A slight deviation of the theoretical curve from the numerical values is only observed for the first points of the ultraviolet region, where a better approximation can be obtained by including more terms in the form factor series. This is shown in Figure 14, where five more contributions (those relating to form factors up to the state $A_1 A_3$) have been added to the series. \resection{Conclusion} The basic results of this paper can be summarised as follows. The Zamolodchikov S-matrix for the IMMF has been used as the starting point to implement a bootstrap program for the FF of the magnetization operator. Although the general solution of the bootstrap recursive equations remains a challenging mathematical problem, the matrix elements yielding the main contributions to the spectral representation of the correlator $G(x)=<\sigma(x)\sigma(0)>$ have been explicitly computed.
This has enabled us to write a large-distance expansion for $G(x)$ which is characterised by a very fast rate of convergence and provides accurate theoretical predictions for comparison with data coming from high precision numerical simulations. It would be interesting to obtain analogous results for the other relevant operator of the theory, namely the energy density $\varepsilon(x)$. To this end, the only difficulty one has to face is the determination of the initial conditions for the form factor bootstrap equations appropriate for this operator. In the case of the field $\sigma(x)$, we solved this problem by exploiting its proportionality to the trace $\Theta(x)$ of the energy-momentum tensor. Notice that, due to the absence of symmetries in the space of states of the IMMF, the occurrence of the polynomials $Q_{ab}^\Phi(\theta)$ in the two-particle FF is precisely what is needed in order to distinguish between the matrix elements of $\sigma(x)$ and those of $\varepsilon(x)$. In conclusion, it must be remarked that the methods discussed in this paper can be generally used within the framework of integrable QFT. As a matter of fact, here they have been applied to a model which, owing to the absence of internal symmetries and the richness of its pole structure, can be considered an extreme case of complexity. Similar results to those contained in this paper can be obtained, for instance, for other physically interesting situations, such as the thermal deformations of the tricritical Ising and three-state Potts models. The exact S-matrices for these models were determined in refs.\,\cite{ChM,Zamo-Fateev}. \vspace{1cm} {\em Acknowledgments}. We are grateful to J.L. Cardy for useful discussions. \newpage
\section{Introduction} General relativity is a geometrical theory of gravitation that is determined by the metric of spacetime. The Brans-Dicke theory of gravity\cite{B1, B2, B3}, a scalar-tensor theory, modifies Einstein's original framework in order to incorporate Mach's principle by introducing a scalar field $\phi$. Low energy string theories with scalar fields, motivated by astrophysical observations, are very efficient in depicting the large scale structure of spacetime as well as subatomic physics; it is therefore reasonable to consider them at large scales as well. Brans-Dicke theory retains the geodesic postulate: the motion of a test mass is determined by the geodesics, with the gravitational effects of all particles embedded in the metric and the associated Levi-Civita connection. The assumption that the matter Lagrangian is independent of the scalar field $\phi$ secures the equivalence principle. The string unification of gravity with the other quantum forces makes researchers question whether the gravitational interaction is properly described by geometry alone. There are several hints that a non-Riemannian setting of spacetime may provide a refined and coherent way to describe gravitational fields\cite{B4,B5,B6,B6-a}. A first order variation of the Brans-Dicke Lagrangian implies that the spacetime connection can admit a non-vanishing torsion. The field equations obtained are equivalent, up to a shift in the coupling constant \cite{B7}. Under the influence of gravity alone, non-spinning test masses follow geodesic equations of motion in a Riemannian spacetime, and autoparallels of a connection with torsion \cite{B8} in a non-Riemannian geometry. The existence of other matter fields will reshape the geometry of spacetime, and in the presence of spinorial matter, torsion will be determined by differential equations \cite{B9,B10}.
In this article, the gradient of the scalar field determines the spacetime torsion, and a particular choice of the scalar field applied to plane-fronted gravitational waves will reduce the field equations to the vacuum Einstein equations. In particular, we focus on the interpretation in a Riemannian and in a non-Riemannian geometry. Explicitly, the Brans-Dicke parameter and the autoparallels of the connection with torsion, which differ from the geodesic equations of motion, will help us assess the coherence of the explanation. The article is organized as follows: in Section 2, we present the Brans-Dicke theory of gravity with the gravitational field equations and the equations of motion of a non-spinning test mass in a non-Riemannian setting in the language of exterior differential forms. Section 3 details the plane wave solutions in Rosen coordinates and a particular choice of the scalar field that reduces the field equations to the source-free Einstein equations. Section 4 is devoted to concluding remarks. \bigskip \section{Brans-Dicke Theory Field Equations} \noindent Let $M$ be a 4-dimensional spacetime manifold; the field equations are determined from the action $I=\int_{M} \mathcal{L},$ where the Brans-Dicke Lagrange density 4-form is \begin{equation} {\mathcal{L}} = \frac{\alpha^2}{2}R_{ab} \wedge *(e^a \wedge e^b) -\frac{c}{2} d\alpha \wedge *d\alpha. \end{equation} \noindent The $\{e^a\}$'s denote the co-frame 1-forms in terms of which the spacetime metric is given by $g=\eta_{ab}e^{a} \otimes e^b$ with $\eta_{ab}=diag(-+++)$. ${}^*$ stands for the Hodge map. The volume form is $*1 = e^0 \wedge e^1 \wedge e^2 \wedge e^3$. We write the Brans-Dicke scalar field as $\phi=\alpha^2$ for convenience.
$\{{\omega}^{a}_{\;\;b}\}$ are the connection 1-forms that satisfy the Cartan structure equations \begin{equation} de^a + \omega^{a}_{\;\;b} \wedge e^b = T^a \end{equation} with the torsion 2-forms $T^a$ and \begin{equation} d \omega^{a}_{\;\;b} + \omega^{a}_{\;\;c} \wedge \omega^{c}_{\;\;b} =R^{a}_{\;\;b} \end{equation} with the curvature 2-forms $R^{a}_{\;\;b}$ of spacetime. We vary the action with respect to the co-frame 1-forms $e^a$, the connection 1-forms $\omega^{a}_{\;\;b}$ and the scalar field $\alpha$. The corresponding coupled field equations turn out to be $(c \neq 0)$, in the non-Riemannian description: \begin{eqnarray} -\frac{\alpha^2}{2} R^{bc} \wedge *(e_a \wedge e_b \wedge e_c) &=& c\hspace{1mm}\tau_a[\alpha], \nonumber \\ T^a &=& e^a \wedge \frac{d \alpha}{\alpha} , \nonumber \\ c d*d\alpha^2 &=& 0; \label{eq:FE-NR} \end{eqnarray} where \begin{eqnarray} \tau_a[\alpha]=\frac{1}{2} \left ( \iota_a d\alpha *d\alpha + d\alpha\wedge \iota_a *d\alpha \right ) \equiv T_{ab} *e^b \end{eqnarray} are the energy-momentum 3-forms of the scalar field and $\iota_a$ denote the interior products that satisfy $\iota_a e^b = \delta^{b}_{\;\; a}$. Denoting by $\{{\hat{\omega}}^{a}_{\;\;b}\}$ the Levi-Civita connection 1-forms, fixed in a unique way by the metric tensor through the Cartan structure equations \begin{equation} de^a + {\hat{\omega}}^{a}_{\;\;b} \wedge e^b = 0, \end{equation} we obtain field equations equivalent to (\ref{eq:FE-NR}) in the Riemannian description: \begin{eqnarray} -\frac{\alpha^2}{2}{\hat{R}}^{bc} \wedge *(e_a \wedge e_b \wedge e_c) &=& \frac{(c-6)}{2} \left ( \iota_a d\alpha *d\alpha + d\alpha \wedge \iota_a *d\alpha \right ) \nonumber \\ & & +\hat{D}(\iota_a*d\alpha^2), \nonumber \\ c d*d\alpha^2 &=&0, \end{eqnarray} where $\hat{D}$ denotes the covariant exterior derivative with respect to the Levi-Civita connection.
Then the coupling constant $c$ can be identified in the classical Brans-Dicke theory by \begin{eqnarray} c=4\omega +6 \end{eqnarray} where $\omega$ denotes the conventional Brans-Dicke coupling constant. \section{Plane Wave Solutions} \noindent We consider the plane fronted gravitational wave metric in Rosen coordinates $(u,v,x,y)$ to solve the Brans-Dicke field equations\cite{B11,B12}: \begin{eqnarray} g = 2\mbox{d} u \otimes \mbox{d} v + \frac{\mbox{d} x \otimes \mbox{d} x}{f(u)^2} + \frac{\mbox{d} y \otimes \mbox{d} y}{h(u)^2}. \label{eq:PP-metrik0} \end{eqnarray} We further take a scalar field \begin{eqnarray} \alpha&=& \alpha(u). \end{eqnarray} \noindent We note that the scalar field equation of the system is again identically satisfied. Working out the expressions for the curvature, the torsion and the scalar field stress-energy-momentum tensors, the Einstein field equations to be solved reduce to the following second order differential equation: \begin{eqnarray} \frac{f_{uu}}{f} -\frac{2f_u^2}{f^2} + \frac{h_{uu}}{h} -\frac{2h_u^2}{h^2} -\frac{2\alpha_{uu}}{\alpha}+\frac{4\alpha_u^2}{\alpha^2}=\frac{c\alpha_u^2}{\alpha^2}. \label{eq:FieldEq} \end{eqnarray} \noindent For a particular choice of $\alpha$ such that \begin{eqnarray} \alpha(u)= \Big((\omega+1)(C_1u+C_2)\Big)^{\frac{1}{2(\omega+1)}}, \label{eq:spe-alpha} \end{eqnarray} where $C_1$ and $C_2$ are arbitrary integration constants, all the $\alpha$-dependent terms in (\ref{eq:FieldEq}) drop out and we recover the source-free Einstein field equation: \begin{eqnarray} \Big(\frac{f_{uu}}{f} -\frac{2f_u^2}{f^2} + \frac{h_{uu}}{h} -\frac{2h_u^2}{h^2}\Big)=0.
\end{eqnarray} If we study the articles of Brans and Dicke, we notice that the right hand side of the field equations in a Riemannian setting is composed of two terms\cite{B11-a}: one contains the Brans-Dicke parameter multiplying the scalar field stress-energy tensor and comes from the kinetic term of the scalar field in the action, while the other does not contain any parameter. In fact, it comes from the geometry of the spacetime itself, and it cannot be seen explicitly in the non-Riemannian description. We emphasize that there exist parameter values for which these two terms cancel each other out and the field equations reduce to the source-free Einstein equations, as we have shown above. Thus in certain spacetime geometries the geodesic equations of motion are not affected by the scalar field at all in the conventional Riemannian approach. There is a scalar field, but non-spinning test masses are not influenced by it. We have a conceptual problem in the conventional Riemannian interpretation, whereas in the non-Riemannian description there is none. Let us illustrate this observation by explicitly giving the geodesic equations as functions of the proper time $\tau$ and the constants of motion $p_u, p_x, p_y$ as \cite{B12} \begin{eqnarray} u(\tau)=u(0)+\frac{p_u\tau}{m} , \nonumber \\ v(\tau)=v(0)-\frac{m\tau}{2p_u}-\frac{p_x^2}{2p_u^2}\int_0^\tau f(u)^2du-\frac{p_y^2}{2p_u^2}\int_0^\tau h(u)^2du ,\nonumber\\ x(\tau)=x(0)+\frac{p_x}{p_u}\int_0^\tau f(u)^2du, \quad y(\tau)=y(0)+\frac{p_y}{p_u}\int_0^\tau h(u)^2du.
\end{eqnarray} Similarly, the autoparallel equations are given as \begin{eqnarray} u(\tau)=u(0)+\int_{0}^{\tau}\frac{d\tau}{\alpha}, \nonumber\\ v(\tau)=v(0)-\frac{1}{2}\int_{0}^{\tau}\alpha(u)^2du-\frac{p_x^2}{2m^2}\int_{0}^{\tau}f^2(u)du-\frac{p_y^2}{2m^2}\int_{0}^{\tau}h^2(u)du, \nonumber\\ x(\tau)=x(0)+\frac{p_x}{m}\int_{0}^{\tau}f^2(u)du, \quad y(\tau)=y(0)+\frac{p_y}{m}\int_{0}^{\tau}h^2(u)du, \end{eqnarray} where in the former case a test mass does not interact with the field, while in the latter it clearly does. \vspace{2mm} \noindent Taking the Brans-Dicke field equations with torsion and looking for solutions, we have found a particular gravitational plane wave solution. The metric is the source-free (vacuum) Einstein metric: the terms coming from the torsion and the stress-energy-momentum tensor cancel out, and the scalar field equation is satisfied. In the non-Riemannian description there are no issues: on the right hand side, the energy-momentum tensor is nonzero and positive definite, while on the left hand side the geometry is affected by torsion, given by the gradient of the scalar field. When we look at the autoparallel equations, they are not the geodesic equations of the source-free Einstein theory; we find different equations of motion, as expected. Had we insisted on the classical Brans-Dicke interpretation, this solution would again be a solution. But what kind of a solution? The left hand side yields the Einstein vacuum equations, while on the right hand side the improvement term and the energy-momentum tensor cancel out, and we would have interpreted the scalar field as a ``ghost''. The fundamental problem arises when we consider the geodesic hypothesis. The presence of the scalar field is assumed not to affect the geodesics in this case.
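The cancellation behind this solution can be checked numerically. The sketch below (parameter values are arbitrary; finite differences replace the exact derivatives) verifies that for $\alpha(u)$ of eq.\,(\ref{eq:spe-alpha}) with $c=4\omega+6$, the $\alpha$-dependent combination $-2\alpha_{uu}/\alpha+(4-c)\alpha_u^2/\alpha^2$ appearing in the field equation vanishes:

```python
def alpha(u, omega, C1, C2):
    """The particular scalar field profile of eq. (spe-alpha)."""
    return ((omega + 1.0) * (C1 * u + C2)) ** (1.0 / (2.0 * (omega + 1.0)))

def alpha_terms(u, omega, C1, C2, h=1e-3):
    """The alpha-dependent part of the wave field equation,
    -2 a''/a + (4 - c) (a'/a)^2 with c = 4*omega + 6, where the
    derivatives are approximated by central finite differences."""
    c = 4.0 * omega + 6.0
    a0 = alpha(u, omega, C1, C2)
    a1 = (alpha(u + h, omega, C1, C2) - alpha(u - h, omega, C1, C2)) / (2.0 * h)
    a2 = (alpha(u + h, omega, C1, C2) - 2.0 * a0
          + alpha(u - h, omega, C1, C2)) / h**2
    return -2.0 * a2 / a0 + (4.0 - c) * (a1 / a0) ** 2

# vanishes for omega in -1 < omega < 0 (so that the base stays positive here)
assert abs(alpha_terms(1.3, -0.8, 0.7, 2.0)) < 1e-5
```

A generic power-law exponent does not produce this cancellation; it is the exponent $1/2(\omega+1)$ that makes the kinetic and geometric contributions compensate each other.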
We have a ghost field, in a sense analogous to the ghost neutrino problem\cite{B13,B14}. The theory is not scale invariant, the metric is the vacuum Einstein metric, the equations of motion are the geodesics implied by the Einstein field equations, and the test masses do not see the scalar field. In the standard interpretation this is an issue, but in an interpretation with torsion it is not. This, we believe, is a strong case for the non-Riemannian interpretation of Brans-Dicke gravity. \section{Concluding Remarks} In this article, we studied a gravitational plane wave solution of the Brans-Dicke field equations in order to point out the consistency of the interpretation in the non-Riemannian setting. The spacetime geometry is relaxed to admit torsion, depending on the gradient of the scalar field. A particular choice of the scalar field $\alpha$ reduces the field equations to the source-free Einstein field equations. The clarification of the sign is of utmost importance. There are two terms that cancel out on the right hand side: one is the shifted energy-momentum tensor and the other is a term that comes from the geometry. What we explicitly want to show is the existence of such a configuration for $0 < c < 6$, i.e. for a negative $\omega$, specifically $-\frac{3}{2}<\omega<0$. This may well be a dark energy candidate. On the other hand, we have detailed that if we try to explain the theory in a Riemannian description, we come to a pathological case where a scalar field is present that cancels, in a way, the right hand side of the Einstein equations, while under the same geodesic hypothesis the scalar test mass does not interact with this scalar field. If we adopt a more consistent hypothesis, namely that scalar test masses should follow autoparallels of the connection with torsion, then the identification of the energy-momentum tensor and the equations of motion are coherent. Torsion explicitly affects, as it should, the equations of motion of the scalar test masses.
The Brans-Dicke field equations and the equivalence principle should be given in a non-Riemannian setting to be viable, modest and coherent. That is why we have discussed this in a particular plane wave example with a negative Brans-Dicke parameter $\omega$ (a ``dark'' $\omega$), which can depict some form of dark energy. \section{Acknowledgement} Y.\c{S}. is grateful to Ko\c{c} University for its hospitality and partial support. \vskip 1cm {\small
\section{INTRODUCTION} The existence and impact of magnetic fields in astrophysical events have continued to excite researchers, posing interesting issues pertaining to plasma physics. With the advent of high intensity lasers, it has been possible to make interesting observations on the dynamical evolution of the magnetic field in laboratory experiments on laser-matter interaction \cite{Stamper,Fujioka,mondal,Flacco,gaurav}. Intense lasers ionize the matter into a plasma state and dump their energy into the lighter electron species, generating a relativistic electron beam (REB) \cite{Modena,malka,joshi} in the medium. The propagation of a relativistic electron beam with current exceeding the Alfv\'en current limit, i.e. $I= (m_ec^{3}/e)\beta\gamma_b = 17{\beta\gamma_ {b}}$ kA, where $\beta=v_b/c$, $v_b$ is the velocity of the beam and $\gamma_b=(1-v_b^2/c^2)^{-1/2}$ is the relativistic Lorentz factor, is not permitted in vacuum, as the associated magnetic fields are large enough to totally curve back the trajectories of the electrons. In a plasma medium, however, such propagation is achieved because the current due to the REB is compensated by a return shielding current in the opposite direction provided by the electrons of the background plasma medium. The two currents initially overlap spatially, resulting in zero net current, so that no magnetic field is present initially. The combination of forward and reverse shielding currents is, however, susceptible to several micro-instabilities. A leading instability in the relativistic regime is the filamentation instability \cite{fried}, often also termed the Weibel instability \cite{weibel}. The filamentation/Weibel instability creates a spatial separation of the forward and reverse shielding currents. The current separation leads to the generation of a magnetic field at the expense of the kinetic energy of the beam and plasma particles.
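The Alfv\'en limit quoted above is easy to evaluate for a given beam energy; a minimal sketch using the $I = 17\,\beta\gamma_b$ kA form given in the text:

```python
from math import sqrt

def alfven_current_kA(gamma_b):
    """Alfven current limit I = 17 * beta * gamma_b kA for a relativistic
    electron beam with Lorentz factor gamma_b (formula quoted in the text).
    Note that beta * gamma_b = sqrt(gamma_b**2 - 1)."""
    beta = sqrt(1.0 - 1.0 / gamma_b**2)
    return 17.0 * beta * gamma_b

# a mildly relativistic and a strongly relativistic beam
assert abs(alfven_current_kA(2.0) - 17.0 * sqrt(3.0)) < 1e-9   # about 29.4 kA
assert 169.0 < alfven_current_kA(10.0) < 170.0
```

Currents well above these values are routinely carried in plasmas precisely because of the return-current compensation described above.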
The typical scale length at which the Weibel separation has the maximum growth rate is the electron skin depth $c/\omega_{pe}$. The Weibel separation thus leads to the formation of REB current filaments of the size of the electron skin depth and is responsible for the growth of the magnetic energy in the system. The dynamics, long term evolution and energetics associated with the Weibel instability of current filaments are of central importance in many contexts. For instance, in the fast ignition concept of fusion, the energetic REB is expected to create an ignition spark at the compressed core of the target, for which it has to traverse the lower density plasma corona \cite{tabak_05,john,taguchi,hill,honda} and dump its energy at the central dense core of the target. This requires a complete understanding of REB propagation in the plasma medium. In the astrophysical scenario, the generation of the cosmological magnetic field and relativistic collisionless shock formation in gamma ray bursts have often been attributed to the collisionless Weibel instability \cite{cosmaggen,Huntington,med1,med2,silva}. The formation of a collisionless shock and its behavior depend on the long term evolution and dynamics of the magnetic field generated through the Weibel destabilization process. The growth of the magnetic field through the Weibel destabilization process influences the propagation of the REB filaments. In this nonlinear stage, the current filaments are observed to coalesce and form larger structures. There are indications from previous studies \cite{polo} that at the early nonlinear stage of evolution there is a growth of magnetic field energy. Subsequently, however, the magnetic energy decays. The physical mechanism for the observed decay of the magnetic field at later stages is the focus of the present studies. There are suggestions that the merging process of super-Alfv\'enic current-carrying filaments leads to the decay of magnetic energy \cite{polo}.
On the other hand, the mechanism of magnetic reconnection, which rearranges the magnetic topology of the plasma, has also been invoked; it converts magnetic energy into kinetic energy of the particles. This can result in thermal particles, or the particles may even get accelerated \cite{recon1,recon2} by the reconnection process. The merger of current filaments leading to the formation of an $X$ point, where reconnection happens, is shown in the schematic diagram of Fig.~\ref{fig:schematic}. In this work, we have studied the linear and nonlinear stages of the Weibel instability in detail with the help of 2-D Particle-In-Cell (PIC) simulations. The 2-D plane of the simulation is perpendicular to the current flow direction. The initial condition is chosen as two overlapping, oppositely propagating electron currents. The development of the instability, the characteristic features during the nonlinear phase, etc., are studied in detail. The paper is organized as follows. The simulation set-up is discussed in section II. The observations corresponding to the linear phase of the instability are presented in section III. The nonlinear phase of the instability is covered in section IV. Section V provides the summary and discussion. \section{SIMULATION SET-UP} We employ the OSIRIS2.0 \cite{os1, os2} Particle-In-Cell (PIC) code to study the evolution of two counterstreaming electron current flows in a 2-D $x1-x2$ plane perpendicular to the current flow direction $\pm \hat{z}$. We have considered the ion response to be negligible and treated the ions as merely providing a stationary neutralizing background. Thus the dynamics is governed by the electron species alone. However, this would not be applicable at longer times, where the ion response may become important and introduce new features. The boundary conditions are chosen to be periodic for both the electromagnetic fields and the charged particles in all directions.
We choose the area of the simulation box $R$ as $64\times64$ $(c/\omega_{pe})^2$, corresponding to $640\times640$ cells. The time step is chosen to be $7.07\times 10^{-2}\,\omega_{pe}^{-1}$, where $\omega_{pe}=\sqrt{4\pi n_{0e}e^2/m_e}$ and $n_{0e}=n_{0b}+n_{0p}$ is the total electron density, the sum of the beam and plasma electron densities denoted by the suffixes $b$ and $p$ respectively. The number of electrons and ions per cell in the simulations is chosen to be $500$ each. Quasi-neutrality is maintained in the system by choosing equal numbers of electrons and ions. The velocities of the beam electrons and the cold plasma electrons are chosen to satisfy the current neutrality condition. The uniform plasma density $n_{0e}$ is taken as $1.1\times 10^{22}\,\mathrm{cm}^{-3}$, and the ratio of the electron beam density to the background electron density is taken as $n_{0b}/n_{0p}=1/9$ in the simulations presented here. The fields are normalized by $m_ec\omega_{pe}/e$. The evolution of the field energy, normalized by $m_{e}c^2n_{0e}$, is averaged over the simulation box. We have carried out simulations for cold as well as finite-temperature beams, with several choices of beam temperature. \section{LINEAR STAGE OF INSTABILITY} The charge neutrality and current balance conditions chosen initially ensure that there are no electric and magnetic fields associated with the system and that the equilibrium conditions are satisfied. With time, we observe the development of magnetic field structures of the typical size of the electron skin depth. This can be seen in Fig.~\ref{fig:mag_theta}, where the contours of the transverse magnetic field are shown in the 2-D $x1-x2$ plane. The development of this magnetic field can be understood as arising from the spatial separation of the forward and return currents through the Weibel destabilization process.
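For concreteness, the numbers implied by this set-up can be checked with a short script (a sketch; the Gaussian-unit constants and the interpretation of $\omega_{p}$ in the growth-rate formula as the background-plasma frequency are our assumptions, not statements from the text):

```python
import math

# Physical constants in Gaussian units (assumed standard values)
e_esu, m_e_g, c_cms = 4.8032e-10, 9.1094e-28, 2.9979e10

n0e = 1.1e22                       # total electron density, cm^-3
omega_pe = math.sqrt(4.0 * math.pi * n0e * e_esu**2 / m_e_g)  # rad/s
skin_depth_cm = c_cms / omega_pe   # box is 64 x 64 of these

# Current neutrality n0b*v0b = n0p*v0p with n0b/n0p = 1/9 and v0b = 0.9c:
v0b, ratio_bp = 0.9, 1.0 / 9.0
v0p = ratio_bp * v0b               # return-current drift, in units of c

# Cold-beam filamentation growth rate (units of omega_pe), taking
# omega_p as the background-plasma frequency, (omega_p/omega_pe)^2 = 0.9:
gamma0b = 1.0 / math.sqrt(1.0 - v0b**2)
delta_max = v0b * math.sqrt(ratio_bp / gamma0b) * math.sqrt(0.9)

print(f"{omega_pe:.3e}", f"{skin_depth_cm:.2e}", round(v0p, 3),
      round(delta_max, 4))
```

The resulting growth rate $\simeq 0.1879\,\omega_{pe}$ reproduces the analytical value quoted in the growth-rate comparison of this section.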
The growth of the transverse magnetic field energy with time is shown in Fig.~\ref{fig:l_growth_Rate} for two cases with the following parameters: (I) $v_{0b}=0.9c$ and (II) $v_{0b}=0.9c$, $T_{0b}=1\,\mathrm{keV}$, $T_{0p}=0.1\,\mathrm{keV}$. The initial linear phase of growth is depicted by the straight-line region of the log-linear plot. A comparison with the analytically estimated maximum growth rate for the two cases is provided by the dashed lines drawn alongside. For case (I), with $v_{0b}=0.9c$, the growth rate obtained from the simulation by measuring half of the slope of the magnetic energy evolution in Fig.~\ref{fig:l_growth_Rate} is $0.18$, and it compares well with the analytical value of $0.1879$ ($ \delta^{cold}_{max}\sim \left(\frac{v_{0b}}{c}\right)\sqrt{\frac{n_{0b}/n_{0p}}{\gamma_{0b}}}\omega_{p}$ \cite{Godfrey}). Similarly, for case (II), when the beam temperature is finite, the growth rate of $0.023$ from the simulation agrees well with the analytical estimate of $0.025$ obtained from kinetic calculations for these parameters ($ \delta^{hot}_{max}\sim \frac{2\sqrt{6}}{9\sqrt{\pi}}\frac{\left[\omega^2_{b}v^2_{0b}m_{e}/T_{b}+\omega^2_{b}(\gamma_{0b}-1/\gamma^3_{0b})\right]^{3/2}}{\omega^2_{p}(v^2_{0p}+T_{p}/m_{e})c}(T_{p}/m_{e})^{3/2}$ \cite{bao}). It should be noted that this is consistent with the well-known characteristic feature of the reduction of the Weibel growth rate with increasing beam temperature. After the linear phase of growth, it can be observed from Fig.~\ref{fig:l_growth_Rate} that when the normalized magnetic energy becomes of the order of unity, the increase in magnetic energy slows down considerably. This reflects the onset of the nonlinear regime, which we discuss in detail in the next section. \section{NONLINEAR STAGE OF INSTABILITY} When the Weibel-separated magnetic fields acquire a significant magnitude, they start influencing the dynamics of the beam and plasma particles. This back-reaction signifies the onset of the nonlinear regime.
The plot of the magnetic energy growth in Fig.~\ref{fig:l_growth_Rate} clearly shows that at around $\omega_{pe}t \sim 50$ (for the cold beam-plasma system) the system enters the nonlinear phase. The characteristic behaviour in the nonlinear regime is described in the subsections below. \subsection{Current filaments} In the nonlinear stage of the Weibel instability, the current filaments flowing in the same direction merge with each other with time and organize into bigger filaments. During the initial nonlinear stage the magnetic field energy keeps growing, albeit at a rate which is much slower than the linear growth rate, and then saturates (Fig.~\ref{fig:nonl_growth_Rate}). Subsequently, the magnetic field energy decreases, as can be observed from the plot of Fig.~\ref{fig:nonl_growth_Rate}. A rough estimate of the saturated magnetic field can be made by the following simple consideration. The spatial profile of the magnetic field in the current filament is mimicked as a sinusoidal function, with $k$ representing the inverse of the filament size. The amplitude of the magnetic field is $B_0$, which in the nonlinear regime significantly deflects the trajectories of the electrons. The transverse motion of an electron in the plane of the magnetic field is then given by \begin{equation} \frac{d^2r}{dt^2}=\frac{ev_{z}}{m_{e}c\gamma_{0b}}B_{0}\sin(kr) \end{equation} The bounce frequency of a magnetically trapped electron is thus \begin{equation} \omega^2_{m}=\frac{ev_{0b}kB_{0}}{m_{e}c\gamma_{0b}} \end{equation} Saturation occurs when the typical bounce frequency becomes equal to the maximum linear growth rate of the instability. Therefore, the saturated magnetic field can be estimated by equating the linear growth rate of the filamentation instability to the bounce frequency ($\omega_{m}=\delta_{m}$).
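Evaluating this saturation criterion numerically for the cold-beam parameters of Sec.~III gives the expected field amplitude (a sketch in normalized units; skin-depth-scale filaments with $kc/\omega_{pe}=1$ and $\omega_{p}^2/\omega_{pe}^2=n_{0p}/n_{0e}=0.9$ are our assumptions):

```python
import math

v0b, ratio_bp = 0.9, 1.0 / 9.0        # v0b/c and n0b/n0p
gamma0b = 1.0 / math.sqrt(1.0 - v0b**2)
k_norm = 1.0                          # k*c/omega_pe (skin-depth filaments)

# Maximum linear growth rate delta_m, in units of omega_pe:
delta_m = v0b * math.sqrt(ratio_bp / gamma0b) * math.sqrt(0.9)

# Setting omega_m = delta_m, with omega_m^2 = v0b*k*B0/gamma0b in these
# normalized units, gives the saturated field in units of m_e*c*omega_pe/e:
B0 = gamma0b * delta_m**2 / (v0b * k_norm)

print(round(delta_m, 4), round(B0, 2))
```

The resulting $B_0 \simeq 0.09$ is close to the observed saturated value of $0.1$ quoted in the next subsection.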
In the case of a mono-energetic distribution function, the saturated magnetic field is \begin{equation} B_{sat}\sim \left(\frac{ m_{e}v_{0b}n_{0b}}{ekcn_{0p}}\right)\omega_{p}^2 \label{B_sat} \end{equation} The estimate provided by Eq.~(\ref{B_sat}) compares well with the observed saturated value of the magnetic field, which is equal to $0.1$. \subsection{Alfv\'en limited filaments} The process of merging of like current filaments can be seen from the plots of the temporal evolution of the current densities shown in Fig.~\ref{fig:evolution}. The current in a filament is essentially due to the beam electrons, as illustrated in Fig.~\ref{fig:evolution}. The current in a filament, however, should not exceed the Alfv\'en limit. The value of the Alfv\'en limit for our simulations is $35\,\mathrm{kA}$. In Fig.~\ref{fig:alfven_limit}, we show the evolution of the beam current and the number of filaments in the simulation box. The number of filaments keeps dropping slowly, while the average beam current in a filament keeps increasing. Since the beam particles convert part of their kinetic energy into magnetic field energy and slow down, the Alfv\'en current limit drops with time and finally saturates. The magnetic field keeps increasing as long as the average beam current remains below the Alfv\'en current limit; at a particular time, the average beam current crosses the Alfv\'en current limit. This is also the time after which there is no further increase in the magnetic energy of the system. After the saturation of the instability, the magnetic field energy ($|B_{\perp}|^2/(n_0m_ec^2)$, magnified $100\times$ for better visibility) starts decaying, as shown in Fig.~\ref{fig:nonl_growth_Rate}. This decay of the magnetic field energy can be understood on the basis of the magnetic reconnection phenomenon. \subsection{Electron jet formation} At later times, when only a few filaments are left in the system, we can track each of the filaments individually.
We choose two such filaments which are about to converge, and observe their behaviour as they coalesce with each other in Fig.~\ref{fig:jet}. The figures show the formation of electron jets in the plane as the two structures merge with each other. The structures essentially follow EMHD \cite{king,stenzel,das} dynamics. Thus, as the filaments come near each other, they carry the generalized vorticity $\vec{B} - \nabla^2 \vec{B} $ with them. The filament scales being longer than the electron skin depth, the magnetic field is essentially carried by the electron flow in the plane. However, when the filaments hit each other, the magnetic fields get compressed against each other and scale sizes smaller than the electron skin depth are formed. Collisionless, electron-inertia-driven reconnection then takes place, generating energetic electron jets in the direction orthogonal to that along which the filaments approach each other. There is a change in the magnetic field topology, and accelerated electrons are observed. This is responsible for the reduction in magnetic field energy. In fact, each of the sudden drops in magnetic field energy can be timed with such a reconnection event in the simulation. The increase in the perpendicular energy of the electrons, $W_{\perp}$, is shown in Fig.~\ref{fig:nonl_growth_Rate}. It is interesting to note that nature uses ingenious and rapid techniques to relax a highly asymmetric system such as this one, with inter-penetrating electron current flows along $\pm \hat{z}$. The system being collisionless, it would not otherwise have been possible to convert $W_{\parallel}$ (the kinetic energy parallel to the $\hat{z}$ axis) into $W_{\perp}$. The magnetic field plays an intermediary role to aid this process and symmetrize the system. \section{CONCLUSIONS} In the present article, we have studied theoretically as well as numerically (by PIC simulation) the Weibel instability in a beam-plasma system.
We have shown that the analytical growth rates of the linear stage of the Weibel instability are in good agreement with the PIC simulation results. In the nonlinear stage, the current filaments whose flows are in the same direction merge with each other. There is a rearrangement of the magnetic field lines through the reconnection process, leading to energetic electron jets in the plane. This has been clearly observed in the simulation. Ultimately, the parallel flow energy gets converted into perpendicular electron energy. In essence, the excitation of the instability and its nonlinear evolution rapidly relax the configuration towards an isotropic one in a collisionless system. \bibliographystyle{unsrt}
\section{Introduction} \label{intro} Let $G$ be a reductive algebraic group over the algebraically closed field $k$ of characteristic $p\neq 2$. Let $\theta$ be an involutive automorphism of $G$ and let $d\theta:{\mathfrak g}\longrightarrow{\mathfrak g}$ be the corresponding linear involution of ${\mathfrak g}=\mathop{\rm Lie}\nolimits(G)$. There is a direct sum decomposition ${\mathfrak g}={\mathfrak k}\oplus{\mathfrak p}$, where ${\mathfrak k}=\{ x\in{\mathfrak g}|d\theta(x)=x\}$ and ${\mathfrak p}=\{ x\in{\mathfrak g}|d\theta(x)=-x\}$. Let $G^\theta=\{ g\in G|\theta(g)=g\}$ and let $K$ be the connected component of $G^\theta$ containing the identity element. $K$ is reductive and normalises ${\mathfrak p}$, and ${\mathfrak k}=\mathop{\rm Lie}\nolimits(K)$. The idea of the representation $G^\theta\rightarrow\mathop{\rm GL}\nolimits({\mathfrak p})$ as a `generalized version' of the adjoint representation goes back at least as far as Cartan, but achieved a certain maturity in the well-known work \cite{kostrall}. There Kostant and Rallis show that the action of $G^\theta$ on ${\mathfrak p}$ exhibits similar properties to the adjoint action of $G$ on ${\mathfrak g}$. In the set-up of \cite{kostrall}, ${\mathfrak g}$ is a complex reductive Lie algebra, $G$ is the adjoint group of ${\mathfrak g}$ and $\theta$ is an involution of ${\mathfrak g}$ defined over a real form ${\mathfrak g}_{\mathbb R}$. Many of the arguments in \cite{kostrall} use compactness properties and $\mathfrak{sl}(2)$-triples. These arguments are not valid in positive characteristic. On the other hand, Kostant-Rallis' results are generally assumed to be true over arbitrary (algebraically closed) fields of characteristic zero. More recent work by Vust \cite{vust} and Richardson \cite{rich2} considers an analogous `symmetric space' decomposition in a reductive algebraic group $G$.
The object corresponding to ${\mathfrak p}$ is the closed set $P=\{ g\theta(g^{-1})\,|\,g\in G\}$: $G$ acts on $P$ by the {\it twisted} action $x*(g\theta(g^{-1}))=xg\theta(g^{-1})\theta(x^{-1})$. If $x\in K$, this action is just ordinary conjugation. (It was proved by Richardson that the twisted action induces a $G$-equivariant isomorphism of varieties $\sigma:G/G^\theta\rightarrow P$, where $G/G^\theta$ is the space of left cosets of $G$ modulo $G^\theta$.) This paper will extend the analysis in the first two chapters of \cite{kostrall} to the case where $p$ is a good prime. Our exposition proceeds along similar lines to \cite{kostrall}. The main obstacles to be overcome are the construction of a $d\theta$-equivariant trace form on ${\mathfrak g}$ (Sect. \ref{sec:3}) and the replacement of the language of $\mathfrak{sl}(2)$-triples with that of associated cocharacters (Sect. \ref{sec:5}). These adjustments allow us to generalise all of the relevant parts of \cite{kostrall}. In addition: in Sect. \ref{sec:5.5} and Sect. \ref{sec:6.3} we give a new proof of the number of irreducible components of the variety ${\cal N}$ of nilpotent elements of ${\mathfrak p}$ (following Sekiguchi \cite{sek} in characteristic zero); we show in Sect. \ref{sec:6.4} that a conjecture of Richardson concerning the quotient morphism $\pi:P\rightarrow P\ensuremath{/ \hspace{-1.2mm}/} K$ is false; finally, we apply a theorem of Skryabin to describe the ring $k[{\mathfrak p}]^{K_i}$, where $K_i$ is the $i$-th Frobenius kernel of $K$. A torus $A$ in $G$ is {\it $\theta$-split} if $\theta(a)=a^{-1}$ for all $a\in A$. It was proved by Vust that the maximal $\theta$-split tori are $K$-conjugate. Let ${\mathfrak a}$ be a toral algebra contained in ${\mathfrak p}$. If ${\mathfrak a}$ is maximal such, then by abuse of terminology we say that ${\mathfrak a}$ is a {\it maximal torus} of ${\mathfrak p}$. \begin{lemma} Let ${\mathfrak a}$ be a maximal torus of ${\mathfrak p}$.
Then ${\mathfrak z}_{\mathfrak g}({\mathfrak a})\cap{\mathfrak p}={\mathfrak a}$, and there exists a unique maximal $\theta$-split torus $A$ of $G$ such that $\mathop{\rm Lie}\nolimits(A)={\mathfrak a}$. \end{lemma} We reintroduce Kostant and Rallis' definition of a Cartan subspace, and check that it is valid in positive characteristic. We provide a short proof of the following result from \cite{kostrall}. \begin{theorem} Any two Cartan subspaces of ${\mathfrak p}$ are conjugate by an element of $K$. The Cartan subspaces of ${\mathfrak p}$ are just the maximal tori of ${\mathfrak p}$. An element of ${\mathfrak p}$ is semisimple if and only if it is contained in a Cartan subspace. \end{theorem} The only assumption required for the above is (A) that $p$ is good for $G$. We make the further assumptions from Sect. \ref{sec:3} onwards: (B) the derived subgroup of $G$ is simply-connected, and (C) there exists a non-degenerate $G$-equivariant symmetric bilinear form $\kappa:{\mathfrak g}\times{\mathfrak g}\longrightarrow k$. The hypotheses (A),(B), and (C) are sometimes known as the standard hypotheses. In order to make maximum use of the assumption (C), we would like the form $\kappa$ to be $d\theta$-equivariant. This is straightforward in characteristic zero, but requires a more subtle argument if the characteristic is positive. In order to construct the required $\kappa$, we develop a $\theta$-stable version of a reduction theorem of Gordon and Premet. We then use this reduction theorem to prove our desired result. \begin{lemma} The trace form $\kappa$ in (C) may be chosen to be $d\theta$-equivariant. \end{lemma} The $d\theta$-equivariance of $\kappa$ allows us to proceed as in \cite{kostrall} in Sect. \ref{sec:4}. \begin{lemma}\label{centdimintro} Let $x\in{\mathfrak p}$. Then $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(x)-\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak p}(x)=\mathop{\rm dim}\nolimits{\mathfrak k}-\mathop{\rm dim}\nolimits{\mathfrak p}$. 
\end{lemma} With Lemma \ref{centdimintro} we can define regularity: an element $x\in{\mathfrak p}$ is {\it regular} if $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(x)\leq\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(y)$ for all $y\in{\mathfrak p}$. \begin{lemma} Let $x\in{\mathfrak p}$. The following are equivalent: (i) $x$ is regular, (ii) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(x)=\mathop{\rm dim}\nolimits{\mathfrak g}^A$, (iii) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(x)=\mathop{\rm dim}\nolimits{\mathfrak k}^A$, (iv) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak p}(x)=\mathop{\rm dim}\nolimits A$. \end{lemma} Recall that, for a rational representation $\rho:H\longrightarrow\mathop{\rm GL}\nolimits(V)$, an element $v\in V$ is {\it $H$-unstable} if $0$ is in the closure of $\rho(H)(v)$. \begin{lemma} Let $x\in{\mathfrak p}$. Then $x$ is $K$-unstable if and only if $x$ is nilpotent. \end{lemma} It follows fairly quickly that: \begin{lemma} Let $x\in{\mathfrak p}$, and let $x=x_s+x_n$ be the Jordan-Chevalley decomposition of $x$. The unique closed (resp. minimal) $K$-orbit in the closure of $\mathop{\rm Ad}\nolimits K(x)$ is $\mathop{\rm Ad}\nolimits K(x_s)$. \end{lemma} It is well-known from Mumford's Geometric Invariant Theory that the closed $K$-orbits in ${\mathfrak p}$ are in one-to-one correspondence with the $k$-rational points of the quotient ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K = \mathop{\rm Spec}\nolimits(k[{\mathfrak p}]^K)$. We have a Chevalley Restriction Theorem for ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$. The proof follows Richardson's proof of the corresponding result for the action of $K$ on $P=\{ g^{-1}\theta(g)\,|\,g\in G\}$. \begin{theorem}\label{Chevintro} Let $A$ be a maximal $\theta$-split torus of $G$, and let $W=N_G(A)/Z_G(A)$. Let ${\mathfrak a}=\mathop{\rm Lie}\nolimits(A)$. 
Then the natural embedding $j:{\mathfrak a}\rightarrow{\mathfrak p}$ induces an isomorphism of affine varieties $j':{\mathfrak a}/W\rightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$. Hence $k[{\mathfrak p}]^K\cong k[{\mathfrak a}]^W$. \end{theorem} If ${\mathfrak g}$ is a complex reductive Lie algebra with adjoint group $G$, then by a well-known classical result $k[{\mathfrak g}]^G$ is a polynomial ring in $(\mathop{\rm rk}\nolimits{\mathfrak g})$ indeterminates. Here a straightforward application of Demazure's theorem on Weyl group invariants gives the analogous result: \begin{lemma} There are $r=\mathop{\rm dim}\nolimits A$ algebraically independent homogeneous polynomials $f_1,f_2,\ldots,f_r$ such that $k[{\mathfrak a}]^W=k[f_1,f_2,\ldots,f_r]$. Moreover, $$\sum_{w\in W} t^{l(w)}=\prod_{i=1}^r{\frac{1-t^{\deg f_i}}{1-t}}$$ where $l$ is the length function on $W$ corresponding to a basis of simple roots in $\Phi_A$. \end{lemma} In Sect. \ref{sec:5} we consider in more detail the set of nilpotent elements of ${\mathfrak p}$, denoted ${\cal N}$. In general ${\cal N}$ is not irreducible (and therefore not normal as 0 is in every irreducible component). However, it is straightforward to prove (following \cite{kostrall}): \begin{theorem} Let ${\cal N}_1,{\cal N}_2,\ldots,{\cal N}_m$ be the irreducible components of ${\cal N}$. The number of $K$-orbits in ${\cal N}$ is finite. Each irreducible component ${\cal N}_i$ is normalized by $K$, contains a unique open $K$-orbit, and is of codimension $r=\mathop{\rm rk}\nolimits A$ in ${\mathfrak p}$ (where $A$ is a maximal $\theta$-split torus). An element of ${\cal N}_i$ is in the open $K$-orbit if and only if it is regular. \end{theorem} Let $K^*=\{ g\in G\,|\,g^{-1}\theta(g)\in Z(G)\}$. In \cite{kostrall} it was proved that the irreducible components of ${\cal N}$ are permuted transitively by $K^*$. 
For the proof, Kostant and Rallis showed that any regular nilpotent element of ${\mathfrak p}$ can be embedded as the nilpositive element in a principal normal $\mathfrak{sl}(2)$-triple, and that any two principal normal $\mathfrak{sl}(2)$-triples are conjugate by an element of $K^*$. (A principal normal $\mathfrak{sl}(2)$-triple $\{ h,e,f\}$ is one such that $e,f\in{\mathfrak p}$ are regular and $h\in{\mathfrak k}$.) Clearly, this argument cannot be applied if the characteristic is small. To prove it in our case we replace the language of $\mathfrak{sl}(2)$-triples with that of (Pommerening's) associated cocharacters. A reinterpretation of Kawanaka's theorem \cite{kawanaka} on nilpotent orbits in graded semisimple Lie algebras gives the following: \begin{corollary}\label{cocharintro} Let $e\in{\cal N}$. Then there exists a cocharacter $\lambda:k^\times\longrightarrow K$ which is associated to $e$. Any two such cocharacters are conjugate by an element of $Z_K(e)^\circ$. \end{corollary} The key step in proving that the set of regular nilpotent elements is a single $K^*$-conjugacy class is the following lemma. For the proof, we reduce by a number of tricks to the case where $G$ is almost simple, $e$ is semiregular in ${\mathfrak g}$, and $\theta=\mathop{\rm Ad}\nolimits\lambda(t_0)$, where $\lambda$ is an associated cocharacter for $e$ and $t_0=\sqrt{-1}$. It is then fairly straightforward to prove the Lemma case-by-case (see Sect. \ref{sec:5.4}). \begin{lemma} Let $e\in{\cal N}$ and let $\lambda:k^\times\longrightarrow K$ be associated to $e$. There exists $g\in G$ such that $(\mathop{\rm Int}\nolimits g)\circ\lambda$ is $\theta$-split. Equivalently $\mathop{\rm Int}\nolimits n\circ\lambda=-\lambda$, where $n=g^{-1}\theta(g)$. \end{lemma} As a consequence, we have: \begin{corollary}\label{omegaintro} Let $A$ be a maximal $\theta$-split torus of $G$ and let $\Pi$ be a basis for $\Phi_A=\Phi(G,A)$. 
Then $e$ is regular in ${\mathfrak p}$ if and only if $\lambda$ is $G$-conjugate to the cocharacter $\omega:k^\times\longrightarrow A\cap G^{(1)}$ satisfying $\langle\alpha,\omega\rangle=2$ for all $\alpha\in\Pi$. \end{corollary} The above Corollary shows that any regular nilpotent element of ${\mathfrak p}$ is even (see Rk. \ref{eiseven} for details). It is now a fairly straightforward task to deduce that: \begin{theorem}\label{denseorbitintro} The regular elements ${\cal N}_{reg}\subset{\cal N}$ are a single $K^*$-orbit. Hence $K^*$ permutes the components of ${\cal N}$ transitively. \end{theorem} It is easy to give examples such that ${\cal N}$ is not irreducible. In \cite{sek}, Sekiguchi classified (over $k={\mathbb C}$) the involutions for which ${\cal N}$ is not irreducible. Our analysis of associated cocharacters, combined with the classification of involutions (see for example \cite{springer}), simplifies the task of extending Sekiguchi's results to positive characteristic. We begin with the following observation. \begin{theorem}\label{gthetaorbsintro} Let $e,\lambda$ be as above and let $C=Z_G(\lambda)\cap Z_G(e)$ (the reductive part of $Z_G(e)$, see \cite[Thm. 2.3]{premnil}). Let $Z=Z(G)$ and $P=\{ g^{-1}\theta(g)\,|\, g\in G\}$. Denote by $\tau:G\longrightarrow P$ the morphism $g\mapsto g^{-1}\theta(g)$, and by $\Gamma$ the set of $G^\theta$-orbits in ${\cal N}_{reg}$. (a) The map from $K^*$ to $\Gamma$ given by $g\mapsto gG^\theta\cdot e$ is surjective and induces a one-to-one correspondence $K^*/G^\theta C\longrightarrow\Gamma$. (b) The morphism $\tau$ induces an isomorphism $K^*/G^\theta C\longrightarrow (Z\cap A)/{\tau(C)}$. Since $Z\subseteq C$, there is a surjective map $(Z\cap A)/{\tau(Z)}\longrightarrow (Z\cap A)/{\tau(C)}$. (c) The embedding $F^*\hookrightarrow K^*$ induces a surjective map $F^*/{F(Z\cap A)}\rightarrow\Gamma$. 
(d) The map $F^*\rightarrow Z\cap A$, $a\mapsto a^2$ induces an isomorphism $F^*/{F(Z\cap A)}$ $\longrightarrow Z\cap A/{(Z\cap A)^2}$. \end{theorem} Thm. \ref{gthetaorbsintro} holds for an arbitrary reductive group $G$. If $G$ is semisimple and simply-connected, then $G^\theta=K$ by a result of Steinberg, hence the $G^\theta$-orbits in ${\cal N}_{reg}$ are in one-to-one correspondence with the irreducible components of ${\cal N}$. We can use this observation together with Thm. \ref{gthetaorbsintro} to describe the number of irreducible components of ${\cal N}$ for any involution of an almost simple group. An involution $\theta$ of $G$ is of {\it maximal rank} if the maximal $\theta$-split torus $A$ is a maximal torus of $G$. If $G$ is almost simple and $\theta$ is of maximal rank, then $(Z\cap A)/{\tau(C)}=Z/Z^2$. For example, Thm. \ref{gthetaorbsintro} implies immediately that the variety of $n\times n$ symmetric nilpotent matrices has two irreducible components if $n$ is even, and is irreducible if $n$ is odd. (See Sect. \ref{sec:5.5} and Sect. \ref{sec:6.3} for further details.) In Sect. \ref{sec:6} we generalise Kostant-Rallis' construction of a reductive subalgebra ${\mathfrak g}^*\subset{\mathfrak g}$ such that ${\mathfrak a}$ is a Cartan subalgebra of ${\mathfrak g}^*$. \begin{theorem}\label{subalgintro} Let $\omega$ be as in Cor. \ref{omegaintro} and let $E\in{\mathfrak g}(2;\omega)$ be such that $[{\mathfrak g}^\omega,E]={\mathfrak g}(2;\omega)$. Let ${\mathfrak g}^*$ be the Lie subalgebra of ${\mathfrak g}$ generated by $E,d\theta(E)$ and ${\mathfrak a}$. (a) ${\mathfrak a}$ is a Cartan subalgebra of ${\mathfrak g}^*$. There exists a reductive group $G^*$ satisfying the standard hypotheses (A)-(C), such that $\mathop{\rm Lie}\nolimits(G^*)={\mathfrak g}^*$. (b) There is an involutive automorphism $\theta^*$ of $G^*$ such that $d\theta^*=d\theta|_{{\mathfrak g}^*}$. 
\end{theorem} In \cite{kostrall}, it was proved that each fibre of the quotient morphism $\pi:{\mathfrak p}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ has a dense open (regular) $K^*$-orbit. Let $K^*$ act on $P$ by conjugation (this is valid by \cite[8.2]{rich2}). In \cite{rich2}, Richardson conjectured that there is a dense open $K^*$-orbit on each fibre of the quotient morphism $\pi_P:P\longrightarrow P\ensuremath{/ \hspace{-1.2mm}/} K=P\ensuremath{/ \hspace{-1.2mm}/} {K^*}\cong A/ W_A$ (see \cite[8.3-4]{rich2}). \begin{theorem} (a) There is a dense open $K^*$-orbit in each fibre of $\pi_{\mathfrak p}$. (b) The corresponding statement for $\pi_P$ is false. \end{theorem} We draw some further conclusions from Thm. \ref{subalgintro}. Let ${\mathfrak k}^*={\mathfrak g}^*\cap{\mathfrak k}$, ${\mathfrak p}^*={\mathfrak g}^*\cap{\mathfrak p}$. By Thm. \ref{Chevintro} and the Chevalley Restriction Theorem, $k[{\mathfrak p}]^K\cong k[{\mathfrak a}]^{W_A}\cong k[{\mathfrak g}^*]^{G^*}$. \begin{lemma} If two elements of ${\mathfrak g}^*$ are $G^*$-conjugate, then they are $G$-conjugate. \end{lemma} This allows us to establish the following equivalence: \begin{lemma} Let $x\in{\mathfrak p}^*$. The following are equivalent: (i) $x$ is a regular element of ${\mathfrak p}$, (ii) $x$ is a regular element of ${\mathfrak g}^*$, (iii) ${\mathfrak z}_{{\mathfrak k}^*}(x)=0$, (iv) $\mathop{\rm dim}\nolimits{\mathfrak z}_{{\mathfrak p}^*}(x)=r=\mathop{\rm dim}\nolimits{\mathfrak a}$. \end{lemma} Let $e\in{\mathfrak p}^*$ be a regular nilpotent element. By Cor. \ref{cocharintro} there exists a cocharacter $\lambda:k^\times\longrightarrow (G^*)^{\theta^*}$ which is associated to $e$. Hence we can choose a $\lambda$-graded subspace ${\mathfrak v}$ of ${\mathfrak p}^*$ such that $[e,{\mathfrak g}^*]\oplus{\mathfrak v}={\mathfrak g}^*$. Then we also have $[e,{\mathfrak k}]\oplus{\mathfrak v}={\mathfrak p}$. 
It is known (\cite{veldkamp,premtang}) that every element of $e+{\mathfrak v}$ is regular in ${\mathfrak g}^*$, that the embedding $e+{\mathfrak v}\hookrightarrow{\mathfrak g}^*$ induces an isomorphism $e+{\mathfrak v}\longrightarrow{\mathfrak g}^*\ensuremath{/ \hspace{-1.2mm}/} G^*$, and that each regular orbit in ${\mathfrak g}^*$ intersects $e+{\mathfrak v}$ in precisely one point. \begin{lemma} Let $j$ be the composite of the isomorphisms $k[{\mathfrak p}]^K\rightarrow k[{\mathfrak a}]^{W}\rightarrow k[{\mathfrak g}^*]^{G^*}$ and let $f\in k[{\mathfrak p}]^K,g\in k[{\mathfrak g}^*]^{G^*}$. Then $j(f)=g$ if and only if $f|_{e+{\mathfrak v}}=g|_{e+{\mathfrak v}}$. Hence the embedding $e+{\mathfrak v}\hookrightarrow {\mathfrak p}$ induces an isomorphism $e+{\mathfrak v}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$, and each regular $K^*$-orbit in ${\mathfrak p}$ intersects $e+{\mathfrak v}$ in exactly one point. \end{lemma} In particular we have: \begin{corollary} Let $k[{\mathfrak p}]^K=k[u_1,u_2,\ldots,u_r]$. The differentials $(du_i)_x$ are linearly independent for any regular $x\in{\mathfrak p}$. \end{corollary} The above observation allows us to apply Skryabin's theorem on infinitesimal invariants to show that: \begin{theorem} Let $k[{\mathfrak p}]^{(p^i)}$ denote the ring of all $p^i$-th powers of elements of $k[{\mathfrak p}]$ and let $K_i$ denote the $i$-th Frobenius kernel of $K$. (a) $k[{\mathfrak p}]^{K_i}=k[{\mathfrak p}]^{(p^i)}[u_1,u_2,\ldots,u_r]$, and is free of rank $p^{ir}$ over $k[{\mathfrak p}]^{(p^i)}$. (b) $k[{\mathfrak p}]^{K_i}$ is a locally complete intersection. \end{theorem} {\it Notation.} The connected component of an algebraic group $G$ (containing the identity element) will be denoted $G^\circ$. If $\theta$ is an automorphism of $G$, then we denote by $G^\theta$ the isotropy subgroup $\{ g\in G\,|\,\theta(g)=g\}$. 
We use similar notation for the fixed points of an algebra or Lie algebra with respect to an automorphism or group of automorphisms. If $x\in G$, then $Z_G(x)$ (resp. ${\mathfrak g}^x$) will denote the centralizer of $x$ in $G$ (resp. in ${\mathfrak g}$). Similar notation will be used, where appropriate, for the centralizers in $K,{\mathfrak k},{\mathfrak p}$, etc. We write $x=x_s x_u$ (resp. $x=x_s+x_n$) for the Jordan-Chevalley decomposition of $x\in G$ (resp. $x\in{\mathfrak g}$), where $x_s$ is the semisimple part and $x_u$ is the unipotent part (resp. $x_n$ is the nilpotent part) of $x$. Throughout the paper we write $g\cdot x$ (resp. $g\cdot\lambda$) for $\mathop{\rm Ad}\nolimits g(x)$ (resp. $\mathop{\rm Ad}\nolimits g\circ\lambda$), where $g\in G$ and $x\in{\mathfrak g}$ (resp. $\lambda$ is a cocharacter in $G$).\linebreak[2] {\it Acknowledgement.} I would like to thank Alexander Premet for his advice and encouragement. I also thank Dmitri Panyushev for directing me towards the paper of Sekiguchi and alerting me to some mistakes in the calculation of the number of irreducible components of ${\cal N}$. This paper was written in part with the support of an Engineering and Physical Sciences Research Council PhD studentship. \section{Preliminaries} \label{sec:1} Let $G$ be a reductive algebraic group over the algebraically closed field $k$ of characteristic not equal to $2$. We assume throughout that $\mathop{\rm char}\nolimits k=p$ is good for $G$. (Let $\Delta$ be a basis for the root system $\Phi$ of $G$, let $\hat\alpha$ be the longest element of $\Phi$ relative to $\Delta$, and let $\hat\alpha=\sum_{\beta\in\Delta}m_\beta \beta$. Then $p$ is good for $G$ if and only if $p>m_\beta$ for all $\beta\in\Delta$.) Let $\theta:G\longrightarrow G$ be an involutive automorphism and let $K$ denote the connected component of the isotropy subgroup $G^\theta$. Let ${\mathfrak g}=\mathop{\rm Lie}\nolimits(G)$. 
Then ${\mathfrak g}={\mathfrak k}\oplus{\mathfrak p}$, where ${\mathfrak k}=\{ x\in{\mathfrak g}|\;d\theta(x)=x\}$, ${\mathfrak p}=\{ x\in{\mathfrak g}|\;d\theta(x)=-x\}$. Clearly $[{\mathfrak k},{\mathfrak k}]\subseteq{\mathfrak k}$, $[{\mathfrak k},{\mathfrak p}]\subseteq{\mathfrak p}$, and $[{\mathfrak p},{\mathfrak p}]\subseteq{\mathfrak k}$. Hence we have a ${\mathbb Z}/2{\mathbb Z}$-grading of ${\mathfrak g}$. By \cite[8.1]{steinberg}, $K$ is reductive. Moreover, $\mathop{\rm Lie}\nolimits(K)={\mathfrak k}$ by \cite[\S 9.1]{bor}. The following result is due to Steinberg \cite[7.5]{steinberg}: {\it - There exists a Borel subgroup $B$ of $G$ and a maximal torus $T$ contained in $B$ such that $\theta(B)=B,\,\theta(T)=T$.} Following Springer \cite{springer} we call such a pair $(B,T)$ a {\it fundamental pair}. Let $(B,T)$ be a fundamental pair and let $\Delta$ be the basis of the root system $\Phi=\Phi(G,T)$ corresponding to $B$. Let $\{ h_\alpha,e_\beta\,:\,\alpha\in\Delta,\beta\in\Phi\}$ be a Chevalley basis for ${\mathfrak g}'=\mathop{\rm Lie}\nolimits(G^{(1)})$. There exist constants $\{c(\alpha)\in k^\times:\alpha\in\Phi\}$ and an automorphism $\gamma$ of $\Phi$ with $\gamma(\Delta)=\Delta$ such that $d\theta(e_\alpha)=c(\alpha)e_{\gamma(\alpha)}$ for each $\alpha\in\Phi$. It is easy to see that: {\it \begin{itemize} \item $c(\alpha)c(\gamma(\alpha))=1$; \item if $\gamma(\alpha)\neq\alpha$, then either $\gamma(\alpha)$ and $\alpha$ are orthogonal, or they generate a root system of type $A_2$; \item $c(\alpha)c(-\alpha)=1$; \item $d\theta(h_\alpha)=h_{\gamma(\alpha)}$ for all $\alpha\in\Delta$. \end{itemize}} If $G$ is semisimple, then the data $\gamma$ and $\{ c(\alpha),\alpha\in\Delta\}$ fully determine $d\theta$. In the general reductive case, we need a little more preparation. Recall that ${\mathfrak g}$ is a {\it restricted} Lie algebra. Thus there is a canonical $p$-operation on ${\mathfrak g}$, denoted $x\mapsto x^{[p]}$.
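To make the data $(\gamma,\{ c(\alpha)\})$ concrete, consider a standard inner example (included only for illustration; it is not used in what follows). Let $G=\mathop{\rm GL}\nolimits(m+n,k)$ and $\theta=\mathop{\rm Int}\nolimits t$, where $t=\mathop{\rm diag}\nolimits(I_m,-I_n)$. The diagonal torus $T$ and the upper triangular Borel subgroup $B$ form a fundamental pair, $\gamma$ is trivial, and $d\theta(e_{ij})=t_it_j^{-1}e_{ij}$ for the matrix units $e_{ij}$, so that $c(\epsilon_i-\epsilon_j)=1$ if $i$ and $j$ lie in the same diagonal block and $c(\epsilon_i-\epsilon_j)=-1$ otherwise. In block form $${\mathfrak k}=\left\{\left(\begin{array}{cc} x & 0 \\ 0 & y \end{array}\right)\right\}\cong\mathfrak{gl}(m,k)\oplus\mathfrak{gl}(n,k),\qquad {\mathfrak p}=\left\{\left(\begin{array}{cc} 0 & u \\ v & 0 \end{array}\right)\right\},$$ and the relations $[{\mathfrak k},{\mathfrak k}]\subseteq{\mathfrak k}$, $[{\mathfrak k},{\mathfrak p}]\subseteq{\mathfrak p}$, $[{\mathfrak p},{\mathfrak p}]\subseteq{\mathfrak k}$ can be read off directly from block multiplication.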
If $G$ is a closed subgroup of some $\mathop{\rm GL}\nolimits(V)$, then ${\mathfrak g}$ is a subalgebra of $\mathfrak{gl}(V)$ and the $p$-operation is just the restriction to ${\mathfrak g}$ of the $p$-th power map of matrices. An element $t\in{\mathfrak g}$ is a {\it toral element} if $t^{[p]}=t$. A subalgebra of ${\mathfrak g}$ is a {\it toral algebra} if it is commutative and has a basis of toral elements. If $T$ is a torus in $G$ then $\mathop{\rm Lie}\nolimits(T)$ is a toral algebra in ${\mathfrak g}$. For a toral algebra ${\mathfrak s}\subseteq{\mathfrak g}$, we denote by ${\mathfrak s}^{tor}$ the set of all toral elements in ${\mathfrak s}$: ${\mathfrak s}^{tor}$ is a vector space over the prime subfield ${\mathbb F}_p$ of $k$, and ${\mathfrak s}\cong{\mathfrak s}^{tor}\otimes_{{\mathbb F}_p} k$. \begin{lemma} \label{redcase} Let $\theta$ be an automorphism of $G$ of order $m$, $p\nmid m$, let $T$ be a $\theta$-stable torus in $G$ and let ${\mathfrak t}=\mathop{\rm Lie}\nolimits(T)$, ${\mathfrak t}'=\mathop{\rm Lie}\nolimits(T\cap G^{(1)})$. There exists a $\theta$-stable toral algebra ${\mathfrak s}$ such that ${\mathfrak t}={\mathfrak t}'\oplus{\mathfrak s}$, and hence ${\mathfrak g}={\mathfrak g}'\oplus{\mathfrak s}$ (vector space direct sum). If $m|(p-1)$, then we can choose a toral basis for ${\mathfrak s}$ consisting of eigenvectors for $d\theta$. \end{lemma} \begin{proof} As $d\theta$ is a restricted Lie algebra automorphism, the sets ${\mathfrak t}^{tor}$ and $({\mathfrak t}')^{tor}$ are $d\theta$-stable. Therefore by Maschke's Theorem there is a $d\theta$-stable ${\mathbb F}_p$-vector space ${\mathfrak s}^{tor}$ such that ${\mathfrak t}^{tor}=({\mathfrak t}')^{tor}\oplus{\mathfrak s}^{tor}$. Let ${\mathfrak s}$ be the toral algebra generated by ${\mathfrak s}^{tor}$. Then ${\mathfrak t}={\mathfrak t}'\oplus{\mathfrak s}$. To prove the second assertion, we consider the action of $d\theta$ on ${\mathfrak s}^{tor}$. 
As $\theta$ has order $m$, the minimal polynomial $m(t)$ of $d\theta|_{{\mathfrak s}^{tor}}$ divides $(t^m-1)$. But if $m$ divides $(p-1)$ then there is a primitive $m$-th root of unity in ${\mathbb F}_p$, hence $m(t)$ splits over ${\mathbb F}_p$ as a product of distinct linear factors. In other words $d\theta|_{{\mathfrak s}^{tor}}$ is diagonalizable. Choose a basis for ${\mathfrak s}^{tor}$ consisting of eigenvectors for $d\theta$. This completes the proof. \end{proof} Let us return now to the case where $\theta$ is an involution. It may be illustrative at this point to give explicit bases for ${\mathfrak k}$ and ${\mathfrak p}$. For ${\mathfrak k}$: $\left\{ \begin{array}{ll} h_{\alpha_i} & \alpha_i\in\Delta,\gamma(\alpha_i)=\alpha_i, \\ h_{\alpha_i}+h_{\gamma(\alpha_i)} & \alpha_i\in\Delta,\gamma(\alpha_i)\neq\alpha_i, \\ e_\alpha & \mbox{$\alpha\in\Phi,\gamma(\alpha)=\alpha$ and $c(\alpha)=1$}, \\ e_\alpha+c(\alpha)e_{\gamma(\alpha)} & \alpha\in\Phi,\gamma(\alpha)\neq\alpha, \\ t_i & 1\leq i\leq l. \end{array} \right. $ For ${\mathfrak p}$: $\left\{ \begin{array}{ll} h_{\alpha_i}-h_{\gamma(\alpha_i)} & \alpha_i\in\Delta,\gamma(\alpha_i)\neq\alpha_i, \\ e_\alpha & \mbox{$\alpha\in\Phi,\gamma(\alpha)=\alpha$ and $c(\alpha)=-1$}, \\ e_\alpha-c(\alpha)e_{\gamma(\alpha)} & \alpha\in\Phi,\gamma(\alpha)\neq\alpha, \\ t_j' & 1\leq j\leq h. \end{array} \right. $ The elements $t_i,t_j'$ are toral elements spanning the toral algebra ${\mathfrak s}$ of Lemma \ref{redcase}. With this description we can prove the following useful lemma: \begin{lemma}\label{nontriv} The following are equivalent: (i) ${\mathfrak p}$ is a toral algebra contained in ${\mathfrak z}({\mathfrak g})$, (ii) There are no non-central semisimple elements in ${\mathfrak p}$, (iii) There are no non-zero nilpotent elements in ${\mathfrak p}$, (iv) $\theta|_{G^{(1)}}$ is trivial. \end{lemma} \begin{proof} Clearly (i) $\Rightarrow$ (ii) and (i) $\Rightarrow$ (iii). Suppose (iv) holds. 
Then, by the above remarks ${\mathfrak p}$ is a toral algebra contained in ${\mathfrak t}$. Let $t\in{\mathfrak p}$ and let $\alpha\in\Phi$, hence $e_\alpha\in{\mathfrak g}'\subseteq{\mathfrak k}$. Then $[t,e_\alpha]=d\alpha(t)e_\alpha\in{\mathfrak p}\Rightarrow d\alpha(t)=0$. Thus $t\in{\mathfrak z}({\mathfrak g})$, and (i) holds. To complete the proof we will show that (ii)$\Rightarrow$(iv) and (iii)$\Rightarrow$(iv). Keep the notation from above, and suppose that $\theta|_{G^{(1)}}$ is non-trivial. We will show that (ii) cannot hold. Assume first of all that $\theta|_{G^{(1)}}$ is inner. There is some $\alpha\in\Delta$ such that $e_\alpha\in{\mathfrak p}$. Moreover $e_{-\alpha}\in{\mathfrak p}$ also, since $c(\alpha)c(-\alpha)=1$. Hence $s=e_{\alpha}+e_{-\alpha}$ is a semisimple element of ${\mathfrak p}$. But $s$ is not in ${\mathfrak h}$ and therefore $s\notin{\mathfrak z}$ (see \cite[2.3]{me}). Assume therefore that $\gamma$ is non-trivial. Then $\alpha\neq\gamma(\alpha)$ for some $\alpha\in\Delta$. Hence $h=h_\alpha - h_{\gamma(\alpha)}\in{\mathfrak p}$. If (ii) holds then $h\in{\mathfrak z}$, hence $\mathop{\rm char}\nolimits k=3$ and $\alpha,\gamma(\alpha)$ generate a subsystem of $\Phi$ of type $A_2$. Thus $[e_\alpha,e_{\gamma(\alpha)}]=Ne_{\alpha+\gamma(\alpha)}\in{\mathfrak p}$, $N\neq 0$. Therefore $e_{\alpha+\gamma(\alpha)}\in{\mathfrak p}$, and by the same argument $e_{-(\alpha+\gamma(\alpha))}\in{\mathfrak p}$. Let $s= e_{\alpha+\gamma(\alpha)} + e_{-(\alpha+\gamma(\alpha))}$. Then $s$ is a semisimple element of ${\mathfrak p}$ not in ${\mathfrak z}({\mathfrak g})$. We have shown that (ii) $\Rightarrow$ (iv). It remains to prove that if $\theta|_{G^{(1)}}$ is non-trivial then there is a non-zero nilpotent element of ${\mathfrak p}$. If $\gamma$ is non-trivial, then we choose $\alpha$ with $\gamma(\alpha)\neq\alpha$ and set $n= e_\alpha - d\theta(e_\alpha)=e_\alpha-c(\alpha)e_{\gamma(\alpha)}$. 
If $\theta|_{G^{(1)}}$ is inner, then we can choose $\alpha\in\Phi$ with $e_\alpha\in{\mathfrak p}$. This completes the proof. \end{proof} We will require the following observation of Steinberg: \begin{lemma}\label{sccover} Let $G$ be a semisimple group and let $\theta$ be an automorphism of $G$. Let $\pi:G_{sc}\rightarrow G$ be the universal covering of $G$. Then there exists a unique automorphism $\theta_{sc}$ of $G_{sc}$ such that the following diagram is commutative: \begin{diagram} G_{sc} & \rTo^{\theta_{sc}} & G_{sc} \\ \dTo^\pi & & \dTo^\pi \\ G & \rTo^{\theta} & G \\ \end{diagram} If $\theta$ is an involution, then so is $\theta_{sc}$. \end{lemma} \begin{proof} The first statement follows from \cite[9.16]{steinberg}. But now by uniqueness, if $\theta$ is of order 2 then so is $\theta_{sc}$. \end{proof} Finally, we make the following observation for later reference. \begin{lemma}\label{GLautos} Let $G=\mathop{\rm GL}\nolimits(n,k),{\mathfrak g}=\mathop{\rm Lie}\nolimits(G),{\mathfrak g}'=\mathop{\rm Lie}\nolimits(G^{(1)})$. We denote by $\mathop{\rm Aut}\nolimits G$ (resp. $\mathop{\rm Aut}\nolimits{\mathfrak g}$) the (abstract) group of algebraic automorphisms of $G$ (resp. restricted Lie algebra automorphisms of ${\mathfrak g}$). (i) $\mathop{\rm Aut}\nolimits G$ contains $\mathop{\rm Int}\nolimits G$, the inner automorphisms, as a normal subgroup of index 2. For $n\geq 3$ (resp. $n=2$) let $\phi:G\longrightarrow G$ be the involution given by $g\mapsto {^t}g^{-1}$ (resp. $g\mapsto g/{(\det g)}$) and let $C$ be the subgroup of $\mathop{\rm Aut}\nolimits G$ generated by $\phi$. Then $\mathop{\rm Aut}\nolimits G$ is the semidirect product of $\mathop{\rm Int}\nolimits G$ by $C$ (resp. the direct product of $\mathop{\rm Int}\nolimits G$ and $C$). (ii) The natural map $\mathop{\rm Aut}\nolimits G\rightarrow \mathop{\rm Aut}\nolimits(G^{(1)})$ is bijective if $n\geq 3$, and surjective with kernel $C$ for $n=2$. 
(iii) For any $\theta\in\mathop{\rm Aut}\nolimits G$, the differential $d\theta$ is a restricted Lie algebra automorphism of ${\mathfrak g}$. The map $d:\mathop{\rm Aut}\nolimits G\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}$ is injective and $d:\mathop{\rm Aut}\nolimits G^{(1)}\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is bijective for all $n$ and $p$. (iv) If $p\nmid n$ then $\mathop{\rm Aut}\nolimits{\mathfrak g}\cong\mathop{\rm Aut}\nolimits{\mathfrak g}'\times{\mathbb F}_p^\times$. If $p\, |\, n$ then $\mathop{\rm Aut}\nolimits{\mathfrak g}\cong\mathop{\rm Aut}\nolimits{\mathfrak g}'\times B$, where $B$ is the cyclic group of order $p$ generated by the automorphism $x\mapsto x+(\mathop{\rm tr}\nolimits x)I$ and $I$ is the identity matrix. (v) If $2\neq p\,|\,n$ then for any involution $\eta$ of the restricted Lie algebra ${\mathfrak g}'$ there is a unique involutive automorphism $\theta$ of $G$ (resp. $\psi$ of ${\mathfrak g}$) such that $d\theta|_{{\mathfrak g}'}=\eta$ (resp. $\psi|_{{\mathfrak g}'}=\eta$). \end{lemma} \begin{proof} If $n=2$, then all automorphisms of $G^{(1)}$ are inner. Otherwise, $\mathop{\rm Aut}\nolimits G^{(1)}$ is generated by $\mathop{\rm Int}\nolimits G^{(1)}$ together with the outer automorphism $g\mapsto {^t}g^{-1}$ (\cite[\S 14.9]{bor}). Hence the restriction map $\mathop{\rm Aut}\nolimits G\rightarrow\mathop{\rm Aut}\nolimits G^{(1)}$ is surjective for any $n$. Suppose $\theta\in\mathop{\rm Aut}\nolimits G$ is such that $\theta(g)=g\;\forall g\in G^{(1)}$. Then $\theta$ is trivial unless $\theta(z)=z^{-1}$ for all $z\in Z(G)$. This possibility clearly only occurs if $n=2$ and $\theta:g\mapsto g/{(\det g)}$. Hence we have proved (i) and (ii). The automorphism group of the abstract Lie algebra ${\mathfrak g}'$ is given in \cite{hog}.
We can see easily from the tables in \cite{hog} that $d:\mathop{\rm Aut}\nolimits G^{(1)}\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is bijective (and that any automorphism of the abstract Lie algebra ${\mathfrak g}'$ is a restricted Lie algebra automorphism) unless $n=p=2$. We deal with this case as follows: Let $\{ h,e,f\}$ be the standard basis for ${\mathfrak g}'$. Then $h$ is the identity matrix, and in fact is the only non-zero toral element of ${\mathfrak g}'$. Hence any $\theta\in\mathop{\rm Aut}\nolimits{\mathfrak g}'$ satisfies $\theta(h)=h$. Suppose $\theta(e)=x$. Then, since any two non-zero nilpotent elements of ${\mathfrak g}'$ are conjugate, there exists $g\in G^{(1)}$ such that $\mathop{\rm Ad}\nolimits g(e)=x$. But there is a unique nilpotent element $y\in {\mathfrak g}'$ such that $[x,y]=h$. Hence $\mathop{\rm Ad}\nolimits g(f)=y=\theta(f)$. It follows that $\theta=\mathop{\rm Ad}\nolimits g$. Thus differentiation $d:\mathop{\rm Aut}\nolimits G^{(1)}\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is surjective. Injectivity follows from the fact that $\mathop{\rm ker}\nolimits\mathop{\rm Ad}\nolimits=Z(G)$. We have shown that $d:\mathop{\rm Aut}\nolimits G^{(1)}\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is bijective for all $n$ and $p$. Therefore $d:\mathop{\rm Aut}\nolimits G\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}$ is injective for all $n\geq 3$. Injectivity for $n=2$ will follow from (iv), since $d\phi:x\mapsto x-(\mathop{\rm tr}\nolimits x)I$. Suppose first of all that $p\nmid n$. Then ${\mathfrak g}={\mathfrak z}({\mathfrak g})\oplus{\mathfrak g}'$, hence $\mathop{\rm Aut}\nolimits{\mathfrak g}\cong\mathop{\rm Aut}\nolimits{\mathfrak g}'\times\mathop{\rm Aut}\nolimits{\mathfrak z}$. The toral algebra ${\mathfrak z}$ is generated by the identity matrix. Hence $\mathop{\rm Aut}\nolimits{\mathfrak z}$ consists of the maps $\lambda I\mapsto m\lambda I$ with $m\in{\mathbb F}_p^\times$.
Thus $\mathop{\rm Aut}\nolimits{\mathfrak g}\cong\mathop{\rm Aut}\nolimits{\mathfrak g}'\times{\mathbb F}_p^\times$. Assume therefore that $p\, |\, n$. As $\mathop{\rm Aut}\nolimits G\longrightarrow\mathop{\rm Aut}\nolimits G^{(1)}$ is surjective and $\mathop{\rm Aut}\nolimits G^{(1)}\cong\mathop{\rm Aut}\nolimits{\mathfrak g}'$, any automorphism of ${\mathfrak g}'$ can be extended to an automorphism of ${\mathfrak g}$. Therefore $\mathop{\rm Aut}\nolimits{\mathfrak g}\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is surjective. Let $\phi\in\mathop{\rm Aut}\nolimits{\mathfrak g}$ be such that $\phi(x)=x\;\forall x\in{\mathfrak g}'$. Let $e_{ij}$ be the matrix with 1 in the $(i,j)$-th position and 0 elsewhere. By considering the values $d\alpha(\phi(e_{11}))$ for $\alpha\in\Phi$, we see that $\phi(e_{11})=e_{11}+\lambda I$ for some $\lambda\in k$. Moreover $e_{11}^{[p]}=e_{11}$, hence $\lambda\in{\mathbb F}_p$. It follows that $\phi$ must be of the form $\theta_\lambda:x\mapsto x+\lambda(\mathop{\rm tr}\nolimits x)I$ for some $\lambda\in{\mathbb F}_p$. Moreover $\theta_\lambda$ is a valid automorphism of ${\mathfrak g}$ for each $\lambda\in{\mathbb F}_p$. The description of $\mathop{\rm Aut}\nolimits{\mathfrak g}$ follows. To prove (v), suppose $2\neq p\, |\, n$. Then $\mathop{\rm Aut}\nolimits G\longrightarrow\mathop{\rm Aut}\nolimits{\mathfrak g}'$ is bijective, hence for each involution $\eta$ of ${\mathfrak g}'$ there is a unique automorphism $\theta$ of $G$, necessarily involutive, such that $d\theta|_{{\mathfrak g}'}=\eta$. Moreover, $\mathop{\rm Aut}\nolimits{\mathfrak g}\cong \mathop{\rm Aut}\nolimits{\mathfrak g}'\times B$, where $B$ is a cyclic group of order $p$. Hence there is a unique element $\psi\in\mathop{\rm Aut}\nolimits{\mathfrak g}$ of order 2 such that $\psi|_{{\mathfrak g}'}=\eta$.
\end{proof} \section{Cartan Subspaces} \label{sec:2} \subsection{Maximal Toral Algebras} \label{sec:2.1} In \cite{kostrall}, Kostant and Rallis defined Cartan subspaces of ${\mathfrak p}$ and showed that any two Cartan subspaces are $K$-conjugate. In this section we will show that this extends to positive characteristic. We follow \cite{kostrall}, although Lemma \ref{sep} and Cor. \ref{cart} are new. We begin with two easy lemmas. \begin{lemma} Let $x\in{\mathfrak g}$, and denote the Jordan-Chevalley decomposition of $x$ by $x_s+x_n$. Then $x\in{\mathfrak k}$ (resp. ${\mathfrak p}$) if and only if $x_s,x_n\in{\mathfrak k}$ (resp. ${\mathfrak p}$). \end{lemma} \begin{proof} Any automorphism of ${\mathfrak g}$ maps semisimple (resp. nilpotent) elements to semisimple (resp. nilpotent) elements. Thus $d\theta(x)=d\theta(x_s)+d\theta(x_n)$ is the Jordan-Chevalley decomposition of $d\theta(x)$ for any $x\in{\mathfrak g}$. Hence $d\theta(x)=\lambda x$ if and only if $d\theta(x_s)=\lambda x_s,d\theta(x_n)=\lambda x_n$. \end{proof} The following lemma is in \cite{rich2}. For completeness, we reproduce a proof here. \begin{lemma}\label{stabletori} Let $T$ be a $\theta$-stable torus of $G$. Let $T_+=(T\cap K)^\circ$ and $T_-=\{t\in T|\theta(t)=t^{-1}\}^\circ$. Then $T=T_+\cdot T_-$ and the intersection is finite. Let ${\mathfrak t}=\mathop{\rm Lie}\nolimits(T)$. Then ${\mathfrak t}\cap{\mathfrak k}=\mathop{\rm Lie}\nolimits(T_+)$ and ${\mathfrak t}\cap{\mathfrak p}=\mathop{\rm Lie}\nolimits(T_-)$. \end{lemma} \begin{proof} Clearly $T_+$ and $T_-$ are subtori of $T$. We consider the surjective morphism $p_+:T\longrightarrow T_+$, $t\mapsto t\theta(t)$. Evidently $T_-$ is the connected component of $\mathop{\rm ker}\nolimits p_+$ containing the identity element. Hence $\mathop{\rm dim}\nolimits T_-+\mathop{\rm dim}\nolimits T_+=\mathop{\rm dim}\nolimits T$. Moreover $T_+\cap T_-$ is clearly finite. Thus $T_+\cdot T_-=T$.
Clearly $\mathop{\rm Lie}\nolimits(T_+)\subseteq{\mathfrak k}$ and $\mathop{\rm Lie}\nolimits(T_-)\subseteq{\mathfrak p}$. Therefore ${\mathfrak t}\supseteq\mathop{\rm Lie}\nolimits(T_+)\oplus\mathop{\rm Lie}\nolimits(T_-)$. By equality of dimensions ${\mathfrak t}=\mathop{\rm Lie}\nolimits(T_+)\oplus\mathop{\rm Lie}\nolimits(T_-)$, from which the second part of the lemma follows immediately. \end{proof} We call a toral algebra ${\mathfrak a}$ a {\it maximal torus} of ${\mathfrak p}$ if it is maximal in the collection of toral algebras contained in ${\mathfrak p}$. \begin{lemma}\label{torcent} Let ${\mathfrak a}$ be a maximal torus of ${\mathfrak p}$. Then ${\mathfrak z}_{\mathfrak p}({\mathfrak a})={\mathfrak a}$. \end{lemma} \begin{proof} Let $L=Z_G({\mathfrak a})$. Then $L$ is a $\theta$-stable Levi subgroup of $G$, hence $p$ is good for $L$. Moreover ${\mathfrak l}=\mathop{\rm Lie}\nolimits(L)={\mathfrak z}_{\mathfrak g}({\mathfrak a})={\mathfrak z}_{\mathfrak k}({\mathfrak a})\oplus{\mathfrak z}_{\mathfrak p}({\mathfrak a})$ by \cite[\S 9.1]{bor}. Since ${\mathfrak a}$ is maximal, all semisimple elements of ${\mathfrak l}\cap{\mathfrak p}$ are in ${\mathfrak a}$. Applying Lemma \ref{nontriv}, we see that ${\mathfrak z}_{\mathfrak p}({\mathfrak a})$ is a toral algebra. Thus ${\mathfrak z}_{\mathfrak p}({\mathfrak a})={\mathfrak a}$. \end{proof} A torus $A$ in $G$ is {\it $\theta$-split} or {\it $\theta$-anisotropic} if $\theta(a)=a^{-1}$ for all $a\in A$. \begin{lemma}\label{maxsplittori} Let ${\mathfrak a}$ be a maximal torus of ${\mathfrak p}$. Then there is a unique maximal $\theta$-split torus $A$ of $G$ such that ${\mathfrak a}=\mathop{\rm Lie}\nolimits(A)$. \end{lemma} \begin{proof} Let $L=Z_G({\mathfrak a})$ and let ${\mathfrak l}=\mathop{\rm Lie}\nolimits(L)={\mathfrak z}_{\mathfrak g}({\mathfrak a})$. Since ${\mathfrak l}\cap{\mathfrak p}={\mathfrak a}\subseteq {\mathfrak z}({\mathfrak l})$, $\theta|_{L^{(1)}}$ is trivial by Lemma \ref{nontriv}.
Let $S$ be any maximal torus of $L$: then $S=(S\cap L^{(1)})\cdot Z(L)^\circ$. Set $A=S_-$. Since $L=L^{(1)}\cdot Z(L)^\circ$ and $\theta|_{L^{(1)}}$ is trivial, $\theta$ acts trivially on $L/Z(L)^\circ$; thus the image of $S_-$ in $L/Z(L)^\circ$ consists of elements of order at most $2$, and being the image of a connected torus it is trivial. Hence $A\subset Z(L)$. Moreover, ${\mathfrak a}\subseteq{\mathfrak z}({\mathfrak l})\subseteq\mathop{\rm Lie}\nolimits(S)$ by \cite[2.3]{me}. It follows that $\mathop{\rm Lie}\nolimits(A)={\mathfrak a}$. It remains to prove uniqueness. But $A\subset Z(L)$, hence $A$ is the unique maximal $\theta$-split torus of $L$. \end{proof} \subsection{Summary of Results On Maximal $\theta$-split Tori} \label{sec:2.2} The main idea of \cite{kostrall} is that the pair $(G^\theta,{\mathfrak p})$ (with $G^\theta$ acting on ${\mathfrak p}$ via the adjoint representation) can be thought of as a generalised version of the pair $(G,{\mathfrak g})$. In the new setting the role of Cartan subalgebra of ${\mathfrak g}$ is taken by the maximal toral algebra ${\mathfrak a}$ of ${\mathfrak p}$. By Lemma \ref{maxsplittori} there exists a maximal $\theta$-split torus $A$ of $G$ such that $\mathop{\rm Lie}\nolimits(A)={\mathfrak a}$. Hence it is useful to recall some results of Vust, Richardson, and Springer concerning maximal $\theta$-split tori. By Vust we have (\cite[\S 1]{vust}): {\it - Any two maximal $\theta$-split tori of $G$ are conjugate by an element of $K$.} It follows immediately from Lemma \ref{maxsplittori} that any two maximal tori in ${\mathfrak p}$ are conjugate by an element of $K$ (this also follows from Thm. \ref{carts} below). Let $F$ be the finite group of all $a\in A$ satisfying $a^2=e$, the identity element of $G$. It is easy to see that $F\subset G^\theta$, hence that $F$ normalizes $K$. Moreover: {\it - $G^\theta=F\cdot K$ (\cite[\S 1]{vust})}. If $G$ is not adjoint, we are in fact more interested in the group $K^*=\{ g\in G\,|\,g^{-1}\theta(g)\in Z(G)\}$ introduced by Richardson in \cite{rich2}.
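To illustrate the difference between $K$ and $K^*$ (a standard example, included only for orientation), let $G=\mathop{\rm SL}\nolimits(2,k)$ and $\theta=\mathop{\rm Int}\nolimits\mathop{\rm diag}\nolimits(1,-1)$. Then $G^\theta$ is the diagonal torus $T$, so $K=T$. On the other hand, for $$w=\left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)$$ we have $\theta(w)=-w$, hence $w^{-1}\theta(w)=-I\in Z(G)$ and $w\in K^*$. In fact $K^*$ consists of the diagonal and antidiagonal matrices in $G$, that is, $K^*=N_G(T)$, so that $K^*/K$ has order $2$.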
Let $\pi:G\longrightarrow G/Z(G)=\overline{G}$ be the projection onto the adjoint quotient $\overline{G}$, and let $\overline\theta$ be the unique involutive automorphism of $\overline{G}$ making the following diagram commutative: \begin{diagram} G & \rTo^{\theta} & G \\ \dTo^\pi & & \dTo^\pi \\ \overline{G} & \rTo^{\overline\theta} & \overline{G} \\ \end{diagram} Then $K^*=\pi^{-1}(\overline{G}^{\overline\theta})$. Let $F^*=\{ a\in A\,|\,a^2\in Z(G)\}$. We have (see \cite[8.1]{rich2}): {\it - $F^*$ normalizes $K$ and $K^*=F^*\cdot K$.} Let $\Phi_A=\Phi(G,A)$, the roots of $G$ relative to $A$, let $S$ be a maximal torus of $G$ containing $A$, let $\Phi_S=\Phi(G,S)$ and let $W_S=W(G,S)$. By \cite[2.6(iv)]{rich2} $S$ is $\theta$-stable. Denote by $\theta^*$ the automorphism of $\Phi_S$ induced by $\theta$. A parabolic subgroup $P$ of $G$ is {\it $\theta$-split} if $P\cap\theta(P)$ is a Levi subgroup of $P$ (and therefore also of $\theta(P)$). By Vust \cite[\S 1]{vust}: {\it - Let $P\supset A$ be a $\theta$-split parabolic subgroup of $G$. Then $P$ is a minimal $\theta$-split parabolic if and only if $P\cap\theta(P)=Z_G(A)$. Any two minimal $\theta$-split parabolic subgroups of $G$ are conjugate by an element of $K$.} Fix a minimal $\theta$-split parabolic subgroup $P$ of $G$ containing $S$ and let $B$ be a Borel subgroup of $G$ such that $S\subset B\subset P$. Let $\Delta_S$ be the corresponding basis of simple roots in $\Phi_S$. For a subset $I$ of $\Delta_S$, denote by $\Phi_I$ the subsystem of $\Phi_S$ generated by $\{\alpha:\alpha\in I\}$, by $W_I$ the subgroup of $W_S$ generated by $\{s_\alpha:\alpha\in I\}$, and by $w_I$ the longest element of $W_I$ relative to this Coxeter basis.
By \cite[1.3-4]{springer} (established in \cite{springer2}) we have: \begin{lemma}\label{basis} There is a subset $I$ of $\Delta_S$ and a graph automorphism $\psi$ of $\Phi_S$ such that: (i) $\psi(\Delta_S)=\Delta_S$ and $\psi(I)=I$, (ii) $\theta^*(\alpha)=-w_I(\psi(\alpha))=-\psi(w_I(\alpha))$ for all $\alpha\in\Phi_S$, (iii) $\theta^*(\alpha)=\alpha$ for any $\alpha\in\Phi_I$. The maximal $\theta$-split torus $A'=A\cap G^{(1)}$ of $G^{(1)}$ can be characterised as follows: $A'=\{ s\in S\cap G^{(1)}\,|\,\alpha(s)=1,\beta(s)=\psi(\beta)(s):\alpha\in I,\beta\in\Delta_S\setminus I\}^\circ$. \end{lemma} It follows that $\Pi=\{\alpha|_A\, :\, \alpha\in\Delta_S\setminus I\}$ is a basis for $\Phi_A$. Note that for $\alpha,\beta\in\Delta_S\setminus I$, $\alpha|_A=\beta|_A$ if and only if $\beta\in\{\alpha,\psi(\alpha)\}$. (We will use $\Delta$ or $\Delta_T$ to denote a basis of roots relative to a maximal torus $T$ of $G$, and $\Pi$ to denote a basis of simple roots in $\Phi_A$, where $A$ is a maximal $\theta$-split torus of $G$.) The `baby Weyl group' $W_A=N_G(A)/Z_G(A)$ was described by Richardson \cite[\S 4]{rich2}: {\it - Let $W_1=\{ w\in W_S\,|\, w(A)=A\}$, $W_2=\{ w\in W_1\,|\, w|_A=1|_A\,\}$. Then the restriction $w\mapsto w|_A$ induces an isomorphism $W_1/W_2\rightarrow W_A$.} Let $\Gamma$ be the group of automorphisms of $S$ generated by $W_S$ and $\theta$, let $X(S)$ be the group of characters of $S$ and let $E=X(S)\otimes_{\mathbb Z}{\mathbb R}$. There exists a $\Gamma$-invariant inner product $(.\, ,.):E\times E\rightarrow{\mathbb R}$. Let $E_-$ be the $(-1)$ eigenspace for $\theta$: $E_-$ identifies naturally with $X(A)\otimes_{\mathbb Z}{\mathbb R}$. Hence $(.\, ,.)$ restricts to a $W_A$-invariant inner product on $E_-$. Let $Y(S)$ be the group of cocharacters in $S$. The dual space $E^*$ to $E$ identifies naturally with $Y(S)\otimes_{\mathbb Z}{\mathbb R}$, and the $(-1)$ eigenspace $E_-^*$ identifies with $Y(A)\otimes_{\mathbb Z}{\mathbb R}$.
Hence the inner product $(.\, ,.)$ induces a $\Gamma$-equivariant isomorphism $E\rightarrow E^*$, which restricts to a $W_A$-equivariant isomorphism $E_-\rightarrow E_-^*$. Let $\langle .\, ,.\rangle:X(A)\times Y(A)\longrightarrow{\mathbb Z}$ be the natural pairing. For $\beta\in\Phi_A$, denote by $s_\beta$ the reflection in the hyperplane orthogonal to $\beta$. If $\alpha,\beta\in\Phi_A$, then by abuse of notation we write $\langle\alpha,\beta\rangle$ for $2(\alpha,\beta)/(\beta,\beta)$: hence $s_\beta(\alpha)=\alpha-\langle\alpha,\beta\rangle\beta$. {\it - The set $\Phi_A$ is a (non-reduced) root system in $X(A)$ with Cartan integers $\langle\alpha,\beta\rangle\in{\mathbb Z}$. The Weyl group $W_A$ is generated by the reflections $\{ s_\alpha\,:\, \alpha\in\Phi_A\}$, hence by the set $\{ s_\alpha\,:\,\alpha\in\Pi\}$. Each element of $W_A$ has a representative in $K$. Thus $W_A\cong N_K(A)/Z_K(A)$ (\cite[\S 4]{rich2}).} Note that it follows from Lemma \ref{maxsplittori} that $N_G(A)=N_G({\mathfrak a})$ and $Z_G(A)=Z_G({\mathfrak a})$. Let $\Phi_A^*$ be the set of $\alpha\in\Phi_A$ such that $\alpha/m\in\Phi_A\Rightarrow m=\pm 1$. It follows from the above that $\Phi_A^*$ is a reduced root system. Finally, we observe using the classification of involutions (see Springer, \cite{springer}): \begin{lemma}\label{pisgood} If $p$ is good for $G$, then it is also good for $\Phi_A$. If $\alpha\in\Phi_A$, then $3\alpha\notin\Phi_A$. \end{lemma} \subsection{Cartan subspaces} \label{sec:2.3} Let ${\mathfrak h}$ be a nilpotent subalgebra of ${\mathfrak g}$. We recall (Fitting's Lemma, see \cite[II.4]{jac}) that there is a decomposition ${\mathfrak g}={\mathfrak g}^0({\mathfrak h})\oplus{\mathfrak g}^1({\mathfrak h})$ and a Zariski open subset $U$ of ${\mathfrak h}$ such that $(\mathop{\rm ad}\nolimits u)$ is nilpotent on ${\mathfrak g}^0({\mathfrak h})$ and is non-singular on ${\mathfrak g}^1({\mathfrak h})$ for all $u\in U$. The following lemma appears in \cite{kostrall}.
We include the proof (which is identical to Kostant-Rallis') for the reader's convenience. \begin{lemma}\label{fitting} Let ${\mathfrak h}$ be a nilpotent subalgebra of ${\mathfrak g}$ contained in ${\mathfrak p}$. Then $${\mathfrak g}^i({\mathfrak h})=({\mathfrak g}^i({\mathfrak h})\cap{\mathfrak k})\oplus({\mathfrak g}^i({\mathfrak h})\cap{\mathfrak p})\;\; \mbox{for}\;\;\; i=0,1.$$ \end{lemma} \begin{proof} Let $y\in U\subseteq {\mathfrak h}$, where $U$ is the subset of ${\mathfrak h}$ defined above. Since $(\mathop{\rm ad}\nolimits y)$ is nilpotent (resp. non-singular) on ${\mathfrak g}^0({\mathfrak h})$ (resp. ${\mathfrak g}^1({\mathfrak h})$), the same is true of $(\mathop{\rm ad}\nolimits y)^2$. But $(\mathop{\rm ad}\nolimits y)^2$ also stabilises ${\mathfrak k}$ and ${\mathfrak p}$. Hence ${\mathfrak g}^i(k(\mathop{\rm ad}\nolimits y)^2)={\mathfrak g}^i(k(\mathop{\rm ad}\nolimits y)^2)\cap{\mathfrak k}\oplus{\mathfrak g}^i(k(\mathop{\rm ad}\nolimits y)^2)\cap{\mathfrak p}$ for $i=0,1$. Since $y\in U$, we have ${\mathfrak g}^i({\mathfrak h})={\mathfrak g}^i(k(\mathop{\rm ad}\nolimits y)^2)$ for $i=0,1$, and the lemma follows. \end{proof} Following \cite{kostrall}, we define a {\it Cartan subspace} of ${\mathfrak p}$ to be a nilpotent algebra ${\mathfrak h}\subseteq{\mathfrak p}$ such that ${\mathfrak g}^0({\mathfrak h})\cap{\mathfrak p}={\mathfrak h}$. \begin{lemma}\label{torcart} Let ${\mathfrak a}$ be a maximal torus of ${\mathfrak p}$. Then ${\mathfrak a}$ is a Cartan subspace. \end{lemma} \begin{proof} As ${\mathfrak a}$ is a toral algebra, ${\mathfrak g}$ is a completely reducible $(\mathop{\rm ad}\nolimits{\mathfrak a})$-module. Thus ${\mathfrak g}^0({\mathfrak a})={\mathfrak z}_{\mathfrak g}({\mathfrak a})$. By Lemma \ref{torcent}, ${\mathfrak z}_{\mathfrak p}({\mathfrak a})={\mathfrak a}$. Hence by Lemma \ref{fitting}, ${\mathfrak g}^0({\mathfrak a})\cap{\mathfrak p}={\mathfrak a}$. \end{proof} Let $x\in{\mathfrak p}$. Then $kx$ is a nilpotent subalgebra of ${\mathfrak g}$. We write ${\mathfrak g}^i(x)$ for ${\mathfrak g}^i(kx)$.
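To illustrate these notions (a standard example, stated here only for orientation, under the running assumption $p\neq 2$): let $G=\mathop{\rm GL}\nolimits(m+n,k)$ with $n\leq m$ and $\theta=\mathop{\rm Int}\nolimits\mathop{\rm diag}\nolimits(I_m,-I_n)$, so that ${\mathfrak p}$ consists of the block antidiagonal matrices. The elements $s_i=e_{i,m+i}+e_{m+i,i}$ ($1\leq i\leq n$, where $e_{ij}$ denotes a matrix unit) lie in ${\mathfrak p}$, commute, and satisfy $s_i^3=s_i$, hence $s_i^{[p]}=s_i$ as $p$ is odd. One checks that their span $${\mathfrak a}=\langle s_1,\ldots,s_n\rangle_k$$ is a maximal torus of ${\mathfrak p}$, hence a Cartan subspace by Lemma \ref{torcart}; in particular every Cartan subspace of ${\mathfrak p}$ in this example has dimension $n=\mathop{\rm min}\nolimits(m,n)$.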
Let $q=\mathop{\rm min}\nolimits\{ \mathop{\rm dim}\nolimits({\mathfrak g}^0(x)\cap{\mathfrak p})\,:\,x\in{\mathfrak p}\}$, and let $Q=\{ x\in{\mathfrak p}\,|\,\mathop{\rm dim}\nolimits({\mathfrak g}^0(x)\cap{\mathfrak p})=q\}$. It is easy to see that $\mathop{\rm dim}\nolimits({\mathfrak g}^0(x)\cap{\mathfrak p})$ is the lowest power of $t$ appearing with non-zero coefficient in the characteristic polynomial of $(\mathop{\rm ad}\nolimits x)^2|_{\mathfrak p}$. Hence $Q$ is a non-empty open subset of ${\mathfrak p}$. The following result follows immediately from the proof of \cite[Lemma 3]{kostrall}, although it is not explicitly stated there. The proof is similar to Richardson's proof of \cite[3.3]{rich2}. \begin{lemma}\label{sep} Let $x\in{\mathfrak p}$. Then the map $\pi:K\times ({\mathfrak g}^0(x)\cap{\mathfrak p})\longrightarrow{\mathfrak p}$ given by $(k,y)\mapsto \mathop{\rm Ad}\nolimits k(y)$ is a separable morphism. \end{lemma} \begin{proof} We consider the differential of $\pi$ at $(e,x)$, where $e$ is the identity element of $G$. Identify the tangent spaces $T_x({\mathfrak g}^0(x)\cap{\mathfrak p})$ and $T_x({\mathfrak p})$ with $({\mathfrak g}^0(x)\cap{\mathfrak p})$ and ${\mathfrak p}$ respectively. Hence $d\pi_{(e,x)}:{\mathfrak k}\oplus({\mathfrak g}^0(x)\cap{\mathfrak p})\longrightarrow{\mathfrak p}$, $(U,V)\mapsto [U,x]+V$. Therefore $d\pi_{(e,x)}({\mathfrak k}\oplus({\mathfrak g}^0(x)\cap{\mathfrak p}))=[x,{\mathfrak k}]+({\mathfrak g}^0(x)\cap{\mathfrak p})$. By the properties of the Fitting decomposition, $(\mathop{\rm ad}\nolimits x)$ is non-singular on ${\mathfrak g}^1(x)$, hence $(\mathop{\rm ad}\nolimits x)^2$ is non-singular on $({\mathfrak g}^1(x)\cap{\mathfrak p})$. Thus $[x,{\mathfrak k}]\supseteq[x,[x,{\mathfrak p}]]\supseteq[x,[x,({\mathfrak g}^1(x)\cap{\mathfrak p})]]=({\mathfrak g}^1(x)\cap{\mathfrak p})$. It follows that $d\pi_{(e,x)}$ is surjective. By \cite[AG. 17.3]{bor} $\pi$ is separable.
\end{proof} \begin{corollary}\label{cart} Let ${\mathfrak h}$ be a Cartan subspace of ${\mathfrak p}$. The map $\pi:K\times{\mathfrak h}\longrightarrow{\mathfrak p}$ given by $(g,h)\mapsto \mathop{\rm Ad}\nolimits g(h)$ is separable, and $K\cdot{\mathfrak h}$ contains a dense open subset of ${\mathfrak p}$. \end{corollary} We can now prove the main theorem of this section. Our proof is somewhat shorter than the proof given in \cite{kostrall}. \begin{theorem}\label{carts} Any two Cartan subspaces of ${\mathfrak p}$ are $K$-conjugate. The Cartan subspaces are just the maximal tori of ${\mathfrak p}$. An element $x\in{\mathfrak p}$ is semisimple if and only if it is contained in a Cartan subspace of ${\mathfrak p}$. \end{theorem} \begin{proof} Let ${\mathfrak h}$ be a Cartan subspace. Let $U$ be the open subset of elements $h\in{\mathfrak h}$ such that ${\mathfrak g}^i(h)={\mathfrak g}^i({\mathfrak h})$ for $i=0,1$. By Cor. \ref{cart}, $K\cdot U$ contains a dense open subset of ${\mathfrak p}$. Hence $(K\cdot U)\cap Q$ is non-empty. But $Q$ is $K$-stable, hence $U\cap Q$ is non-empty. Let $u\in U\cap Q$. Then ${\mathfrak g}^0(u)\cap{\mathfrak p}={\mathfrak h}$. Therefore $\mathop{\rm dim}\nolimits{\mathfrak h}=q$. On the other hand, if $u\in{\mathfrak h}\cap Q$, then ${\mathfrak g}^0(u)\cap{\mathfrak p}\supseteq{\mathfrak h}$, hence ${\mathfrak g}^0(u)\cap{\mathfrak p}={\mathfrak h}$. It follows that $U=Q\cap{\mathfrak h}$. Let ${\mathfrak h}'$ be any other Cartan subspace. Then $K\cdot(Q\cap{\mathfrak h})$ and $K\cdot(Q\cap{\mathfrak h}')$ contain non-empty open subsets of ${\mathfrak p}$, hence their intersection is non-empty. Therefore $(K\cdot(Q\cap{\mathfrak h}))\cap{\mathfrak h}'$ is non-empty. It follows that $g\cdot{\mathfrak h}={\mathfrak h}'$ for some $g\in K$. The remaining statements of the theorem follow at once. 
\end{proof} \section{A $\theta$-stable reduction} \label{sec:3} We assume from this point on that $G$ has the following three properties: (A) $p$ is good for $G$. (B) The derived subgroup $G^{(1)}$ is simply-connected. (C) There exists a symmetric $G$-invariant non-degenerate bilinear form $B:{\mathfrak g}\times{\mathfrak g}\longrightarrow k$. In this section we will prove a $\theta$-stable analogue of a result of Gordon and Premet (\cite[6.2]{gandp}). An important corollary is that the trace form in (C) may be chosen so that it is invariant with respect to $\theta$. Let $G_i\, (1\leq i\leq l)$ be the minimal normal subgroups of $G^{(1)}$ and let ${\mathfrak g}_i=\mathop{\rm Lie}\nolimits(G_i)$. As $G^{(1)}$ is simply-connected, $G^{(1)}=G_1\times G_2\times\ldots\times G_l$ and ${\mathfrak g}'={\mathfrak g}_1\oplus{\mathfrak g}_2\oplus\ldots\oplus{\mathfrak g}_l$. We introduce new groups $\tilde{G}_i$, defined as follows: $$\tilde{G}_i=\left\{ \begin{array}{ll} GL(V_i) & \mbox{if $G_i$ is isomorphic to $\mathop{\rm SL}\nolimits(V_i)$ and $p\,|\mathop{\rm dim}\nolimits V_i$,} \\ G_i & \mbox{otherwise.} \end{array} \right. $$ Let $\tilde{G}=\tilde{G}_1\times\tilde{G}_2\times\ldots\times\tilde{G}_l,\tilde{\mathfrak g}_i=\mathop{\rm Lie}\nolimits(\tilde{G}_i),\tilde{\mathfrak g}=\mathop{\rm Lie}\nolimits(\tilde{G})$. Identify $G_i$ with the derived subgroup of $\tilde{G}_i$, hence consider $G^{(1)}$ as a subgroup of both $G$ and $\tilde{G}$. Let $(T',B')$ be a fundamental pair for $\theta|_{G^{(1)}}$ (see Sect. \ref{sec:1}) and let $T$ (resp. $\tilde{T}$) be the unique maximal torus of $G$ (resp. $\tilde{G}$) containing $T'$. Let ${\mathfrak h}'=\mathop{\rm Lie}\nolimits(T'),{\mathfrak h}=\mathop{\rm Lie}\nolimits(T),\tilde{\mathfrak h}=\mathop{\rm Lie}\nolimits(\tilde{T})$, ${\mathfrak h}_i={\mathfrak h}\cap{\mathfrak g}_i$, $\tilde{\mathfrak h}_i=\tilde{\mathfrak h}\cap\tilde{\mathfrak g}_i$. 
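The case distinction in the definition of $\tilde{G}_i$ is forced by hypothesis (C). As a side remark (our computation, not part of the argument that follows): the trace form degenerates on ${\mathfrak sl}_n$ exactly when $p\,|\,n$, while it is non-degenerate on ${\mathfrak gl}_n$ in every characteristic.

```latex
% Side computation: hypothesis (C) fails for sl_n when p | n, but holds for gl_n.
Suppose $p \mid n$. Then $\operatorname{tr}(I_n) = n = 0$ in $k$, so the identity
matrix $I_n$ lies in $\mathfrak{sl}_n$, and for every $y \in \mathfrak{sl}_n$,
\[
  \operatorname{tr}(I_n\, y) = \operatorname{tr}(y) = 0 ,
\]
so $I_n$ lies in the radical of the trace form on $\mathfrak{sl}_n$. On
$\mathfrak{gl}_n$, by contrast, the matrix units pair perfectly,
\[
  \operatorname{tr}(E_{ij} E_{kl}) = \delta_{jk}\,\delta_{il},
\]
so $(x,y) \mapsto \operatorname{tr}(xy)$ is non-degenerate in every characteristic.
```

This is why replacing $\mathop{\rm SL}\nolimits(V_i)$ by $\mathop{\rm GL}\nolimits(V_i)$ in the definition of $\tilde{G}_i$ restores a non-degenerate invariant form.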
\begin{theorem}\label{redthm} There exists a torus $T_0$, an involution $\hat{\theta}$ of $\hat{G}=\tilde{G}\times T_0$, and an injective restricted Lie algebra homomorphism $\psi:{\mathfrak g}\longrightarrow\hat{\mathfrak g}=\mathop{\rm Lie}\nolimits(\hat{G})$ such that: (i) $\psi({\mathfrak g}_i)\subseteq\tilde{\mathfrak g}_i$ for all $i\in\{1,2,\ldots, l\}$ and $\psi({\mathfrak h}')\subseteq\tilde{\mathfrak h}$. (ii) $\hat{\theta}|_{G^{(1)}}=\theta|_{G^{(1)}}$, and the following diagram is commutative: \begin{diagram} {\mathfrak g} & \rTo^{d\theta} & {\mathfrak g} \\ \dTo^\psi & & \dTo^\psi \\ \hat{\mathfrak g} & \rTo^{d\hat{\theta}} & \hat{\mathfrak g} \\ \end{diagram} (iii) There exists a toral algebra ${\mathfrak t}_1$ such that $\hat{\mathfrak g}=\psi({\mathfrak g})\oplus{\mathfrak t}_1$ (Lie algebra direct sum) and $d\hat{\theta}(t)=t\;\forall\, t\in{\mathfrak t}_1$. (iv) $\theta(G_i)=G_j$ implies $\hat{\theta}(\tilde{G}_i)=\tilde{G}_j$. \end{theorem} \begin{proof} The existence of a torus $T_0$, an injective restricted Lie algebra homomorphism $\eta:{\mathfrak g}\longrightarrow\hat{\mathfrak g}=\mathop{\rm Lie}\nolimits(\tilde{G}\times T_0)=\tilde{\mathfrak g}\oplus{\mathfrak t}_0$, and a toral algebra ${\mathfrak s}_1$ such that $\hat{\mathfrak g}=\eta({\mathfrak g})\oplus{\mathfrak s}_1$ was proved by Premet \cite[Lemma 4.1]{comp} and Gordon-Premet \cite[6.2]{gandp}. Identify each ${\mathfrak g}_i$ with its image $\eta({\mathfrak g}_i)\subseteq\tilde{\mathfrak g}_i$. Define an automorphism $\phi$ of the restricted Lie algebra $\hat{\mathfrak g}$ by $\phi(\eta(x))=\eta(d\theta(x))$ for $x\in{\mathfrak g}$, $\phi(s)=s$ for $s\in{\mathfrak s}_1$ and linear extension to all of $\hat{\mathfrak g}$. 
The main idea of our proof is to find $\phi$-stable restricted subalgebras $\overline{\mathfrak g}_i,{\mathfrak s}_0$, and $\overline{\mathfrak g}_i\oplus\overline{\mathfrak g}_j$ of $\hat{\mathfrak g}$ with ${\mathfrak g}_i\subseteq\overline{\mathfrak g}_i\cong\tilde{\mathfrak g}_i,{\mathfrak s}_0\cong{\mathfrak t}_0$ and $\hat{\mathfrak g}=\sum\overline{\mathfrak g}_i\oplus{\mathfrak s}_0$. {\bf Step 1. The toral algebra ${\mathfrak s}_0$.} Let $\hat{\mathfrak z}={\mathfrak z}(\hat{\mathfrak g}),\tilde{\mathfrak z}={\mathfrak z}(\tilde{\mathfrak g})$ and ${\mathfrak z}_i={\mathfrak z}({\mathfrak g}_i)$. Clearly $\hat{\mathfrak z}=\tilde{\mathfrak z}\oplus{\mathfrak t}_0=\eta({\mathfrak z})\oplus{\mathfrak s}_1$ and $\tilde{\mathfrak z}=\sum{\mathfrak z}_i={\mathfrak z}({\mathfrak g}')$. Hence $\tilde{\mathfrak z}\subseteq\hat{\mathfrak z}$ are $\phi$-stable toral algebras. The restriction of $\phi$ to ${\hat{\mathfrak z}}^{tor}$ has order 1 or 2. Therefore by Maschke's theorem there is a $\phi$-stable ${\mathbb F}_p$-vector space ${\mathfrak s}_0^{tor}$ such that ${\hat{\mathfrak z}}^{tor}={\tilde{\mathfrak z}}^{tor}\oplus{\mathfrak s}_0^{tor}$. Let ${\mathfrak s}_0$ be the toral algebra in $\hat{\mathfrak g}$ generated by ${\mathfrak s}_0^{tor}$. Using the same argument as in the proof of Lemma \ref{redcase} we can choose a toral basis for ${\mathfrak s}_0$ consisting of eigenvectors for $\phi$. This basis can be used to construct an isomorphism of toral algebras $f_0:{\mathfrak s}_0\longrightarrow{\mathfrak t}_0$ and an involutive automorphism $\theta_0:T_0\longrightarrow T_0$ such that the following diagram commutes: \begin{diagram} {\mathfrak s}_0 & \rTo^{f_0} & {\mathfrak t}_0 \\ \dTo^\phi & & \dTo^{d\theta_0} \\ {\mathfrak s}_0 & \rTo^{f_0} & {\mathfrak t}_0 \\ \end{diagram} {\bf Step 2. The subalgebra $\overline{\mathfrak g}_i$, for $\theta$-stable $G_i$} If $\tilde{G}_i=G_i$, there is nothing to prove. 
So assume $\tilde{G}_i=\mathop{\rm GL}\nolimits(V_i)$ and $p\,|\mathop{\rm dim}\nolimits V_i$. Let $\Delta_i$ be the subset of $\Delta$ corresponding to $G_i$. We define ${\mathfrak m}_i=\sum_{j\neq i}{\mathfrak g}_j$ and ${\mathfrak n}_i={\mathfrak z}_{\hat{\mathfrak g}}({\mathfrak m}_i)\cap\hat{\mathfrak h}$. Clearly $\sum_{j\neq i}{\mathfrak z}_j\oplus{\mathfrak s}_0\subseteq{\mathfrak n}_i=\sum_{j\neq i}{\mathfrak z}_j\oplus\tilde{\mathfrak h}_i\oplus{\mathfrak s}_0\subseteq\hat{\mathfrak h}$ are $\phi$-stable toral algebras. Hence there is a $\phi$-stable toral algebra $\overline{\mathfrak h}_i$ containing ${\mathfrak h}_i$ such that ${\mathfrak n}_i=\overline{\mathfrak h}_i\oplus\sum_{j\neq i}{\mathfrak z}_j\oplus{\mathfrak s}_0$. By \cite[4.2]{me}, the maps $d\alpha|_{\overline{\mathfrak h}_i}$ with $\alpha\in\Delta_i$ are linearly independent. It follows that $\overline{\mathfrak h}_i$ and ${\mathfrak g}_i$ together generate a restricted Lie algebra $\overline{\mathfrak g}_i$ isomorphic to $\tilde{\mathfrak g}_i$. Let $f_i:\overline{\mathfrak g}_i\longrightarrow\tilde{\mathfrak g}_i$ be an isomorphism such that $f_i(x)=x$ for all $x\in{\mathfrak g}_i$. Then by Lemma \ref{GLautos} there exists a unique involutive automorphism $\theta_i:\tilde{G}_i\longrightarrow\tilde{G}_i$ such that the following diagram commutes: \begin{diagram} \overline{\mathfrak g}_i & \rTo^{f_i} & \tilde{\mathfrak g}_i \\ \dTo^\phi & & \dTo^{d\theta_i} \\ \overline{\mathfrak g}_i & \rTo^{f_i} & \tilde{\mathfrak g}_i \\ \end{diagram} {\bf Step 3. The subalgebras $\overline{\mathfrak g}_i,\overline{\mathfrak g}_j$ when $\theta(G_i)=G_j$.} Once again we may assume that $\tilde{G}_i=\mathop{\rm GL}\nolimits(V_i)$ and $p|\mathop{\rm dim}\nolimits V_i$. We set $\overline{\mathfrak g}_i=\tilde{\mathfrak g}_i,\overline{\mathfrak g}_j=\phi(\tilde{\mathfrak g}_i)$.
We have only to show that $\hat{\mathfrak g}=\overline{\mathfrak g}_i\oplus\overline{\mathfrak g}_j\oplus\sum_{k\neq i,j}\tilde{\mathfrak g}_k\oplus{\mathfrak s}_0$. Let $\Delta_i,\Delta_j$ be the subsets of $\Delta$ corresponding respectively to $G_i,G_j$ and let ${\mathfrak n}_{(i,j)}=\{ h\in\hat{\mathfrak h}|\,d\alpha(h)=0\,\forall\,\alpha\in\Delta\setminus(\Delta_i\cup\Delta_j)\}$. Clearly ${\mathfrak n}_{(i,j)}=\tilde{\mathfrak h}_i\oplus\tilde{\mathfrak h}_j\oplus\sum_{k\neq i,j}{\mathfrak z}_k\oplus{\mathfrak s}_0$. The automorphism of $\Phi$ induced by $\theta$ sends $\Delta_i$ onto $\Delta_j$. Hence the differentials $d\alpha|_{\tilde{\mathfrak h}_i\oplus \phi(\tilde{\mathfrak h}_i)}$ for $\alpha\in\Delta_i\cup\Delta_j$ are linearly independent. It follows by dimensional considerations that $\tilde{\mathfrak h}_i\oplus \phi(\tilde{\mathfrak h}_i)\oplus\sum_{k\neq i,j}{\mathfrak z}_k\oplus{\mathfrak s}_0 = {\mathfrak n}_{(i,j)}$. Therefore $\tilde{\mathfrak g}_i\oplus \phi(\tilde{\mathfrak g}_i)\oplus\sum_{k\neq i,j}\tilde{\mathfrak g}_k\oplus{\mathfrak s}_0=\hat{\mathfrak g}$. It is now easy to see that there are isomorphisms $f_j:\overline{\mathfrak g}_j\longrightarrow\tilde{\mathfrak g}_j$, $\tau_j:\tilde{G}_i\longrightarrow\tilde{G}_j$ and an involutive automorphism $\theta_{(i,j)}$ of $\tilde{G}_i\times\tilde{G}_j$ such that $f_j(x)=x\;\forall x\in{\mathfrak g}_j$ and the following diagram is commutative: \begin{diagram} \overline{\mathfrak g}_i\oplus\overline{\mathfrak g}_j & \rTo^{(\mathop{\rm Id}\nolimits,f_j)} & \tilde{\mathfrak g}_i\oplus\tilde{\mathfrak g}_j \\ \dTo^\phi & & \dTo^{d\theta_{(i,j)}} \\ \overline{\mathfrak g}_i\oplus\overline{\mathfrak g}_j & \rTo^{(\mathop{\rm Id}\nolimits,f_j)} & \tilde{\mathfrak g}_i\oplus\tilde{\mathfrak g}_j \\ \end{diagram} where $\theta_{(i,j)}:\tilde{G}_i\times\tilde{G}_j\longrightarrow\tilde{G}_i\times\tilde{G}_j$ is given by $(g_i,g_j)\mapsto (\tau_j^{-1}(g_j),\tau_j(g_i))$.
We now let $f:\sum\overline{\mathfrak g}_i\oplus{\mathfrak s}_0=\hat{\mathfrak g}\longrightarrow\sum\tilde{\mathfrak g}_i\oplus{\mathfrak t}_0=\hat{\mathfrak g}$ and $\hat\theta:\tilde{G}\times T_0\longrightarrow\tilde{G}\times T_0$ be the maps obtained in the obvious way from the $f_i$ and the $\theta_i,\theta_{(i,j)}$ respectively. Then the following diagram is commutative: \begin{diagram} \hat{\mathfrak g} & \rTo^{f} & \hat{\mathfrak g} \\ \dTo^\phi & & \dTo^{d\hat{\theta}} \\ \hat{\mathfrak g} & \rTo^{f} & \hat{\mathfrak g} \\ \end{diagram} Let $\psi=f\circ\eta:{\mathfrak g}\longrightarrow\hat{\mathfrak g}$ and let ${\mathfrak t}_1=f({\mathfrak s}_1)$. Then $\psi,\tilde{\mathfrak g}_i,T_0,{\mathfrak t}_1$ satisfy the requirements of the theorem. \end{proof} \begin{corollary}\label{trace} Let $G$ satisfy the standard hypotheses (A),(B),(C). Suppose that $\mathop{\rm char}\nolimits k\neq 2$ and that $\theta$ is an involutive automorphism of $G$. Then the trace form in (C) may be chosen to be $\theta$-equivariant. \end{corollary} \begin{proof} To prove the corollary we construct a $\hat{\theta}$-equivariant trace form on $\hat{\mathfrak g}$ which restricts to a non-degenerate form on ${\mathfrak g}$. Recall that $\hat{\mathfrak g}=\tilde{\mathfrak g}\oplus{\mathfrak t}_0=\psi({\mathfrak g})\oplus{\mathfrak t}_1$. Identify ${\mathfrak g}$ with its image $\psi({\mathfrak g})$. Let $G_i$ be a minimal normal subgroup of $G^{(1)}$. As is well-known (see for example \cite[I.5]{sands}) there exists a non-degenerate trace form $\kappa_i:\tilde{\mathfrak g}_i\times\tilde{\mathfrak g}_i\longrightarrow k$ associated to a rational representation of $\tilde{G}_i$. Moreover, as $\tilde{\mathfrak g}_i$ is an indecomposable $\tilde{G}_i$-module, $\kappa_i$ is unique up to multiplication by a non-zero scalar. We will prove that $\kappa_i$ is invariant under any automorphism of $\tilde{G}_i$.
By Lemma \ref{GLautos} it suffices to prove this for a set of graph automorphisms $\gamma$ generating $\mathop{\rm Aut}\nolimits\tilde{G}_i / \mathop{\rm Int}\nolimits\tilde{G}_i$. Let $\gamma$ be such an automorphism and define a new trace form $\kappa_i^\gamma:(x,y)\mapsto \kappa_i(d\gamma(x),d\gamma(y))$. Then $\kappa_i^\gamma$ is a scalar multiple of $\kappa_i$. Hence it will suffice to find $(x,y)\in\tilde{\mathfrak g}_i\times\tilde{\mathfrak g}_i$ such that $\kappa_i(x,y)=\kappa_i^\gamma(x,y)\neq 0$. Assume first of all that $G_i$ is not of type $A$ (therefore $\tilde{G}_i=G_i$). Let $(B_i,T_i)$ be a fundamental pair for $\gamma$ and let $\Delta_i$ be the basis of the roots $\Phi_i=\Phi(G_i,T_i)$ corresponding to $B_i$. Let $\{h_{\alpha_i},e_\alpha|\alpha_i\in\Delta_i,\alpha\in\Phi_i\}$ be a Chevalley basis for ${\mathfrak g}_i$. We observe first of all that there exists $\alpha\in\Delta_i$ such that $\gamma(\alpha)=\alpha$. For type $D_n$ we choose $\alpha=\alpha_{n-2}$, and for type $E_6$ we choose $\alpha=\alpha_2$ (we use Bourbaki's numbering conventions \cite{bourbaki}). We have $d\gamma(e_\alpha)=ce_\alpha$ and $d\gamma(e_{-\alpha})=c'e_{-\alpha}$. But $[e_\alpha,e_{-\alpha}]=h_\alpha$, hence $cc'=1$. Therefore $\kappa_i^\gamma(e_\alpha,e_{-\alpha})=\kappa_i(e_\alpha,e_{-\alpha})$. Since $\kappa_i$ is non-degenerate and $T_i$-invariant, $\kappa_i(e_\alpha,e_{-\alpha})\neq 0$. Assume now that $G_i$ is of type $A$. In this case $G_i$ is isomorphic to $\mathop{\rm SL}\nolimits(V_i)$ and it will be sufficient to prove $\kappa_i^\gamma=\kappa_i$ for $\gamma:g\mapsto {^t}g^{-1}$. Recall that the ordinary trace form $\kappa_i(x,y)=\mathop{\rm tr}\nolimits(xy)$ is non-degenerate on $\tilde{\mathfrak g}_i$. Hence $\kappa_i^\gamma(x,y)=\kappa_i(-{^t}x,-{^t}y)=\mathop{\rm tr}\nolimits({^t}x {^t}y)=\mathop{\rm tr}\nolimits({^t}(yx))=\mathop{\rm tr}\nolimits(yx)=\mathop{\rm tr}\nolimits(xy)=\kappa_i(x,y)$. To construct the form $\kappa$ we proceed as follows.
For $d\hat{\theta}$-stable $\tilde{\mathfrak g}_i$ we choose a trace form $\kappa_i$ as above. For each pair $\tilde{\mathfrak g}_i,\tilde{\mathfrak g}_j$ with $d\hat{\theta}(\tilde{\mathfrak g}_i)=\tilde{\mathfrak g}_j$ we let $\kappa_i$ be a non-degenerate trace form on $\tilde{\mathfrak g}_i$, and define $\kappa_j$ on $\tilde{\mathfrak g}_j$ by $\kappa_j(x,y)=\kappa_i(d\hat{\theta}(x),d\hat{\theta}(y))$. Let $\hat{\mathfrak z}={\mathfrak z}(\hat{\mathfrak g}),\tilde{\mathfrak z}={\mathfrak z}(\tilde{\mathfrak g}),{\mathfrak z}={\mathfrak z}({\mathfrak g})$. It is easy to see that $\hat{\mathfrak z}=\tilde{\mathfrak z}\oplus{\mathfrak t}_0={\mathfrak z}\oplus{\mathfrak t}_1$. Moreover $\tilde{\mathfrak z}={\mathfrak z}({\mathfrak g}')\subseteq{\mathfrak z}$. Hence ${\mathfrak z}=\tilde{\mathfrak z}\oplus({\mathfrak z}\cap{\mathfrak t}_0)$. By the same argument as used in the proof of Lemma \ref{redcase} there exists a $\hat{\theta}$-stable toral algebra ${\mathfrak t}_2$ such that ${\mathfrak t}_0=({\mathfrak z}\cap{\mathfrak t}_0)\oplus{\mathfrak t}_2$. Let $\kappa_z$ be a non-degenerate $\hat{\theta}$-invariant form on ${\mathfrak z}\cap{\mathfrak t}_0$, and let $\kappa_t$ be such a form on ${\mathfrak t}_2$. Any $x\in\hat{\mathfrak g}$ can be expressed uniquely as $(\sum x_i)+ x_z + x_t$, with $x_i\in\tilde{\mathfrak g}_i,x_z\in{\mathfrak z}\cap{\mathfrak t}_0$, and $x_t\in{\mathfrak t}_2$. We define $\kappa(x,y)=\sum\kappa_i(x_i,y_i)+\kappa_z(x_z,y_z)+\kappa_t(x_t,y_t)$. It remains to show that the restriction of $\kappa$ to ${\mathfrak g}$ is non-degenerate. Let $x\in\hat{\mathfrak g}$ be such that $\kappa(x,y)=0\,\forall y\in{\mathfrak g}$. Then $\kappa_i(x_i,{\mathfrak g}_i)=0\,\forall i$, hence $x_i\in{\mathfrak z}_i$. Moreover $\kappa_z(x_z,{\mathfrak z}\cap{\mathfrak t}_0)=0$, hence $x_z=0$. Suppose $x_i\neq 0$. Let $\Delta_i=\{ \alpha_1,\alpha_2,\ldots\}$ be the subset of $\Delta$ corresponding to $G_i$, ordered in the standard way.
We have $x_i=\lambda([e_{\alpha_1},e_{-\alpha_1}]+2[e_{\alpha_2},e_{-\alpha_2}]+\ldots)$ for some $\lambda\neq 0$. By \cite[3.3]{me} there exists $h\in\tilde{\mathfrak h}_i$ such that $d\alpha_1(h)=1$, and $d\alpha(h)=0\;\forall\alpha\in\Delta\setminus\{\alpha_1\}$. Then $\kappa_i(x_i,h)=\lambda\kappa_i(e_{\alpha_1},e_{-\alpha_1})\neq 0$. This is a contradiction, hence $x_i=0\,\forall i$. It follows that $x\in{\mathfrak t}_2$. Since ${\mathfrak t}_2$ is central in $\hat{\mathfrak g}$, we have ${\mathfrak g}\cap{\mathfrak t}_2\subseteq({\mathfrak z}\cap{\mathfrak t}_0)\cap{\mathfrak t}_2=0$. Therefore the restriction of $\kappa$ to ${\mathfrak g}$ is non-degenerate. \end{proof} \section{Centralizers and Invariants} \label{sec:4} \subsection{Centralizers} \label{sec:4.1} The following lemma is an important step in \cite{kostrall}. Cor. \ref{trace} allows us to prove it by the same argument. \begin{lemma}\label{centdim} Let $x\in{\mathfrak p}$. Then $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(x)-\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak p}(x)=\mathop{\rm dim}\nolimits{\mathfrak k}-\mathop{\rm dim}\nolimits{\mathfrak p}$. \end{lemma} \begin{proof} Let $\kappa:{\mathfrak g}\times{\mathfrak g}\longrightarrow k$ be a non-degenerate $(\theta,G)$-equivariant symmetric bilinear form. By the $\theta$-equivariance $\kappa({\mathfrak k},{\mathfrak p})=0$. Let $x\in{\mathfrak p}$ and let $\kappa_x:{\mathfrak g}\times{\mathfrak g}\longrightarrow k$ be the alternating bilinear form defined by $\kappa_x(y,z)=\kappa([x,y],z)=\kappa(y,[z,x])$. Clearly $\kappa_x(y,z)=0$ for all $z\in{\mathfrak g}$ if and only if $y\in{\mathfrak z}_{\mathfrak g}(x)$. Hence $\kappa_x$ induces a non-degenerate alternating bilinear form $\overline{\kappa_x}:{\mathfrak g}/{{\mathfrak z}_{\mathfrak g}(x)}\times{\mathfrak g}/{{\mathfrak z}_{\mathfrak g}(x)}\longrightarrow k$. But now ${\mathfrak g}/{{\mathfrak z}_{\mathfrak g}(x)}={\mathfrak k}/{{\mathfrak z}_{\mathfrak k}(x)}\oplus{\mathfrak p}/{{\mathfrak z}_{\mathfrak p}(x)}$.
Furthermore ${\mathfrak k}/{{\mathfrak z}_{\mathfrak k}(x)}$ and ${\mathfrak p}/{{\mathfrak z}_{\mathfrak p}(x)}$ are $\overline{\kappa_x}$-isotropic subspaces, hence are maximal such, and their dimensions are equal. \end{proof} The following result will also be useful. \begin{lemma}\label{globalinf} Let $x\in{\mathfrak k}$ or ${\mathfrak p}$. Then $\mathop{\rm Lie}\nolimits(Z_G(x)^\circ)={\mathfrak z}_{\mathfrak g}(x)$ and $\mathop{\rm Lie}\nolimits(Z_K(x)^\circ)={\mathfrak z}_{\mathfrak k}(x)$. \end{lemma} \begin{proof} Clearly $\mathop{\rm Lie}\nolimits(Z_G(x)^\circ)\subseteq{\mathfrak z}_{\mathfrak g}(x)$. To show that $\mathop{\rm Lie}\nolimits(Z_G(x)^\circ)={\mathfrak z}_{\mathfrak g}(x)$, it will therefore suffice to show equality of dimensions. To do this we use the homomorphism $\psi:{\mathfrak g}\longrightarrow\hat{\mathfrak g}$ of Thm. \ref{redthm}. It is easy to see that $\mathop{\rm dim}\nolimits Z_G(x)^\circ=\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(x)$ if and only if $\mathop{\rm dim}\nolimits Z_{\hat{G}} (\psi(x))=\mathop{\rm dim}\nolimits {\mathfrak z}_{\hat{\mathfrak g}}(\psi(x))$. But equality is known for each of the components $\tilde{G}_i$ (see \cite[I.5.3]{sands}) hence for $\hat{G}$. Therefore $\mathop{\rm Lie}\nolimits(Z_G(x)^\circ)={\mathfrak z}_{\mathfrak g}(x)$. Now let $L=Z_G(x)^\circ,{\mathfrak l}=\mathop{\rm Lie}\nolimits(L)$. The restriction of $\theta$ to $L$ is a semisimple automorphism, hence $\mathop{\rm Lie}\nolimits((L\cap K)^\circ)={\mathfrak l}\cap{\mathfrak k}$ by \cite[\S 9.1]{bor}. \end{proof} \subsection{Regular Elements} \label{sec:4.2} We say that $x\in{\mathfrak p}$ is {\it regular} if $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(x)\leq\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(y)$ for all $y\in{\mathfrak p}$. We denote by ${\cal R}$ the open subset of regular elements in ${\mathfrak p}$.
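A rank-one sanity check (ours, and assuming $p$ odd): take $G=\mathop{\rm SL}\nolimits_2$ with $\theta(g)={}^tg^{-1}$, so that $K=\mathop{\rm SO}\nolimits_2$, ${\mathfrak k}$ is the space of antisymmetric matrices and ${\mathfrak p}$ the space of symmetric traceless matrices.

```latex
% Rank-one check of Lemma \ref{centdim} and of regularity (p odd).
For $x = \mathrm{diag}(1,-1) \in \mathfrak{p}$ one computes
$[x,\, e_{12} - e_{21}] = 2(e_{12} + e_{21}) \neq 0$, hence
\[
  \mathfrak{z}_{\mathfrak{k}}(x) = 0, \qquad
  \mathfrak{z}_{\mathfrak{p}}(x) = k \cdot x ,
\]
and indeed
$\dim \mathfrak{z}_{\mathfrak{k}}(x) - \dim \mathfrak{z}_{\mathfrak{p}}(x)
 = 0 - 1 = \dim \mathfrak{k} - \dim \mathfrak{p}$,
as Lemma~\ref{centdim} asserts. Moreover $\mathfrak{a} = k \cdot x$ is a
Cartan subspace, $\dim \mathfrak{z}_{\mathfrak{k}}(x)$ is clearly minimal,
and $x$ is regular.
```
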
Let ${\mathfrak a}$ be a Cartan subspace of ${\mathfrak p}$ and let $A$ be a maximal $\theta$-split torus of $G$ such that $\mathop{\rm Lie}\nolimits(A)={\mathfrak a}$ (Lemma \ref{maxsplittori}). We recall (\cite[3.1,3.2]{rich2}) that $Z_G(A)=M\cdot A$ (almost direct product) and ${\mathfrak g}^A={\mathfrak M}\oplus{\mathfrak a}$, where $M=Z_K(A)^\circ,{\mathfrak M}={\mathfrak k}^A=\mathop{\rm Lie}\nolimits(M)$, and that $\mathop{\rm dim}\nolimits{\mathfrak M}-\mathop{\rm dim}\nolimits{\mathfrak a}=\mathop{\rm dim}\nolimits{\mathfrak k}-\mathop{\rm dim}\nolimits{\mathfrak p}$. \begin{lemma}\label{regs} Let $x\in{\mathfrak p}$. The following are equivalent: (i) $x$ is regular, (ii) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(x)=\mathop{\rm dim}\nolimits{\mathfrak a}+\mathop{\rm dim}\nolimits{\mathfrak M}$, (iii) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak k}(x)=\mathop{\rm dim}\nolimits{\mathfrak M}$, (iv) $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak p}(x)=\mathop{\rm dim}\nolimits{\mathfrak a}$. \end{lemma} \begin{proof} Let ${\cal S}$ be the set of semisimple elements in ${\mathfrak p}$, which contains a dense open subset of ${\mathfrak p}$ by Cor. \ref{cart} and Thm. \ref{carts}. Hence ${\cal S}\cap{\cal R}$ is non-empty. The equivalence of the four conditions now follows immediately from Lemma \ref{centdim}. \end{proof} \begin{lemma} Let $x\in{\mathfrak p}$. The following are equivalent: (i) $x$ is regular, (ii) $K\cdot x$ is a $K$-orbit of maximal dimension in ${\mathfrak p}$, (iii) $\mathop{\rm codim}\nolimits_{\mathfrak p} K\cdot x=\mathop{\rm dim}\nolimits{\mathfrak a}$, (iv) $\mathop{\rm codim}\nolimits_{\mathfrak g} G\cdot x = \mathop{\rm dim}\nolimits{\mathfrak a}+\mathop{\rm dim}\nolimits{\mathfrak M}$. \end{lemma} \begin{proof} This follows immediately from Lemma \ref{globalinf} and Lemma \ref{regs}.
\end{proof} \subsection{Geometric Invariant Theory} \label{sec:4.3} Here we briefly recall the definitions and some important facts concerning Mumford's Geometric Invariant Theory. In positive characteristic this requires the fact that reductive groups are geometrically reductive, proved by Haboush in \cite{haboush}. For details we refer the reader to \cite{mum,luna,haboush}. Let $R$ be an affine algebraic group such that the connected component $R^\circ$ is reductive. Let $X$ be an affine variety on which $R$ acts. Denote the action by $r\cdot x$ for $r\in R,x\in X$. We always assume that the map $R\times X\longrightarrow X$, $(r,x)\mapsto r\cdot x$ is a morphism of varieties. There is an induced action of $R$ on the coordinate ring $k[X]$. The algebra of invariants $k[X]^R$ is finitely generated. Hence we can construct the affine variety $X\ensuremath{/ \hspace{-1.2mm}/} R=\mathop{\rm Spec}\nolimits(k[X]^R)$. The embedding $k[X]^R\hookrightarrow k[X]$ induces a morphism $\pi:X\longrightarrow X\ensuremath{/ \hspace{-1.2mm}/} R$. The affine variety $X\ensuremath{/ \hspace{-1.2mm}/} R$ is the {\it quotient} (of $X$ by $R$) and the map $\pi$ is called the {\it quotient morphism}. If there is possible ambiguity, we will use the notation $\pi_{X,R}$ or $\pi_X$ for the quotient morphism from $X$ to $X\ensuremath{/ \hspace{-1.2mm}/} R$. We have the following facts (see \cite{mum,luna,haboush}): {\it - $\pi$ is surjective. - If $X_1$ and $X_2$ are disjoint closed $R$-stable subsets of $X$, then there exists $f\in k[X]^R$ such that $f(x)=0$ for $x\in X_1$, and $f(x)=1$ for $x\in X_2$. - Let $\xi\in X\ensuremath{/ \hspace{-1.2mm}/} R$. The fibre $\pi^{-1}(\xi)$ is $R$-stable and contains a unique closed $R$-orbit, $T(\xi)$, which is also the unique minimal $R$-orbit in $\pi^{-1}(\xi)$. Hence $\pi$ determines a bijection between the set of closed $R$-orbits in $X$ and the ($k$-rational) points of $X\ensuremath{/ \hspace{-1.2mm}/} R$. 
- Let $x\in X$ and let $\xi\in X\ensuremath{/ \hspace{-1.2mm}/} R$. Then $\pi(x)=\xi$ if and only if $T(\xi)$ is contained in the closure of $R\cdot x$ in $X$. - Suppose $X$ is irreducible, and that there exists $x\in X$ such that $R\cdot x$ is closed and $\mathop{\rm dim}\nolimits R\cdot x\geq\mathop{\rm dim}\nolimits R\cdot y$ for all $y\in X$. Then $\pi$ is separable (\cite[9.3]{rich2}). - If $X$ is normal, then $X\ensuremath{/ \hspace{-1.2mm}/} R$ is normal. - Let $X,Y$ be two affine varieties admitting (algebraic) $R$-actions and let $f:X\longrightarrow Y$ be an $R$-equivariant morphism of varieties. There exists a unique morphism $\pi(f):X\ensuremath{/ \hspace{-1.2mm}/} R\longrightarrow Y\ensuremath{/ \hspace{-1.2mm}/} R$ such that the following diagram commutes:} \begin{diagram} X & \rTo^f & Y \\ \dTo^{\pi_{X,R}} & & \dTo^{\pi_{Y,R}} \\ X\ensuremath{/ \hspace{-1.2mm}/} R & \rTo^{\pi(f)} & Y\ensuremath{/ \hspace{-1.2mm}/} R \end{diagram} \begin{rk}\label{geoquot} Let $H$ be a reductive group and let $L_1,L_2$ be commuting reductive subgroups of $H$ such that $H=L_1\cdot L_2$. Let $X$ be an affine variety on which $H$ acts. Since $L_1$ commutes with $L_2$, it stabilizes the subring $k[X]^{L_2}$. Hence $L_1$ acts on the quotient $X\ensuremath{/ \hspace{-1.2mm}/} L_2$. Clearly $(k[X]^{L_2})^{L_1}=k[X]^H$. The quotient $(X\ensuremath{/ \hspace{-1.2mm}/} L_2)\ensuremath{/ \hspace{-1.2mm}/} L_1$ therefore identifies naturally with $X\ensuremath{/ \hspace{-1.2mm}/} H$. We will use the notation $\pi_{X,H/L_2}$ for the morphism $X\ensuremath{/ \hspace{-1.2mm}/} L_2\rightarrow X\ensuremath{/ \hspace{-1.2mm}/} H$ induced by the inclusion $k[X]^H\hookrightarrow k[X]^{L_2}$. (Using the notation above, $\pi_{X,H/L_2}=\pi_{X\ensuremath{/ \hspace{-1.2mm}/} L_2,L_1}$.) 
The following diagram is commutative: \begin{diagram} X & \rTo^{\pi_{X,L_2}} & X\ensuremath{/ \hspace{-1.2mm}/} L_2 \\ \dTo^{\pi_{X,L_1}} & & \dTo^{\pi_{X,H/L_2}} \\ X\ensuremath{/ \hspace{-1.2mm}/} L_1 & \rTo^{\pi_{X,H/L_1}} & X\ensuremath{/ \hspace{-1.2mm}/} H \end{diagram} \end{rk} \subsection{Unstable and closed $K$-orbits} \label{sec:4.4} Let $\rho:G\longrightarrow\mathop{\rm GL}\nolimits(V)$ be a rational representation. For $U\subset V$, we denote by $\overline{U}$ the closure of $U$ in $V$ (in the Zariski topology). Recall that an element $v\in V$ is {\it $G$-unstable} if $0\in\overline{\rho(G)(v)}$. It is well-known that if $\rho$ is the adjoint representation then an element of ${\mathfrak g}$ is $G$-unstable if and only if it is nilpotent. (This is true even if the characteristic is bad, see \cite[9.2.1]{barrich}.) \begin{lemma}\label{unstable} Let $x\in{\mathfrak p}$. Then $x$ is $K$-unstable if and only if it is nilpotent. \end{lemma} \begin{proof} Let $x\in{\mathfrak p}$ be $K$-unstable. Then $0\in\overline{K\cdot x}\subseteq\overline{G\cdot x}$, hence $x$ is $G$-unstable, therefore nilpotent. Suppose on the other hand that $x$ is nilpotent. Let $(B,T)$ be a fundamental pair for $\theta$, let $\Phi=\Phi(G,T)$, let $\Delta$ be the basis of $\Phi$ corresponding to $B$ and let $H=H(\Phi,\Delta)$ be the group of ${\mathbb Z}$-linear maps from the root lattice of $\Phi$ to ${\mathbb Z}$. By Kawanaka \cite{kawanaka} there exists a $\theta$-stable element $h\in H$ such that $x\in{\mathfrak g}(2;h)$ (see Sect. \ref{sec:5.2} for a more detailed account of Kawanaka's theorem). But for any $\theta$-stable $h\in H$ there is some $m\in{\mathbb N}$ and a cocharacter $\lambda:k^\times\longrightarrow (T\cap K)$ such that $(\mathop{\rm Ad}\nolimits\lambda(t))(e_\alpha)=t^{mh(\alpha)}e_\alpha$ for all $\alpha\in\Phi$. Hence $(\mathop{\rm Ad}\nolimits\lambda(t))(x)=t^{2m}x$ for all $t\in k^\times$, so $0\in\overline{K\cdot x}$ and $x$ is $K$-unstable. \end{proof} This allows us to describe the closed $K$-orbits in ${\mathfrak p}$.
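As an aside, Lemma \ref{unstable} can be checked by hand in the rank-one setting $G=\mathop{\rm SL}\nolimits_2$, $\theta(g)={}^tg^{-1}$, $K=\mathop{\rm SO}\nolimits_2$ (our illustration, assuming $p$ odd and fixing $i\in k$ with $i^2=-1$):

```latex
% Rank-one illustration of Lemma \ref{unstable} (p odd, i^2 = -1 in k).
An element
$x = \left(\begin{smallmatrix} a & b \\ b & -a \end{smallmatrix}\right)
 \in \mathfrak{p}$
satisfies $x^2 = (a^2 + b^2) I_2$, so $x$ is nilpotent if and only if
$a^2 + b^2 = 0$, and semisimple otherwise. For the nilpotent element
\[
  n = \begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}, \qquad n^2 = 0,
\]
the cocharacter $\lambda : k^\times \longrightarrow \mathrm{SO}_2$,
\[
  \lambda(t) = \frac{1}{2}
  \begin{pmatrix} t + t^{-1} & -i(t - t^{-1}) \\
                  i(t - t^{-1}) & t + t^{-1} \end{pmatrix},
\]
satisfies $\mathop{\rm Ad}\nolimits \lambda(t)(n) = t^{\pm 2} n$, so
$0 \in \overline{K \cdot n}$ and $n$ is $K$-unstable.
```
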
\begin{lemma}\label{closed} Let $x\in{\mathfrak p}$ and let $x=x_s+x_n$ be the Jordan-Chevalley decomposition of $x$. Then $K\cdot x_s$ is the unique closed (resp. minimal) orbit in $\overline{K\cdot x}$. \end{lemma} \begin{proof} By standard results of geometric invariant theory there is a unique closed orbit in $\overline{K\cdot x}$, which is also the unique minimal orbit. Let $y\in\overline{K\cdot x}$. Clearly $y$ is in the minimal orbit if and only if $\mathop{\rm dim}\nolimits Z_K(y)\geq \mathop{\rm dim}\nolimits Z_K(y')$ for all $y'\in\overline{K\cdot x}$. But by Lemmas \ref{centdim} and \ref{globalinf} this is true if and only if $\mathop{\rm dim}\nolimits Z_G(y)\geq \mathop{\rm dim}\nolimits Z_G(y')$ for all $y'\in\overline{K\cdot x}$. It is well-known that $G\cdot x_s$ is the unique closed orbit in $\overline{G\cdot x}$. Thus $\mathop{\rm dim}\nolimits Z_G(x_s)\geq\mathop{\rm dim}\nolimits Z_G(y)$ for all $y\in\overline{G\cdot x}$. It remains to show that $x_s\in\overline{K\cdot x}$. Let $L=Z_G(x_s)^\circ$. Then $L$ is a $\theta$-stable reductive group satisfying the standard hypotheses (A)-(C), and $x_s,x_n\in{\mathfrak l}=\mathop{\rm Lie}\nolimits(L)={\mathfrak z}_{\mathfrak g}(x_s)$. By Lemma \ref{unstable}, $x_n$ is $(K\cap L)^\circ$-unstable. Hence $x_s\in\overline{(K\cap L)^\circ\cdot x}$. Therefore $x_s\in\overline{K\cdot x}$. This completes the proof. \end{proof} \subsection{Chevalley Restriction Theorem} \label{sec:4.5} We now present a variant of the Chevalley Restriction Theorem. The proof follows Richardson's proof of the corresponding result for the group $G$. We begin with the following lemma, which is a direct analogue of \cite[11.1]{rich2}. Fix a maximal $\theta$-split torus $A$ of $G$ with `baby Weyl group' $W=N_G(A)/Z_G(A)\cong N_K(A)/Z_K(A)$ (\cite[\S 4]{rich2}). Let ${\mathfrak a}=\mathop{\rm Lie}\nolimits(A)$. \begin{lemma}\label{toriconj} Let $Y$ be a subset of ${\mathfrak a}$ and suppose that $\mathop{\rm Ad}\nolimits g(Y)\subseteq{\mathfrak a}$ for some $g\in K$.
Then there exists $w\in W$ such that $w\cdot y=\mathop{\rm Ad}\nolimits g(y)$ for all $y\in Y$. \end{lemma} \begin{proof} Let $L=Z_G(\mathop{\rm Ad}\nolimits g(Y))^\circ$ and ${\mathfrak l}=\mathop{\rm Lie}\nolimits(L)={\mathfrak z}_{\mathfrak g}(\mathop{\rm Ad}\nolimits g(Y))$: $L$ is $\theta$-stable, reductive and satisfies the standard hypotheses (A)-(C) of \S 3. Since ${\mathfrak a}\subseteq{\mathfrak l}$ and $\mathop{\rm Ad}\nolimits g({\mathfrak a})\subseteq{\mathfrak l}$, both are Cartan subspaces of ${\mathfrak l}\cap{\mathfrak p}$; hence by Thm. \ref{carts} applied to $L$ there exists $l\in (K\cap L)^\circ$ such that $\mathop{\rm Ad}\nolimits l(\mathop{\rm Ad}\nolimits g({\mathfrak a}))={\mathfrak a}$. Thus $n=lg\in N_K({\mathfrak a})=N_K(A)$ by Lemma \ref{maxsplittori}. But $l$ centralizes $\mathop{\rm Ad}\nolimits g(Y)$, hence $\mathop{\rm Ad}\nolimits n(y)=\mathop{\rm Ad}\nolimits g(y)$ for all $y\in Y$. \end{proof} Since $W$ is finite, every $W$-orbit in ${\mathfrak a}$ is a finite, hence closed, set of points; thus the set ${\mathfrak a}/W$ of $W$-orbits in ${\mathfrak a}$ has the structure of an affine variety with coordinate ring $k[{\mathfrak a}]^{W}$. \begin{theorem}\label{Chev} Let $A$ be a maximal $\theta$-split torus of $G$, and let $W=N_G(A)/Z_G(A)$. Let ${\mathfrak a}=\mathop{\rm Lie}\nolimits(A)$. Then the natural embedding $j:{\mathfrak a}\longrightarrow{\mathfrak p}$ induces an isomorphism of affine varieties $j':{\mathfrak a}/W\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$. Hence $k[{\mathfrak p}]^K$ is isomorphic to $k[{\mathfrak a}]^W$. \end{theorem} \begin{proof} Let $\pi_{\mathfrak p}=\pi_{{\mathfrak p},K}:{\mathfrak p}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ and let $\pi_{\mathfrak a}=\pi_{{\mathfrak a},W}:{\mathfrak a}\longrightarrow{\mathfrak a}/W$. Any $K$-invariant function on ${\mathfrak p}$ restricts to a $W$-invariant function on ${\mathfrak a}$. Hence there is a well-defined $k$-algebra homomorphism from $k[{\mathfrak p}]^K$ to $k[{\mathfrak a}]^W$.
Taking the induced map on prime ideal spectra we have a morphism $j'$ making the following diagram commutative: \begin{diagram} {\mathfrak a} & \rTo^j & {\mathfrak p} \\ \dTo^{\pi_{\mathfrak a}} & & \dTo^{\pi_{\mathfrak p}} \\ {\mathfrak a}/W & \rTo^{j'} & {\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K \end{diagram} By a standard result of geometric invariant theory the varieties ${\mathfrak a}/W$ and ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ are normal. Thus by \cite[\S AG. 18.2]{bor} it will suffice to show that $j'$ is bijective and separable. Recall that the points of ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ correspond bijectively with the set of closed $K$-orbits in ${\mathfrak p}$. Moreover by Lemma \ref{closed} the closed $K$-orbits in ${\mathfrak p}$ are precisely the semisimple orbits. But by Thm. \ref{carts} any semisimple orbit meets ${\mathfrak a}$. Hence $j'$ is surjective. Let $a,a'\in{\mathfrak a}$ be such that $\pi_{\mathfrak p}(a)=\pi_{\mathfrak p}(a')$. As $a,a'$ are semisimple they must be in the same $K$-orbit. But by Lemma \ref{toriconj} this implies that $w\cdot a=a'$ for some $w\in W$. Hence $\pi_{\mathfrak a}(a)=\pi_{\mathfrak a}(a')$. Therefore $j'$ is injective. It remains to show that $j'$ is separable. As ${\mathfrak p}$ is irreducible and the set of regular semisimple elements is non-empty, the quotient morphism $\pi=\pi_{{\mathfrak p},K}$ is separable (\cite[9.3]{rich2}). Moreover $\phi:K\times{\mathfrak a}\longrightarrow{\mathfrak p}$, $\phi(g,a)=\mathop{\rm Ad}\nolimits g(a)$ is a separable morphism by Cor. \ref{cart}. Thus $\pi\circ\phi:K\times{\mathfrak a}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ is separable. We consider the action of $K$ on $K\times{\mathfrak a}$ in which $g'\cdot(g,a)=(g'g,a)$. Since $\pi(\mathop{\rm Ad}\nolimits g(a))=\pi(a)$ for all $g\in K$ and $a\in{\mathfrak a}$, the composition $\pi\circ\phi$ factors through the action of $K$ on $K\times{\mathfrak a}$.
Note that ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ can be thought of as a $K$-variety with the trivial action. Hence there is a morphism $\sigma$ making the following diagram commutative: \begin{diagram} K\times {\mathfrak a} & \rTo^{\pi\circ\phi} & {\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K \\ \dTo^{\pi_{K\times{\mathfrak a},K}} & & \dTo^{\mathop{\rm Id}\nolimits_{{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K}} \\ (K\times {\mathfrak a})\ensuremath{/ \hspace{-1.2mm}/} K & \rTo^{\sigma} & {\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K \end{diagram} Since $\pi\circ\phi$ is separable, so is $\sigma$. Let $i:{\mathfrak a}\longrightarrow K\times{\mathfrak a}$, $i(a)=(e,a)$. Then it is easy to see that $\mu=\pi_{{K\times{\mathfrak a}},K}\circ i:{\mathfrak a}\rightarrow (K\times{\mathfrak a})\ensuremath{/ \hspace{-1.2mm}/} K$ is an isomorphism of varieties, hence that $\sigma\circ\mu:{\mathfrak a}\rightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ is separable. But $\sigma\circ\mu=j'\circ\pi_{\mathfrak a}$. Hence $j'$ is separable. This completes the proof of the theorem. \end{proof} Recall that $K^*=\{ g\in G\,|\;g^{-1}\theta(g)\in Z(G)\}$ normalizes $K$, and that $K^*= K\cdot F^*$, where $F^*=\{ a\in A| a^2\in Z(G)\}$ \cite[8.1]{rich2}. \begin{corollary}\label{invext} $k[{\mathfrak p}]^{K^*}=k[{\mathfrak p}]^K$. \end{corollary} \begin{proof} Clearly $k[{\mathfrak p}]^{K^*}\subseteq k[{\mathfrak p}]^K$. Hence it suffices to prove that any element of $k[{\mathfrak p}]^K$ is $K^*$-invariant. As $K$ is normal in $K^*$, $K^*$ acts on $k[{\mathfrak p}]^K$. Let $f\in k[{\mathfrak p}]^K$. To show that $f\in k[{\mathfrak p}]^{K^*}$ it will suffice to show that $a\cdot f=f$ for any $a\in F^*$. But $(a\cdot f)(x)=f(a^{-1}\cdot x)=f(x)$ for all $x\in{\mathfrak a}$, since $A$ acts trivially on ${\mathfrak a}$; hence $(j')^*(a\cdot f)=(j')^*(f)$. Since $(j')^*$ is an isomorphism by Thm. \ref{Chev}, it follows that $a\cdot f=f$. Thus $f\in k[{\mathfrak p}]^{K^*}$.
\end{proof} \subsection{$k[{\mathfrak p}]^K$ is a polynomial ring} \label{sec:4.6} Let ${\mathfrak g}$ be a complex semisimple Lie algebra, let ${\mathfrak t}$ be a Cartan subalgebra of ${\mathfrak g}$, and let $W$ be the Weyl group of ${\mathfrak g}$ acting on ${\mathfrak t}$. It is well-known from the classical theory that the algebra of invariants ${\mathbb C}[{\mathfrak t}]^W$ is generated by $r=\mathop{\rm dim}\nolimits{\mathfrak t}$ algebraically independent homogeneous generators of degrees $(m_1+1),(m_2+1),\ldots,(m_r+1)$, where the $m_i$ are the {\it exponents} of ${\mathfrak g}$. We will now show that an analogous statement is true for ${\mathfrak a}$. The proof is a straightforward application of Demazure's theorem \cite{dem} on Weyl group invariants. \begin{lemma}\label{demaz} Let $A$ be a maximal $\theta$-split torus of $G$ and let ${\mathfrak a}=\mathop{\rm Lie}\nolimits(A)$. Let $W=N_G({\mathfrak a})/Z_G({\mathfrak a})=N_G(A)/Z_G(A)$. There are $r$ algebraically independent homogeneous polynomials $f_1,f_2,\ldots,f_r$ (where $r=\mathop{\rm dim}\nolimits{\mathfrak a}$) such that $k[{\mathfrak a}]^W=k[f_1,f_2,\ldots,f_r]$. Moreover $$\sum_{w\in W} t^{l(w)}=\prod_{i=1}^r{\frac{1-t^{\deg f_i}}{1-t}}$$ where $l$ is the length function on $W$ corresponding to a basis of simple roots in $\Phi_A$. \end{lemma} \begin{proof} Let $T$ be a torus of rank $n$ and let ${\mathfrak t}=\mathop{\rm Lie}\nolimits(T)$. The character group $X(T)$ is a free abelian group of rank $n$. There is a natural isomorphism $X(T)\otimes_{\mathbb Z} k\rightarrow{\mathfrak t}^*$ induced by the map $\alpha\otimes 1\mapsto d\alpha$, which is equivariant with respect to any group $H$ of automorphisms of $T$. Hence $k[{\mathfrak t}]^H\cong S(X(T)\otimes_{\mathbb Z} k)^H$. In particular, $k[{\mathfrak a}]^{W}\cong S(X(A)\otimes_{\mathbb Z} k)^{W}$.
We recall that, according to Demazure's definition, a reduced root system is a triple ${\cal R}=(M,R,\rho)$, where $M$ is a free ${\mathbb Z}$-module of finite type, $R$ is a subset of $M$, and $\rho:\alpha\mapsto \alpha^\vee$ is a map from $R$ into the dual $M^*$ of $M$ such that: (a) $R$ is finite and $R\cap(2R)=\emptyset$, (b) For every $\alpha\in R, \alpha^\vee(\alpha)=2$, (c) If $\alpha,\beta\in R$, then $\beta-\alpha^\vee(\beta)\alpha\in R$, and $\beta^\vee-\beta^\vee(\alpha)\alpha^\vee\in R^\vee$, where $R^\vee=\rho(R)$. Let $\Phi_A^*$ be the subset of $\Phi_A$ consisting of all roots $\alpha$ such that $\alpha/m\in\Phi_A\Rightarrow m=\pm 1$. By \cite[\S 4]{rich2} there exists a map $\rho$ such that $(X(A),\Phi_A^*,\rho)$ is a root system in this sense. Moreover by \cite[4.3]{rich2}, $W$ is generated by the reflections $s_\alpha$ with $\alpha\in\Phi_A^*$. Finally, by Lemma \ref{pisgood}, $p$ is good for $\Phi_A^*$. Hence by \cite[Cor. to Thm. 2, Thm. 3]{dem} $S(X(A)\otimes_{\mathbb Z} k)^W$ is generated by $r$ algebraically independent homogeneous polynomials, of degrees $d_1,d_2,\ldots,d_r$ such that $$\sum_{w\in W}t^{l(w)}=\prod_{i=1}^r \frac{1-t^{d_i}}{1-t}.$$ \end{proof} We remark that the product $\prod_{i=1}^r \frac{1-t^{d_i}}{1-t}$ here may include a number of factors of the form $(1-t)/(1-t)=1$. \section{The nilpotent cone} \label{sec:5} \subsection{Equidimensionality} \label{sec:5.1} Let ${\cal N}={\cal N}({\mathfrak p})$ be the set of nilpotent elements of ${\mathfrak p}$. In general ${\cal N}$ is not irreducible (see for example Cor. \ref{splitcmpts}). However, we have the following straightforward result (Thm. 3 in \cite{kostrall}). We include the proof, which is similar to Kostant-Rallis', for the convenience of the reader. \begin{theorem}\label{nil1} Let ${\cal N}$ be the affine variety of all nilpotent elements in ${\mathfrak p}$, and let ${\cal N}_1,{\cal N}_2,\ldots,{\cal N}_m$ be the irreducible components of ${\cal N}$.
The number of $K$-orbits in ${\cal N}$ is finite. For each $i$, $\mathop{\rm codim}\nolimits_{\mathfrak p}{\cal N}_i = r = \mathop{\rm rk}\nolimits A$, where $A$ is a maximal $\theta$-split torus of $G$. Moreover, $K$ normalizes ${\cal N}_i$, and there is an open $K$-orbit in ${\cal N}_i$. An element of ${\cal N}_i$ is in the open $K$-orbit if and only if it is regular. \end{theorem} \begin{proof} Let $e\in{\cal N}$. Then $g\cdot e\in{\cal N}$ for any $g\in K$ (in fact for any $g\in K^*$). Hence $K$ normalizes ${\cal N}$. But $K$ is connected, therefore $K\cdot {\cal N}_i={\cal N}_i$ for each $i$. By \cite[Thm. D]{rich3} there are finitely many $K$-orbits in ${\cal N}$. Hence each irreducible component of ${\cal N}$ contains a unique open orbit. If $x\in{\mathfrak p}$, then $\mathop{\rm codim}\nolimits_{\mathfrak p} (K\cdot x)\geq r$ by Lemma \ref{regs}. Therefore $\mathop{\rm codim}\nolimits_{\mathfrak p}{\cal N}_i\geq r$. But by Lemmas \ref{unstable} and \ref{demaz}, ${\cal N}$ is the set of zeros of $r$ homogeneous polynomials $u_1,u_2,\ldots,u_r$, where $k[{\mathfrak p}]^K=k[u_1,u_2,\ldots,u_r]$. Therefore $\mathop{\rm codim}\nolimits_{\mathfrak p}{\cal N}_i\leq r$. The remaining statements follow at once. \end{proof} \subsection{Kawanaka's Theorem} \label{sec:5.2} In \cite{kawanaka}, Kawanaka generalised the Bala-Carter theory to classify nilpotent orbits in eigenspaces for automorphisms of semisimple Lie algebras. We now recall Kawanaka's theorem as it applies to the case of an involution. Let $(B,T)$ be a fundamental pair for $\theta$, let $\Delta$ be the basis of the roots $\Phi=\Phi(G,T)$ corresponding to $B$, and let $W_T=W(G,T)$ be the Weyl group. Let $\Lambda_r$ be the root lattice of $\Phi$ and let $H=H(\Phi,\Delta)$ be the abelian group of all homomorphisms from $\Lambda_r$ to ${\mathbb Z}$. An element $h\in H$ is uniquely determined by the values $h(\alpha_i)$ for $\alpha_i\in\Delta$. 
Hence we may describe an element of $H$ by means of a copy of the Dynkin diagram on $\Delta$ with weights attached to each node. Let $X(T)=\mathop{\rm Hom}\nolimits(T,k^\times)$ and let $Y(T)=\mathop{\rm Hom}\nolimits(k^\times,T)$. Denote by $\langle .\, ,.\rangle:X(T)\times Y(T)\longrightarrow{\mathbb Z}$ the natural $W$-equivariant, ${\mathbb Z}$-bilinear pairing, so that $\alpha(\lambda(t))=t^{\langle\alpha,\lambda\rangle}$ for all $t\in k^\times$. The pairing induces a homomorphism $Y(T)\rightarrow H$. We denote by $\overline\lambda$ the element of $H$ corresponding to $\lambda\in Y(T)$. Hence $\overline\lambda(\alpha)=\langle\alpha,\lambda\rangle$ for all $\alpha\in\Phi$. The image of $Y(T)$ is of finite index in $H$. Thus, for any $h\in H$ there exists a positive integer $m$ and a cocharacter $\lambda$ such that $\overline\lambda=mh$. Let $H^+$ be the positive Weyl chamber associated to $\Delta$: $h\in H^+\Leftrightarrow h(\alpha_i)\geq 0\;\forall \,\alpha_i\in\Delta$. The Weyl group $W_T$ acts naturally on $H$, and $\overline{w(\lambda)}=w(\overline\lambda)$ for any $\lambda\in Y(T)$. For any $h\in H$ there exists $w\in W_T$ and $h_+\in H^+$ such that $w(h)=h_+$. Moreover, $h_+$ is unique. For $h\in H$, let ${\mathfrak g}(i;h)=\sum_{h(\alpha)=i}{\mathfrak g}_\alpha$, $i\neq 0$, and ${\mathfrak g}(0;h)={\mathfrak t}\oplus\sum_{h(\alpha)=0}{\mathfrak g}_\alpha$. The decomposition ${\mathfrak g}=\oplus{\mathfrak g}(i;h)$ is a ${\mathbb Z}$-grading of ${\mathfrak g}$, and the $\overline\lambda$-grading coincides with the $(\mathop{\rm Ad}\nolimits\lambda)$-grading for $\lambda\in Y(T)$.
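For example, let ${\mathfrak g}=\mathfrak{sl}_2$, with ${\mathfrak t}$ the diagonal subalgebra and $\Phi=\{\pm\alpha\}$, and let $h(\alpha)=2$. Then $${\mathfrak g}(-2;h)={\mathfrak g}_{-\alpha},\qquad {\mathfrak g}(0;h)={\mathfrak t},\qquad {\mathfrak g}(2;h)={\mathfrak g}_\alpha,$$ and $h=\overline\lambda$ for the cocharacter $\lambda(t)=\mathop{\rm diag}\nolimits(t,t^{-1})$, since $\alpha(\lambda(t))=t^2$; here the $h$-grading is visibly the decomposition of ${\mathfrak g}$ into $(\mathop{\rm Ad}\nolimits\lambda)$-weight spaces.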
If $k={\mathbb C}$, there is a straightforward classification of nilpotent orbits via conjugacy classes of $\mathfrak{sl}(2)$-triples: any nilpotent element $e\in{\mathfrak g}$ can be embedded as the nilpositive element of an $\mathfrak{sl}(2)$-triple $\{ h,e,f\}$; moreover, there is a unique $G$-conjugate $h'$ of $h$ such that $h'\in{\mathfrak t}$ and $\alpha(h')\geq 0$ for all $\alpha\in\Delta$. (It was proved by Dynkin that $\alpha(h')\in\{ 0,1,2\}$ for all $\alpha\in\Delta$.) In this way one can associate to $e$ a unique element of $H(\Phi,\Delta)^+$, called the {\it weighted Dynkin diagram} associated to $e$. We denote the set of all weighted Dynkin diagrams by $H(\Phi,\Delta)_n$. Hence there is a one-to-one correspondence between the elements of $H(\Phi,\Delta)_n$ and the nilpotent conjugacy classes in ${\mathfrak g}$. This argument using $\mathfrak{sl}(2)$-triples is only valid if the characteristic is zero or large. However, Pommerening proved in \cite{pom1,pom2} that the nilpotent orbit structure is essentially the same in all good characteristics. Let $h\in H(\Phi,\Delta)_n$ and let $G(0)_h$ be the unique closed connected subgroup of $G$ such that $\mathop{\rm Lie}\nolimits(G(0)_h)={\mathfrak g}(0;h)$. There is an open $G(0)_h$-orbit in ${\mathfrak g}(2;h)$: let $N_h$ be a representative for the open orbit and set ${\mathfrak o}_h=G\cdot N_h$. The correspondence $h\mapsto{\mathfrak o}_h$ is one-to-one between the elements of $H(\Phi,\Delta)_n$ and nilpotent conjugacy classes in ${\mathfrak g}$. In good characteristic Pommerening replaced weighted Dynkin diagrams with {\it associated characters}. A cocharacter $\lambda$ is associated to $e$ if $e\in{\mathfrak g}(2;\lambda)$ and there is a Levi subgroup $L$ of $G$ such that $\lambda(k^\times)\subset L^{(1)}$ and $e$ is distinguished in $\mathop{\rm Lie}\nolimits(L)$. (A nilpotent element $x\in{\mathfrak g}$ is distinguished if $Z_{G^{(1)}}(x)^\circ$ is a unipotent group.) 
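For example, a regular nilpotent element $e=\sum_{\alpha\in\Delta}e_\alpha$ lies in ${\mathfrak g}(2;\lambda)$, where $\lambda$ is the cocharacter with $\langle\alpha,\lambda\rangle=2$ for all $\alpha\in\Delta$ (so that the corresponding weighted Dynkin diagram has every weight equal to $2$); since regular nilpotent elements are distinguished, one may take $L=G$ in the definition above, and $\lambda$ is associated to $e$.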
If $\lambda$ is an associated cocharacter for $e$ and $g\in Z_G(e)$, then $g\cdot\lambda$ is also associated to $e$; moreover, any two associated cocharacters for $e$ are conjugate by an element of $Z_G(e)^\circ$ (\cite[Prop. 11]{mcninch3}). Premet has recently given a short conceptual proof of Pommerening's theorem, valid in all good characteristics. The proof uses the theory of optimal cocharacters for $G$-unstable elements, also called the Kempf-Rousseau theory. If $\rho:G\longrightarrow\mathop{\rm GL}\nolimits(V)$ is a rational representation, then the Kempf-Rousseau theory attaches to a $G$-unstable vector $v\in V$ a collection of {\it optimal} cocharacters. In general the optimal cocharacters depend on the choice of a length function on the set of cocharacters in $G$. (See Sect. \ref{sec:6.2} for the details concerning the Kempf-Rousseau theory.) Let $h\in H(\Phi,\Delta)_n$. As observed in \cite[\S 2.4]{premnil}, there exists a (unique) cocharacter $\lambda:k^\times\longrightarrow T\cap G^{(1)}$ such that $\overline\lambda=h$. (Since this holds for simply-connected $G^{(1)}$, it holds for any isogenous image of $G^{(1)}$, hence for arbitrary reductive groups.) Let $U$ be the unique closed connected $T$-stable subgroup of $G$ such that $\mathop{\rm Lie}\nolimits(U)=\sum_{i>0}{\mathfrak g}(i;h)$ and let $P=P(\lambda)=Z_G(\lambda)\cdot U$ (a parabolic subgroup of $G$). Then, after choosing a suitable length function on the set of cocharacters in $G$, we have (see \cite[Thm. 2.3]{premnil} and \cite[3.5]{mcninch3}): \begin{theorem}[Premet]\label{premorbits} (a) $\lambda$ is optimal for $N_h$. (b) Let $C=Z_G(e)\cap Z_G(\lambda)$. Then $Z_G(e)\subset P$ and $Z_G(e)=C\cdot Z_U(e)$ (semidirect product): $C$ is the reductive part and $Z_U(e)$ the unipotent radical of $Z_G(e)$. (c) Let $S$ be a maximal torus of $C$ and let $L=Z_G(S)$. Then $e$ is a distinguished nilpotent element of $\mathop{\rm Lie}\nolimits(L)$ and $\lambda(k^\times)\subset L^{(1)}$. 
\end{theorem} Note that by (c) $\lambda$ is associated to $N_h$. It follows that the decomposition in (b) holds for arbitrary $e,\lambda$, where $e$ is nilpotent and $\lambda$ is associated to $e$. We wish to restate Kawanaka's theorem (for the case of an involution) in the language of associated cocharacters. Let $h\in H$ be $\theta$-stable. Define a subalgebra $\overline{\mathfrak g}_h$ of ${\mathfrak g}$ with graded components $\overline{\mathfrak g}_h(i)$ as follows: $\overline{\mathfrak g}_h(i)= \left\{ \begin{array}{ll} {\mathfrak k}(i;h) & \mbox{if $i = 0$ (mod 4),} \\ {\mathfrak p}(i;h) & \mbox{if $i = 2$ (mod 4),} \\ \{ 0\} & \mbox{otherwise.} \end{array} \right.$ Suppose further that $h_+\in H(\Phi,\Delta)_n$. Since $h$ is $W$-conjugate to $h_+$, there exists a unique cocharacter $\lambda:k^\times\longrightarrow T\cap G^{(1)}$ such that $\overline\lambda=h$. But $\theta(h)=h$, hence $\lambda(k^\times)\subset T\cap K\cap G^{(1)}$. The Lie algebra $\overline{\mathfrak g}_h$ is equal to ${\mathfrak g}^{d\psi}=\{ x\in{\mathfrak g}\,|\,d\psi(x)=x\}$, where $t_0=\lambda(\sqrt{-1})$ and $\psi=\mathop{\rm Int}\nolimits t_0\circ\theta$. Moreover, $\psi$ is of order 1,2 or 4, hence is semisimple. It follows that $\overline{\mathfrak g}_h=\mathop{\rm Lie}\nolimits((G^\psi)^\circ)$ and $\overline{G}_h=(G^\psi)^\circ$ is reductive. (This is true for any $\theta$-stable $h\in H$, see \cite{kawanaka}.) Let $\overline{G}_h(0)=Z_K(\lambda)$. Then $T(0)=(T\cap K)^\circ$ is a maximal torus of $\overline{G}_h$, and $\mathop{\rm Lie}\nolimits(\overline{G}_h(0))={\mathfrak k}^\lambda=\overline{\mathfrak g}_h(0)$. Following Kawanaka, $h$ is {\it slim} (with respect to $\theta$) if $\lambda(k^\times)\subset \overline{G}_h^{(1)}$. Let $\alpha\in\Phi=\Phi(G,T)$. Recall that $\theta$ induces an automorphism $\gamma$ of $\Phi$ stabilizing $\Delta$. Denote by ${\mathfrak g}_{(\alpha)}$ the span of the root spaces ${\mathfrak g}_\alpha$ and ${\mathfrak g}_{\gamma(\alpha)}$. 
If $\gamma(\alpha)\neq\alpha$, then ${\mathfrak g}_{(\alpha)}=({\mathfrak g}_{(\alpha)}\cap{\mathfrak k})\oplus({\mathfrak g}_{(\alpha)}\cap{\mathfrak p})$ and the dimension of each summand is 1. Let $\overline{\alpha}$ denote the restriction of $\alpha$ to $T(0)$. Note that $\overline\alpha=\overline\beta$ if and only if $\beta\in\{\alpha,\gamma(\alpha)\}$. We have $\Phi_h=\Phi(\overline{G}_h,T(0))=\{\overline\alpha\,|\,{\mathfrak g}_{(\alpha)}\cap\overline{\mathfrak g}_h\neq\{ 0\}\}$. Let $\alpha\in\Phi$. There are three possibilities: (i) $\gamma(\alpha)=\alpha$, (ii) $\gamma(\alpha)$ and $\alpha$ are orthogonal, (iii) $\gamma(\alpha)$ and $\alpha$ generate a root system of type $A_2$. Introduce corresponding elements $s_{(\alpha)}$ of $W$: (i) $s_{(\alpha)}=s_\alpha$, (ii) $s_{(\alpha)}=s_\alpha s_{\gamma(\alpha)}$, (iii) $s_{(\alpha)}=s_\alpha s_{\gamma(\alpha)} s_\alpha=s_{\gamma(\alpha)}s_\alpha s_{\gamma(\alpha)}=s_{\alpha+\gamma(\alpha)}$. We can embed the Weyl group $W_h=W(\Phi_h)$ in $W$: $W_h$ is generated by all $s_{(\alpha)}$ with $\overline\alpha\in\Phi_h$. Let $\Phi^+$ be the positive system in $\Phi$ determined by $\Delta$ and let $\Phi_h^+=\{\overline\alpha\in\Phi_h\,|\,\alpha\in\Phi^+\}$. Then $\Phi_h^+$ is a positive system in $\Phi_h$. We let $\Delta_h$ be the corresponding basis. Any $\theta$-stable element $h'$ of $H(\Phi,\Delta)$ gives rise to a well-defined element $\overline{h'}$ of $H(\Phi_h,\Delta_h)$. Kawanaka introduced a subset $H(\Phi,\Delta,\theta)'_n$ of $H$ in order to parametrise the nilpotent $K$-orbits in ${\mathfrak p}$: $h\in H(\Phi,\Delta,\theta)'_n$ if and only if: (i) $h_+\in H(\Phi,\Delta)_n$, (ii) $h$ is $\theta$-invariant, (iii) $h$ is slim with respect to $\theta$, (iv) $\overline{h}_+\in H(\Phi_h,\Delta_h)_n$. Let $W(0)=N_K(T)/Z_K(T)$ and let $W^\theta=\{ w\in W|\theta(w)=w\}$. Let $H(\Phi,\Delta,\theta)_n$ be a set of representatives for the $W(0)$-orbits in $H(\Phi,\Delta,\theta)'_n$. 
Kawanaka's theorem states that \cite[(3.1.5)]{kawanaka}: \begin{theorem}[Kawanaka] For each $h\in H(\Phi,\Delta,\theta)_n$ choose a representative $N_h$ of the open $\overline{G}_h(0)$-orbit in $\overline{\mathfrak g}_h(2)={\mathfrak p}(2;h)$. Then the correspondence $h\mapsto K\cdot N_h$ is one-to-one between elements of $H(\Phi,\Delta,\theta)_n$ and nilpotent $K$-orbits in ${\mathfrak p}$. We have $K\cdot N_h\subset{\mathfrak o}_{h_+}$, the $G$-orbit determined by $h_+$. Two orbits $K\cdot N_h$ and $K\cdot N_{h'}$ are contained in the same $G$-orbit if and only if $h_+=h'_+$. \end{theorem} (Kawanaka's theorem is stated in a much more general setting, which includes the case of an automorphism of $G$ of finite order prime to $p$.) In view of the remarks above, we have the following: \begin{corollary}\label{assoc} Let $e\in{\cal N}$. Then there exists a cocharacter $\lambda:k^\times\longrightarrow K$ which is associated to $e$. Any two such cocharacters are conjugate by an element of $Z_K(e)^\circ$. \end{corollary} \begin{proof} By Kawanaka's theorem there exists $g\in K$ and $h\in H(\Phi,\Delta,\theta)_n$ such that $g\cdot e=N_h$. But as we have already seen, there exists a unique cocharacter $\lambda:k^\times\longrightarrow T\cap K\cap G^{(1)}$ such that $\overline\lambda=h$. Moreover, $\lambda$ is associated to $N_h$. It follows that $g^{-1}\cdot\lambda$ is associated to $e$. Suppose $\lambda,\mu$ are associated cocharacters for $e$ such that $\lambda(k^\times),\mu(k^\times)\subset K$. There exists $g\in Z_G(e)^\circ$ such that $g\cdot\lambda=\mu$ (\cite[Prop. 11]{mcninch3}). Let $C=Z_G(e)\cap Z_G(\lambda)$: then $Z_G(e)^\circ=C^\circ\cdot Z_U(e)$ (semidirect product), where $Z_U(e)$ is the unipotent radical of $Z_G(e)$. Hence there exists $u\in Z_U(e)$ such that $u\cdot\lambda=\mu$. Since $e\in{\mathfrak p}$, $Z_U(e)$ is $\theta$-stable. But now $u^{-1}\theta(u)\in Z_G(\lambda)\cap Z_U(e)\;\Rightarrow\; u^{-1}\theta(u)=1$. 
By \cite[III.3.12]{sands} $u\in Z_K(e)^\circ$. \end{proof} This observation allows us to replace the notion of weighted Dynkin diagrams with that of associated cocharacters. If $e\in{\cal N}$ and $\lambda:k^\times\longrightarrow K$ is an associated cocharacter for $e$, we use the notation $\overline{G}_\lambda= (G^{\psi})^\circ,\overline{\mathfrak g}_\lambda=\mathop{\rm Lie}\nolimits(\overline{G}_\lambda)$, where $\psi=\mathop{\rm Int}\nolimits\lambda(\sqrt{-1})\circ\theta$. \begin{rk}\label{nonsc} The theorems of Kawanaka, Pommerening and Premet are true for arbitrary reductive $G$ such that $p$ is good. Hence Cor. \ref{assoc} is true without the assumptions (B),(C) of \S 3. If we assume only that $p$ is good for $G$, then we can define $x\in{\mathfrak p}$ to be regular if $\mathop{\rm dim}\nolimits Z_G(x)$ is minimal: then $\mathop{\rm dim}\nolimits G-\mathop{\rm dim}\nolimits Z_G(x)=\mathop{\rm dim}\nolimits{\mathfrak g}^A$ by Lemma \ref{maxsplittori} and \cite[3.2]{rich2}. (We don't in general have $\mathop{\rm dim}\nolimits {\mathfrak k}-\mathop{\rm dim}\nolimits {\mathfrak p} =\mathop{\rm dim}\nolimits {\mathfrak z}_{\mathfrak k}(x)-\mathop{\rm dim}\nolimits {\mathfrak z}_{\mathfrak p}(x)$ for all $x\in{\mathfrak p}$.) Let $G$ be simply-connected and semisimple and let $\tilde{G}$ be the group defined in \S 3. Then we can lift an involution of $G$ to $\tilde{G}$ by Lemma \ref{GLautos}. Hence Thm. \ref{nil1} is true for any semisimple simply-connected group. Let $G$ be an arbitrary semisimple group and let $\pi:G_{sc}\rightarrow G$ be the universal cover of $G$. Then by the argument in \cite[2.3]{premnil} $\pi$ induces a $G/Z(G)$-equivariant bijection ${\cal N}({\mathfrak g}_{sc})\longrightarrow{\cal N}({\mathfrak g})$. Moreover, any involutive automorphism of $G$ can be lifted to an involutive automorphism of $G_{sc}$. It follows that Thm. \ref{nil1} holds for any semisimple group with involution (assuming $p$ is good).
Note that if $p$ is good for $G$ then it is good for $\overline{G}_\lambda$. (This is immediate since $p\neq 2$, therefore $p$ can only be bad for $\overline{G}_\lambda$ if it is of exceptional type: but if $\overline{G}_\lambda$ is of exceptional type then so is $G$, and the semisimple rank of $G$ is greater than that of $\overline{G}_\lambda$.) \end{rk} \subsection{Semiregular Elements in Type $D_n$} \label{sec:5.3} Let $G$ be almost simple, simply-connected of type $D_n$, let $T$ be a maximal torus of $G$ and let $\Delta=\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ be a basis for $\Phi=\Phi(G,T)$, numbered in the standard way. Let ${\mathfrak g}=\mathop{\rm Lie}\nolimits(G)$ and let $\{ h_{\alpha_i},e_\alpha\,|\,\alpha_i\in\Delta,\alpha\in\Phi\}$ be a Chevalley basis for ${\mathfrak g}$. Let $\gamma$ be the graph automorphism which sends $\alpha_{n-1}\mapsto\alpha_n$, $\alpha_n\mapsto\alpha_{n-1}$, and fixes all other elements of $\Delta$. The following lemma is due to Premet. \begin{lemma}\label{sigmaexists} There exists an automorphism $\sigma$ of $G$ satisfying $d\sigma(e_\alpha)=e_{\gamma(\alpha)}$ for all $\alpha\in\Phi$. \end{lemma} \begin{proof} Since any automorphism of ${\mathfrak g}$ gives rise to an automorphism of the adjoint group, and hence by Lemma \ref{sccover} to an automorphism of $G$, it will suffice to show that there is an automorphism of ${\mathfrak g}$ satisfying $e_\alpha\mapsto e_{\gamma(\alpha)}$ for all $\alpha\in\Phi$. Let $\phi$ be the (unique) automorphism of ${\mathfrak g}$ which sends $e_\alpha$ to $e_{\gamma(\alpha)}$ for $\alpha\in\pm\Delta$. Let $I=\{\alpha_1,\alpha_2,\ldots,\alpha_{n-2}\}$ and let $\Phi_I$ be the subsystem of $\Phi$ generated by the elements of $I$. It is easy to see that $\phi(e_\alpha)=e_\alpha$ for any $\alpha\in\Phi_I$. Let $\alpha\in\Phi^+\setminus\Phi_I$. 
There are four possibilities: (i) $\alpha=\beta+\alpha_{n-1}$ for some $\beta\in\Phi_I$, (ii) $\alpha=\beta+\alpha_n$ for some $\beta\in\Phi_I$, (iii) $\alpha=\beta+\alpha_{n-1}+\alpha_n$ for some $\beta\in\Phi_I^+$, (iv) $\alpha=(\beta+\alpha_{n-1})+(\gamma+\alpha_n)$ for some $\beta,\gamma\in\Phi_I^+$ with $\beta+\alpha_{n-1},\gamma+\alpha_n\in\Phi$. For case (i), $e_\alpha=[e_\beta,e_{\alpha_{n-1}}]\mapsto [e_\beta,e_{\alpha_n}]=e_{\gamma(\alpha)}$. Similarly for case (ii). For (iii), $e_\alpha=[[e_\beta,e_{\alpha_{n-1}}],e_{\alpha_n}]= [[e_\beta,e_{\alpha_n}],e_{\alpha_{n-1}}]$. Hence $\phi(e_\alpha)=e_\alpha=e_{\gamma(\alpha)}$. Finally, if (iv) holds then $e_\alpha=\pm [e_{\beta+\alpha_{n-1}+\alpha_n},e_\gamma]$. But $\phi(e_{\beta+\alpha_{n-1}+\alpha_n})=e_{\beta+\alpha_{n-1}+\alpha_n}$ and $\phi(e_\gamma)=e_\gamma$, by the above. Hence $\phi(e_\alpha)=e_\alpha$. We have proved that $\phi(e_\alpha)=e_{\gamma(\alpha)}$ for any $\alpha\in\Phi^+$. But then by properties of the Chevalley basis $\phi(e_\alpha)=e_{\gamma(\alpha)}$ for any $\alpha\in\Phi$. \end{proof} \begin{rk} The existence of $\sigma$ clearly also holds if $G$ is of adjoint type. However, if $n$ is even and $G$ is intermediate (that is, neither simply-connected nor adjoint) then $\sigma$ does not in general exist. \end{rk} Recall that a nilpotent element $e\in{\mathfrak g}$ is {\it distinguished} if $Z_{G^{(1)}}(e)^\circ$ is a unipotent group, and $e$ is {\it semiregular} if $Z_G(e)$ is the product of $Z(G)$ and a (connected) unipotent group. (Hence a semiregular element is distinguished.) Let $h\in H(\Phi,\Delta)_n$ be the weighted Dynkin diagram corresponding to a semiregular orbit, and let $\lambda:k^\times\longrightarrow T$ be the unique cocharacter satisfying $\langle\alpha,\lambda\rangle=h(\alpha)$ for all $\alpha\in\Phi$ (this exists by \cite[2.4]{premnil}). Let $Y_\lambda$ be the open $Z_G(\lambda)$-orbit in ${\mathfrak g}(2;\lambda)$ and let $E\in Y_\lambda$. 
It follows from \cite[III.4.28(ii)]{sands} that $\sigma(\lambda(t))=\lambda(t)$, and that $E$ is $Z_G(\lambda)$-conjugate to an element of the form $\sum_{\beta\in\Gamma}e_\beta$, where $\Gamma$ is a $\gamma$-stable subset of $\{\alpha\in\Phi\,|\,h(\alpha)=2\}$. Hence: \begin{lemma}\label{typed} Let $e$ be a semiregular nilpotent element of ${\mathfrak g}$ and let $\mu$ be an associated cocharacter for $e$. After conjugating $e$ and $\mu$ by an element of $G$, if necessary, we may assume that $\mu(k^\times)\subset G^\sigma$ and $e\in{\mathfrak g}^\sigma$. \end{lemma} We also record the following result to be used in the next subsection. \begin{lemma}\label{semiregred} Let $G$ be any reductive group such that $p$ is good for $G$, and let $e$ be a distinguished nilpotent element of ${\mathfrak g}$. Then there exists a reductive subgroup $L$ of $G$ such that (i) $e$ is a semiregular element of $\mathop{\rm Lie}\nolimits(L)$, (ii) $p$ is good for $L$. \end{lemma} \begin{proof} For any $x\in{\mathfrak g}$, $Z_G(x)=Z(G)\cdot Z_{G^{(1)}}(x)$. Moreover, $e\in\mathop{\rm Lie}\nolimits(G^{(1)})$. Hence, after replacing $G$ by $G^{(1)}$, we may assume that $G$ is semisimple. We now prove the lemma by induction on the order of the group $\overline{A}(e)=Z_G(e)/Z(G)Z_G(e)^\circ$. If $\overline{A}(e)$ is trivial, then we are done. Otherwise, let $x$ be any element of $Z_G(e)\setminus Z(G)Z_G(e)^\circ$, and let $x=x_s x_u$ be the Jordan-Chevalley decomposition of $x$. Then $x_u\in Z_G(e)^\circ$, hence after replacing $x$ by $x_s$ we may assume that $x$ is semisimple. Let $L'=Z_G(x)^\circ$. Then $L'$ is a pseudo-Levi subgroup of $G$, hence is reductive and $p$ is good for $L'$. Since $e$ is distinguished in $G$ (hence in $L'$), $Z(L')^\circ$ is trivial and $Z_{L'}(e)^\circ=(Z_G(e)^\circ)^x$. Let $H=Z_{L'}(e)/Z(G)Z_{L'}(e)^\circ$ and let $\overline{A}_{L'}(e)=Z_{L'}(e)/Z(L')Z_{L'}(e)^\circ$.
Then $H\hookrightarrow Z_G(e)^x/Z(G)(Z_G(e)^\circ)^x$, hence $H$ can be considered as a subgroup of $\overline{A}(e)$. Moreover, $H$ maps surjectively onto $\overline{A}_{L'}(e)$, and the kernel is non-trivial; thus the order of $\overline{A}_{L'}(e)$ is strictly less than that of $\overline{A}(e)$. By the induction hypothesis, there exists a subgroup $L$ of $L'$ satisfying the conditions of the Lemma. \end{proof} \subsection{Regular Nilpotent Elements} \label{sec:5.4} Our goal is to prove that the regular nilpotent elements form a single $K^*$-orbit, where $K^*=\{ g\in G|\,g^{-1}\theta(g)\in Z(G)\}$. The following lemma is the key step. In view of Remark \ref{nonsc}, we assume until further notice only that $p$ is good for $G$. We use Bourbaki's numbering conventions on roots \cite{bourbaki}. \begin{lemma}\label{splitconj} Let $e$ be a nilpotent element of ${\mathfrak p}$ and let $\lambda:k^\times\longrightarrow K$ be associated to $e$. Then there exists $g\in G$ such that $(\mathop{\rm Int}\nolimits g)\circ\lambda$ is $\theta$-split. Equivalently $\mathop{\rm Int}\nolimits n(\lambda)=-\lambda$ where $n=g^{-1}\theta(g)$. \end{lemma} \begin{proof} Recall that if $p$ is good for $G$ then it is good for $\overline{G}_\lambda$ (resp. a pseudo-Levi subgroup of $G$). Hence, after replacing $G$ by $\overline{G}_\lambda$, we have only to prove the lemma under the assumption that $\theta=\mathop{\rm Int}\nolimits\lambda(\sqrt{-1})$ and that all weights of $\lambda$ on ${\mathfrak g}$ are even. Let $S$ be a maximal torus of $Z_G(\lambda)\cap Z_G(e)$. Then $Z_G(S)$ is a $\theta$-stable Levi subgroup of $G$ and $e$ is a distinguished element of $Z_G(S)$ (\cite[Prop. 2.5]{premnil}). Hence, after replacing $G$ by $Z_G(S)$, we may assume that $e$ is distinguished. Let $L$ be a reductive subgroup of $G$ such that $p$ is good for $L$ and $e$ is a semiregular element of $\mathop{\rm Lie}\nolimits(L)$ (Lemma \ref{semiregred}). 
Let $\mu$ be an associated cocharacter for $e$ in $L$: then $\mu$ is also an associated cocharacter for $e$ in $G$. Hence $\mu$ is $Z_G(e)$-conjugate to $\lambda$. Conjugating $L$ by some element of $Z_G(e)$, if necessary, we may assume that $\lambda(k^\times)\subset L$. It is well-known that $e\in\mathop{\rm Lie}\nolimits(L^{(1)})$ (see \cite[\S 2.3]{premnil}, for example). Replacing $G$ by $L^{(1)}$, we may assume that $G$ is semisimple and that $e$ is semiregular in ${\mathfrak g}$. Now if $\eta:G_{sc}\rightarrow G$ is the universal covering, then by Lemma \ref{sccover} there exists a unique involutive automorphism $\theta_{sc}$ of $G_{sc}$ which lifts $\theta$. By \cite[Rk. 1]{premnil} there is a (unique) cocharacter $\lambda_{sc}$ such that $\eta\circ\lambda_{sc}=\lambda$. Hence $\theta_{sc}=\mathop{\rm Int}\nolimits\lambda_{sc}(\sqrt{-1})$. To prove that $\lambda$ is $G$-conjugate to a $\theta$-split cocharacter, it will clearly suffice to prove that $\lambda_{sc}$ is $G_{sc}$-conjugate to a $\theta$-split cocharacter. Note that the statement of the Lemma does not depend on the choice of $e$: let $e_{sc}$ be any representative for the open $Z_{G_{sc}}(\lambda_{sc})$-orbit in ${\mathfrak g}_{sc}(2;\lambda_{sc})$. Replacing $G,\lambda$, and $e$ respectively by $G_{sc},\lambda_{sc}$ and $e_{sc}$, we may assume that $G$ is semisimple and simply-connected, and that $e$ is semiregular in ${\mathfrak g}$. Finally, let $G_1,G_2,\ldots,G_l$ be the minimal normal subgroups of $G$ and let ${\mathfrak g}_i=\mathop{\rm Lie}\nolimits(G_i)$, $1\leq i\leq l$. There is a unique expression $e=\sum e_i$, where each $e_i\in{\mathfrak g}_i$; thus $e_i$ is semiregular in ${\mathfrak g}_i$. Moreover $\theta$ is inner, hence each component $G_i$ is $\theta$-stable. We may assume therefore that $G$ is almost simple. Any regular nilpotent element is semiregular. In fact, there are no non-regular semiregular nilpotent elements except when $G$ is of type $D$ or $E$. 
If $G$ is of type $D_n$, then by Lemma \ref{typed} above there exists a non-trivial involutive automorphism $\sigma:G\longrightarrow G$ such that $\lambda(k^\times)\subset G^\sigma$ and $e\in{\mathfrak g}^\sigma$. Since $\theta=\mathop{\rm Int}\nolimits t_0$, where $t_0=\lambda(\sqrt{-1})$, $G^\sigma$ is also $\theta$-stable. The group $G^\sigma$ is semisimple, of type $B_{n-1}$. By Lemma \ref{sccover} we can replace $G$ by the universal covering of $G^\sigma$. (In fact this is unnecessary, as our argument below doesn't require the assumption of simply-connectedness.) Hence it will suffice to prove the lemma in the case where $e$ is semiregular and $G$ is not of type $D$. For type $E$ the semiregular orbits are as follows: $E_6(reg),E_6(a_1)$; $E_7(reg),E_7(a_1),E_7(a_2)$; $E_8(reg),E_8(a_1),E_8(a_2)$ (\cite{som,premnil,mcninchsom}). For each $\alpha\in\Phi$ denote by $U_\alpha$ the unique connected, unipotent $T$-stable subgroup of $G$ satisfying $\mathop{\rm Lie}\nolimits(U_\alpha)={\mathfrak g}_\alpha$. Let $\epsilon_\alpha:k\longrightarrow U_\alpha$, $\alpha\in\Phi$ be isomorphisms such that $t\epsilon_\alpha(y)t^{-1}=\epsilon_\alpha(\alpha(t)y)$ for all $t\in T$, $y\in k$, and such that $n_\alpha=\epsilon_\alpha(1)\epsilon_{-\alpha}(-1)\epsilon_\alpha(1)\in N_G(T)$ represents the reflection $s_\alpha\in W$. Note that $\theta(\epsilon_\alpha(t))= \left\{ \begin{array}{ll} \epsilon_\alpha(t) & e_\alpha\in{\mathfrak k}, \\ \epsilon_\alpha(-t) & e_\alpha\in{\mathfrak p}. \end{array} \right.$ Let $w_0$ be the longest element of $W$ with respect to the Coxeter basis $s_{\alpha}$, $\alpha\in\Delta$. Let $\hat\alpha$ be the highest root in $\Phi^+$ and let $\Phi_0$ be the set of roots in $\Phi$ which are orthogonal to $\hat\alpha$. Then $\Phi_0$ is a root subsystem of $\Phi$ with basis $\Delta_0=\{\alpha\in\Delta\,|\,\alpha\bot\hat\alpha\}$.
Moreover $w_0=s_{\hat\alpha}w_0(\Phi_0)$, where $w_0(\Phi_0)$ is the longest element of $W(\Phi_0)$ with respect to the Coxeter basis $\{ s_{\alpha}:\alpha\in\Delta_0\}$. Inductive application of this statement gives a description of $w_0$ as a product of reflections $s_\alpha$ in mutually orthogonal roots $\alpha\in\Phi$. We can now prove the lemma by means of the following observation. Suppose $\beta_1,\beta_2,\ldots,\beta_t$ are orthogonal roots with $e_{\beta_i}\in{\mathfrak p}$ for all $i$. Let $$g=\epsilon_{-\beta_1}(1/2)\epsilon_{-\beta_2}(1/2)\ldots\epsilon_{-\beta_t}(1/2)\epsilon_{\beta_1}(-1)\epsilon_{\beta_2}(-1)\ldots\epsilon_{\beta_t}(-1).$$ Then $g^{-1}\theta(g)=\prod_{i=1}^t \epsilon_{\beta_i}(1)\epsilon_{-\beta_i}(-1)\epsilon_{\beta_i}(1)=\prod_{i=1}^t n_i$, where $n_i = n_{\beta_i}$ for each $i$. Moreover $\theta=\mathop{\rm Int}\nolimits t_0$ and $t_0\in T$, hence the induced action of $\theta$ on $W$ is trivial. To show that $\lambda$ is conjugate to a $\theta$-split cocharacter, therefore, it will suffice to show that there is an element $w\in W$ which is conjugate to a product $s_{\beta_1}s_{\beta_2}\ldots s_{\beta_t}$, where the $\beta_i$ are orthogonal, $e_{\beta_i}\in{\mathfrak p}$, and such that $w\cdot\lambda=-\lambda$. Recall that $e$ is regular unless $G$ is of type $E$. {\it Type $A_n$.} In this case $w_0$ is conjugate to $\left\{ \begin{array}{ll} s_{\alpha_1}s_{\alpha_3}\ldots s_{\alpha_n} & \mbox{if $n$ is odd,} \\ s_{\alpha_1}s_{\alpha_3}\ldots s_{\alpha_{n-1}} & \mbox{if $n$ is even.} \end{array} \right.$ But $\langle\lambda,\alpha_i\rangle =2$, hence $e_{\alpha_i}\in{\mathfrak p}$ for all $i$. This proves the lemma in this case.
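The identity $g^{-1}\theta(g)=\prod_i n_i$ in the observation above reduces to a rank-one matrix computation, carried out independently for each orthogonal root $\beta_i$. A quick numerical sketch (outside the text's formalism, in $\mathop{\rm SL}_2({\mathbb R})$ with the standard realizations of the root subgroups, and with $\theta$ acting by $t\mapsto -t$ on both root subgroups, which is the case $e_{\pm\beta}\in{\mathfrak p}$):

```python
import numpy as np

def e_pos(t):
    # ε_β(t): upper unitriangular
    return np.array([[1.0, t], [0.0, 1.0]])

def e_neg(t):
    # ε_{-β}(t): lower unitriangular
    return np.array([[1.0, 0.0], [t, 1.0]])

def theta(m_factory, t):
    # θ(ε_{±β}(t)) = ε_{±β}(-t), since e_{±β} ∈ p
    return m_factory(-t)

# g = ε_{-β}(1/2) ε_β(-1), the single-root factor of the element in the observation
g = e_neg(0.5) @ e_pos(-1.0)
theta_g = theta(e_neg, 0.5) @ theta(e_pos, -1.0)

# n_β = ε_β(1) ε_{-β}(-1) ε_β(1), the representative of the reflection s_β
n_beta = e_pos(1.0) @ e_neg(-1.0) @ e_pos(1.0)

lhs = np.linalg.inv(g) @ theta_g
assert np.allclose(lhs, n_beta)                    # g^{-1} θ(g) = n_β
assert np.allclose(n_beta, [[0, 1], [-1, 0]])
```

Since the roots $\beta_i$ are orthogonal, the corresponding $\mathop{\rm SL}_2$-factors commute and the general identity follows factor by factor.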
{\it Type $B_n$.} Let $\beta_i= \left\{ \begin{array}{ll} \alpha_i+2\alpha_{i+1}+2\alpha_{i+2}+\ldots+2\alpha_{n} & \mbox{if $i$ is odd, $1\leq i\leq n$,} \\ \alpha_{i-1} & \mbox{if $i$ is even, $2\leq i\leq n$.} \end{array} \right.$ Then the $\beta_i$ are orthogonal, $e_{\beta_i}\in{\mathfrak p}$ for each $i$ and $w_0=s_{\beta_1}s_{\beta_2}\ldots s_{\beta_n}$. {\it Type $C_n$.} Let $\beta_i=2\alpha_i+2\alpha_{i+1}+\ldots+2\alpha_{n-1}+\alpha_n$ for $1\leq i\leq {n-1}$ and let $\beta_n=\alpha_n$. Then the $\beta_i$ are orthogonal, $e_{\beta_i}\in{\mathfrak p}$, and $w_0=s_{\beta_1}s_{\beta_2}\ldots s_{\beta_n}$. {\it Type $F_4$.} Let $\beta_1=\hat{\alpha}=2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4,\beta_2=\alpha_2+2\alpha_3+2\alpha_4,\beta_3=\alpha_2+2\alpha_3$ and $\beta_4=\alpha_2$. Clearly $e_{\beta_i}\in{\mathfrak p}$, the $\beta_i$ are orthogonal and $w_0=s_{\beta_1}s_{\beta_2}s_{\beta_3}s_{\beta_4}$. {\it Type $G_2$.} Let $\beta_1=3\alpha_1+2\alpha_2$ and $\beta_2=\alpha_1$. Then $w_0=s_{\beta_1}s_{\beta_2}$ is the required expression for $w_0$. {\it Type $E_6$.} Let $\beta_1=\hat{\alpha}=\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6,\beta_2=\alpha_1+\alpha_3+\alpha_4+\alpha_5+\alpha_6,\beta_3=\alpha_3+\alpha_4+\alpha_5,\beta_4=\alpha_4$. Then $w_0=s_{\beta_1}s_{\beta_2}s_{\beta_3}s_{\beta_4}$. If $e$ is regular, then $\langle\lambda,\alpha_i\rangle =2\;\forall i$, hence $e_{\beta_i}\in{\mathfrak p}$ for all $i$. This proves the lemma for $E_6(reg)$. Suppose therefore that $e$ is in the semiregular orbit $E_6(a_1)$. Then $\langle\lambda,\alpha\rangle =2$ for $\alpha_4\neq\alpha\in\Delta$, and $\langle\lambda,\alpha_4\rangle =0$. Thus $w_0 s_{\alpha_4}\cdot\lambda=-\lambda$. Hence it will suffice in this case to show that $s_{\beta_1}s_{\beta_2}s_{\beta_3}$ is conjugate to some element $s_{\gamma_1}s_{\gamma_2}s_{\gamma_3}\in W$ with $e_{\gamma_1},e_{\gamma_2},e_{\gamma_3}\in{\mathfrak p}$. Let $\alpha=\hat\alpha-\alpha_2$. 
Then $\alpha\in\Phi$ and $s_\alpha(\beta_i)= \left\{ \begin{array}{ll} \alpha_2 & \mbox{if $i=1$,} \\ -(\alpha_2+\alpha_3+2\alpha_4+\alpha_5) & \mbox{if $i=2$,} \\ -(\alpha_1+\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_6) & \mbox{if $i=3.$} \end{array} \right.$ Therefore $s_\alpha(w_0s_{\alpha_4})s_\alpha^{-1}$ has the required form. This completes the $E_6$ case. {\it Type $E_7$.} Let $\beta_1=\hat{\alpha},\beta_2=\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7,\beta_3=\alpha_7$, $\beta_4=\alpha_2+\alpha_3+2\alpha_4+\alpha_5,\beta_5=\alpha_2,\beta_6=\alpha_3,\beta_7=\alpha_5.$ We have $w_0=s_{\beta_1}s_{\beta_2}\ldots s_{\beta_7}$. If $e$ is regular, then $\langle \lambda,\alpha\rangle =2\;\forall\alpha\in\Delta$. If $e$ is of type $E_7(a_1)$, then $\langle\lambda,\alpha\rangle =2$ for $\alpha_4\neq\alpha\in\Delta$ and $\langle\lambda,\alpha_4\rangle =0$. If $e$ is of type $E_7(a_2)$ then $\langle\lambda,\alpha\rangle = \left\{ \begin{array}{ll} 2 & \mbox{if $\alpha\in\Delta\setminus\{\alpha_4,\alpha_6\}$,} \\ 0 & \mbox{if $\alpha=\alpha_4,\alpha_6$.} \end{array} \right.$ In each case we can see that $e_{\beta_i}\in{\mathfrak p}$ for all $i$. Hence by our earlier observation there exists $g$ such that $n_0=g^{-1}\theta(g)\in N_G(T)$ and $n_0 T=w_0$. {\it Type $E_8$.} For regular $e$ we have $\langle\lambda,\alpha\rangle =2\;\forall\,\alpha\in\Delta$, for subregular $e$ (type $E_8(a_1)$) $\langle\lambda,\alpha\rangle =2$ for all $\alpha_4\neq\alpha\in\Delta$, and $\langle\lambda,\alpha_4\rangle =0$, while for the final case $E_8(a_2)$, we have $$\langle\lambda,\alpha\rangle = \left\{ \begin{array}{ll} 2 & \mbox{if $\alpha\in\Delta\setminus\{\alpha_4,\alpha_6\}$,} \\ 0 & \mbox{if $\alpha=\alpha_4,\alpha_6$.} \end{array} \right.$$ Let $\hat\alpha$ be the longest element of $\Phi^+$ and let $\Phi_0$ be the subsystem of all roots orthogonal to $\hat\alpha$. 
Then $\Phi_0$ is a subsystem of $\Phi$ isomorphic to $E_7$, and $\{\alpha_1,\alpha_2,\ldots,\alpha_7\}$ is a basis for $\Phi_0$. Identify $\Phi_0$ with $E_7$ and let $\beta_1,\beta_2,\ldots,\beta_7$ be the orthogonal roots given for the $E_7$ case above. Then $w_0=s_{\hat\alpha}s_{\beta_1}s_{\beta_2}\ldots s_{\beta_7}$. Moreover, it is easy to see that $e_{\hat\alpha},e_{\beta_1},e_{\beta_2},\ldots,e_{\beta_7}\in{\mathfrak p}$. Hence there exists $g\in G$ such that $g^{-1}\theta(g)\in N_G(T)$ represents $w_0$. This completes the proof. \end{proof} Let $A$ be a maximal $\theta$-split torus of $G$. The roots $\Phi_A=\Phi(G,A)$ form a non-reduced root system \cite[4.7]{rich2}. Let $\Pi$ be a basis for $\Phi_A$. We can now use Lemma \ref{splitconj} to give a criterion for $e\in{\cal N}$ to be regular. \begin{lemma}\label{regconj} There exists a cocharacter $\omega:k^\times\longrightarrow A\cap{G^{(1)}}$ such that $\langle\omega,\alpha\rangle =2\;\forall\alpha\in\Pi$. Let $e\in{\cal N}$ and let $\lambda:k^\times\longrightarrow K$ be associated to $e$. Then $e$ is regular if and only if $\lambda$ is $G$-conjugate to $\omega$. Hence the set ${\cal N}_{reg}$ of regular nilpotent elements is contained in a single $G$-orbit. \end{lemma} \begin{proof} By Lemma \ref{splitconj}, $\lambda$ is $G$-conjugate to a $\theta$-split cocharacter $\mu$. But any two maximal $\theta$-split tori are conjugate by an element of $K$, hence we may assume that $\mu(k^\times)\subset A$. Moreover, we may assume after conjugating further by an element of $N_K(A)$, if necessary, that $\langle\mu,\alpha\rangle\geq 0$ for all $\alpha\in\Pi$. It follows from the properties of associated cocharacters (see for example \cite[Thm. 2.3(iv)]{premnil}) that $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(e)=\mathop{\rm dim}\nolimits{\mathfrak g}(0;\lambda)+\mathop{\rm dim}\nolimits{\mathfrak g}(1;\lambda)=\mathop{\rm dim}\nolimits{\mathfrak g}(0;\mu)+\mathop{\rm dim}\nolimits{\mathfrak g}(1;\mu)$. 
But $\mu(k^\times)\subset A$, hence $\mathop{\rm dim}\nolimits{\mathfrak g}(0;\mu)\geq \mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}({\mathfrak a})$. Thus by Lemma \ref{regs}, $e$ is regular if and only if $\mu$ is regular in $A$ and all weights of $\mathop{\rm Ad}\nolimits\mu$ on ${\mathfrak g}$ are even. Let $S$ be a maximal torus of $G$ containing $A$. By Lemma \ref{basis} there exists a basis $\Delta_S$ for the root system $\Phi_S=\Phi(G,S)$ such that every element of $\Pi$ can be written in the form $\beta|_A$ for some $\beta\in\Delta_S$. Hence by properties of weighted Dynkin diagrams, $\langle\mu,\alpha\rangle\in\{0,1,2\}$ for each $\alpha\in\Pi$. It follows that $e$ is regular if and only if $\langle\mu,\alpha\rangle=2$ for all $\alpha\in\Pi$. But there exists some regular nilpotent element; hence $\omega$ exists. \end{proof} \begin{rk}\label{eiseven} Let $S$ be a maximal torus of $G$ containing $A$ and let $\Delta_S$ be a basis for $\Phi_S=\Phi(G,S)$, such that $\{\alpha|_A\, :\,\alpha\in\Delta_S,\alpha|_A\neq 1\}$ is a basis for $\Phi_A$. Let $I=\{\alpha\in\Delta_S\, :\,\alpha|_A=1\}$. Then $\omega$ satisfies $\langle\alpha,\omega\rangle= \left\{ \begin{array}{ll} 0 & \mbox{if $\alpha\in I$,} \\ 2 & \mbox{if $\alpha\in\Delta_S\setminus I$.} \end{array} \right.$ \end{rk} \begin{corollary} Let $e$ be a regular nilpotent element of ${\mathfrak p}$. Then $e$ is even. \end{corollary} \begin{proof} Let $\lambda$ be an associated cocharacter for $e$. Then $\lambda$ is conjugate to $\omega$. But by the remark above, $\omega$ is even. \end{proof} Fix a cocharacter $\omega$ as in Lemma \ref{regconj} and denote by $Y_\omega$ the open $Z_G(\omega)$-orbit in ${\mathfrak g}(2;\omega)$. \begin{lemma}\label{ainZ} Let $E\in Y_\omega$. Suppose $a\in A$ and $a\cdot E=E$. Then $a\in Z(G)$. \end{lemma} \begin{proof} Since $Z_G(\omega)\cdot E=Y_\omega$ and $Z_G(\omega)=Z_G(A)$, it follows that $a\cdot E'=E'$ for all $E'\in Y_\omega$.
Therefore $a\cdot E'=E'$ for all $E'\in{\mathfrak g}(2;\omega)$, which implies that $\alpha(a)=1\;\forall \alpha\in\Pi$. It follows that $a\in Z(G)$. \end{proof} \begin{lemma}\label{omega} Let $e\in{\cal N}$ be regular and let $\lambda:k^\times\longrightarrow K$ be associated to $e$. Let $g\in G$ be such that $g\cdot e\in{\mathfrak p}$ and $(g\cdot \lambda)(k^\times)\subset K$. Then $g\in K^*$. In particular $C=Z_G(\lambda)\cap Z_G(E)\subseteq K^*$. \end{lemma} \begin{proof} Let $g$ be such that $g\cdot e\in{\mathfrak p}$ and $(g\cdot\lambda)(k^\times)\subset K$, and let $x=g^{-1}\theta(g)$. Assume first of all that $x$ is semisimple. By \cite[6.3]{rich2} there exists a maximal $\theta$-split torus of $G$ containing $x$. Hence, after conjugating $e,\lambda$, and $g$ by a suitable element of $K$, we may assume that $x\in A$. Let $H=Z_G(x)^\circ$ and let ${\mathfrak h}=\mathop{\rm Lie}\nolimits(H)$. We claim that $\lambda$ is an associated cocharacter for $e$ in $H$. Let $d=\mathop{\rm min}\nolimits_{y\in{\mathfrak h}\cap{\mathfrak p}}\mathop{\rm dim}\nolimits Z_H(y)$; since $Z_G(A)\subset H$, $d=\mathop{\rm dim}\nolimits Z_G(A)$. Thus $Z_G(e)^\circ\subset H$. In particular, $C^\circ\subset H$. Recall that $C^\circ$ is a ($\theta$-stable) reductive subgroup of $G$. Hence we can choose a $\theta$-stable maximal torus $S$ of $C^\circ$ (\cite[7.5]{steinberg}). Let $L=Z_G(S)$, a $\theta$-stable Levi subgroup of $G$. By \cite[Prop. 2.5]{premnil}, $e$ is distinguished in ${\mathfrak l}=\mathop{\rm Lie}\nolimits(L)$ and $\lambda(k^\times)\subset L^{(1)}$. Clearly $x\in L$. Hence $e$ is distinguished in $Z_L(x)^\circ=Z_H(S)=Z_G(x,S)^\circ$. Let $T$ be a maximal torus of $Z_H(S)$ containing $\lambda(k^\times)$. Then $T=(T\cap L^{(1)})\cdot Z(L)^\circ = (T\cap Z_H(S)^{(1)})\cdot Z(L)^\circ$. Therefore $\lambda(k^\times)\subset Z_H(S)^{(1)}$, that is, $\lambda$ is an associated cocharacter for $e$ in $H$. Since $A\subset H$, we can consider $\Phi(H,A)$ as a subset of $\Phi_A$. 
Let $\Phi(H,A)^+=\Phi(H,A)\cap \Phi_A^+$ and let $\Pi_H$ be the corresponding basis for $\Phi(H,A)$. By Lemma \ref{regconj} there exists $\omega_H:k^\times\rightarrow A\cap H^{(1)}$ such that $\langle\alpha,\omega_H\rangle=2$ for all $\alpha\in\Pi_H$, and $h\in H$ such that $h\cdot\lambda=\omega_H$. But $\lambda$ is $G$-conjugate to $\omega$: hence, since $\omega$ and $\omega_H$ are in the same Weyl chamber in $Y(A)$, we must have $\omega=\omega_H$. Thus $h\cdot\lambda=\omega$ and $E=h\cdot e\in Y_\omega$. Moreover, $x\cdot E=E$. Now by Lemma \ref{ainZ}, $x\in Z(G)$. Suppose therefore that $x$ is not semisimple. Let $x=su$ be the Jordan-Chevalley decomposition of $x$. Since $x\in C$, $s,u\in C$ also. By \cite[III.3.15]{sands}, all unipotent elements of $Z_G(e)$ are in $Z_G(e)^\circ$. Hence by \cite[Pf. of Thm. 2.3, p.347]{premnil}, $u\in C^\circ$. But now $\theta$ acts non-trivially on the derived subgroup of (the reductive group) $C^\circ$, hence there exists a non-central $\theta$-split torus in $C^\circ$ (\cite[\S 1]{vust}). By the above, this contradicts the assumption that $e$ is regular. \end{proof} Thus we obtain our desired result. \begin{theorem}\label{regorbit} The set ${\cal N}_{reg}$ of regular nilpotent elements of ${\mathfrak p}$ is a single $K^*$-orbit. Hence $K^*$ permutes the irreducible components of ${\cal N}$ transitively and ${\cal N}$ is the closure of the regular nilpotent $K^*$-orbit. \end{theorem} \begin{proof} Let $e\in{\cal N}_{reg}$ and let $\lambda:k^\times\longrightarrow K$ be an associated cocharacter for $e$. By Lemma \ref{regconj}, ${\cal N}_{reg}=G\cdot e\cap{\mathfrak p}$. Suppose $g\in G$ and $e'=g\cdot e\in{\mathfrak p}$. By Lemma \ref{assoc} there exists an associated cocharacter $\mu:k^\times\longrightarrow K$ for $e'$. Moreover $\mu$ is $Z_G(e')^\circ$-conjugate to $g\cdot\lambda$. Hence there exists $h\in G$ such that $h\cdot e=e'=g\cdot e$ and $h\cdot\lambda=\mu$. But now by Lemma \ref{omega}, $h\in K^*$.
We have proved that any element of ${\cal N}_{reg}$ is $K^*$-conjugate to $e$. The regular elements are dense in each irreducible component by Thm. \ref{nil1}. Hence $\overline{{\cal N}_{reg}}={\cal N}$. This completes the proof. \end{proof} Thm. \ref{regorbit} generalises \cite[Thm. 6]{kostrall} to good positive characteristic. In \cite{sek}, Sekiguchi determined (for $k={\mathbb C}$) the involutions for which the set of nilpotent elements is non-irreducible. The proof comes down to checking which elements of the group $F=\{ a\in A\,|\,a^2\in Z(G)\}$ stabilize a particular irreducible component of ${\cal N}$. The calculations in the classical case were omitted. Fortunately, our analysis of associated cocharacters, together with the classification of involutions (\cite{springer}), considerably simplifies the task of generalizing Sekiguchi's results. We begin with the following: \begin{theorem}\label{gthetaorbs} Let $e,\lambda,C$ be as above. Let $Z=Z(G)$, $P=\{ g^{-1}\theta(g)\,|\, g\in G\}$, $\tau:G\longrightarrow P$, $g\mapsto g^{-1}\theta(g)$ and denote by $\Gamma$ the set of $G^\theta$-orbits in ${\cal N}_{reg}$. (a) The map from $K^*$ to $\Gamma$ given by $g\mapsto gG^\theta\cdot e$ is surjective and induces a one-to-one correspondence $K^*/G^\theta C\longrightarrow\Gamma$. (b) The morphism $\tau$ induces an isomorphism $K^*/G^\theta C\longrightarrow (Z\cap A)/{\tau(C)}$. Since $Z\subseteq C$, there is a surjective map $(Z\cap A)/{\tau(Z)}\longrightarrow (Z\cap A)/{\tau(C)}$. (c) The embedding $F^*\hookrightarrow K^*$ induces a surjective map $F^*/{F(Z\cap A)}\rightarrow\Gamma$. (d) The map $F^*\rightarrow Z\cap A$, $a\mapsto a^2$ induces an isomorphism of finite groups $F^*/{F(Z\cap A)}\longrightarrow (Z\cap A)/{(Z\cap A)^2}$. \end{theorem} \begin{proof} Since $K^*$ permutes the elements of ${\cal N}_{reg}$ transitively, the map in (a) from $K^*$ to $\Gamma$ is surjective and factors through $G^\theta C$.
Suppose $g,g'\in K^*$ and $gG^\theta\cdot e=g' G^\theta\cdot e$. Then there exists $x\in G^\theta$ such that $g^{-1}g'\cdot e=x\cdot e$. Moreover, since $g^{-1}g'\cdot\lambda$ is an associated cocharacter for $x\cdot e$ and $g^{-1}g'\cdot\lambda(k^\times)\subset K$, there exists $y\in Z_K(e)^\circ$ such that $yx\cdot\lambda = g^{-1}g'\cdot\lambda$ by Cor. \ref{assoc}. Thus $g\in g'CG^\theta=g' G^\theta C$. Hence the map $K^*/G^\theta C\rightarrow\Gamma$ is one-to-one. This proves (a). Since $K^*=\tau^{-1}(Z\cap A)$, the induced map $\overline{\tau}$ from $K^*$ to $(Z\cap A)/{\tau(C)}$ is surjective. Suppose therefore that $g\in K^*$ and that there exists $c\in C$ such that $g^{-1}\theta(g)=c^{-1}\theta(c)$. Then $gc^{-1}\in G^\theta$. Hence $g\in CG^\theta=G^\theta C$. It follows that the kernel of $\overline{\tau}$ is $G^\theta C$. We recall from \cite[8.1]{rich2} that $K^*=F^*\cdot K$. Hence there is a surjective map $F^*\rightarrow\Gamma$, $a\mapsto aG^\theta\cdot e$. Moreover, since $F\subset G^\theta$ and $az\cdot e = a\cdot e$ for any $a\in F^*,z\in (Z\cap A)$, this map factors through the cosets of $F(Z\cap A)$ in $F^*$. This proves (c). Finally, the homomorphism $F^*\rightarrow Z\cap A$, $a\mapsto a^2$ is surjective by the definition of $F^*$ and the fact that $A$ is a torus. Suppose $a^2 =z^2$ for some $z\in Z\cap A$. Then $(z^{-1}a)^2$ is the identity element. Hence $z^{-1}a\in F$, and so $a\in F(Z\cap A)$. This completes the proof. \end{proof} An involution is {\it split} (or of {\it maximal rank}) if the maximal $\theta$-split torus $A$ is a maximal torus of $G$, and {\it quasi-split} if $Z_G(A)$ is a maximal torus of $G$. Recall (see Sect. \ref{sec:2.2}) that, relative to a maximal torus $S$ containing $A$, there is a basis $\Delta_S$ for $\Phi_S$, a subset $I$ of $\Delta_S$, and a graph automorphism $\psi$ of $\Phi_S$ such that $\theta^*(\beta)=-w_I(\psi(\beta))$ for any $\beta\in\Phi_S$.
With this notation, $\theta$ is quasi-split if $I=\emptyset$, and is split if in addition the action of $\psi$ is trivial. \begin{corollary}\label{splitcmpts} Suppose $G$ is almost simple and simply-connected. (a) Let $\theta$ be split. The irreducible components of ${\cal N}$ are in one-to-one correspondence with the elements of $Z/Z^2$. Hence ${\cal N}$ has 4 components if $G$ is of type $D_{2n}$, has 2 components if $G$ is of type $A_{2n-1},B_n,C_n,D_{2n+1},E_7$, and is irreducible if $G$ is of type $A_{2n},E_6,E_8,F_4$, or $G_2$. (b) Let $\theta$ be quasi-split. Then the irreducible components of ${\cal N}$ are in one-to-one correspondence with the elements of $(Z\cap A)/{\tau(Z)}$. (c) Let $\theta$ be any involutive automorphism and let $G$ be of one of the following types: $A_{2n},E_6,E_8,F_4$, or $G_2$. Then ${\cal N}$ is irreducible. \end{corollary} \begin{proof} Since $G$ is semisimple and simply-connected, the isotropy subgroup $G^\theta$ is connected by \cite[8.1]{steinberg}. Hence the irreducible components of ${\cal N}$ are in one-to-one correspondence with the elements of $Z\cap A/\tau(C)$ by Thm. \ref{gthetaorbs}. If $\theta$ is split or quasi-split, then a regular nilpotent element of ${\mathfrak p}$ is also a regular element of ${\mathfrak g}$, hence $C=Z(G)$. Thus $\tau(C)=\tau(Z)$. If $\theta$ is split, then $A$ is a maximal torus of $G$, hence $Z\subset A$. This proves (a) and (b). For (c), the centre $Z$ of $G$ has odd order, hence so does $Z\cap A$. Therefore $(Z\cap A)/{(Z\cap A)^2}$ is trivial. But now by Thm. \ref{gthetaorbs}(d), ${\cal N}$ is irreducible. \end{proof} Note that by Rk. \ref{nonsc}, the description of the number of irreducible components of ${\cal N}$ holds without the assumption of simply-connectedness. 
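The component counts in part (a) of the corollary can be sanity-checked mechanically: for split $\theta$ the components are in bijection with $Z/Z^2$, and for $Z$ a product of cyclic groups of orders $m_i$ the order of $Z/Z^2$ is $\prod_i\gcd(m_i,2)$. A short script (an illustration only; it assumes the standard centres of the simply-connected almost simple groups, with the characteristic good so the orders are as in characteristic zero, and uses $n=3$ for the type-$A$ families):

```python
from math import gcd

# Centres of the simply-connected almost simple groups, as invariant factors
centres = {
    'A_{2n-1}': [6],       # Z/2n, illustrated with n = 3
    'A_{2n}':   [7],       # Z/(2n+1), odd order, illustrated with n = 3
    'B_n': [2], 'C_n': [2],
    'D_{2n}':   [2, 2],    # Z/2 x Z/2
    'D_{2n+1}': [4],       # Z/4
    'E_6': [3], 'E_7': [2], 'E_8': [1], 'F_4': [1], 'G_2': [1],
}

def n_components(invariant_factors):
    # |Z/Z^2| = product of gcd(m_i, 2) over the cyclic factors Z/m_i
    out = 1
    for m in invariant_factors:
        out *= gcd(m, 2)
    return out

assert n_components(centres['D_{2n}']) == 4
for t in ['A_{2n-1}', 'B_n', 'C_n', 'D_{2n+1}', 'E_7']:
    assert n_components(centres[t]) == 2
for t in ['A_{2n}', 'E_6', 'E_8', 'F_4', 'G_2']:
    assert n_components(centres[t]) == 1   # N is irreducible
```

The output reproduces exactly the case-by-case list in Cor. \ref{splitcmpts}(a).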
Using the notation $({\mathfrak g},{\mathfrak k})$, the split involutions are as follows: - Type $A_n$, $(\mathfrak{sl}(n+1),\mathfrak{so}(n+1))$ (or $(\mathfrak{gl}(n+1),\mathfrak{so}(n+1))$ if $p\, |\,(n+1)$), - Type $B_n$, $(\mathfrak{so}(2n+1),\mathfrak{so}(n)\oplus\mathfrak{so}(n+1))$, - Type $C_n$, $(\mathfrak{sp}(2n),\mathfrak{gl}(n))$, - Type $D_n$, $(\mathfrak{so}(2n),\mathfrak{so}(n)\oplus\mathfrak{so}(n))$, - Type $E_6$, $({\mathfrak e}_6,\mathfrak{sp}(8))$, - Type $E_7$, $({\mathfrak e}_7,\mathfrak{sl}(8))$, - Type $E_8$, $({\mathfrak e}_8,\mathfrak{so}(16))$, - Type $F_4$, $({\mathfrak f}_4,\mathfrak{sp}(6)\oplus\mathfrak{sl}(2))$, - Type $G_2$, $({\mathfrak g}_2,\mathfrak{sl}(2)\oplus\mathfrak{sl}(2))$. Hence Cor. \ref{splitcmpts} confirms no. 2 of Table 1, and nos. 1, 2, 3, 4, 6 of Table 2, listed in \cite[p. 161]{sek}. In Sect. 6.3 we deal with the remaining cases. \subsection{A $\theta$-equivariant Springer isomorphism} \label{sec:5.5} Assume once more that $G$ satisfies the conditions (A)-(C) of \S 3. Let ${\cal U}(G)$ be the closed set of unipotent elements in $G$ and let ${\cal N}({\mathfrak g})$ be the nilpotent cone in ${\mathfrak g}$. We let ${\cal U}=\{ u\in{\cal U}(G)\,|\,\theta(u)=u^{-1}\}$. By \cite[6.1]{rich2}, ${\cal U}\subset P$, where $P=\{ g^{-1}\theta(g)\,|\,g\in G\}$. It is well-known (see for example \cite{sands}) that if the characteristic of $k$ is good for $G$, then there exists a $G$-equivariant isomorphism of affine varieties $\psi:{\cal U}(G)\longrightarrow{\cal N}({\mathfrak g})$, sometimes known as the Springer map. It was also stated without proof in \cite[\S 10]{barrich} that there is a $K$-equivariant isomorphism from ${\cal U}$ to ${\cal N}$. We get the desired result in our case with the following proposition. Part (c) is due to McNinch (\cite[Thm. 35]{mcninch}).
\begin{proposition}\label{iso} There is a $G$-equivariant isomorphism of affine varieties $\Psi:{\cal U}(G)\longrightarrow{\cal N}({\mathfrak g})$ such that: (a) $\Psi(u^{-1})=-\Psi(u)$ ($u\in {\cal U}(G)$), (b) $\Psi(\theta(u))=d\theta(\Psi(u))$ ($u\in{\cal U}(G)$), (c) $\Psi(u^p)=\Psi(u)^{[p]}$ ($u\in{\cal U}(G)$). Moreover, if (i) $p>3$ or (ii) $G$ has no component of type $D_4$, then we may assume that (b) holds for all automorphisms of $G$. \end{proposition} \begin{proof} As ${\cal U}(G)\subseteq G^{(1)}$ and ${\cal N}({\mathfrak g})\subseteq\mathop{\rm Lie}\nolimits(G^{(1)})$ we may assume that $G$ is semisimple. Let $G_1,G_2,\ldots,G_l$ be the minimal normal subgroups of $G$ and let ${\mathfrak g}_i=\mathop{\rm Lie}\nolimits(G_i)$ for $1\leq i\leq l$. Then $G=G_1\times G_2\times\ldots \times G_l$ and ${\mathfrak g}={\mathfrak g}_1\oplus{\mathfrak g}_2\oplus\ldots\oplus{\mathfrak g}_l$. Let $H$ (resp. $L$) be the subgroup of $G$ generated by all $G_i$ isomorphic to $G_1$ (resp. all $G_i$ not isomorphic to $G_1$) and let ${\mathfrak h}=\mathop{\rm Lie}\nolimits(H),{\mathfrak l}=\mathop{\rm Lie}\nolimits(L)$. Then $G=H\times L$ and ${\mathfrak g}={\mathfrak h}\oplus{\mathfrak l}$. Moreover ${\cal U}(G)={\cal U}(H)\times{\cal U}(L),{\cal N}({\mathfrak g})={\cal N}({\mathfrak h})\oplus{\cal N}({\mathfrak l})$. Any automorphism of $G$ stabilizes $H$ and $L$. Hence we may assume that all minimal normal subgroups of $G$ are isomorphic to $G_1$. Identify $G$ with the product $G_1\times G_1\times\ldots \times G_1$ ($l$ times). Thus we write an element of $G$ as $(g_1,g_2,\ldots ,g_l)$, $g_i\in G_1$. The symmetric group $S_l$ acts on $G$: $\tau(g_1,g_2,\ldots,g_l)=(g_{\tau(1)},g_{\tau(2)},\ldots ,g_{\tau(l)})$. 
Furthermore, any automorphism of $G$ can be written in the form $\tau\circ(\theta_1,\theta_2,\ldots ,\theta_l)$, where $\theta_i\in\mathop{\rm Aut}\nolimits(G_1)$, $(\theta_1,\theta_2,\ldots ,\theta_l)(g_1,g_2,\ldots,g_l)=(\theta_1(g_1),\theta_2(g_2),\ldots,\theta_l(g_l))$ and $\tau\in S_l$. Thus it will suffice to prove the proposition in the case where $G$ is almost simple. There are three cases: (i) $G$ is not of type $A_n$, (ii) $G=\mathop{\rm SL}\nolimits(n,k)$ with $p\nmid n$, and (iii) $G=\mathop{\rm SL}\nolimits(n,k)$ with $p\, |\, n$. In case (iii) replace $G$ by $\mathop{\rm GL}\nolimits(n,k)$. In all three cases, it is well-known (see for example \cite[I.5]{sands}) that there exists a representation $\rho:G\longrightarrow\mathop{\rm GL}\nolimits(V)$ such that: (i) $d\rho:{\mathfrak g}\longrightarrow\mathfrak{gl}(V)$ is injective, (ii) The associated trace form $\kappa_\rho:{\mathfrak g}\times{\mathfrak g}\longrightarrow k$, $(x,y)\mapsto\mathop{\rm tr}\nolimits(d\rho(x)d\rho(y))$ is non-degenerate. We construct a new representation $\sigma:G\longrightarrow\mathop{\rm GL}\nolimits(V\oplus V)$ defined by $g\mapsto \left( \begin{array}{ll} \rho(g) & 0 \\ 0 & {^t}\rho(g)^{-1} \end{array} \right). $ The associated trace form $\kappa_\sigma=2\kappa_\rho$. Replacing $(\rho,V)$ by $(\sigma,V\oplus V)$, we may assume that $(\rho,V)$ satisfies the further properties: (iii) $d\rho({\mathfrak g})\subseteq\mathfrak{sl}(V)$, (iv) $\mathop{\rm tr}\nolimits(\rho(g)d\rho(x))=-\mathop{\rm tr}\nolimits(\rho(g^{-1})d\rho(x))$ for all $g\in G,x\in{\mathfrak g}$. Finally, construct another representation $\sigma:G\longrightarrow\mathop{\rm GL}\nolimits(V\oplus V)$ defined by $g\mapsto \left( \begin{array}{ll} \rho(g) & 0 \\ 0 & \rho(\theta(g)) \end{array} \right) \in\mathop{\rm GL}\nolimits(V\oplus V)$. By the $\theta$-invariance of the trace (see the proof of Thm. \ref{redthm}) $\kappa_\sigma=2\kappa_\rho$.
Moreover, it is easy to see that $\sigma$ satisfies (i)-(iv) and that: (v) $\mathop{\rm tr}\nolimits(\sigma(\theta(g))d\sigma(x))=\mathop{\rm tr}\nolimits(\sigma(g)d\sigma(d\theta(x)))$ for all $g\in G,x\in{\mathfrak g}$. Identify ${\mathfrak g}$ with its image $d\sigma({\mathfrak g})$ and let ${\mathfrak g}^\bot=\{ x\in\mathfrak{gl}(V)| \mathop{\rm tr}\nolimits(xy)=0\,\forall y\in{\mathfrak g}\}$. It follows from (ii) and (iii) that $\mathfrak{gl}(V)={\mathfrak g}\oplus{\mathfrak g}^\bot$ and that $I_V\in{\mathfrak g}^\bot$. Let $\iota:\mathop{\rm GL}\nolimits(V)\hookrightarrow\mathfrak{gl}(V)$ be the map embedding $\mathop{\rm GL}\nolimits(V)$ as a Zariski open subset of $\mathfrak{gl}(V)$ and let $\mathop{\rm pr}\nolimits_{\mathfrak g}:\mathfrak{gl}(V)\twoheadrightarrow{\mathfrak g}$ be the projection onto ${\mathfrak g}$ induced by the direct sum decomposition $\mathfrak{gl}(V)={\mathfrak g}\oplus{\mathfrak g}^\bot$. Introduce the map $\eta=\mathop{\rm pr}\nolimits_{\mathfrak g}\circ\iota\circ\sigma:G\longrightarrow{\mathfrak g}$. It follows from \cite[Cor. 6.3]{barrich} that $\eta$ restricts to an isomorphism $\Psi:{\cal U}(G)\longrightarrow{\cal N}({\mathfrak g})$. We claim that (iv) and (v) imply, respectively, (a) and (b) of the proposition. Identify $\mathop{\rm GL}\nolimits(V)$ with its image $\iota(\mathop{\rm GL}\nolimits(V))$. By (iv) we have $\kappa_\sigma(\eta(g),x)=-\kappa_\sigma(\eta(g^{-1}),x)$ for all $x\in{\mathfrak g}$. It follows that $\eta(g^{-1})=-\eta(g)$. This proves (a). By (v), $\kappa_\sigma(\eta(\theta(g)),x)=\kappa_\sigma(\eta(g),d\theta(x))$ for all $x\in{\mathfrak g}$. But $\kappa_\sigma(d\theta(\eta(g)),x)=\kappa_\sigma(\eta(g),d\theta(x))$ for all $x\in{\mathfrak g}$, hence $d\theta(\eta(g))=\eta(\theta(g))$ for any $g\in G$. This proves (b). The proof that $\eta(g^p)=\eta(g)^{[p]}$ is in \cite[Thm. 35]{mcninch}. It can be applied perfectly well here without affecting the rest of the proof. 
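The projection construction of $\eta$, and property (a), can be illustrated numerically in the smallest case. The sketch below (an illustration only, over ${\mathbb R}$ rather than an arbitrary field of good characteristic) takes $G=\mathop{\rm SL}_2$, $\rho$ the defining representation, and the doubled representation $g\mapsto\mathop{\rm diag}(g,{}^tg^{-1})$; the projection onto the image of ${\mathfrak g}$ along its trace-form complement is computed by solving the Gram system:

```python
import numpy as np

rng = np.random.default_rng(0)
Z2 = np.zeros((2, 2))

# sl_2 basis
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

def sigma(g):
    # doubled representation g -> diag(g, (g^T)^{-1})
    return np.block([[g, Z2], [Z2, np.linalg.inv(g.T)]])

def dsigma(x):
    # its differential x -> diag(x, -x^T)
    return np.block([[x, Z2], [Z2, -x.T]])

basis = [dsigma(x) for x in (E, F, H)]

def eta(g):
    # pr_g(ι(σ(g))): orthogonal projection w.r.t. the trace form onto dσ(sl_2)
    gram = np.array([[np.trace(a @ b) for b in basis] for a in basis])
    rhs = np.array([np.trace(sigma(g) @ b) for b in basis])
    coeffs = np.linalg.solve(gram, rhs)
    return sum(c * b for c, b in zip(coeffs, basis))

t = rng.standard_normal()
u = np.array([[1., t], [0., 1.]])                          # unipotent element
x = eta(u)
assert np.allclose(np.linalg.matrix_power(x, 4), 0)        # η(u) is nilpotent
assert np.allclose(eta(np.linalg.inv(u)), -x)              # property (a)
```

Here $\eta(u)$ comes out as $t\,d\sigma(E)$, matching the expected restriction of $\eta$ to an isomorphism ${\cal U}(G)\rightarrow{\cal N}({\mathfrak g})$.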
We have constructed an isomorphism $\Psi$ which is equivariant with respect to a given involution $\theta$. But $\mathop{\rm Aut}\nolimits G$ is generated over $\mathop{\rm Int}\nolimits G$ by the group $\Gamma$ of graph automorphisms (for $G=\mathop{\rm GL}\nolimits(n,k)$ with $p\, |\, n$ and $n\neq 2$ this follows from Lemma \ref{GLautos}). Moreover the group of graph automorphisms is either trivial, or cyclic of order 2 (for types $A_n\;(n\geq 2),D_n\;(n\geq 5)$, and $E_6$), or isomorphic to the symmetric group $S_3$ (for type $D_4$). Choose a set of coset representatives $C$ for $\Gamma$. If $p>3$ then we can easily adapt the proof above to make $\eta$ invariant with respect to every element of $C$. If there is a component of type $D_4$, then we need the assumption $p>3$ for the trace form $\kappa_\sigma$ to be non-zero. Hence, with these restrictions, it is straightforward to construct an isomorphism $\Psi$ satisfying (b) for every element of $C$. But then $\Psi$ satisfies (b) for every element of $\mathop{\rm Aut}\nolimits G$. \end{proof} \begin{corollary}\label{isocor} There is a $K^*$-equivariant isomorphism of affine varieties $\Psi:{\cal U}\longrightarrow{\cal N}$. \end{corollary} \section{A reductive subalgebra} \label{sec:6} \subsection{Preparation} \label{sec:6.1} Fix a cocharacter $\omega:k^\times\longrightarrow A$ as in Lemma \ref{regconj}, and let $Y_\omega=\{ x\in{\mathfrak g}(2;\omega)\,|\,\overline{Z_G(\omega)\cdot x}={\mathfrak g}(2;\omega)\}$, $Y_{-\omega}=\{ x\in{\mathfrak g}(-2;\omega)\,|\,\overline{Z_G(\omega)\cdot x}={\mathfrak g}(-2;\omega)\}$. Then $\omega$ (resp. $-\omega$) is an associated cocharacter for any $x\in Y_\omega$ (resp. $x\in Y_{-\omega}$). Let $S$ be a maximal torus of $G$ containing $A$. Recall (\cite{springer2} and \cite[1.3-4]{springer} - see also Sect.
\ref{sec:2.2}) that there exists a basis $\Delta_S$ for $\Phi_S$, a subset $I$ of $\Delta_S$, and a graph automorphism $\psi:\Phi_S\rightarrow\Phi_S$ (stabilizing $\Delta_S$ and $I$) such that: - $\theta^*(\alpha)=-w_I(\psi(\alpha))$, $\alpha\in\Phi_S$, - $\theta^*(\alpha)=\alpha$, $\alpha\in I$, - $\alpha|_A = 1$ if $\alpha\in I$, and for $\alpha,\beta\in\Delta_S\setminus I$, $\alpha|_A=\beta|_A\;\Leftrightarrow\;\beta\in\{\alpha,\psi(\alpha)\}$. - The set $\Pi=\{\alpha|_A\,:\,\alpha\in\Delta_S\setminus I\}$ is a basis for $\Phi_A$. Fix $S,\Delta_S,I,\psi,\Pi$ as above. Let $\Phi_A^*$ be the set of $\alpha\in\Phi_A$ such that $\alpha /2\notin\Phi_A$. For $\alpha\in\Phi_A$, denote by $\Psi_\alpha$ the set of all $\beta\in\Phi_S$ such that $\beta|_A$ is an integer multiple of $\alpha$: $\Psi_\alpha$ is a closed symmetric subset of $\Phi_S$. For $\beta\in\Phi_S$ let $U_\beta$ be the unique closed connected $S$-stable subgroup of $G$ such that $\mathop{\rm Lie}\nolimits(U_\beta)={\mathfrak g}_\beta$. Let $L_\alpha$ be the subgroup of $G$ generated by $S$ together with all subgroups $U_\beta,\beta\in\Psi_\alpha$. Then $L_\alpha$ is a $\theta$-stable connected reductive subgroup of $G$ and $U_\beta\subset L_\alpha$ if and only if $\beta\in\Psi_\alpha$ (\cite[Pf. of 4.6]{rich2}). In fact, we are only concerned here with the following case: \begin{lemma} Let $\alpha\in\Pi$. Then $L_\alpha$ is a standard Levi subgroup of $G$ relative to $(S,\Delta_S)$. \end{lemma} \begin{proof} Let $\beta\in\Delta_S$ be such that $\beta|_A=\alpha$. Then $\theta^*(\beta)=-w_I(\psi(\beta))\in -(\psi(\beta)+{\mathbb Z}I)$. Hence $\Psi_\alpha=\Phi_J$, where $J=I\cup\{\beta,\psi(\beta)\}$. \end{proof} Recall (\cite[\S 1]{vust}) that $Z_G(\omega)=Z_G(A)=M\cdot A$ (almost direct product), where $M=Z_K(A)^\circ$. It is clear from the definition that $Z_{L_\alpha}(\omega_\alpha)=Z_G(A)$.
Once more we denote by $\langle .\, ,.\rangle:X(A)\times Y(A)\longrightarrow {\mathbb Z}$ the natural pairing of abelian groups. \begin{corollary}\label{omegaalphacor} There exists a cocharacter $\omega_\alpha:k^\times\longrightarrow A\cap L_\alpha^{(1)}$ such that $\langle\alpha,\omega_\alpha\rangle=2$. We have $\omega_\alpha=\omega+\mu_\alpha$ for some $\mu_\alpha\in Y((Z(L_\alpha)\cap A)^\circ)$. \end{corollary} \begin{proof} All of our earlier results apply to the $\theta$-stable Levi subgroup $L_\alpha$ of $G$. In particular, there exists a cocharacter $\omega_\alpha:k^\times\longrightarrow A\cap L_\alpha^{(1)}$ such that $\langle\alpha,\omega_\alpha\rangle=2$ by Lemma \ref{regconj}. Now, clearly $(\omega_\alpha-\omega)\in Y(A)$. But $\langle\alpha,\omega_\alpha-\omega\rangle=0$, hence $\omega_\alpha-\omega\in Y(Z(L_\alpha)^\circ)$. \end{proof} Let $E=X(A)\otimes_{\mathbb Z}{\mathbb R}$ and let $(.\, ,.):E\times E\rightarrow {\mathbb R}$ be a $W_A$-equivariant inner product. The set $\Phi_A^*$ is a root system in $E$ with Cartan integers $\langle\alpha,\beta\rangle=2(\alpha,\beta)/{(\beta,\beta)}$, $\alpha,\beta\in\Pi$ (\cite[\S 4]{rich2}). \begin{lemma}\label{omegaalphacartan} We have $\langle\beta,\omega_\alpha\rangle=\langle\beta,\alpha\rangle$ for all $\alpha,\beta\in\Pi$. \end{lemma} \begin{proof} Let $E^*$ be the dual space to $E$, naturally identified with $Y(A)\otimes_{\mathbb Z}{\mathbb R}$. The inner product $(.\, ,.)$ induces a $W_A$-equivariant isomorphism $E\rightarrow E^*$. Note that for $x\in E$, $s_\alpha(x)=-x\;\Leftrightarrow\; x\in{\mathbb R}\alpha$. Moreover, $E^*={\mathbb R}\omega_\alpha\oplus (Y((Z(L_\alpha)\cap A)^\circ)\otimes_{\mathbb Z}{\mathbb R})$. Hence for $y\in E^*$, $s_\alpha(y)=-y\;\Leftrightarrow\; y\in{\mathbb R}\omega_\alpha$. It follows that the isomorphism $E\rightarrow E^*$ sends $\alpha$ to $c\omega_\alpha$ for some $c\in{\mathbb R}^\times$. 
Thus $(\beta,\alpha)=c\langle\beta,\omega_\alpha\rangle$ for all $\beta\in\Phi_A$. But $\langle\alpha,\omega_\alpha\rangle=2$, hence $c=(\alpha,\alpha)/2$. Therefore $\langle\beta,\omega_\alpha\rangle=2(\beta,\alpha)/(\alpha,\alpha)= \langle\beta,\alpha\rangle$ for all $\alpha,\beta\in\Pi$. \end{proof} It follows from the construction of $\omega_\alpha$ that there is an open $Z_G(\omega)$-orbit on ${\mathfrak g}(\alpha;A)$, which we denote $Y_\alpha$. Since $L_\alpha$ is a Levi subgroup of $G$, $\omega_\alpha$ is an associated cocharacter (in $G$) for any $x_\alpha\in Y_\alpha$. \begin{lemma}\label{sl2s} Let $E_\alpha\in Y_\alpha$. Then $d\omega_\alpha(1)=\xi_\alpha [E_\alpha,d\theta(E_\alpha)]$ for some $\xi_\alpha\in k^\times$. \end{lemma} \begin{proof} By properties of associated cocharacters, ${\mathfrak z}_{\mathfrak g}(E_\alpha)\cap {\mathfrak g}(-\alpha;A)=0$. Hence $[E_\alpha,d\theta(E_\alpha)]\neq 0$. But $\mathop{\rm dim}\nolimits A\cap L_\alpha^{(1)}=1$, hence $\mathop{\rm dim}\nolimits{\mathfrak a}\cap\mathop{\rm Lie}\nolimits(L_\alpha^{(1)})=1$. It follows that there exists $\xi_\alpha\in k^\times$ such that $d\omega_\alpha(1)=\xi_\alpha [E_\alpha,d\theta(E_\alpha)]$. \end{proof} \begin{lemma}\label{diffindpt} The differentials $d\alpha:{\mathfrak a}\longrightarrow{\mathfrak a}$, $\alpha\in\Pi$, are linearly independent. \end{lemma} \begin{proof} It follows at once from the definitions that $\cap_{\alpha\in\Pi}\mathop{\rm ker}\nolimits d\alpha={\mathfrak z}({\mathfrak g})\cap{\mathfrak a}$. Moreover, ${\mathfrak z}({\mathfrak g})=\mathop{\rm Lie}\nolimits(Z(G)^\circ)$ by \cite[2.3]{me}. But $Z(G)^\circ$ is a $\theta$-stable torus, hence $Z(G)^\circ=(Z\cap K)^\circ\cdot (Z\cap A)^\circ$ by Lemma \ref{stabletori}. Therefore ${\mathfrak z}({\mathfrak g})\cap{\mathfrak a}=\mathop{\rm Lie}\nolimits((Z\cap A)^\circ)$. But $\mathop{\rm dim}\nolimits A-\mathop{\rm dim}\nolimits (Z\cap A)^\circ=\mathop{\rm rk}\nolimits\Phi_A$ (see for example \cite[Rk. 
4.8]{rich2}). This completes the proof. \end{proof} \begin{corollary}\label{omegasindpt} The toral elements $d\omega_\alpha(1)$ are linearly independent. \end{corollary} \begin{proof} Let $E_\alpha\in Y_{\alpha}$ for each $\alpha\in\Pi$. By Lemma \ref{sl2s} there exist $\xi_\alpha\in k^\times$ such that $d\omega_\alpha(1)=\xi_\alpha[E_\alpha,d\theta(E_\alpha)]$ for each $\alpha$. Let $\kappa$ be a non-degenerate $(\theta,G)$-equivariant symmetric bilinear form on ${\mathfrak g}$, let $S$ be a maximal torus of $G$ containing $A$, and let ${\mathfrak s}=\mathop{\rm Lie}\nolimits(S)$. By $S$-equivariance, the restriction of $\kappa$ to ${\mathfrak s}$ is non-degenerate; by $\theta$-equivariance, the restriction to ${\mathfrak a}$ is also non-degenerate. Let $a\in{\mathfrak a}$. Then $\kappa(a,d\omega_\alpha(1))=\xi_\alpha d\alpha(a)\kappa(E_\alpha,d\theta(E_\alpha))$. Since $\kappa|_{{\mathfrak a}\times{\mathfrak a}}$ is non-degenerate, $\kappa(E_\alpha,d\theta(E_\alpha))\neq 0$ and the isomorphism ${\mathfrak a}\rightarrow{\mathfrak a}^*$ induced by $\kappa$ sends $d\omega_\alpha(1)$ to a non-zero multiple of $d\alpha$. By Lemma \ref{diffindpt}, the toral elements $d\omega_\alpha(1)$ are linearly independent. \end{proof} \subsection{Optimal cocharacters and $Y_\omega$.} \label{sec:6.2} Let $H$ be a reductive algebraic group, and let $\rho:H\longrightarrow\mathop{\rm GL}\nolimits(V)$ be a rational representation. Recall that $v\in V$ is {\it $H$-unstable} if $0\in\overline{\rho(H)(v)}$; otherwise $v$ is {\it $H$-semistable}. Note that the $H$-unstable elements are the points of $\pi_{V,H}^{-1}(\pi_{V,H}(0))$. We have the Hilbert-Mumford criterion (see \cite{mum}, for example): {\it $v$ is $H$-unstable if and only if there exists a cocharacter $\lambda:k^\times\longrightarrow H$ such that $v$ is $\lambda(k^\times)$-unstable.} Let $T$ be a maximal torus of $H$, and let $W_T=N_H(T)/T$.
Let $Y(T)$ be the lattice of cocharacters in $T$ and let $E^*=Y(T)\otimes_{\mathbb Z}{\mathbb R}$. Let $(.\, ,.):Y(T)\times Y(T)\longrightarrow{\mathbb Z}$ be a $W_T$-equivariant, positive definite symmetric bilinear form, extended linearly to an inner product $(.\, ,.):E^*\times E^*\longrightarrow{\mathbb R}$. There is a corresponding length function $||.||:E^*\longrightarrow{\mathbb R}^{\geq 0}$, $\lambda\mapsto(\lambda,\lambda)^{1/2}$. Any cocharacter $\lambda:k^\times\longrightarrow H$ is $H$-conjugate to an element of $Y(T)$, hence we can describe the set of cocharacters in $H$ as the union $Y(H)=\cup_{h\in H} Y(hTh^{-1})$. Moreover, if $\lambda,\mu\in Y(T)$, then $\lambda$ and $\mu$ are $H$-conjugate if and only if they are $W_T$-conjugate. It follows that the length function can be extended to an $H$-equivariant function $||.||:Y(H)\longrightarrow{\mathbb R}^{\geq 0}$. Let $\lambda\in Y(H)$ and let $h\in H$. We say that the limit $\lim_{t\rightarrow 0}\lambda(t)h\lambda(t^{-1})$ exists if the morphism $k^\times\rightarrow H$, $t\mapsto\lambda(t)h\lambda(t^{-1})$ can be extended to a morphism $\eta:k\rightarrow H$. If $\eta$ exists then it is unique: we write $\lim_{t\rightarrow 0}\lambda(t)h\lambda(t^{-1})$ for the image $\eta(0)$. We associate to any cocharacter $\lambda$ the following subgroups of $H$: $$P(\lambda):=\{ h\in H\,|\,\lim_{t\rightarrow 0}\lambda(t)h\lambda(t^{-1})\;\mbox{exists}\},$$ $$U(\lambda):=\{ h\in H\,|\,\lim_{t\rightarrow 0}\lambda(t)h\lambda(t^{-1})= I_H\},\quad Z(\lambda):=Z_H(\lambda).$$ (Here $I_H$ is the identity element of $H$.) Then $P(\lambda)$ is a parabolic subgroup of $H$ with Levi decomposition $P(\lambda)=Z(\lambda)U(\lambda)$. For $\lambda\in Y(H)$ and $i\in{\mathbb Z}$ set $V(i;\lambda)=\{ v\in V\,|\,\rho(\lambda(t))(v)=t^i v\;\forall t\in k^\times\}$: hence $V=\oplus_{i\in{\mathbb Z}} V(i;\lambda)$. Let $v\in V$, $v=\sum_{i\in{\mathbb Z}} v_i$, $v_i\in V(i;\lambda)$.
We write $m(v,\lambda)$ for the minimum $i\in{\mathbb Z}$ such that $v_i\neq 0$. The (non-trivial) cocharacter $\lambda$ is {\it optimal} for $v$ if $m(v,\lambda)/||\lambda||\geq m(v,\mu)/||\mu||$ for all $0\neq\mu\in Y(H)$. A cocharacter $\lambda$ is {\it primitive} if $\lambda/m\in Y(H)\Rightarrow m=\pm 1$. The main result of the Kempf-Rousseau theory is the following (\cite{kempf,rousseau}): \begin{theorem}[Kempf, Rousseau] Let $v$ be an $H$-unstable element of $V$. (a) There exists at least one optimal cocharacter $\lambda\in Y(H)$ for $v$. (b) There is a parabolic subgroup $P(v)$ of $H$ such that $P(v)=P(\lambda)$ for any optimal cocharacter $\lambda$ for $v$. The centralizer $Z_H(v)\subset P(v)$. (c) Let $\Lambda_v$ be the set of all cocharacters in $H$ which are primitive and optimal for $v$. Any two elements of $\Lambda_v$ are conjugate by an element of $P(v)$. Each maximal torus of $P(v)$ contains a unique element of $\Lambda_v$. \end{theorem} Let $T$ be a maximal torus of $H$, and let $\lambda\in Y(T)$. We denote by $T^\lambda$ the subtorus of $T$ generated by all cocharacters $\mu$ with $(\lambda,\mu)=0$, and by $Z^\bot(\lambda)$ the subgroup of $Z(\lambda)$ generated by $Z(\lambda)^{(1)}$ and $T^\lambda$. Then $Z^\bot(\lambda)$ is a closed subgroup of $Z(\lambda)$ of codimension 1, and is independent of the choice of maximal torus $T$ containing $\lambda(k^\times)$. We have the following criterion for optimality (Kirwan \cite{kir}, Ness \cite{ness}): \begin{proposition}[Kirwan, Ness] Let $i\geq 1$, and let $v\in V(i;\lambda)$. Then $\lambda$ is optimal for $v$ if and only if $v$ is $Z^\bot(\lambda)$-semistable. \end{proposition} Consider the adjoint representation $\mathop{\rm Ad}\nolimits:G\longrightarrow \mathop{\rm GL}\nolimits({\mathfrak g})$. Here $x\in {\mathfrak g}$ is $G$-unstable if and only if it is nilpotent.
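For concreteness, the dichotomy between nilpotent (unstable) and semisimple elements can be observed numerically in $\mathfrak{sl}(2)$. The following sketch is an illustration over the complex numbers only, not part of the argument above; the names `conj`, `e`, `s` are ad hoc.

```python
import numpy as np

def conj(t, x):
    """Adjoint action Ad(lambda(t)) for the cocharacter lambda(t) = diag(t, 1/t)."""
    lam = np.diag([t, 1.0 / t])
    return lam @ x @ np.linalg.inv(lam)

e = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent; e spans g(2; lambda)
s = np.array([[0.0, 1.0], [1.0, 0.0]])  # semisimple, eigenvalues +1 and -1

for t in (0.5, 0.1, 0.01):
    # Ad(lambda(t)) e = t^2 e, so 0 lies in the orbit closure:
    # e is lambda(k^x)-unstable, matching "G-unstable <=> nilpotent".
    assert np.allclose(conj(t, e), t**2 * e)
    # The characteristic polynomial is conjugation-invariant, so the
    # eigenvalues of Ad(lambda(t)) s stay +-1 and the orbit avoids 0.
    assert np.allclose(sorted(np.linalg.eigvals(conj(t, s)).real), [-1.0, 1.0])

print("sl(2): nilpotent unstable, semisimple semistable")
```

In this small case $e\in{\mathfrak g}(2;\lambda)$ and $m(e,\lambda)=2$; up to conjugacy and rescaling, $\lambda$ is an optimal (indeed associated) cocharacter for $e$, in the sense discussed below.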
In \cite{premnil}, Premet showed that every nilpotent element $x\in{\mathfrak g}$ has a cocharacter $\lambda$ which is both optimal for and associated to $x$. (In general optimality depends on the choice of length function on $Y(G)$.) Let $\lambda$ be any associated cocharacter for $x$. Then $\lambda$ is optimal for $x$, and either $\lambda$ or $\lambda/2$ is primitive (\cite[Thm. 2.3, Thm. 2.7]{premnil}). On the other hand, if $\lambda$ is optimal for $x$ and $x\in{\mathfrak g}(2;\lambda)$, then $\lambda$ is an associated cocharacter for $x$ (\cite[Thm. 14]{mcninch3}). Let $S$ be a maximal torus of $G$ containing $A$, and let $E=X(S)\otimes_{\mathbb Z}{\mathbb R}$. By \cite[2.6(iv)]{rich2}, $S$ is $\theta$-stable. Let $W_S=N_G(S)/S$, let $\Gamma$ be the group of automorphisms of $S$ generated by $W_S$ and $\theta$, and let $(.\, ,.):E\times E\longrightarrow{\mathbb R}$ be a $\Gamma$-equivariant inner product such that $(\alpha,\beta)\in{\mathbb Z}$ for all $\alpha,\beta\in X(S)$. The inner product induces a $\Gamma$-equivariant isomorphism $E\rightarrow E^*$. Moreover, $E^*$ identifies with $Y(S)\otimes_{\mathbb Z}{\mathbb R}$. Hence we write $(.\, ,.)$ also for the induced inner product on $E^*$. Let $E_-$ (resp. $E^*_-$) denote the $(-1)$ eigenspace in $E$ (resp. $E^*$). Then $E_-$ (resp. $E^*_-$) can be identified with $X(A)\otimes_{\mathbb Z}{\mathbb R}$ (resp. $Y(A)\otimes_{\mathbb Z}{\mathbb R}$). The isomorphism $E\rightarrow E^*$ restricts to a $W_A$-equivariant isomorphism $E_-\rightarrow E^*_-$. Recall (\cite[\S 1]{vust}) that $Z_G(A)=M\cdot A$ (almost direct product), where $M=Z_K(A)^\circ$. Clearly $Z_G(A)^{(1)}\subseteq M$. Since $\omega$ is regular in $A$, $Z_G(\omega)=Z_G(A)$. Let $A^\omega$ denote the subtorus of $A$ generated by all $\mu(k^\times)$, with $\mu\in Y(A)$ such that $(\mu,\omega)=0$. \begin{lemma}\label{zbot} $Z^\bot(\omega)=M\cdot A^\omega$. \end{lemma} \begin{proof} Let $S_0=(S\cap K)^\circ$. 
By $\theta$-equivariance, $(\mu,\omega)=0$ for all $\mu\in Y(S_0)$. Hence $Z^\bot(\omega)$ contains $S_0\cdot Z_G(A)^{(1)}=M$. The lemma now follows at once. \end{proof} Let $\alpha\in\Pi$ and let $L_\alpha$ be the (Levi) subgroup of $G$ introduced in Sect. \ref{sec:6.1}. Note that $Z_{L_\alpha}(\omega_\alpha)=Z_G(\omega)=M\cdot A$. Let $Z_{L_\alpha}^\bot(\omega_\alpha)$ be the subgroup of $Z_G(A)$ generated by $Z_G(A)^{(1)}$ and $S^{\omega_\alpha}$ (using similar notation to that used above). \begin{lemma}\label{zbot2} (i) $Z_{L_\alpha}^\bot(\omega_\alpha)=M\cdot (Z(L_\alpha)\cap A)^\circ$. (ii) Let $x_\alpha\in{\mathfrak g}(\alpha;A)$. Then $x_\alpha\in Y_\alpha$ if and only if $x_\alpha$ is $M$-semistable. \end{lemma} \begin{proof} By Lemma \ref{zbot} applied to $L_\alpha$, $Z_{L_\alpha}^\bot(\omega_\alpha)=M\cdot A^{\omega_\alpha}$. But $A=(Z(L_\alpha)\cap A)^\circ\cdot\omega_\alpha(k^\times)$, hence (i) follows. Part (ii) now follows from the Kirwan-Ness criterion. \end{proof} For ease of notation, let $\pi_\alpha=\pi_{{\mathfrak g}(\alpha;A),M}$. We can choose homogeneous generators $f_1,f_2,\ldots f_l$ for $k[{\mathfrak g}(\alpha;A)]^M$. Let the respective degrees be $d_1,d_2,\ldots d_l$. Recall (Rk. \ref{geoquot}) that there is a natural action of $A$ on ${\mathfrak g}(\alpha;A)\ensuremath{/ \hspace{-1.2mm}/} M$, induced by the action on ${\mathfrak g}(\alpha;A)$. Clearly $a\cdot f_i=\alpha(a)^{-d_i}f_i$ for any $a\in A$. Let $U_\alpha$ be a vector space with basis $u_1,u_2,\ldots u_l$ and let $A$ act on $U_\alpha$ by: $a\cdot u_i=\alpha(a)^{d_i}u_i$, extending linearly to all of $U_\alpha$. Hence the morphism ${\mathfrak g}(\alpha;A)\longrightarrow U_\alpha$, $x_\alpha\mapsto \sum f_i(x_\alpha)u_i$ induces an $A$-equivariant embedding $\iota_\alpha:{\mathfrak g}(\alpha;A)\ensuremath{/ \hspace{-1.2mm}/} M\hookrightarrow U_\alpha$. Since the $f_i$ are homogeneous, $\iota_\alpha(\pi_\alpha(0))=0$. 
Hence $x_\alpha\in Y_\alpha$ if and only if $\iota_\alpha(\pi_\alpha(x_\alpha))\neq 0$ (by Lemma \ref{zbot2}). Let $r_0=\mathop{\rm rk}\nolimits\Phi_A^*$. Embed $A$ diagonally in the product $Z_G(A)^{r_0}$, and let $H=M^{r_0}\subset Z_G(A)^{r_0}$. Clearly $H$ commutes with $A$. Let the coordinates of $Z_G(A)^{r_0}$ be indexed by the elements of $\Pi$, and let $Z_G(A)^{r_0}$ act on ${\mathfrak g}(2;\omega)=\oplus_{\alpha\in\Pi}{\mathfrak g}(\alpha;A)$: $(g_\alpha)\cdot\sum y_\alpha=\sum(g_\alpha\cdot y_\alpha)$. It is easy to see that the quotient ${\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} H$ is naturally isomorphic to $\prod_{\alpha\in\Pi}{\mathfrak g}(\alpha;A)\ensuremath{/ \hspace{-1.2mm}/} M$. Identify ${\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} H$ with $\prod_{\alpha\in\Pi}{\mathfrak g}(\alpha;A)\ensuremath{/ \hspace{-1.2mm}/} M$, let $U=\oplus_{\alpha\in\Pi}U_\alpha$, and let $\iota=(\prod \iota_\alpha):{\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} H\longrightarrow U$. Then $\iota$ is an $A$-equivariant embedding. Hence by Rk. \ref{geoquot} the following diagram is commutative: \begin{diagram} {\mathfrak g}(2;\omega) & \rTo & {\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} H & \rTo & U \\ \dTo & & \dTo & & \dTo \\ {\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} A^\omega & \rTo & {\mathfrak g}(2;\omega)\ensuremath{/ \hspace{-1.2mm}/} A^\omega H & \rTo & U\ensuremath{/ \hspace{-1.2mm}/} A^\omega \end{diagram} (Note that by construction $\iota(\pi_{{\mathfrak g}(2;\omega),H}(0))=0$.) \begin{lemma}\label{alphaunstable} (i) Let $u\in U$. Then $u$ is $A^\omega$-unstable if and only if $u_\alpha=0$ for some $\alpha\in\Pi$. (ii) Let $x=\sum_{\alpha\in\Pi} x_\alpha\in{\mathfrak g}(2;\omega)$. Then $x$ is $A^\omega H$-semistable if and only if $x_\alpha\in Y_\alpha$ for all $\alpha\in\Pi$. 
\end{lemma} \begin{proof} Since $A^\omega=(A^\omega\cap G^{(1)})^\circ\cdot (Z(G)\cap A)^\circ$ and $(Z(G)\cap A)$ acts trivially on $U$, we may clearly assume that $G$ is semisimple. Suppose that $u\in U$ is $A^\omega$-unstable. By the Hilbert-Mumford criterion, there exists $\mu\in Y(A^\omega)$ such that $u$ is $\mu(k^\times)$-unstable. After replacing $\mu$ by $-\mu$, if necessary, we may assume that $u\in\sum_{i\geq 1}U(i;\mu)$. Note that $U_\alpha\subset\sum_{i\geq 1}U(i;\mu)$ if and only if $\langle\alpha,\mu\rangle>0$. Hence if $u_\alpha\neq 0$ for all $\alpha$, then $\langle\alpha,\mu\rangle>0$ for all $\alpha\in\Pi$. But this implies that $\mu$ and $\omega$ are in the same Weyl chamber in $Y(A)$, which contradicts the assumption that $(\mu,\omega)=0$. Suppose therefore that $u_\alpha=0$ for some $\alpha\in\Pi$. Recall that $\omega=\omega_\alpha+\mu_\alpha$ for some $\mu_\alpha\in Y((Z(L_\alpha)\cap A)^\circ)$. Hence $(\omega,\omega)=(\omega_\alpha,\omega_\alpha)+(\mu_\alpha,\mu_\alpha)$ and $(\omega_\alpha,\omega)=(\omega_\alpha,\omega_\alpha)$. It follows that $c=(\omega_\alpha,\omega)/{(\omega,\omega)}<1$. Let $m\in{\mathbb N}$ be such that $\nu=m(\omega_\alpha-c\omega)\in Y(A)$. Then in fact $\nu\in Y(A^\omega)$. Moreover, $\langle\alpha,\nu\rangle>0$ and $\langle\beta,\nu\rangle<0$ for all $\beta\in\Pi\setminus\{\alpha\}$. Hence $u$ is $\nu(k^\times)$-unstable. This proves (i). For ease of notation, let $V={\mathfrak g}(2;\omega)$ and let $V_\alpha={\mathfrak g}(\alpha;A)$. Suppose $x=\sum x_\alpha\in V$. Recall (Rk. \ref{geoquot}) that $\pi_{V,A^\omega H}=\pi_{V,A^\omega H/H}\circ\pi_{V,H}$. Moreover, $V\ensuremath{/ \hspace{-1.2mm}/} H$ embeds as an $A$-stable subset of $U$. It follows that $x$ is an $A^\omega H$-unstable element of $V$ if and only if $\iota(\pi_{V,H}(x))$ is an $A^\omega$-unstable element of $U$. But by (i), this holds if and only if $\iota_\alpha(\pi_\alpha(x_\alpha))=0$ for some $\alpha\in\Pi$.
Hence, by Lemma \ref{zbot2}, $x$ is $A^\omega H$-semistable if and only if $x_\alpha\in Y_\alpha$ for all $\alpha$. \end{proof} \begin{corollary} Let $x\in {\mathfrak g}(2;\omega)$ be such that $x_\alpha\in Y_\alpha$ for all $\alpha\in\Pi$. Then $x\in Y_\omega$. \end{corollary} \begin{proof} By the Kirwan-Ness criterion, $x\in Y_\omega$ if and only if $x$ is $Z^\bot(\omega)$-semistable. If $x$ is $Z^\bot(\omega)$-unstable, then it is clearly also $A^\omega H$-unstable. But then $x_\alpha\notin Y_\alpha$ for some $\alpha\in\Pi$ by Lemma \ref{alphaunstable}. \end{proof} Hence we have the following equivalent conditions: \begin{proposition}\label{yomega} Let $x=\sum_{\alpha\in\Pi}x_\alpha\in{\mathfrak g}(2;\omega)$. Then the following are equivalent: (i) $x\in Y_\omega$, (ii) $[{\mathfrak g}^\omega,x]={\mathfrak g}(2;\omega)$, (iii) $x_\alpha\in Y_\alpha$ for each $\alpha\in\Pi$, (iv) $[{\mathfrak g}^\omega,x_\alpha]={\mathfrak g}(\alpha;A)$ for each $\alpha\in\Pi$. \end{proposition} \begin{proof} The equivalence of (i) and (ii) is an immediate consequence of the separability of orbits (see Lemma \ref{globalinf}). Hence (iii) and (iv) are also equivalent (since $L_\alpha$ is a Levi subgroup of $G$). Suppose $[{\mathfrak g}^\omega,x]={\mathfrak g}(2;\omega)$. Then $\oplus_{\alpha\in\Pi}[{\mathfrak g}^\omega,x_\alpha]={\mathfrak g}(2;\omega)$, hence $[{\mathfrak g}^\omega,x_\alpha]={\mathfrak g}(\alpha;A)$. This shows that (ii) $\Rightarrow$ (iv). Conversely, (iii) $\Rightarrow$ (i) by the corollary above. This completes the proof. \end{proof} \begin{rk} The above proposition differs slightly from \cite[Prop. 19]{kostrall}, which it seeks to imitate. Kostant-Rallis' version considers only elements of ${\mathfrak g}(2;\omega)$ which are contained in the real form ${\mathfrak g}_{\mathbb R}$. Then $x\in Y_\omega\cap{\mathfrak g}_{\mathbb R}$ if and only if $x_\alpha\neq 0$ for each $\alpha$.
\end{rk} \subsection{Construction of ${\mathfrak g}^*$} \label{sec:6.3} In \cite{kostrall}, Kostant and Rallis constructed a reductive subalgebra ${\mathfrak g}^*$ of ${\mathfrak g}$ containing ${\mathfrak a}$ as a Cartan subalgebra. We will now generalise this to positive characteristic. Fix $E=\sum_{\alpha\in\Pi}E_\alpha\in Y_\omega$, so that $E_\alpha\in Y_\alpha$ for each $\alpha\in\Pi$ (Prop. \ref{yomega}), and let $H_\alpha=d\omega_\alpha(1)=\xi_\alpha [E_\alpha,d\theta(E_\alpha)]$ (Lemma \ref{sl2s}). Let $F_\alpha=\xi_\alpha d\theta(E_\alpha)$. Hence $\{ H_\alpha,E_\alpha,F_\alpha\}$ is an $\mathfrak{sl}(2)$-triple for each $\alpha$. \begin{lemma}\label{commrels} We have the following relations: (a) $[H_\alpha,H_\beta]=0$ $(\alpha,\beta\in\Pi)$, (b) $[H_\alpha,E_\beta]=\langle\beta,\alpha\rangle E_\beta$ $(\alpha,\beta\in\Pi)$, (c) $[H_\alpha,F_\beta]=-\langle\beta,\alpha\rangle F_\beta$ $(\alpha,\beta\in\Pi)$, (d) $[E_\alpha,F_\beta]=0$ for $\alpha\neq\beta\in\Pi$, (e) $(\mathop{\rm ad}\nolimits E_\alpha)^{-\langle\beta,\alpha\rangle+1}(E_\beta)=(\mathop{\rm ad}\nolimits F_\alpha)^{-\langle\beta,\alpha\rangle+1}(F_\beta)=0$ for $\alpha\neq\beta\in\Pi$, (f) $E_\alpha^{[p]}=F_\alpha^{[p]}=0$, and $H_\alpha^{[p]}=H_\alpha$ for every $\alpha\in\Pi$. \end{lemma} \begin{proof} (a) is immediate since $H_\alpha\in{\mathfrak a}$; (b) and (c) follow from Lemma \ref{omegaalphacartan}, since $E_\beta\in{\mathfrak g}(\beta;A)$ and $F_\beta\in{\mathfrak g}(-\beta;A)$. If $\alpha\neq\beta\in\Pi$, then $\alpha-\beta\notin\Phi_A$. Hence (d) follows. Clearly, $\beta+m\alpha\in\Phi_A^*$ $\Leftrightarrow$ $\beta+m\alpha\in\Phi_A$. But the integers $\langle\beta,\alpha\rangle$ are the Cartan integers for $\Phi_A^*$. Hence $\beta+(1-\langle\beta,\alpha\rangle)\alpha\notin\Phi_A^*$, which proves (e). Finally, if $\alpha\in\Phi_A$ then $3\alpha\notin\Phi_A$ by Lemma \ref{pisgood}. Hence $E_\alpha^{[p]}=F_\alpha^{[p]}=0$. Since $H_\alpha=d\omega_\alpha(1)$, $H_\alpha$ is a toral element. This proves (f). \end{proof} \begin{proposition}\label{constrpre} Let ${\mathfrak b}^*={\mathfrak b}^*(E)$ be the subalgebra of ${\mathfrak g}$ generated by the elements $E_\alpha,F_\alpha,H_\alpha$.
Then ${\mathfrak b}^*$ is a $d\theta$-stable restricted subalgebra of ${\mathfrak g}$, ${\mathfrak a}\cap\mathop{\rm Lie}\nolimits(G^{(1)})$ is a Cartan subalgebra of ${\mathfrak b}^*$, $[{\mathfrak b}^*,{\mathfrak b}^*]={\mathfrak b}^*$, and ${\mathfrak b}^*$ is an almost classical Lie algebra of universal type with root system $\Phi_A^*$. Hence there exists a simply-connected semisimple group $B^*$ such that $\mathop{\rm Lie}\nolimits(B^*)={\mathfrak b}^*$. \end{proposition} \begin{proof} Since the span of the elements $H_\alpha,E_\alpha,F_\alpha$, $\alpha\in\Pi$, is $d\theta$-stable, so is ${\mathfrak b}^*$. Furthermore, $E_\alpha^{[p]}=F_\alpha^{[p]}=0$ and $H_\alpha^{[p]}=H_\alpha$ by Lemma \ref{commrels}(f). It follows that ${\mathfrak b}^*$ is a restricted subalgebra of ${\mathfrak g}$. Let $G_{(1)},G_{(2)},\ldots,G_{(l)}$ be the minimal $\theta$-stable normal subgroups of $G^{(1)}$ and let ${\mathfrak g}_{(1)}=\mathop{\rm Lie}\nolimits(G_{(1)}),{\mathfrak g}_{(2)}=\mathop{\rm Lie}\nolimits(G_{(2)}),\ldots,{\mathfrak g}_{(l)}=\mathop{\rm Lie}\nolimits(G_{(l)})$. Hence $\mathop{\rm Lie}\nolimits(G^{(1)})={\mathfrak g}_{(1)}\oplus{\mathfrak g}_{(2)}\oplus\ldots\oplus{\mathfrak g}_{(l)}$. Moreover $\Phi_A^*\cong \Phi_{(1)}^*\cup\Phi_{(2)}^*\cup\ldots\cup\Phi_{(l)}^*$ is the decomposition of the root system into simple components, where $\Phi_{(i)}^*=\Phi(G_{(i)},A\cap G_{(i)})^*$. Thus ${\mathfrak b}^*={\mathfrak b}_{(1)}^*\oplus{\mathfrak b}_{(2)}^*\oplus\ldots\oplus{\mathfrak b}_{(l)}^*$, where ${\mathfrak b}^*_{(i)}={\mathfrak b}^*\cap{\mathfrak g}_{(i)}$. It therefore suffices to prove the proposition in the case $G=G_{(1)}$. Hence we may assume that $\Phi_A^*$ is irreducible. Let $\{ H_\alpha^{\mathbb C},E_\beta^{\mathbb C},F_\beta^{\mathbb C}:\alpha\in\Pi,\beta\in(\Phi_A^*)^+\}$ be a Chevalley basis for a complex semisimple Lie algebra ${\mathfrak g}_{\mathbb C}$ with root system $\Phi_A^*$.
Let ${\mathfrak g}_{\mathbb Z}$ be the ${\mathbb Z}$-subalgebra spanned by the elements $H_\alpha^{\mathbb C},E_\beta^{\mathbb C},F_\beta^{\mathbb C}$. The $k$-Lie algebra ${\mathfrak g}_{\mathbb Z}\otimes_{\mathbb Z} k$ is an almost classical Lie algebra of universal type, and it is generated by $\{ H_\alpha^{\mathbb C}\otimes 1,E_\alpha^{\mathbb C}\otimes 1,F_\alpha^{\mathbb C}\otimes 1:\alpha\in\Pi\}$. Hence by Lemma \ref{commrels} there is a unique Lie algebra homomorphism $\phi:{\mathfrak g}_{\mathbb Z}\otimes k\longrightarrow{\mathfrak b}^*$ such that $H_\alpha^{\mathbb C}\otimes 1\mapsto H_\alpha,E_\alpha^{\mathbb C}\otimes 1\mapsto E_\alpha,F^{\mathbb C}_\alpha\otimes 1\mapsto F_\alpha$. Since ${\mathfrak b}^*$ is generated by the elements $E_\alpha,F_\alpha,\alpha\in\Pi$, $\phi$ is surjective. The ideals of ${\mathfrak g}_{\mathbb Z}\otimes k$ are given in \cite[p. 446-7]{hog}. Since $p$ is good, there is only one case of a non-trivial ideal: when $\Phi_A^*$ is of type $A_n$ and $p|(n+1)$, the centre is of dimension 1. But by Cor. \ref{omegasindpt} the elements $H_\alpha$, $\alpha\in\Pi$ are linearly independent. Hence $\phi$ is injective in all cases. Thus ${\mathfrak b}^*\cong {\mathfrak g}_{\mathbb Z}\otimes k$. Since ${\mathfrak b}^*$ is of universal type, there exists a simply-connected semisimple group $B^*$ such that $\mathop{\rm Lie}\nolimits(B^*)={\mathfrak b}^*$ (see the discussion in \cite[\S 1]{hog}). It remains to show that ${\mathfrak a}\cap\mathop{\rm Lie}\nolimits(G^{(1)})$ is a Cartan subalgebra of ${\mathfrak b}^*$. But by Cor. \ref{omegasindpt}, ${\mathfrak a}\cap\mathop{\rm Lie}\nolimits(G^{(1)})$ is spanned by $H_\alpha\, ,\alpha\in\Pi$. \end{proof} \begin{lemma}\label{wa} Let ${\mathfrak a}'=\mathop{\rm Lie}\nolimits(A\cap G^{(1)})$ and let $W^*=N_{B^*}({\mathfrak a}')/Z_{B^*}({\mathfrak a}')$. Then $W_A=N_G({\mathfrak a})/Z_G({\mathfrak a})$ is naturally isomorphic to $W^*$. 
\end{lemma} \begin{proof} Since the root system of $B^*$ is identified with $\Phi_A^*$, $N_{B^*}({\mathfrak a}')/Z_{B^*}({\mathfrak a}')$ is generated by the reflections $s_\alpha$, $\alpha\in\Pi$. But so is $W_A$ by \cite[4.5]{rich2}. \end{proof} We are now ready to present the main theorem of this section: \begin{theorem}\label{constr} Let $E\in Y_\omega$ and let ${\mathfrak g}^*(E)$ be the Lie subalgebra of ${\mathfrak g}$ generated by $E,d\theta(E)$ and ${\mathfrak a}$. (a) ${\mathfrak g}^*(E)$ is a $d\theta$-stable restricted subalgebra of ${\mathfrak g}$, $[{\mathfrak g}^*(E),{\mathfrak g}^*(E)]={\mathfrak b}^*(E)$, and ${\mathfrak a}$ is a maximal toral algebra in ${\mathfrak g}^*(E)$. (b) There exists a reductive group $G^*$ satisfying the standard hypotheses (A)-(C) of \S 3, such that $\mathop{\rm Lie}\nolimits(G^*)={\mathfrak g}^*(E)$. (c) There is an involutive automorphism $\theta^*$ of $G^*$ such that $d\theta^*=d\theta|_{{\mathfrak g}^*(E)}$. \end{theorem} \begin{proof} Let ${\mathfrak g}^*={\mathfrak g}^*(E),{\mathfrak b}^*={\mathfrak b}^*(E)$. Since $[{\mathfrak a},E]=\sum_{\alpha\in\Pi}kE_\alpha$ and $[{\mathfrak a},d\theta(E)]=\sum_{\alpha\in\Pi}kd\theta(E_\alpha)$, ${\mathfrak g}^*$ contains ${\mathfrak b}^*$. Moreover, $[{\mathfrak b}^*,{\mathfrak b}^*]={\mathfrak b}^*$ by Prop. \ref{constrpre} and ${\mathfrak a}$ normalizes ${\mathfrak b}^*$. Hence ${\mathfrak b}^*=[{\mathfrak g}^*,{\mathfrak g}^*]$. Clearly ${\mathfrak g}^*$ is generated by ${\mathfrak a}$ and ${\mathfrak b}^*$. Therefore ${\mathfrak g}^*$ is $d\theta$-stable and closed under the $p$-operation. This proves (a). By Prop. \ref{constrpre}, ${\mathfrak b}^*=\mathop{\rm Lie}\nolimits(B^*)$, where $B^*$ is a simply-connected semisimple group. Let $S$ be a maximal torus of $G$ containing $A$ and let $\Delta_S$ be a basis for $\Phi(G,S)$ such that $\Pi$ can be obtained as $\{\beta|_A\,:\,\beta\in\Delta_S\setminus I\}$. Let $S'=S\cap G^{(1)},A'=A\cap G^{(1)},{\mathfrak a}'=\mathop{\rm Lie}\nolimits(A')$.
Since $G^{(1)}$ is simply-connected, $Y(S')=\oplus_{\beta\in\Delta_S}{\mathbb Z}\beta^\vee$, where $\beta^\vee$ denotes the coroot corresponding to $\beta$. Let $\alpha\in\Pi$ and let $\beta\in\Delta_S$ be such that $\beta|_A=\alpha$. There are three possibilities: (i) $\theta^*(\beta)=-\beta$; (ii) $-\theta^*(\beta)$ and $\beta$ are orthogonal; (iii) $-\theta^*(\beta)$ and $\beta$ generate a root system of type $A_2$. But now we can describe $\omega_\alpha$ explicitly: in case (i), $\omega_\alpha=\beta^\vee$; in case (ii), $\omega_\alpha=\beta^\vee-\theta^*(\beta)^\vee$; and in case (iii), $\omega_\alpha=2(\beta^\vee-\theta^*(\beta)^\vee)$. Let $c_\alpha=1$ if $\alpha$ is of type (i) or (ii), and $c_\alpha=2$ if $\alpha$ is of type (iii). It follows from Lemma \ref{basis} that $\{ \omega_\alpha/c_\alpha\,:\,\alpha\in\Pi\}$ is a basis for $Y(A')$. Let $A_B^*$ be the unique maximal torus of $B^*$ such that $\mathop{\rm Lie}\nolimits(A_B^*)={\mathfrak a}'$ (Lemma \ref{maxsplittori}). Then $Y(A_B^*)$ can be identified with $\oplus_{\alpha\in\Pi}{\mathbb Z}\omega_\alpha\subset Y(A')$. Hence $Y(A_B^*)$ embeds as a sublattice of $Y(A')$ of index $2^i$, where $i$ is the number of roots in $\Pi$ which are of type (iii). Let $\{\chi_\alpha\,:\,\alpha\in\Pi\}$ be the basis for $X(A')$ which is dual to the basis $\{\omega_\alpha/c_\alpha\,:\,\alpha\in\Pi\}$ for $Y(A')$. Then we can identify $X(A_B^*)$ with $\oplus_{\alpha\in\Pi}{\mathbb Z}(\chi_\alpha/c_\alpha)\subset X(A')\otimes_{\mathbb Z}{\mathbb Q}$. Clearly $X(A')$ is a sublattice of $X(A_B^*)$ of index $2^i$. Now the basis $\{\chi_\alpha\}$ can be lifted to a basis $\{\hat{\chi}_\alpha,z_j\,:\,\alpha\in\Pi,\,1\leq j\leq r-r_0\}$ for $X(A)$. (Here $r=\mathop{\rm dim}\nolimits A$ and $r_0=\mathop{\rm rk}\nolimits\Phi_A^*$.) Let $\Lambda_X=\oplus_{\alpha\in\Pi}{\mathbb Z}(\hat{\chi}_\alpha/c_\alpha)\oplus{\mathbb Z}z_1\oplus\ldots\oplus{\mathbb Z}z_{r-r_0}\subset X(A)\otimes_{\mathbb Z}{\mathbb Q}$.
Clearly $\Lambda_X$ contains $X(A)$ as a sublattice of index $2^i$. The pairing $\langle .\, ,.\rangle:X(A)\times Y(A)\longrightarrow{\mathbb Z}$ can be extended to a ${\mathbb Z}$-bilinear map $\langle .\, ,.\rangle:\Lambda_X\times Y(A)\longrightarrow{\mathbb Q}$. Let $\Lambda_Y=\{\lambda\in Y(A)\,|\,\langle\chi,\lambda\rangle\in{\mathbb Z}\;\forall\chi\in\Lambda_X\}$. Then $\Lambda_Y$ is a sublattice of $Y(A)$ of index $2^i$. Let $A^*$ be the torus with character lattice $\Lambda_X$, that is $A^*=\mathop{\rm Spec}\nolimits (k\Lambda_X)$. Then $A^*$ contains $A_B^*$. Since $\Lambda_Y$ is of index $2^i$ in $Y(A)$, we can identify $\mathop{\rm Lie}\nolimits(A^*)$ with ${\mathfrak a}$. Set $G^*=(B^*\times A^*)/{\mathop{\rm diag}\nolimits(A_B^*)}$. It is easy to see that $G^*$ is reductive and that $\mathop{\rm Lie}\nolimits(G^*)$ can be identified with ${\mathfrak g}^*$. To prove (b) we therefore have only to show that the restriction to ${\mathfrak g}^*$ of the $d\theta$-equivariant trace form $\kappa$ (see Cor. \ref{trace}) is non-degenerate. Let ${\mathfrak s}=\mathop{\rm Lie}\nolimits(S)$. Since $\kappa$ is non-degenerate its restriction to ${\mathfrak s}$ is non-degenerate. But $\kappa$ is also $d\theta$-equivariant. Hence $\kappa(s,a)=0$ for any $s\in{\mathfrak s}\cap{\mathfrak k}$ and any $a\in{\mathfrak a}$. It follows that the restriction $\kappa|_{{\mathfrak a}\times{\mathfrak a}}$ is non-degenerate. To show that $\kappa|_{{\mathfrak g}^*}$ is non-degenerate, it will therefore suffice to show that the restriction to ${\mathfrak g}^*_\alpha\times{\mathfrak g}^*_{-\alpha}$ is non-degenerate for every $\alpha\in\Phi^*_A$. (Here ${\mathfrak g}^*_\alpha={\mathfrak g}(\alpha;A)\cap{\mathfrak g}^*$, a one-dimensional root subspace for each $\alpha\in\Phi_A^*$). But the Weyl group of $G^*$ is isomorphic to $W_A$ by Lemma \ref{wa}. 
Hence to see that the restriction of $\kappa$ to ${\mathfrak g}^*$ is non-degenerate, we require only that $\kappa(E_\alpha,F_\alpha)\neq 0$ for each $\alpha\in\Pi$. Since $\kappa$ is non-degenerate on ${\mathfrak a}$, there exists $a\in{\mathfrak a}$ such that $\kappa(a,H_\alpha)\neq 0$. But $\kappa(a,H_\alpha)=d\alpha(a)\kappa(E_\alpha,F_\alpha)\neq 0$. Hence $\kappa|_{{\mathfrak g}^*\times{\mathfrak g}^*}$ is non-degenerate. Since $B^*$ is simply-connected, there exists a unique automorphism $\theta_B^*$ of $B^*$ such that $d\theta_B^*=d\theta|_{{\mathfrak b}^*}$ by Lemma \ref{sccover}. Hence the involutive automorphism of $B^*\times A^*$ given by $(g,a)\mapsto (\theta_B^*(g),a^{-1})$ induces an automorphism $\theta^*$ of $G^*=(B^*\times A^*)/\mathop{\rm diag}\nolimits(A_B^*)$ satisfying $d\theta^*=d\theta|_{{\mathfrak g}^*}$. \end{proof} As an immediate consequence of the theorem, all of our earlier results apply to the pair $(G^*,\theta^*)$. \begin{rk} It is possible to construct a group $G^*_0$ such that $\mathop{\rm Lie}\nolimits(G^*_0)={\mathfrak g}^*$ and $A$ is a maximal torus of $G^*_0$. It is clear from the proof of Thm. \ref{constr} that the universal covering of $(G^*_0)^{(1)}$ is isomorphic to $B^*$, and that $B^*\rightarrow (G^*_0)^{(1)}$ is separable, with kernel of order $2^i$. Here $i$ is the number of roots $\alpha\in\Pi$ which are of type (iii) (that is, if $\beta\in\Delta_S$ satisfies $\beta|_A=\alpha$, then $\beta$ and $-\theta^*(\beta)$ generate a root system of type $A_2$). It can be seen from the classification of involutions (proved in odd characteristic by Springer \cite{springer}) that there is at most one root of type (iii) for each component of the root system of $G$. Suppose $G$ is almost simple, hence so is $G^*(=B^*)$. Since the universal covering $G^*\rightarrow G_0^*$ maps $Z(G^*)$ onto $Z(G)\cap A$, we can easily calculate the order of $Z(G)\cap A$ for an arbitrary involution. 
\end{rk} \begin{lemma}\label{cpts2} Let $G$ be an almost simple (simply-connected) group. (1) Suppose $\theta$ is quasi-split, but not split. (a) ${\cal N}$ has two irreducible components if $G$ is of type $A_{2n+1}$ or $D_{2n+1}$. (b) Otherwise ${\cal N}$ is irreducible (types $A_{2n},D_{2n},E_6$). (2) Let $\theta$ be an involution which is neither split nor quasi-split. If $G$ is of type $A,E_6,E_8,F_4$, or if $\theta$ is an outer involution in type $D$, then ${\cal N}$ is irreducible. \end{lemma} \begin{proof} (1) Let $Z=Z(G)$. Recall from Cor. \ref{splitcmpts} that the components of ${\cal N}$ are in one-to-one correspondence with the elements of $(Z\cap A)/\tau(Z)$, where $\tau:G\rightarrow G$ is given by $g\mapsto g^{-1}\theta(g)$. Suppose $G$ is of type $E_6$. Then $Z$ is a cyclic group of order 3, hence $(Z\cap A)/(Z\cap A)^2$ is trivial. By Thm. \ref{gthetaorbs}, ${\cal N}$ is irreducible. Similarly, ${\cal N}$ is irreducible if $G$ is of type $A_{2n}$. For $G$ of type $A_{2n+1}$ (resp. $D_{2n+1},D_{2n}$) we can see from \cite[pp. 664-665]{springer} that $\Phi_A^*$ is of type $C_{n+1}$ (resp. $B_{2n},B_{2n-1}$). Hence $Z(G^*)$ is of order 2 in each case. Unless $G$ is of type $D_{2n}$, $\theta$ is inner by \cite{springer}, hence $\theta(z)= z$ for any $z\in Z(G)$. On the other hand, an outer automorphism acts non-trivially on the centre. It follows that $\tau(Z)$ is trivial unless $G$ is of type $D_{2n}$, in which case it is of order 2. This shows that ${\cal N}$ has the number of irreducible components indicated. (2) If $G$ is of type $E_6,E_8$, or $F_4$, then $Z/Z^2$ is trivial, hence ${\cal N}$ is irreducible by Thm. \ref{gthetaorbs}. For an inner automorphism in type $A$, $\Phi_A^*$ is of type $C$, hence $Z(G^*)$ is of order 2. Moreover, there exists a root $\alpha\in\Pi$ of type (iii); hence $Z\cap A$ is trivial. It follows that ${\cal N}$ is irreducible. Suppose $\theta$ is a non-split outer automorphism in type $A_{2n+1}$.
Then $\Phi_A^*$ is of type $A_n$, and there is no root of type (iii). Therefore $Z\cap A$ is of order $(n+1)$. But since $\theta$ is outer, $\theta(z)=z^{-1}$ for $z\in Z$. Thus $\tau(Z)= Z^2$ is of order $(2n+2)/2=(n+1)$. Therefore $Z\cap A=\tau(Z)$, which implies that ${\cal N}$ is irreducible. Finally, suppose $\theta$ is an outer involution in type $D$. Then $\Phi_A^*$ is of type $B$, hence $Z(G^*)$ is of order 2. There is no root of type (iii), hence $Z\cap A$ is also of order 2. But $\theta$ acts non-trivially on the centre, hence $\tau(Z)\neq 1$. It follows that $Z\cap A/{\tau(Z)}$ is trivial, hence ${\cal N}$ is irreducible. \end{proof} Lemma \ref{cpts2} provides us with two more classes of involution for which ${\cal N}$ has two irreducible components: the quasi-split involutions in type $A_{2n+1}$ and $D_{2n+1}$ are, respectively, $(\mathfrak{gl}(2n+2),\mathfrak{gl}(n+1)\oplus\mathfrak{gl}(n+1))$ and $(\mathfrak{so}(4n+2),\mathfrak{so}(2n+2)\oplus\mathfrak{so}(2n))$. We now check the remaining (non-quasi-split) cases. The classification of involutions in \cite{springer} associates to each class of involution a unique {\it Araki diagram}: the Araki diagram for $\theta$ is a copy of the Dynkin diagram on $\Delta_S$, with the action of $\psi$ indicated, and the vertices in $I$ (resp. $\Delta_S\setminus I$) coloured black (resp. white). But then one can easily write down the weighted Dynkin diagram corresponding to $\omega$ (and hence to a regular nilpotent element of ${\mathfrak p}$): $h(\alpha)=2$ if $\alpha\in\Delta_S\setminus I$, and $h(\alpha)=0$ if $\alpha\in I$. Lemma \ref{cpts2} and \cite{springer} reduce us to the following cases: (i) Non-split involutions in type $B_n$.
Here there are $(n-1)$ classes of involution, with corresponding weighted Dynkin diagrams $$\begin{array}{llll} 2 & 0 & \cdots & 0 \end{array},\;\;\;\; \begin{array}{lllll} 2 & 2 & 0 & \cdots & 0 \end{array},\;\;\;\;\ldots \;\;\; ,\;\;\; \begin{array}{llll} 2 & \cdots & 2 & 0 \end{array}$$ In each case $\Phi_A^*$ is of type $B$, and there is no root $\alpha\in\Pi$ of type (iii). Hence $Z\cap A$ is of order 2. For type $B$ it is easier to carry out the calculations in the adjoint group $\mathop{\rm SO}\nolimits(2n+1)$, which we embed in the standard way in $\mathop{\rm SL}\nolimits(2n+1)$. Let $e$ be a regular nilpotent element of ${\mathfrak p}$ and let $C$ be its `reductive part'. The number of irreducible components of ${\cal N}$ therefore comes down to whether or not $C$ is contained in $K$. (Here $G^\theta/K$ is of order 2.) The embedding of $G$ in $\mathop{\rm SL}\nolimits(2n+1)$ allows us to classify the nilpotent orbits in ${\mathfrak g}$ by partitions of $(2n+1)$, see for example \cite[3.5]{som}. (The only partitions which occur in type $B$ are those such that $i$ appears an even number of times if $i$ is even.) The partitions of $(2n+1)$ corresponding to the above weighted Dynkin diagrams are, respectively, $3^1.1^{2(n-1)},\; 5^1.1^{2(n-2)},\;\ldots ,(2n-1)^1.1^2$. The pair corresponding to a weighted Dynkin diagram as above with $m$ 2's is $(\mathfrak{so}(2n+1),\mathfrak{so}(m)\oplus\mathfrak{so}(2n+1-m))$. It follows that if $m$ is even and $e$ is a regular nilpotent element of ${\mathfrak p}$, then $\theta$ is conjugate to $\mathop{\rm Ad}\nolimits\lambda(\sqrt{-1})$, where $\lambda$ is an associated cocharacter for $e$. But then $Z_G(\lambda)\subset K$, hence $C\subset K$. It follows that in this case, ${\cal N}$ has two irreducible components. Suppose therefore that $m$ is odd.
It is easy to see that $K\cong\mathop{\rm SO}\nolimits(m)\times\mathop{\rm SO}\nolimits(2n+1-m)$, and that $G^\theta\cong\{ (g,h)\in\mathop{\rm O}\nolimits(m)\times\mathop{\rm O}\nolimits(2n+1-m)\, |\,\det g=\det h\}$. Here $C/C^\circ$ is of order 2 by Sommers' theorem. In fact, we can see by direct calculation that $C\cong \mathop{\rm O}\nolimits(2n+1-m)$, and that $C/C^\circ$ is generated by an element of $G^\theta\hookrightarrow\mathop{\rm O}\nolimits(m)\times\mathop{\rm O}\nolimits(2n+1-m)$ of the form $(-I,u)$, where $\det u=-1$. But then $CK=G^\theta$. It follows that ${\cal N}$ is irreducible in this case. (ii) Non-split involutions in type $C_n$. We consider $G=\mathop{\rm Sp}\nolimits(2n,k)$ as a subgroup of $\mathop{\rm SL}\nolimits(2n,k)$ in the standard way. There are $[n/2]$ classes of non-split involution of $G$, with corresponding weighted Dynkin diagrams $$\begin{array}{llllll} 2 & 0 & 0 & \cdots & 0 \end{array},\;\;\; \begin{array}{llllll} 2 & 0 & 2 & 0 & \cdots & 0 \end{array},\;\;\;\ldots\;\;\;,\;\;\; \left\{ \begin{array}{cc} {\begin{array}{llllll} 2 & 0 & 2 & \cdots & 0 & 2 \end{array}} & \mbox{if $n$ is even,} \\ {\begin{array}{llllll} 2 & 0 & 2 & \cdots & 2 & 0 \end{array}} & \mbox{if $n$ is odd.} \end{array}\right.$$ In each case $\Phi_A^*$ is of type $B$, and with the exception of the case $\begin{array}{llllll} 2 & 0 & 2 & \cdots & 0 & 2 \end{array}$, there is a root $\alpha\in\Pi$ of type (iii). This shows that $Z\cap A$ is trivial in each case except this final one, which is $(\mathfrak{sp}(4n),\mathfrak{sp}(2n)\oplus\mathfrak{sp}(2n))$. Here a regular nilpotent element of ${\mathfrak p}$ is of partition type $(2n)^2$. Up to conjugacy, $\theta=\mathop{\rm Int}\nolimits\left(\begin{matrix} A_0 & {} & 0 \\ {} & \ddots & {} \\ 0 & {} & A_0 \end{matrix}\right)$, where $A_0=\left(\begin{matrix} 1 & {} & {} & 0 \\ {} & -1 & {} & {} \\ {} & {} & -1 & {} \\ 0 & {} & {} & 1 \end{matrix}\right)$.
Then $e=e_{13}+e_{24}+\ldots-e_{4n-2,4n}\in{\mathfrak g}$ is a regular nilpotent element of ${\mathfrak p}$. Hence if $\lambda$ is the unique diagonal cocharacter which is associated to $e$, then $c=\left(\begin{matrix} w & {} & 0 \\ {} & \ddots & {} \\ 0 & {} & w \end{matrix}\right)\in Z_G(\lambda)\cap Z_G(e)$ and $c^{-1}\theta(c)=-1$, where $w=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)$. It follows from Cor. \ref{splitcmpts} that ${\cal N}$ is irreducible in this case. (iii) Inner involutions in type $D_{2n}$. There are $(n+1)$ classes of involutions, producing $\Phi_A^*$ of types $B_2,B_4,\ldots ,B_{2n-2},C_n,C_n$. The corresponding weighted Dynkin diagrams are: $$\begin{array}{llllll} {} & {} & {} & {} & {} & 0 \\ 2 & 2 & 0 & \cdots & 0 & {} \\ {} & {} & {} & {} & {} & 0 \end{array} ,\;\;\;\; \begin{array}{llllllll} {} & {} & {} & {} & {} & {} & {} & 0 \\ 2 & 2 & 2 & 2 & 0 & \cdots & 0 & {} \\ {} & {} & {} & {} & {} & {} & {} & 0 \\ \end{array}, \;\;\;\; \ldots \;\;\;\; ,\;\; \begin{array}{llllll} {} & {} & {} & {} & {} & 0 \\ 2 & 2 & 2 & \cdots & 2 & {} \\ {} & {} & {} & {} & {} & 0 \end{array},$$ $$\begin{array}{llllll} {} & {} & {} & {} & {} & 2 \\ 0 & 2 & 0 & \cdots & 2 & {} \\ {} & {} & {} & {} & {} & 0 \end{array},\; \begin{array}{llllll} {} & {} & {} & {} & {} & 0 \\ 0 & 2 & 0 & \cdots & 2 & {} \\ {} & {} & {} & {} & {} & 2 \end{array}$$ Moreover, we have respectively: ${\mathfrak k}=\mathfrak{so}(4n-2)\oplus\mathfrak{so}(2),\,\mathfrak{so}(4n-4)\oplus\mathfrak{so}(4),\,\ldots \, ,\mathfrak{so}(2n+2)\oplus\mathfrak{so}(2n-2),\,\mathfrak{gl}(2n),\mathfrak{gl}(2n)$. (The final two cases are conjugate by an outer involution of $G$.) The nilpotent orbits in ${\mathfrak g}$ are classified in a standard way by partitions of $4n$, see for example \cite[3.5]{som}. (The only partitions which occur in type $D$ are those such that $i$ appears an even number of times if $i$ is even.)
The partitions corresponding to the above weighted Dynkin diagrams are $3^1.1^{4n-3},\; 7^1.1^{4n-7},\ldots ,(4n-5)^1.1^5,(2n)^2,(2n)^2$. Hence by Sommers' theorem \cite{som,premnil,mcninchsom}, in each of these cases the group $C=Z_G(\lambda)\cap Z_G(e)$ is connected modulo $Z(G)$. (Here $e$ is a regular nilpotent element of ${\mathfrak p}$ and $\lambda$ is an associated cocharacter for $e$.) Moreover, there is no root of type (iii). Hence $Z\cap A/\tau(Z) = Z\cap A\cong Z(G^*)$. Thus ${\cal N}$ has two irreducible components. (iv) Inner involutions in type $D_{2n+1}$. There are $n$ classes of involutions, producing $\Phi_A^*$ of types $B_2,B_4,\ldots ,B_{2n-2},B_n$. The corresponding weighted Dynkin diagrams are: $$\begin{array}{llllll} {} & {} & {} & {} & {} & 0 \\ 2 & 2 & 0 & \cdots & 0 & {} \\ {} & {} & {} & {} & {} & 0 \end{array},\;\;\; \begin{array}{llllllll} {} & {} & {} & {} & {} & {} & {} & 0 \\ 2 & 2 & 2 & 2 & 0 & \cdots & 0 & {} \\ {} & {} & {} & {} & {} & {} & {} & 0 \\ \end{array}, \;\;\;\; \ldots\;\;\;\; ,\;\; \begin{array}{llllll} {} & {} & {} & {} & {} & 0 \\ 2 & 2 & \cdots & 2 & 0 & {} \\ {} & {} & {} & {} & {} & 0 \end{array},$$ $$\mbox{and}\;\; \begin{array}{llllll} {} & {} & {} & {} & {} & 2 \\ 0 & 2 & 0 & \cdots & 0 & {} \\ {} & {} & {} & {} & {} & 2 \end{array}$$ We have, respectively: ${\mathfrak k}=\mathfrak{so}(4n)\oplus\mathfrak{so}(2),\mathfrak{so}(4n-2)\oplus\mathfrak{so}(4),\,\ldots\, ,\mathfrak{so}(2n+4)\oplus\mathfrak{so}(2n-2)$, and $\mathfrak{gl}(2n+1)$. In the final case $\theta^*(\alpha_{2n})=-(\alpha_{2n-1}+\alpha_{2n+1})$. Thus $\alpha_{2n}|_A=\alpha_{2n+1}|_A$ is of type (iii); hence $A\cap Z(G)$ is trivial and ${\cal N}$ is irreducible. For the first $(n-1)$ diagrams, the corresponding partitions of $(4n+2)$ are: $3^1.1^{4n-1},\; 7^1.1^{4n-5},\ldots ,$ $(4n-5)^1.1^7$. By Sommers' theorem $Z_G(\lambda)\cap Z_G(e)$ is connected modulo $Z(G)$ in each case.
It follows that ${\cal N}$ has two irreducible components. (v) (Inner) involutions in type $E_7$. Here there are two classes of involutions, with weighted Dynkin diagrams: $$\begin{array}{llllll} 2 & 2 & 2 & 0 & 2 & 0 \\ {} & {} & 0 & {} & {} & {} \end{array} \;\;\;\;\;\mbox{and}\;\;\;\;\; \begin{array}{llllll} 2 & 0 & 0 & 0 & 2 & 2 \\ {} & {} & 0 & {} & {} & {} \end{array}.$$ For the first class, which is $({\mathfrak e}_7,\mathfrak{so}(12)\oplus\mathfrak{sl}(2))$, $\Phi_A^*$ is of type $F_4$, hence ${\cal N}$ is irreducible (since the fundamental group of $F_4$ is trivial). For the second, which is $({\mathfrak e}_7,{\mathfrak e}_6\oplus k)$, $\Phi_A^*$ is of type $C_3$ and there is no root of type (iii). Hence $(Z\cap A)/\tau(Z)$ is of order 2. Moreover, by Sommers' theorem (\cite[p. 558]{som} and \cite{premnil,mcninchsom}) $Z_G(\lambda)\cap Z_G(e)$ is connected modulo $Z(G)$. Therefore ${\cal N}$ has two irreducible components. This completes the process of computing the number of irreducible components of ${\cal N}$. The non-irreducible cases match those given by Sekiguchi in \cite{sek} for $k={\mathbb C}$. \begin{proposition} The classes of involution for which ${\cal N}$ is non-irreducible are as follows. - Type $A$: $(\mathfrak{gl}(n),\mathfrak{so}(n))$, $(\mathfrak{gl}(2n),\mathfrak{gl}(n)\oplus\mathfrak{gl}(n))$. - Type $B$: $(\mathfrak{so}(2n+1),\mathfrak{so}(2m)\oplus\mathfrak{so}(2(n-m)+1))$, {\bf only} if the even part $2m < 2(n-m)+1$. - Type $C$: $(\mathfrak{sp}(2n),\mathfrak{gl}(n))$. - Type $D$: $(\mathfrak{so}(2n),\mathfrak{so}(2m)\oplus\mathfrak{so}(2(n-m)))$, $(\mathfrak{so}(4n),\mathfrak{gl}(2n))$, $(\mathfrak{so}(4n+2),\mathfrak{so}(2n+1)\oplus\mathfrak{so}(2n+1))$. - Type $E_7$: $(\mathfrak{e}_7,\mathfrak{sl}(8)),(\mathfrak{e}_7,\mathfrak{e}_6\oplus k)$. In each of these cases ${\cal N}$ has two irreducible components, except for $(\mathfrak{so}(4n),\mathfrak{so}(2n)\oplus\mathfrak{so}(2n))$, where there are four components.
\end{proposition} \subsection{Applications} \label{sec:6.4} We draw a number of conclusions from Theorem \ref{constr}. Let $S$ be a maximal torus of $G$ containing $A$, and let $\Delta_S$ be a basis for $\Phi_S$ from which $\Pi$ is obtained (see Sect. \ref{sec:2.2}). We can now show that each fibre of the quotient morphism $\pi_{\mathfrak p}:{\mathfrak p}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ has a dense open $K^*$-orbit. \begin{lemma}\label{fibrelemma} Let $s\in{\mathfrak a}$, let $L=Z_G(s)^\circ$, and let ${\mathfrak l}=\mathop{\rm Lie}\nolimits(L)$. There is a dense open $(K^*\cap L)$-orbit in ${\cal N}({\mathfrak l}\cap{\mathfrak p})$. \end{lemma} \begin{proof} Since $s\in{\mathfrak a}$, $L$ is a $\theta$-stable Levi subgroup of $G$ containing $A$. Let $F_L^*=\{ a\in A\,|\,a^2\in Z(L)\}$. As there is a surjective map from $F_L^*/{(Z(L)\cap A)}$ to the set of $L^\theta$-orbits in ${\cal N}({\mathfrak l}\cap{\mathfrak p})_{reg}$ (Thm. \ref{gthetaorbs}), it will suffice to show that the map $F^*/{(Z\cap A)}\rightarrow F_L^*/{(Z(L)\cap A)}$ induced by the embedding $F^*\hookrightarrow F_L^*$ is surjective. Let $r_0=\mathop{\rm rk}\nolimits (A\cap G^{(1)})$. The basis $\Pi=\{\alpha_1,\alpha_2,\ldots,\alpha_{r_0}\}$ determines an isomorphism $(\alpha_1,\alpha_2,\ldots,\alpha_{r_0}):A/{(Z\cap A)}\longrightarrow (k^\times)^{r_0}$. (Separability follows from Lemma \ref{diffindpt}.) The subgroup $F^*/{(Z\cap A)}$ maps onto the set of $r_0$-tuples of the form $(\pm 1,\ldots,\pm 1)$. Since any Levi subgroup of $G^*$ is conjugate to a standard Levi subgroup, there exists $w\in W(G^*,A^*)$ such that $w(Z_{G^*}(s))$ is standard. But $W(G^*,A^*)=W_A$ by Lemma \ref{wa}. Hence, after replacing $s$ by some $W_A$-conjugate, if necessary, there is a subset $J\subseteq \Pi$ such that ${\mathfrak l}$ is spanned by ${\mathfrak g}^A$ and the subspaces ${\mathfrak g}(\alpha;A)$ with $\alpha\in {\mathbb Z}J\cap \Phi_A$.
Then $J=\{\beta_1,\beta_2,\ldots,\beta_{r_1}\}$ determines an isomorphism $(\beta_1,\beta_2,\ldots,\beta_{r_1}):A/{(Z(L)\cap A)}\rightarrow (k^\times)^{r_1}$. It is now easy to see that the projection onto the $\beta_i$-coordinates gives a surjective homomorphism $A/{(Z\cap A)}\rightarrow A/{(Z(L)\cap A)}$ which sends $F^*/{(Z\cap A)}$ onto $F_L^*/{(Z(L)\cap A)}$. \end{proof} Hence: \begin{theorem}\label{fibres} Every fibre of $\pi_{\mathfrak p}$ contains a dense (open) $K^*$-orbit. \end{theorem} \begin{proof} Let $\xi\in{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ and let $s$ be a semisimple element of $\pi_{\mathfrak p}^{-1}(\xi)$. We may assume after conjugating by an element of $K$, if necessary, that $s\in{\mathfrak a}$. Let $L=Z_G(s)=Z_G(s)^\circ,{\mathfrak l}=\mathop{\rm Lie}\nolimits(L)$. Thus $\pi_{\mathfrak p}^{-1}(\xi)=K\cdot \{s+{\cal N}({\mathfrak l}\cap{\mathfrak p})\}$. By Lemma \ref{fibrelemma} there is an open $K^*\cap L$-orbit in ${\cal N}({\mathfrak l}\cap{\mathfrak p})$. Hence there is a dense $K^*$-orbit in $\pi_{\mathfrak p}^{-1}(\xi)$. \end{proof} \begin{rk} Let $P=\{ g^{-1}\theta(g)\,|\,g\in G\}$. Let $x\in G$ and let $x=su$ be the Jordan-Chevalley decomposition of $x$, where $s$ is the semisimple part and $u$ the unipotent part. Then $x\in P$ if and only if $\theta(u)=u^{-1}$ and $s$ is contained in a maximal $\theta$-split torus of $G$ (\cite[6.1]{rich2}). Let ${\cal U}$ denote the set of unipotent elements in $P$; recall (Cor. \ref{isocor}) that there is a $K^*$-equivariant isomorphism $\Psi:{\cal U}\longrightarrow{\cal N}$. Fix a maximal $\theta$-split torus $A$ of $G$. By \cite[11.3-4]{rich2} the action of $K^*$ on $P$ is well-defined and the embedding $A\hookrightarrow P$ induces an isomorphism $A/W_A\longrightarrow P\ensuremath{/ \hspace{-1.2mm}/} K\cong P\ensuremath{/ \hspace{-1.2mm}/} K^*$. Hence each fibre of $\pi_P:P\longrightarrow P\ensuremath{/ \hspace{-1.2mm}/} K$ is $K^*$-stable and contains a unique closed (semisimple) $K$-orbit. In \cite[Rk.
10.4]{rich2} Richardson conjectured that each fibre of $\pi_P:P\rightarrow P\ensuremath{/ \hspace{-1.2mm}/} K$ has a dense open $K^*$-orbit. However, this is not true, as we now show. It follows from the above that every fibre of $\pi_P$ can be written as $K\cdot a({\cal U}\cap Z_G(a))=K\cdot a({\cal U}\cap Z_G(a)^\circ)$ for some $a\in A$. Let $a\in A$, let $L=Z_G(a)^\circ$ and let $V_1,V_2,\ldots,V_l$ be the irreducible components of ${\cal U}\cap L$. By Cor. \ref{isocor} and Lemma \ref{nil1} the $V_i$ are of equal dimension, and each contains an open $(L^\theta)^\circ$-orbit which is just the intersection with the set of $\theta$-regular elements of $L$. (An element $x\in P$ is $\theta$-regular if $\mathop{\rm dim}\nolimits Z_G(x)=\mathop{\rm dim}\nolimits Z_G(A)$. Note that $v\in V_i$ is $\theta$-regular in $L$ if and only if $av$ is $\theta$-regular in $G$.) It follows that each irreducible component of $\pi_P^{-1}(\pi_P(a))$ is of the form $K\cdot aV_i$ for some $i$, and $K\cdot aV_i$ is an irreducible component of $\pi_P^{-1}(\pi_P(a))$ for all $i$. It is now easy to see that there is a dense open $K^*$-orbit in $\pi_P^{-1}(\pi_P(a))$ if and only if $Z_G(a)\cap K^*$ permutes the components $V_i$ transitively. Let $G$ be almost simple, of type $E_8,F_4$, or $G_2$, and let $\theta$ be a split involution of $G$. Since $G$ is both simply-connected and adjoint, $K^*=G^\theta=K$ and $L=Z_G(a)=Z_G(a)^\circ$. It follows that there is a dense open $K^*$-orbit in $\pi_P^{-1}(\pi_P(a))$ if and only if $L^\theta$ permutes the components of ${\cal U}\cap L$ transitively. Let $a$ be a non-regular element of order 2. As $G$ is adjoint, $Z(L)/Z(L)^\circ$ is cyclic of order 2 (see \cite[Prop. 3.2]{premnil}). Hence $Z(L)/Z(L)^2$ is cyclic of order 2. By Cor. \ref{splitcmpts}, the $L^\theta$-orbits in ${\cal N}({\mathfrak l}\cap{\mathfrak p})$ are parametrised by the elements of $Z(L)/Z(L)^2$. Hence by Cor. \ref{isocor} there are two regular $L^\theta$-orbits in ${\cal U}\cap L$.
It follows that there is more than one regular $K^*$-orbit in $\pi_P^{-1}(\pi_P(a))$. \end{rk} Let $x\in{\mathfrak g}$ be such that $x^{[p]}=0$. McNinch has associated to $x$ a family of {\it optimal} homomorphisms $\rho:\mathop{\rm SL}\nolimits(2)\longrightarrow G$. These behave in a similar way to $\mathfrak{sl}(2)$-triples in characteristic zero (or large characteristic). Let $\chi:k^\times\longrightarrow\mathop{\rm SL}\nolimits(2)$, $\chi(t)= \left( \begin{array}{ll} t & 0 \\ 0 & t^{-1} \end{array}\right)$, let $X= \left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right)$, and let $Y= \left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right)$. A homomorphism $\rho:\mathop{\rm SL}\nolimits(2)\longrightarrow G$ is optimal for $x$ if $d\rho(X)=x$, and $\rho\circ\chi$ is an associated cocharacter for $x$ in $G$. We have the following facts: {\it - Optimal homomorphisms exist: for any associated cocharacter $\lambda$ for $x$ there is a unique homomorphism $\rho:\mathop{\rm SL}\nolimits(2)\longrightarrow G$ such that $d\rho(X)=x$ and $\rho\circ\chi=\lambda$ (\cite{mcninch} and \cite[Prop. 44]{mcninch2}). - Any two optimal $\mathop{\rm SL}\nolimits(2)$-homomorphisms for $x$ are conjugate by an element of $Z_G(x)^\circ$ (\cite[Thm. 46]{mcninch2}). - $Z_G(x)\cap Z_G(\lambda)=Z_G(\rho(\mathop{\rm SL}\nolimits(2)))$ (\cite[Cor. 45]{mcninch2}).} Recall that a homomorphism $\rho:\mathop{\rm SL}\nolimits(2)\longrightarrow G$ is {\it good} (cf. Seitz \cite{seitz}) if all weights of $\rho\circ\chi$ on ${\mathfrak g}$ are less than or equal to $(2p-2)$. {\it - A homomorphism $\rho:\mathop{\rm SL}\nolimits(2)\longrightarrow G$ is optimal for some $x$ if and only if it is good (\cite[Prop. 55]{mcninch2}). - The representation $(\mathop{\rm Ad}\nolimits\circ\rho,{\mathfrak g})$ is a tilting module for $\mathop{\rm SL}\nolimits(2)$ (this follows from \cite[Prop. 4.2]{seitz}; see \cite[Prop. 36 and Pf. of Prop. 37]{mcninch2}).} Let $E,\omega$ be as in Thm.
\ref{constr} and let ${\mathfrak g}^*={\mathfrak g}^*(E)$. Let $\alpha\in\Pi$: then $E_\alpha^{[p]}=0$ by Lemma \ref{pisgood}. Moreover, $\omega_\alpha$ is an associated cocharacter for $E_\alpha$ in $L_\alpha$. But $L_\alpha$ is a Levi subgroup of $G$, hence $\omega_\alpha$ is associated to $E_\alpha$ in $G$. Let $L^*_\alpha$ be the (unique) Levi subgroup of $G^*$ such that $\mathop{\rm Lie}\nolimits(L^*_\alpha)= {\mathfrak a}\oplus kE_\alpha\oplus kd\theta(E_\alpha)$. Then $E_\alpha$ is distinguished in $\mathop{\rm Lie}\nolimits(L^*_\alpha)$. By our construction of $G^*$ (see the proof of Thm. \ref{constr}) $\omega_\alpha$ also defines a cocharacter in $A^*$. Hence $\omega_\alpha(k^\times)\subset (L^*_\alpha)^{(1)}$ by the argument used in the proof of Lemma \ref{omegaalphacartan}. It follows that there exist optimal homomorphisms $\rho_\alpha:\mathop{\rm SL}\nolimits(2)\longrightarrow G$ and $\rho'_\alpha:\mathop{\rm SL}\nolimits(2)\longrightarrow G^*$ for $E_\alpha$ such that $\rho_\alpha\circ\chi=\omega_\alpha=\rho'_\alpha\circ\chi$. By uniqueness, $\rho_\alpha(\mathop{\rm SL}\nolimits(2))\subset L_\alpha$ and $\rho'_\alpha(\mathop{\rm SL}\nolimits(2))\subset L^*_\alpha$. By Lemma \ref{sl2s}, $\xi_\alpha d\theta(E_\alpha)$ is the unique element $F_\alpha\in{\mathfrak g}(-\alpha;A)$ such that $[E_\alpha,F_\alpha]=d\omega_\alpha(1)$. Therefore $d\rho_\alpha(Y)=d\rho'_\alpha(Y)=F_\alpha$. It follows that $d\rho_\alpha(x)=d\rho'_\alpha(x)$ for all $x\in\mathfrak{sl}(2)$. Hence we can show: \begin{lemma} (i) ${\mathfrak g}^*$ is normalized by $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$. (ii) $\mathop{\rm Ad}\nolimits\rho_\alpha(g)|_{{\mathfrak g}^*}=\mathop{\rm Ad}\nolimits\rho'_\alpha(g)$ for all $g\in \mathop{\rm SL}\nolimits(2)$. (iii) Let $H$ be the minimal closed subgroup of $G$ containing the subgroups $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$, $\alpha\in\Pi$. Then $H$ is contained in $N_G({\mathfrak g}^*)$. 
(iv) $\mathop{\rm Ad}\nolimits H|_{{\mathfrak g}^*}=\mathop{\rm Ad}\nolimits G^*$. \end{lemma} \begin{proof} Let $\beta\in\Phi_A^*$, $\beta\neq\pm\alpha$, let $\beta-i\alpha,\ldots,\beta+j\alpha$ be the $\alpha$-chain through $\beta$, let ${\mathfrak g}_{(\beta)}={\mathfrak g}(\beta-i\alpha;A)\oplus\ldots\oplus{\mathfrak g}(\beta+j\alpha;A)$ and let $U={\mathfrak g}^A\oplus\sum{\mathfrak g}(\gamma;A)$, the sum taken over all $\gamma\in\Phi_A\setminus\{\beta-i\alpha,\ldots,\beta+j\alpha\}$. Then ${\mathfrak g}={\mathfrak g}_{(\beta)}\oplus U$, and each summand is $L_\alpha$-stable, therefore $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-stable. Since any direct summand in a tilting module is a tilting module (\cite[Thm. 1.1]{donkin2}), ${\mathfrak g}_{(\beta)}$ is a direct sum of indecomposable tilting modules for $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$. For each positive integer $c$ there is a unique tilting module $T(c)$ for $\mathop{\rm SL}\nolimits(2)$ with highest weight $c$: $T(c)$ is simple if $c<p$ (see \cite[Lemma 1.3]{seitz}). But now by our condition on $p$, ${\mathfrak g}_{(\beta)}$ is a direct sum of simple $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-modules. Moreover, each tilting summand is infinitesimally irreducible, hence ${\mathfrak g}_{(\beta)}$ is completely reducible as a $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-module, and as an $\mathfrak{sl}(2)$-module (with $\mathfrak{sl}(2)$ acting via $\mathop{\rm ad}\nolimits\circ d\rho_\alpha$). It follows that every $\mathfrak{sl}(2)$-submodule of ${\mathfrak g}_{(\beta)}$ is $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-stable. For $\gamma\in\Phi_A^*$, let ${\mathfrak g}^*_\gamma={\mathfrak g}^*\cap {\mathfrak g}(\gamma;A)$ (a one-dimensional root subspace), and let ${\mathfrak g}^*_{(\beta)}={\mathfrak g}^*_{\beta-i\alpha}\oplus\ldots\oplus{\mathfrak g}^*_{\beta+j\alpha}$.
Then ${\mathfrak g}^*_{(\beta)}$ is a simple $d\rho_\alpha(\mathfrak{sl}(2))$-submodule of ${\mathfrak g}_{(\beta)}$, hence is $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-stable. (In fact ${\mathfrak g}^*_{(\beta)}$ is isomorphic to $T(\langle\beta+j\alpha,\alpha\rangle)$.) Moreover, ${\mathfrak g}^*={\mathfrak g}^*_{-\alpha}\oplus{\mathfrak a}\oplus{\mathfrak g}^*_\alpha\oplus\sum{\mathfrak g}^*_{(\beta)}$, and ${\mathfrak g}^*_{-\alpha}\oplus{\mathfrak a}\oplus{\mathfrak g}^*_\alpha= d\rho_\alpha(\mathfrak{sl}(2))\oplus({\mathfrak z}({\mathfrak l}_\alpha)\cap{\mathfrak a})$. It follows that ${\mathfrak g}^*$ is $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$-stable. This proves (i). But now (iii) follows immediately. We have decomposed ${\mathfrak g}^*$ as $\oplus V_\gamma$, where each $V_\gamma$ is a simple $d\rho_\alpha(\mathfrak{sl}(2))$-module of dimension $\leq 4$ ($\leq 3$ if $p=3$). Each summand is also a simple tilting module for $\rho_\alpha(\mathop{\rm SL}\nolimits(2))$ (resp. $\rho'_\alpha(\mathop{\rm SL}\nolimits(2))$). But now, since $d\rho_\alpha(x)=d\rho'_\alpha(x)$ for all $x\in\mathfrak{sl}(2)$, we must have $\mathop{\rm Ad}\nolimits\rho_\alpha(g)(v)=\mathop{\rm Ad}\nolimits\rho'_\alpha(g)(v)$ for all $g\in\mathop{\rm SL}\nolimits(2)$ and all $v\in V_\gamma$. This proves (ii). But $\mathop{\rm Ad}\nolimits G^*$ is generated by the subgroups $\mathop{\rm Ad}\nolimits\rho'_\alpha(\mathop{\rm SL}\nolimits(2))$. Hence (iv) follows. \end{proof} \begin{corollary}\label{conjugacy} For elements of ${\mathfrak g}^*$, $G^*$-conjugacy implies $G$-conjugacy. \end{corollary} Let ${\mathfrak k}^*={\mathfrak k}\cap{\mathfrak g}^*$, ${\mathfrak p}^*={\mathfrak p}\cap{\mathfrak g}^*$. Clearly ${\mathfrak g}^*={\mathfrak k}^*\oplus{\mathfrak p}^*$ is the symmetric space decomposition of ${\mathfrak g}^*$. \begin{lemma}\label{regstar} Let $x\in{\mathfrak p}^*$.
The following are equivalent: (i) $x$ is a ($\theta$-)regular element of ${\mathfrak p}$, (ii) $x$ is a regular element of ${\mathfrak g}^*$, (iii) ${\mathfrak z}_{{\mathfrak k}^*}(x)=0$, (iv) $\mathop{\rm dim}\nolimits{\mathfrak z}_{{\mathfrak p}^*}(x)=r=\mathop{\rm dim}\nolimits{\mathfrak a}$. \end{lemma} \begin{proof} Since ${\mathfrak a}$ is a maximal toral algebra of ${\mathfrak g}^*$, the equivalence of (ii)-(iv) follows immediately from Lemma \ref{regs}. Suppose $x\in{\mathfrak p}^*$, and $x$ is a regular element of ${\mathfrak p}$. Then $\mathop{\rm dim}\nolimits{\mathfrak z}_{{\mathfrak p}^*}(x)\leq r$, hence (iv) holds. It remains to show that if $x$ is a regular element of ${\mathfrak p}^*$, then $x$ is regular in ${\mathfrak p}$. Let $e$ be a regular nilpotent element of ${\mathfrak p}^*$. Then $e$ is $G^*$-conjugate to $E$. But therefore $e$ is $G$-conjugate to $E$ by Cor. \ref{conjugacy}, hence $\mathop{\rm dim}\nolimits{\mathfrak z}_{\mathfrak g}(e)=\mathop{\rm dim}\nolimits{\mathfrak g}^\omega$, that is, $e$ is regular in ${\mathfrak p}$. Suppose therefore that $x$ is a non-nilpotent regular element of ${\mathfrak p}^*$, and that $x=x_s+x_n$ is the Jordan-Chevalley decomposition of $x$. After replacing $x$ by a $(G^*)^{\theta^*}$-conjugate, if necessary, we may assume that $x_s\in{\mathfrak a}$. Let $L=Z_G(x_s),L^*=Z_{G^*}(x_s),{\mathfrak l}=\mathop{\rm Lie}\nolimits(L),{\mathfrak l}^*=\mathop{\rm Lie}\nolimits(L^*)$. Let $\Pi_L$ be a basis for $\Phi(L,A)$, and let $\omega_L:k^\times\longrightarrow A\cap L^{(1)}$ be the unique cocharacter such that $\langle\alpha,\omega_L\rangle=2$ for all $\alpha\in\Pi_L$ (Cor. \ref{regconj}). There exists a unique cocharacter $\omega^*_L:k^\times\longrightarrow A^*\cap (L^*)^{(1)}$ satisfying the same conditions: hence $\omega_L^*$ can be identified with $\omega_L$ (the embedding $Y(A^*)\hookrightarrow Y(A)$ sends $\omega_L^*$ to $\omega_L$). 
We can therefore choose a representative $E_L$ for the open $Z_L(\omega_L)$-orbit in ${\mathfrak l}(2;\omega_L)$ such that $E_L\in{\mathfrak l}^*$. Clearly $E_L$ is a regular nilpotent element of ${\mathfrak l}^*$. By the argument used for Thm. \ref{constr}, ${\mathfrak l}^*$ is the subalgebra of ${\mathfrak l}$ generated by ${\mathfrak a},E_L$, and $d\theta(E_L)$. Hence $L^*$ and $L$ stand in the same relation as do $G^*$ and $G$. Since $x$ is regular in ${\mathfrak g}^*$, $x_n$ is a regular nilpotent element of ${\mathfrak l}^*$. But then $x_n$ is $L^*$-conjugate to $E_L$, hence $L$-conjugate to $E_L$. It follows that $\mathop{\rm dim}\nolimits({\mathfrak l}\cap{\mathfrak z}_{\mathfrak g}(x_n))=\mathop{\rm dim}\nolimits Z_G(A)$. Thus $x$ is regular in ${\mathfrak p}$. This completes the proof. \end{proof} \begin{lemma}\label{gstarconj} For semisimple elements of ${\mathfrak p}^*$, $G^*$-conjugacy is equivalent to $K$-conjugacy. \end{lemma} \begin{proof} Let $a,a'$ be semisimple elements of ${\mathfrak p}^*$. Since any two maximal toral subalgebras of ${\mathfrak p}^*$ are conjugate by an element of $G^*$ (resp. $K$), we may clearly assume that $a,a'\in{\mathfrak a}$. But now $a,a'$ are $K$-conjugate if and only if they are $W_A$-conjugate, hence if and only if they are $G^*$-conjugate. \end{proof} Let $e$ be a nilpotent element of ${\mathfrak p}^*$ satisfying the equivalent conditions of Lemma \ref{regstar}. By Lemma \ref{assoc} there is an associated cocharacter $\lambda:k^\times\longrightarrow (G^*)^{\theta^*}$ for $e$. As $e$ is regular, ${\mathfrak z}_{{\mathfrak k}^*}(e)$ is trivial. Therefore $[{\mathfrak p}^*,e]={\mathfrak k}^*$ and $[{\mathfrak k}^*,e]$ is of codimension $r=\mathop{\rm dim}\nolimits{\mathfrak a}$ in ${\mathfrak p}^*$. Let ${\mathfrak v}$ be an $\mathop{\rm Ad}\nolimits\lambda$-graded subspace of ${\mathfrak p}^*$ such that $[{\mathfrak k}^*,e]\oplus{\mathfrak v}={\mathfrak p}^*$.
We recall (by \cite[6.3-6.5]{veldkamp}, see also \cite[\S 3]{premtang} for the proof in good characteristic) that every element of $e+{\mathfrak v}$ is regular in ${\mathfrak g}^*$, that the embedding $e+{\mathfrak v}\hookrightarrow{\mathfrak g}^*$ induces an isomorphism $e+{\mathfrak v}\rightarrow{\mathfrak g}^*\ensuremath{/ \hspace{-1.2mm}/} {G^*}$, and that each regular orbit in ${\mathfrak g}^*$ intersects $e+{\mathfrak v}$ in exactly one point. \begin{lemma}\label{slices} Let $j$ be the composite of the isomorphisms $k[{\mathfrak p}]^K\longrightarrow k[{\mathfrak a}]^{W_A}\longrightarrow k[{\mathfrak g}^*]^{G^*}$ and let $f\in k[{\mathfrak p}]^K,g\in k[{\mathfrak g}^*]^{G^*}$. Then $j(f)=g$ if and only if $f|_{e+{\mathfrak v}}=g|_{e+{\mathfrak v}}$. Hence ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$ is isomorphic to $e+{\mathfrak v}$, and each regular $K^*$-orbit in ${\mathfrak p}$ intersects $e+{\mathfrak v}$ in exactly one point. \end{lemma} \begin{proof} Clearly $j(f)=g\Leftrightarrow f|_{\mathfrak a}=g|_{\mathfrak a}$. The set of regular elements in ${\mathfrak a}$ is a dense open subset. Hence its image in ${\mathfrak a}\ensuremath{/ \hspace{-1.2mm}/} W_A={\mathfrak a}/W_A$ is dense. It follows that the set $U$ of semisimple elements in $e+{\mathfrak v}$ is dense. By Lemma \ref{gstarconj}, $f|_{\mathfrak a}=g|_{\mathfrak a}\Leftrightarrow f|_U=g|_U\Leftrightarrow f|_{e+{\mathfrak v}}=g|_{e+{\mathfrak v}}$. Therefore the restriction $k[{\mathfrak p}]\rightarrow k[e+{\mathfrak v}]$ induces an isomorphism $k[{\mathfrak p}]^K\longrightarrow k[e+{\mathfrak v}]$. Let $x\in{\mathfrak p}$ be regular. Then any regular element of $\pi_{\mathfrak p}^{-1}(\pi_{\mathfrak p}(x))$ is $K^*$-conjugate to $x$ by Thm. \ref{fibres}. There is a unique point $y\in e+{\mathfrak v}$ such that $\pi_{\mathfrak p}(y)=\pi_{\mathfrak p}(x)$. Moreover, $y$ is regular by Lemma \ref{regstar}. This completes the proof.
\end{proof} \begin{corollary}\label{regcond} Let $k[{\mathfrak p}]^K=k[u_1,u_2,\ldots,u_r]$, where the $u_i$ are homogeneous polynomials, and let $x\in{\mathfrak p}$ be regular. Then the differentials $(du_i)_x,1\leq i\leq r$ are linearly independent. \end{corollary} \begin{proof} Let $x$ be regular. By Lemma \ref{slices} there is a unique $K^*$-conjugate $y$ of $x$ in $e+{\mathfrak v}$. Therefore the differentials $(du_i)_x$ are linearly independent if and only if the $(du_i)_y$ are linearly independent (since $k[{\mathfrak p}]^K=k[{\mathfrak p}]^{K^*}$). But the restriction map $k[{\mathfrak p}]^K\rightarrow k[e+{\mathfrak v}]$ is an isomorphism. The result follows immediately since $e+{\mathfrak v}$ is isomorphic to affine $r$-space. \end{proof} \begin{lemma}\label{regcodim} The set ${\mathfrak p}\setminus{\cal R}$ of non-regular elements in ${\mathfrak p}$ is of codimension $\geq 2$. \end{lemma} \begin{proof} Let ${\mathfrak a}_{reg}$ denote the set of regular elements in ${\mathfrak a}$. Since ${\mathfrak a}\setminus{\mathfrak a}_{reg}$ is a union of hyperplanes in ${\mathfrak a}$, $U=\pi_{\mathfrak a}({\mathfrak a}\setminus{\mathfrak a}_{reg})$ is of pure codimension 1 in ${\mathfrak a}/W_A\cong {\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} K$. Let $V=\pi_{\mathfrak p}^{-1}(U)$, the complement of the set of regular semisimple elements in ${\mathfrak p}$. For any $x\in U$, the irreducible components of $\pi_{\mathfrak p}^{-1}(x)$ are of dimension $\mathop{\rm dim}\nolimits{\mathfrak p}-\mathop{\rm dim}\nolimits{\mathfrak a}$, hence $V$ is a closed set in ${\mathfrak p}$ of codimension greater than or equal to 1. It is easy to see that ${\mathfrak p}\setminus{\cal R}=V\setminus({\cal R}\cap V)$; denote this set by $Y$. But $\pi_{\mathfrak p}(Y)=U$ and each fibre of $\pi_{\mathfrak p}|_Y$ has dimension strictly less than $\mathop{\rm dim}\nolimits{\mathfrak p}-\mathop{\rm dim}\nolimits{\mathfrak a}$. It follows that $Y$ is of codimension $\geq 2$ in ${\mathfrak p}$.
\end{proof} We can now apply Skryabin's theorem on infinitesimal invariants. The action of $K$ on the polynomial ring $k[{\mathfrak p}]$ induces an action of the Lie algebra ${\mathfrak k}$ as homogeneous derivations of $k[{\mathfrak p}]$. We denote by $k[{\mathfrak p}]^{\mathfrak k}=\{ f\in k[{\mathfrak p}]\,|\,(x\cdot f)=0\;\forall x\in{\mathfrak k}\}$. It is easy to see that $k[{\mathfrak p}]^{\mathfrak k}$ contains the global invariants $k[{\mathfrak p}]^K$. Moreover, the ring of $p$-th powers, $k[{\mathfrak p}]^{(p)}=\{ f^p\,|\, f\in k[{\mathfrak p}]\}$ is also contained in $k[{\mathfrak p}]^{\mathfrak k}$. \begin{theorem} (1) (a) $k[{\mathfrak p}]^{\mathfrak k}=k[{\mathfrak p}]^K\cdot k[{\mathfrak p}]^{(p)}$ and $k[{\mathfrak p}]^{\mathfrak k}$ is free of rank $p^r$ over $k[{\mathfrak p}]^{(p)}$. (b) $k[{\mathfrak p}]^{\mathfrak k}$ is a locally complete intersection. (c) If $\pi_{{\mathfrak p},{\mathfrak k}}:{\mathfrak p}\longrightarrow {\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/}{\mathfrak k}=\mathop{\rm Spec}\nolimits(k[{\mathfrak p}]^{\mathfrak k})$ is the canonical morphism then $\pi_{{\mathfrak p},{\mathfrak k}}({\cal R})$ is the set of all smooth rational points of ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/}{\mathfrak k}$. (2) Let $K_i$ denote the $i$-th Frobenius kernel of $K$ and let $k[{\mathfrak p}]^{(p^i)}$ denote the ring of all $p^i$-th powers of elements of $k[{\mathfrak p}]$. (a) $k[{\mathfrak p}]^{K_i}=k[{\mathfrak p}]^K\cdot k[{\mathfrak p}]^{(p^i)}$ and $k[{\mathfrak p}]^{K_i}$ is free of rank $p^{ir}$ over $k[{\mathfrak p}]^{(p^i)}$. (b) $k[{\mathfrak p}]^{K_i}$ is a locally complete intersection. (c) Let $\pi_{{\mathfrak p},K_i}:{\mathfrak p}\longrightarrow{\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/} {K_i}$ denote the quotient morphism. Then $\pi_{{\mathfrak p},K_i}({\cal R})$ is the set of all smooth rational points of ${\mathfrak p}\ensuremath{/ \hspace{-1.2mm}/}{K_i}$. \end{theorem} \begin{proof} This follows immediately from Cor. 
\ref{regcond}, Lemma \ref{regcodim} and \cite[Thms.\ 5.4, 5.5]{skry}. \end{proof}
\section{Introduction} \label{sec:intro} The modern era of exact statistical treatments of the late-time steady states of 2D fluid flows, properly accounting for the infinite number of conserved integrals of the motion, began with the Miller--Robert--Sommeria (MRS) theory of the 2D Euler equation \cite{M1990,RS1991,MWC1992,LB1967}, generalizing earlier approximate treatments going all the way back to the seminal work of Onsager \cite{O1949}, and progressing through the Kraichnan Energy--Enstrophy theory \cite{K1975}, and various formulations of the point vortex problem (see, e.g., \cite{MJ1974,LP1977}). Since then, the theory has been applied to significantly more complex systems, containing multiple interacting fields (in contrast to the Euler equation, which reduces to a single scalar equation for the vorticity), but still possessing an infinite number of conserved integrals \cite{HMRW1985}. These include, for example, magnetohydrodynamic equilibria \cite{JT1997,W2012}, 3D axisymmetric flow \cite{TDB2013}, and the shallow water equations \cite{WP2001,CS2002}, as well as numerous other geophysical applications \cite{BV2012}. The theory of the shallow water system was recently revisited in Ref.\ \cite{RVB2016} (hereinafter referred to as RVB). The work highlighted simplifying approximations made in previous work on this system \cite{WP2001,CS2002}, and aimed to move beyond them in order to generate more quantitative predictions. Previous simplifications mainly involved the problem of dissipation of microscale gravity wave fluctuations. Such effects are certainly present physically, in the form of nonlinear phenomena such as wave breaking or shock wave dissipation, but lie beyond the shallow water approximation (which, in particular, assumes the length scale of horizontal motions to be much larger than the fluid depth). 
In previous work the small scale free surface fluctuations were simply set to zero at a convenient point in the calculation (citing untreated dissipation processes), and mean field variational equations describing the remaining large scale eddy motion were then derived \cite{WP2001}. In RVB, the shallow water system, though idealized, is taken at face value, and an attempt is made to treat the wave fluctuations in a more consistent manner, but also within a mean field approximation. The result is a very interesting equilibrium state that includes both steady large scale eddy motions and finite microscale wave fluctuations. The key underlying physics here, also motivating earlier studies, is that the two nonlinearly interacting fields, surface height and eddy vorticity, when viewed in isolation, have very different turbulent dynamics. Two-dimensional eddy systems governed by Navier--Stokes turbulence tend to self-organize into long-lived, large-scale coherent structures such as cyclones (exemplified by Jupiter's Great Red Spot) and jets, a consequence of the famous 2D inverse energy cascade \cite{K1967,KM1980}. However, weak turbulence theory \cite{ZFL1998} predicts that interacting acoustic waves, similar to 3D Navier--Stokes turbulence, possess a forward cascade of energy, transporting it from larger to smaller scales where it is ultimately acted upon by viscosity or other microscopic dissipation mechanisms. When both motions are present, the question arises as to what the final disposition of the energy is. The RVB results propose a quantitative answer, predicting the equilibrium distribution of energy (and other quantities of interest) between the large-scale eddy and microscale wave motions, depending of course on all of the conserved integral values set, for example, by a flow initial condition. 
The purpose of the present paper is to revisit deeper simplifying mathematical assumptions made in RVB that strongly impact the derived statistical equilibrium state. Two key features are highlighted. First, the variational mean field results are at odds with other recent results for systems with multiple interacting fields which are only partially constrained by conservation laws (in contrast, e.g., to the Euler equation, in which the vorticity field completely specifies the dynamics, while at the same time its fluctuations are strongly limited by the conservation laws). For example, for magnetohydrodynamic equilibria, the unconstrained degrees of freedom possess finite microscale fluctuations that lead to a non-mean field thermodynamic description of the large scale flow \cite{W2012}. An analogous result is derived here: the surface height fluctuations are not controlled by the vorticity conservation laws, and lead to a strongly fluctuating equilibrium thermodynamics. Physically, the microscale surface height fluctuations lead to a fluctuating effective Coulomb-like interaction between vortices that does not self-average even on large length scales. A mean field description emerges only in an approximation where this effect is ignored. Second, the formalism of statistical mechanics relies on identification of the correct phase space measure used to compute the thermodynamic free energy and perform statistical averages. This measure is determined by a Liouville theorem that characterizes the geometry of phase space flows. In particular, when expressed in terms of the correct combination of fields, these flows are incompressible, and this constrains the phase space measure to be a function only of the conserved integrals of the motion (expressed in terms of these particular field combinations). An issue addressed in this paper is that the correct Liouville theorem is indeed derived in RVB, but is not actually implemented correctly to define the phase space measure. 
The authors recognize this, but propose various physical arguments why their chosen implementation, which simplifies the mathematics (in particular, it makes the fields statistically independent), also makes more physical sense. If the motivation of the study was to follow the full consequences of the shallow water equations, prior to speculating on the effects of neglected physics, there appears to be a basic inconsistency here. In the following, the full statistical theory is derived using the correct equilibrium phase space measure. The resulting theory leads to much more complex behavior, and indeed has some unusual physical consequences---for example, the microscale fluctuations lead to an equilibrium-averaged flow that does \emph{not} satisfy the time-independent shallow water equations. Of course, which theory more closely reflects physical reality remains an interesting question, but the point of view taken here is that one should at least start by adhering as rigorously as possible to the mathematically consistent predictions of the model. Only following this should one attempt to insert physical considerations at various points to see what their effect might be. For example, a key consequence of the shallow water model is that the surface height fluctuations cascade to arbitrarily small wavelengths while at the same time maintaining a finite amplitude, thereby generating a kind of finite-thickness surface ``foam''. It is the dynamics of this foam that leads to both the strongly fluctuating equilibrium and to the violation of the time-independent equations, and was suppressed at the outset in previous work \cite{WP2001,CS2002}. These are consistent predictions of the model, but are obviously inconsistent with any physical final state, which must emerge by inserting a dissipation step to obtain a ``true'' equilibrium. How to best accomplish this lies beyond the scope of this paper, and would be an interesting topic for future work. 
\subsection{Outline} \label{sec:outline} The aim of this paper is to formulate general statistical models of shallow water equilibrium states, and then explore some of their key, high-level features. More detailed, physically motivated, investigations of model predictions are left for future work. The outline of the remainder of this paper is as follows. In Sec.\ \ref{sec:bkgnd} the shallow water equations are summarized, and the infinite number of conserved potential vorticity integrals are identified. In Sec.\ \ref{sec:canonfields} all quantities of interest are expressed in terms of the basic vorticity (velocity curl), compressional (velocity divergence), and fluid height fields. The free slip boundary conditions play a key role here, especially in multiply connected domains where a set of circulation integrals about each connected component of the boundary is separately conserved. The latter lead to an additional set of ``potential flow'' contributions to the energy, and also to the expressions for the linear or angular momentum (in the case of translation or rotation invariant domains, respectively, where they are conserved). These have not been previously considered in the context of the shallow water system. In Sec.\ \ref{sec:statmech} the equilibrium statistical mechanics formalism is introduced, with the conservation laws handled by introducing conjugate ``chemical potentials'' within the grand canonical approach. Application of a Kac--Hubbard--Stratonovich transformation allows one to exactly integrate out the fluid fields, and reduce the problem to that of a single effective field whose equilibrium average determines the large scale flow. The resulting statistical model is equivalent to that of a fluctuating, scalar nonlinear elastic membrane problem \cite{W2012}. The model also has a dual description in terms of the vortex degrees of freedom interacting through a fluctuating Coulomb-like interaction. 
In Sec.\ \ref{sec:furtherprops}, we consider simplifying limits in which fluctuations are neglected. An approximate saddle point variational approach (analogous to, but quantitatively different from, that derived by RVB) is then used to illustrate further properties of the model. Equivalent forms of this theory are derived from both the elastic membrane and Coulomb models. The latter is closer in spirit to the RVB microcanonical approach. In Sec.\ \ref{sec:eulercomp} an interesting order-of-limits paradox (infinite gravity $g$ vs.\ perfect rigid lid Euler equation boundary condition) is examined. The two limits produce very different forms of the Liouville theorem, and the paradox is resolved in terms of the finite contribution of microscale gravity waves to the free energy due to the simultaneous divergence of the wave speed $c \approx \sqrt{g h}$. The paper is concluded in Sec.\ \ref{sec:conclude}. Two Appendices \ref{app:liouville} and \ref{app:liouvilleinequiv} prove a very general form of the Liouville theorem and review its relation to the statistical phase space integration measure. Some formal energy and momentum calculational details are relegated to App.\ \ref{app:KEPi}. 
\begin{figure} \includegraphics[width=3.0in,viewport=100 210 540 420,clip]{ShallowWaterCartoon.png} \caption{Shallow water geometry and fields.} \label{fig:swcartoon} \end{figure} \section{Background} \label{sec:bkgnd} The (2D) shallow water equations take the form \cite{foot:2Dcompress} \begin{eqnarray} \partial_t {\bf v} + ({\bf v} \cdot \nabla) {\bf v} + f {\bf \hat z} \times {\bf v} &=& -g \nabla \eta \nonumber \\ \partial_t h + \nabla \cdot (h {\bf v}) &=& 0 \label{2.1} \end{eqnarray} where ${\bf v}$ is the (horizontal) velocity field, $h({\bf r})$ is the fluid layer thickness, $f({\bf r})$ is the Coriolis parameter, $h_b({\bf r})$ is the bottom height, and \begin{equation} \eta({\bf r}) = h({\bf r}) + h_b({\bf r}) - H_0 \label{2.2} \end{equation} is the surface height deviation from its average value \begin{equation} H_0 = \int_D \frac{d{\bf r}}{A_D} h({\bf r}) \label{2.3} \end{equation} (see Fig.\ \ref{fig:swcartoon}). Here $A_D$ is the area of the domain $D$, and we normalize the average bottom height to vanish, \begin{equation} \int_D \frac{d{\bf r}}{A_D} h_b({\bf r}) = 0. \label{2.4} \end{equation} The second equation in (\ref{2.1}) expresses conservation of 3D fluid density through the mass current, or momentum (areal) density, \begin{equation} {\bf j} = \rho_0 h {\bf v}. \label{2.5} \end{equation} The (fixed, uniform) fluid 3D mass density $\rho_0$ is included here for convenience in order to maintain a consistent set of physical units ($\rho_0$ drops out of the equations of motion). One may simply set $\rho_0 = 1$ if one wishes. 
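As a minimal numerical aside (a Python sketch; the grid and the $h$, $h_b$ profiles are purely illustrative, not part of the formal development), the normalizations (\ref{2.3}) and (\ref{2.4}) force the surface deviation (\ref{2.2}) to average to zero over the domain:

```python
import numpy as np

# Illustrative fields on a uniform grid over a rectangular domain D.
nx, ny = 64, 48
x, y = np.meshgrid(np.linspace(0, 2*np.pi, nx, endpoint=False),
                   np.linspace(0, 1, ny), indexing='ij')

h_b = 0.1*np.cos(x)*np.sin(np.pi*y)   # bottom height (hypothetical profile)
h_b -= h_b.mean()                     # enforce normalization (2.4)
h = 1.0 + 0.05*np.sin(2*x)            # fluid layer thickness (hypothetical)

H0 = h.mean()                         # mean depth, Eq. (2.3)
eta = h + h_b - H0                    # surface height deviation, Eq. (2.2)
print(eta.mean())                     # vanishes by construction
```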
\subsection{Conservation laws} \label{sec:conslaws} \subsubsection{Potential vorticity} \label{subsec:potvort} The potential vorticity, \begin{equation} \Omega = \frac{\omega + f}{h},\ \ \omega = \nabla \times {\bf v} \label{2.6} \end{equation} which includes the combined effect of Earth and fluid rotation, is advectively conserved: \begin{equation} \frac{D\Omega}{Dt} \equiv \partial_t \Omega + ({\bf v} \cdot \nabla) \Omega = 0. \label{2.7} \end{equation} It follows that, for any function $w(\Omega)$, $hw(\Omega)$ is a conserved density, \begin{equation} \partial_t [h w(\Omega)] + \nabla \cdot [h w(\Omega) {\bf v}] = 0, \label{2.8} \end{equation} and hence that any integral of the form \begin{equation} I_w = \int_D d{\bf r} h({\bf r}) w[\Omega({\bf r})] \label{2.9} \end{equation} is conserved, $\partial_t I_w = 0$. All such conservation laws may be conveniently summarized by the function \begin{equation} g(\sigma) = \int_D d{\bf r} h({\bf r}) \delta[\sigma - \Omega({\bf r})], \label{2.10} \end{equation} which is then conserved for each value of $-\infty < \sigma < \infty$. One may recover any $I_w$ from $g(\sigma)$ in the form \begin{equation} I_w = \int d\sigma g(\sigma) w(\sigma). \label{2.11} \end{equation} An important consequence of (\ref{2.9}) is that, choosing $w(\Omega) = \Omega$, one obtains \begin{equation} I_1 = \int_D d{\bf r} h\Omega = \int_D d{\bf r} (\omega+f) = \int_{\partial D} {\bf v} \cdot d{\bf l} + \int_D d{\bf r} f. \label{2.12} \end{equation} It follows that the total circulation is conserved. In a multiply connected domain, it can be shown that the individual circulations \begin{equation} \Gamma_l = \int_{\partial D_l} {\bf v} \cdot d{\bf l} \label{2.13} \end{equation} about any connected component $\partial D_l$, $l = 1,2,\ldots,N_D$, of the boundary are conserved as well. These generate an additional $N_D-1$ independent conserved integrals that are not expressible in terms of $g(\sigma)$. 
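The relation (\ref{2.11}) between $g(\sigma)$ and the conserved integrals $I_w$ can be sketched numerically: binning the potential vorticity with weight $h\,d{\bf r}$ gives a discrete $g(\sigma)$, from which any $I_w$ is recovered up to binning error (a Python sketch with illustrative fields on a unit-area domain):

```python
import numpy as np

# Illustrative h and Omega fields on a unit-area periodic grid.
n = 128
xs = np.linspace(0, 1, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
dA = 1.0/n**2                                   # cell area
h = 1.0 + 0.2*np.cos(2*np.pi*X)
Omega = np.sin(2*np.pi*X)*np.cos(2*np.pi*Y)     # potential vorticity

# g(sigma) as an h-weighted histogram of Omega, cf. Eq. (2.10)
edges = np.linspace(Omega.min(), Omega.max(), 401)
g, _ = np.histogram(Omega, bins=edges, weights=h*dA)
sigma = 0.5*(edges[:-1] + edges[1:])            # bin centers

w = lambda s: s**2                              # any smooth w(.)
I_w_direct = np.sum(h*w(Omega))*dA              # Eq. (2.9)
I_w_binned = np.sum(g*w(sigma))                 # Eq. (2.11), discretized
print(I_w_direct, I_w_binned)                   # agree up to binning error
```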
We adopt the sign convention here that $\partial D_1$ is the outermost boundary, so the circulation integral direction on all other $\partial D_l$, $l \geq 2$, is opposite. In particular, the total circulation appearing in (\ref{2.12}) is given by \begin{equation} \Gamma = \int_{\partial D} {\bf v} \cdot d{\bf l} = \Gamma_1 - \sum_{l=2}^{N_D} \Gamma_l. \label{2.14} \end{equation} \begin{figure} \includegraphics[width=2.8in,viewport=190 190 530 370,clip]{PeriodicStrip.png} \includegraphics[width=2.5in,viewport=190 110 500 440,clip]{Annulus.PNG} \caption{\textbf{Top:} Translation invariant domain relevant to linear momentum conservation. Periodic boundary conditions are applied along $x$. \textbf{Bottom:} Rotation invariant domain relevant to angular momentum conservation.} \label{fig:symdomains} \end{figure} \subsubsection{Energy and momentum} \label{subsec:energy} The conserved energy is a sum of kinetic and potential contributions: \begin{equation} E = \frac{\rho_0}{2} \int_D d{\bf r} \left[h({\bf r}) |{\bf v}({\bf r})|^2 + g \eta({\bf r})^2 \right]. \label{2.15} \end{equation} The canonical linear momentum is given by \begin{equation} {\bf P} = \rho_0 \int_D d{\bf r} \, h[{\bf v} + {\bf A}], \label{2.16} \end{equation} where the vector potential is defined by $f = \nabla \times {\bf A}$ \cite{foot:vecpot}. If the system is translation invariant along a direction which we call ${\bf \hat x}$ (including the case of periodic boundary conditions along this direction, illustrated in the upper panel of Fig.\ \ref{fig:symdomains}), the momentum component $P_x = {\bf P} \cdot {\bf \hat x} $ is conserved. More explicitly, if $f = f(y)$ and $h_b = h_b(y)$ depend only on the orthogonal coordinate $y$, one may choose ${\bf A} = -F(y) {\bf \hat x}$ where $\partial_y F = f$, and one obtains the conserved integral \begin{equation} P_x = \rho_0 \int_D d{\bf r} h({\bf r}) [v_x({\bf r}) - F(y)]. 
\label{2.17} \end{equation} Using (\ref{2.1}), and judicious application of integration by parts and the boundary conditions, it is straightforward to verify directly that $\partial_t P_x = 0$. The translation symmetry corresponds to the following Galilean transformation of the fields themselves: \begin{eqnarray} {\bf \bar v}({\bf r},t) &=& {\bf v}({\bf r} - {\bf \hat x} v_0 t,t) + v_0 {\bf \hat x} \nonumber \\ \bar h({\bf r},t) &=& h({\bf r} - {\bf \hat x} v_0 t,t) \nonumber \\ \bar \omega({\bf r},t) &=& \omega({\bf r} - {\bf \hat x} v_0 t,t) \nonumber \\ \bar h_b(y) &=& h_b(y) - \frac{v_0}{g} F(y). \label{2.18} \end{eqnarray} Thus, the same flow pattern boosted by an arbitrary velocity $v_0$ is a solution to (\ref{2.1}) if one imposes an additional bottom tilt proportional to $F(y)$. In the magnetic analogy \cite{foot:vecpot}, the latter corresponds to a ``Hall voltage'' that compensates for the change in Coriolis force induced by the change in the mean flow. Similarly, in the presence of a rotational symmetry (circular or annular domain, illustrated in the lower panel of Fig.\ \ref{fig:symdomains}), the canonical angular momentum \begin{equation} L = \rho_0 \int_D d{\bf r} \, h {\bf r} \times ({\bf v} + {\bf A}) \label{2.19} \end{equation} is conserved. Here, the 2D vector cross product produces the scalar quantity ${\bf r} \times {\bf j} = x j_y - y j_x$. In this case $f = f(r)$ and $h_b = h_b(r)$ depend only on the radial coordinate, and one may choose azimuthal ${\bf A} = \hat {\bm \theta} F(r)$, with $f = r^{-1} \partial_r(rF)$ to obtain the explicit form \begin{equation} L = \rho_0 \int_D d{\bf r} h({\bf r}) [{\bf r} \times {\bf v}({\bf r}) + rF(r)]. \label{2.20} \end{equation} It is again straightforward to verify directly that $\partial_t L = 0$. 
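Returning to the linear momentum (\ref{2.17}), the construction of $F(y)$ from $f(y)$ and the evaluation of $P_x$ can be sketched numerically (a Python sketch; the Coriolis profile is illustrative, and uniform $h = H_0$, $v_x = v_0$ are assumed only to keep the example short):

```python
import numpy as np

# Periodic strip: f and h_b depend only on y; choose F with dF/dy = f.
# F is fixed only up to a constant, which shifts P_x by a multiple of
# the conserved total mass.
Ly, ny = 1.0, 401
y = np.linspace(0.0, Ly, ny)
f = 1.0 + 0.5*np.sin(2*np.pi*y/Ly)              # illustrative f(y)

def trap(g, y):                                 # trapezoidal quadrature
    return np.sum(0.5*(g[1:] + g[:-1])*np.diff(y))

# cumulative trapezoid: F(y) = int_0^y f(y') dy'
F = np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(y))))

# check dF/dy = f away from the endpoints
err = np.max(np.abs(np.gradient(F, y)[1:-1] - f[1:-1]))

# P_x of Eq. (2.17) for uniform h = H0, v_x = v0, strip length Lx
rho0, H0, Lx, v0 = 1.0, 1.0, 1.0, 0.3
Px = rho0*H0*Lx*trap(v0 - F, y)
print(err, Px)
```

For this profile the exact result is $P_x = \rho_0 H_0 L_x (v_0 L_y - L_y^2/2 - L_y^2/4\pi)$, which the quadrature reproduces to discretization accuracy.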
The field symmetry corresponding to (\ref{2.20}) is the rotational Galilean transformation \begin{eqnarray} {\bf \bar v}({\bf r},t) &=& {\bf \hat R}_{\omega_0 t} {\bf v}({\bf \hat R}_{-\omega_0 t}{\bf r},t) + \omega_0 r \hat {\bm \theta} \nonumber \\ \bar h({\bf r},t) &=& h({\bf \hat R}_{-\omega_0 t}{\bf r},t) \nonumber \\ \bar \omega({\bf r},t) &=& \omega({\bf \hat R}_{-\omega_0 t}{\bf r},t) + 2\omega_0 \nonumber \\ \bar h_b(r) &=& h_b(r) + \frac{\omega_0}{g} rF(r) - \frac{\omega_0^2}{2g} r^2 \nonumber \\ \bar f(r) &=& f(r) - 2\omega_0, \label{2.21} \end{eqnarray} where $\hat {\bm \theta} = {\bf \hat z} \times {\bf \hat r}$ is the azimuthal unit vector, and ${\bf \hat R}_\alpha {\bf r} = r [\cos(\alpha){\bf \hat r} + \sin(\alpha) \hat {\bm \theta}]$ applies the 2D rotation by angle $\alpha$. In this case, the transformation preserves the identical flow pattern, but it now undergoes a net rotation at arbitrary angular rate $\omega_0$, and is maintained by both a bottom tilt correction [this time including also a centrifugal potential $\propto (\omega_0 r)^2$] and a change in the Coriolis parameter itself. \section{Expressions in terms of canonical fields $\Omega,Q,h$} \label{sec:canonfields} Given the fundamental role of the conservation laws in the statistical mechanical treatment, it is useful to express quantities in terms of $\Omega$ and the compressional part of the velocity field \begin{equation} Q = \frac{q}{h},\ \ q \equiv \nabla \cdot {\bf v}. \label{3.1} \end{equation} To this end, one decomposes ${\bf v}$ into rotational and compressional components: \begin{equation} {\bf v} = \nabla \times \psi - \nabla \phi, \label{3.2} \end{equation} where the 2D curl of a scalar is defined by $\nabla \times \psi = (\partial_y \psi, -\partial_x \psi)$. 
Both terms are chosen transverse to any free-slip boundary: ${\bf \hat l} \cdot \nabla \psi = 0$, ${\bf \hat n} \cdot \nabla \phi = 0$, where ${\bf \hat l}$ and ${\bf \hat n}$ are the boundary tangent and normal unit vectors, respectively. Both obey any periodic boundary condition that might be present as well. Substituting the form (\ref{3.2}) into (\ref{2.6}) and (\ref{3.1}), one obtains \begin{equation} \left[\begin{array}{c} \omega \\ q \end{array} \right] = \left[\begin{array}{c} h\Omega - f \\ hQ \end{array} \right] = -\nabla^2 \left[\begin{array}{c} \psi \\ \phi \end{array} \right]. \label{3.3} \end{equation} \begin{figure} \includegraphics[width=2.5in,viewport=200 160 515 400,clip]{SimplyConnected.PNG} \bigskip \includegraphics[width=2.5in,viewport=200 180 520 410,clip]{MultiplyConnected.PNG} \caption{\textbf{Top:} Simply connected domain. \textbf{Bottom:} Multiply connected domain. Independent circulation integrals $\Gamma_l$ and stream function values $\psi^0_l$ are associated with each internal boundary $\partial D_l$, and are related via (\ref{3.13}) through the potential flow circulations (\ref{3.11}).} \label{fig:simmultdomains} \end{figure} \subsection{Potential and non-potential flow decomposition} \label{sec:vortpot} The free-slip condition on $\psi$ implies that it is constant on each connected component of the boundary. For a simply connected domain (top panel of Fig.\ \ref{fig:simmultdomains}), one may specify the Dirichlet condition $\psi|_{\partial D} \equiv 0$. However, for a multiply connected domain (bottom panel of Fig.\ \ref{fig:simmultdomains}), one has the interesting consequence that the boundary value differences of $\psi$ can fluctuate. Multiply connected domains are emphasized here because they play a key role in the presence of conserved momenta. To account for these conservation laws in the statistical mechanics treatment, one must separate out the corresponding \emph{potential flow} contributions. 
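Before decomposing further, the relations (\ref{3.3}) admit a quick spectral check on a doubly periodic grid (a Python sketch that sidesteps the free-slip boundary conditions above; all fields are illustrative):

```python
import numpy as np

# Doubly periodic sketch of Eq. (3.3): for v = curl(psi) - grad(phi),
# omega = -Lap(psi) and q = -Lap(phi). All derivatives are spectral.
n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(n, d=2*np.pi/n)      # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing='ij')

def ddx(f): return np.real(np.fft.ifft2(1j*KX*np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j*KY*np.fft.fft2(f)))

psi = np.sin(3*X)*np.cos(2*Y)                   # illustrative stream function
phi = np.cos(X)*np.sin(4*Y)                     # illustrative potential
vx = ddy(psi) - ddx(phi)                        # 2D curl minus gradient
vy = -ddx(psi) - ddy(phi)

omega = ddx(vy) - ddy(vx)                       # vorticity
q = ddx(vx) + ddy(vy)                           # divergence
# analytic check: -Lap(psi) = 13 psi, -Lap(phi) = 17 phi for these modes
print(np.max(np.abs(omega - 13*psi)), np.max(np.abs(q - 17*phi)))
```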
To account for this dynamical degree of freedom we write $\psi$ in the form of a superposition: \begin{equation} \psi({\bf r}) = \psi^V({\bf r}) + \psi^P({\bf r}), \label{3.4} \end{equation} in which the vortical component $\psi^V$ \emph{vanishes} on every free slip boundary component, and contains all contributions to $\omega$, \begin{equation} -\nabla^2 \psi^V = \omega \label{3.5} \end{equation} while the ``potential flow'' field $\psi^P$ matches the boundary values of $\psi$, \begin{equation} \psi^P|_{\partial D} = \psi|_{\partial D}, \label{3.6} \end{equation} and produces zero vorticity and compression: \begin{equation} -\nabla^2 \psi^P = 0 \ \ \Leftrightarrow \ \ \nabla \times {\bf v}^P = 0 = \nabla \cdot {\bf v}^P, \label{3.7} \end{equation} where we define \begin{equation} {\bf v}^P = \nabla \times \psi^P,\ \ {\bf v}^V = \nabla \times \psi^V,\ \ {\bf v}^C = -\nabla \phi. \label{3.8} \end{equation} The orthogonality conditions \begin{equation} \int_D d{\bf r} {\bf v}^I \cdot {\bf v}^J = 0, \label{3.9} \end{equation} for $I \neq J = P,V,C$ follow by integration by parts and use of the boundary conditions. Since both $\psi^V$ and $\phi$ satisfy homogeneous boundary conditions, one obtains the inverse relations \begin{eqnarray} \psi^V({\bf r}) &=& \int_D d{\bf r}' G_D({\bf r},{\bf r}') \omega({\bf r}') \nonumber \\ \phi({\bf r}) &=& \int_D d{\bf r}' G_N({\bf r},{\bf r}') q({\bf r}'), \label{3.10} \end{eqnarray} in which $G_D$ and $G_N$ are, respectively, the Dirichlet and Neumann Green functions of the Laplacian for the domain $D$. The aim in what follows is to show that both $\psi^V$ and $\psi^P$ are fully determined by $\omega$ and the conserved circulations (\ref{2.13}). 
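The first inversion in (\ref{3.10}) can be sketched with a finite-difference Dirichlet solve on the unit square, standing in for the Green function $G_D$ (a Python sketch; grid size and test fields are illustrative, and a dense solve is used only for brevity):

```python
import numpy as np

# Dirichlet problem -Lap psi_V = omega on the unit square (psi_V = 0 on
# the boundary), discretized with the standard 5-point Laplacian.
n = 31                        # interior points per direction
h = 1.0/(n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing='ij')

# choose omega so the exact solution is known: psi = sin(pi x) sin(pi y)
psi_exact = np.sin(np.pi*X)*np.sin(np.pi*Y)
omega = 2*np.pi**2 * psi_exact

# assemble -Lap as a dense matrix via Kronecker products
I = np.eye(n)
T = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A = np.kron(T, I) + np.kron(I, T)

psi_V = np.linalg.solve(A, omega.ravel()).reshape(n, n)
print(np.max(np.abs(psi_V - psi_exact)))        # O(h^2) discretization error
```

In practice a sparse or fast-Poisson solver would replace the dense solve; the point here is only that $\psi^V$ is recovered from $\omega$ alone once the homogeneous boundary condition is imposed.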
To this end, we further decompose \begin{equation} \psi^P({\bf r}) = \psi^0_1 + \sum_{l=2}^{N_D} (\psi^0_l - \psi^0_1) \psi^P_l({\bf r}), \label{3.11} \end{equation} in which $\psi^0_l = \psi|_{\partial D_l}$ is the value of $\psi$ on connected boundary component $\partial D_l$, $l=1,2,\ldots,N_D$, and the ``potential flow eigenfunctions'' are independent solutions to the Laplace equation on $D$ obeying \begin{equation} \psi^P_l({\bf r})|_{\partial D_m} = \delta_{lm},\ l=2,3,\ldots,N_D, \label{3.12} \end{equation} i.e., the boundary value is nonzero only on the matching boundary component. We define as well the symmetric, positive definite array of inner products \begin{equation} \Gamma^P_{lm} = \int_D d{\bf r} {\bf v}_l^P \cdot {\bf v}_m^P = \int_{\partial D_l} {\bf v}_m^P \cdot d{\bf l} = \int_{\partial D_m} {\bf v}_l^P \cdot d{\bf l}, \label{3.13} \end{equation} in which the boundary integrals follow by substituting ${\bf v}^P_l = \nabla \times \psi^P_l$, integrating by parts, and using (\ref{3.9}). The potential eigenfunctions may also be used to decompose the circulation integrals (\ref{2.13}) into potential and vortex contributions (the contribution from the compressional component $\phi$ trivially vanishes). 
Through integration by parts, and recalling the sign convention (\ref{2.14}), it is easy to check that \begin{eqnarray} \int_D d{\bf r} \psi^P_l({\bf r}) \omega({\bf r}) &=& \int_{\partial D} \psi^P_l {\bf v} \cdot d{\bf l} + \int_D d{\bf r} {\bf v} \cdot {\bf v}^P_l \nonumber \\ &=& -\Gamma_l + \sum_{m=2}^{N_D} \Gamma^P_{lm} (\psi^0_m - \psi^0_1).\ \ \ \ \ \ \label{3.14} \end{eqnarray} It follows that the conserved circulation integrals (\ref{2.13}) may be decomposed in the form \begin{eqnarray} \Gamma_l &=& \Gamma^V_l + \Gamma^P_l \nonumber \\ \Gamma^V_l &=& \int_{\partial D_l} {\bf v}^V \cdot d{\bf l} = -\int_D d{\bf r} \psi^P_l({\bf r}) \omega({\bf r}) \nonumber \\ \Gamma^P_l &=& \int_{\partial D_l} {\bf v}^P \cdot d{\bf l} = \sum_{m=2}^{N_D} \Gamma^P_{lm} (\psi^0_m - \psi^0_1). \label{3.15} \end{eqnarray} This leads to the interpretation of $\psi^P_l({\bf r})$ as the circulation about boundary component $l$ due to a unit point vortex at ${\bf r}$. One obtains, in particular, \begin{equation} \psi^0_l - \psi^0_1 = \sum_{m=2}^{N_D} [\Gamma^P]^{-1}_{lm} (\Gamma_m - \Gamma^V_m), \label{3.16} \end{equation} demonstrating, as required, that the inhomogeneous boundary values, though fluctuating with the flow, are in fact fully specified by the vorticity field and the conserved integrals. \subsubsection{Periodic strip geometry} \label{subsec:periodicstrip} Relevant to systems with linear momentum conservation (\ref{2.17}), the two connected boundary components are the lower and upper boundaries, $y_1 < y_2$, of the periodic strip of length $L_x$ (top panel of Fig.\ \ref{fig:symdomains}). There is a single potential flow eigenfunction, representing uniform flow along the channel: \begin{equation} \psi^P_2({\bf r}) = \frac{y-y_1}{L_y},\ \ {\bf v}^P_2 = \frac{1}{L_y} {\bf \hat x}, \label{3.17} \end{equation} where $L_y = y_2 - y_1$. The circulation integral follows in the form \begin{equation} \Gamma^P \equiv \Gamma^P_{22} = \frac{L_x}{L_y}. 
\label{3.18} \end{equation} Well known analytic series forms for the Green functions $G_N,G_D$ entering (\ref{3.10}) may be derived using the method of images. \subsubsection{Annular geometry} \label{subsec:annulus} Relevant to systems with angular momentum conservation, for an annular geometry, with inner and outer radii $0 \leq R_2 < R_1$ (lower panel of Fig.\ \ref{fig:symdomains}), the single eigenfunction corresponds to the azimuthal flow \begin{equation} \psi^P_2({\bf r}) = \frac{\ln(r/R_1)}{\ln(R_2/R_1)},\ \ {\bf v}^P_2 = \frac{1}{\ln(R_2/R_1) r} \hat {\bm \theta}. \label{3.19} \end{equation} The circulation integral takes the form \begin{equation} \Gamma^P = \frac{2\pi}{\ln(R_1/R_2)}. \label{3.20} \end{equation} Once again, well known analytic series forms for the Green functions in (\ref{3.10}) may be derived in polar coordinates. \subsection{Kinetic energy} \label{sec:ke} The substitution of the decomposition ${\bf v} = {\bf v}^V + {\bf v}^C + {\bf v}^P$, along with the representations (\ref{3.8}), (\ref{3.10}) and (\ref{3.11}), allows one to express the kinetic part of the energy (\ref{2.15}), \begin{equation} E_K = \frac{\rho_0}{2} \int_D d{\bf r} h({\bf r}) |{\bf v}({\bf r})|^2, \label{3.21} \end{equation} as a nonlocal quadratic functional of $\omega,q,\psi_l^0$, including also $h$. Note that in the periodic strip or the annulus, there is only a single term in the sum (\ref{3.11}), $l = m = N_D = 2$. Only $\Gamma^P_{22}$ enters (\ref{3.15}), given by the explicit forms (\ref{3.18}) or (\ref{3.20}), respectively. Substituting $\omega = h\Omega - f$ and $q = hQ$, as well as (\ref{3.16}), provides the explicit representation in terms of the basic fields $\Omega,Q,h$. The result is quite messy, including nonvanishing cross-terms, despite (\ref{3.9}), due to the presence of $h$. This expression is not actually needed in the analysis below, but for completeness is written out in App.\ \ref{app:KEPi}. 
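As a consistency check (a Python sketch with illustrative dimensions), direct quadrature of the positive-definite inner product (\ref{3.13}) reproduces $\Gamma^P = L_x/L_y$ for the strip and $2\pi/\ln(R_1/R_2)$ for the annulus:

```python
import numpy as np

# Gamma^P_22 = int_D |v^P_2|^2, Eq. (3.13), for strip and annulus.
Lx, Ly = 3.0, 1.5
# strip: |v^P_2| = 1/Ly is constant, so the integral is exact:
gamma_strip = Lx*Ly*(1.0/Ly)**2                  # = Lx/Ly
print(gamma_strip, Lx/Ly)

R1, R2 = 2.0, 0.5
r = np.linspace(R2, R1, 20001)
# annulus: |v^P_2| = 1/(|ln(R2/R1)| r); integrate 2*pi*r*|v^P_2|^2 dr
integrand = 2*np.pi*r * (1.0/(np.log(R2/R1)*r))**2
gamma_ann = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(r))
print(gamma_ann, 2*np.pi/np.log(R1/R2))          # agree to quadrature error
```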
\subsection{Conserved momenta} \label{sec:consmomenta} The kinetic parts of the linear and angular momenta, (\ref{2.17}) and (\ref{2.20}), may similarly be decomposed into vortical, compressional, and potential components. It is useful to write these in the form \begin{eqnarray} \Pi &=& \Pi_K + \Pi_h \nonumber \\ \Pi_K &=& \rho_0 \int_D d{\bf r} h({\bf r}) {\bf v}_\Pi({\bf r}) \cdot {\bf v}({\bf r}) \nonumber \\ \Pi_h &=& \rho_0 \int_D d{\bf r} h({\bf r}) F_\Pi({\bf r}) \label{3.22} \end{eqnarray} where \begin{eqnarray} {\bf v}_\Pi({\bf r}) &=& \left\{\begin{array}{ll} \hat {\bf x}, & \Pi = P_x \\ r \hat {\bm \theta}, & \Pi = L \end{array} \right. \nonumber \\ F_\Pi({\bf r}) &=& \left\{\begin{array}{ll} -F(y), & \Pi = P_x \\ r F(r), & \Pi = L. \end{array} \right. \label{3.23} \end{eqnarray} Substituting the decomposition (\ref{3.8}) of ${\bf v}$, $\Pi_K$ may be written out as a linear functional of $\omega,q,\psi^0_l$, depending also nonlocally on $h$. These expressions, given in App.\ \ref{app:KEPi}, will again not actually be needed below. \subsection{Example: flat-bottom Euler equation} \label{sec:eulereg} The Euler equation on a flat bottom is obtained by setting $h = H_0$, $\eta = h_b = 0$, $\nabla \cdot {\bf v} = 0$, hence ${\bf v} = \nabla \times \psi$, and $\phi = 0$ (no compressional component). The potential flow eigenfunction expansion (\ref{3.11}) remains exactly as before. The vortex contribution to the stream function is still given by first line of (\ref{3.10}), and the vortex contribution to the kinetic energy $E_K = E_K^V + E_K^P$ follows in the familiar Coulomb-like form \begin{equation} E_K^V = \frac{1}{2} \rho_0 H_0 \int_D d{\bf r} \int_D d{\bf r}' \omega({\bf r}) G_D({\bf r},{\bf r}') \omega({\bf r}'). 
\label{3.24} \end{equation} Since $\Gamma_{lm}[h] = H_0 \Gamma_{lm}^P$, the potential flow contribution is given by \begin{equation} E_K^P = \frac{1}{2} \rho_0 H_0 \sum_{l,m = 2}^{N_D} \Gamma_{lm}^P (\psi^0_l - \psi^0_1)(\psi^0_m - \psi^0_1) \label{3.25} \end{equation} The cross term vanishes by orthogonality (\ref{3.9}). With linear momentum conservation on a periodic strip, one obtains from (\ref{2.16}) the form \begin{equation} P_x = \rho_0 (v_0 - v_f) V_D, \label{3.26} \end{equation} where $V_D = H_0 A_D$ is the system volume, \begin{equation} v_0 = \frac{\psi^0_2 - \psi^0_1}{L_y} \label{3.27} \end{equation} is the (conserved) mean flow speed along the periodic dimension, $L_y = y_2-y_1$ is the strip width, and \begin{equation} v_f = \frac{1}{L_y} \int_{y_1}^{y_2} F(y) dy \label{3.28} \end{equation} is a speed defined by the Coriolis effect. The momentum resides entirely in the potential component of the flow in this case, and the boundary values $\psi_{1,2}^0$ are both conserved. In particular, the value of $P_x$ fully specifies the boundary conditions and the energy in the potential flow. It fully specifies the potential contribution to the kinetic energy as well: \begin{equation} E_K^P = \frac{1}{2} \rho_0 H_0 \Gamma^P (\psi^0_2 - \psi^0_1)^2 = \frac{1}{2} \rho_0 V_D v_0^2. \label{3.29} \end{equation} The circulation integral $\Gamma_2 = \Gamma_2^V + \Gamma^P (\psi^0_2 - \psi^0_1)$ follows directly from (\ref{3.15}). Inserting (\ref{3.17}), one sees that the vorticity contribution \begin{equation} \Gamma^V_2 = - \frac{1}{L_y} \int_D d{\bf r} (y-y_1) \omega({\bf r}) \label{3.30} \end{equation} is separately conserved, and also equivalent to momentum conservation. For the annular geometry, one may express \begin{eqnarray} \int_D d{\bf r} {\bf r} \times {\bf v} &=& -\frac{1}{2} \int_D d{\bf r} {\bf v}({\bf r}) \cdot \nabla \times (r^2-R_1^2) \label{3.31} \\ &=& \frac{1}{2} (R_1^2 - R_2^2) \Gamma_2 - \frac{1}{2} \int_D d{\bf r} (r^2 - R_1^2) \omega. 
\nonumber \end{eqnarray} The angular momentum may therefore be written in the form \begin{eqnarray} L &=& \rho_0 H_0 \left[L_2 + \frac{1}{2}(R_1^2 - R_2^2) \Gamma_2 + F_2 \right] \nonumber \\ L_2 &=& \frac{1}{2} \int_D d{\bf r} (R_1^2 - r^2) \omega({\bf r}) \nonumber \\ F_2 &=& 2\pi \int_{R_2}^{R_1} r^2 F(r) dr, \label{3.32} \end{eqnarray} which expresses it entirely in terms of the vorticity field and the conserved boundary circulations. Conservation of $L$ therefore produces the new conserved vorticity second moment $L_2$, analogous to the first moment (\ref{3.30}). The potential flow is equivalent to a point vortex at the origin, and one obtains \begin{equation} E_K^P = \frac{\pi \rho_0 H_0 (\psi^0_2 - \psi^0_1)^2}{\ln(R_1/R_2)}. \label{3.33} \end{equation} Using (\ref{3.15}), (\ref{3.19}) and (\ref{3.20}), the vorticity contribution to the circulation integral is \begin{equation} \Gamma_2^V = \int_D d{\bf r} \frac{\ln(r/R_1)}{\ln(R_1/R_2)} \omega({\bf r}). \label{3.34} \end{equation} Unlike in the linear momentum case, $\Gamma_2^V$, along with the boundary value $\psi_2^0 - \psi_1^0$, is not conserved, hence fluctuates with the flow. The reason for the difference is that in the linear momentum case ${\bf v}_{P_x}({\bf r}) = \hat {\bf x} = L_y {\bf v}^P_2({\bf r})$ happens to coincide with the potential eigenfunction, whereas ${\bf v}_L({\bf r}) = r \hat {\bm \theta}$ is distinct from ${\bf v}^P_2 \propto \hat {\bm \theta}/r$. In particular, the former has nonzero vorticity $\omega_L = 2$. \section{Fluid system statistical mechanics} \label{sec:statmech} We seek a description of the equilibrium flows of the shallow water system, with conserved integrals defined by the energy (\ref{2.15}), advection constraints (Casimirs) (\ref{2.10}), the circulation integrals (\ref{2.13}), and momentum (\ref{2.17}) or (\ref{2.20}), if present.
The equilibrium phase space measure $d\nu(\Gamma) = \rho(\Gamma) d\Gamma$, and the Liouville theorem from which it follows, are described in detail in App.\ \ref{app:liouville}. We work in the grand canonical ensemble with phase space probability density \begin{equation} \rho = \frac{1}{Z} e^{-\beta {\cal K}},\ Z \equiv \int d\Gamma e^{-\beta {\cal K}} \label{4.1} \end{equation} and generalized Hamiltonian \begin{equation} {\cal K}[h,{\bf v}] = E - \alpha \Pi - \sum_{l=2}^{N_D} \gamma_l \Gamma_l - \int_D d{\bf r} h({\bf r}) \mu[\Omega({\bf r})]. \label{4.2} \end{equation} The function $\mu(\sigma)$ is the Lagrange multiplier function conjugate to $g(\sigma)$, and $\Pi$ denotes the conserved momentum ($P_x$ or $L$), if present---see (\ref{3.22}). The objective is to use this form to compute the free energy density \begin{equation} {\cal F}[\beta,\alpha,{\bm \gamma},\mu] = -\frac{1}{\beta A_D} \ln(Z), \label{4.3} \end{equation} which characterizes the equilibrium state. The phase space integral (\ref{4.1}) is a formal infinite-dimensional functional integral over all possible fluid field configurations, weighted by the density $\rho(\Gamma)$. In order to perform computations, a finite-dimensional approximation is first constructed by discretizing the domain $D$ using a finite mesh (for simplicity, here taken as a uniform square mesh), replacing ${\bf r} \to {\bf r}_i$ by a discrete index $i$. To make physical sense, the continuum limit, taken at the end, must produce a finite, well defined form for ${\cal F}$, and this requirement will enforce nontrivial scaling of some parameters, especially the temperature $T = 1/\beta$. Given the prominent role played by the potential vorticity, we use the statistical measure (\ref{A25}), defined in terms of unrestricted integrals over each grid value of $(\Omega,Q,h)$, as well as the Dirichlet boundary values $\psi_l^0$. 
The partition function takes the form \begin{eqnarray} Z &=& \prod_{l=2}^{N_D} \int d\psi^0_l \frac{\rho_0}{P_0} \prod_i \int h_i^4 dh_i \frac{\rho_0^2 \Delta x^2}{H_0 P_0^2} \nonumber \\ &&\times\ \int dQ_i d\Omega_i e^{-\beta {\cal K}[\Omega,Q,h,{\bm \psi}^0]} \label{4.4} \end{eqnarray} where $\Delta x \to 0$ is the mesh size, and, as discussed in App.\ \ref{app:liouville}, the constant factors $\rho_0/P_0$ and $\rho_0^2/H_0 P_0^2$ are introduced for convenience to make the partition function dimensionless ($P_0$ has dimensions of momentum or mass current density ${\bf j}$). \subsection{Form of generalized Hamiltonian} \label{sec:formgenham} In a symmetric domain, the $\alpha \Pi$ term is present, and some manipulations are required to put the combination $E - \alpha \Pi$ into a convenient form. By completing the square in various terms one obtains \begin{eqnarray} F &\equiv& E - \alpha \Pi = F^v + F^h + F^0 \nonumber \\ F^v &=& \frac{1}{2} \int d{\bf r} h \left|{\bf v} - \alpha {\bf v}_\Pi \right|^2 \nonumber \\ F^h &=& \frac{1}{2} \rho_0 g \int_D d{\bf r} \bar \eta^2 \nonumber \\ F^0 &=& -\rho_0 g \int_D d{\bf r} \left[(h_b-H_0) \delta h_b + \frac{1}{2} \delta h_b^2 \right] \label{4.5} \end{eqnarray} in which we define \begin{eqnarray} \bar \eta &=& h + \bar h_b - H_0 \nonumber \\ \bar h_b &=& h_b + \delta h_b \nonumber \\ \delta h_b &=& -\frac{\alpha}{g} \left(F_\Pi + \frac{1}{2} \alpha |{\bf v}_\Pi|^2 \right). \label{4.6} \end{eqnarray} The term $F^0(\alpha)$ is constant, but does depend on the Lagrange multiplier $\alpha$. Note that only the full velocity ${\bf v}$ appears: the decompositions (\ref{3.2}) and (\ref{3.8}) will be exploited at a later step. One may write the vorticity combination \begin{eqnarray} \omega - \alpha \omega_\Pi &=& hQ - \bar f \nonumber \\ \bar f({\bf r}) &=& f({\bf r}) + \alpha \omega_\Pi({\bf r}) \nonumber \\ &=& \left\{\begin{array}{ll} f(y), & \Pi = P_x \\ f(r) + 2 \alpha, & \Pi = L. \end{array} \right. 
\label{4.7} \end{eqnarray} The factor of $\omega_L = 2$ is obtained from (\ref{3.23}). The transformations (\ref{4.6}) and (\ref{4.7}) correspond precisely to the symmetry transformations (\ref{2.18}) and (\ref{2.21}) with $v_0 = -\alpha$ and $\omega_0 = -\alpha$, respectively. In this way, the $\alpha \Pi$ term effectively identifies the frame of reference in which the translation or rotation velocity vanishes. \subsection{KHS transformation} \label{sec:khs} In order to simplify the calculation, we perform a Kac--Hubbard--Stratonovich (KHS) transformation by introducing an auxiliary Laplace transform 2D current density field ${\bf J}$. Its equilibrium average will eventually be related to the large scale flow. This field is used to convert the kinetic energy term into a term linear in the velocity via the Gaussian identity \begin{equation} e^{-\frac{1}{2} \beta \rho_0 \Delta x^2 h_i |{\bf V}_i|^2} = \int_C d{\bf J}_i \frac{e^{\beta \Delta x^2 |{\bf J}_i|^2/2 \rho_0 h_i}} {2\pi \rho_0 h_i/\beta \Delta x^2} e^{-\beta \Delta x^2 {\bf J}_i \cdot {\bf V}_i} \label{4.8} \end{equation} applied independently to each site $i$, and used with ${\bf V} = {\bf v} - \alpha {\bf v}_\Pi$. The subscript $C$ denotes a complex integration contour, for each component of ${\bf J}_i$, that runs parallel to the imaginary axis. Here and below, for any 2D vector, we adopt the notation $|{\bf J}|^2 = {\bf J} \cdot {\bf J}$, which does \emph{not} include a complex magnitude. The result (\ref{4.8}) holds for an arbitrary real-axis intersection point, but saddle point and other considerations will determine a convenient choice below. This identity is sensible in the limit $\Delta x \to 0$ only if the combination \begin{equation} \bar \beta = \beta \Delta x^2 \label{4.9} \end{equation} remains finite.
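Stripped of site and component indices, (\ref{4.8}) is the elementary Gaussian (Hubbard--Stratonovich) identity. The sketch below checks it numerically for a single component, parametrizing the contour as $J = iu/\bar\beta$ (up to contour orientation) and collecting the constants into $a \equiv \bar\beta \rho_0 h_i$; the numerical values are illustrative:

```python
import numpy as np

# One component of Eq. (4.8) at a single site: with J = i u / bar_beta along the
# imaginary contour and a = bar_beta * rho_0 * h_i, the identity reduces to
#   exp(-a V^2 / 2) = (2 pi a)^{-1/2} * Int du exp(-u^2 / 2a) exp(-i u V).
a, V = 2.0, 1.3                                  # illustrative values
u = np.linspace(-40.0, 40.0, 200001)
du = u[1] - u[0]
I = (np.sum(np.exp(-u**2 / (2.0 * a)) * np.exp(-1j * u * V)) * du
     / np.sqrt(2.0 * np.pi * a))
print(I, np.exp(-0.5 * a * V**2))                # real parts agree; Im(I) ~ 0
```

The Gaussian normalization converges only for $a > 0$, mirroring the positive-temperature restriction $\bar\beta > 0$ noted below.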
Thus, the fluid hydrodynamic temperature $T = 1/\beta = \Delta x^2 \bar T$ (in contrast to the physical thermodynamic temperature) must \emph{vanish} in the continuum limit in order to obtain nontrivial macroscopic flows. The physical motivation for this scaling, which recognizes that large scale hydrodynamic flows cannot be in equilibrium with microscopic thermal fluctuations, has been discussed extensively in the literature; see, e.g., Ref.\ \cite{MWC1992}. The Gaussian integral also converges only if $\beta > 0$: as observed in \cite{RVB2016}, the inclusion of height fluctuations precludes the negative temperature states observed for the Euler equation \cite{M1990,RS1991,MWC1992}. Inserting this identity for each $i$, one obtains \begin{eqnarray} Z &=& \prod_{l=2}^{N_D} \int d\psi^0_l \frac{\rho_0}{P_0} \prod_i \int h_i^3 dh_i \frac{\bar \beta \rho_0 \Delta x^2}{2\pi H_0 P_0^2} \nonumber \\ &&\times\ \int_C d{\bf J}_i \int dQ_i d\Omega_i e^{-\beta \tilde {\cal F}[{\bf J},h,{\bf v}]}, \label{4.10} \end{eqnarray} in which the free energy functional takes the continuum form \begin{eqnarray} \tilde {\cal F}[{\bf J},h,{\bf v}] &=& \int_D d{\bf r} \left\{{\bf J}({\bf r}) \cdot [{\bf v}({\bf r}) - \alpha {\bf v}_\Pi({\bf r})] - \frac{|{\bf J}({\bf r})|^2}{2 \rho_0 h({\bf r})} \right\} \nonumber \\ &&+\ F^h + F^0 - \sum_{l=2}^{N_D} \gamma_l \Gamma_l - \int_D d{\bf r} h({\bf r}) \mu[\Omega({\bf r})] \nonumber \\ \label{4.11} \end{eqnarray} We now reexpress the ${\bf J} \cdot {\bf V}$ term in terms of the canonical fields. Given that there is a component of ${\bf J}$ associated with each nonzero component of ${\bf v}$, it makes sense to enforce the free-slip boundary condition ${\bf J} \cdot \hat {\bf n} = 0$ on ${\bf J}$ as well. It follows that one may apply the same decomposition (\ref{3.8}) to obtain \begin{equation} {\bf J} = \nabla \times \Psi - \nabla \Phi,\ \ \Psi = \Psi^V + \Psi^P.
\label{4.12} \end{equation} Substituting (\ref{3.8}), (\ref{3.9}), and (\ref{3.10}), one obtains \begin{eqnarray} &&\int_D d{\bf r} {\bf J} \cdot ({\bf v} - \alpha {\bf v}_\Pi) = \int_D d{\bf r} [(h\Omega-\bar f) \Psi^V + h Q \Phi] \nonumber \\ &&+\ \sum_{l=2}^{N_D} (\Psi_l^0 - \Psi_1^0) \left[\sum_{m=2}^{N_D} \Gamma_{lm}^P (\psi_m^0 - \psi_1^0) - \alpha \Gamma_{\Pi,l} \right] \ \ \ \ \ \ \label{4.13} \end{eqnarray} in which the momentum circulations (defined only for the case $N_D = 2$) are obtained from (\ref{3.23}) in the form \begin{equation} \Gamma_{\Pi,2} = \int_D d{\bf r} {\bf v}_\Pi \cdot {\bf v}^P_2 = \left\{\begin{array}{ll} L_x, & \Pi = P_x \\ \frac{2\pi (R_1^2 - R_2^2)}{\ln(R_2/R_1)}, & \Pi = L \end{array} \right. \label{4.14} \end{equation} and one identifies the explicit forms \begin{eqnarray} \Psi^V({\bf r}) &=& \int d{\bf r}' G_D({\bf r},{\bf r}') \nabla' \times {\bf J}({\bf r}') \nonumber \\ \Phi({\bf r}) &=& \int d{\bf r}' G_N({\bf r},{\bf r}') \nabla' \cdot {\bf J}({\bf r}'). \label{4.15} \end{eqnarray} The potential flow component is similarly decomposed in the form \begin{equation} \Psi^P({\bf r}) = \Psi^0_1 + \sum_{l=2}^{N_D} (\Psi^0_l - \Psi^0_1) \psi^P_l({\bf r}). 
\label{4.16} \end{equation} With these substitutions, and using the circulation representation (\ref{3.15}), the free energy functional takes the explicit form \begin{widetext} \begin{eqnarray} \tilde {\cal F}[{\bf J},h,{\bf v}] &=& \int_D d{\bf r} \left\{\frac{1}{2} \rho_0 g \bar \eta({\bf r})^2 - \bar f({\bf r}) \left[\Psi^V({\bf r}) + \Psi^{\bm \gamma}({\bf r}) \right] - \frac{|{\bf J}({\bf r})|^2}{2 \rho_0 h({\bf r})} \right\} \nonumber \\ &&+\ \int_D d{\bf r} h({\bf r}) \left\{ \left[\Psi^V({\bf r}) + \Psi^{\bm \gamma}({\bf r}) \right] \Omega({\bf r}) + \Phi({\bf r}) Q({\bf r}) - \mu[\Omega({\bf r})]\right\} \nonumber \\ &&+\ \sum_{l,m=2}^{N_D} \Gamma_{lm}^P (\Psi_l^0 - \Psi_1^0 - \gamma_l) (\psi_m^0 - \psi_1^0) - \alpha \sum_{l=2}^{N_D} \Gamma_{\Pi,l} (\Psi_l^0 - \Psi_1^0 - \gamma_l) + \bar F^0(\alpha,{\bm \gamma}), \label{4.17} \end{eqnarray} \end{widetext} where we define \begin{eqnarray} \bar F^0(\alpha,{\bm \gamma}) &=& F^0(\alpha) - \alpha \sum_{l=2}^{N_D} \gamma_l \Gamma^\Pi_l \nonumber \\ \Gamma^\Pi_2 &\equiv& \int_{\partial D_2} {\bf v}_\Pi \cdot d{\bf l} = \Gamma_{\Pi,2} - \int_D d{\bf r} \omega_\Pi \psi^P_2 \nonumber \\ &=& \left\{\begin{array}{ll} L_x, & \Pi = P_x \\ 2\pi R_2^2, & \Pi = L \end{array} \right. \nonumber \\ \Psi^{\bm \gamma}({\bf r}) &=& \sum_{l=2}^{N_D} \gamma_l \psi^P_l({\bf r}). \label{4.18} \end{eqnarray} The form (\ref{4.17}) achieves the goal of being entirely local in $h,\Omega,Q$: for given ${\bf J}$, the statistical factor $e^{-\beta \tilde {\cal F}}$ can be expressed as an independent product over sites $i$, allowing the integration over $h_i,Q_i,\Omega_i,\psi^0_m$ to be carried out explicitly. To proceed, we note first that the only dependence on $Q$ is in the second line of (\ref{4.17}). Choosing $\Phi_i$ to be pure imaginary, the $Q_i$ may be integrated out to produce a factor \begin{equation} \prod_i 2\pi \delta(i \bar \beta h_i \Phi_i) = \prod_i \frac{2\pi}{\bar \beta h_i} \delta(i\Phi_i).
\label{4.19} \end{equation} Directly analogous to the change of variable from ${\bf v}$ in (\ref{A23}) to $(\Omega,Q,{\bm \psi}^0)$ in (\ref{A24}), one may change variables ${\bf J} \to (\Psi,\Phi,{\bm \Psi}^0)$, with constant Jacobian $\Delta x^{-2N_E}$: \begin{equation} \prod_i \int_C d{\bf J}_i = \prod_{l=2}^{N_D} \int_C d\Psi^0_l \prod_i \int_C \frac{d\Phi_i d\Psi^V_i}{\Delta x^2}. \label{4.20} \end{equation} In each case, $C$ is again a contour parallel to the imaginary axis. The result of the $\Phi$ integral is therefore to simply set \begin{equation} \Phi_i \equiv 0 \ \ \forall i \label{4.21} \end{equation} in $\tilde {\cal F}$ \cite{foot:Qint}. The factor $\prod_i (\bar \beta h_i)^{-1}$ produced by (\ref{4.19}) encompasses the contribution to the free energy from the fluctuations in $Q$ that have now been fully integrated out. Similarly, ${\bm \psi}^0$ appears only in the last term in (\ref{4.17}). Choosing the ${\bm \Psi}^0$ contours so that $\Psi^0_l - \Psi^0_1 - \gamma_l$ are all pure imaginary, the ${\bm \psi}^0$ integrals produce the factor \begin{equation} \frac{1}{\det(\Gamma^P)} \prod_{l=2}^{N_D} \delta[i(\Psi^0_l - \Psi^0_1 - \gamma_l)]. \label{4.22} \end{equation} The result of the ${\bm \Psi}^0$ integrals is therefore to simply replace \begin{equation} \Psi^0_l - \Psi^0_1 = \gamma_l,\ l=2,3,\ldots,N_D. \label{4.23} \end{equation} Using (\ref{4.12}), (\ref{4.16}) and (\ref{4.21}), one identifies \begin{eqnarray} \Psi({\bf r}) &=& \Psi^V({\bf r}) + \Psi^{\bm \gamma}({\bf r}) \nonumber \\ |{\bf J}({\bf r})|^2 &=& |\nabla \times \Psi({\bf r})|^2 = |\nabla \Psi({\bf r})|^2 \label{4.24} \end{eqnarray} The first line implies that the circulation Lagrange multipliers simply enforce the boundary conditions $\Psi|_{\partial D_l} = \gamma_l$. We reiterate, here and below, that $|\nabla \Psi|^2 = \nabla \Psi \cdot \nabla \Psi$ does \emph{not} include a complex magnitude. 
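The second line of (\ref{4.24}) rests on the fact that the two-dimensional curl of a scalar is a pointwise $90^\circ$ rotation of its gradient (for either sign convention): the rotated field is automatically divergence-free and has the same squared magnitude. A minimal finite-difference illustration, on an arbitrary smooth test field of our choosing:

```python
import numpy as np

# 2D curl of a scalar = gradient rotated by 90 degrees, so
# |nabla x Psi|^2 = |grad Psi|^2 pointwise, and div(nabla x Psi) = 0.
x = np.linspace(0.0, 2.0 * np.pi, 128)
X, Y = np.meshgrid(x, x, indexing="ij")
Psi = np.sin(X) * np.cos(2.0 * Y) + 0.3 * np.cos(3.0 * X + Y)  # test field

dPx, dPy = np.gradient(Psi, x, x)           # (d/dx, d/dy) Psi
cx, cy = dPy, -dPx                          # rotated gradient (one convention)
assert np.allclose(cx**2 + cy**2, dPx**2 + dPy**2)   # Eq. (4.24), exact

div = np.gradient(cx, x, axis=0) + np.gradient(cy, x, axis=1)
print(np.max(np.abs(div)))                  # zero to roundoff: stencils commute
```

The vanishing divergence holds exactly at the discrete level because the difference operators along the two axes commute.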
The end result of eliminating $Q,\Phi,{\bm \Psi}^0,{\bm \psi}^0$ is the partially reduced free energy functional \begin{eqnarray} \hat {\cal F}[\Psi,h,\Omega] &=& \bar F^0 + \int_D d{\bf r} \bigg\{\frac{1}{2} \rho_0 g \bar \eta({\bf r})^2 - \frac{|\nabla \Psi({\bf r})|^2}{2 \rho_0 h({\bf r})} \nonumber \\ &&+\ [\omega({\bf r}) - \alpha \omega_\Pi({\bf r})] \Psi({\bf r}) - h({\bf r}) \mu[\Omega({\bf r})] \bigg \}. \nonumber \\ \label{4.25} \end{eqnarray} \subsection{Final effective models} \label{sec:modelfinal} There are two ways to proceed in order to further reduce (\ref{4.25}), each providing a rather different (but obviously equivalent) view of the underlying physics. The first is to integrate out $h,\Omega$ to obtain an effective theory in terms of the stream function $\Psi$ alone. This yields an effective nonlinear elastic membrane interpretation. The second is to integrate out $\Psi$ to obtain a dual effective theory in terms of $\Omega,h$. This yields the generalized Coulomb system interpretation. The latter, which is now completely independent of the KHS field ${\bf J}$, could also have been obtained by integrating out $Q,{\bm \psi}$ from ${\cal K}$ in (\ref{4.4}). However, the intermediate KHS route actually provides the more transparent derivation. We derive both models in sequence. \subsubsection{Nonlinear elastic membrane model} \label{subsec:nonlinmembrane} In order to handle the $h_i,\Omega_i$ integrals, we define a function $W$ of three scalar arguments by \begin{eqnarray} e^{\bar \beta W(\tau,h_0,\xi)} &=& \frac{\rho_0}{P_0} \int_0^\infty \lambda^2 d\lambda \int d\sigma e^{\bar \beta \lambda[\mu(\sigma) - \sigma \tau]} \nonumber \\ &&\ \ \ \ \ \ \times\ e^{-\frac{1}{2} \bar \beta [\xi/\rho_0 \lambda + \rho_0 g(\lambda + h_0)^2]}.\ \ \ \ \ \label{4.26} \end{eqnarray} The factor $\lambda^2$ originates from the factor $\bar \beta h_i^3$ in (\ref{4.10}), divided by the factor $\bar \beta h_i$ in (\ref{4.19}).
The remaining factor $\rho_0/P_0$ makes the result dimensionless. We observe here again that (1) this function makes sense only if $\bar \beta$ (not $\beta$) is finite, and (2) that the $\lambda$-integral converges only if $\bar \beta, \xi > 0$. Combining (\ref{4.20}), (\ref{4.23}), and (\ref{4.24}), the partition function may be put in the form \begin{equation} Z = \prod_i \int_C \frac{d\Psi^V_i}{P_0 H_0} e^{-\beta {\cal F}[{\bm \Psi}]} \equiv \int D[\Psi^V] e^{-\beta {\cal F}[{\bm \Psi}]} \label{4.27} \end{equation} with fully reduced (continuum limit) free energy functional \begin{widetext} \begin{eqnarray} {\cal F}[\Psi] = \bar F^0(\alpha,{\bm \gamma}) - \int_D d{\bf r} \left\{\bar f({\bf r})\Psi({\bf r}) + W\left[\Psi({\bf r}), \bar h_b({\bf r}) - H_0,\ -|\nabla \Psi({\bf r})|^2 \right] \right\}. \label{4.28} \end{eqnarray} \end{widetext} The physical interpretation of this model is that of an inhomogeneous, nonlinear elastic fluctuating membrane (see Ref.\ \cite{W2012} for a similar analogy in the context of the theory of magnetohydrodynamic equilibria). In the limit $\beta = \bar \beta/\Delta x^2 \to \infty$, the dependence on $|\nabla \Psi({\bf r})|^2$ ensures that $\Psi$ is continuous, with $\delta \Psi = \Psi - \Psi^\mathrm{eq} = O(\Delta x/\sqrt{\bar \beta})$ differing only microscopically from its (smooth) equilibrium average $\Psi^\mathrm{eq}({\bf r}) = \langle \Psi({\bf r}) \rangle$ \cite{foot:C}. However, it follows that $|\nabla \Psi({\bf r})|^2 = O(1/\bar \beta)$ is a finite random variable, varying on the microscale $\Delta x$. For non-gradient terms inside ${\cal F}$, one is therefore free to replace $\Psi \to \Psi^\mathrm{eq}$ (whose form must eventually be determined self-consistently), and the first two arguments of $W$ may then be viewed as smooth, deterministic functions of ${\bf r}$. However, the third argument remains a fluctuating field, contributing nontrivially to the functional integral. 
If $W(\xi)$ were a slowly varying function of its third argument, on the scale $\bar T = 1/\bar \beta$, then $W(\xi) \simeq W(\xi_0) + \partial_\xi W(\xi_0) (\xi - \xi_0)$, where $\xi_0 = -|\nabla \Psi^\mathrm{eq}|^2$, and the membrane becomes linear (though still inhomogeneous), with effective local surface tension defined by $\partial_\xi W(\xi_0)$. However, with increasing $\bar T$ the linear approximation fails, the surface tension depends on $\xi$ itself, and the model becomes intrinsically nonlinear. One may understand this effect from the point of view of the original shallow water system. With increasing $\bar T$ the microscopic height fluctuations, correlated with the current density fluctuations $\nabla \times \Psi$, increase to the point where the height field excursions become comparable to $H_0$, and one exits the regime of linear surface waves. In this sense, the behavior here is significantly more complex than that found in the magnetohydrodynamic problem, where terms equivalent to $|\nabla \Psi|^2$ always enter the free energy functional linearly \cite{W2012}. Since $|\nabla \Psi|^2$ varies by $O(1)$ on the lattice scale $\Delta x$, one might hope that it possesses only short-range correlations. If this were true, one could independently integrate it out at each point ${\bf r}$ according to its single-site statistics, as we did for the fields $\Omega,Q,h,\Phi$ in obtaining ${\cal F}[\Psi]$ from $\tilde {\cal F}[{\bf J},\Omega,Q,h]$. Unfortunately, precisely the opposite is the case: the curl-free condition on $\nabla \Psi$ makes it highly correlated from site to site. For example, for the simplest, linear, homogeneous model one obtains logarithmic correlations $\langle [\Psi({\bf r}) - \Psi({\bf r}')]^2 \rangle \sim \ln(|{\bf r}-{\bf r}'|/\Delta x)$. Correspondingly, one obtains macroscopic-scale dipole-like correlations of the current $\nabla \times \Psi$ \cite{W2012}.
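The quoted logarithmic growth can be reproduced exactly for a lattice-regularized linear model. The sketch below is our construction, not the text's: a Gaussian field with energy $\frac{\bar\beta\kappa}{2}\sum|\nabla\Psi|^2$ on a periodic $N \times N$ grid (periodic rather than Dirichlet boundaries, for convenience), with $\kappa$ standing in for a constant effective surface tension. The structure function, computed exactly from the lattice propagator, grows by the standard continuum increment $\ln 2/\pi\bar\beta\kappa$ per octave of separation:

```python
import numpy as np

# Gaussian "membrane" with energy (beta_kappa/2) * sum |grad Psi|^2 on a periodic
# N x N lattice (unit spacing).  Exact structure function from the propagator.
N, beta_kappa = 256, 1.0
k = 2.0 * np.pi * np.fft.fftfreq(N)
kx, ky = np.meshgrid(k, k, indexing="ij")
lam = 4.0 - 2.0 * np.cos(kx) - 2.0 * np.cos(ky)   # lattice Laplacian spectrum
lam[0, 0] = np.inf                                 # remove the zero mode

def D2(r):
    """Exact <[Psi(r,0) - Psi(0,0)]^2> on the lattice."""
    return np.sum(2.0 * (1.0 - np.cos(kx * r)) / (beta_kappa * lam)) / N**2

D4, D8, D16 = D2(4), D2(8), D2(16)
inc = np.log(2.0) / (np.pi * beta_kappa)           # continuum increment per octave
print(D8 - D4, D16 - D8, inc)                      # nearly equal: log growth
```

Equal increments per octave are the hallmark of the logarithm; lattice and finite-size corrections enter only at the few-percent level for these separations.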
Thus, ${\cal F}$ generates a highly nontrivial, strongly correlated statistical model, with no simple analytic form for the free energy. In Sec.\ \ref{sec:furtherprops} we will consider limits in which the fluctuations are small, and in which more explicit analytic progress can be made. If, for convenience, one separates \cite{foot:C} \begin{widetext} \begin{eqnarray} {\cal F}[\Psi] &=& {\cal F}[\Psi^\mathrm{eq}] + {\cal F}^\mathrm{fluct}[\Psi^\mathrm{eq},\delta \Psi] \nonumber \\ {\cal F}^\mathrm{fluct}[\Psi^\mathrm{eq},\delta \Psi] &=& -\int_D d{\bf r} \left\{W\left[\Psi^\mathrm{eq},\bar h_b - H_0, -|\nabla (\Psi^\mathrm{eq} + \delta \Psi)|^2 \right] - W\left[\Psi^\mathrm{eq},\bar h_b - H_0, -|\nabla \Psi^\mathrm{eq}|^2 \right] \right\} \label{4.29} \end{eqnarray} into static and fluctuating parts, then the equilibrium free energy takes the form \begin{eqnarray} F[\Psi^\mathrm{eq}] &=& {\cal F}[\Psi^\mathrm{eq}] + F^\mathrm{fluct}[\Psi^\mathrm{eq}] \label{4.30}\\ F^\mathrm{fluct}[\Psi^\mathrm{eq}] &=& -\frac{1}{\beta} \ln\left\{\int D[\delta \Psi] e^{-\beta {\cal F}^\mathrm{fluct} [\Psi^\mathrm{eq},\delta \Psi]} \right\}, \nonumber \end{eqnarray} which explicitly exposes the ``mean field'' and fluctuating components. The self-consistent equation for the large-scale equilibrium flow follows by minimizing $F$: \begin{equation} \frac{\delta F}{\delta \Psi^\mathrm{eq}({\bf r})} = 0.
\label{4.31} \end{equation} The functional derivative (\ref{4.31}) may be conveniently evaluated by first defining an intermediate average over the fields $\Omega,h$ using the functional $W$: \begin{eqnarray} n_{\Omega,h}({\bf r},\sigma,\lambda) &\equiv& \langle \delta[\Omega({\bf r}) - \sigma] \delta[h({\bf r}) - \lambda] \rangle_W \nonumber \\ &=& \frac{\frac{\rho_0}{P_0} \lambda^2 e^{\bar \beta \lambda \left[\mu(\sigma) - \sigma \Psi({\bf r}) \right]} e^{\frac{1}{2} \bar \beta \left\{|\nabla \Psi({\bf r})|^2/\rho_0\lambda - \rho_0 g[\lambda + \bar h_b({\bf r}) - H_0]^2 \right\}}} {e^{\bar \beta W\left[\Psi({\bf r}), \, \bar h_b({\bf r})-H_0, \, -|\nabla \Psi({\bf r})|^2 \right]}}, \label{4.32} \end{eqnarray} which may be interpreted as the probability density for potential vorticity and fluid height at the point ${\bf r}$, for a given fixed realization of the field $\Psi$. The $e^{\bar \beta W}$ denominator ensures that the distribution is normalized. This interpretation is most easily derived by repeating the sequence of integration steps that produced (\ref{4.21}) and (\ref{4.23}), but with the delta functions inserted into the integration over the $h$ and $\Omega$ fields (in advance of the $\Psi$ integration); the delta functions then produce (\ref{4.32}) in place of the free integration result (\ref{4.26}).
With this definition one obtains \begin{eqnarray} -\partial_\tau W &=& \int_0^\infty \lambda d\lambda \int d\sigma \sigma n_{\Omega,h}({\bf r},\sigma,\lambda) = \langle h({\bf r}) \Omega({\bf r}) \rangle_W = \langle \omega({\bf r}) \rangle_W + f({\bf r}) \nonumber \\ 2\rho_0 \partial_\xi W &=& \int_0^\infty \frac{d\lambda}{\lambda} \int d\sigma n_{\Omega,h}({\bf r},\sigma,\lambda) = \left\langle \frac{1}{h({\bf r})} \right\rangle_W \nonumber \\ -\partial_{h_0} W &=& \rho_0 g \int_0^\infty d\lambda (\lambda + \bar h_b - H_0) \int d\sigma n_{\Omega,h}({\bf r},\sigma,\lambda) = \rho_0 g \langle \bar \eta({\bf r}) \rangle_W, \label{4.33} \end{eqnarray} \end{widetext} and one may express (\ref{4.31}) in the form \begin{eqnarray} \nabla \times {\bf V}^\mathrm{eq} &=& \langle h({\bf r}) \Omega({\bf r}) \rangle - f({\bf r}) = \langle \omega({\bf r}) \rangle \nonumber \\ {\bf V}^\mathrm{eq} &\equiv& \left\langle \frac{\nabla \times \Psi({\bf r})}{\rho_0 h({\bf r})} \right\rangle + \alpha {\bf v}_\Pi({\bf r}), \label{4.34} \end{eqnarray} in which the averages now include $\Psi$: $\langle \cdot \rangle \equiv \langle \langle \cdot \rangle_W \rangle_{\cal F}$. In the presence of momentum conservation, ${\bf V}^\mathrm{eq}$ is the instantaneous mean flow velocity seen in the laboratory frame, while ${\bf J} = \rho_0 \langle h ({\bf v} - \alpha {\bf v}_\Pi) \rangle$ is the current density in the translating or rotating frame of reference (hence, generated by the net vorticity $\Delta \omega^\mathrm{eq} = \langle \omega \rangle - \alpha \omega_\Pi$). In the latter frame, the equilibrium flow is time-independent, obtained from the transformation (\ref{2.18}) or (\ref{2.21}), with $v_0 = -\alpha$ or $\omega_0 = -\alpha$, respectively. The incompressibility condition on ${\bf J}^\mathrm{eq}$ still allows, in general, a nonzero compressible velocity field component $\langle q \rangle = \nabla \cdot {\bf V}^\mathrm{eq}$.
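The moment relations (\ref{4.33}) follow from differentiating (\ref{4.26}) under the integral sign. The first and third lines are easy to confirm numerically; the sketch below uses illustrative values $\bar\beta = \rho_0 = g = P_0 = 1$ and the arbitrary choice $\mu(\sigma) = -\sigma^2/2$ (ours, not fixed by the text), checking finite-difference derivatives of $W$ against the corresponding averages over the single-site measure:

```python
import numpy as np

# Single-site measure of Eq. (4.26) with bar_beta = rho_0 = g = P_0 = 1 and the
# illustrative choice mu(sigma) = -sigma^2/2.
lam = np.linspace(1e-3, 12.0, 1200)        # height variable lambda > 0
sig = np.linspace(-12.0, 12.0, 1201)       # potential vorticity variable sigma
dl, ds = lam[1] - lam[0], sig[1] - sig[0]
L, S = np.meshgrid(lam, sig, indexing="ij")

def weight(tau, h0, xi):
    return L**2 * np.exp(L * (-0.5 * S**2 - S * tau)
                         - 0.5 * (xi / L + (L + h0)**2))

def W(tau, h0, xi):
    return np.log(np.sum(weight(tau, h0, xi)) * dl * ds)

tau, h0, xi = 0.3, -0.2, 0.5               # illustrative arguments, xi > 0
w = weight(tau, h0, xi)
Z = np.sum(w) * dl * ds
avg_hOmega = np.sum(w * L * S) * dl * ds / Z    # <h Omega>_W
avg_eta = np.sum(w * (L + h0)) * dl * ds / Z    # <bar eta>_W

d = 1e-5                                        # central differences
dWdtau = (W(tau + d, h0, xi) - W(tau - d, h0, xi)) / (2.0 * d)
dWdh0 = (W(tau, h0 + d, xi) - W(tau, h0 - d, xi)) / (2.0 * d)
print(-dWdtau, avg_hOmega)                      # first line of (4.33)
print(-dWdh0, avg_eta)                          # third line of (4.33)
```

Because differentiation commutes with the fixed-grid quadrature, the agreement here is limited only by the finite-difference step, not by the quadrature error.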
The equilibrium form $\Psi$ depends on the Lagrange multipliers $\beta,\mu,\alpha,{\bm \gamma}$, which must then be tuned to obtain prescribed values of the conserved integrals. The latter may be derived as equilibrium averages in the form \begin{widetext} \begin{eqnarray} g(\sigma) &=& - \frac{\delta {\cal F}}{\delta \mu(\sigma)} = \langle h({\bf r}) \delta[\Omega({\bf r}) - \sigma] \rangle = \int_D d{\bf r} \int_0^\infty \lambda d\lambda \, \langle n_{\Omega,h}({\bf r},\sigma,\lambda) \rangle_{\cal F} \nonumber \\ \Gamma_l &=& -\frac{\partial {\cal F}}{\partial \gamma_l} = -\frac{\partial \bar F^0}{\partial \gamma_l} + 2 \int_{\partial D_l} \langle \partial_\xi W (\nabla \times \Psi) \rangle \cdot d{\bf l} = \int_{\partial D_l} \left\langle \frac{{\bf J} + \rho_0 h \alpha {\bf v}_\Pi}{\rho_0 h} \right \rangle \cdot d{\bf l} \nonumber \\ &=& \int_{\partial D_l} {\bf V} \cdot d{\bf l} = \sum_{m=2}^{N_D} \Gamma^P_{lm} \langle \psi^0_l - \psi^0_1 \rangle - \int_D d{\bf r} \psi^P_l({\bf r}) \langle \omega({\bf r}) \rangle \nonumber \\ \Pi &=& -\frac{\partial {\cal F}}{\partial \alpha} = -\frac{\partial \bar F^0}{\partial \alpha} + \int_D d{\bf r} \left\{\omega_\Pi \langle \Psi \rangle - \frac{1}{g} \langle \partial_{h_0}W \rangle [F_\Pi + \alpha |{\bf v}_\Pi|^2] \right\} \nonumber \\ &=& \int_D d{\bf r} \left({\bf v}_\Pi \cdot \langle {\bf J} + \rho_0 h \alpha {\bf v}_\Pi \rangle + \rho_0 \langle h \rangle F_\Pi \right) = \rho_0 \int_D d{\bf r} \langle h({\bf r}) \rangle \left[{\bf v}_\Pi({\bf r}) \cdot \tilde {\bf V}({\bf r}) + F_\Pi({\bf r}) \right] \nonumber \\ E &=& \left[\frac{\partial (\bar \beta {\cal F})} {\partial \bar \beta}\right]_{\bar \beta \alpha, \bar \beta \mu, \bar \beta {\bm \gamma}} = \frac{1}{2} \int_D d{\bf r} \left\{ \left\langle \frac{|{\bf J} + \rho_0 h \alpha {\bf v}_\Pi|^2}{\rho_0 h} \right\rangle + \rho_0 g \langle \eta^2 \rangle \right\}, \label{4.35} \end{eqnarray} \end{widetext} which correspond to averages of (\ref{2.10}), (\ref{3.15}), 
(\ref{3.22}), and (\ref{2.15}). In the last expression for $\Pi$, we define a somewhat different measure of the mean velocity field $\tilde {\bf V}$ by \begin{equation} \tilde {\bf V}({\bf r}) = \frac{\langle {\bf J} + \rho_0 h \alpha {\bf v}_\Pi \rangle} {\rho_0 \langle h \rangle}. \label{4.36} \end{equation} Clearly, $\tilde {\bf V}$ and ${\bf V}$ become equivalent if the fluctuations in $h$ are small. In the computation of $\Gamma_l$, only the surface term survives in the first line by virtue of (\ref{4.34}). The last expression for $\Gamma_l$ uses (\ref{3.15}) to alternatively express the average potential flow circulation $\langle \Gamma^P_l \rangle = \int_D d{\bf r} {\bf v}^P_l \cdot {\bf V}$ in terms of an average of the boundary values. \subsubsection{Generalized Coulomb model} \label{subsec:coulomb} Alternatively, one may integrate out $\Psi$ from (\ref{4.25}). For fixed height field $h$ the integral is Gaussian, and one obtains \begin{widetext} \begin{eqnarray} Z &=& \prod_i \int_0^\infty h_i^2 dh_i \frac{\rho_0}{P_0 H_0^{1/2}} \int d\Omega_i \sqrt{\det(G_h)} e^{-\beta \hat {\cal K}[\Omega,h]} \nonumber \\ \hat {\cal K}[\Omega,h] &=& \frac{1}{2} \int_D d{\bf r} \int_D d{\bf r}' [\omega({\bf r}) - \alpha \omega_\Pi({\bf r})] G_h({\bf r},{\bf r}') [\omega({\bf r}') - \alpha \omega_\Pi({\bf r}')] + \int_D d{\bf r} \left\{\frac{1}{2} \rho_0 g \bar \eta({\bf r})^2 - h({\bf r}) \mu[\Omega({\bf r})] \right\} \nonumber \\ &&-\ \sum_{l=2}^{N_D} \gamma_l \int_{\partial D_l} \tilde {\bf v} \cdot d{\bf l} + \frac{A_D}{2 \bar \beta} \ln\left(\frac{2\pi P_0^2}{\rho_0 H_0} \bar \beta \right) + F^0(\alpha), \label{4.37} \end{eqnarray} \end{widetext} in which, as before, $\omega = h\Omega - f$, and the Green function $G_h$ is defined by \begin{equation} -\nabla \cdot \frac{1}{\rho_0 h({\bf r})} \nabla G_h({\bf r},{\bf r}') = \delta({\bf r}-{\bf r}'), \label{4.38} \end{equation} with Dirichlet boundary conditions on $\partial D$.
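Equation (\ref{4.38}) is a standard variable-coefficient elliptic problem. As a hedged illustration (a one-dimensional analogue of our own construction, not the two-dimensional problem of the text), the equation $-\partial_x[(\rho_0 h)^{-1}\partial_x G] = \delta(x-x')$ on $[0,1]$ with $G$ vanishing at both ends has the closed-form solution $G(x,x') = \rho_0 H(x_<)[H(1)-H(x_>)]/H(1)$, where $H(x) = \int_0^x h$. The sketch below confirms this against a conservative finite-difference solve for an arbitrary smooth test profile $h(x)$:

```python
import numpy as np

# 1D analogue of Eq. (4.38): -(d/dx)[(rho0*h)^{-1} dG/dx] = delta(x - xp) on [0,1],
# G(0) = G(1) = 0.  Exact: G = rho0*H(x<)*[H(1) - H(x>)]/H(1), with H' = h.
rho0 = 1.0
N = 800
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
h = lambda s: 1.0 + 0.5 * np.sin(2.0 * np.pi * s)                 # test profile
H = lambda s: s + 0.25 * (1.0 - np.cos(2.0 * np.pi * s)) / np.pi  # its integral

jp = N * 3 // 10                       # source at xp = 0.3 (a grid point)
xp = x[jp]

# conservative finite differences: coefficients (rho0 h)^{-1} at cell midpoints
a = 1.0 / (rho0 * h(x[:-1] + 0.5 * dx))
n = N - 1
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (a[i] + a[i + 1]) / dx**2
    if i > 0:
        A[i, i - 1] = -a[i] / dx**2
    if i < n - 1:
        A[i, i + 1] = -a[i + 1] / dx**2
rhs = np.zeros(n)
rhs[jp - 1] = 1.0 / dx                 # discrete delta at xp
G_num = np.linalg.solve(A, rhs)

xi = x[1:-1]
G_exact = rho0 * H(np.minimum(xi, xp)) * (H(1.0) - H(np.maximum(xi, xp))) / H(1.0)
print(np.max(np.abs(G_num - G_exact)))   # small discretization error
```

The exact form makes explicit how the Green function inherits the $h$-dependence of the conductivity-like coefficient, the point emphasized in the discussion that follows.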
The form (\ref{4.37}) differs significantly from the form of the tensor Green function (\ref{C3}) before $Q$ is integrated out (compare, especially, the $\omega$-$\omega$ block of ${\cal \hat G}_h$). The $\det(G_h) \approx \prod_i h_i$ term comes from the normalization of the Gaussian integral and adjusts the phase space measure defining $\int D[h]$. The $\ln(\bar \beta)$ term generates the equipartition contribution $1/2\bar \beta = \bar T/2$ to the energy density coming from the fluctuating $Q$-field that has been integrated out. The quantity \begin{equation} \tilde \Psi({\bf r}) = \int_D d{\bf r}' G_h({\bf r},{\bf r}') [\omega({\bf r}') - \alpha \omega_\Pi({\bf r}')] \label{4.39} \end{equation} obeys $\nabla \times [(\rho_0 h)^{-1} \nabla \times \tilde \Psi] = \omega - \alpha \omega_\Pi$, and \begin{equation} \tilde {\bf j}({\bf r}) \equiv \rho_0 h({\bf r})[\tilde {\bf v}({\bf r}) - \alpha {\bf v}_\Pi({\bf r})] = \nabla \times \tilde \Psi({\bf r}) \label{4.40} \end{equation} therefore represents the divergence-free component of the current density. This also defines the quantity $\tilde {\bf v}$ appearing in the circulation term in (\ref{4.37}). For constant $h \equiv H_0$, $G_h$ becomes the Dirichlet Coulomb potential, and the corresponding term in the free energy coincides with that for the Euler equation. The smoothness of this potential, together with the constrained fluctuations in $\omega$, produces an energy that is completely dominated by the large scale flow \cite{M1990,RS1991,MWC1992}. The energy may therefore be obtained by substituting $\langle \omega \rangle$ for $\omega$, and this in turn produces an exact variational form for the free energy. On the other hand, the presence of $1/h$ here, with $O(1)$ fluctuations on the scale $\Delta x$, produces a finite microscale fluctuation energy contribution: $G_h$ (as well as $\tilde \Psi$) is continuous, but its gradient fluctuates on scale $\Delta x$, and is highly correlated with $h$.
It follows that one \emph{cannot} simply substitute $\langle \omega \rangle$ for $\omega$ and $\langle G_h \rangle$ for $G_h$. The reasons for this failure are equivalent to the correlated site-to-site fluctuations of $|\nabla \Psi|^2$ found in the membrane formulation (\ref{4.27}). One concludes again that the model free energy does not reduce to a variational mean field form. \section{Simplifying limits and further properties of the models} \label{sec:furtherprops} In this section we consider simplifying limits in which more explicit computations can be carried out, and use these to explore further properties of the models. The critical assumption will be that the fluctuations are small, so that $\Psi \simeq \Psi^\mathrm{eq}$ may be treated as a fixed, nonfluctuating field. \subsection{Variational limit} \label{sec:varlimit} The variational or mean field limit is defined by neglecting $F^\mathrm{fluct}$. In particular one sets $\Psi = \Psi^\mathrm{eq}$ inside the functional $W$ (in both the first and last arguments), and in the distribution (\ref{4.32}) as well. The condition (\ref{4.31}) applied to ${\cal F}[\Psi]$ then leads to the Euler--Lagrange equations \begin{equation} 2 \nabla \cdot (\partial_\xi W \, \nabla \Psi) = \partial_\tau W + \bar f. \label{5.1} \end{equation} It is important here that the variation is with respect to $\Psi^V$, which ensures that there are no boundary terms. Using (\ref{4.33}), equation (\ref{4.34}) reduces to \begin{eqnarray} \nabla \times {\bf V}^\mathrm{eq} &=& \langle \omega({\bf r}) \rangle_W \nonumber \\ {\bf V}^\mathrm{eq} &=& \left \langle \frac{1}{\rho_0 h({\bf r})} \right \rangle_W \nabla \times \Psi^\mathrm{eq} + \alpha {\bf v}_\Pi. \label{5.2} \end{eqnarray} Equation (\ref{5.2}) is the basic result of this section.
Its solution allows one to derive the large scale mean flow encoded in $\Psi^\mathrm{eq}$ in the presence of the microscopic height and compressional fluctuations encoded in $n_{\Omega,h}({\bf r},\sigma,\lambda)$. The result therefore represents a mean field self-consistency condition, in the form of a highly nonlinear PDE, whose solution $\Psi^\mathrm{eq}({\bf r})$ also fully determines $n_{\Omega,h}$. The solution $\Psi^\mathrm{eq}$ again depends on the Lagrange multipliers $\beta,\mu,\alpha,{\bm \gamma}$, which must be tuned to obtain prescribed values of the conserved integrals. The latter are given by (\ref{4.35}), but with all averages now with respect to $W$ at fixed $\Psi = \Psi^\mathrm{eq}$. There is one subtlety here, however. The conserved energy takes the form \begin{eqnarray} E &=& \frac{1}{2} \int_D d{\bf r} \left\{\left\langle \frac{|\nabla \times \Psi^\mathrm{eq} + \rho_0 h \alpha {\bf v}_\Pi|^2}{\rho_0 h} \right\rangle_W \right. \nonumber \\ &&\hskip 0.75in \left. +\ \rho_0 g \langle \eta^2 \rangle_W + \frac{1}{\bar \beta} \right\} \nonumber \\ &=& \frac{1}{2} \int_D d{\bf r} \left\{ \left\langle \frac{1}{\rho_0 h} \right\rangle_W |\nabla \times \Psi^\mathrm{eq}|^2 + 2 \alpha {\bf v}_\Pi \cdot \nabla \times \Psi^\mathrm{eq} \right. \nonumber \\ &&+\ \left. \rho_0 \langle h \rangle_W \alpha^2 |{\bf v}_\Pi|^2 + \rho_0 g \langle \eta^2 \rangle_W + \frac{1}{\bar \beta} \right\}. \label{5.3} \end{eqnarray} The $\bar T = 1/\bar \beta$ constant term in the energy is the equipartition energy due to the quadratic fluctuations of $\Psi$ about the equilibrium value, and is produced by the Gaussian integral about the saddle point in the steepest descent calculation. This term remains finite even when fluctuations are small, and represents precisely the contribution of the compressional degree of freedom $Q$ that gave rise to the $\ln(\bar \beta)$ term in (\ref{4.37}).
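The algebraic step between the two forms of the kinetic term in (\ref{5.3}) relies on the cancellation of $h$ in the cross term, so it holds sample by sample rather than only on average. A numerical spot-check (synthetic numbers of our own choosing):

```python
import numpy as np

# Spot-check of the expansion of the kinetic term in (5.3): averaging
# |curl(Psi) + rho0 h alpha v_Pi|^2 / (rho0 h) over h at fixed curl(Psi), v_Pi.
# All numbers (rho0, alpha, the height law, J, v_Pi) are arbitrary test values.
rng = np.random.default_rng(1)
rho0, alpha = 1.2, 0.7
h = rng.uniform(0.5, 1.5, 10000)           # height samples at a fixed point
J = np.array([0.3, -0.8])                  # curl(Psi^eq), held fixed
vP = np.array([1.1, 0.4])                  # v_Pi, held fixed

lhs = np.mean(np.sum((J + rho0 * alpha * h[:, None] * vP[None, :])**2, axis=1)
              / (rho0 * h))
rhs = (np.mean(1.0 / (rho0 * h)) * np.sum(J**2)
       + 2.0 * alpha * np.dot(vP, J)
       + rho0 * np.mean(h) * alpha**2 * np.sum(vP**2))
```

Because $2\,{\bf J}\cdot(\rho_0 h \alpha {\bf v}_\Pi)/(\rho_0 h) = 2\alpha\,{\bf J}\cdot{\bf v}_\Pi$ independently of $h$, no statistical tolerance is needed: the two sides agree to machine precision.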
\subsection{Variational equations derived from the generalized Coulomb representation} \label{sec:vareqcoulomb} Variational equations equivalent to (\ref{5.1}) and (\ref{5.2}) can also be derived from the generalized Coulomb representation (\ref{4.37}). Since the latter is expressed entirely in terms of the original $h,\Omega$ fields, the derivation is much closer in spirit to the microcanonical approach used by RVB. Central to this approach is the local distribution function $n_{\Omega,h}({\bf r},\sigma,\lambda) = \langle \delta[\Omega({\bf r}) - \sigma] \delta[h({\bf r}) - \lambda] \rangle$ characterizing the local microscopic vorticity and height fluctuations. The grand canonical form is given in (\ref{4.32}), and the corresponding microcanonical form will be rederived here by a different route. One could in principle consider as a starting point a more fundamental three-field correlation function that includes $Q$ (see Sec.\ \ref{sec:Qdistr} below), and attempt to work directly with the original generalized Hamiltonian ${\cal K}$ defined in (\ref{4.2}). However, the divergent fluctuations in $Q \sim 1/\Delta x$ lead to the failure of the key self-averaging property used below, and hence make ${\cal K}$ a less convenient starting point. We work then with the representation (\ref{4.37}) in which $Q$ has already been integrated out. The derivation proceeds by considering, in addition to the microscale $\Delta x$, a mesoscale $\Delta X$, both vanishing in the continuum limit, but with $\Delta X/\Delta x \to \infty$. 
On the scale $\Delta X$, one may define the joint probability density whose limiting form is obtained by counting the number of joint occurrences of the field levels across the $\Delta x$-cells in the given $\Delta X$-cell centered on point ${\bf r}$: \begin{equation} n_{\Omega,h}({\bf r},\sigma,\lambda) = \lim_{\Delta V_g \to 0} \lim_{\Delta x \to 0} \frac{\Delta x^2}{\Delta X^2} \frac{\nu_{ik}}{\Delta V_g}, \label{5.4} \end{equation} where $i$ labels $\Delta X$-cell centers ${\bf r}_i$, $\{\sigma_k, \lambda_k \}_{k=1}^{N_g}$ is a 2D gridding of $(\Omega, h)$-space, with 2D cell volume $\Delta V_g = \Delta \Omega \Delta h$, and $\nu_{ik}$ [normalized so that $\sum_k \nu_{ik} = (\Delta X/\Delta x)^2$] counts the number of $\Delta x$-cells in $\Delta X$-cell $i$ (in the $\Delta X^2$ neighborhood of the point ${\bf r}$) with parameter value $\sigma_k,\lambda_k$ (in the $\Delta V_g$ neighborhood of $\sigma,\lambda$). The form (\ref{5.4}) ensures the normalization \begin{equation} \int d\sigma \int_0^\infty d\lambda \, n_{\Omega,h}({\bf r},\sigma,\lambda) = 1 \label{5.5} \end{equation} and the conserved quantities are expressed in the same form (\ref{4.35}). The phase space integral is now performed using a separation of scales: First one assigns field values for \emph{fixed} $n_{\Omega,h}$, then one integrates over all possible $n_{\Omega,h}$. The former includes all permutations of $\Delta x$-cells within a given $\Delta X$-cell (which clearly leaves $n_{\Omega,h}$ fixed, as well as all Casimirs). 
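In practice the limit (\ref{5.4}) amounts to a per-mesocell joint histogram over the $\Delta x$-cells. A sketch with synthetic microscale fields (the field models, grid sizes, and bin edges are our own illustrative choices):

```python
import numpy as np

# Sketch of the counting definition of n_{Omega,h}: a joint (Omega, h) histogram
# over the Delta-x cells inside each Delta-X mesocell. Synthetic inputs only.
rng = np.random.default_rng(2)
Nx, M = 256, 8                 # fine cells per side; M x M mesocells
m = Nx // M
Omega = rng.normal(size=(Nx, Nx))                       # stand-in microscale PV
h = 1.0 + 0.3 * rng.uniform(-1.0, 1.0, size=(Nx, Nx))   # stand-in height field

s_edges = np.linspace(-5.0, 5.0, 41)
l_edges = np.linspace(0.7, 1.3, 25)
dV = (s_edges[1] - s_edges[0]) * (l_edges[1] - l_edges[0])

n = np.empty((M, M, len(s_edges) - 1, len(l_edges) - 1))
for i in range(M):
    for j in range(M):
        sig = Omega[i * m:(i + 1) * m, j * m:(j + 1) * m].ravel()
        lam = h[i * m:(i + 1) * m, j * m:(j + 1) * m].ravel()
        cnt, _, _ = np.histogram2d(sig, lam, bins=[s_edges, l_edges])
        n[i, j] = cnt / (cnt.sum() * dV)   # empirical n_{Omega,h} in mesocell (i, j)
```

The normalization (\ref{5.5}) then holds cell by cell by construction.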
This sum, via the usual permutation count familiar from the lattice hard core ideal gas, produces an entropic contribution to the partition function of the form \cite{MWC1992} \begin{eqnarray} e^{S[n_{\Omega,h}]/\Delta x^2} &=& e^{\beta \bar T S[n_{\Omega,h}]} \nonumber \\ S[n_{\Omega,h}] &\equiv& -\int_D d{\bf r} \int d\sigma \int_0^\infty d\lambda \, n_{\Omega,h}({\bf r},\sigma,\lambda) \nonumber \\ && \times\ \ln[R_0 n_{\Omega,h}({\bf r},\sigma,\lambda)], \label{5.6} \end{eqnarray} where $R_0 = P_0/\rho_0 H_0^2$ has dimensions $[\sigma \lambda] = [\Omega h]$, and is required to make the argument of the logarithm dimensionless. This information theoretic form for $S$ is equivalent to the Sanov theorem result used by RVB. The key assumption underlying (\ref{5.6}) is that microscale fluctuations are uncorrelated across $\Delta X$-cells: In addition to the Casimirs (which are clearly unchanged, for arbitrary shuffling of $\Omega$ values around the domain $D$), the energy and momentum should also be unchanged. Arbitrarily shuffling $h$ values, even over the entire $D$, obviously does not change the potential energy term. However, the singular fashion in which $h$ enters the Green function $G_h$ defined in (\ref{4.38}) (as well as the tensor Green function $\hat {\cal G}_h$ defined in App.\ \ref{app:KEPi}), in the form of a gradient acting on a field with $O(1)$ variations on the scale $\Delta x$, \emph{does} in fact lead to strong correlations across $\Delta X$-cells, invalidating (\ref{5.6}). Recognizing that the result is at best approximate, we proceed now in a manner equivalent to the variational approach, by neglecting such correlations. We define the microscale averaged Green function $\overline{G_h}$ by \begin{equation} -\nabla \cdot \left\langle \frac{1}{\rho_0 h({\bf r})} \right\rangle_0 \nabla \overline{G_h}({\bf r},{\bf r}') = \delta({\bf r}-{\bf r}'),
\label{5.7} \end{equation} Using this in place of $G_h$ one may express all quantities in terms of $n_{\Omega,h}$: \begin{eqnarray} E[n_{\Omega,h}] &=& \frac{1}{2} \int_D d{\bf r} \bigg\{\frac{1}{\rho_0} \left\langle \frac{1}{h({\bf r})} \right\rangle_0 |\langle {\bf j}({\bf r}) \rangle_0|^2 \nonumber \\ &&+\ g \rho_0 \langle [h({\bf r}) + h_b({\bf r}) - H_0]^2 \rangle_0 + \frac{1}{\bar \beta} \bigg\} \nonumber \\ \Pi[n_{\Omega,h}] &=& \rho_0 \int d{\bf r} [{\bf v}_\Pi({\bf r}) \cdot \langle {\bf j}({\bf r}) \rangle_0 + \langle h({\bf r}) \rangle_0 F_\Pi({\bf r})] \nonumber \\ \Gamma_l[n_{\Omega,h}] &=& \int_{\partial D_l} \left\langle \frac{1}{h({\bf r})} \right\rangle_0 \langle {\bf j}({\bf r}) \rangle_0 \cdot d{\bf l} \nonumber \\ g_\sigma[n_{\Omega,h}] &=& \int_D d{\bf r} \int_0^\infty \lambda d\lambda \, n_{\Omega,h}({\bf r},\sigma,\lambda). \label{5.8} \end{eqnarray} Here local averages $\langle \cdot \rangle_0$ are defined in the obvious way: \begin{equation} \langle F[\Omega({\bf r}),h({\bf r})] \rangle_0 = \int d\sigma \int_0^\infty d\lambda F(\sigma,\lambda) n_{\Omega,h}({\bf r},\sigma,\lambda), \label{5.9} \end{equation} while the mean current density $\langle {\bf j}({\bf r}) \rangle_0$, obeying $\nabla \cdot \langle {\bf j}({\bf r}) \rangle_0 = 0$, is defined by the analog of (\ref{5.2}): \begin{equation} \nabla \times \left[\left\langle \frac{1}{h({\bf r})} \right\rangle_0 \langle {\bf j}({\bf r}) \rangle_0 \right] = \langle h({\bf r}) \Omega({\bf r}) \rangle_0 + f({\bf r}), \label{5.10} \end{equation} which, in addition to the circulation constraint in (\ref{5.8}), fully specifies its form. Along the same lines as (\ref{4.39}) and (\ref{4.40}), the formal solution may be expressed in terms of $\overline{G_h}$.
The total microcanonical entropy ${\cal S}$ is now given by a functional integral over all $n_{\Omega,h}$, constrained by particular values of all of the conserved quantities: \begin{eqnarray} e^{{\cal S}(\varepsilon,p,g)/\Delta x^2} &=& \int D[n_{\Omega,h}] e^{S[n_{\Omega,h}]/\Delta x^2} \delta(\varepsilon - E[n_{\Omega,h}]) \nonumber \\ &\times& \delta(p - \Pi[n_{\Omega,h}]) \prod_{l=2}^{N_D} \delta(c_l - \Gamma_l[n_{\Omega,h}]) \nonumber \\ &\times& \prod_\sigma \delta(g(\sigma) - g_\sigma[n_{\Omega,h}]). \label{5.11} \end{eqnarray} The computation of ${\cal S}$ proceeds now by noting the appearance of the divergent factors $1/\Delta x^2$ in the exponentials, which produces a saddle point solution: ${\cal S}$ is the maximum of $S[n_{\Omega,h}]$ over all $n_{\Omega,h}$ obeying the constraint conditions (and, for this reason, the precise definition of the measure $\int D[n_{\Omega,h}]$ is not important here). We handle these constraints via the \emph{ordinary} use of Lagrange multipliers: rather than invoking them, via the grand canonical ensemble, at the level of the phase space integration, which introduces more stringent conditions on valid free energy minima, we use them here only to perform the constrained maximization of the entropy functional $S[n_{\Omega,h}]$. Thus, we introduce Lagrange multipliers $\beta = \bar \beta/\Delta x^2, \alpha, \gamma_l$, respectively, for the energy, momentum, and circulation constraints, a function $\mu(\sigma)$ defining a functional \begin{equation} {\cal C}_\mu[n_{\Omega,h}] = \int_D d{\bf r} \int \mu(\sigma) d\sigma \int_0^\infty \lambda d\lambda \, n_{\Omega,h}({\bf r},\sigma,\lambda) \label{5.12} \end{equation} that is used to enforce the Casimir constraints, and an additional function $\zeta({\bf r})$ to enforce the normalization constraint (\ref{5.5}): \begin{equation} {\cal N}_\zeta[n_{\Omega,h}] = \int_D \zeta({\bf r}) d{\bf r} \int_0^\infty d\lambda \int d\sigma \, n_{\Omega,h}({\bf r},\sigma,\lambda).
\label{5.13} \end{equation} We therefore seek the minimum with respect to $n_{\Omega,h}$ of the microcanonical variational free energy \begin{eqnarray} {\cal F}_\mathrm{micro} &=& E[n_{\Omega,h}] - \bar T S[n_{\Omega,h}] - \alpha \Pi[n_{\Omega,h}] \nonumber \\ &&-\ \sum_{l=2}^{N_D} \gamma_l \Gamma_l[n_{\Omega,h}] - {\cal C}_\mu[n_{\Omega,h}] - {\cal N}_\zeta[n_{\Omega,h}] \nonumber \\ &&-\ 2 \bar T \int d{\bf r} \langle \ln(H_0/h({\bf r})) \rangle_0. \label{5.14} \end{eqnarray} The last term accounts for the net $h^2$ factor in the phase space measure that also appears in (\ref{4.26}). The Euler--Lagrange equation, $\delta {\cal F}_\mathrm{micro}/\delta n_{\Omega,h}({\bf r},\sigma,\lambda) = 0$, produces \begin{eqnarray} &&\bar T \ln[(P_0/\rho_0) n_{\Omega,h}({\bf r},\sigma,\lambda)/\lambda^2] = - \frac{\rho_0 g}{2} [\lambda + \bar h_b({\bf r}) - H_0]^2 \nonumber \\ &&\ \ \ \ \ +\ \frac{|{\bf J}({\bf r})|^2}{2\rho_0 \lambda} + \lambda [\mu(\sigma) - \sigma \Psi({\bf r})] - {\cal N}({\bf r}). \label{5.15} \end{eqnarray} Here, \begin{equation} {\bf J}({\bf r}) \equiv \nabla \times \Psi({\bf r}) = \langle {\bf j}({\bf r}) \rangle_0 - \alpha \rho_0 \langle h({\bf r}) \rangle_0 {\bf v}_\Pi({\bf r}) \label{5.16} \end{equation} includes the momentum term, as does the shift (\ref{4.6}) to $\bar h_b$, and the circulation constraint enforces the boundary value $\Psi|_{\partial D_l} = \gamma_l$---equivalent to the first line of (\ref{4.24}). The normalization ${\cal N}({\bf r})$ combines various other constant terms with $\zeta({\bf r})$. Exponentiating this result precisely reproduces (\ref{4.32}). Inserting this result into (\ref{5.10}) produces the self-consistent variational equation for $\Psi$, equivalent to (\ref{5.2}). Inserting it into (\ref{5.8}) produces equations for the Lagrange multipliers.
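The structure of this constrained extremization---entropy plus linear constraints yielding an exponential-family solution, with multipliers fixed by the prescribed conserved values---can be illustrated by a small discrete analogue (a toy model of our own, not the shallow water functional itself): maximize $-\sum_i p_i \ln p_i$ at fixed mean ``energy'', and solve for the multiplier by bisection.

```python
import numpy as np

# Toy discrete analogue (ours, not the paper's functional) of the constrained
# entropy maximization behind (5.15): the maximizer at fixed mean energy is
# the exponential family p_i ~ exp(-beta E_i), with beta fixed by the constraint.
E = np.array([0.0, 1.0, 2.0, 3.5, 5.0])    # arbitrary "energy" levels
target = 1.7                                # prescribed mean energy

def mean_E(beta):
    w = np.exp(-beta * (E - E.mean()))      # shifted for numerical stability
    return np.dot(E, w) / w.sum()

lo, hi = -10.0, 10.0                        # mean_E is monotone decreasing
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_E(mid) > target else (lo, mid)
beta = 0.5 * (lo + hi)
p = np.exp(-beta * (E - E.mean()))
p /= p.sum()
```

The resulting $p_i \propto e^{-\beta E_i}$ is the discrete counterpart of exponentiating (\ref{5.15}), and it has strictly larger entropy than any other distribution meeting the same constraint.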
\subsection{Equilibrium properties of the field $Q$} \label{sec:Qdistr} All of the previous results were derived by freely integrating out the compressional field $q = hQ$, resulting, via (\ref{4.19}), in $\Phi \equiv 0$ and confirming that the large scale mean flow is completely determined by the remaining field $\Psi$. The distribution $n_{\Omega,h}$, defined by (\ref{4.32}), is fundamental and allows one to compute all inputs to the variational equation (\ref{5.2}). However, it may be of interest to compute equilibrium properties of $q$ as well. As observed above, its mean $\langle q({\bf r}) \rangle = \nabla \cdot {\bf V}({\bf r})$ is trivially determined from the previously computed mean flow. More interesting are its statistical fluctuations about the mean. To illustrate such a computation (but still within the variational approximation), we extend the two-field distribution (\ref{4.32}) to the three-field distribution function \begin{equation} n_0({\bf r},\sigma,\kappa,\lambda) = \langle \delta[\Omega({\bf r}) - \sigma] \delta[\bar Q({\bf r}) - \kappa] \delta[h({\bf r}) - \lambda] \rangle, \label{5.17} \end{equation} whose integral over $\kappa$ must reduce to (\ref{4.32}). Here $\bar Q = \Delta x Q$ will be seen to be the correct continuum limit scaling \cite{foot:QRVB}: the fluctuations in $Q$ are $O(1/\Delta x)$, leading to order unity fluctuations in the compressional part of the velocity ${\bf v}^C = -\nabla \phi$, and a continuous velocity potential $\phi$. The computation again begins with the KHS-transformed free energy functional (\ref{4.17}). Integration over the field $Q$ now replaces (\ref{4.19}) by \begin{equation} e^{-\bar \beta \lambda \Phi({\bf r}) \kappa/\Delta x} \prod_{{\bf r}_i \neq {\bf r}} 2\pi \delta(i\bar \beta h_i \Phi_i), \label{5.18} \end{equation} where we have substituted $h({\bf r}) = \lambda$.
The result of the $\Phi$ integral is again to set $\Phi_i = 0$ for all ${\bf r}_i \neq {\bf r}$, but now leaving a single nontrivial integral over $\Phi({\bf r})$. The dependence on $\Phi({\bf r})$, via the $|{\bf J}|^2/2\rho_0 h$ term, is quadratic, with $\nabla \Phi(\bar {\bf r}) = \Delta x^{-1} [\Phi(\bar {\bf r} + \Delta x {\bf \hat x}) - \Phi(\bar {\bf r}),\ \Phi(\bar {\bf r} + \Delta x {\bf \hat y}) - \Phi(\bar {\bf r})]$ nonzero only on the neighboring sites ${\bf r}$, ${\bf r}_x = {\bf r} - \Delta x {\bf \hat x}$, and ${\bf r}_y = {\bf r} - \Delta x {\bf \hat y}$. The result is the (normalized) Gaussian integral \begin{eqnarray} n_G({\bf r},\kappa) &=& \frac{\bar \beta \lambda}{\Delta x} \int_C d\Phi({\bf r}) e^{-\bar \beta \lambda \Phi({\bf r}) (\kappa - \bar \kappa)/\Delta x} \nonumber \\ &&\ \ \ \ \ \ \times\ e^{[\bar \beta \lambda \Phi({\bf r})/\Delta x]^2 \Delta \kappa^2/2} \nonumber \\ &=& \frac{e^{-(\kappa - \bar \kappa)^2/2 \Delta \kappa^2}} {\sqrt{2\pi \Delta \kappa^2}}, \label{5.19} \end{eqnarray} where we define the mean and variance \begin{eqnarray} \bar \kappa({\bf r},\lambda,\lambda_x,\lambda_y) &=& \frac{1}{\rho_0 \lambda} \left[\frac{\partial_x \Psi({\bf r}) - \partial_y \Psi({\bf r})}{\lambda} \right. \nonumber \\ &&- \frac{\partial_x \Psi({\bf r}_y)}{\lambda_y} +\ \left. \frac{\partial_y \Psi({\bf r}_x)}{\lambda_x} \right] \nonumber \\ \Delta \kappa(\lambda,\lambda_x,\lambda_y)^2 &=& \frac{1}{\bar \beta \lambda^2} \left[\frac{2}{\lambda} + \frac{1}{\lambda_x} + \frac{1}{\lambda_y} \right] \label{5.20} \end{eqnarray} in which $\lambda_x = h({\bf r}_x)$, $\lambda_y = h({\bf r}_y)$, and, for future reference, $\sigma_x = \Omega({\bf r}_x)$, $\sigma_y = \Omega({\bf r}_y)$ \cite{foot:dualpsi}. The result for $n_G$ is independent of $\Delta x$, as claimed. 
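Because the mean and variance (\ref{5.20}) depend on the local heights, averaging the conditional Gaussian (\ref{5.19}) over the height statistics produces a non-Gaussian mixture. A sketch (our own illustration: $\bar\kappa$ is set to zero in place of the $\Psi$-gradient terms, and the heights are drawn from an assumed uniform law rather than the actual $n_{\Omega,h}$):

```python
import numpy as np

# Illustration with assumed inputs: superposing the conditional Gaussians (5.19)
# over fluctuating heights (lambda, lambda_x, lambda_y) in (5.20). kappa-bar is
# set to 0 as a placeholder for the Psi-gradient terms; the height law is assumed.
rng = np.random.default_rng(3)
beta_bar = 1.0
kappa = np.linspace(-30.0, 30.0, 1501)
dk = kappa[1] - kappa[0]

lam, lam_x, lam_y = rng.uniform(0.5, 1.5, size=(3, 2000))
var = (2.0 / lam + 1.0 / lam_x + 1.0 / lam_y) / (beta_bar * lam**2)  # Delta kappa^2

gauss = (np.exp(-kappa[None, :]**2 / (2.0 * var[:, None]))
         / np.sqrt(2.0 * np.pi * var[:, None]))
n_Q = gauss.mean(axis=0)        # mixture density over the height fluctuations
```

Whenever the variance (\ref{5.20}) fluctuates while the means coincide, the mixture is heavier-tailed than any single Gaussian (strictly positive excess kurtosis), so the marginal statistics of $\bar Q$ are not themselves Gaussian.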
The integral over the fields $\Omega,h$ now produces a factor $e^{\bar \beta W(\bar {\bf r})}$ [defined by (\ref{4.26}), with the same argument substitutions as in (\ref{4.28})], for every $\bar {\bf r} \notin \{{\bf r},{\bf r}_x,{\bf r}_y \}$, while the remaining integrals produce a factor \begin{eqnarray} n_{\bar Q}({\bf r},\kappa|\lambda) &=& \int_0^\infty d\lambda_x \int d\sigma_x \, n_{\Omega,h}({\bf r}_x,\sigma_x,\lambda_x) \nonumber \\ &&\times\ \int_0^\infty d\lambda_y \int d\sigma_y \, n_{\Omega,h}({\bf r}_y,\sigma_y,\lambda_y) \nonumber \\ &&\ \ \ \ \ \ \times\ n_G({\bf r},\kappa|\lambda,\lambda_x,\lambda_y), \label{5.21} \end{eqnarray} which differs from unity by the presence of the Gaussian factor (\ref{5.19}). One may view the result as a superposition of Gaussian densities in which the mean $\bar \kappa$ and variance $\Delta \kappa^2$ range over values weighted by the probability distribution $n_{\Omega,h}$. The normalization (\ref{5.19}) ensures that $n_{\bar Q}(\kappa)$ is a probability density for any fixed values of the other parameters. With these inputs, the final result for $n_0$ is given by \begin{equation} n_0({\bf r},\sigma,\kappa,\lambda) = n^\mathrm{eq}_{\Omega,h}({\bf r},\sigma,\lambda) n^\mathrm{eq}_{\bar Q}({\bf r},\kappa|\lambda,\sigma) \label{5.22} \end{equation} in which the ``eq'' superscript indicates that in the continuum limit one simply substitutes the variational solution $\Psi = \Psi^\mathrm{eq}$. In this same limit one may replace all appearances of $\Psi$ and its derivatives on neighboring lattice sites by their values at ${\bf r}$ wherever they appear in (\ref{5.20}) and (\ref{5.21}). A key observation is that the fluctuation statistics predicted by $n_{\Omega,h}$ and $n_0$ are not independent. In particular, independent products of various terms for fixed height field $h$ become strongly mixed after the functional integral over $h$.
This is in strong contrast to the results of RVB, in which the different choice of phase space measure does produce independent statistics \cite{foot:QRVB}. Nevertheless, despite this statistical entanglement, we have seen in Sec.\ \ref{sec:khs} that $Q$ can still be straightforwardly integrated out to produce a relatively transparent effective free energy (\ref{4.25}) for $\Omega,h$. Note as well that the microscale Gaussian form (\ref{5.19}), and hence the precise form of $n_{\bar Q}$, is sensitive to the definition of the discrete derivative used here. One could imagine using a non-square lattice, and/or further-neighbor discrete difference forms. Given that the microscale fluctuations on the grid scale $\Delta x$ contain a finite fraction of the system energy, this sensitivity to the precise form of the grid is physically consistent. However, this sensitivity disappears upon integrating out $\kappa$. Thus, $n_{\Omega,h}$ depends only on the macroscopic flow $\Psi^\mathrm{eq}$, and produces continuum limit equilibrium forms that are insensitive to grid details. This is entirely consistent with the trivial equipartition contribution to the energy observed in (\ref{5.3}) arising from fluctuations in $Q$. \subsection{Small height fluctuation limit} \label{sec:smallhfluct} In order to begin to make contact with equilibria, previously treated in the literature \cite{WP2001,CS2002}, in which surface height fluctuations were neglected, we consider here the limit in which small scale fluctuations in $h$ are assumed very small. This allows one to further reduce the problem to a simultaneous extremum problem for $\Psi$ and $h$. We continue to work within the variational approximation, though one may expect that for parameter ranges that do indeed produce small fluctuations, this approximation may often become exact (though quantifying this is beyond the scope of this paper).
Assuming that thermodynamic parameters are chosen in such a way that (\ref{4.26}) produces a very narrow distribution for $h$ about a (yet to be determined) mean, one may simplify $W$ to the form \begin{eqnarray} e^{\bar \beta W(\tau,h_0,\xi)} &=& \frac{h^2}{H_0^2} e^{V(\tau,\bar \beta h)} e^{\frac{1}{2} \bar \beta[\xi/\rho_0 h - \rho_0 g(h+h_0)^2]} \nonumber \\ e^{V(\tau,\gamma)} &\equiv& \frac{\rho_0 H_0^3}{P_0} \int d\sigma e^{\gamma [\mu(\sigma) - \sigma \tau]}, \label{5.23} \end{eqnarray} and the free energy functional (\ref{4.28}) now reduces to the form \begin{widetext} \begin{equation} {\cal F}[\Psi,h] = \bar F^0(\alpha,{\bm \gamma}) - \int_D d{\bf r} \left[\frac{|\nabla \Psi|^2}{2 \rho_0 h} - \frac{1}{2} \rho_0 g (h + \bar h_b - H_0)^2 + \bar f \Psi + \frac{1}{\bar \beta} V(\Psi,\bar \beta h) \right], \label{5.24} \end{equation} \end{widetext} whose minimum describes the large scale equilibrium flow, and simultaneously self-consistently determines the value of the mean surface height $h$. The Euler--Lagrange equations now produce the forms \begin{eqnarray} &&-\nabla \cdot \left(\frac{1}{\rho_0 h} \nabla \Psi \right) + \bar f = -\frac{1}{\bar\beta} \partial_\tau V(\Psi,\bar \beta h) \label{5.25} \\ &&\frac{|\nabla \Psi|^2}{2\rho_0 h^2} + \rho_0 g(h + \bar h_b - H_0) = \partial_\gamma V(\Psi,\bar \beta h) - \frac{2}{\bar \beta h}. \nonumber \end{eqnarray} Analogous to (\ref{4.32}), the potential vorticity distribution function for given $\Psi,h$ is \begin{eqnarray} n_\Omega({\bf r},\sigma) &=& \langle \delta[\Omega({\bf r}) - \sigma] \rangle \label{5.26} \\ &=& \frac{\rho_0 H_0^3}{P_0} e^{-V(\Psi,\bar\beta h)} e^{\bar \beta h [\mu(\sigma) - \sigma \Psi]}, \nonumber \end{eqnarray} from which one identifies \begin{eqnarray} -\frac{1}{\bar \beta} \partial_\tau V &=& h({\bf r}) \int \sigma d\sigma \, n_\Omega({\bf r},\sigma) \nonumber \\ &=& h({\bf r}) \langle \Omega({\bf r}) \rangle = \langle \omega({\bf r}) \rangle + f({\bf r}).
\label{5.27} \end{eqnarray} Analogous to (\ref{5.2}), if we define the equilibrium flow velocity ${\bf V}$ by \begin{equation} {\bf V} - \alpha {\bf v}_\Pi = (\rho_0 h)^{-1} \nabla \times \Psi, \label{5.28} \end{equation} the first line of (\ref{5.25}) reproduces (\ref{5.1}), while the second line produces the generalized Bernoulli equation \begin{equation} \frac{1}{2} \rho_0 |{\bf V} - \alpha {\bf v}_\Pi|^2 + \rho_0 g \bar \eta = \partial_\gamma V(\Psi,\bar \beta h) - \frac{2}{\bar \beta h}. \label{5.29} \end{equation} As discussed in \cite{RVB2016}, by perturbatively treating small, but finite, fluctuations in $h$ around the mean value defined by (\ref{5.25}), the resulting theory is that of a weakly coupled system consisting of large-scale eddy motions with superimposed small scale fluctuations. \subsubsection{Vlasov and Bernoulli conditions} \label{subsec:vlasovbernoulli} The forms (\ref{5.25}) appear to violate the Vlasov and Bernoulli conditions, namely that the right hand sides should depend only on the stream function $\Psi$. These conditions follow from the observations that the first line of (\ref{2.1}) and (\ref{2.7}), respectively, require that steady state flows (i.e., time-independent, in the appropriate frame of reference if momentum is conserved) obey \begin{eqnarray} ({\bf v}-\alpha{\bf v}_\Pi) \cdot \nabla \left(\frac{1}{2} |{\bf v} - \alpha {\bf v}_\Pi|^2 + g \bar \eta \right) &=& 0 \nonumber \\ ({\bf v}-\alpha {\bf v}_\Pi) \cdot \nabla \Omega &=& 0. \label{5.30} \end{eqnarray} The steady state condition $\nabla \cdot {\bf J} = 0$ implied by the second line of (\ref{2.1}) allows one to express ${\bf J} \equiv h({\bf v}-\alpha{\bf v}_\Pi) = \nabla \times \Psi$ in terms of a current density stream function $\Psi$.
Equations (\ref{5.30}) then imply that the level curves of $\Psi$, $\Omega$, and $B \equiv \frac{1}{2} |{\bf v} - \alpha {\bf v}_\Pi|^2 + g \bar \eta$ all coincide, and hence one may formally write $\Omega = f_\Omega(\Psi)$ and $B = f_B(\Psi)$ for some fixed pair of 1D functions $f_\Omega,f_B$. To resolve the paradox implied by the failure of the equilibrium equations to produce this functional dependence, one must understand the limits under which surface height fluctuations are small, and show that these indeed restore the Vlasov and Bernoulli conditions. We consider the cases of (1) strong gravity, $g \to \infty$, (2) low temperature $\bar \beta \to \infty$, and (3) the effects of physical processes that dissipate small scale fluctuations and hence lead to a quiescent surface. \paragraph{Case (1):} The limit $g \to \infty$ turns out to be surprisingly subtle, and is discussed in detail in Sec.\ \ref{sec:eulercomp}. This limit indeed produces a fluctuation-free surface, $\eta \to 0$, hence $h = H_0 - \bar h_b$ independent of $\Psi$. However, due to the increased surface wave speed $c \approx \sqrt{gH_0}$, even as $g \to \infty$ one finds finite amplitude fluctuations in the compressional part of ${\bf v}$. Even though these fluctuations can still be integrated out freely [see equation (\ref{4.19})], this still leads to violations of the Vlasov condition because the advective term ${\bf v} \cdot \nabla \Omega$ in (\ref{2.7}) contains finite amplitude, correlated fluctuations in both ${\bf v}$ and $\Omega$, with the result that $\langle {\bf v} \cdot \nabla \Omega \rangle \neq {\bf V} \cdot \nabla \langle \Omega \rangle$. The same considerations apply to the Bernoulli condition, which then fails because $\nabla \cdot \langle h{\bf v} \rangle \neq \nabla \cdot (\langle h \rangle {\bf V})$.
As shown in Sec.\ \ref{sec:eulercomp}, if one imposes a strict ``rigid lid'' condition on the surface, corresponding to the Euler equation limit, the Vlasov condition is restored, but the absence of microscopic fluctuations in ${\bf v}$ and $\eta$ leads to a quantitatively different equilibrium theory. Only in the additional $\bar \beta \to \infty$ limit, discussed next, do the two theories match. \paragraph{Case (2):} In the limit $\bar \beta \to \infty$ the term $2/\bar \beta h$ may be neglected, while a steepest descent evaluation of $V(\tau,\gamma)$ is appropriate. The latter produces \begin{equation} V(\tau,\gamma) \approx \gamma \{\mu[\sigma_0(\tau)] - \tau \sigma_0(\tau) \} \label{5.31} \end{equation} where $\sigma_0(\tau)$ is the solution to the stationary condition \begin{equation} \tau = \mu'(\sigma). \label{5.32} \end{equation} This leads to \begin{eqnarray} &&\frac{1}{\bar \beta} V(\Psi,\bar \beta h) \approx h \{\mu[\sigma_0(\Psi)] - \Psi \sigma_0(\Psi) \} \nonumber \\ &&\Rightarrow\ \left\{\begin{array}{l} \partial_\tau V/\bar \beta h = -\sigma_0(\Psi) \\ \partial_\gamma V = \mu[\sigma_0(\Psi)] - \Psi \sigma_0(\Psi), \end{array} \right. \label{5.33} \end{eqnarray} which are indeed both functions of $\Psi$ alone. Thus, zero temperature, non-fluctuating flows indeed satisfy the requisite stream line conditions. Equation (\ref{5.32}), taking the form $\Psi = \mu'(\Omega)$, directly exhibits the Lagrange multiplier function. \paragraph{Case (3):} This case is the most speculative, and was in fact the basis for the treatment of shallow water equilibria in Ref.\ \cite{WP2001}. There, at a critical step in the analysis, fluctuations in $h$ and $Q$ were simply assumed to have been suppressed by some set of dissipative mechanisms (e.g., viscosity, wave breaking). The resulting variational equation for $\Psi,h$ was then developed in a form similar to (\ref{5.24}) and (\ref{5.25}). 
By appealing to dissipative mechanisms lying outside of the shallow water system, the theory is removed, at least temporarily, from the purely equilibrium statistical mechanics arena. The supporting notion is that in a number of physically relevant cases, a strong separation develops between the large scale eddies and the small scale wave motions, and the latter are preferentially dissipated with negligible effect on the large scale flow. The result is to remove a certain fraction of the total energy from the system, while the remainder would be proposed to lie entirely in a ``renormalized'' equilibrium flow with vanishing height fluctuations. The appropriate effective theoretical description could then be a version of case (2), in which corresponding renormalized values of the Lagrange multipliers are sought that reproduce the observed values of the conserved integrals. An interesting consequence is that negative temperatures are no longer precluded \cite{WP2001}. Thus, $V(\tau,\gamma)$, unlike $W(\tau,h_0,\xi)$, is perfectly well defined for $\gamma < 0$, and so extrema of (\ref{5.24}) may be sought for both positive and negative $\bar \beta$ (in particular, for both $\bar \beta \to \pm \infty$). In principle, negative temperature equilibria are unstable to leakage of energy into (positive temperature) wave motions, but the physical coupling of large scale flows to small scale wave generation is extremely small, and it makes sense to develop a theory along these lines that neglects such effects. The key observation here is that compact eddy structures, such as Jupiter's Great Red Spot, having vorticity maxima confined away from the system boundaries, can only be interpreted as negative temperature states \cite{MWC1992}. 
Such structures therefore lie outside the strict shallow water theory presented here, and nonequilibrium dissipation arguments \emph{must} therefore be invoked in order to make contact with the effective equilibrium descriptions ubiquitous in the literature \cite{BV2012}. We note finally that there is no reason for the more general result (\ref{5.1})---or, for that matter, the fully fluctuating result (\ref{4.34})---to satisfy these conditions because the microscale flows are not steady state. The conditions need only be restored when such fluctuations are assumed to be absent. An interesting point is that RVB found that, even in their general theory, both conditions are satisfied, and cite this as a supporting feature \cite{RVB2016}. Their result occurs because, in contrast to (\ref{5.2}), their version of the phase space measure produces independent microscale fluctuations of $h,Q,\Omega$. This leads in particular to $\langle {\bf v} \cdot \nabla Q \rangle = {\bf V} \cdot \nabla \langle Q \rangle$ and $\nabla \cdot \langle h{\bf v} \rangle = \nabla \cdot (\langle h \rangle {\bf V})$, and this is then reflected in the desired $\Psi$-dependence of the equilibrium equations. This feature is therefore a direct consequence of the inconsistency of their measure choice with the Liouville theorem (see Apps.\ \ref{app:liouville} and \ref{app:liouvilleinequiv}), and we argue therefore that it should not (in the absence of much deeper arguments) be considered as supporting the validity of the approach. \section{Comparison with Euler equilibria} \label{sec:eulercomp} In this section, shallow water equilibria will be compared to those of the Euler equation, including variable bottom topography $h_b({\bf r})$, but now with a fixed rigid-lid surface. It will be shown that the latter leads to an equilibrium phase space measure with non-uniform gridding of the domain $D$, determined by $h_b$---consistent in this case with the choice made by RVB.
This is significantly different from the limit $g \to \infty$ in the shallow water results of Sec.\ \ref{sec:statmech}, which continues to require uniform gridding. The paradox is resolved by showing that the equilibria are in fact expected to be physically \emph{different}, with microscale height fluctuation effects present even in the limit $g \to \infty$. These results serve again to highlight the inconsistency of the RVB nonuniform grid choice with that implied by the shallow water Liouville equation. The Euler equation, including variable bottom topography, is described by \begin{eqnarray} \partial_t {\bf v} + ({\bf v} \cdot \nabla) {\bf v} + f {\bf \hat z} \times {\bf v} &=& -\frac{1}{\rho_0} \nabla p \nonumber \\ \nabla \cdot (h {\bf v}) &=& 0, \label{6.1} \end{eqnarray} and is equivalent to the shallow water equations (\ref{2.1}) but with $h({\bf r}) = H_0 - h_b({\bf r})$ now a fixed function, and the pressure $p$ enforcing the incompressibility condition (an equation for $p$ is obtained by multiplying both sides of the first line by $h$ and taking the divergence). The potential vorticity is still given by (\ref{2.6}), and continues to be advectively conserved [equation (\ref{2.7})]. The incompressibility condition implies that \begin{equation} {\bf j} = \rho_0 h {\bf v} = \nabla \times \psi \label{6.2} \end{equation} is purely transverse. The velocity and potential vorticity \begin{eqnarray} {\bf v} &=& \frac{1}{\rho_0 h} \nabla \times \psi \nonumber \\ h\Omega - f &=& -\nabla \cdot \left(\frac{1}{\rho_0 h} \nabla \psi \right) \label{6.3} \end{eqnarray} are completely determined in terms of the single scalar function $\psi$. The equation of motion (\ref{2.7}) therefore fully describes the Euler dynamics. Statistical equilibria, obeying the Vlasov condition \begin{equation} {\bf v} \cdot \nabla \Omega = 0, \label{6.4} \end{equation} are then formulated entirely in terms of $\Omega$ as well. 
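The transversality expressed by (\ref{6.2}) holds identically for any stream function and any positive height profile, and is easy to verify numerically. The sketch below (an illustrative check assuming a doubly periodic domain and arbitrary choices of $h$ and $\psi$, rather than the Dirichlet problem treated in the text) confirms that $\nabla \cdot (h {\bf v})$ vanishes to machine precision when ${\bf v} = (\rho_0 h)^{-1} \nabla \times \psi$:

```python
import numpy as np

# Check that the representation j = rho0*h*v = curl(psi) makes the mass flux
# exactly transverse: div(h v) = 0 for any psi and any h > 0.  The domain is
# taken doubly periodic for convenience (spectral derivatives); the grid size,
# topography amplitude, and stream function below are illustrative assumptions.
N = 64
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers

def ddx(f): return np.real(np.fft.ifft2(1j * k[:, None] * np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j * k[None, :] * np.fft.fft2(f)))

rho0, H0 = 1.0, 1.0
h = H0 - 0.3 * np.cos(X) * np.sin(Y)         # variable height h = H0 - h_b > 0
psi = np.sin(2 * X) * np.cos(Y) + 0.5 * np.sin(X + Y)

# v = (1/(rho0 h)) curl(psi) = (1/(rho0 h)) (d_y psi, -d_x psi)
vx = ddy(psi) / (rho0 * h)
vy = -ddx(psi) / (rho0 * h)

div_hv = ddx(h * vx) + ddy(h * vy)           # should vanish identically
print(np.max(np.abs(div_hv)))
```

The divergence cancels because $h{\bf v}$ reduces to $(\partial_y \psi, -\partial_x \psi)/\rho_0$ before the divergence is taken, independently of the topography.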
For simplicity, we will consider only a simply connected domain $D$ with Dirichlet boundary conditions on $\psi$. Defining the (symmetric) scalar Green function $G_h$ by \begin{equation} -\nabla \cdot \frac{1}{\rho_0 h} \nabla G_h({\bf r},{\bf r}') = \delta({\bf r}-{\bf r}') \label{6.5} \end{equation} with Dirichlet boundary conditions [identical to (\ref{4.38}), but now with deterministic $h$], one obtains the relation \begin{equation} \psi({\bf r}) = \int_D d{\bf r}' G_h({\bf r},{\bf r}') (h\Omega - f)({\bf r}'). \label{6.6} \end{equation} \subsection{Liouville theorem and equilibrium measures} \label{sec:eulerliouville} The Liouville theorem follows from the equation of motion written in the conserved form \begin{equation} h \dot \Omega = -\nabla \cdot [h \Omega {\bf v}], \label{6.7} \end{equation} which leads to \cite{foot:liouville} \begin{equation} h({\bf r}) \frac{\delta \dot \Omega({\bf r})}{\delta \Omega({\bf r})} = - \nabla \cdot \left\{h({\bf r}) \frac{\delta [\Omega({\bf r}) {\bf v}({\bf r})]}{\delta \Omega({\bf r})} \right\}. \label{6.8} \end{equation} From the boundary conditions on $\partial D$, it follows that \begin{equation} \int_D d{\bf r} h({\bf r}) \frac{\delta \dot \Omega({\bf r})}{\delta \Omega({\bf r})} = 0. \label{6.9} \end{equation} In order to express this in the standard form of a divergence free condition on the phase space flows, we define a mapping ${\bf r}({\bf a}): D \to D$ (clearly not unique, but still fixed by the bottom topography) with Jacobian \begin{equation} J({\bf a}) \equiv \frac{\partial {\bf r}}{\partial {\bf a}} = \frac{H_0}{h[{\bf r}({\bf a})]}. \label{6.10} \end{equation} Thus, ${\bf r}({\bf a})$ maps a fluid with uniform height $H_0$ to one with variable, but time-independent, height $h$. Relabeling $\Omega({\bf a}) \equiv \Omega[{\bf r}({\bf a})]$, (\ref{6.9}) may be written in the form \begin{equation} \int_D d{\bf a} \frac{\delta \dot \Omega({\bf a})}{\delta \Omega({\bf a})} = 0. 
\label{6.11} \end{equation} It follows immediately from (\ref{6.11}) that equilibrium statistical measures $\rho(E,P,{\cal C})$ are, as usual, functions only of the conserved integrals, and phase space averages may be defined through the continuum limit \begin{equation} \rho[\Omega] D[\Omega] = \lim_{\Delta V \to 0} \rho[\Omega] \prod_i d\Omega_i, \label{6.12} \end{equation} in which $\Omega_i = \Omega({\bf a}_i)$ and $\{{\bf a}_i \}$ represents a \emph{uniform} gridding of $D$, with fixed physical fluid element volume $\Delta V = H_0 \Delta A = h_i \Delta A_i$, where $\Delta A_i$ is the image of cell $i$ under the mapping ${\bf r}({\bf a})$. \subsection{Statistical mechanics} \label{sec:eulerstatmech} The grand canonical statistical measure is given by \begin{eqnarray} \rho &=& \frac{1}{Z} e^{-\beta {\cal K}[\Omega]} \nonumber \\ {\cal K}[\Omega] &=& E[\Omega] - C_\mu[\Omega] \label{6.13} \end{eqnarray} with energy and Casimir functionals \begin{eqnarray} E[\Omega] &=& \frac{\rho_0}{2} \int_D d{\bf r} \int_D d{\bf r}' (h \Omega - f)({\bf r}) \nonumber \\ &&\ \ \ \ \ \ \ \ \times G_h({\bf r},{\bf r}') (h \Omega - f)({\bf r}') \nonumber \\ C_\mu[\Omega] &=& \int_D d{\bf r} h({\bf r}) \mu[\Omega({\bf r})]. \label{6.14} \end{eqnarray} Given the simply connected domain, momentum conservation is not considered here. In discrete form, one obtains \begin{eqnarray} E &=& \frac{\rho_0 H_0^2}{2} \int_D d{\bf a} \int_D d{\bf a}' (\Omega - f/h)({\bf a}) \nonumber \\ &&\ \ \ \ \ \times G_h[{\bf r}({\bf a}),{\bf r}({\bf a}')] (\Omega - f/h)({\bf a}') \nonumber \\ &=& \lim_{\Delta V \to 0} \frac{\rho_0}{2} \Delta V^2 \sum_{i,j} (\Omega - f/h)_i G_{h,ij} (\Omega - f/h)_j \nonumber \\ C_\mu &=& H_0 \int_D d{\bf a} \mu[\Omega({\bf a})] = \lim_{\Delta V \to 0} \Delta V \sum_i \mu[\Omega({\bf a}_i)]. 
\nonumber \\ \label{6.15} \end{eqnarray} The KHS transformation, acting to decouple the energy, produces the partition function \begin{eqnarray} Z &=& \prod_i \frac{P_0}{\rho_0 H_0^3} \int d\Omega_i e^{-\beta {\cal K}[\Omega]} \nonumber \\ &=& \frac{1}{{\cal N}_h} \prod_i P_0 H_0 \int d\Psi_i e^{-\beta {\cal F}[\Psi]} \label{6.16} \end{eqnarray} with (continuum limit) free energy functional \begin{equation} {\cal F}[\Psi] = -\int d{\bf r} \left[\frac{|\nabla \Psi|^2}{2 \rho_0 h} + f \Psi + h W(\Psi) \right]. \label{6.17} \end{equation} The normalization ${\cal N}_h$ is the determinant of the quadratic form that defines $E$ in (\ref{6.15}), and is a nontrivial functional of $h$. This is in contrast to the shallow water KHS result (\ref{4.10}), where the normalization takes the form of trivial product factors. However, it is a fixed constant, and hence does not contribute to equilibrium averages. The Lagrange multiplier function $\mu$ is now subsumed into the function $W(\tau)$ defined by \begin{equation} e^{\bar \beta_E W(\tau)} = \int d\sigma e^{\bar \beta_E [\mu(\sigma) - \sigma \tau]} \label{6.18} \end{equation} with renormalized temperature variable \begin{equation} \bar \beta_E = \beta \Delta V,\ \bar T_E = T/\Delta V \label{6.19} \end{equation} remaining finite in the continuum limit $\Delta V \to 0$. Both positive and negative temperatures are allowed here since convergence of the integral is in general controlled by $\mu(\sigma)$. In this limit one has $|\beta| \to \infty$, and the variational condition, in which one seeks the minimum of ${\cal F}$, emerges. In this case, since the compressional degree of freedom has been suppressed at the outset and, correspondingly, the height field is fixed, $\Psi$ is continuously differentiable, and both $\Psi$ and $\nabla \Psi$ are non-fluctuating in the continuum limit (while, of course, $\nabla^2 \Psi$ has finite fluctuations). 
The variational approximation is therefore exact in this case, and one obtains the Euler-Lagrange equation \begin{equation} \omega_0 \equiv -\nabla \cdot \left(\frac{1}{\rho_0 h} \nabla \Psi_0 \right) = -h W'(\Psi_0) - f. \label{6.20} \end{equation} The equilibrium potential vorticity obeys \begin{equation} \Omega_0 = \frac{\omega_0 + f}{h} = -W'(\Psi_0), \label{6.21} \end{equation} which ensures that the Vlasov condition (\ref{6.4}) is satisfied (i.e., $\nabla \Psi_0$ and $\nabla \Omega_0$ are everywhere collinear, and hence $\Psi_0$ and $\Omega_0$ share stream lines). \subsection{Comments} \label{sec:comments} Unlike for the shallow water equations, in which the microscale (especially surface height) fluctuations contribute in a highly nontrivial way, even in the $g \to \infty$ limit, to the variational free energy (\ref{4.28}) for the large-scale vortical stream function, the rigid lid boundary conditions here suppress these entirely, and the Vlasov condition is satisfied explicitly. The key enabling result is that the velocity ${\bf v} = {\bf v}_0$ is purely large scale. From a mathematical point of view, the Vlasov result \emph{requires} that the function $W$ have spatial dependence through $\Psi$ alone---all dependence on $h$ escapes only to the overall multiplier of $W(\Psi)$ in (\ref{6.17}). This happens only because the Liouville equation that determines the phase space measure produces, in contrast to the shallow water case, a \emph{nonuniform} real space mesh. Intuitively, the rigid lid condition places corresponding rigid conditions on the Eulerian phase space fluid parcel distribution $\nu({\bf r},{\bf p})$ defined in App.\ \ref{app:liouville}. Specifically, the moments defined in (\ref{A18}) are restricted by $h = H_0 - h_b$ and $\nabla \cdot {\bf j} = 0$. The resulting reformulation (\ref{6.12}) entirely in terms of $\Omega$, which enforces these conditions automatically, then also induces the nonuniform mesh. 
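To make the mean field construction concrete, the Euler-Lagrange equation (\ref{6.20}) can be solved numerically by simple Picard iteration. The sketch below is illustrative only: it assumes the closure $W'(\Psi) = -\tanh(\Psi)$ (a two-level-like choice, not one derived in the text), a unit square with $\Psi = 0$ on the boundary, and an arbitrary linear topography profile; the assembled matrix plays the role of the operator $-\nabla \cdot (\rho_0 h)^{-1} \nabla$.

```python
import numpy as np

# Illustrative mean field solver for the Euler-Lagrange equation (6.20),
#   -div( (1/(rho0 h)) grad Psi ) = -h W'(Psi) - f,
# with Dirichlet boundary conditions.  The closure W'(Psi) = -tanh(Psi) and
# all parameter values are assumptions made for this sketch.
n, Lbox = 24, 1.0
dx = Lbox / (n + 1)
xs = np.linspace(dx, Lbox - dx, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
rho0, f0 = 1.0, 1.0
h = 1.0 - 0.4 * X                      # fixed height H0 - h_b (rigid lid)
a = 1.0 / (rho0 * h)                   # coefficient in the elliptic operator

def idx(i, j): return i * n + j

# assemble A = -div(a grad .) with a five-point stencil, face-averaged a,
# and Dirichlet (Psi = 0) boundary conditions
A = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            af = 0.5 * (a[i, j] + a[min(max(ii, 0), n - 1), min(max(jj, 0), n - 1)])
            A[idx(i, j), idx(i, j)] += af / dx**2
            if 0 <= ii < n and 0 <= jj < n:
                A[idx(i, j), idx(ii, jj)] -= af / dx**2

f = f0 * np.ones(n * n)
psi = np.zeros(n * n)
for _ in range(200):                   # Picard iteration on the nonlinearity
    rhs = h.ravel() * np.tanh(psi) - f     # -h W'(psi) - f with W' = -tanh
    psi_new = np.linalg.solve(A, rhs)
    if np.max(np.abs(psi_new - psi)) < 1e-12:
        psi = psi_new
        break
    psi = psi_new

omega0 = A @ psi                       # discrete omega0 = -div(a grad Psi0)
residual = np.max(np.abs(omega0 - (h.ravel() * np.tanh(psi) - f)))
print(residual)                        # small: the fixed point satisfies (6.20)
```

The iteration converges rapidly here because the inverse elliptic operator is strongly contracting relative to the bounded nonlinearity; by (\ref{6.21}), the converged solution has $\Omega_0 = \tanh(\Psi_0)$, a function of $\Psi_0$ alone, so the Vlasov condition holds automatically.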
Comparing to the $g \to \infty$ limit of the shallow water equations, one observes that the function $W$ in (\ref{4.26}) or $V$ in (\ref{5.23}) continues to depend nontrivially on $h$, even though the condition $\eta = h + h_b - H_0 \to 0$ is indeed enforced, in the small fluctuation limit, through the $g (h+h_0)^2$ term in (\ref{4.26}) or $g \bar \eta^2$ term in (\ref{5.24}). The resolution of this paradox is that although $\eta/H_0 = O(1/\sqrt{\bar \beta \rho_0 g H_0^2})$ is very small, the compressional part of the velocity ${\bf v}_L = O(c \eta/H_0) = O(1/\sqrt{\bar \beta \rho_0 H_0})$ remains finite because the wave speed $c = \sqrt{g H_0}$ (along with the microscopic frequency $\partial_t h/H_0$) diverges. Thus, the Vlasov combination ${\bf v} \cdot \nabla \Omega$ has finite amplitude (correlated) microscopic fluctuations in \emph{both} ${\bf v}$ and $\Omega$, and the equilibrium average $\langle {\bf v} \cdot \nabla \Omega \rangle \neq {\bf v}_0 \cdot \nabla \Omega_0$ fails to factorize [except in the additional low temperature limit $\bar \beta \to \infty$ described by (\ref{5.31})--(\ref{5.33})]. This explains the violation of the Vlasov condition implied by the dependence on $h$ of the right hand side of the first line of (\ref{5.25}). As a final comment, we note that the heuristic suppression of small-scale wave fluctuations considered in Ref.\ \cite{WP2001} also led to a theory with a nonuniform $h$-dependent mesh. However, in this case $h$ was not fixed \emph{a priori}, but determined, along with $\Psi$, through the free energy minimization (which, as we have seen, implicitly assumes that all of the energy is in the large scale flow, and hence provides the mathematical mechanism for suppressing small-scale waves). Self-consistently, the resulting hydrostatically balanced flows satisfied both the Vlasov and Bernoulli conditions. 
\section{Concluding remarks} \label{sec:conclude} An important distinction between RVB and the present approach is the use here of the grand canonical ensemble, and of the KHS transformation (Sec.\ \ref{sec:khs}), as key tools for deriving useful reduced forms for the effective free energy functional from the generalized Hamiltonian (\ref{4.2}). By expressing the generalized Hamiltonian in the purely local form (\ref{4.17}), the method has the advantage of providing a mathematically complete and efficient procedure for deriving the intermediate reduced form (\ref{4.25}) (integrating out $Q,\Phi$), followed by either the fully reduced elastic membrane form (\ref{4.28}) (integrating out $h,\Omega$, leaving only $\Psi$), or the dual generalized Coulomb form (\ref{4.37}) (integrating out $\Psi$, leaving $\Omega,h$). Most importantly, it transparently exhibits the strong fluctuations and long-range correlations that survive the continuum limit. The formulation adopted by RVB misses both of these effects because the mean field approximation is implicit in their approach to separating the fields into large scale and small scale components. The discussion in App.\ \ref{app:liouvilleinequiv} on the connection between the Liouville theorem and equilibrium measures is based on a very general formulation (\ref{B2}) or (\ref{B10}) of the Liouville theorem, and does not require an appeal to an underlying Hamiltonian structure. The latter is used as part of the specific derivation in App.\ \ref{app:liouville}, but the conclusions follow much more generally. In particular, the theory leads quite generally to the construction of a phase space measure through a limiting procedure completely consistent with standard uniform area gridding of the field index ${\bf r}$, which is also fully consistent with many previous statistical mechanics applications in quantum and classical field theory. 
RVB instead replace the uniform grid by a highly nonuniform ``Lagrangian'' grid that is moreover \emph{dynamically adjusted} according to the fluid height field, which is itself one of the phase space variables being integrated over. Given that $h$ has strong variations on the grid scale, this is a rather singular adjustment, and is very unlike, for example, the smooth change of variable adopted in Ref.\ \cite{WP2001} after the microscale height fluctuations were assumed to have been dissipated, or the time-independent change of variable (\ref{6.10}) in the Euler case (with degree of smoothness governed by the bottom topography $h_b$). We have seen that the RVB choice corresponds to a very different form of the Liouville theorem---equivalent to a nontrivial density $w({\bf r})$ in (\ref{B17}) that also includes strong variations on the grid scale. The equilibrium theories resulting from the two choices are quantitatively different, so this is not merely an instance of mathematical convenience in obtaining an equivalent continuum limit. In particular, we have emphasized that the shallow water equilibrium states \emph{are not expected to be stationary, time independent solutions to the fluid equations.} Unlike the pure 2D Euler case (discussed in detail in Sec.\ \ref{sec:eulercomp}), we have seen that the macroscopic flows are strongly dynamic, with finite energy, finite amplitude, high frequency height fluctuations (resulting from the undissipated forward cascade of wave energy). They are found to be stationary in Ref.\ \cite{RVB2016} only because the Lagrangian gridding leads to a product measure in which the basic fields have independent statistics, leading to exact factorization of key averages. In the present theory, the height field $h$ is not independent of ${\bf v}$, and this leads to the expected nonstationary equilibrium averages. 
As discussed in Sec.\ \ref{sec:eulercomp}, this is also what leads to the inequivalence of the rigid lid Euler flow and the shallow water $g \to \infty$ limit. This paper has concentrated on deriving general statistical models and exploring some of their key general features. Detailed studies of equilibrium solutions for specific, physically motivated choices of model parameters remain to be addressed in future work. The effects of fluctuations, and predicting the effects of various dissipation mechanisms in producing the ultimate \emph{quiescent} equilibria \cite{WP2001}, deserve special focus. RVB have already made some explorations along these lines within the variational theory. Significant insight can be gained by restricting the problem to a finite number of degrees of freedom. For example, the choice $\mu(\sigma) = -\frac{1}{2} \mu_0 \sigma^2$ reduces (\ref{4.26}) to a Gaussian integral in the variable $\sigma$, and corresponds to a version of the Energy--Enstrophy theory \cite{K1975}. Perhaps more interesting are the finite-level systems $e^{\bar \beta \mu(\sigma)} = \sum_{n=1}^{N_\sigma} e^{\bar \beta \mu_n} \delta(\sigma - \sigma_n)$ \cite{MWC1992,BV2012} in which the potential vorticity is permitted to take only a discrete set of values, with relative populations controlled by the corresponding discrete set of chemical potentials $\mu_n$. Even the cases $N_\sigma = 2,3$ generate an interesting variety of equilibria as the temperature and other parameters are varied. Most previous investigations have focused on mean field equilibria, especially those of the Euler and quasigeostrophic equations, for which the mean field description is exact. A very interesting feature is the set of transitions between equilibrium states that can occur as a function of the thermodynamic parameters. 
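Before turning to such transitions, the finite-level construction can be made fully explicit: inserting the discrete form of $e^{\bar\beta \mu(\sigma)}$ into the defining transform for $W$ collapses the integral to $W(\tau) = \bar\beta^{-1} \ln \sum_{n} e^{\bar\beta(\mu_n - \sigma_n \tau)}$, with $-W'(\tau)$ equal to the mean level population $\langle \sigma \rangle$. A sketch for $N_\sigma = 2$ (all parameter values below are illustrative):

```python
import numpy as np

# Two-level (N_sigma = 2) example of the finite-level prescription:
#   W(tau) = (1/beta) * log( sum_n exp(beta*(mu_n - sigma_n*tau)) ),
# so that -W'(tau) is the mean potential vorticity <sigma>.  The values of
# beta, sigma_n, mu_n are illustrative, not taken from the text.
beta = 2.0
sigma = np.array([-1.0, +1.0])        # allowed potential vorticity levels
mu = np.array([0.3, -0.1])            # conjugate chemical potentials

def W(tau):
    z = beta * (mu[:, None] - np.outer(sigma, np.atleast_1d(tau)))
    return np.squeeze(np.log(np.sum(np.exp(z), axis=0)) / beta)

def mean_sigma(tau):                  # <sigma> = -W'(tau), computed exactly
    w = np.exp(beta * (mu[:, None] - np.outer(sigma, np.atleast_1d(tau))))
    return np.squeeze(np.sum(sigma[:, None] * w, axis=0) / np.sum(w, axis=0))

# finite-difference check that -W'(tau) equals <sigma>
tau, eps = 0.7, 1e-6
dW = (W(tau + eps) - W(tau - eps)) / (2 * eps)
print(abs(-dW - mean_sigma(tau)))

# saturation: for beta > 0, large +tau selects the smallest level, large -tau the largest
print(mean_sigma(30.0), mean_sigma(-30.0))
```

The smooth interpolation of $\langle \sigma \rangle$ between the two levels as $\tau$ varies is what generates the nonlinear mean field equations, and hence the variety of equilibria, already at $N_\sigma = 2$.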
An important example occurs when a translational or rotational symmetry is broken: with increasing energy, an instability can occur in which an annular or linear jet transitions to a more compact vortex structure. Within the variational approximation, such transitions are simple bifurcations. In the presence of strong fluctuations the character of the transition remains an open question. A possibility is that it elevates to a true critical phenomenon with nontrivial critical exponents \cite{S1971}. Phase transitions in the context of elastic membranes include roughening of crystalline solid facets \cite{CL1995}. Here there is competition between a periodic confining potential which prefers a flat interface, and entropic fluctuations which prefer a rough surface with the logarithmic correlations alluded to in Sec.\ \ref{subsec:nonlinmembrane}. In the present case the analogue of a periodic crystalline potential is absent, and the membrane is always in the rough regime. Instead, there is a large-scale conformational change of the membrane, more akin perhaps to shape changes in biological membrane systems \cite{PKTH2013}.
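The logarithmic roughness invoked here can be made quantitative for the massless free (membrane) field: with spectrum $\langle |h_{\bf k}|^2 \rangle = T/|{\bf k}|^2$ in two dimensions, the height-difference variance grows as $(T/\pi) \ln r$, i.e., by a fixed increment for each doubling of $r$. A deterministic lattice check (system size and temperature are illustrative choices):

```python
import numpy as np

# Logarithmic correlations of a rough 2D membrane: the two-point function of a
# field with spectrum <|h_k|^2> = T/|k_hat|^2 is computed exactly by FFT, and
# the height-difference variance D(r) = <(h(r)-h(0))^2> ~ (T/pi) ln r + const
# gains a constant increment ~ (T/pi) ln 2 per doubling of r.
N, T = 512, 1.0
k = 2 * np.pi * np.fft.fftfreq(N)
KX, KY = np.meshgrid(k, k, indexing="ij")
khat2 = 4 - 2 * np.cos(KX) - 2 * np.cos(KY)     # lattice |k_hat|^2
khat2[0, 0] = np.inf                            # drop the zero mode
S = T / khat2                                   # spectrum <|h_k|^2>

C = np.real(np.fft.ifft2(S))                    # two-point function <h(0) h(r)>
D = lambda r: 2 * (C[0, 0] - C[r, 0])           # height-difference variance

inc1 = D(16) - D(8)                             # increment per factor of 2 in r
inc2 = D(32) - D(16)
print(inc1, inc2, T * np.log(2) / np.pi)        # both close to (T/pi) ln 2
```

Corrections to the pure logarithm are $O(r^{-2})$ on the lattice, so the increments agree with $(T/\pi)\ln 2$ to a few percent already at these modest separations.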
\section{Introduction and outline} The relationship between soft theorems in gauge theories and gravity and asymptotic symmetries in these theories is an active area of investigation. The essential idea is simple and can be understood without referring to specific theories. Given a large gauge transformation\footnote{In this paper we will use the expressions ``asymptotic symmetry'' and ``large gauge transformation'' interchangeably.} which is parametrized by a gauge parameter $\epsilon$ on a cross section $S^{2}$ of null infinity ${\cal I}$, the soft theorems are statements regarding conservation laws: \begin{equation} Q_{+}[\epsilon^{+}]\ =\ Q_{-}[\epsilon^{-}] \end{equation} where $\epsilon^{\pm}$ parametrize large gauge transformations at ${\cal I}^{\pm}$ and are identified via anti-podal matching conditions. $Q_{\pm}[\epsilon^{\pm}]$ are charges which are evaluated on the celestial sphere $S^{2}$ at $u\ =\ -\infty\ ({\cal I}^{+}_{-})$ and $v\ =\ +\infty\ ({\cal I}^{-}_{+})$ respectively. Thus, a rather natural question to ask is: if there is indeed such an infinity of conservation laws in theories like electrodynamics, should we not be able to derive them in the classical theory? The first step in this direction was taken in \cite{eyhe}, where it was shown that the infinity of charges associated to large $U(1)$ gauge transformations were indeed conserved in classical theory (from previous works \cite{strom0,stromprahar} these conservation laws in quantum theory were known to be equivalent to the soft photon theorem). These conservation laws were derived by analyzing the equations of the theory at spatial infinity. By considering a compactification scheme where spatial infinity is a hyperboloid ${\cal H}$ (a three dimensional Lorentzian de Sitter space in fact) as opposed to a point, one could relate fields at $u\ =\ -\infty$ and $v\ =\ +\infty$ by using the field equations. 
We revisit this idea below and show that it leads to (classical) conservation laws for charges associated to the leading as well as the subleading soft photon theorem.\footnote{The subleading soft photon theorem is non-universal \cite{elvang}; however, this most general form of the subleading theorem can be understood in terms of asymptotic symmetries \cite{prahar}.} However, this derivation leads to an interesting consequence. Namely, if we restrict ourselves to a suitable subset of the set of all radiative data, there is in fact an entire tower of conservation laws that can be shown to be valid. A priori, it is not clear what the Ward identities corresponding to this hierarchy of conservation laws imply in quantum theory. We thus have the following question to answer: {\bf (A)} If such an infinite tower of charges is conserved, we would expect (along the lines of the leading and sub-leading soft photon theorems) an infinite hierarchy of soft theorems. Do such theorems exist? As it turns out, from the side of soft theorems, a complementary puzzle already existed: {\bf (B)} In \cite{shiu,llz}, it was shown that in fact for tree level scattering amplitudes in QED, there does exist an infinity of soft theorems. From the perspective of these soft theorems, a natural question to ask would be: just as the leading and sub-leading soft theorems are Ward identities for certain asymptotic symmetries, is the same true for the higher order theorems? In this paper we present substantial evidence that questions (A) and (B) mutually answer each other. That is, the conservation of asymptotic charges (beyond the ones associated to the sub-leading soft theorem) is equivalent to the sub-$n$ ($n > 1$) soft theorems. The outline of the paper is as follows. In section \ref{two}, we revisit the asymptotic analysis of Maxwell's equations at null infinity. 
For simplicity, we consider charged massless scalar fields coupled to $U(1)$ gauge fields, but our analysis can be generalized to the situation when the charged fields are massive scalars or fermions. We then revisit the derivation of asymptotic charges associated to leading and sub-leading soft photon theorems. Our derivation of these charges is along the lines of \cite{strom1,condemao} in that we first obtain them as integrals over $S^{2}$ at $u\ =\ -\infty$, and then show that they can be written as fluxes over ${\cal I}$. The reason for revisiting these charges is that our derivation is amenable to a direct generalization to an infinite hierarchy of further conservation laws. In section \ref{three} we review the infinity of soft theorems derived in \cite{shiu,llz} for tree level scattering amplitudes and show how these theorems can be written in terms of Ward identities. In section \ref{conj-equi} we argue that these Ward identities precisely correspond to the hierarchy of conservation laws proposed above. Finally in section \ref{class-cons}, we show how this infinite hierarchy of conservation laws is indeed true in classical theory, provided one restricts attention to a certain subset of radiative data that is compatible with tree-level scattering. We end with certain speculations and future directions. \section{Maxwell equations at ${\cal I}^{+}$ and asymptotic charges} \label{two} In this section, we review the asymptotic expansion of Maxwell fields at null infinity and study Maxwell's equations in a $\frac{1}{r}$ expansion. We will work in terms of self-dual fields since this simplifies the field equations (bringing them into a form equivalent to the Newman-Penrose formulation \cite{npcharges}) and because the charges associated to (positive) negative helicity soft photon theorem can be written in terms of (anti) self-dual fields. 
Our first aim will be to understand how the self-dual field can be determined order by order in $\frac{1}{r}$ in terms of free data at ${\cal I}^{+}$. At each order there will appear new `integration constants' that will be interpreted as asymptotic charges under the assumption of certain strong $|u| \to \infty$ fall-offs. We will discuss in detail the first two set of asymptotic charges and show how they correspond to the charges associated to the leading and subleading (negative helicity) soft photon theorems. We will finally discuss the relation with Newman-Penrose charges. We consider $U(1)$ gauge field minimally coupled to massless scalar field. In order to analyze the behavior of the radiation fields at null infinity, we rewrite the equations of motion in terms of retarded coordinates $ (u, r, z, \bar{z})$ as, \begin{eqnarray} r^2 j_r & = & - \partial_r (r^2 F_{ru}) +D^{A}F_{rA} \label{eqr}\\ r^2 j_u & = & - \partial_r (r^2 F_{ru})+r^2 \partial_u F_{ru}+ D^A F_{uA} \label{equ}\\ j_A &=& \partial_r (F_{uA}-F_{rA}) + \partial_u F_{rA} + r^{-2}D^{B}F_{AB}. \label{eqA} \end{eqnarray} where sphere indices are raised with the unit-sphere metric $\gamma_{AB}$. These should be supplemented with Bianchi identities $\partial_{[a} F_{bc]}=0$. An alternative description can be given in terms of the self-dual field strength as follows. Define the dual, self-dual, and anti-self-dual fields by: \begin{equation} \tilde{F}_{ab} := \frac{1}{2} \epsilon_{abcd}F^{cd}, \quad F^\pm_{ab}:= F_{ab} \mp i \tilde{F}_{ab}. \end{equation} Then the self-dual field satisfies the equations \begin{equation} j_a= \nabla^b F^+_{ab} , \quad \tilde{F}^+_{ab} = i F^{+}_{ab} . \label{Fpluseqs} \end{equation} From the perspective of soft theorems, the self-dual and anti-self-dual equations are better suited than the ordinary Maxwell's equations in terms of real fields since the positive and negative helicity soft photon theorems are related to charges constructed from the $F^\mp_{ru}$ fields. 
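The algebraic content of the self-duality condition in (\ref{Fpluseqs}) is that the double dual equals $-1$ on two-forms in Lorentzian signature, so that $F - i\tilde F$ is an eigenvector of the duality operation with eigenvalue $i$. This can be checked numerically in flat Cartesian coordinates (an illustrative setting; the text works in retarded coordinates):

```python
import numpy as np
from itertools import permutations

# Flat-Cartesian check of the duality algebra behind F^{+-} = F -+ i*Ftilde:
# in Lorentzian signature the double dual is -1 on two-forms, so the self-dual
# combination F^+ = F - i*Ftilde obeys dual(F^+) = i*F^+.  Signature and
# orientation conventions below are assumptions of this sketch.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)

# Levi-Civita symbol with eps[0,1,2,3] = +1 (lower indices)
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])   # permutation sign

def dual(F):
    """Ftilde_ab = (1/2) eps_abcd F^{cd}."""
    F_up = ginv @ F @ ginv.T                     # raise both indices
    return 0.5 * np.einsum("abcd,cd->ab", eps, F_up)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
F = M - M.T                                      # generic real antisymmetric F_ab

print(np.max(np.abs(dual(dual(F)) + F)))         # ~0: double dual = -1
Fplus = F - 1j * dual(F)                         # self-dual combination
print(np.max(np.abs(dual(Fplus) - 1j * Fplus)))  # ~0: dual(F^+) = i F^+
```

Only the Lorentzian determinant of the metric enters the double-dual sign, so the eigenvalue equation $\tilde F^+ = iF^+$ is independent of the orientation convention chosen for $\epsilon$.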
For definiteness we work with the self-dual field $F^+_{ab}$, whose quantization gives single particle states associated to negative helicity photons. The standard fall-off conditions (in $\frac{1}{r}$) which accommodate radiative data are\footnote{To simplify notation we will omit the $+$ superscript in the coefficients for $F^+_{ru}$.} \begin{eqnarray} F^+_{ru}(r,u,\hat{x}) & = & \frac{1}{r^2} \sum_{n=0} \frac{1}{r^n}\overset{n}{F}_{ru}(u,\hat{x}) \label{rexpFru}\\ j_z(r,u,\hat{x}) & = & \frac{1}{r^2} \sum_{n=0} \frac{1}{r^n}\overset{n}{j}_z(u,\hat{x}) \label{rexpjz} \\ j_{r}(r,u,\hat{x}) & = & \frac{1}{r^4} \sum_{n=0} \frac{1}{r^n}\overset{n}{j}_{r}(u,\hat{x}) \label{rexpjr} \end{eqnarray} As shown in appendix \ref{maxwell-selfdual}, the Maxwell equations for self-dual fields give rise to the following recursive equation for $\overset{n}{F}_{ru}$: \begin{equation} 2(n+1) \partial_u \overset{n+1}{F}_{ru} + (\Delta + n(n+1)) \overset{n}{F}_{ru} = -2 D^z \overset{n}{j}_z + 2 \partial_u \overset{n}{j}_r + (n+1) \overset{n-1}{j}_r, \label{Frun} \end{equation} for $n=0,1,\ldots$ with the understanding that $\overset{-1}{j}_r \equiv 0$. This allows us to express all $\overset{n}{F}_{ru}$ in terms of $\overset{0}{F}_{ru}$ and the current. In fact, as shown in appendix \ref{maxwell-selfdual}, $\overset{0}{F}_{ru}$ can in turn be expressed in terms of `free data' $\big( A_{A}(u,\hat{x}),\ \overset{0}{j}_{u}(u,\hat{x}) \big)$ as: \begin{equation} \partial_u \overset{0}{F}_{ru} = \overset{0}{j}_{u} - 2 \partial_u D^{\bar{z}} A_{\bar{z}}. 
\label{fallofffru0} \end{equation} \subsection{$u \to -\infty$ fall-offs and candidates for asymptotic charges} As shown in \cite{stromprahar}, in the case of massless QED, the asymptotic charge associated to large gauge transformations is given by\footnote{When $F$ is self-dual, the charge is associated to the negative helicity soft photon theorem.} \begin{equation} Q_{+}[\epsilon^{+}]\ =\ \int_{S^{2}}\epsilon^{+}(\hat{x})\ \overset{0}F_{ru}(u=-\infty,\hat{x}) \end{equation} Now, the standard $u \to - \infty$ fall-off for the radiative data associated to generic solutions of Maxwell's equations is \cite{ash-stru} \begin{equation} \overset{0}{F}_{ru}(u,\hat{x}) = \overset{0,0}{F}_{ru}(\hat{x}) +O(u^{-\epsilon}) \end{equation} Thus the non-vanishing and finite limit of $\overset{0}{F}_{ru}$ at ${\cal I}^{+}_{-}$ is $\overset{0,0}{F}_{ru}(\hat{x})\ =\ \lim_{u\rightarrow -\infty}\overset{0}{F}_{ru}(u,\hat{x})$. Whence the charge density which gives rise to the leading (negative-helicity) soft theorem is given by $\overset{0,0}{F}_{ru}(\hat{x})$. Note that (\ref{Frun}) together with (\ref{fallofffru0}) implies $\overset{n}{F}_{ru}=O(u^n)+O(u^{n-\epsilon})$. However, consider a subset of radiative fields with the following fall-off in $u$: \begin{equation} \overset{0}{F}_{ru}(u,\hat{x}) = \overset{0,0}{F}_{ru}(\hat{x}) +R(u), \label{fallFru0strong} \end{equation} where the remainder $R(u)$ falls off faster than $\vert u\vert^{-n}$ $\forall\ n$. Then using Eq. (\ref{Frun}), the $u \to - \infty$ behavior of $\overset{n}{F}_{ru}$ can be specified to be \begin{equation} \overset{n}{F}_{ru}(u,\hat{x}) = u^n \sum_{k=0}^n \frac{1}{u^k} \overset{n,k}{F}_{ru}(\hat{x}) + O(u^{-\epsilon})\ \forall\ n \label{uexp} \end{equation} One may think that this subspace is too restrictive to be physically interesting. 
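As a concrete illustration of the expansion (\ref{uexp}), the recursion (\ref{Frun}) can be iterated symbolically in vacuum on a single spherical-harmonic mode, where $\Delta \to -l(l+1)$ (a single-mode reduction made here purely for illustration). Each integration in $u$ introduces a new constant, and these constants are precisely the candidate charge densities $\overset{n,n}{F}_{ru}$:

```python
import sympy as sp

# Vacuum recursion (Frun) restricted to one spherical-harmonic mode, where the
# Laplacian acts as Delta -> -l(l+1).  Starting from a u-independent F0 (the
# strong fall-off with the rapidly decaying remainder dropped), each
# integration introduces a new constant c_n -- the candidate charge density
# F^{n,n} -- while F^n grows as a degree-n polynomial in u, as in (uexp).
# The harmonic l = 3 is an illustrative choice.
u = sp.symbols("u")
l = 3
F00 = sp.symbols("F00")                    # leading charge density F^{0,0}
c = sp.symbols("c1:5")                     # integration constants c1, c2, ...

F = [sp.sympify(F00)]
for n in range(3):
    lap = -l * (l + 1)                     # Delta eigenvalue on the mode
    dFdu = -(lap + n * (n + 1)) * F[n] / (2 * (n + 1))
    F.append(sp.integrate(dFdu, u) + c[n])

for n, Fn in enumerate(F):
    print(n, sp.expand(Fn))                # F^n is a degree-n polynomial in u

# the subtraction that isolates the subleading charge density:
# F^{1,1} = lim [ F^1 + (u/2)*Delta*F^0 ]; only the integration constant survives
F11 = sp.expand(F[1] + (u / 2) * (-l * (l + 1)) * F[0])
print(F11)
```

On this mode the subtraction of the $u$-growing pieces leaves exactly the constant $c_1$, mirroring how the general formula strips the $u^{n-k}\,\overset{n,k}{F}_{ru}$ terms to expose $\overset{n,n}{F}_{ru}$.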
However, as we show in appendix \ref{fallofftree}, these fall-offs are equivalent to the assumption that the radiative data for the gauge field $A_{A}(\omega,\hat{x})$ has a Laurent expansion in $\omega$ which begins at $\frac{1}{\omega}$. This precisely corresponds to the case of tree level scattering amplitudes, where the absence of infrared loop effects implies that the Laurent expansion remains valid \cite{ashoke-biswajit}. For this space of radiative data, there is a natural prescription to extract a finite and non-trivial ``moment'' of $\overset{n}{F}_{ru}$ as $u\ \rightarrow\ -\infty$. From Eq. (\ref{uexp}) we see that $\overset{n,n}{F}_{ru}$ is the $u$-independent term of $\overset{n}{F}_{ru}$ as $u \to - \infty$. This term may be thought of as an integration constant that arises when solving Eq. (\ref{Frun}). In terms of the lower order coefficients $\overset{n,k}{F}_{ru}(\hat{x}), k<n$ this quantity can be expressed as: \begin{equation} \overset{n,n}{F}_{ru}(\hat{x})\ =\lim_{u\rightarrow -\infty}\left[\ \overset{n}{F}_{ru}(u,\hat{x})\ -\ \sum_{k=0}^{n-1}u^{n-k}\overset{n,k}{F}_{ru}(\hat{x})\ \right] \end{equation} As we show below, each of these $\overset{n,n}{F}_{ru}(\hat{x})$ generates an infinity of asymptotic charges which are in fact conserved in classical theory so long as we restrict ourselves to the subspace of radiative data defined above. We illustrate this idea with an example of a charge whose Ward identities lead to the subleading soft photon theorem for tree level scattering amplitudes. 
That is, we will see how this charge (or equivalently the flux, which is an integral over ${\cal I}^{+}$), which was defined in \cite{stromlow, subleading}, is nothing but \begin{equation} Q_{1}[\epsilon]\ =\ \int_{S^{2}}\epsilon(\hat{x})\ \overset{1,1}{F}_{ru}(\hat{x}) \end{equation} \subsection{The subleading charge} \label{leq0F} We would like to compute \begin{equation} \overset{1,1}{F}_{ru}(\hat{x})\ =\ \lim_{u\rightarrow\ -\infty}\left[\ \overset{1}{F}_{ru}(u,\hat{x})\ -\ u\overset{1,0}{F}_{ru}(\hat{x})\right] \end{equation} Now using Eq. (\ref{Frun}) we have, \begin{equation} 2 \partial_u \overset{1}{F}_{ru} + \Delta \overset{0}{F}_{ru} =-2 D^z \overset{0}{j}_z + 2 \partial_u \overset{0}{j}_r \label{Fru1} \end{equation} In the $u \to -\infty$ limit the current term vanishes. Evaluating (\ref{Fru1}) with the expansion (\ref{uexp}) one finds \begin{equation} 2 \overset{1,0}{F}_{ru}+ \Delta \overset{0,0}{F}_{ru}=0. \end{equation} Thus, \begin{equation} \overset{1,1}{F}_{ru}(\hat{x}) = \lim_{u \to - \infty} \left[\overset{1}{F}_{ru}(u,\hat{x}) + \frac{u}{2} \Delta \overset{0}{F}_{ru}(u,\hat{x}) \right] \label{sigma0} \end{equation} The $\frac{u}{2}\Delta$ term cancels the linear in $u$ divergence; this is the candidate subleading charge density. To evaluate this ``charge density'' in terms of the free data, we write (\ref{sigma0}) as an integral over $u$ (here we assume there is no contribution from $u=+\infty$, which is consistent with the absence of massive charges), \begin{eqnarray} \overset{1,1}{F}_{ru}(\hat{x}) & =& - \int du \, (\partial_u \overset{1}{F}_{ru} + \frac{1}{2}\Delta \overset{0}{F}_{ru} + \frac{u}{2} \Delta \partial_u \overset{0}{F}_{ru} ) \\ & =& \int du \, ( D^z \overset{0}{j}_z - \partial_u \overset{0}{j}_r - \frac{u}{2} \Delta \partial_u \overset{0}{F}_{ru}) \label{sigma0final} \end{eqnarray} where we used Eq. (\ref{Fru1}). 
The $\partial_u \overset{0}{j}_r$ term, being a total $u$-derivative of a current, will not contribute and thus we have been able to express the charge-density as a flux (per unit solid angle) at ${\cal I}^{+}$ as \begin{equation} \overset{1,1}{F}_{ru}(\hat{x})\ =\ \int du \, ( D^z \overset{0}{j}_z\ - \frac{u}{2} \Delta \partial_u \overset{0}{F}_{ru}) \end{equation} If we now define $Q^{+}_{1}[\epsilon^{+}]$ as \begin{eqnarray} Q^{+}_{1}[\epsilon^{+}] & := & \int_{S^{2}}\epsilon^+(\hat{x})\overset{1,1}{F}_{ru}(\hat{x})\nonumber\\ &= & \int_{{\cal I}^{+}}\epsilon^{+}(\hat{x})\ ( D^z \overset{0}{j}_z\ - \frac{u}{2} \Delta \partial_u \overset{0}{F}_{ru} ) \end{eqnarray} We immediately see that this charge matches with the charges obtained in \cite{subleading}, which were shown to be associated with sub-leading soft photon (of negative helicity) insertions. In fact as shown in \cite{subleading}, $Q_{1}[\epsilon^{+}]$ equals the charge obtained in \cite{stromlow} if we define \begin{equation} Y^{A}\ :=\ D^{A}\epsilon\ +\ \epsilon^{AB}D_{B}\epsilon \end{equation} An exactly analogous procedure yields the corresponding charge $Q_{1}^{-}[\epsilon^{-}]$ at ${\cal I}^{-}$. Along with anti-podal matching conditions on $\epsilon^{\pm}$, the conservation of charge would follow if we could show that \begin{equation} \overset{1,1}{F}_{ru}(\hat{x}) = \overset{1,1}{F}_{rv}(-\hat{x}). \end{equation} We emphasize that, in contrast to the ideas and viewpoint employed in \cite{subleading}, here we never had to use covariant phase space techniques nor ``divergent" gauge transformations. Our analysis only refers to radiative data and structures available at null (and as we see below, spatial) infinity. \subsection{Towards a tower of asymptotic charges} Naturally we are now tempted to consider higher order ``charge densities"\footnote{Of course in order to justify calling them charge densities, we need to show that the corresponding charges are indeed conserved.
This will be shown in section \ref{five}.} $\overset{n,n}{F}_{ru}(\hat{x})$ defined at ${\cal I}^{+}_{-}$. As a first example of this kind, in this section we focus on $\overset{2,2}{F}_{ru}(\hat{x})$ and write the corresponding charge $Q^{+}_{2}[\epsilon^{+}]$ as an integral over ${\cal I}^{+}$ as a functional of the free data. We will refer to this charge as the sub-subleading charge, as the corresponding Ward identities will turn out to be equivalent to the (``projected'' \cite{llz}) sub-subleading soft photon theorem for tree-level QED.\footnote{The soft expansion at sub-subleading order does not factorize, and hence we have to project out the unfactorized contribution to get a sub-subleading soft photon theorem. We refer to these statements as projected soft theorems.} That is, we would like to evaluate \begin{equation} \begin{array}{lll} \overset{2,2}{F}_{ru}(\hat{x})\ =\ \lim_{u\rightarrow -\infty}\ \left[\ \overset{2}{F}_{ru}(u,\hat{x})\ -\ u^{2}\ \overset{2,0}{F}_{ru}(\hat{x})\ -\ u\overset{2,1}{F}_{ru}(\hat{x})\right] \end{array} \end{equation} Using Eq.
(\ref{Frun}) we can immediately see that \begin{equation} 4 \partial_u \overset{2}{F}_{ru} + (\Delta + 2) \overset{1}{F}_{ru} = -2 D^z \overset{1}{j}_z + 2 \partial_u \overset{1}{j}_r + 2 \overset{0}{j}_r. \label{Fru2} \end{equation} Evaluating this equation in the $u \to - \infty$ limit and using (\ref{uexp}) as before one finds \begin{eqnarray} 8 \overset{2,0}{F}_{ru} + (\Delta +2) \overset{1,0}{F}_{ru} =0 \\ 4 \overset{2,1}{F}_{ru}+(\Delta +2) \overset{1,1}{F}_{ru} =0 \label{Fru22} \end{eqnarray} From the first equation we find how to cancel the $u^2$ divergence of $\overset{2}{F}_{ru}$: \begin{eqnarray} \overset{2}{F}_{ru} + \frac{u}{8} (\Delta +2) \overset{1}{F}_{ru} & = & u( \overset{2,1}{F}_{ru}+ \frac{1}{8} (\Delta +2) \overset{1,1}{F}_{ru} ) + O(1) \label{Fru21}\\ & = & - \frac{u}{8}(\Delta +2) \overset{1,1}{F}_{ru} +O(1), \label{Fru21b} \end{eqnarray} where in (\ref{Fru21}) we collected all $O(u)$ terms and in (\ref{Fru21b}) we simplified them using (\ref{Fru22}). Finally, from Eq. (\ref{sigma0}) we see that the $O(u)$ piece in (\ref{Fru21b}) can be cancelled by the addition of \begin{equation} \frac{u}{8}(\Delta +2)\left[\overset{1}{F}_{ru} + \frac{u}{2} \Delta \overset{0}{F}_{ru} \right] .\label{Ou} \end{equation} Adding (\ref{Ou}) to (\ref{Fru21}) gives the candidate for the sub-subleading charge density \begin{equation} \overset{2,2}{F}_{ru}(\hat{x}) = \lim_{u \to - \infty}\left[\overset{2}{F}_{ru} + \frac{u}{4} (\Delta +2) \overset{1}{F}_{ru} + \frac{u^2}{16} \Delta (\Delta+2) \overset{0}{F}_{ru} \right] .\label{sigma1} \end{equation} As before, we now write (\ref{sigma1}) as an integral over $u$, assuming no contribution arises from $u=+\infty$.
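Before doing so, note as a consistency check that the limit in (\ref{sigma1}) is indeed finite: substituting the expansion (\ref{uexp}) in the bracket, the coefficient of $u^{2}$ is
\begin{equation}
\overset{2,0}{F}_{ru}\ +\ \frac{1}{4}(\Delta+2)\overset{1,0}{F}_{ru}\ +\ \frac{1}{16}\Delta(\Delta+2)\overset{0,0}{F}_{ru},
\end{equation}
which vanishes upon using $2\overset{1,0}{F}_{ru}+\Delta\overset{0,0}{F}_{ru}=0$ together with the first equation above, while the coefficient of $u$ is $\overset{2,1}{F}_{ru}+\frac{1}{4}(\Delta+2)\overset{1,1}{F}_{ru}$, which vanishes by (\ref{Fru22}).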
Using (\ref{Fru2}) and (\ref{Fru1}), one again finds cancellation of terms, resulting in: \begin{equation} \overset{2,2}{F}_{ru}(\hat{x}) = \frac{1}{2}\int du \, \left[ D^z \overset{1}{j}_z - \partial_u \overset{1}{j}_r - \overset{0}{j}_r + \frac{u}{2}(\Delta+2)(D^z \overset{0}{j}_z - \partial_u \overset{0}{j}_r) - \frac{u^2}{8}\Delta(\Delta+2) \partial_u \overset{0}{F}_{ru}\right] . \label{sigma1b} \end{equation} The expression can be further simplified by integrating by parts the term proportional to $u \partial_u \overset{0}{j}_r$ and dropping the total derivative term $\partial_u \overset{1}{j}_r$: under the $u$-integral, $- \frac{u}{2}(\Delta+2) \partial_u \overset{0}{j}_r$ is replaced by $\frac{1}{2}(\Delta+2) \overset{0}{j}_r$, which combines with the $-\overset{0}{j}_r$ term into $\frac{1}{2}\Delta \overset{0}{j}_r$. We note that unlike the leading and sub-leading charge densities (i.e. $\overset{0,0}{F}_{ru}(\hat{x})$ and $\overset{1,1}{F}_{ru}(\hat{x})$), $\overset{2,2}{F}_{ru}(\hat{x})$ is a sum of two kinds of terms. The first set of terms has an integrand which is a local function of the free data $\phi(u,\hat{x}), A_{A}(u,\hat{x})$, while in the second case the integrand is a non-local function of the free data, involving $\int^{u}\phi(u^{\prime},\hat{x})\ du^{\prime}$. \begin{eqnarray} \overset{2,2}{F}_{ru}^{\text{non-local}} &:=& \frac{1}{4}\int du \, [ 2 D^z \overset{1}{j}_z + \Delta \overset{0}{j}_r ]\\ \overset{2,2}{F}_{ru}^{\text{local}} &:=& \frac{1}{4}\int du \, \left[u (\Delta+2)D^z \overset{0}{j}_z - \frac{u^2}{4}\Delta(\Delta+2) \partial_u \overset{0}{F}_{ru} \right]. \end{eqnarray} \begin{comment} Whereas the `local' piece can be written entirely in terms of the free data $\phi(u,\hat{x}), A_A(u,\hat{x})$ and its derivatives, the `non-local' piece involves terms with $\textstyle{\int}^u \phi(u,\hat{x})$.
\end{comment} Using the above charge densities, we can define the corresponding charges, which are parametrized by functions on the sphere $\epsilon(\hat{x})$, as \begin{eqnarray} Q^+_{2}[\epsilon^{+}]\ =\ \int d^{2}x\ \epsilon^+(\hat{x})\overset{2,2}{F}_{ru}(\hat{x})\nonumber\\ Q^-_{2}[\epsilon^{-}]\ =\ \int d^{2}x\ \epsilon^-(\hat{x})\overset{2,2}{F}_{rv}(\hat{x}) \label{s2charge} \end{eqnarray} with $\epsilon^+(\hat{x})= \epsilon^-(-\hat{x})=\epsilon(\hat{x})$. In fact, one can define a tower of asymptotic charges labelled by $n\ \geq 0$ and $\epsilon_n \in C^\infty(S^2)$: \begin{eqnarray} Q^+_{n}[\epsilon^{+}_n]\ =\ \int d^{2}x\ \epsilon^+_n(\hat{x})\overset{n,n}{F}_{ru}(\hat{x})\nonumber\\ Q^-_{n}[\epsilon^{-}_n]\ =\ \int d^{2}x\ \epsilon^-_n(\hat{x})\overset{n,n}{F}_{rv}(\hat{x}), \label{sncharge} \end{eqnarray} with $\epsilon^+_n(\hat{x})= \epsilon^-_n(-\hat{x})=\epsilon_n(\hat{x})$. We postpone to section \ref{four} a detailed description of these charges for arbitrary $n$. For now we would like to comment on a particular feature of these charges that we will discover below: The spherical harmonic decomposition of $\overset{n,n}{F}_{ru}(\hat{x})$ starts at $l=n$. In other words, if we take $\epsilon_n = Y_{l,m}$, the charge is non-zero only for $l \geq n$. It turns out that the $l=n-1$ case corresponds to the so-called Newman-Penrose charges \cite{npcharges}. \subsection{Relationship with Newman-Penrose charges} \label{NPsec} We thus see that we have a ``doubly infinite'' family of charges parametrized by $\left(n,\epsilon_n \in\ C^{\infty}(S^{2})\right)$ as in Eq. (\ref{sncharge}). As shown above and later in section \ref{four}, these charges (which are localised at $u = -\infty$ or $v=+\infty$) can be written as fluxes integrated over all of null infinity, if we assume that the radiative fields are trivial at $v= -\infty$ and $u = +\infty$.
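For orientation, the bottom of the tower is the familiar leading case: for $n=0$, using $\partial_u \overset{0}{F}_{ru} = \overset{0}{j}_{u} - 2 \partial_u D^{\bar{z}} A_{\bar{z}}$ (Eq. (\ref{feq0}) below) and the triviality of the fields at $u=+\infty$, the charge takes the flux form
\begin{equation}
Q^{+}_{0}[\epsilon_{0}]\ =\ \int_{{\cal I}^{+}} \epsilon_{0}(\hat{x})\, \big(\, 2 \partial_u D^{\bar{z}} A_{\bar{z}}\ -\ \overset{0}{j}_{u}\, \big),
\end{equation}
which is the charge whose Ward identity is equivalent to the leading (Weinberg) soft photon theorem.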
In \cite{npcharges}, Newman and Penrose showed that in a free theory with massless fields (that is pure Maxwell theory in our case), one could construct infinitely many ``charges" defined by taking a celestial 2-sphere located at $u =\textrm{constant}$ (or analogously, $v= \textrm{constant}$) and integrating certain densities which were constructed out of radiative data and spherical harmonics $Y_{l,m}$. We note here that in the absence of sources, these charges are precisely an (infinite) subset of the asymptotic charges constructed above.\footnote{See \cite{condemao,condemao2,pope} for earlier explorations of the relation between soft theorem charges and NP charges.} That is, if we consider the subset of charges parametrized by $\left(n+1, Y_{n,m}\right)$, then in vacuum these are the Newman-Penrose (NP) charges. NP charges are defined as follows (our choice of normalization is for later convenience) \begin{equation}\label{NP1} Q_{n}^{\textrm{NP}}\ = \frac{2}{(n+1)} \int d^{2}\hat{x}\ D^{z}Y_{n,m}(\hat{x})\ \overset{n}{F}_{rz}(u,\hat{x}) \end{equation} Although the charge density is evaluated at a fixed $u$, by using the vacuum equations of motion \begin{equation}\label{NP2} 2\partial_{u}\overset{n}{F}_{rz}\ =\ -\left[\frac{2}{n}D_{z}D^{z}\ +\ (n+1)\right]\overset{n-1}{F}_{rz} \end{equation} we can easily see that the charge is independent of $u$. Substituting Eq. (\ref{NP2}) in Eq. (\ref{NP1}) and integrating over the sphere, we see that the two resulting terms cancel upon integration by parts, by virtue of the defining equation of the spherical harmonics, $\Delta Y_{n,m} = -n(n+1) Y_{n,m}$, and hence \begin{equation} \frac{d}{d u}Q_{n}^{\textrm{NP}}\ =\ 0. \end{equation} We can now see how these charges are related to the asymptotic charges we are considering in this paper. As our charges are in terms of $F_{ru}$ instead of $F_{rz}$ we need to find a relation between the two.
This follows from the equation of motion involving the self-dual field, \begin{equation} -\partial_{r} F_{rz}\ +2 \partial_{u} F_{rz}\ =\ D_{z}F^+_{ru} \end{equation} Asymptotic expansion at future null infinity yields, \begin{equation}\label{NP3} \overset{n+1}{F^+}_{ru}\ =\ -\frac{2}{ (n+1)}\ D^{z}\overset{n}{F}_{rz}. \end{equation} Using Eq. (\ref{NP3}) in Eq. (\ref{NP1}) we see that the NP charge is given by \begin{equation} Q_{n}^{\textrm{NP}}\ =\ \int d^{2}\hat{x} Y_{n,m} \overset{n+1}{F^+}_{ru}(u,\hat{x}). \end{equation} As these charges are finite, they can be evaluated at any $u$ and in particular at $u\ =\ -\infty$ they can be written as \begin{equation} Q_{n}^{\textrm{NP}}\ =\ \int d^{2}\hat{x} Y_{n,m} \overset{n+1,n+1}{F^+}_{ru}(\hat{x}), \end{equation} which are the same as the asymptotic charges $Q_{n+1}[\epsilon]$ for $\epsilon\ =\ Y_{n,m}\ \forall\ m$. We conclude by noting that our fall-off conditions imply the vanishing of the NP charges. This corresponds to the observation in \cite{npcharges} that the charges (at future null infinity) vanish for `outgoing solutions' (see beginning of p. 184 in \cite{npcharges}). The vanishing of NP charges can also be seen as a consequence of the relation with the soft theorem charges: In section \ref{four} we will see that the asymptotic charges can be written as \begin{equation} \int d^{2}\hat{x} \, \epsilon \overset{n+1,n+1}{F}_{ru} = \int d^2 \hat{x} du (D^z)^{n+1}\epsilon \, \rho_{z \ldots z} \end{equation} where $\rho_{z \ldots z}$ depends on the free data at null infinity. Since $(D^z)^{n+1} Y_{n,m}=0$ this implies the vanishing of the NP charges. \section{From Ward identities to sub-$n$ soft theorems} \label{three} We would now like to show that if we assume $Q_{2}^{+}[\epsilon^{+}]\ =\ Q_{2}^{-}[\epsilon^{-}]$ then the corresponding Ward identity is equivalent to the sub-subleading soft photon theorem in tree level scattering amplitudes \cite{shiu,llz}.
However, before establishing this equivalence we take a small detour and review the hierarchy of subleading soft theorems in tree level QED. At the first and second orders of the hierarchy, these are nothing but the sub-leading and sub-subleading soft photon theorems. As we will see, the higher order soft theorems are far less constraining than the previous ones, but are present nonetheless. We will refer to this entire hierarchy as sub-$n$ ($n \geq 1$) soft theorems. \subsection{sub-$n$ soft theorems}\label{3.1} Consider an \emph{un-stripped} (that is, including the momentum conserving delta function) tree-level amplitude in QED consisting of $N$ charged particles and a photon of energy $\omega$.\footnote{The sub-$n$ soft theorems were derived for stripped amplitudes in \cite{shiu,llz}. However it can be easily checked that the same factorization holds for unstripped amplitudes as well.} Regarded as a function of $\omega$, the amplitude has an expansion of the form \begin{equation} \mathcal{M}_{N+1} = \sum_{n=0}^{\infty} \omega^{n-1} \mathcal{M}^{(n)}_{N+1}. \label{gralexp} \end{equation} As is well known, the first term is given by Weinberg's soft theorem\footnote{To simplify expressions we display signs as if all particles are outgoing and have positive charge. For negative outgoing or positive incoming charges the factor comes with opposite sign.} \begin{equation} \mathcal{M}^{(0)}_{N+1} = \sum_{i=1}^N S^{(0)}(q,p_i) \mathcal{M}_N , \end{equation} \begin{equation} S^{(0)}(q,p) = e \frac{\epsilon \cdot p}{q \cdot p} \end{equation} where $\epsilon^\mu$ is the photon polarization and $q=(1,\hat{q})$ the photon 4-momentum direction.
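As a standard consistency check (of the same kind as the gauge invariance arguments used below), note that under a shift of the polarization $\epsilon_\mu \rightarrow \epsilon_\mu + \lambda q_\mu$ the leading soft factor changes as
\begin{equation}
\sum_{i=1}^{N} S^{(0)}(q,p_i)\ \longrightarrow\ \sum_{i=1}^{N} S^{(0)}(q,p_i)\ +\ \lambda \sum_{i=1}^{N} e_{i},
\end{equation}
where $e_i$ denote the signed charges (cf. the previous footnote); the extra term vanishes by conservation of total charge, which is the statement of gauge invariance of the leading soft theorem.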
The next term in the soft expansion also factorizes and is given by Low's subleading soft theorem: \begin{equation} \mathcal{M}^{(1)}_{N+1} = \sum_{i=1}^N S^{(1)}(q,p_i) \mathcal{M}_N , \end{equation} \begin{equation} S^{(1)}(q,p) = \frac{e}{q \cdot p} \epsilon_\mu q_\nu J^{\mu \nu} \end{equation} where $J^{\mu \nu} = p^\mu \partial^\nu - p^\nu \partial^\mu$ is the angular momentum of the particle with momentum $p$.\footnote{The sub-leading soft photon theorem is not universal even for tree-level scattering amplitudes. In fact it was shown in \cite{elvang} that there is a class of higher derivative terms which can modify the sub-leading factor. Thus the most general sub-leading factor comprises the universal term $S^{(1)}$ given above and an additive non-universal term. In this paper we restrict ourselves to the universal sub-leading factor.} For tree level scattering amplitudes, if we restrict to minimal coupling, in fact more is true. In \cite{llz} it was shown that gauge invariance implies the $n$-th term in (\ref{gralexp}) for $n \geq 1$ is given by \begin{equation} \mathcal{M}^{(n)}_{N+1} = \frac{1}{n!}\sum_{i=1}^N S^{(1)}(q,p_i) (q \cdot \partial_i)^{n-1} \mathcal{M}_N + \epsilon_\mu q_{\nu_1} \ldots q_{\nu_{n-1}} A^{\mu \nu_1 \ldots \nu_{n-1}}(p_1,\ldots,p_N)\label{Ml} \end{equation} where $A^{\mu \nu_1 \ldots \nu_{n-1}}$ is antisymmetric under the exchange of $\mu$ and a $\nu$ index but its dependence on the hard momenta is undetermined by the requirement of gauge invariance. Thus at the outset it appears that there is no factorization theorem at sub-subleading order and beyond in tree-level QED. However as was shown in \cite{shiu}, the second term in Eq. (\ref{Ml}) can be projected out and one obtains \emph{what we call} the sub-$n$ ($n \geq 1$) soft theorem. This can be understood as follows.
\begin{comment} In order to answer this question, we will like to Follow the strategy for leading and sub-leading soft theorems and like to construct a ``charge'' for each such sub-$l$ factorization theorem. \end{comment} The $n$-th term in (\ref{gralexp}) can be extracted as \begin{equation} \lim_{\omega \to 0} \partial^{n}_\omega ( \omega \mathcal{M}_{N+1} ) = \sum_{i=1}^N S^{(1)}(q,p_i) (q \cdot \partial_i)^{n-1} \mathcal{M}_N + n! \, R^{(n)} \label{sublst} \end{equation} where $R^{(n)}$ denotes the remainder term in (\ref{Ml}) \begin{equation} R^{(n)} := \epsilon_\mu q_{\nu_1} \ldots q_{\nu_{n-1}} A^{\mu \nu_1 \ldots \nu_{n-1}}. \label{Rl} \end{equation} For definiteness in the following we restrict attention to the case where the soft photon has negative helicity. Parametrizing the soft momentum direction $q^\mu$ in terms of standard stereographic coordinates $q^\mu =(1,\hat{q}(w,\overline{w}))$ such that $\epsilon^{- \mu}= \frac{1}{\sqrt{2}}(w,1,i,-w)$ \cite{stromprahar} it can be easily seen that \begin{equation} D_{w}^{2} q_{\mu} = 0, \quad \quad D^2_w \big[ (1+ |w|^2)^{-1}\epsilon^{-}_\mu \big] = 0 , \end{equation} as a result of which, since $(1+ |w|^2)^{-1}R^{(n)}$ is built from $n$ such factors (namely $(1+ |w|^2)^{-1}\epsilon^{-}_\mu$ and the $n-1$ factors of $q_{\nu}$), each annihilated by $D^{2}_{w}$, we have \begin{equation} D^{n+1}_w \big[ (1+ |w|^2)^{-1}R^{(n)} \big] =0. \end{equation} Thus by operating on both sides of Eq. (\ref{sublst}) with $D^{n+1}_w (1+ |w|^2)^{-1}$ we get a factorization theorem which we call the sub-$n$ soft theorem $\forall\ n \geq 1$: \begin{equation}\label{sublst1} \lim_{\omega \to 0} \partial^{n}_\omega ( \omega D^{n+1}_w (1+ |w|^2)^{-1}\mathcal{M}_{N+1} ) = D^{n+1}_w (1+ |w|^2)^{-1}\left(\sum_{i=1}^N S^{(1)}(q,p_i) (q \cdot \partial_i)^{n-1}\right) \mathcal{M}_N . \end{equation} \subsection{Ward identities from soft theorems} \label{sec3.2} In fact, from Eq. (\ref{sublst1}), we can write down the Ward identities which are equivalent to these soft theorems. Here we follow the same strategy that is used in deriving Ward identities from soft theorems in the leading and sub-leading case.
We first note that the LHS of (\ref{sublst1}) corresponds to the insertion of an operator \begin{equation} \lim_{\omega \to 0}\ D^{n+1}_{w} (1+ |w|^2)^{-1}\ \partial^{n}_\omega [\omega a_-(\omega \hat{q})] \label{unsmeared} \end{equation} where $a_-(\omega \hat{q})$ is the Fock operator of a negative helicity photon. In analogy with other soft theorems we can construct a smeared version of (\ref{unsmeared}) such that it takes a simple form at null infinity. Recall that the `free data' of the Maxwell field at null infinity is given by the angular components of the vector potential, $A_{A}(u,\hat{x})$. These are related to the Fock operators by \begin{equation} A_{\bar{w}}(\omega,\hat{q}) = \frac{1}{4 \pi i} \frac{\sqrt{2}}{(1+ |w|^2)}a_-(\omega \hat{q}) \label{fdfock} \end{equation} where $A_{\bar{w}}(\omega,\hat{q})$ is the time-Fourier transform of $A_{\bar{w}}(u,\hat{q})$. We can hence see that by smearing both sides of Eq. (\ref{sublst1}) by $\int d^{2}w\ T^{w\dots w}$ we obtain a formal\footnote{Formal in the sense that we need to show that this identity arises from conservation laws.} Ward identity with a `soft' charge given by \begin{comment} The structure of the known charges associated to the leading and subleading soft theorems suggests the following candidate for soft charge, \end{comment} \begin{eqnarray} Q^{\text{soft}}_n[T] & := & (-i)^{n+1} \lim_{\omega \to 0} \partial^{n}_\omega \, \omega \int d^2 w \, T D^{n+1}_w A_{\bar{w}}(\omega,\hat{q}) \label{softcharge2} \end{eqnarray} where \begin{equation} T \equiv T^{\overbrace{w \ldots w}^{n}} \end{equation} is the holomorphic component of a rank $n$ (symmetric, trace-free) sphere tensor that parametrizes the charge. Here we used that $\sqrt{\gamma} \gamma^{w \bar{w}}=1$.
From (\ref{fdfock}) and (\ref{softcharge2}), the soft charge can be written as the operation \begin{equation} \frac{(-i)^{n}}{4 \pi} \int d^2 w \, T D^{n+1}_w \frac{\sqrt{2}}{(1+ |w|^2)} \label{smearing} \end{equation} acting on (\ref{unsmeared}). In other words, the smearing (\ref{smearing}) acting on the LHS of the sub-$n$ soft theorem (\ref{sublst}) can be interpreted as arising from the insertion of a soft charge (\ref{softcharge2}). In order to look for the existence of a hard charge, we need to perform the same smearing on the RHS of (\ref{sublst1}). \begin{comment} The first observation is that under this smearing, the reminder term $R^{(l)}$ disappears, namely: \begin{equation} D^{l+2}_w \big[ (1+ |w|^2)^{-1}R^{(l)} \big] =0. \end{equation} This result can be shown using the identities, \begin{eqnarray} D^2_w q_\mu &= &0, \label{D2q}\\ D^2_w \big[ (1+ |w|^2)^{-1}\epsilon^{-}_\mu \big] & = & 0 \end{eqnarray} and the fact that there are $l+2$ derivatives and $l$ powers of $q_\mu$ in $R^{(l)}$. We now discuss the smearing (\ref{smearing}) on the first term of the RHS of (\ref{sublst}). \end{comment} Up to an $i/e$ factor, such smearing defines the following differential operator on the $p$ variable, \begin{equation} \mathbb{T} := \frac{(-i)^{n+1}}{2} \int d^2 w \, T D^{n+1}_w \left[ \mathbb{K}_{\wb} (q \cdot \partial)^{n-1} \right], \label{deltaT} \end{equation} \begin{equation} \text{where} \quad \mathbb{K}_{\wb} :=\frac{1}{2\pi e} \frac{\sqrt{2}}{(1+ |w|^2)} S^{(1)}_-(q,p) \end{equation} (the minus subscript denotes the negative helicity of the soft photon under consideration). We will see this operator satisfies two key properties: \begin{enumerate} \item $\mathbb{T}$ is local in the $p$ variable \item $\mathbb{T}$ satisfies \begin{equation} \int \widetilde{d p}\, b^\dagger (\mathbb{T} b) = (-1)^{n} \int \widetilde{d p}\, (\mathbb{T} b^\dagger) b .
\label{sympl} \end{equation} \end{enumerate} The first property will be a consequence of the identity \cite{stromlow}: \begin{equation} D^2_w \mathbb{K}_{\wb} = D_w \delta^{(2)}(w,z) \partial_E + E^{-1} \delta^{(2)}(w,z) \partial_z, \label{D2K} \end{equation} where $(E,z,\bar{z})$ parametrize the momentum $p$ of the hard particle. The second property will ensure the smearing (\ref{smearing}) on the RHS of (\ref{sublst}) can be understood as arising from a hard charge:\footnote{The $(-1)^n$ sign arises because, in the factorization formula, an incoming 4-momentum is expressed as outgoing by reversing its sign.} \begin{eqnarray} [b,Q^{\text{hard}}_n[T]] & = & i e \mathbb{T} b, \label{commbQ} \\ \label{commbQdag} [b^\dagger,Q^{\text{hard}}_n[T]] &= & i e (-1)^{n} \mathbb{T} b^\dagger. \end{eqnarray} Classically, Eq. (\ref{sympl}) is the condition that the infinitesimal transformation $\delta b = \mathbb{T} b$, $ \delta b^* = (-1)^{n} \mathbb{T} b^*$ is symplectic. The charge reproducing (\ref{commbQ}) and (\ref{commbQdag}) will then be given by: \begin{equation} Q^{\text{hard}}_n[T] = \frac{i e}{2 (2 \pi)^3} \int_0^\infty d E E \int d^2 \hat{x} \, b^\dagger(E,\hat{x}) \mathbb{T} b(E,\hat{x}) - (b \leftrightarrow c), \label{Qhard} \end{equation} where $(b \leftrightarrow c)$ is the contribution from the antiparticles. To compare with the usual definition of hard charges which are integrals over ${\cal I}$, we will finally need to write (\ref{Qhard}) in terms of the free data of the scalar field at ${\cal I}$, $\phi(u,\hat{x})$. Equivalently we can write the hard charge as an integral over $(E,z,\bar{z})$ where $E$ is the energy conjugate to $u$ (or $v$).
Whence if $\phi(E,\hat{x})$ and $\bar{\phi}(E,\hat{x})$ denote the Fourier transforms of $\phi(u,\hat{x})$ and $\bar{\phi}(u,\hat{x})$, one has \begin{equation} \begin{array}{lll} \phi(E,\hat{x}) = \frac{b(E \hat{x})}{4 \pi i} , & \quad \bar{\phi}(E,\hat{x}) = \frac{ c(E \hat{x}) }{4 \pi i}, & \quad \text{for} \quad E>0 \\ & & \\ \phi(E,\hat{x}) = - \frac{c^\dagger(-E \hat{x})}{4 \pi i} , & \quad \bar{\phi}(E,\hat{x}) = -\frac{ b^\dagger(-E \hat{x}) }{4 \pi i}, & \quad \text{for} \quad E<0 \end{array} \end{equation} From these expressions, and using the fact that under $E \to -E$, $\mathbb{T} \to (-1)^{n} \mathbb{T}$, the charge (\ref{Qhard}) can be written as \begin{equation} Q^{\text{hard}}_n[T] = \frac{i e}{2 \pi} \int_{-\infty}^\infty d E E \int d^2 \hat{x} \, \bar{\phi}(-E,\hat{x}) \mathbb{T} \phi(E,\hat{x}) - (\phi \leftrightarrow \bar{\phi}). \label{Qhard2} \end{equation} We now calculate $\mathbb{T}$ and the associated charges $Q_{n}[T] = Q^{\text{soft}}_{n}[T] + Q^{\text{hard}}_{n}[T]$ for $n=1,2$. We expect these charges to be related to the subleading and sub-subleading charges $Q_{n}[\epsilon], n=1,2$ which were defined in the previous section. \subsection{$n=1$ case: subleading charge from soft theorem} When $n=1$ one finds \begin{equation} \mathbb{T}= \frac{1}{2}(D_z T \partial_E - T E^{-1}\partial_z). \end{equation} It is easy to verify \begin{equation} \int_0^\infty dE E \int d^2 \hat{x} ( b^\dagger \mathbb{T} b + \mathbb{T} b^\dagger b) = 0, \end{equation} which corresponds to property (\ref{sympl}): the $\partial_E$ term gives $-\frac{1}{2} D_z T \int dE \, b^\dagger b$ after an integration by parts in $E$, and this is cancelled by the $E^{-1}\partial_z$ term upon an integration by parts on the sphere. To compute the charge (say at ${\cal I}^{+}$) we use the identities \begin{eqnarray} \int \frac{dE}{2\pi} E \bar{\phi}(-E) \partial_E \phi(E) & = & \int du u \partial_u \bar{\phi}(u) \phi(u)\\ \int \frac{dE}{2\pi}\bar{\phi}(-E) \phi(E) & = & \int du \bar{\phi}(u) \phi(u) . \end{eqnarray} One can then verify (\ref{Qhard2}) becomes \begin{equation} Q^{\text{hard}}_1[T]= \frac{1}{2} \int du d^2 \hat{x}( D_z T^z u \overset{0}{j}_u +T^z \overset{0}{j}_z) .
\end{equation} On the other hand, the soft charge (\ref{softcharge2}) can be written as \begin{equation} Q^{\text{soft}}_1[T] = -\int du d^2 \hat{x} D_z T^z u \partial_u D^{\bar{z}} A_{\bar{z}}. \end{equation} Finally, taking $T^z = - 2 D^z \epsilon$ with $\epsilon(\hat{x})$ a function on the sphere one can verify the total (hard plus soft) charge takes the form \begin{equation} Q_1[T^z= - 2 D^z\epsilon] = \int d^2 \hat{x} \epsilon(\hat{x}) \overset{1,1}{F}_{ru}(\hat{x}) \end{equation} with the charge density $\overset{1,1}{F}_{ru}(\hat{x})$ found in section \ref{leq0F} (Eqs. (\ref{sigma0final}) and (\ref{fallofffru0})). \subsection{$n=2$ case: sub-subleading charge from soft theorem} For $n=2$ one finds \begin{equation} \mathbb{T}= \frac{i}{2} \left[ D^2_z T \partial_E^2 -2 E^{-1} D_z T \partial_E \partial_z +2 E^{-2}(D_z T \partial_z +T D_z \partial_z ) \right] .\label{Tl1} \end{equation} One can check that it satisfies condition (\ref{sympl}). To compute the charge we use the identities \begin{eqnarray} \int \frac{dE}{2\pi} E^{-1} \bar{\phi}(-E) \phi(E) & = & i \int du (\textstyle{\int}^u \bar{\phi}) \phi(u)\\ \int \frac{dE}{2\pi} \bar{\phi}(-E) \partial_E \phi(E) & =& i \int du u \bar{\phi}(u) \phi(u) \\ \int \frac{dE}{2\pi} E \bar{\phi}(-E) \partial_E^2 \phi(E) & = & i \int du u^2 \partial_u \bar{\phi}(u) \phi(u) . \end{eqnarray} One then finds (\ref{Qhard2}) to be given by \begin{equation} Q^{\text{hard}}_2[T] = -\frac{1}{2}\int du d^2 \hat{x} \left[ u^2 D^2_z T^{zz} \overset{0}{j}_u +2 u D_z T^{zz} \overset{0}{j}_z + 2 i e [ (\textstyle{\int}^u \bar{\phi}) (D_z T^{zz}\partial_z \phi +T^{zz}D^2_z \phi) - (\phi \leftrightarrow \bar{\phi})] \right] .\label{Qhard1} \end{equation} Thus the hard charge is a sum of two kinds of terms: one is an integral of local functionals of the radiative data, while the other involves non-local fields, namely the terms above which involve $\textstyle{\int}^{u}\bar{\phi}$ (or $\textstyle{\int}^{u}\phi$).
On the other hand, the soft charge (\ref{softcharge2}) can be written as \begin{equation} Q^{\text{soft}}_2[T] = \int du d^2 \hat{x} D^2_z T^{zz} u^2 \partial_u D^{\bar{z}} A_{\bar{z}}. \end{equation} To compare with $\overset{2,2}{F}_{ru}(\hat{x})$ obtained in the previous section, we take \begin{equation} T^{zz}= \frac{1}{2} D^z D^z \epsilon. \end{equation} Using the identity $[D_z,D^z] V^z = V^z$ one verifies that the sum of local terms in $Q^{\text{hard}}_2[T]$ added to $Q^{\text{soft}}_2[T]$ yields the `local' part of $\overset{2,2}{F}_{ru}$, \begin{equation} Q^{\text{local}}_2[T^{zz}= \frac{1}{2} D^z D^z \epsilon] = \int d^2 \hat{x} \epsilon(\hat{x}) \overset{2,2}{F}_{ru}^{\text{local}}(\hat{x}). \end{equation} As shown in subsection \ref{non-loc}, there is a similar matching between the `non-local' terms, so that the total charges coincide: \begin{equation} Q_2[T^{zz}= \frac{1}{2} D^z D^z \epsilon] = \int d^2 \hat{x} \epsilon(\hat{x}) \overset{2,2}{F}_{ru}(\hat{x}). \label{match1} \end{equation} Whence the question we would like to ask is whether the charges $Q_{n}[T]$ defined from the sub-$n$ soft theorems are also the same as the charges $Q_{n}[\epsilon]$ defined at ${\cal I}$ for $n\ >\ 2$. We turn to this question in section \ref{four}. \subsubsection{Matching the non-local terms}\label{non-loc} We want to show the matching of the `non-local' terms in Eq. (\ref{match1}). This corresponds to the equality \begin{equation} \int du d^2 \hat{x} \, \epsilon(\hat{x}) ( 2 D^z \overset{1}{j}_z + \Delta \overset{0}{j}_r ) = - i e \int du d^2 \hat{x} \left[ (\textstyle{\int}^u \bar{\phi}) (D_z T^{zz}D_z \phi +T^{zz}D^2_z \phi) - (\phi \leftrightarrow \bar{\phi}) \right] \label{nl0} \end{equation} with $T^{zz} \equiv \frac{1}{2} D^z D^z \epsilon$. We start with the integrand of the LHS of (\ref{nl0}) using the expressions for the current in terms of the scalar field given in Eq. (\ref{l1j}).
Discarding total sphere-derivative terms, and omitting a ``$- (\phi \leftrightarrow \bar{\phi})$'' at the end of each equation, we have: \begin{eqnarray} \epsilon ( 2 D^z \overset{1}{j}_z + \Delta \overset{0}{j}_r ) & = & i e [ - 2 D^z \epsilon ( \phi D_z \overset{1}{\bar{\varphi}} - \overset{1}{\bar{\varphi}} D_z \phi) - \Delta \epsilon \phi \overset{1}{\bar{\varphi}} ] \\ & = & 4 i e \overset{1}{\bar{\varphi}} D^z \epsilon D_z \phi \\ & = & - 2 i e (\textstyle{\int}^u \bar{\phi}) \Delta( D^z \epsilon D_z \phi) \\ & = & - i e (\textstyle{\int}^u \bar{\phi})[2 D_z T^{zz} D_z \phi + 2 T^{zz}D^2_z \phi+ 2 D^z \epsilon D_z \Delta \phi + \Delta \epsilon \Delta \phi ] ,\label{nl4} \end{eqnarray} where we used $\Delta f = 2 D_z D^z f = 2 D^z D_z f$ for a scalar function $f$, and in the third line the relation $\overset{1}{\bar{\varphi}} = -\frac{1}{2} \Delta \textstyle{\int}^u \bar{\phi}$ that follows from the free equations of motion. The last two terms in (\ref{nl4}) can be brought to a different form by integrating by parts in $u$. Up to total $u$ and sphere derivatives one can show the identity \begin{equation} (\textstyle{\int}^u \bar{\phi}) [2 D^A \phi D_A f + \phi \Delta f ]- (\phi \leftrightarrow \bar{\phi})=0 \label{idl1b} \end{equation} for any $u$-independent sphere function $f(\hat{x})$. Using this identity for $f= \Delta \epsilon$, as well as the sphere-derivative relations \begin{equation} [D^z,D_z]V_z = V_z, \quad [D_z,D^z]V^z = V^z \end{equation} one can show: \begin{equation} (\textstyle{\int}^u \bar{\phi})[ 2 D^z \epsilon D_z \Delta \phi + \Delta \epsilon \Delta \phi + D_z T^{zz} D_z \phi + T^{zz}D^2_z \phi ] - (\phi \leftrightarrow \bar{\phi})=0. \label{idl1} \end{equation} Using (\ref{idl1}) in (\ref{nl4}) one arrives at the desired result in (\ref{nl0}). \section{Higher order charges and a conjectured equivalence} \label{conj-equi} \label{four} In this section we extend our previous discussion to the higher order soft theorems.
That is, we would like to argue that sub-$n$ soft theorems for $n\ >\ 2$ are equivalent to Ward identities associated to $Q_{n}[\epsilon]$. Although we do not provide a complete proof, we give several hints in this direction and conjecture the equivalence between sub-$n$ soft theorems and Ward identities of higher-$n$ charges. Before embarking on a tedious analysis, let us summarize what we are able to prove below. We show that if one defines a relationship between $T$ and $\epsilon$ as, \begin{equation} T\ :=\ 2\frac{(-1)^{n}}{n!^{2}}(D^{z})^{n}\epsilon \end{equation} then \begin{equation} Q_{n}[\epsilon]^{\textrm{soft}}\ =\ Q_{n}[T]^{\textrm{soft}} \end{equation} For the hard charges, we are able to bring $Q_{n}[\epsilon]^{\textrm{hard}}$ and $Q_{n}[T]^{\textrm{hard}}$ into a form that allows for a direct comparison. Establishing their equality however becomes a technical problem we have not been able to solve for arbitrary $n$. The first step in understanding the relationship between $Q_{n}[T]$ and $Q_{n}[\epsilon]$ is to write $Q_{n}[\epsilon]$ in terms of free data at ${\cal I}^{\pm}$. As always we focus on analyzing the charge at ${\cal I}^{+}$. In order to simplify expressions, let us introduce the following notation: \begin{equation} \label{defDeltan} \Delta_n := -\frac{1}{2n}(\Delta +n(n-1)), \quad n \geq 1 , \end{equation} \begin{equation} \label{defDeltanm} \Delta(n,m) := \prod_{k=m}^n \Delta_k, \quad 1 \leq m \leq n ; \quad \Delta(n,0) := 0, \quad \Delta(n,n+1):=1 \end{equation} \begin{equation} s_n:= \frac{1}{2n}( -2 D^z \overset{n-1}{j}_z + 2 \partial_u \overset{n-1}{j}_r + n \overset{n-2}{j}_r) , \quad n \geq 1 , \end{equation} where as before $\overset{-1}{j}_r \equiv 0$. The field equations (\ref{Frun}) can then be written as: \begin{equation} \partial_u \overset{n}{F}_{ru} = \Delta_n \overset{n-1}{F}_{ru} +s_{n} , \quad n \geq 1 \label{feq}. 
\end{equation} \subsection{$n$-th asymptotic charge in terms of free data} Consider the quantity \begin{equation} \sigma_n(u,\hat{x}) := \sum_{m=0}^n \frac{(-u)^{n-m}}{(n-m)!} \Delta(n,m+1) \overset{m}{F}_{ru}(u,\hat{x}). \label{sigman} \end{equation} Using the field equations (\ref{feq}) it is straightforward to show that \begin{equation} \partial_u \sigma_n = \frac{(-u)^n}{n!}\Delta(n,1)\partial_u \overset{0}{F}_{ru}+ \sum_{m=1}^{n} \frac{(-u)^{n-m}}{(n-m)!} \Delta(n,m+1) s_{m} . \label{prop2} \end{equation} Furthermore, the $u \to -\infty$ fall-off conditions described in section 2.1 together with the field equations (\ref{feq}) can be shown to imply (see appendix \ref{prop1app}): \begin{equation} \lim_{u \to -\infty} \sigma_n(u,\hat{x})= \overset{n,n}{F}_{ru}(\hat{x}) \label{prop1} . \end{equation} Properties (\ref{prop2}) and (\ref{prop1}) can be used to express $\overset{n,n}{F}_{ru}(\hat{x})$ in terms of data at null infinity by the same procedure as in the $n \leq 2$ charges studied before. We start with Eq. (\ref{prop1}) and an integration by parts in $u$: \begin{equation} \overset{n,n}{F}_{ru}(\hat{x}) = \lim_{u \to -\infty} \sigma_n(u,\hat{x}) = - \int du \, \partial_u \sigma_n(u,\hat{x}) + \lim_{u \to \infty} \sigma_n(u,\hat{x}). \end{equation} In the absence of massive fields the last term vanishes (this term should otherwise give the contribution to the charge from the massive particles) and one is left with the integral over $u$. From Eq. (\ref{prop2}) and Eq. 
(\ref{fallofffru0}), \begin{equation} \partial_u \overset{0}{F}_{ru} = \overset{0}{j}_{u} - 2 \partial_u D^{\bar{z}} A_{\bar{z}}, \label{feq0} \end{equation} one can write the resulting expression as a sum of `soft' and `hard' pieces, \begin{equation} \overset{n,n}{F}_{ru}(\hat{x}) = \int du \big( \overset{n}{\rho}_{\text{soft}}(u,\hat{x}) + \overset{n}{\rho}_{\text{hard}}(u,\hat{x}) \big) \end{equation} where \begin{equation} \overset{n}{\rho}_{\text{soft}} : = \frac{2 (-u)^n}{n!} \Delta(n,1) \partial_u D^{\bar{z}}A_{\bar{z}} \end{equation} \begin{equation} \overset{n}{\rho}_{\text{hard}} : = -\frac{(-u)^n}{n!} \Delta(n,1) \overset{0}{j}_u + \sum_{m=1}^{n} \frac{(-u)^{n-m}}{(n-m)! m} \Delta(n,m+1) \big( D^z \overset{m-1}{j}_z - \partial_u \overset{m-1}{j}_r - \frac{1}{2} m \overset{m-2}{j}_r \big). \label{rhonh} \end{equation} We finally need to express the current coefficients in terms of the scalar field radiative data $\phi$. For the tree-level charges of interest here the scalar field can be treated as free when computing the current. For the first term of (\ref{rhonh}) we have \begin{equation} \overset{0}{j}_u = i e \phi \partial_u \bar{\phi} + c.c. \end{equation} For the second term, one can show the combination of current coefficients can be written as \begin{equation}\label{phik1} [D^z \overset{m-1}{j}_z +\frac{1}{2(m-1)}\Delta \overset{m-2}{j}_r] = \frac{2 i e}{(m-1)} \sum_{k=1}^{m-1} k D^z( \overset{k}{\varphi} D_z \overset{m-1-k}{\bar{\varphi}} ) - (\varphi \leftrightarrow \bar{\varphi}) \end{equation} where $ \overset{k}{\varphi}(u,\hat{x})$ is the $1/r^{k+1}$ coefficient in the $1/r$ expansion of the scalar field $\varphi$. 
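The conventions (\ref{defDeltan}), (\ref{defDeltanm}) and the telescoping property (\ref{prop2}) of $\sigma_n$ lend themselves to a mechanical check: since every $\Delta_k$ is a polynomial in the sphere Laplacian, all these operators commute, and the Laplacian can be treated as a scalar symbol. The following is a minimal sympy sketch of such a check (all names are ours; the symbol `L` stands in for $\Delta$):

```python
import sympy as sp

u, L = sp.symbols('u L')  # L stands in for the sphere Laplacian Delta

def Delta_n(n):
    # Delta_n = -(1/(2n)) (Delta + n(n-1)), Eq. (defDeltan)
    return -sp.Rational(1, 2 * n) * (L + n * (n - 1))

def Delta_prod(n, m):
    # Delta(n, m) = prod_{k=m}^{n} Delta_k, with Delta(n, n+1) = 1
    out = sp.Integer(1)
    for k in range(m, n + 1):
        out *= Delta_n(k)
    return out

N = 3
F = [sp.Function('F%d' % m)(u) for m in range(N + 1)]
src = [None] + [sp.Function('s%d' % m)(u) for m in range(1, N + 1)]

for n in range(1, N + 1):
    sigma = sum((-u)**(n - m) / sp.factorial(n - m) * Delta_prod(n, m + 1) * F[m]
                for m in range(n + 1))
    # impose the field equations dF_m/du = Delta_m F_{m-1} + s_m  (m >= 1)
    feq = {sp.Derivative(F[m], u): Delta_n(m) * F[m - 1] + src[m]
           for m in range(1, n + 1)}
    dsigma = sp.diff(sigma, u).subs(feq)
    rhs = ((-u)**n / sp.factorial(n) * Delta_prod(n, 1) * sp.Derivative(F[0], u)
           + sum((-u)**(n - m) / sp.factorial(n - m) * Delta_prod(n, m + 1) * src[m]
                 for m in range(1, n + 1)))
    assert sp.expand(dsigma - rhs) == 0  # reproduces the property (prop2)
```

For $n$ up to $3$ the difference expands to zero identically; nothing in the check depends on the specific form of the Laplacian.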
These coefficients can be expressed in terms of the free data by recursively solving the free field equation \begin{equation} \partial_u \overset{n}{\varphi} = \Delta_n \overset{n-1}{\varphi} \implies \overset{k}{\varphi} = \Delta(k,1) \partial_u^{-k} \phi, \label{phik} \end{equation} where $\partial_u^{-k}$ denotes the $k$-th $u$-primitive. Substituting (\ref{phik}) in (\ref{phik1}) and then in (\ref{rhonh}) gives an expression for $\overset{n}{\rho}_{\text{hard}}(u,\hat{x})$ in terms of the scalar field free data $\phi(u,\hat{x})$. \subsection{Higher-$n$ charges from soft theorems} From the discussion of section \ref{sec3.2}, the (smeared) sub-$n$ soft theorem can be written as \begin{equation} Q_n[T] S = S Q_n[T] \end{equation} where $Q_n[T] = Q^{\text{soft}}_n[T] + Q^{\text{hard}}_n[T]$ with $Q^{\text{soft}}_n[T]$ and $Q^{\text{hard}}_n[T]$ given in Eqs. (\ref{softcharge2}) and (\ref{Qhard2}) respectively. We now discuss how to cast these quantities in terms of free data at null infinity. For the soft charge, we simply Fourier transform (\ref{softcharge2}) to obtain \begin{equation} Q^{\text{soft}}_n[T] = \int du d^2 \hat{q} \, u^{n} T D^{n}_w \partial_u D^{\bar{w}} A_{\bar{w}}(u,\hat{q}) . \end{equation} For the hard charge we need to bring the differential operator $\mathbb{T}$ defined in Eq. (\ref{deltaT}) into a simpler form. As in the $n=1,2$ cases, the integral localizes in the hard momentum direction. The general form, derived in appendix \ref{Tapp}, is found to be \begin{equation} \mathbb{T} = - \frac{i^{n+1} n!}{2}\sum_{k=0}^{n} \frac{(-1)^k }{(n-k)!} D^{n-k}_z T \partial^{n-k}_E E^{-k} D_z^k . 
\label{Tfinal} \end{equation} Substituting this in the expression for the hard charge (\ref{Qhard2}) and using the Fourier transform identities \begin{equation} \int \frac{d E}{2 \pi} E \bar{\phi}(-E,\hat{x}) \partial^{n-k}_E E^{-k} D_z^k \phi(E,\hat{x}) = - i^{n+1-2k} \int du u^{n-k} \partial_u \bar{\phi}(u,\hat{x}) \partial_u^{-k} D_z^k \phi(u,\hat{x}) \end{equation} one arrives at \begin{equation} Q^{\text{hard}}_n[T]= \frac{i e}{2} (-1)^{n+1} n! \sum_{k=0}^n \frac{1}{(n-k)!}\int du d^2 \hat{x} \; u^{n-k} D^{n-k}_z T \partial_u \bar{\phi} \partial_u^{-k} D_z^k \phi - (\phi \leftrightarrow \bar{\phi}) \label{Qhard3} \end{equation} \subsection{Conjectured equivalence} We wish to generalize the equivalence found in section \ref{three} between $\overset{n,n}{F}_{ru}$ and the soft theorem charges $Q_n[T]$. The idea is to parametrize the tensor $T$ in terms of a sphere function $\epsilon(\hat{x})$ as $T \sim (D^z) \epsilon$, and then identify $\epsilon(\hat{x})$ with a smearing function for $\overset{n,n}{F}_{ru}(\hat{x})$. That is, we wish to find $T_\epsilon$ such that \begin{equation} \int d^2 \hat{x} \epsilon(\hat{x}) \overset{n,n}{F}_{ru}(\hat{x}) = Q_n[T_\epsilon]. \label{comparison} \end{equation} Using the identity \begin{equation} (D_z)^n (D^z)^n \epsilon = (-1)^n n! \Delta(n,1) \epsilon \label{Dzn} \end{equation} one can show the \emph{soft} part of (\ref{comparison}) is satisfied provided one sets \begin{equation} T_\epsilon := \frac{2 (-1)^n}{n!^2} (D^z)^n \epsilon. \label{Teps} \end{equation} Thus, in order to establish (\ref{comparison}), one needs to show the matching (\ref{comparison}) between the \emph{hard} parts, \begin{equation} Q_{n}[\epsilon]^{\textrm{hard}} = Q^{\text{hard}}_n[T_\epsilon] \label{hardcomparison}. \end{equation} with $T_\epsilon$ given by (\ref{Teps}). A strategy to compare both sides of (\ref{hardcomparison}) is to bring the RHS into a form where $\epsilon$ appears with no derivatives. 
Substituting (\ref{Teps}) in (\ref{Qhard3}) and using the identity\footnote{To show (\ref{Dznk}), apply $D^k_z$ on both sides of the equation. The LHS is then given by (\ref{Dzn}). For the RHS use (\ref{Dzn}) for $n=k$ and $\Delta(k,1) \Delta(n,k+1)=\Delta(n,1)$.} \begin{equation} (D_z)^{n-k} (D^z)^n \epsilon = \frac{(-1)^{n} n!}{(-1)^{k} k!} (D^z)^k \Delta(n,k+1) \epsilon, \quad k=0,\ldots, n \label{Dznk} \end{equation} one finds, after integration by parts on the sphere, \begin{equation} Q^{\text{hard}}_n[T_\epsilon] = i e (-1)^{n+1} \sum_{k=0}^n \frac{ 1}{k! (n-k)!} \int du d^2 \hat{x} \; u^{n-k} \epsilon(\hat{x}) \Delta(n,k+1) (D^z)^k[ \partial_u \bar{\phi} \partial_u^{-k} D_z^k \phi ]- (\phi \leftrightarrow \bar{\phi}). \label{Qhard4} \end{equation} Expression (\ref{Qhard4}) defines a ``charge density'' that should be compared with $\overset{n}{\rho}_{\text{hard}}(u,\hat{x})$ as given in Eqs. (\ref{rhonh}) to (\ref{phik}). One can readily check that the $k=0$ term in (\ref{Qhard4}) agrees with the first term in (\ref{rhonh}). The comparison of the remaining $k \geq 1$ terms in (\ref{Qhard4}) with the second term in (\ref{rhonh}) is non-trivial due to the possibility of integration by parts in $u$ and the interchanges $\varphi \leftrightarrow \bar{\varphi}$. This was already seen in the $n=1,2$ cases studied earlier, where the equality of charges required the use of several non-trivial identities. Unfortunately, we have not been able to find a general form of such identities that would allow us to establish the equality for arbitrary $n$. \section{Proof of conservation laws in classical theory} \label{class-cons} \label{five} In the previous sections we showed that the sub-$n$ soft theorems can be written as Ward identities of a tower of asymptotic charges and conjectured that these charges were the charges generated from $\overset{n,n}{F}_{ru}(\hat{x})$ at ${\cal I}^{+}$ and $\overset{n,n}{F}_{rv}(\hat{x})$ at ${\cal I}^{-}$. 
Our conjecture was motivated by showing this equivalence in the case of leading, sub-leading and sub-subleading soft photon theorems. However, even assuming the conjecture to be valid, a natural question arises. Why do we expect this infinite hierarchy of infinite dimensional (asymptotic) charges to be conserved in classical theory? After all, the theory is not expected to be integrable. In this section we show that classically these charges are indeed conserved. That is, by analyzing the theory at \emph{spatial infinity}, we show that\footnote{In this section ${F}_{ab}$ will stand for the \emph{real} field strength rather than its self-dual part. The Ward identity associated to the negative (positive) sub-$n$ soft theorem corresponds to the (anti) self-dual part of Eq. (\ref{cons}). The conservation statements can be understood as Eq. (\ref{cons}) for the real field strength, together with its Hodge dual version.} \begin{equation}\label{cons} \overset{n,n}{F}_{ru}(\hat{x}) = \overset{n,n}{F}_{rv}(-\hat{x}). \end{equation} Our proof is an extension of the analysis done in \cite{eyhe} for the $n=0$ case. We will summarize the key idea first and then provide a detailed analysis of these conservation laws. \subsection{Key ideas} \label{5.1} $\overset{n,n}{F}_{ru}(\hat{x})$ and $\overset{n,n}{F}_{rv}(\hat{x})$ are charge densities localised on $S^{2}$ at $u = -\infty$ and $v = +\infty$ respectively. These two celestial spheres can also be understood as (future and past) boundaries of a three-dimensional de Sitter space representing space-like infinity of Minkowski (or more generally asymptotically flat) spacetime in a particular compactification \cite{romano}. In more detail: working with hyperbolic coordinates $(\tau,\rho,z,\bar{z})$, if we take $\rho \rightarrow \infty$ while keeping $\tau,z,\bar{z}$ fixed, we reach Lorentzian three-dimensional de Sitter space. We will refer to this (blow-up of) spatial infinity as $\mathcal{H}$. 
The advantage of working with this definition of spatial infinity is that it dovetails nicely with ${\cal I}^{\pm}$. Thus the boundary spheres at $\tau = \pm\infty$ are mapped onto $S^{2}$ at $u = -\infty$, $v = \infty$ respectively. In order to prove Eq. (\ref{cons}), we need to relate the fields at $\mathcal{H}$ with fields on the respective boundaries of ${\cal I}^{\pm}$. By analyzing the equations of motion at spatial infinity and assuming appropriate fall-offs of the fields as $u \rightarrow -\infty$ and $v \rightarrow \infty$, we relate fields at $\mathcal{H}$ with $\overset{n,n}{F}_{ru}(\hat{x})$ and $\overset{n,n}{F}_{rv}(\hat{x})$. We will thus first analyze the equations of motion at spatial infinity in section \ref{5.2}, and then in section \ref{5.3} show how to relate $\overset{n,n}{F}_{ru}(\hat{x})$ and $\overset{n,n}{F}_{rv}(\hat{x})$ with certain data at $\mathcal{H}$. \subsection{Maxwell equations at spatial infinity}\label{5.2} We introduce hyperbolic coordinates in the region $r>|t|$, \begin{equation} x^\mu = \rho Y^\mu(y), \quad Y^\mu(y) Y_\mu(y) = 1, \end{equation} where $\rho:= \sqrt{x^\mu x_\mu} = \sqrt{r^2 -t^2}$ and $y = y^\alpha$ are coordinates on the unit hyperboloid $\mathcal{H}$. The Minkowski line element takes the form \begin{equation} ds^2 = d \rho^2 + \rho^2 h_{\alpha \beta} d y^\alpha d y^\beta, \end{equation} with $h_{\alpha \beta}$ the unit hyperboloid metric. For concreteness we take $y^\alpha= (\tau,x^A)$, $\tau=t/\sqrt{r^2-t^2}$ as coordinates on $\mathcal{H}$ so that \begin{equation} Y^\mu = ( \tau, \sqrt{1+\tau^2} \; \hat{x}), \end{equation} \begin{equation} h_{\alpha \beta} d y^\alpha d y^\beta= - \frac{d \tau^2}{1+\tau^2}+(1+\tau^2) q_{AB} dx^A dx^B . 
\end{equation} Following \cite{beig}, we assume a $1/\rho$ expansion of the Maxwell field near spatial infinity: \begin{eqnarray} F_{ \rho \alpha}(\rho,y) & =& \frac{1}{\rho} \sum_{n=0}^{\infty} \frac{1}{\rho^n} \overset{n}{F}_{\rho \alpha }(y) \label{Frhoalphaexp}\\ F_{\alpha \beta}(\rho,y) & =& \sum_{n=0}^{\infty} \frac{1}{\rho^n} \overset{n}{F}_{\alpha \beta}(y) . \end{eqnarray} For the scalar field, we assume that its fall-off behavior at spatial infinity is faster than $O(\rho^{-m})\ \forall\ m$. This assumption is motivated by the fact that we do not consider the soft limit of the charged particles. Under this assumption the field equations reduce to source-free Maxwell equations at spatial infinity. We can consistently do this because there is a clear separation between the source and the electromagnetic radiation. In the case of non-abelian gauge theories or perturbative gravity at second order there is no such separation and hence our analysis has to be generalized. Note that for massive charged particles the assumption can be proved, since in a flat background massive fields decay exponentially fast at spatial infinity. Under this expansion, the source-free Maxwell equations take the form\footnote{We remind the reader that in this section we are working with real (rather than self-dual) field strengths.} \begin{equation} \mathcal{D}^\alpha \overset{n}{F}_{\rho \alpha} =0, \quad \mathcal{D}^\beta \overset{n}{F}_{\alpha \beta} + n \overset{n}{F}_{\rho \alpha} =0 \label{divFspi} \end{equation} \begin{equation} \partial_{[\alpha} \overset{n}{F}_{\beta \gamma]}=0, \quad \partial_\alpha \overset{n}{F}_{\rho \beta} - \partial_\beta \overset{n}{F}_{\rho \alpha}+n \overset{n}{F}_{\alpha \beta}=0, \label{bianchispi} \end{equation} where $\mathcal{D}_\alpha$ is the covariant derivative on $\mathcal{H}$. For $n=0$ the equations and their solutions are related to the leading soft photon theorem charges \cite{eyhe}. 
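As a consistency check on the hyperbolic foliation, one can verify symbolically that the Minkowski line element in the coordinates $(\rho,\tau,\theta,\phi)$ takes the form $ds^2 = d\rho^2 + \rho^2 h_{\alpha\beta}\, dy^\alpha dy^\beta$ with $h_{\alpha\beta}$ as above. A short sympy sketch (variable names are ours):

```python
import sympy as sp

rho, tau, th, ph = sp.symbols('rho tau theta phi', positive=True)

# Embedding x^mu = rho Y^mu(y), with Y^mu = (tau, sqrt(1+tau^2) xhat)
t = rho * tau
r = rho * sp.sqrt(1 + tau**2)
X = sp.Matrix([t,
               r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

eta = sp.diag(-1, 1, 1, 1)  # mostly-plus signature, so x.x = r^2 - t^2
J = X.jacobian(sp.Matrix([rho, tau, th, ph]))
g = sp.simplify(J.T * eta * J)  # induced metric in (rho, tau, theta, phi)

# rho^2 = x.x, i.e. Y.Y = 1 on the unit hyperboloid
assert sp.simplify((X.T * eta * X)[0] - rho**2) == 0

# ds^2 = d rho^2 + rho^2 [ -d tau^2/(1+tau^2) + (1+tau^2) q_AB dx^A dx^B ]
assert sp.simplify(g[0, 0] - 1) == 0
assert all(sp.simplify(g[0, i]) == 0 for i in (1, 2, 3))
assert sp.simplify(g[1, 1] + rho**2 / (1 + tau**2)) == 0
assert sp.simplify(g[2, 2] - rho**2 * (1 + tau**2)) == 0
assert sp.simplify(g[3, 3] - rho**2 * (1 + tau**2) * sp.sin(th)**2) == 0
```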
The idea is that the subleading charges will be related to the remaining values of $n$. For $n>1$ one can decouple the equations by combining the second equations of (\ref{divFspi}), (\ref{bianchispi}) to eliminate $\overset{n}{F}_{\alpha \beta}$. Using the identity $[\mathcal{D}^\beta,\mathcal{D}_\alpha] V_\beta = 2 V_\alpha$ one arrives at \cite{perng}: \begin{equation} \mathcal{D}^\alpha \overset{n}{F}_{\rho \alpha} =0, \quad [\mathcal{D}^2 +(n^2-2)] \overset{n}{F}_{\rho \alpha} =0, \quad n>0. \label{Espi} \end{equation} Once a solution to $\overset{n}{F}_{\rho \alpha}$ is found, one can obtain $\overset{n}{F}_{\alpha \beta}$ from the second equation in (\ref{bianchispi}). Our next task is to specify $\tau \to \infty$ fall-offs. Due to the divergence-free condition, the $\alpha=A$ and $\alpha=\tau$ fall-offs are not independent: If we assume $\overset{n}{F}_{\rho A} =O(\tau^h)$ then $\overset{n}{F}_{\rho \tau} = O(\tau^{h-3})$. When such asymptotics are inserted in the wave equation (\ref{Espi}) one finds $h= \pm n$. In the next subsection we show that our prescribed fall-offs at null infinity imply $h=-n$. Thus, at spatial infinity the problem to solve is (\ref{Espi}) with the $\tau \to \infty$ asymptotic condition \begin{equation} \overset{n}{F}_{\rho A}(\tau,\hat{x}) = \tau^{-n} \overset{n,0}{F}_{\rho A}(\hat{x})+ \ldots. \label{fallFtau} \end{equation} The leading sphere components $\overset{n,0}{F}_{\rho A}(\hat{x})$ play the role of `asymptotic free data' for the hyperboloid vector field $\overset{n}{F}_{\rho \alpha}$. On the other hand, the leading term of the $\alpha=\tau$ component, \begin{equation} \overset{n}{F}_{\rho \tau}(\tau,\hat{x}) = \tau^{-n-3} \overset{n,0}{F}_{\rho \tau}(\hat{x})+ \ldots. \label{Frhotaum0} \end{equation} is determined by the asymptotic divergence-free condition, \begin{equation}\label{rhoalph-rhoA} n \overset{n,0}{F}_{\rho \tau} + D^A \overset{n,0}{F}_{\rho A} =0. 
\end{equation} \subsection{Relating field expansions at null and spatial infinity} \label{5.3} As shown in appendix \ref{fallofftree}, in the limit $u \to -\infty$ the fall-off for the $1/r^{k+2}$ coefficient of $F_{ru}$ is given by \begin{equation} \overset{k}{F}_{ru}(u,\hat{x}) = u^k \sum_{l=0}^k \frac{1}{u^l} \overset{k,l}{F}_{ru} + \ldots. \label{uexp2} \end{equation} where the dots denote terms that fall off faster than any power of $1/u$. We now use this expansion to obtain the $\tau \to \infty$ fall-offs at spatial infinity. Combining (\ref{rexpFru}) with (\ref{uexp2}) we obtain the double sum expansion\footnote{In this section $\overset{k,l}{F}_{ru}$ denotes the coefficients of the real field strength $F_{ru}$ rather than its self-dual part $F^+_{ru}$. The two have the same fall-off behavior.} \begin{equation} F_{ru}(r,u,\hat{x})= \sum^\infty_{k=0} \sum_{l=0}^{k} (u/r)^{k+2} (1/u)^{l+2}\overset{k,l}{F}_{ru}(\hat{x}). \label{doublesumnull} \end{equation} We regard this as an expansion in the two small parameters: \begin{equation} |u/r| \ll 1 , \quad \text{and} \quad |1/u| \ll 1. \label{smallnull} \end{equation} For later comparison with the spatial infinity expansion, it will be convenient to rewrite (\ref{doublesumnull}) in a way that the $l$-sum appears first: \begin{equation} F_{r u}(r,u,\hat{x}) = \sum_{l=0}^\infty \sum_{k=l}^{\infty} (1/u)^{l+2} (u/r)^{k+2}\overset{k,l}{F}_{ru}(\hat{x}). \label{doublesumnull2} \end{equation} We now express the two small parameters appearing in (\ref{doublesumnull2}) in terms of the $(\rho,\tau)$ coordinates. From the change of coordinates, \begin{equation} r = \rho \sqrt{1+\tau^2}, \quad u = \rho(\tau - \sqrt{1+\tau^2}), \end{equation} one finds: \begin{equation} 1/u= - \frac{2 \tau}{\rho} \big(1+O(\tau^{-2}) \big), \end{equation} \begin{equation} u/r= -\frac{1}{2\tau^2}+O(\tau^{-4}). 
\end{equation} Substituting these in (\ref{doublesumnull2}) yields an expansion in the small parameters \begin{equation} \tau/\rho \ll 1 , \quad \text{and} \quad 1/\tau^2 \ll 1, \label{smallspi} \end{equation} that can be related to the spatial infinity expansion. Note that the $\rho$ dependence appears only in the $l$-sum. From the relation \begin{equation} F_{\rho \tau}= \frac{\rho}{\sqrt{1+\tau^2}} F_{ru} \end{equation} one finds the $\rho$ dependence is the same as in the expansion (\ref{Frhoalphaexp}): \begin{equation} F_{\rho \tau}(\rho,\tau,\hat{x}) = \frac{1}{\rho} \sum_{l=0}^{\infty} \frac{1}{\rho^l} \overset{l}{F}_{\rho \tau }(\tau,\hat{x}). \end{equation} Furthermore, for a given $1/\rho$ power, the dominant $\tau\to \infty$ term is determined by the $k=l$ summand. Specifically, one finds: \begin{equation} \overset{l}{F}_{\rho \tau}(\tau,\hat{x}) = \tau^{-l-3} \overset{l,l}{F}_{ru}(\hat{x})+ O( \tau^{-l-5}), \end{equation} as anticipated in Eq. (\ref{Frhotaum0}). We thus conclude that \begin{equation} \label{spinull1} \overset{n,0}{F^+}_{\rho \tau}(\hat{x})= \overset{n,n}{F}_{ru}(\hat{x}), \end{equation} where we included a $+$ superscript to indicate this is a $\tau \rightarrow + \infty$ coefficient\footnote{Not to be confused with the self-dual field. We apologize for the overlap of notation.}. The analogous analysis at $\tau \rightarrow -\infty$ yields \begin{equation} \label{spinull1v} \overset{n,0}{F^-}_{\rho\tau}(\hat{x})\ =\ (-1)^{n+1}\ \overset{n,n}{F}_{rv}(\hat{x}) . \end{equation} A similar analysis for the other components of the field strength (in the limit $\tau \rightarrow \pm\infty$) reveals that \begin{equation} \label{spinull2} \overset{n,0}{F}_{\rho A}(\hat{x}) = \overset{n-1,n}{F}_{r A}(\hat{x}) , \end{equation} where $\overset{n-1,n}{F}_{r A}$ is the $O(u^0)$ coefficient of $\overset{n-1}{F}_{r A}$. Eqs. 
(\ref{spinull1}), (\ref{spinull2}) (and the analogues for past infinity) are the key equations that will allow us to relate the future and past charge densities. Notice that relation (\ref{rhoalph-rhoA}) becomes, upon using (\ref{spinull1}), (\ref{spinull2}), equivalent to relation (\ref{NP3}) discussed in section \ref{NPsec}.\footnote{Recall in our conventions $F_{ab}= \frac{1}{2}(F^+_{ab}+F^-_{ab})$ and $F^-_{rz}=0=F^+_{r\bar{z}}$. Adding (\ref{NP3}) to its complex conjugated then leads to Eq. (\ref{rhoalph-rhoA}).} \subsection{Establishing the conservation} \label{six} From the perspective of spatial infinity it is natural to work with $\overset{n,0}{F}_{\rho A}$ ($\equiv \overset{n-1,n}{F}_{r A}$) as the charge density, since it represents `free data' for the differential equation (\ref{Espi}). The idea is to start with data at the future asymptotic boundary of $\mathcal{H}$ and evolve it backwards to the asymptotic past of $\mathcal{H}$. One may object that this conservation is rather trivial: One is simply casting the free field equations in hyperbolic coordinates, but for free fields the conservation should be trivial! The non-triviality arises from the strong fall-off conditions imposed at future and past null infinities. As shown in the previous subsection, these select $O(|\tau|^{-n})$ fall-offs on $\mathcal{H}$ which need not be satisfied for generic solutions to Eq. (\ref{Espi}). It turns out that this specific large $\tau$ decay is related to another consequence of the strong fall-offs at null infinity: that the (vector) spherical harmonic decomposition of ($\overset{n-1,n}{F}_{r A}$) $\overset{n,n}{F}_{ru}(\hat{x})$ starts at $l=n$. As shown below, the $O(|\tau|^{-n})$ fall-off together with the $l\geq n$ spherical harmonic property imply that solutions to Eq. (\ref{Espi}) satisfy specific parity conditions on $\mathcal{H}$ under inversion $Y^\mu \to - Y^\mu$. This parity property then automatically implies the conservation law. 
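The change of coordinates between $(r,u)$ and $(\rho,\tau)$, and the large-$\tau$ expansions of the small parameters $1/u$ and $u/r$ used in the matching of section \ref{5.3}, can be verified in a few lines of sympy (a sketch; names are ours):

```python
import sympy as sp

rho, tau = sp.symbols('rho tau', positive=True)
r = rho * sp.sqrt(1 + tau**2)
u = rho * (tau - sp.sqrt(1 + tau**2))
t = rho * tau

# Consistency of the coordinate change: rho = sqrt(r^2 - t^2) and u = t - r
assert sp.simplify(sp.sqrt(r**2 - t**2) - rho) == 0
assert sp.simplify((t - r) - u) == 0

# 1/u = -(2 tau / rho) (1 + O(tau^-2))
assert sp.limit((1 / u) / (-2 * tau / rho), tau, sp.oo) == 1
# u/r = -1/(2 tau^2) + O(tau^-4); the next term is 3/(8 tau^4)
assert sp.limit((u / r) * tau**2, tau, sp.oo) == -sp.Rational(1, 2)
assert sp.limit((u / r + 1 / (2 * tau**2)) * tau**4, tau, sp.oo) == sp.Rational(3, 8)
```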
Our strategy is to find an explicit `boundary to bulk' Green's function that solves (\ref{Espi}) and satisfies all the aforementioned properties. A first candidate can be found by generalizing the Green's functions used to extend superrotation vector fields to time-infinity \cite{massivebms}. This leads to the following family of Green's functions: \begin{equation} \label{candidate0} G_A^\alpha(\hat{q},y) \sim (Y \cdot q)^{-(n+2)} J^{\mu \nu}_\alpha(Y) L_{\mu \nu}^B(q), \end{equation} where $J^{\mu \nu}_\alpha$ and $L_{\mu \nu}^B$ are the angular momentum vector fields on $\mathcal{H}$ and $S^2$ respectively, \begin{equation} J^{\mu \nu}_\alpha(Y) = Y^\mu \mathcal{D}_\alpha Y^\nu - (\mu \leftrightarrow \nu), \end{equation} \begin{equation} L_{\mu \nu}^A(q) = q_\mu D^A q_\nu - (\mu \leftrightarrow \nu). \end{equation} With the help of identities given in appendix \ref{idsH} one can readily verify that (\ref{candidate0}) satisfies Eq. (\ref{Espi}) (with respect to the $y$ variable). It is also not difficult to verify that solutions constructed from (\ref{candidate0}) have the `wrong' $O(\tau^n)$ fall-offs.\footnote{We expect the Green's functions (\ref{candidate0}) can be used to define \emph{smearing fields} which would allow one to extend the charges $Q_n[\epsilon]$ to spatial infinity, as in the $n=0$ case \cite{eyhe}. We leave for future work the study of smeared charges at spatial infinity. See also \cite{perng} for a set of closely related charges.} To look for the Green's function with the `correct' $O(\tau^{-n})$ fall-offs, we seek to replace $(Y \cdot q)^{-(n+2)}$ in (\ref{candidate0}) by another homogeneous function $f(Y \cdot q)$. There are two independent homogeneous functions $f(s)$ such that $f(\lambda s)= \lambda^{-{(n+2)}} f(s)$: $f(s) \propto s^{-(n+2)}$ and $f(s) \propto \delta^{(n+1)}(s)$ (the $(n+1)$-th derivative of the Dirac delta function). We thus consider the following ansatz for the solution to Eq. 
(\ref{Espi}):\footnote{The ansatz (\ref{candidate}) was also inspired by the integral representation of solutions to free Maxwell equations used by Herdegen, see e.g. \cite{herdegen}.} \begin{equation} \overset{n}{F}_{\rho \alpha}(\tau,\hat{x}) = \frac{1}{2} \int d^2 \hat{q} \, \delta^{(n+1)}(Y \cdot q) J^{\mu \nu}_\alpha(y) L_{\mu \nu}^B(\hat{q}) \overset{n}{V}_B(\hat{q}), \label{candidate} \end{equation} where $\overset{n}{V}_A(\hat{q})$ is an arbitrary sphere vector field. With the help of the identities given in appendix \ref{idsH} (together with the Dirac delta identities $s^2 \delta^{(n+2)}(s) = (n+2) (n+1) \delta^{(n)}(s)$ and $s \delta^{(n+1)}(s) = - (n+1) \delta^{(n)}(s)$) one can verify that (\ref{candidate}) indeed satisfies (\ref{Espi}). Studying the $\tau \to \infty$ fall-off of (\ref{candidate}) is more subtle and we leave it to appendix \ref{fallt}. We find there that (\ref{candidate}) has the correct fall-offs (\ref{fallFtau}), (\ref{Frhotaum0}) with\footnote{The operator $ \Delta(n,1) $ acting on covectors is given as in the definition for scalars, Eqs. (\ref{defDeltan}), (\ref{defDeltanm}) but with $\Delta$ replaced by $\Delta -1$ so that $D^A \Delta(n,1) V_A = \Delta(n,1) D^A V_A$.} \begin{equation} \overset{n,0}{F}_{\rho A} = - n 2 \pi (-1)^{n+1} \Delta(n,1) \overset{n}{V}_A \label{fallt1} \end{equation} \begin{equation} \overset{n,0}{F}_{\rho \tau} = 2 \pi (-1)^{n+1} \Delta(n,1) D^B \overset{n}{V}_B. \label{fallt2} \end{equation} These equations give the relation between our actual data at the asymptotic future of $\mathcal{H}$ and the auxiliary field $\overset{n}{V}_A$. To write the solution in terms of our data we need to invert these relations, i.e. express $\overset{n}{V}_A$ in terms of $\overset{n,0}{F}_{\rho A}$. This can be done provided $\overset{n,0}{F}_{\rho A}$ has a vector spherical harmonic expansion starting at $l=n$ (corresponding to a spherical harmonic expansion of $\overset{n,0}{F}_{\rho \tau}$ starting at $l=n$). 
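Two of the ingredients above can be checked mechanically. The Dirac delta identities follow by pairing with a test function (we take $f(s)=e^{as}$, for which all derivatives at $s=0$ are powers of $a$), and the invertibility of $\Delta(n,1)$ is tied to the fact that the scalar operator $\Delta(n,1)$ annihilates spherical harmonics with $l<n$: using $\Delta Y_{lm}=-l(l+1)Y_{lm}$, the factor $\Delta_{l+1}$ in the product vanishes. A sympy sketch (names are ours; we check the scalar version of $\Delta(n,1)$, the covector version being the same with $\Delta \to \Delta - 1$):

```python
import sympy as sp

s, a, L = sp.symbols('s a L')  # L stands in for the sphere Laplacian Delta

def pair(prefactor, m):
    # Distributional pairing <prefactor * delta^(m)(s), f> with f = exp(a s):
    # equal to (-1)^m d^m/ds^m [prefactor * f] evaluated at s = 0.
    return sp.expand((-1)**m * sp.diff(prefactor * sp.exp(a * s), (s, m)).subs(s, 0))

for n in range(0, 5):
    # s^2 delta^(n+2)(s) = (n+2)(n+1) delta^(n)(s)
    assert pair(s**2, n + 2) == sp.expand((n + 2) * (n + 1) * pair(1, n))
    # s delta^(n+1)(s) = -(n+1) delta^(n)(s)
    assert pair(s, n + 1) == sp.expand(-(n + 1) * pair(1, n))

def Delta_n(k):
    return -sp.Rational(1, 2 * k) * (L + k * (k - 1))

def Delta_prod(n):
    out = sp.Integer(1)
    for k in range(1, n + 1):
        out *= Delta_n(k)
    return out

# Delta(n,1) annihilates Y_{lm} for l < n (substitute the eigenvalue -l(l+1)),
# but not for l = n, so it is invertible on the l >= n modes
for n in range(1, 6):
    for l in range(0, n):
        assert Delta_prod(n).subs(L, -l * (l + 1)) == 0
    assert Delta_prod(n).subs(L, -n * (n + 1)) != 0
```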
The $l \geq n$ property is precisely the one that follows from the equivalence with the soft theorem charges (it is also responsible for the vanishing of the NP charges, see section \ref{NPsec}). We thus conclude that (\ref{candidate}) gives the correct solution to our problem. Since (\ref{candidate}) is even under $Y^\mu \to -Y^\mu$, we have that the leading $\tau \to \pm \infty$ coefficients of $\overset{n}{F}_{\rho \tau}$ satisfy \begin{equation} \label{consspi} \overset{n,0}{F^+}_{\rho \tau}(\hat{x}) = (-1)^{n+1} \overset{n,0}{F^-}_{\rho \tau}(-\hat{x}) \end{equation} where the $\pm$ superscripts distinguish the coefficients from the $\tau \to + \infty$ and $\tau \to - \infty$ expansions. This relation, together with Eqs. (\ref{spinull1}) and (\ref{spinull1v}), implies (\ref{cons}). \section{Conclusions and outlook} In this paper we have shown that for tree level scattering processes, there exists an infinite hierarchy of conservation laws such that at each level $n$ of the hierarchy there is an infinite dimensional family of conserved charges $Q_{n}[\epsilon_n]$ labeled by functions on the sphere $\epsilon_n(\hat{x})$.\footnote{For $\epsilon_n=Y_{l,m}$ (a spherical harmonic of order $l$) the charge vanishes if $l < n$. For $l=n-1$ the expression coincides with the $(n-1)$-th Newman-Penrose charge, but it vanishes due to our assumed $|u| \to \infty$ fall-offs.} For $n= 0,1$ these charges are the well-known asymptotic charges associated to the leading and sub-leading soft photon theorems. Our analysis divorced the derivation of asymptotic charges from the paradigm of large gauge transformations. In fact, as argued in \cite{subleading}, we do not expect the charges corresponding to levels $n \geq 2$ to be associated to any large gauge transformations. These are intrinsically boundary charges and are purely a consequence of the fall-off behavior of the radiative data and equations of motion of the theory at infinity. 
A natural question is what kind of factorization theorems arise from Ward identities associated to charges with $n \geq 2$. We showed that the Ward identities associated to $n=2$ charges are equivalent to the sub-subleading soft photon theorem and conjectured that this equivalence holds $\forall\ n$. That is, Ward identities associated to $Q_{n}[\epsilon_n]$ are equivalent to sub-$n$ soft photon theorems. Note that these sub-$n$ soft theorems only capture the factorized piece of the $\omega^{n-1}$ coefficient in the $\omega \to 0$ expansion of the amplitude. For $n \geq 2$ there are non-factorized terms that are not seen by these charges. Beyond the obvious open issue of proving the aforementioned conjecture, there are several open questions which emerge out of this work and which have not been addressed in this paper. We expect that our analysis can easily be generalized to tree-level amplitudes in perturbative gravity. In fact, an interesting puzzle in this direction concerns the soft-exactness of tree-level MHV amplitudes in pure gravity \cite{huang}. That is, in the case of gravity MHV amplitudes, the sub-$n$ soft limit exactly factorizes and there is no remainder $R_{n}$. Hence we expect the asymptotic charges to constrain the MHV amplitudes completely, as opposed to only in the infrared sector. A related question concerns the algebra of charges $Q_{n}[\epsilon_n]$. If these charges do form an algebra, then the quantization of this infinite dimensional algebra could shed interesting light on the structure of gauge theories. Once loop corrections are taken into account, the story changes completely beyond the leading order in the soft limit. As shown in \cite{ashoke-biswajit}, the sub-leading soft photon theorem receives corrections at one loop and is associated with terms proportional to $\ln\omega$ as opposed to $\omega^{0}$. 
Such a soft expansion clearly implies that the large-$u$ fall-off behavior that we have assumed for tree level scattering data breaks down, and one needs to consider a slower fall-off where the radiative field contains $\frac{1}{u}$ terms as $\vert u\vert \rightarrow \infty$ \cite{ashoke-memory}. Whether such soft theorems with logarithmic corrections can be derived from asymptotic charges remains to be seen. \section{Acknowledgements} We are grateful to Ashoke Sen for raising the issue of proving the classical conservation law associated to the subleading soft photon theorem, which led us to this inquiry, as well as for discussions on the infinity of soft theorems. We are also grateful to Amitabh Virmani for urging us to look at the relationship between asymptotic charges and Newman-Penrose charges. AL would like to thank IISER, Pune and ICTS, Bangalore for their hospitality during the course of this project. MC would like to thank the Center for Theoretical Physics at Columbia University for hospitality during the final stages of this project.
\section{Inference Comparison (Continued)} \section{Derivations} \label{sec:ap-derivations} \subsection{Acceptance probability} \label{sec:ap-accept} For computing the acceptance probability in the MH scheme, we obtain \begin{eqnarray*} \frac {\mathcal{P}'} {\mathcal{P}} \frac {q(\varphi_n \mid \varphi_n')} {q(\varphi_n' \mid \varphi_n)} &=& \frac {g \left( \mathbf{x}_n, \lambda_{n} \mid \bm{v}_n', s_n', \m{B}', \Theta, \bm{\eta} \right) p(\m{V}') p(\m{B}') } {g \left( \mathbf{x}_n, \lambda_n \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right) p(\m{V}) p(\m{B}) } \frac {q(\bm{v}_n \mid \bm{v}_n') q(\m{B}) } {q(\bm{v}_n' \mid \bm{v}_n) q(\m{B}') } \\ &=& \frac {g \left( \mathbf{x}_n, \lambda_{n} \mid \bm{v}_n', s_n', \m{B}', \Theta, \bm{\eta} \right) p(\bm{v}'_n \mid \m{V} \setminus \{\bm{v}_n\}) p(\m{V} \setminus \{\bm{v}_n\})} {g \left( \mathbf{x}_n, {\lambda}_n, \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right) p(\bm{v}_n \mid \m{V} \setminus \{\bm{v}_n\}) p(\m{V} \setminus \{\bm{v}_n\})} \\ && \times \frac {q(\bm{v}_n \mid \bm{v}_n')} {q(\bm{v}_n' \mid \bm{v}_n)} \\ &=& \frac {g \left( \mathbf{x}_n, \lambda_{n} \mid \bm{v}_n', s_n', \m{B}', \Theta, \bm{\eta} \right)} {g \left( \mathbf{x}_n, \lambda_{n} \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right) } \, \end{eqnarray*} as $p(\bm{v}_n \mid \m{V} \setminus \{\bm{v}_n'\}) = q(\bm{v}_n \mid \bm{v}_n') = \mathrm{nCRP}(\alpha)$, likewise for $p(\bm{v}'_n \mid \cdot)$ and $q(\bm{v}_n' \mid \bm{v}_n)$. \subsection{Sampling $\bm{\eta}_z$} First, we look at \begin{align*} (\lambda_{n} + C \zeta_{n \ell z})^2 &= (\lambda_{n} + C \epsilon_{0} - C (\bm{\eta}_{v_{n \ell}} - \bm{\eta}_{z})^\top \mathbf{x}_n)^2 \\ &= (\lambda_{n} + C \epsilon_{0})^2 - 2 C (\lambda_{n} + C\epsilon_{0}) (\bm{\eta}_{v_{n \ell}} - \bm{\eta}_{z})^\top \mathbf{x}_n \\ &\quad + C^2 (\bm{\eta}_{v_{n \ell}}^\top \mathbf{x}_n - \bm{\eta}_{z}^\top \mathbf{x}_n)^2 \\ &= const. 
- 2 C (\lambda_{n} + C\epsilon_{0}) \bm{\eta}_{v_{n \ell}}^\top \mathbf{x}_n + 2 C (\lambda_{n} + C\epsilon_{0}) \bm{\eta}_{z}^\top \mathbf{x}_n \\ &\quad + C^2 \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{v_{n \ell}} - 2 C^2 \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{z} + C^2 \bm{\eta}_{z}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{z} \, . \end{align*} We point out that $\{n \mid v_{n \ell} = z\} \cap \{n \mid s_{n 2} = z\} = \emptyset$ according to our constraint set. Hence, we need to sum up \begin{align*} &\sum_{n: v_{n \ell} = z~||~s_{n 2} = z} - \frac {(\lambda_{n} + C \zeta_{n \ell s_{n 2}})^2} {2 \lambda_{n}} \\ &= {C^2} \left \{ \sum_{n: v_{n \ell}=z} \frac 1 {\lambda_{n}} \left ( \left ( \frac {\lambda_{n}}{C} + \epsilon_{0} \right ) \mathbf{x}_n^{\top} + \bm{\eta}_{s_{n 2}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \right ) \right. \\ &\quad \left. + \sum_{n: s_{n 2}=z} \frac 1 {\lambda_{n}} \left ( - \left ( \frac {\lambda_{n}}{C} + \epsilon_{0}\right ) \mathbf{x}_n^{\top} + \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \right ) \right \} \bm{\eta}_z \\ &\quad - \frac 1 2 \bm{\eta}_z^{\top} \left \{ C^2 \sum_{n} \mathds{1}(v_{n \ell} = z~||~s_{n 2} = z) \frac {\mathbf{x}_n \mathbf{x}_n^{\top}} {\lambda_{n}} \right \} \bm{\eta}_z + const. \,. \end{align*} \section{Labels for \texttt{Animals}} \begin{table} \centering \caption{Manual labels for \texttt{Animals}} \include{animal_labels} \end{table} \end{appendices} \section{Conclusion} In this work, we propose to apply posterior regularisation to the BHMC model in order to add \emph{max-margin} constraints on the nodes of the hierarchy. We detail the modelling and inference procedures. The experimental study has shown its advantages over the original BHMC model and achieved the expected improvements. We expect this method can be employed in a broader range of Bayesian tasks. 
One future direction is to develop a variational inference approach for the regularised framework. It will also be interesting to investigate other penalty functions and features. Meanwhile, we should seek to extend the solution to handle large-scale problems. \section{Experimental Study} \label{sec:experiment} We carry out an empirical study using the datasets evaluated in~\citep{Huang2021}, on which hierarchies can be intuitively interpreted. We refer to these as the \texttt{Animals}~\citep{kemp2008discovery} and \texttt{MNIST-fashion}~\citep{xiao2017fmnist} datasets. \texttt{MNIST-fashion} is sub-sampled to $100$ items. Principal Component Analysis~\citep{hastie2009elements} is employed to reduce the dimension of the data to 7 and 10 respectively for the two datasets. \subsubsection*{Sensitivity analysis} This analysis focuses on the regularisation parameters $C$ and $\epsilon_0$, for which hyperpriors are not appropriate (while the hyperparameter for ${\bm{\eta}}$ can have hyperpriors). From a statistical point of view, if $C$ is too large then the regularisation term will dominate the pseudo likelihood; that is, the clustering should be close to a uniform assignment. As $C$ approaches $0$ (note that setting $C=0$ exactly would break this solution), the model can be regarded as the original BHMC. The analysis aims to explore \begin{itemize} \item how sensitive the hyperparameters are; \item whether the algorithm can achieve improvements within some hyperparameter settings. \end{itemize} Thus, we examine two simple measures: 1) the expected average inner empirical $L_2$ distance within the nodes over multiple simulations, and 2) the average centroid $L_2$ distance between the siblings. For simplicity, we name the ``inner'' distance AID and the ``outer'' distance AOD. For each hyperparameter configuration, we repeat the simulation $50$ times and report $\mathrm{AID}$ as the expectation over $\mathrm{AID}_{single}$; likewise for AOD. 
Specifically, \begin{align*} \mathrm{AID}_{single} &= \frac 1 M \sum_{z} \frac {2} {N_z (N_z -1)} \sum_{ \mathbf{x}_{n'},\mathbf{x}_{n} \in z} \mathds{1}(n > n') \lVert \mathbf{x}_{n} - \mathbf{x}_{n'} \rVert^2_2 \, , \end{align*} where $M$ is the total number of non-root nodes in the hierarchy. Then, let $\mathbf{x}^{(c)}_{z_m}$ denote the centroid of a node $m$, where $\mathbf{x}^{(c)}_{z_m} = \frac{1}{|z_m|} \sum_{\mathbf{x}_n \in z_m} \mathbf{x}_n $. Also let $N_{sibs}$ be the number of sibling pairs under the same parent. \begin{align*} \mathrm{AOD}_{single} &= \frac{1}{N_{sibs}} \sum_{z: z \ne z_0} \sum_{z' \in \mathcal{S}(z)} \left \lVert \mathbf{x}^{(c)}_{z} - \mathbf{x}^{(c)}_{z'} \right \rVert_2^2 \,. \end{align*} We emphasise that a random tree may have only one child under each parent (namely, a singular path), in which case AOD is undefined. We therefore exclude such cases which, fortunately, are rare. Finally, we consider the two measures together, as any single measure could be affected by other factors; e.g. AID tends to be larger when the tree has fewer (and hence larger) nodes. In summary, we would like to obtain a lower AID and a higher AOD. 
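As a concrete reading of the two measures, the sketch below computes $\mathrm{AID}_{single}$ and $\mathrm{AOD}_{single}$ for a toy hierarchy. It is a minimal illustration under assumed input conventions (each node given as a list of data vectors, each sibling group as a list of nodes; singleton nodes are skipped since their inner distance is undefined), not the authors' implementation.

```python
import numpy as np

def aid_single(nodes):
    """AID: mean over nodes of the average squared pairwise
    L2 distance within each node (singleton nodes skipped)."""
    per_node = []
    for pts in nodes:
        pts = np.asarray(pts, dtype=float)
        n = len(pts)
        if n < 2:
            continue
        pair_d = [np.sum((pts[i] - pts[j]) ** 2)
                  for i in range(n) for j in range(i)]
        per_node.append(2.0 / (n * (n - 1)) * np.sum(pair_d))
    return float(np.mean(per_node))

def aod_single(sibling_groups):
    """AOD: mean squared L2 distance between the centroids of
    sibling pairs under the same parent."""
    pair_d = []
    for group in sibling_groups:
        cents = [np.asarray(pts, dtype=float).mean(axis=0) for pts in group]
        for i in range(len(cents)):
            for j in range(i):
                pair_d.append(np.sum((cents[i] - cents[j]) ** 2))
    return float(np.mean(pair_d))
```

For a well-separated hierarchy we want `aid_single` low and `aod_single` high, exactly as argued above.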
\begin{figure}[!t] \centering \includegraphics[scale=0.3]{Figures/animals_conv_one_piece} \caption{Convergence analysis on \texttt{Animals}} \label{fg:conv_animals} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.45]{Figures/sens_animals_contour} \caption{Sensitivity analysis on \texttt{Animals}} \label{fg:sens_animals} \end{figure} \begin{figure} \centering \subfloat[Sample 1]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.0_0.0_1} } ~~ \subfloat[Sample 2]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.0_0.0_2} } \\ \subfloat[Sample 1: $C=0.1, \epsilon_0=0.01$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.1_0.01_1} } ~~ \subfloat[Sample 2: $C=0.1, \epsilon_0=0.01$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.1_0.01_2} } \\ \subfloat[Sample 1: $C=0.1, \epsilon_0=1$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.1_1.0_1} } ~~ \subfloat[Sample 2: $C=0.1, \epsilon_0=1$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_0.1_1.0_2} } \\ \subfloat[Sample 1: $C=1, \epsilon_0=1$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_1.0_1.0_1} } ~~ \subfloat[Sample 2: $C=1, \epsilon_0=1$]{ \includegraphics[scale=0.37]{Figures/animals_candidate_mcl_1.0_1.0_2} } \caption{Examples of output} \label{fg:animal_tree_examples} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.36]{Figures/animals_fmeasure} \caption{F-measure comparison for \texttt{Animals}} \label{fg:animals_fmeasure} \end{figure} \subsection{Animals data} First of all, we fix the hyperparameters to $\alpha=0.4, \gamma=1, \gamma_0=0.85, L=3, \nu_0=1, G=\N(\cdot, \m{I}), H=\N(\bm{0}, \m{I})$ which roughly follows the settings in~\citep{Huang2021}. \paragraph{Convergence analysis} Fig.~\ref{fg:conv_animals} shows the complete data likelihood (CDL) and the pseudo marginal complete data likelihood (PCDL). Each setting is run with $15,000$ iterations. 
All of the plots show that both statistics increase to a certain level and then oscillate as more iterations are carried out. Clearly, the variable $C$ is the factor that influences the range of change for the PCDL. When $C=0.01$, the PCDL and CDL are quite similar; however, for $C=1$, the fluctuations of the PCDL become far stronger. Some of the plots show that, even when the CDL appears smooth, the PCDL still fluctuates strongly. This implies that the regularisation searches the space more effectively while the CDL is kept at a rather consistent level. \paragraph{Sensitivity analysis} Fig.~\ref{fg:sens_animals} depicts the sensitivity analysis for the two hyperparameters respectively. Again, each setting is run with $15,000$ iterations. The tree in the last iteration is selected for computing the scores, and is thus a random choice in some sense. Unsurprisingly, the performance does not vary linearly with the hyperparameters. We find that there are certain settings for which the RBHMC achieves both a better AID and a better AOD, e.g. $C=1$ and $\epsilon_0=1$. Meanwhile, a number of choices lead the RBHMC to outperform the original BHMC model. We observe that hyperparameter selections within a rather large numeric range can provide improved performance. Roughly, larger $C$ and $\epsilon_0$ lead to stronger regularisation. Generally, $C$ is shown to be the more influential hyperparameter, yet setting a suitable $\epsilon_0$ is still important. \paragraph{Case analysis} In this analysis, we run a few hyperparameter settings with 5 chains each. For each chain, we discard the first $5,000$ iterations as burn-in, and report the hierarchy with the highest PCDL among the following $10,000$ draws. For the BHMC, we report the one with the highest CDL. As learned from Fig.~\ref{fg:sens_animals}, $(C=1,\epsilon_0=1)$ and $(C=0.01,\epsilon_0=1)$ are two good pairs of hyperparameter values. 
Therefore, we report two trees (from the first two MCMC chains) generated under these configurations and two trees from the BHMC model. We can see that BHMC performs fairly well, though it still has some flaws in certain cases, e.g. sample 1 of the BHMC. For randomised clustering, it is understandable to have some misplaced items; the RBHMC trees nevertheless look better under the randomised environment. In~\citep{Huang2021}, the experiments show that, even though BHMC performs very well in the lower levels, it might obtain a random combination of clusters in higher levels, which is due to the nature of the HDPMM. We would like to check whether, with a good set of hyperparameters, RBHMC can achieve better performance than BHMC, particularly in higher levels, close to the root. We manually label the animals with the following classes: \emph{birds}, \emph{land mammals}, \emph{predators}, \emph{insects}, \emph{amphibians}, \emph{water animals}, \emph{mice}, and \emph{fish} (further labelling is attached as supplemental material). We then check the F-measure~\citep{steinbach2000comparison} against the clustering at each level. The RBHMC with $(C=0.1, \epsilon_0=1.0)$ and the BHMC are run with 10 chains respectively, in which $5,000$ burn-in runs and $10,000$ draws are carried out. Fig.~\ref{fg:animals_fmeasure} compares the F-measure by level for the two algorithms. The results at the first level show that, even though the RBHMC may sometimes perform worse than the BHMC, the distribution of scores for the RBHMC is skewed towards superior performance. For the later levels, the RBHMC performs even better, though our original expectation was that the two would be on par. 
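The per-level scores can be illustrated with a small sketch. The paper does not spell out the exact computation; the version below follows the common set-matching form attributed to \citet{steinbach2000comparison} (each ground-truth class takes its best F1 over clusters, weighted by class size), which may differ in details from the authors' implementation.

```python
from collections import Counter

def f_measure(labels, clusters):
    """Set-matching F-measure: for each ground-truth class take the
    best F1 over all clusters, then weight by the class size.
    `labels` and `clusters` are equal-length lists of class / cluster ids."""
    n = len(labels)
    class_sizes = Counter(labels)
    cluster_sizes = Counter(clusters)
    joint = Counter(zip(clusters, labels))  # co-occurrence counts
    score = 0.0
    for cls, n_cls in class_sizes.items():
        best = 0.0
        for clu, n_clu in cluster_sizes.items():
            n_both = joint[(clu, cls)]
            if n_both:
                prec, rec = n_both / n_clu, n_both / n_cls
                best = max(best, 2 * prec * rec / (prec + rec))
        score += (n_cls / n) * best
    return score
```

At level $\ell$, `clusters` would hold the node id on each datum's path at that level, and `labels` the manual classes above.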
\begin{figure}[!tp] \centering \includegraphics[scale=0.3]{Figures/fmnist_conv_one_piece} \caption{Convergence analysis on \texttt{MNIST-fashion}} \label{fg:conv_fmnist} \end{figure} \begin{figure}[!h] \centering \includegraphics[scale=0.45]{Figures/sens_fmnist_contour} \caption{Sensitivity analysis on \texttt{MNIST-fashion}} \label{fg:sens_fmnist} \end{figure} \begin{figure} \centering \subfloat[BHMC Sample]{ \includegraphics[scale=2.1]{Figures/fmnist_mcl_0.0_0.0_1} } \\ \subfloat[Sample: $C=0.001, \epsilon_0=0.01$]{ \includegraphics[scale=2.1]{Figures/fmnist_mcl_0.001_0.01_1} } \\ \subfloat[Sample: $C=0.001, \epsilon_0=1$]{ \includegraphics[scale=2.1]{Figures/fmnist_mcl_0.001_1.0_3} } \\ \subfloat[Sample: $C=1, \epsilon_0=1$]{ \includegraphics[scale=2.1]{Figures/fmnist_mcl_1.0_1.0_1} } \caption{Examples of output} \label{fg:fmnist_tree_examples} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.36]{Figures/fmnist_fmeasure} \caption{F-measure comparison for \texttt{MNIST-fashion}} \label{fg:fmnist_fmeasure} \end{figure} \subsection{MNIST-fashion data} For this dataset, part of the hyperparameters are set as $\alpha=0.2, \gamma=1.5, \gamma_0=0.85, L=4, \nu_0=1, G=\N(\cdot, \m{I}), H=\N(\bm{0}, \m{I})$. \paragraph{Convergence analysis} The CDL and RCDL of the algorithms for \texttt{MNIST-fashion} are shown in Fig.~\ref{fg:conv_fmnist}. We run five chains with $5,000$ burn-in runs and $15,000$ draws. In this example, we observe that the convergence of both the CDL and the RCDL is steadier. When $C=1$, the variance of the RCDL is larger than that obtained when $C=0.01$. However, all settings converge well. Note that this dataset has $100$ samples, which suggests that with more data the algorithm will have stabler convergence. \paragraph{Sensitivity analysis} We show the results in Fig.~\ref{fg:sens_fmnist}. Again, these experiments are run for $20,000$ iterations. 
Each pair of $C$ and $\epsilon_0$ is repeated 50 times, but the rare cases of a singular path are eliminated. The scale of hyperparameters for achieving improvements is biased towards smaller values (in comparison with \texttt{Animals}). In agreement with the results for \texttt{Animals}, the improvements change smoothly over the range of values. Moreover, it still illustrates that, within a certain range, we can obtain the notable improvements we expected. Again, the choice of $C$ affects the improvements more strongly, though a suitable value for $\epsilon_0$ remains important. \paragraph{Case analysis} We run the hyperparameter settings for 4 chains and show the plots of the generated trees in Fig.~\ref{fg:fmnist_tree_examples}. We provide a poorly performing example generated from the setting $C=1$ and $\epsilon_0=1$. Intuitively, with a larger $C$, the cluster allocations should be closer to uniform. In practice, this clustering is proposed by the nCRP, which has the ``rich get richer'' property, and thus we are more likely to obtain such a skewed hierarchy. For BHMC, it returns a reasonable hierarchy but, e.g., the third cluster from the LHS is less well clustered. In regard to RBHMC, we deliberately show the results with $C=0.001$, a value that yields improvements in Fig.~\ref{fg:sens_fmnist}. The RBHMC results look better clustered. The \texttt{MNIST-fashion} data has ground-truth category labels. We apply the same methodology as for the \texttt{Animals} data. Again, 10 chains with $5,000$ burn-in rounds and $10,000$ draws are simulated. From the result in Fig.~\ref{fg:fmnist_fmeasure}, we observe that at the higher levels the RBHMC has the potential to perform much better than the BHMC. The distributions of the RBHMC scores are also skewed towards better performance. For the lower levels, the RBHMC outperforms the BHMC even more. 
\section{Inference} We employ Markov chain Monte Carlo (MCMC) methods to infer the model. In this section, we first specify the essential properties of the PR model and then analyse the sampling details for the added variables. For most of the variables of the original model, sampling remains the same as in~\citep{Huang2021}. We adopt the term RBHMC for the (posterior) regularised BHMC. The algorithmic procedure is displayed in Algorithm~\ref{alg:regmh-sampling} and the remaining derivations are given in Sec.~\ref{sec:ap-derivations} of the Appendix. \subsection{Data augmentation} We denote by $s_{n} = (\ell, z)$ a tuple encapsulating the level $\ell$ and the sibling $z$ that maximises $\zeta_{n \ell z}$. Then $\zeta_{n s_n}$ represents the maximum $\zeta_{n \ell z}$ corresponding to that tuple. Using the index, we let $s_{n 1}$ return the level and $s_{n 2}$ return the corresponding node. This idea is inspired by slice sampling~\citep{chen2014robust}. We write \[ p(s_{n} = \textstyle (\ell, z) \mid \bm{v}_n, \bm{\eta}, \mathbf{x}_n) = \delta \left ( \argmax_{\ell, z \in \mathcal{S}(v_{n \ell})} \zeta_{n \ell z} \right ) \] where $\delta(\cdot)$ is the Dirac delta function. This shows that ${s}_n$ is determined once $\bm{v}_n$ is fixed. To sample the regularised term, we appeal to data augmentation~\citep{polson2011data,chen2014robust}. \citet{polson2011data} showed that, for any arbitrary real $\zeta$, \begin{align} \label{eq:max-dist} \exp\{-2C\max(0, \zeta)\} &= \int_0^\infty \frac 1 {\sqrt{2 \pi \lambda}} \exp\left\{ - \frac {(C\zeta + \lambda)^2} {2 \lambda} \right \} d \lambda \nonumber \\ &\propto \int_0^\infty p(\lambda \mid \zeta) d\lambda \end{align} where $\lambda \mid \zeta \sim \gig(1/2, 1, C^2\zeta^2)$, a Generalised Inverse Gaussian (GIG) distribution, defined as \begin{align} \label{eq:gig-pdf} \gig(x; \rho, a, b) \propto x^{\rho-1} \exp \left \{ - \left ( a x + {b/x} \right ) /2 \right \} \, . 
\end{align} Introducing a set of augmented variables $\lambda_n$, such that \begin{multline} \label{eq:pseudo_part} p(\lambda_{n} \mid \bm{v}_n, s_n, \mathbf{x}_n, \bm{\eta}) \propto \frac 1 {\sqrt{ \lambda_{n}}} \exp\left\{ - \frac {(C\zeta_{n s_n} + \lambda_{n})^2} {2 \lambda_{n}} \right \} \\ = \frac 1 {\sqrt{ \lambda_{n}}} \exp\left\{ - \frac {[C (\epsilon_{0} \mathds{1}(s_{n2} \ne v_{n s_{n1}}) - (\bm{\eta}_{\bm{v}_{n s_{n1}}} - \bm{\eta}_{s_{n2}})^{\top} \mathbf{x}_n) + \lambda_{n}]^2} {2 \lambda_{n}} \right \} \, , \end{multline} the objective of~\eqref{eq:obj} is identical to the marginal distribution of the augmented post-data posterior: \begin{align} \label{eq:augmented-obj} q(\mathbf{M}_0, \bm{\eta}, \m{s}, \bm{\lambda}) \propto \prod_n g \left( \mathbf{x}_n, {\lambda}_n \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right) p(\mathbf{M}_0, \bm{\eta}, \m{s}) \end{align} where $\m{s} = \{ {s}_n \}_{n = 1}^N$, likewise for $\bm{\lambda} = \{ \lambda_{n} \}_{n = 1}^N$. Additionally, the joint (pseudo) likelihood of $\mathbf{x}_n$ and $\lambda_n$ can be written as $g \left( \mathbf{x}_n, {\lambda}_n \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right) = p(\mathbf{x}_n \mid \bm{v}_n, \bm{\beta}_{v_{n L}}, \Theta) p(\lambda_{n} \mid \bm{v}_n, s_n, \mathbf{x}_n, \bm{\eta})$. The target posterior $q(\mathbf{M}_0, \bm{\eta})$ can be approached by sampling from $q(\mathbf{M}_0, \bm{\eta}, \m{s}, \bm{\lambda})$ and dropping the augmented variables. \subsection{Sampling $(\bm{v}_n, s_n)$} We appeal to the Metropolis-Hastings sampling for the path like that in~\citep{Huang2021}. Meanwhile, we propose $s_n$ along with $\bm{v}_n$. Let us denote $\varphi_n = (\bm{v}_n, s_n)$. For $s_n$, we obtain \[ s_{n} \mid \mathbf{x}_n, \bm{v}_n, \bm{\eta} = \argmax_{\ell, z \in \mathcal{S}(v_{n \ell})} \zeta_{n \ell z} \,. 
\] The MH scheme uses an acceptance probability $\mathcal{A} = \min \left(1,~ \frac {\mathcal{P}'} {\mathcal{P}} \frac {q(\varphi_n \mid \varphi_n')} {q(\varphi_n' \mid \varphi_n)} \right)$ where $\mathcal{P}$ denotes the posterior (including the regularisation) and primed variables denote the values under the new proposal. We obtain \begin{eqnarray*} \frac {\mathcal{P}'} {\mathcal{P}} \frac {q(\varphi_n \mid \varphi_n')} {q(\varphi_n' \mid \varphi_n)} &=& \frac {g \left( \mathbf{x}_n, \lambda_n \mid \bm{v}_n', s_n', \m{B}', \Theta, \bm{\eta} \right)} {g \left( \mathbf{x}_n, \lambda_n \mid \bm{v}_n, s_n, \m{B}, \Theta, \bm{\eta} \right)} \, . \end{eqnarray*} The derivation details are listed in Sec.~\ref{sec:ap-accept}. \subsection{Sampling $\bm{\eta}_z$} Let us consider $(\lambda_{n} + C \zeta_{n \ell z})^2$, which expands to \begin{multline*} (\lambda_{n} + C \zeta_{n \ell z})^2 = const. - 2 C (\lambda_{n} + C\epsilon_{0}) \bm{\eta}_{v_{n \ell}}^\top \mathbf{x}_n + 2 C (\lambda_{n} + C\epsilon_{0}) \bm{\eta}_{z}^\top \mathbf{x}_n \\ + C^2 \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{v_{n \ell}} - 2 C^2 \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{z} + C^2 \bm{\eta}_{z}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \bm{\eta}_{z} \, . \end{multline*} We hence sum these terms over all data, i.e. $-\sum_{n} \mathds{1}(v_{n \ell} = z ~||~ s_{n 2} = z) \frac {(\lambda_{n} + C \zeta_{n \ell s_{n 2}})^2} {2 \lambda_{n}}$, for a given $\bm{\eta}_z$. On the other hand, the canonical parametrisation of a multivariate Gaussian distribution can be written as \[ \N_{\rm canonical}(\bm{\eta}_z \mid \bm{\nu}_z, \m{\Lambda}_z) = \exp \left ( const. + \bm{\nu}_z^{\top} \bm{\eta}_z - \frac 1 2 \bm{\eta}_z^{\top} \m{\Lambda}_z \bm{\eta}_z \right ) \, . 
\] Therefore, incorporating the prior of $\bm{\eta}_z$, which is $\N(\m{0}, \nu_0^2 \m{I})$, we obtain \begin{eqnarray*} \bm{\nu}_z &=& {C^2} \left \{ \sum_{n: v_{n \ell}=z} \frac 1 {\lambda_{n}} \left( \left ( \frac {\lambda_{n}}{C} + \epsilon_{0}\right ) \mathbf{x}_n^{\top} + \bm{\eta}_{s_{n 2}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \right) \right. \\ && \left. - \sum_{n: s_{n 2}=z} \frac 1 {\lambda_{n}} \left( \left ( \frac {\lambda_{n}}{C} + \epsilon_{0}\right ) \mathbf{x}_n^{\top} - \bm{\eta}_{v_{n \ell}}^{\top} \mathbf{x}_n \mathbf{x}_n^{\top} \right) \right \} \\ \m{\Lambda}_z &=& {C^2} \sum_{n} \mathds{1}(v_{n \ell} = z~||~s_{n 2} = z) \frac {\mathbf{x}_n \mathbf{x}_n^{\top}} {\lambda_{n}} + \nu_0^{-2} \m{I} \, . \end{eqnarray*} These are the natural parameters, so we can sample $\bm{\eta}_z$ via \begin{align*} \bm{\eta}_z \mid \m{X}, \m{s}, \bm{\lambda}, \mathbf{M} \setminus \{ \bm{\eta}_z \} \sim \N \left( \m{\Lambda}_z^{-1} \bm{\nu}_z, \m{\Lambda}_z^{-1} \right) \, . \end{align*} \subsection{Sampling ${\lambda}_n$} Based on Eq.~\eqref{eq:gig-pdf}, we sample $\lambda_n \mid \zeta_{n s_n}$ from $\gig(1/2, 1, C^2 \zeta_{n s_n}^2)$ in our task. As shown in~\citep{polson2011data}, if $x \sim \gig(1/2, a, a/b^2)$ then $x^{-1} \sim \ig(\lvert b \rvert, a)$. Rather than sampling $\lambda_n$ directly, this fact allows us to instead sample the reciprocal of $\lambda_{n}$ from an Inverse Gaussian (IG) distribution: $\lambda_{n}^{-1} \sim \ig\left(| C\zeta_{n s_n} |^{-1}, 1 \right)$. 
\footnote{This mitigates the difficulty of finding a mature library supporting the GIG in some programming languages; however, in the authors' tests, only \texttt{R4.0.0} (compared with \texttt{Python3.8} and \texttt{Julia1.6}) avoided numerical inconsistency when applying the reciprocal sampling from the IG with very small $C$.} \begin{algorithm}[!ht] \caption{\textsc{RBHMC Sampling Procedure}} \label{alg:regmh-sampling} \While{not convergent}{ \ForEach{$n \in \textsc{Shuffle}(N)$}{ Sample a path $\bm{v}'_n$ and the corresponding $\bm{\beta}$ if needed \\ $s_{n}' \gets \argmax_{1 \le \ell \le L, z \in \mathcal{S}(v_{n \ell})} \zeta_{n \ell z}$ \tcp*{$s_n'$ is the tuple of some $\ell$ and $z$} \If(\tcp*[f]{$\varphi'_n = (\bm{v}'_n, {s}'_n)$}){$chance \sim \U(0, 1) < \mathcal{A}$}{ Accept $\varphi_n'$ and assign it to $\varphi_n$ } Sample ${\lambda}_{n} \mid \zeta_{n s_{n}}$ \\ } Sample $\m{c}, \m{B}, \Theta, \bm{\eta}$ \\ } \end{algorithm} \subsection{Output hierarchy} In~\citep{adams2010tree,Huang2021}, the trees with the highest likelihood are output. In general, for posterior estimation one would like to present the solution with the highest posterior over a finite number of MCMC draws. However, some works discuss replacing the highest posterior with other well-established criteria, e.g.~\citep{rastelli2018optimal,wade2018bayesian}. These works focus on flat clustering, and we leave exploring a better criterion for choosing the output hierarchy under the Bayesian setting as a future research challenge. We follow~\citet{adams2010tree,Huang2021} and still apply a complete data likelihood (CDL) criterion to select the output; for the regularised model we use the regularised complete data likelihood (RCDL), defined as \[ p(\m{X}, \m{c}, \m{V} \mid \mathbf{M} \setminus \{ \m{c}, \m{V} \}) \prod_n \exp\{-2C \max(0, \zeta_{n s_n})\} \, . 
\] Finally, any node with no siblings is merged with its parent for presentation. \section{Introduction} \label{sec:intro} Posterior regularisation (PR) is an emerging approach for handling Bayesian models with extra constraints. The framework is founded on minimising the Kullback-Leibler (KL) divergence between a variational solution and the posterior, in a constrained space. The works~\citep{Dudik2004,Dudik2007,altun2006unifying} first raised the idea of including constraints in maximum entropy density estimation and provided a theoretical analysis. Based on \emph{convex duality} theory, the optimal solution of the regularised posterior is found to be the original posterior of the model, discounted by the constrained pseudo likelihood introduced by the constraints. Later work founded on the idea of posterior constraints includes~\citep{Graca2009}, which proposed constraining the E-step of an Expectation-Maximization (EM) algorithm in order to impose feature constraints on the solution. Around the same time,~\citet{zhu2009maximum} proposed structural maximum entropy discrimination Support Vector Machines (SVM), which use the same idea. A number of extensions to various models have been proposed, including SVM, Matrix Factorisation, Topic Models, Classification, Regression and Clustering~\citep{zhu2011infinite,zhu2012medlda,xu2012nonparametric,zhu2014gibbs,he2020online,chen2014robust}, which can all be developed under the common framework of Posterior Regularisation~\citep{Dudik2004,Dudik2007,altun2006unifying,Ganchev2010,Zhu2014regbayes}. However, we notice that most works fall within the category of supervised learning. As mentioned in~\citep{Graca2009}, extending generative models to incorporate even small additional constraints is practically challenging. In comparison, posterior regularisation is a considerably simpler and more flexible means of learning models with extra information. 
In short, it maintains the properties of the original model while constraining the solution search to a restricted space. For example, the \emph{max-margin} Bayesian clustering (MMBC) model~\citep{chen2014robust} imposes a latent linear discriminant variable on a Dirichlet Process Mixture Model (DPMM) to ensure that clusters are better separated. The framework adds discounts to the likelihood for each individual datum, i.e. it adjusts the proportion of each cluster assignment for every datum. On the other hand, it does not alter any other part of the DPMM itself. In contrast, the Bayesian Repulsive Gaussian Mixture Model (BRGM) proposed by~\citet{xie2020bayesian} performs a similar job to MMBC but imposes the constraints on the prior rather than the likelihood. In particular, the BRGM imposes a distance function between the Gaussian clusters in the generative process. Designing such a novel model requires careful consideration of posterior consistency. Also, it loses exchangeability, which makes the inference procedures harder. So far, only a Gaussian model and a simple distance function have been applied and analysed; there is still much to explore in this stream of models with other distributions and distance functions. \subsection{Bayesian Hierarchical Clustering} Our focus is to enhance the Bayesian hierarchical mixture clustering (BHMC)~\citep{Huang2021} model. Here, each node in the hierarchy is associated with a distribution, with parameters connected through a transition kernel along paths in the hierarchy. Each datum is generated by choosing a path through the hierarchy to a leaf node and is then drawn from the distribution associated with the leaf. 
One significant property of the BHMC model is that the nodes in the hierarchy/tree\footnote{In this paper, we use the terms ``hierarchy'' and ``tree'' interchangeably.} are associated with a mixture model rather than a Gaussian distribution, which is the more common choice in Bayesian hierarchical clustering~\citep{adams2010tree,neal2003density,Knowles2015}. That is, the parent-to-node diffusion in the BHMC is a multi-level Hierarchical Dirichlet Process Mixture Model (HDPMM). With proper hyperparameter settings, the number of non-zero mixture components decreases for nodes deeper in the tree. We aim to apply posterior regularisation to the BHMC model. Consider that at any level of the hierarchy, each node in the BHMC corresponds to some mixture distribution. Inspired by a recent evaluation framework for hierarchies~\citep{huang2020partially}, we recognise that a good hierarchy should exhibit good separation, particularly at high levels, close to the root. This means that the mixture components associated with each node should be geometrically close together, relative to their separation from the components associated with the other nodes on the same level. However, we find that an inference procedure over the model can easily get stuck in a local mode where the components associated with high-level nodes are not sufficiently coherent. We attribute this to the fact that the data is generated only at the leaf nodes, so that it can take a long time for the data to influence the mixture components at the higher levels. Separation of the mixture components among nodes at the same level is a desirable feature of the posterior, and our solution is to regularise the model in order to focus the optimisation on distributions that exhibit this feature. 
In practice, regularisation has the effect of introducing an explicit data dependence on the choice of mixture components at every level, pushing the direct influence of the data up to all levels along every path in the hierarchy. Specifically, we add the \emph{max-margin} property to the nodes at each level, such that the decision boundary determining the correct node for each datum is maximally separated from the other nodes. As suggested in~\citep{Ganchev2010}, the constraints should contain features that appear nowhere in the original model; therefore, \emph{max-margin} constraints on the internal nodes and the BHMC make a good marriage under the framework of PR. \subsection{Why max-margin for HC?} \label{sec:motivation} HC is inherently an unsupervised learning problem that learns the latent labels for the data. In particular, it assigns a sequence of dependent labels to each datum. \citet{huang2020partially} suggest that a high-quality hierarchy should be separated sufficiently well under each parent so that a robot can easily identify where each datum belongs and then retrieve the datum. It is non-trivial to formulate such separating features, but we can appeal to an approximation by introducing \emph{max-margin} constraints into the original model. Starting from the simplest SVM binary classification, consider a datum $\mathbf{x}_n \in \m{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_N \}$ assigned a label $v_n \in \{-1, 1\}$. Now, we augment it with a latent discriminant variable $\bm{\eta}$ so that \begin{align*} v_n \left ( \bm{\eta}^\top \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 \implies \begin{cases} \left ( \bm{\eta}^{\top} \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 & v_n = 1 \\ -\left ( \bm{\eta}^{\top} \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 & v_n = -1 \end{cases} \, . 
\end{align*} Thus, $\bm{\eta}^\top \mathbf{x} + \eta_0 = 0$ defines the hyperplane that separates the two classes, and the data on the two sides lie at least $\epsilon/2$ units away from the hyperplane~\citep{hastie2009elements}. Simplifying the case, we set $\eta_0$ to a constant. Using a non-negative slack variable $\xi_n$ for each $n$, we write \begin{align*} v_n \left ( \bm{\eta}^\top \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 - \xi_n / 2 \, . \end{align*} However, when there are more than two classes, which is common for clustering, we need some modifications. One way is to associate a different $\bm{\eta}$ with each cluster. This $\bm{\eta}$ can then be employed to form a boundary separating the data in its cluster from the rest (namely, the one-vs-all approach~\citep{hsu2002comparison}). We treat the assigned cluster $v_n$ as the positive class and all other clusters as negative. Considering a class denoted by $z$ or $z'$, the optimisation will have the following constraints: \begin{multline} \label{eq:constraint-single-lvl} \begin{cases} \left ( \bm{\eta}_{z}^{\top} \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 - \xi_n / 2 & z = v_n \\ -\left ( \bm{\eta}_{z'}^{\top} \mathbf{x}_n + \eta_0 \right ) > \epsilon / 2 - \xi_n / 2 & \forall z', z' \ne v_n \end{cases} \\ \implies\mbox{constraint~space~for~datum}~n = \left \{ (\bm{\eta}_{v_n} - \bm{\eta}_z)^\top \mathbf{x}_n > \epsilon - \xi_n ~\big |~ \forall z \ne v_n \right \} \,. \end{multline} Our work extends this idea to a multilevel setting to fit the task of HC, where the constraint is applied to clusters under the same parent. Given the constraint space, we can then apply PR to restrict the search space of the Bayesian inference. \subsection{Contributions} In this paper, we analyse, for the first time, the modelling of PR with Bayesian HC. We show the improvements in the model brought by the extra \emph{max-margin} components added via PR. 
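To make the one-vs-all constraint in Eq.~\eqref{eq:constraint-single-lvl} concrete, the hypothetical helpers below (names are ours, not from the paper) evaluate $\epsilon - (\bm{\eta}_{v_n} - \bm{\eta}_z)^\top \mathbf{x}_n$ for each competing cluster $z$, where positive values are margin violations equal to the slack $\xi_n$ the datum would need, together with the exponential discount $\exp\{-2C\max(0,\zeta)\}$ that PR attaches to the pseudo likelihood.

```python
import numpy as np

def margin_violations(x, v, etas, eps):
    """For datum x assigned to cluster v, return
    eps - (eta_v - eta_z)^T x for every competing cluster z.
    Positive entries violate the max-margin constraint."""
    return {z: eps - float((etas[v] - eta_z) @ x)
            for z, eta_z in etas.items() if z != v}

def pr_discount(zeta, C):
    """Pseudo-likelihood discount exp{-2C max(0, zeta)}:
    1 when the constraint is satisfied, < 1 when violated."""
    return float(np.exp(-2.0 * C * max(0.0, zeta)))
```

A satisfied constraint yields a non-positive value (discount 1); the discount decays exponentially with the size of a violation, which is exactly how the regularisation steers the posterior towards separated nodes.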
\section{Preliminaries} \label{sec:model} We denote the tree by $\mathcal{H} = (Z_{\mathcal{H}}, E_{\mathcal{H}})$ where $Z_{\mathcal{H}}$ is the set of nodes and $E_{\mathcal{H}}$ is the edge set in the tree. For any $z, z' \in Z_{\mathcal{H}}$, we write $(z, z') \in E_{\mathcal{H}}$ if $z$ is the parent of $z'$ in the hierarchy. The paths can be collected through the set $\mathbb{Z}_{\tree}$ which contains all paths. A path is an ordered list of nodes and therefore denoted by $\bm{z} = \{z_0, z_1, \dots, z_L\}$ such that $\bm{z} \in \mathbb{Z}_{\tree}$ for all $\bm{z}$. Finally, we define $\mathcal{S}(z)$ to be the set of siblings of $z$ such that $\mathcal{S}(z) = \{z' \mid z' \ne z, (z^*, z') \in E_{\mathcal{H}} \mbox{ where $z^*$ is the parent of $z$}\}$. \paragraph{Dirichlet Process} The BHMC relies heavily on the Dirichlet Process (DP). It has two well-known forms, namely the Chinese Restaurant Process (CRP) and the stick-breaking process. In the CRP formulation, the $n$-{th} customer (datum) entering the restaurant selects a table $k$, which is recorded as $c_n$. The randomness of $\mathrm{CRP}(\alpha)$ is defined as \begin{align*} p(c_n = k \mid c_{1:n-1}, \alpha) = \begin{cases} {\frac{N_k}{n + \alpha}} & k \mbox{~exists} \\ {\frac{\alpha}{n + \alpha}} & k \mbox{~is~new} \end{cases} \end{align*} which means that the customer selects a table with probability proportional to the number of customers ($N_k$) sitting at table $k$, while there is still a possibility to start a new table. The stick-breaking process simulates the process of taking portions out from a remaining stick. Assume the original stick is of length $1$. Let $o_k$ be the proportion of the length to be cut off from the remaining stick, and $\beta_k$ be the actual length being removed. We can write \begin{align*} \beta_1 = o_1 \qquad \beta_k = o_k \prod_{i=1}^{k-1} (1-o_i) \qquad o_i \sim \B(1, \alpha) \, \end{align*} and equivalently one can observe $p(c_n = k \mid \alpha) = \beta_k$.
This is also known as the GEM distribution. \paragraph{Nested Chinese Restaurant Process} The CRP can be extended to a hierarchical random process, named the nested Chinese Restaurant Process (nCRP). Given a fixed hierarchy depth, $L$, the process recursively applies the CRP at each level. For a given datum $n$, we denote the path assignment label of the datum in the hierarchy by $\bm{v}_n$, such that $\bm{v}_n$ is assigned some $\bm{z} \in \mathbb{Z}_{\tree}$. \paragraph{Hierarchical Dirichlet Process Mixture Model (HDPMM)} We summarise the content of~\citep{Huang2021}, giving just the necessary details of the model. BHMC considers a hierarchy in which each node maintains a global book of mixture components, while keeping local weights for the components which can be completely different from each other. Let $H$ be a distribution, which we call the base measure. If $G$ is drawn from a DP of $G_0$ such that $G \sim \DP(\gamma, G_0)$ and $G_0 \sim \DP(\gamma_0, H)$, then $G$ is said to be an instance generated from a \emph{hierarchical} DP (HDP)~\citep{sudderth2006graphical}. In an HDPMM with finite $K$, the process to generate the $n$-{th} observation $x_n$ is as follows: \begin{align*} \beta_{1} \dots \beta_K \mid \gamma_0 &\sim \Dir(\gamma_0/K \dots \gamma_0/K) \qquad & \theta_1 \dots \theta_K \mid H & \sim H \\ \tilde{\beta}_{1} \dots \tilde{\beta}_{K} \mid \gamma, \bm{{\beta}} &\sim \Dir(\gamma \bm{\beta}) \qquad & c_n \mid \tilde{\bm{\beta}} &\sim \discrete(\tilde{\bm{\beta}}) \\ x_n \mid c_n, \theta_1 \dots \theta_K &\sim F(\theta_{c_n}) \end{align*} where $F(\cdot)$ is a sampling function that takes $\theta$ as its parameter. \paragraph{Bayesian Hierarchical Mixture Clustering} The model depicts the following generative process. Given an index $n$, the corresponding datum is generated through an nCRP with $L$ steps, starting from the root node.
Whenever a new node $z$ is created during this nCRP, local weights of the global components are sampled from the node's parent through an HDP and associated with the node. After moving $L$ times down the hierarchy, a leaf node is reached and the $n$-{th} observation is sampled using the local weights of the leaf node. \begin{figure} \centering \subfloat[{Example of BHMC}]{ \begin{forest} for tree={ fit=band } [$\{1\, 2\, 3\, 4\, 5\, 6\} \mid z_0 $ [$ \{ 2\, 4\, 5\} \mid z_1 $ [$ \{ 2 \, 4\} \mid z_4 $] [$\{ 5 \} \mid z_5 $] ] [$\{ 1\, 3\} \mid z_2 $ [$\{ 1\} \mid z_6 $] [$\{ 3\} \mid z_7 $] ] [$\{ 2\, 6 \} \mid z_3 $ [$\{ 2\, 6 \} \mid z_8 $] ] ] \end{forest} } \quad \subfloat[The flow for the LHS BHMC example]{ \begin{forest} for tree={ fit=band } [$\bm{\beta}_{z_0} $ [$\bm{\beta}_{z_1}$ [$\bm{\beta}_{z_4} $] [$\bm{\beta}_{z_5} $] ] [$\bm{\beta}_{z_2} $ [$\bm{\beta}_{z_6} $] [$\bm{\beta}_{z_7} $] ] [$\bm{\beta}_{z_3} $ [$\bm{\beta}_{z_8} $] ] ] \end{forest} } \caption{An example of the BHMC generative process} \label{fg:bhmc-example} \end{figure} We show a generative example with a finite setting in Fig.~\ref{fg:bhmc-example}, which has $K=6$ and $L=2$. The associated distributions are as follows: \begin{align*} &\bm{v}_n = \{z_0, v_{n 1}, v_{n 2} \} \sim \mbox{nCRP}(\alpha) \\ &\bm{\beta}_0 \sim \Dir\left ( \frac{\gamma_0}{K}, \dots, \frac{\gamma_0}{K} \right ) \quad \bm{\beta}_{v_{n 1}} \sim \Dir(\gamma \bm{\beta}_0) \quad \bm{\beta}_{v_{n 2}} \sim \Dir(\gamma \bm{\beta}_{v_{n 1}}) \\ &x_n \sim F(\theta_{c_n}) \quad c_n \sim \discrete(\bm{\beta}_{v_{n 2}}) \quad \theta_1, \ldots, \theta_K \sim H \, . \end{align*} For the infinite setting, we would instead sample $\bm{o} \sim \Gem(\gamma_0)$ and thus acquire $\bm{\beta}_0$. That is, $\beta_{z_0 k} = o_k \prod_{k'=1}^{k-1} (1 - o_{k'})$.
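The two DP views used above can be simulated in a few lines. This is a minimal sketch with assumed helper names (`crp_assignments`, `stick_breaking`), not the paper's code:

```python
import random

def crp_assignments(n_customers, alpha, rng):
    """Seat customers one by one: table k is chosen w.p. N_k/(n + alpha),
    a new table w.p. alpha/(n + alpha)."""
    counts, labels = [], []
    for n in range(n_customers):
        r = rng.random() * (n + alpha)
        chosen, acc = None, 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                chosen = k
                break
        if chosen is None:               # start a new table
            chosen = len(counts)
            counts.append(0)
        counts[chosen] += 1
        labels.append(chosen)
    return labels, counts

def stick_breaking(gamma0, K, rng):
    """Truncated GEM(gamma0): beta_k = o_k * prod_{i<k} (1 - o_i)."""
    betas, remaining = [], 1.0
    for _ in range(K):
        o = rng.betavariate(1.0, gamma0)
        betas.append(o * remaining)
        remaining *= 1.0 - o
    return betas

rng = random.Random(0)
labels, counts = crp_assignments(100, alpha=1.0, rng=rng)
betas = stick_breaking(gamma0=1.0, K=20, rng=rng)
print(len(counts), sum(counts), sum(betas))
```

The truncated stick-breaking weights sum to less than one; the remainder corresponds to the (unrepresented) tail of components.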
For any node, we can present $\bm{\beta}_z = \{ \beta_{z 1}, \ldots, \beta_{z K}, \beta_{z}^* \}$ where the last element $\beta_{z}^*$ is the probability of creating a new component. Then, for any $(z, z') \in E_{\mathcal{H}}$, we obtain $\bm{\beta}_{z'} \sim \Dir(\gamma \bm{\beta}_z)$. Algorithm~\ref{alg:generative} demonstrates a detailed generative process for the local variables, $\bm{v}_{n} = \bm{z}$ for some $\bm{z} \in \mathbb{Z}_{\tree}$. Also, we can write $\m{B}^{-} = \m{B} \backslash \{\bm{\beta}_{z_0}\}$ and thus obtain $p(\m{B} \mid \gamma, \gamma_0) \equiv p(\m{o} \mid \gamma_0) p(\m{B}^{-} \mid \gamma)$. Let us denote the variables in the model by $\mathbf{M}_0$ such that $\mathbf{M}_0 = \{\m{o}, \m{B}^{-}, \m{V}, \m{c}, \Theta \}$. The variables are $\m{B} = \{\bm{\beta}_z\}_{z \in Z_{\tree}}$, $\m{o} = \{o_k \}_{k=1}^{\infty}$, $\m{V} = \{\bm{v}_n \}_{n=1}^N$, $\m{c} = \{c_n \}_{n=1}^{N}$, and $\Theta = \{\theta_k \}_{k=1}^{\infty}$. Let $f(\cdot)$, corresponding to $F(\cdot)$, be the density function of $x$ given the parameter $\theta$. Moreover, $\Bfunc(\bm{a})$ is the beta function such that $\Bfunc(\bm{a}) = \prod_i \Gamma(a_i) / \Gamma(\sum_i a_i)$. With this notation, the model is summarised by Fig.~\ref{eq:model-prob}.
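The parent-to-child weight propagation $\bm{\beta}_{z'} \sim \Dir(\gamma \bm{\beta}_z)$ can be illustrated with a small sketch (illustrative values; a Dirichlet draw is composed from independent Gamma draws, a standard construction rather than anything specific to the paper):

```python
import random

def dirichlet(concentration, rng):
    """Draw from Dir(concentration) by normalising independent Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in concentration]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(1)
beta_parent = [0.5, 0.3, 0.2]
gamma = 50.0                         # larger gamma keeps the child near the parent
beta_child = dirichlet([gamma * b for b in beta_parent], rng)
print(beta_child)
```

Small $\gamma$ lets a child's local weights deviate strongly from its parent's; large $\gamma$ concentrates the child around the parent, which is how the hierarchy shares the global component book while keeping node-local proportions.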
\begin{algorithm}[!ht] \caption{\sc Generative Process of BHMC (Infinite)} \label{alg:generative} Sample $\bm{\beta}_{z_0} \sim \Gem(\gamma_0)$ \tcp*{$z_0$ is the root restaurant} Sample $\theta_1, \theta_2, \theta_3, \dots \sim H$ \\ \For{$n =1 \dots N$}{ $v_{n 0} \gets z_0$ \\ \For{$\ell = 1 \dots L$} { Sample $v_{n \ell}$ using CRP($\alpha$) \\ $z \gets v_{n (\ell-1)}$ \\ $z' \gets v_{n \ell}$ \\ \If{$z'$ is new}{ Sample $\bm{\beta}_{z'} \sim \DP\left(\gamma, \bm{\beta}_{z}\right)$ \\ Attach $(z, z')$ to the tree $\mathcal{H}$ } } Sample $c_n \sim \discrete(\bm{\beta}_{v_{n L}})$ \\ Sample $x_n \sim F(\theta_{c_n})$ \\ } \end{algorithm} \begin{figure}[htp] \fbox{ \parbox[c]{0.97\textwidth}{ \begin{eqnarray*} p(\m{X} \mid \m{V}, \Theta, \m{B}) &=& \prod_n \left( \sum_{k=1}^{\infty} \beta_{v_{n L} k} f(x_n; \theta_k) \right) \\ p(\m{V} \mid \alpha) &=& \Gamma(\alpha)^{\lvert \mathcal{I}_{\mathcal{H}} \rvert}\prod_{z \in \mathcal{I}_{\mathcal{H}}} \frac {\alpha^{m_{z}}} {\Gamma(N_{z}+\alpha)} \prod_{(z, z') \in E_{\tree}} \Gamma(N_{z'}) \\ p(\bm{o} \mid \gamma_0) &=& \prod_k {\Bfunc(1, \gamma_0)}^{-1} (1 - o_k)^{\gamma_0 - 1} \\ p(\m{B}^{-} \mid \m{V}, \gamma) &=& \prod_{(z, z') \in E_{\mathcal{H}}} {\Bfunc(\gamma \bm{\beta}_{z})}^{-1} \prod_k \beta_{z' k}^{\gamma \beta_{z k} - 1} \\ p(\Theta \mid H) &=& \prod_k p(\theta_k \mid H) \mbox{ (needs further specification)} \end{eqnarray*} }} \caption{The probabilities about the model} \label{eq:model-prob} \end{figure} \section{Regularised Solution} \label{sec:regbayes} The PR framework was developed on top of variational inference (VI) for learning Bayesian models~\citep{Zhu2014regbayes}, i.e., as for VI, the goal is to minimise the KL-divergence (denoted by $\KL$) between a proposed distribution and the posterior distribution, but PR adds a penalty function based on a set of feature constraints to the optimisation problem. \begin{definition}[RegBayes]. 
Let $q(\mathbf{M})$ be a distribution proposed to approximate the posterior $p(\mathbf{M} \mid \m{X})$. The PR method is to solve \begin{align} \min_{q(\mathbf{M}), \bm{\xi}} &~ \KL\infdivx{q(\mathbf{M})}{p(\mathbf{M}, \m{X})} + U(\bm{\xi}, q, \mathbf{M}) \quad s.t. : ~ q(\mathbf{M}) \in \mathcal{P}_{cs}(\bm{\xi}) \nonumber \end{align} where $p(\mathbf{M}, \m{X})$ denotes the joint probability of the model parameters and the data, and $U(\bm{\xi}, q, \mathbf{M})$ is a penalty function obtained on the constraint space $\mathcal{P}_{cs}(\bm{\xi})$. \end{definition} In the sequel, we will first discuss the specification of $\mathcal{P}_{cs}(\bm{\xi})$ and then $U(\bm{\xi}, q, \mathbf{M})$. Finally, we show the solutions. \subsection{PR specifications for BHMC} As discussed earlier, we aim to restrict the search space of the solution to a subspace that avoids solutions where the separation between siblings is too small. First, let us denote $\mathbf{M} = \mathbf{M}_0 \cup \{ \bm{\eta} \}$ where $\{ \bm{\eta} \}$ denotes a set of discriminant variables as discussed in Sec.~\ref{sec:motivation}. Now, let us extend Eq.~\eqref{eq:constraint-single-lvl} to a constraint space for HC. We can apply Eq.~\eqref{eq:constraint-single-lvl} to the assignment of the datum $n$ at each level $1 \le \ell \le L$ rather than only at the leaf level. This imposes constraints in order to ensure that the hierarchy is well separated under each internal parent node.
Recalling that $v_{n \ell}$ represents the correct node for datum $n$ at level $\ell$, the constraint space may be written as \begin{eqnarray} \label{eq:constrained-space} {\cal P}_{cs}( \bm \xi) = \left\{ q( \mathbf{M} ) ~\Bigg |~ \begin{array}{l} 1 \le n \le N,~ 1 \le \ell \le L,~ \displaystyle \forall z \in \mathcal{S}(v_{n \ell}), \\ (\bm{\eta}_{v_{n \ell}} - \bm{\eta}_{z})^{\top} \mathbf{x}_n \ge \epsilon_{n \ell}^{\Delta} - \xi_{n \ell}, \xi_{n \ell} \ge 0 \end{array} \right\} \end{eqnarray} where $\epsilon_{n \ell}^{\Delta}$ is the cost of choosing the child $z$ over the true child $v_{n \ell}$. To simplify the notation, let us introduce an auxiliary variable: \begin{align*} \zeta_{n \ell z} \coloneqq \epsilon_{n \ell}\mathds{1}(z \ne v_{n \ell}) - (\bm{\eta}_{v_{n \ell}} - \bm{\eta}_{z})^{\top} \mathbf{x}_n \, \end{align*} where $\epsilon_{n \ell}^{\Delta} = \epsilon_{n \ell}\mathds{1}(z \ne v_{n \ell})$. Notice that this is not a new variable but merely simplifies the notation. Based on the constrained space of Eq.~\eqref{eq:constrained-space}, we design $U(\bm{\xi}, q, \mathbf{M})$ as the regularisation function: \begin{align} \label{eq:penalty} U(\bm{\xi}, q, \mathbf{M}) = 2 C \sum_n \max \left(0, \max_{\ell, z \in \mathcal{S}(v_{n \ell})} \E_q \left[ \zeta_{n \ell z} \right] \right) \end{align} where $C$ is a positive scale coefficient of the regularisation term; this form is known as the hinge loss~\citep{hastie2009elements}. Heuristically, this regularisation function, plugged into a minimisation task, amounts to maximising the ``tightest separation'' of an item from the other paths, by minimising the maximal margin violation, at any level of the assigned path, between the correct node and its siblings. With this understanding, it suffices to set $\epsilon_{n \ell}$ as a constant hyperparameter $\epsilon_0$.
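The per-datum penalty of Eq.~\eqref{eq:penalty} is easy to sketch numerically (toy numbers; `hinge_penalty` is our own name, and the expected $\zeta$ values are assumed to be given):

```python
def hinge_penalty(zeta_by_level, C):
    """2C * max(0, max over levels l and siblings z of E_q[zeta_{n l z}])
    for a single datum n."""
    worst = max(z for level in zeta_by_level for z in level)
    return 2.0 * C * max(0.0, worst)

# expected zetas for one datum: rows are levels, entries are siblings
zetas = [[-0.4, -1.2],   # level 1: both siblings safely separated
         [0.3, -0.1]]    # level 2: one sibling violates the margin by 0.3
print(hinge_penalty(zetas, C=1.0))
```

Only the single worst violation along the assigned path is penalised, which is what makes the later $\bm{\eta}$ updates cheap compared with penalising every level separately.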
As the maximum function is convex, we instead consider minimising \begin{equation} \label{eq:obj} \min_{q(\mathbf{M})} ~ \KL\infdivx{q(\mathbf{M})}{p(\mathbf{M}, \m{X})} + 2 C \sum_n \E_{q(\m{V}, \bm{\eta})} \left[ \max \left(0, \max_{\ell, z \in \mathcal{S}(v_{n \ell})} \zeta_{n \ell z} \right) \right] \end{equation} since $\max \left(0, \max_{\ell, z \in \mathcal{S}(v_{n \ell})} \E_q \left [ \zeta_{n \ell z} \right ] \right) \le \E_q \left [ \max \left(0, \max_{\ell, z \in \mathcal{S}(v_{n \ell})} \zeta_{n \ell z} \right) \right ]$. Again, $\m{V}$ is the set of path assignments to the observations. We can assume that the prior for all $\bm{\eta}_z$ is $\bm{\eta}_z \sim \N(\m{0}, \nu_0^2 \m{I})$, given $\nu_0$ as a hyperparameter. Applying the variational derivations over the objective, one can obtain an analytical solution to $q(\mathbf{M})$. \begin{lemma} \label{lem:closed-form} The optimal solution of the objective in Eq.~\eqref{eq:obj} is \begin{align} \label{eq:closed-form} q(\mathbf{M}) \propto p(\mathbf{M} \mid \m{X}) \prod_n \exp\left \{ - 2C \max \left ( 0, \max_{\ell, z \in \mathcal{S}(v_{n \ell})}\zeta_{n \ell z} \right ) \right \} \end{align} \end{lemma} Lemma~\ref{lem:closed-form} is proved by connecting the objective to the Euler-Lagrange equation and then deriving the solution by setting the derivatives to 0~\citep{chen2014robust}. \paragraph{Discussion of alternative solution} Another straightforward formulation of $U(\bm{\xi}, q, \mathbf{M})$ could be $\sum_n \sum_{\ell} \max_{z \in \mathcal{S}(v_{n \ell})} \E_q \left [ \zeta_{n \ell z} \right ]$, which indicates that we hope to separate the nodes under the parent at each level.
However, one can change minimising this alternative penalty function to minimising its upper bound, which is $L \max_{\ell} \max_{z \in \mathcal{S}(v_{n \ell})} \E_q \left [ \zeta_{n \ell z} \right ]$, since \begin{equation*} \forall n: \sum_{\ell} \max_{z \in \mathcal{S}(v_{n \ell})} \E_q \left [ \zeta_{n \ell z} \right ] \le L \max_{\ell} \max_{z \in \mathcal{S}(v_{n \ell})} \E_q \left [ \zeta_{n \ell z} \right ] \,. \end{equation*} In practice, $L$ can be simply absorbed into the coefficient $C$, and then we obtain the penalty function of Eq.~\eqref{eq:penalty}. Additionally, with hindsight regarding the inference, this setting has a clear advantage in the computational complexity of updating $\bm{\eta}$. For the adopted setting, each datum is associated with only one $\bm{\eta}_z$ in a path, while for the alternative version it would be used to compute updates for every $\bm{\eta}_z$ in the path. \subsection{Analysing Mixture Kernels} For demonstration, we set $\phi$ to be parameters for multivariate Normal distributions. Therefore, one can set $T$ to be also a multivariate Normal. Let $\phi = \mu$, with the covariance matrix $\Sigma$ assumed known. Hence, we write $T = \N({m}, \Phi)$; according to our setting, $\phi \sim T$. The updates for the non-regularised version are easy to achieve (see Appendix for the derivation). However, the updates for the kernel will be non-trivial given the regularised term. In addition, the term admits the following inequality given the convexity of $-\log$: \begin{align} \label{eq:neglogsum_ineq} - \E_q \left[\log \sum_k \beta_k f(x_n; \phi_k)\right] \ge - \log \sum_k \E_q [\beta_k f(x_n; \phi_k)] \,. \end{align} It simply means that we can instead maximise the lower bound on the right-hand side.
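The Jensen step underlying this bound can be sanity-checked by Monte Carlo on a toy model (an assumed setup with uniform likelihood values and a symmetric Dirichlet, not the paper's experiment):

```python
import random, math

rng = random.Random(2)
K, S = 3, 20000
log_vals, sums = [], [0.0] * K
for _ in range(S):
    draws = [rng.gammavariate(1.0, 1.0) for _ in range(K)]
    total = sum(draws)
    beta = [d / total for d in draws]          # beta ~ Dir(1, 1, 1)
    f = [rng.random() for _ in range(K)]       # toy f_k values, independent of beta
    mix = sum(b * fk for b, fk in zip(beta, f))
    log_vals.append(math.log(mix))
    for k in range(K):
        sums[k] += beta[k] * f[k]
lhs = -sum(log_vals) / S                       # estimate of -E[log sum_k beta_k f_k]
rhs = -math.log(sum(sk / S for sk in sums))    # -log sum_k (estimate of E[beta_k f_k])
print(lhs >= rhs)
```

The gap between the two sides is the price paid for the tractable bound; it shrinks as the mixture score concentrates around its mean.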
Applying the bound, we detail the parts of the RELBO that contain $\phi_k$: \begin{align*} \tilde{\mathcal{L}}(\phi_k) &= \E_q[\log p(\m{X} \mid \m{C}, \m{V}, {\phi}_k)] + \E_q [\log p(\phi_k \mid H)] - \E_q [\log q(\phi_k \mid T_k)] \\ &\quad - \varrho \sum_n \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \log \sum_{k'} \E_q \left[ \beta_{z' k'} f(x_n; \phi_{k'}) \right] \, . \end{align*} It can be seen that, in the last term, we cannot distil the likelihood and the mixing proportion for a specific $k$. We specify the term \begin{align*} p(\m{X} \mid \m{C}, \m{V}, \bm{\phi}) &= \prod_n \prod_{\bm{z}} \prod_{k} [p(x_n \mid \phi_k)^{c_{n \bm{z} k}}]^{v_{n \bm{z}}} \\ \E_q [\log p(\m{X} \mid \m{C}, \m{V}, \bm{\phi})] &= \sum_n \sum_{\bm{z}} v_{n \bm{z}} \sum_k \rho_{n \bm{z} k} \E_q [\log p(x_n \mid \phi_k)] \end{align*} For $\mathcal{Q}$, it is in some sense similar to the marginal likelihood with $T$ as $F$'s prior. It will be straightforward if $T$ is a conjugate prior. For our case, we can write \begin{align*} \E_q[f(x; \phi)] &= \int \frac {\exp\{-\frac 1 2 [(x-\mu)^{\top}\Sigma^{-1}(x-\mu) + (\mu-m)^{\top}\Phi^{-1}(\mu-m)]\} } {(2\pi)^D \sqrt{|\Sigma| |\Phi|}} d \mu \\ &= \mathcal{C} \exp\left\{-\frac 1 2\mathcal{A} \right\} \int \exp \left\{-\frac 1 2\mathcal{B}\right\} d\mu \\ \mathcal{A} &= x^{\top} \Sigma^{-1} x + m^{\top}\Phi^{-1}m - (\Sigma^{-1}x + \Phi^{-1}m)^{\top} (\Sigma^{-1}+\Phi^{-1})^{-1}(\Sigma^{-1}x + \Phi^{-1}m)\\ \mathcal{B} &= [\mu - (\Sigma^{-1}+\Phi^{-1})^{-1}(\Sigma^{-1}x + \Phi^{-1}m)]^{\top}(\Sigma^{-1} + \Phi^{-1}) [\mu - (\Sigma^{-1}+\Phi^{-1})^{-1}(\Sigma^{-1}x + \Phi^{-1}m)] \\ \mathcal{C} &= \frac 1 {(2\pi)^D \sqrt{|\Sigma| |\Phi|}} \,. \end{align*} Let $\tilde{m} = \Sigma^{-1}x + \Phi^{-1}m$.
We obtain \begin{align*} \E_q[f(x; \phi)] &= \sqrt{\frac {1} {(2\pi)^D |\Sigma^{-1} + \Phi^{-1}| |\Sigma| |\Phi|}}\exp\left\{ -\frac 1 2 \left[x^{\top}\Sigma^{-1}x + m^{\top}\Phi^{-1}m -\tilde{m}^{\top} (\Sigma^{-1}+\Phi^{-1})^{-1} \tilde{m} \right] \right\} \, \end{align*} because $\mathcal{B}$ is the completed square in $\mu$, so that \begin{align*} \int \exp\left\{-\frac 1 2 \mathcal{B}\right\} d\mu = (2\pi)^{D/2} \, |\Sigma^{-1} + \Phi^{-1}|^{-1/2} \, . \end{align*} Indeed, since $|\Sigma^{-1} + \Phi^{-1}| \, |\Sigma| \, |\Phi| = |\Sigma + \Phi|$, this is exactly $\N(x; m, \Sigma + \Phi)$. It indicates that it is non-trivial to obtain closed-form updates for the variables $m$ and $\Phi$. We derive the following equations with help from~\cite{petersen2008matrix}. Denoting the dimension of the data by $D$, let us specify, when $v_{n \bm{z}} \rho_{n \bm{z} k} \ne 0$, \begin{align*} \frac {\E_{q} [\log p(x_n \mid c_{n \bm{z} k}, v_{n \bm{z}}, \phi_k)]} {v_{n \bm{z}} \rho_{n \bm{z} k}} &= - \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| \\ &\quad - \frac {1} {2} \tr\left\{\Sigma_k^{-1} \E_q \left[(x_n - \mu_k)(x_n - \mu_k)^{\top} \right]\right\} \\ &= - \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| \\ &\quad - \frac {1} {2} \tr\left\{\Sigma_k^{-1} \left( x_n x_n^{\top} - 2 m_k x_n^{\top} + \E_q [\mu_k \mu_k^{\top}] \right) \right\} \\ &= - \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| \\ &\quad - \frac {1} {2} \tr\left\{\Sigma_k^{-1} \left( x_n x_n^{\top} - 2 m_k x_n^{\top} + m_k m_k^{\top} \right) + \Sigma_k^{-1} \Phi_{k} \right\} \\ &= - \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| \\ &\quad - \frac {1} {2} \tr\left\{\Sigma_k^{-1} (x_n - m_k)(x_n - m_k)^{\top} + \Sigma_k^{-1} \Phi_{k} \right\} \\ \E_{q} [\log p(\phi_k \mid H)] &= -\frac D 2 \log 2 \pi - \frac 1 2 \log |\Phi_0| \\ &\quad - \frac {1} {2} \tr\left[\Phi_0^{-1}(m_k - m_0)(m_k - m_0)^{\top} \right]
- \frac 1 2 \tr (\Phi_0^{-1} \Phi_k) \\ \E_{q} [\log q(\phi_k \mid T_k)] &= -\frac D 2 \log 2 \pi - \frac 1 2 \log |\Phi_k| - \frac {D} {2} \, . \end{align*} Hence \begin{align*} &\E_{q} [\log p(x_n \mid c_{n \bm{z} k}, v_{n \bm{z}}, \phi_k)] + \E_{q} [\log p(\phi_k)] - \E_{q} [\log q(\phi_k)] \\ &= v_{n \bm{z}} \rho_{n \bm{z} k} \left(- \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| - \frac {1} {2} \tr\left\{\Sigma_k^{-1} \left [(x_n - m_k)(x_n - m_k)^{\top} \right]\right\} - \frac 1 2 \tr(\Sigma_k^{-1} \Phi_k) \right) \\ &\quad - \frac 1 2 \log |\Phi_0| - \frac {1} {2} \tr\left[\Phi_0^{-1}(m_k - m_0)(m_k - m_0)^{\top} \right] - \frac 1 2 \tr (\Phi_0^{-1} \Phi_k) + \frac 1 2 \log |\Phi_k| + \frac {D} {2}\, . \end{align*} For a batch setting, \begin{align*} &\E_{q} [\log p(X \mid c_{n \bm{z} k}, v_{n \bm{z}}, \phi_k)] + \E_{q} [\log p(\phi_k)] - \E_{q} [\log q(\phi_k)] \\ &= \sum_n v_{n \bm{z}} \rho_{n \bm{z} k} \left(- \frac D 2 \log 2 \pi - \frac 1 2 \log |\Sigma_k| - \frac {1} {2} \tr\left\{\Sigma_k^{-1} \left [(x_n - m_k)(x_n - m_k)^{\top} \right]\right\} - \frac 1 2 \tr(\Sigma_k^{-1} \Phi_k) \right) \\ &\quad - \frac 1 2 \log |\Phi_0| - \frac {1} {2} \tr\left[\Phi_0^{-1}(m_k - m_0)(m_k - m_0)^{\top} \right] - \frac 1 2 \tr (\Phi_0^{-1} \Phi_k) + \frac 1 2 \log |\Phi_k| + \frac {D} {2}\, . \end{align*} In fact, the derivatives can be obtained faster by leaving the expectation unevaluated until the derivatives are taken. Recall the standard matrix identities \begin{align*} \frac {\partial} {\partial A} \tr(AB) &= B^{\top} \quad \frac {\partial} {\partial A} \log |A| = A^{-\top} \\ \frac {\partial} {\partial m} m^{\top} A m &= (A + A^{\top}) m \quad \frac {\partial} {\partial m} m^{\top} m = 2 m \,.
\end{align*} As regards $m_k$, \begin{align*} \nabla_{m_k} \E_{q} [\log p(x_n \mid c_{nzk}, \m{V}, \phi_k)] &= v_{n \bm{z}} \rho_{n \bm{z} k} \left[ \Sigma^{-1}_k (x_n - m_k) \right] \\ \nabla_{m_k} \E_{q} [\log p(\phi_k)] &= - \Phi_0^{-1}(m_k - m_0) \\ \nabla_{m_k} \E_{q} [\log q(\phi_k)] &= 0 \\ \nabla_{m_k} \log \sum_{k'} \E_q \left[ \beta_{zk'} f(x_n; \phi_{k'}) \right] &= \mathcal{Q}_{nzk} \Sigma^{-1}_k({x}_n - m_k) \, . \end{align*} The last line is calculated based on Proposition~\ref{prop:grad}. Therefore, letting $\mathcal{R}_{n \bm{z} k} = v_{n \bm{z}} \rho_{n \bm{z} k} - \varrho\sum_{z' \in v_n}\sum_{z \in sibs(z')} \mathcal{Q}_{nzk}$, \begin{align*} \nabla_{m_k} \tilde{\mathcal{L}} &= \Sigma^{-1}_k\sum_n \mathcal{R}_{n \bm{z} k} ({x}_n - m_k) - \Phi_0^{-1}(m_k - m_0) \, . \end{align*} However, the term $\mathcal{Q}_z$ still involves the variable $\Phi$, and $\E_q[f(x_n; \phi_k)]$ maintains $\Phi$ in the exponent. We assume that $\mathcal{Q}_{nzk}$ is not related to $m_k$ and set $\nabla_{m_k} \tilde{\mathcal{L}}=0$, so we can write \begin{align} \label{eq:update-m} m_k &\approx \left(\Sigma_k^{-1} \sum_n \mathcal{R}_{nzk} + \Phi_0^{-1}\right)^{-1} \left(\Sigma_k^{-1} \sum_n \mathcal{R}_{nzk} x_n + \Phi_0^{-1} m_0\right) \, \end{align} which is in spirit an empirical update. \section*{Acknowledgements} This research is supported by Science Foundation Ireland under grant number SFI/12/RC/2289{\textunderscore}P2. \section{Proof} \begin{proof}[Proof of Lemma 1] For all $n$, choose $0 \le \xi^{\star}_n < 1$ and $0 < \xi_n < {\infty}$ such that $\mathcal{P}^{\star}$ is equivalent to $\mathcal{P}_{post}$.
Taking the log of both sides of the inequality in $\mathcal{P}^{\star}_{post}$, we obtain \begin{align} \sum_{(z, z') \in v_n} \log \mathrm{gd}(z, z', x_n) \ge \log(1 - \xi_n^{\star}) \\ \implies \E_q \left[\sum_{(z, z') \in v_n} \log \mathrm{gd}(z, z', x_n) \right] \ge \log(1 - \xi_n^{\star}) \end{align} The second line follows by taking $\E_q$ of both sides, since the right-hand side does not depend on $q$. Then, given $1 - \xi_n^{\star} > 0$, one can see \[ \log(1 - \xi_n^{\star}) \ge 1 - \frac {1} {1 - \xi_n^{\star}} \] Set $\xi_n = 1 / (1 - \xi_n^{\star})$, which satisfies $0 < \xi_n < \infty$. Hence, we can transform that space to \begin{align} \mathcal{P}_{post} &= \left\{\mathcal{T} ~\Bigg|~ \sum_{(z, z') \in v_n} \E_q \left[\log\mathrm{gd}(z, z', x_n)\right] \ge 1 - \xi_n, \forall n\right\} \, . \end{align} Putting the statements together, we finish the proof. Thus, we can write an ideal approximation, assuming the robot can descend to the bottom of the hierarchy, \begin{align} \prod_{z \in v_n} \frac {\exp\{\mathrm{sim}(x_n, z')\}} {\sum_{z'' \in \mathcal{C}(z')} \exp\{\mathrm{sim}(x_n, z'')\}} &\ge 1 - \xi_n^{\star} \, . \end{align} Equivalently, it can be written as \begin{align} \prod_n \prod_{\bm{z}} \left[\frac {\exp\{\mathrm{sim}(x_n, z')\}} {\sum_{z'' \in \mathcal{C}(z')} \exp\{\mathrm{sim}(x_n, z'')\}} \right]^{v_{n \bm{z}}} &\ge 1 - \xi_n^{\star} \, .
\end{align} Taking the log of both sides: \begin{align} \sum_{(z, z') \in v_n} \mathrm{sim}(x_n, z') - \log \sum_{z'' \in \mathcal{C}(z')} e^{\mathrm{sim}(x_n, z'')} \ge \log(1 - \xi^{\star}_n) &\ge 1 - \xi_n \ge - \xi_n \\ \implies \E_q \left[ \sum_{(z, z') \in v_n} \mathrm{sim}(x_n, z') - \log \sum_{z'' \in \mathcal{C}(z')} e^{\mathrm{sim}(x_n, z'')} \right] &\ge -\xi_n \end{align} Now, we write $Y(\bm{\xi}) \triangleq \varrho \sum_n \xi_n$, where $\varrho$ is a positive constant, such that \begin{subequations} \begin{align} \frac {Y(\bm{\xi})} \varrho &= \inf \sum_n \E_q \left[ \sum_{z \in v_{n}} - \mathrm{sim}(x_n, z) + \log \sum_{z' \in sibs(z) \cup \{z\}} e^{\mathrm{sim}(x_n, {z'})} \right] \\ &\le \inf \sum_n \E_q \left[\sum_{z \in v_{n}} - \mathrm{sim}(x_n, z) + \sum_{z' \in sibs(z) \cup \{z\}} \mathrm{sim}(x_n, z') \right] \\ &= \inf \sum_n \E_q \left[ \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \mathrm{sim}(x_n, z') \right] \\ &= \inf \sum_n \E_q \left[ \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \log \sum_k \beta_{z' k} f(x_n; \phi_k) \right] \\ &\triangleq \inf \mathcal{R}(q(\mathbf{M}))\, . \end{align} \end{subequations} Using a vector point of view, one can denote $\bm{\phi}_n^{+} = \{f(x_n; \phi_1), \dots, f(x_n; \phi_K)\}$. We can also write, equivalently, \begin{align} \mathcal{R}(q(\mathbf{M})) = \sum_n \E_q \left[ \sum_{\bm{z}} v_{n \bm{z}} \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} \right] \end{align} Therefore, the RegBayes objective can be written, equivalently, as \begin{align} \inf_{q(\mathbf{M}) \in \mathcal{P}_{prob}} ~ \mathrm{KL}[q(\mathbf{M}) || p(\mathbf{M} \mid X)] + \varrho \mathcal{R}(q(\mathbf{M})) \, . \end{align} Again, minimising the KL divergence of $p$ and $q$ is equivalent to maximising the ELBO. Therefore, this formulation is equivalent to \begin{align} \tilde{\mathcal{L}} = \sup_{q(\mathbf{M})} \mathcal{L}(q(\mathbf{M}))- \varrho \mathcal{R}(q(\mathbf{M})) \, .
\end{align} \end{proof} \subsection{Analysing nCRP} For this part, we can apply the details from \cite{Wang2009variational} directly. Let us assume a proposal distribution $A$ where a certain $u$ has two matching variational parameters $a_{u1}$ and $a_{u2}$ for a beta distribution such that $q(u \mid a_{u1}, a_{{u2}})$. Ignoring the terms which are unrelated to $\m{U}$, we consider the ELBO with $u$, i.e. $\mathcal{L}(u)$, as follows: \begin{align*} \tilde{\mathcal{L}}(u_z) &= \sum_{n: v_n \in V(u_z)} v_{n \bm{z}}\E_q[\log p({v} \mid \bm{u}_{v})] + \E_q[\log p(u_z)] - \E_q[\log q(u_z)] \, \end{align*} where $\bm{u}$ is the set of $u_z$ associated with the path assignment $v$, and $V(u)$ denotes the path labels passing through $u$. Using the standard approach for considering $\nabla_q \mathcal{L}(u)$, we obtain \begin{align} \label{eq:q-u} q(u_z) &\propto p(u_z) \exp \left\{\sum_{{v} \in V(u_z)} \E_{q(u_z)}[\log p({v} \mid \m{U})] \right\} \, . \end{align} According to BHMC, the condition $p(u \mid \alpha) = \B(u; 1, \alpha)$ is fixed. We can specify more details and derive the estimation for $a_{u1}$ and $a_{u2}$ via conjugacy. One can write \begin{align*} q(u_z \mid a_{z 1}, a_{z 2}) &\propto u_z^{1+\sum_{n} \sum_{\bm{z} \in V(u)} q({v}_{n\bm{z}}) - 1} (1-u_z)^{\alpha + \sum_{n}\sum_{\bm{z} \in \underline{V}(u)} q({v}_{n \bm{z}}) - 1} \, , \end{align*} Assuming each node is ordered at each level in a breadth-first manner, we write $\underline{V}(u_z)$ for the set of paths after $u_z$. Then, the paths after that node are all the paths that pass the node's later siblings. Given the conjugacy, we can simply update the variational parameters as \begin{align} \begin{split} \label{eq:ab-v} a_{u 1} &= 1 + \sum_{n} \sum_{\bm{z} \in V(u)} q({v}_{n\bm{z}}) \\ a_{u 2} &= \alpha + \sum_{n}\sum_{\bm{z} \in \underline{V}(u)} q({v}_{n \bm{z}}) \, .
\end{split} \end{align} \subsection{Analysing Mixing Proportions} Let us start with a leaf node $z$, \begin{align*} \tilde{\mathcal{L}} (\bm{\beta}_z) &= \sum_n \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \E_q \left[\log q(\bm{\beta}_{z})\right] \\ &\quad - \varrho\sum_n \sum_{z'} \mathds{1}\{z' \in sibs(z)\} \log \sum_k \E_q \left[ p(x_n, c_{n k} \mid \bm{\beta}_{z'}, \phi_k) \right]\,. \end{align*} There is no closed-form solution for updating the parameters of the proposal distribution $q(\beta_{z k})$. We consider the third bound in Remark~\ref{rmk:bounds}, since this helps us arrange the terms for each $k$: \begin{align*} -\E_q \left[ \log \sum_k X_k \right] \ge 1 - \log \nu - \frac {\sum_k \E_q[X_k]} {\nu} \, \end{align*} where $X_k$ is a random variable and $\nu$ is an auxiliary variable. The corresponding RELBO can be rewritten as \begin{align*} \tilde{\mathcal{L}} (\bm{\beta}_z) &= \sum_n \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \E_q \left[\log q(\bm{\beta}_{z}) \right] \\ &\quad + \varrho\sum_n \sum_{z'} \mathds{1}\{z' \in sibs(z)\} \left(1 - \log \nu_z - \frac {\sum_k \E_q[\beta_k f(x_n; \phi_k)]} {\nu_z}\right)\,. \end{align*} Since the term $1 - \log \nu_z$ (associated with some indicators) in the second line of the above equation is constant, we can ignore it when optimising the corresponding RELBO. We further denote \[ \mathcal{R}(\beta_{z k}) \triangleq \sum_n \sum_{z'} \mathds{1}\{z' \in sibs(z)\} \frac {\E_q[\beta_k f(x_n; \phi_k)]} {\nu_z} \, .
\] Then, \begin{align*} q(\bm{\beta}_z) &\propto \mathcal{C}\exp\left\{\sum_n \E_q \left[\log p(\bm{c}_{nz} \mid \bm{\beta}_{z})\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \sum_{k} \varrho \mathcal{R}(\beta_{z k}) \right\} \\ &\propto \exp \left\{\sum_k \left(\sum_n \E_q \left[\log p(\bm{c}_{nzk} \mid \bm{\beta}_{z})\right] + \E_q \left[\log p\left({\beta}_{zk} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \varrho \mathcal{R}(\beta_{z k}) \right) \right\} \\ &= \prod_k \beta_{z k}^{\sum_n \E_q [c_{n z k}]} \beta_{z k}^{\gamma \E_q [\beta_{z^* k}] - 1} \beta_{z k}^{\log_{\beta_{zk}}\exp\left\{- \varrho \mathcal{R}(\beta_{z k})\right\}} \end{align*} where $\mathcal{C}$ is a constant. Coming back to the expression, we have \begin{align*} q(\bm{\beta}_z) &\propto \prod_k \beta_{z k}^{\sum_n \E_q [c_{n z k}]} \beta_{z k}^{\gamma \E_q [\beta_{z^* k}] - 1} \beta_{z k}^{\log_{\beta_{zk}}\exp\left\{- \varrho \mathcal{R}(\beta_{z k})\right\}} \\ &= \prod_k \beta_{z k}^{\sum_n \E_q [c_{n z k}] + \gamma \E_q [\beta_{z^* k}] - \varrho \mathcal{R}(\beta_{z k}) / \log \beta_{zk} - 1}\, . \end{align*} Thus, \begin{align*} q(\beta_{zk}) &\propto \exp\{ \E_q[\log p\left(\beta_{zk} \mid \beta_{z^*k}, \gamma \right)] + \E_{q} \left[\log p({c}_{n \bm{z} k} \mid \bm{\beta}_{z})\right] - \varrho\mathcal{R}(\beta_{zk}) \} \, . \end{align*} As discussed in~\cite{paisley2010two}, the bound becomes tightest when \[ \nu_z = \sum_k \E_q [\beta_{zk} f(x_n; \phi_k)] = \sum_k \E_q [\beta_{zk}] \E_q [f(x_n; \phi_k)] \, .
\] That is, for the parameter updates, we obtain \begin{align*} \omega_{z k} &= \gamma \E_q [\beta_{z^* k}] + \E_q [n_{z k}] - \log_{\beta_{z k}} \exp\left\{\varrho\mathcal{R}(\beta_{z k})\right\} \\ &= \gamma \frac {\omega_{z^* k}} {\sumvec{\omega_{z^*}}} + \sum_{n} \sum_{\bm{z}} v_{n \bm{z}} \mathds{1}\{\bm{z} \in V(z)\} \rho_{n \bm{z} k} \\ &- \varrho \sum_n \sum_{z'} \mathds{1}\{z' \in sibs(z)\} \frac {\omega_{zk} \E_q[f(x_n; \phi_k)]} {\bm{\omega}_z \cdot \E_q[\bm{\phi}_n^{+}]} \frac 1 {\log \beta_{z k}} \, . \end{align*} This handles the leaf nodes. Now we proceed by induction and apply the results in~\cite{huang2019bayesian} to integrate out the leaves. That is, one level is removed and the model returns a tree with a reduced set of $\bm{\beta}$'s. Then, we employ the above techniques again and achieve the updates for the nodes at level $L-1$. The only difference is the expected count $\E_q [n_{z k}]$, which becomes \begin{align*} \E_q [n_{z k}] = \sum_{n} \sum_{\bm{z}} v_{n \bm{z}} \mathds{1}\{\bm{z} \in V(z)\} \rho_{n \bm{z} k} \, . \end{align*} This procedure iterates until all the sub-trees are collapsed, i.e., $L=0$. Finally, the procedure returns to $z_0$, which is a truncated GEM. Notice that, for the root node, the regularisation term is absent. We can therefore apply the results from~\cite{blei2006variational}. \begin{align} \label{eq:eta} \begin{split} \eta_{k 1} &= 1 + \sum_{n} q(c_{n z_0 k}) \\ \eta_{k 2} &= \gamma_0 + \sum_{n} \sum_{j=k+1}^K q(c_{n z_0 j}) \end{split} \end{align} Also, it is trivial that \begin{align*} \E_q [\log o_{k}] &= \Psi(\eta_{k 1}) - \Psi(\eta_{k 1} + \eta_{k 2}) \\ \E_q [\log (1 - o_{k})] &= \Psi(\eta_{k 2}) - \Psi(\eta_{k 1} + \eta_{k 2}) \, . \end{align*} Thereafter, we can derive $\beta_{z_0 k}$ such that $\E_q [\beta_{z_0 k}] = \E_q[o_k] \prod_{j < k} \E_q [1-o_j]$.
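The root-node expectations can be sketched as follows (illustrative Beta parameters; under the mean-field factorisation, the expectation of the stick-breaking product factorises into a product of expectations):

```python
def expected_root_weights(etas):
    """etas: list of (eta_k1, eta_k2) Beta parameters, one pair per stick;
    returns E[beta_{z0 k}] = E[o_k] * prod_{j<k} E[1 - o_j]."""
    weights, remaining = [], 1.0
    for a1, a2 in etas:
        e_o = a1 / (a1 + a2)        # E[o_k] under Beta(eta_k1, eta_k2)
        weights.append(e_o * remaining)
        remaining *= 1.0 - e_o      # accumulate E[1 - o_j] across earlier sticks
    return weights

etas = [(3.0, 5.0), (2.0, 2.0), (1.0, 1.0)]
print(expected_root_weights(etas))
```

The running product mirrors the GEM construction, so the expected weights automatically decay along the truncation.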
\subsection{Analysing cluster assignments} VI proceeds by an EM-like coordinate ascent, in which soft assignments are employed. Let us consider the RELBO with the variable $C$ only, such that \begin{align*} \tilde{\mathcal{L}}(c_{n \bm{z} k}) &= \E_{q} \left[\log p(x_n \mid \phi_k)\right] + \E_q[\log p({c}_{n \bm{z} k} \mid \beta_{\bm{z} k})] - \E_q[\log q({c}_{n \bm{z} k})] \,. \end{align*} Setting $\nabla_{c_{n \bm{z} k}} \tilde{\mathcal{L}}$ with a Lagrange multiplier to zero, we get \begin{align*} q({c}_{n \bm{z} k}) & \propto \exp \{\E_q [\log p(x_n \mid \phi_{k})] + \E_{q(\bm{\beta}_{\bm{z}})} \left[\log p(c_{n \bm{z} k} \mid \bm{\beta}_{\bm{z}}) \right] \} \, . \end{align*} The first term has already been analysed, so we derive the second term. Trivially, we obtain \begin{align*} \E_{q(\bm{\beta}_{\bm{z}})}\left[\log p(c_{n \bm{z} k} \mid \bm{\beta}_{\bm{z}}, {v}_{n \bm{z}}) \right] &= \E_{q(\bm{\beta}_{z})}\left[ \log p(c_{n \bm{z} k} \mid \bm{\beta}_{z})\right] = \log \tilde{\beta}_{\bm{z} k} \, . \end{align*} In what follows, we will occasionally use the leaf node $z$ in place of $\bm{z}$ in notation involving $\bm{z}$. \begin{align} \label{eq:rho-nk} \rho_{n \bm{z} k} &\propto \exp\{ \E_q [\log p(x_n \mid \phi_{k})] + \log \tilde{\beta}_{\bm{z} k} \} \, . \end{align} Clearly, since the regularisation term contains no variational variables related to $C$, this RVI updating procedure with regard to the variable $C$ is identical to the VI updates. \subsection{Analysing path assignments} In the rest of the material, we use $\sum_n \sum_{\bm{z}} v_{n \bm{z}} \equiv \sum_n v_n$.
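Computationally, \cref{eq:rho-nk} is a softmax over per-component scores. A minimal Python sketch (hypothetical inputs: the expected log-likelihoods $\E_q[\log p(x_n \mid \phi_k)]$ and the expected log-proportions $\log\tilde{\beta}_{\bm{z}k}$), normalised stably in log space:

```python
import math

def responsibilities(e_log_lik, e_log_beta):
    """rho_{n z k} ∝ exp{E_q[log p(x_n | phi_k)] + log beta~_{z k}},
    normalised with the log-sum-exp trick for numerical stability."""
    scores = [a + b for a, b in zip(e_log_lik, e_log_beta)]
    m = max(scores)                       # subtract the max before exponentiating
    w = [math.exp(s - m) for s in scores]
    total = sum(w)
    return [x / total for x in w]
```

Subtracting the maximum score before exponentiating avoids overflow/underflow when the expected log-likelihoods are large in magnitude.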
With respect to $\tilde{\mathcal{L}}({v}_{n \bm{z}})$, \begin{align*} \tilde{\mathcal{L}}({v}_{n \bm{z}}) &= \sum_{\bm{z}} v_{n \bm{z}} \left \{\E_{q} \left[\log p(x_n, {c}_{n \bm{z}} \mid v_{n \bm{z}}, \bm{\beta}_{z}, \bm{\phi}) + \log p({v}_{n \bm{z}} \mid \m{U})\right] \right\} - \E_q[\log q({v}_{n \bm{z}})] \\ &\quad - \varrho \sum_{\bm{z}} v_{n \bm{z}} \sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_q [ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} ] \, . \end{align*} To simplify the notation, we write \begin{align*} \mathcal{R}(v_{n \bm{z}}) = \varrho\sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_{-q(v)}[ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+}] \, . \end{align*} Setting the derivative $\nabla_{v_{n \bm{z}}} \tilde{\mathcal{L}}$ to zero, we have \begin{align*} q(v_{n}) \propto \prod_{\bm{z}} ( \exp \{ \E_{-q(v)} [p(v_{n \bm{z}} \mid \m{U})] + \E_{-q(v)}[\log p(x_n, {c}_{n \bm{z}} \mid v_{n \bm{z}}, \bm{\beta}_{z}, \bm{\phi})] - \mathcal{R}(v_{n \bm{z}}) \} )^{v_{n \bm{z}}} \, . \end{align*} Obviously, $q(v)$ is a Multinomial distribution. Considering the tree-based stick-breaking formulation in~\cref{eq:treestk}, we get \begin{align*} \tilde{v}_{n \bm{z}} &=\E_{-q(v)} \left[ \log \prod_{\ell=1}^{L} u_{\bm{z}^{\ell}}\prod_{z \prec \bm{z}^{\ell}_n} (1 - u_z) \right] \\ &= \sum_{\ell=1}^{L} \E_{q(\bm{u})}\left[\log u_{\bm{z}^{\ell}} \right] + \sum_{z \prec \bm{z}^{\ell}} \E_{q(\bm{u})} \left[\log(1-u_z) \right] \\ &= \sum_{\ell=1}^{L} \left [ \Psi\left(a_{\bm{z}^{\ell} 1}\right) - \Psi\left(a_{\bm{z}^{\ell} 1} + a_{\bm{z}^{\ell} 2}\right) + \sum_{z \prec \bm{z}^{\ell}} \Psi(a_{z 2}) - \Psi(a_{z 1} + a_{z 2}) \right ]\, . \end{align*} The results can also be found in~\cite{Wang2009variational}.
Secondly, letting $z$ denote the last node of $\bm{z}$ and given the results in~\cite{paisley2010two} for the lower bound of the expected log-sum of random variables, we can write \begin{align*} \E_{q} \left[ \log \sum_k p(x_n, {c}_{n \bm{z} k } \mid v_{n \bm{z}}, \bm{\beta}_z, \phi_k) \right] \ge \log \sum_k \exp \left \{\E_{q} \left[ \log p(x_n, {c}_{n \bm{z} k } \mid v_{n \bm{z}}, \bm{\beta}_z, \phi_k) \right] \right \} \, . \end{align*} Thus, we focus on a single $k$ within the exponential function in the above equation: \begin{align*} \E_{-q(v_n)} \left[ \log p(x_n, {c}_{n \bm{z} k } \mid \bm{\beta}_z, \phi_k) \right] &= \E_{-q(v_n)} \left\{ \log [\beta_k^{c_{n \bm{z} k}} p(x_n \mid \phi_k)] \right\} \\ &= \E_{q(c, \bm{\beta})} [c_{n \bm{z} k} \log \beta_k] + \E_{q(\phi)} [ \log p(x_n \mid \phi_k)] \\ &= \rho_{n \bm{z} k} \log \tilde{\beta}_{\bm{z} k} + \log \tilde{\phi}_{k} \, \end{align*} where $\log \tilde{\phi}_{k} \triangleq \E_{q(\phi)} [ \log p(x_n \mid \phi_k)]$, which will be discussed later. In addition, denoting the sum of a vector by $\sumvec{a} \triangleq \sum_j a_{j}$ for any vector $\bm{a} = \{a_1, a_2, \dots\}$, the known results for the expected logarithm under a Dirichlet distribution show that \[ \log \tilde{\beta}_{\bm{z} k} = \log \tilde{\beta}_{z k} \triangleq \E_{q(\bm{\beta}_{\bm{z}})} [\log \beta_{z k}] = \Psi(\omega_{zk}) - \Psi\left(\sumvec{\omega}_z\right) \] where $z = \textsc{Leaf}(\bm{z})$ and $\Psi(\cdot)$ is the \emph{digamma function}. Given that $q(v)$ follows a delta distribution, i.e., $q(v_{n \bm{z}} \mid \lambda_{n \bm{z}}) = \lambda_{n \bm{z}}$, one can write \begin{align*} \lambda_{n \bm{z}} &\propto \sum_k \exp \{\rho_{n \bm{z} k} \log \tilde{\beta}_{\bm{z} k} + \log\tilde{\phi}_{k} + \tilde{v}_{n \bm{z}} - \mathcal{R}(v_{n \bm{z}}) \} \,.
\end{align*} Given that $\lambda$ is an indicator function, for RVI, we have \begin{align} \label{eq:lambda-nv} \lambda_{n \bm{z}} = \mathds{1}\{\bm{z} = \argmax_{\bm{z}} q(v_{n \bm{z}})\} \, . \end{align} \section{Derivations} In this section, we derive the necessary updates that will be used in the optimisation algorithm. We would like to emphasise that the regularised term $\mathcal{R}$ admits a few different forms of inequalities~\cite{paisley2010two}. Our strategy is to use the different forms flexibly when deriving the updates for distinct variables. As the regularised term needs to be maximised, we consider only the ``greater than or equal to'' inequalities. \input{theory} \input{equations} \input{analyze_v} \input{analyze_c} \input{analyze_u} \input{analyze_b} \input{analyze_phi} \input{elbo} \bibliographystyle{spmpsci} \section{Theoretical analysis of convergence} In this section, we prove the convergence of the algorithm. \begin{corollary} \label{coro:continuous} The function $\tilde{\mathcal{L}}(m_k)$ is differentiable. \end{corollary} \begin{proof} We have derived the derivative of the RELBO with respect to $m_k$. It remains to show that the derivative is continuous everywhere, i.e., \begin{align*} \lim_{\Delta m_k \to \bm{0}} \nabla_{m_k} \tilde{\mathcal{L}}(m_k + \Delta m_k) = \nabla_{m_k}\tilde{\mathcal{L}}(m_k) \, , \end{align*} from which one concludes that the RELBO of $m_k$ is differentiable. \end{proof} \begin{lemma} For any $k$, $\tilde{\mathcal{L}}(m_k)$ and $\tilde{\mathcal{L}}(\Phi_k)$ are Lipschitz continuous. \end{lemma} \begin{proof} Let us first prove that $\tilde{\mathcal{L}}(m_k)$ is Lipschitz continuous. According to Corollary~\ref{coro:continuous}, the function is differentiable. Thus, by the mean value theorem, there exists a point $y$ between $m_k^{(t)}$ and $m_k^{(t+1)}$ such that \begin{align*} \tilde{\mathcal{L}}\left(m_k^{(t+1)}\right) - \tilde{\mathcal{L}}\left(m_k^{(t)}\right) &= \nabla_{m_k}\tilde{\mathcal{L}}(y) \left(m_k^{(t+1)} - m_k^{(t)}\right) \, .
\end{align*} Let $\bar{\mathcal{Q}}_{nzk} = \left(1- \varrho\sum_{z' \in v_n}\sum_{z \in sibs(z')} \mathcal{Q}_{nzk}\right)$. Taking norms on both sides, we see that \begin{align*} \left\| \tilde{\mathcal{L}}\left(m_k^{(t+1)}\right) - \tilde{\mathcal{L}}\left(m_k^{(t)}\right) \right\| &\le \left\| \nabla \tilde{\mathcal{L}} \left[(1-c)m_k^{(t+1)} + c m_k^{(t)} \right] \right\| \left\| m_k^{(t+1)} - m_k^{(t)} \right\| \\ &\le c_0 \left\| m_k^{(t+1)} - m_k^{(t)} \right\| \end{align*} where \begin{align*} c_0 = n \max_n \left[|\bar{\mathcal{Q}}_{nzk}| \left \|\Sigma^{-1}_k(\bar{x}_k - m_k) \right \|\right ] + \|\Phi_0^{-1}(m_k - m_0) \| \, . \end{align*} This proves that $\tilde{\mathcal{L}}(m_k)$ is Lipschitz continuous. \end{proof} \section{List of Expectations} \label{sec:expec} In this section, we display the important expectations for reference. \begin{align*} \E_q [\log o_k] &= \Psi(\eta_{k 1}) - \Psi(\eta_{k 1} + \eta_{k 2}) \\ \E_q [\log (1-o_k)] &= \Psi(\eta_{k 2}) - \Psi(\eta_{k 1} + \eta_{k 2}) \\ \E_q [\log \beta_{z_0 k}] &= \sum_{j<k} \E_q[\log (1 - o_j)] + \E_q[\log o_k] \\ \E_{q(\bm{\omega}_{z})} [\beta_{z k}] &= \frac {\omega_{z k}} {\sum_{k'} \omega_{z k'}} \\ \E_{q(\bm{\omega}_{z})} [\log \beta_{z k}] &= \Psi(\omega_{z k}) - \Psi\left(\sum_{k'} \omega_{z k'}\right) \\ \E_q [c_{n z k}] &= \rho_{n z k} \\ \E_q [v_{n \bm{z}}] &= \lambda_{n \bm{z}} \\ \E_{q(\m{U})} [\log p(v_{n \bm{z}} \mid \m{U})] &= \sum_{\ell=1}^{L} \left [ \Psi\left(a_{\bm{z}^{\ell} 1}\right) - \Psi\left(a_{\bm{z}^{\ell} 1} + a_{\bm{z}^{\ell} 2}\right) + \sum_{z \prec \bm{z}^{\ell}} \Psi(a_{z 2}) - \Psi(a_{z 1} + a_{z 2}) \right ] \\ \E_{q(\phi_k)} [p(x_n \mid \phi_k)] &= \sqrt{\frac {|\Sigma^{-1} + \Phi^{-1}|} {(2\pi)^D |\Sigma| |\Phi|}}\exp\left\{ -\frac 1 2 \left[x^{\top}\Sigma^{-1}x + m^{\top}\Phi^{-1}m -\tilde{m}^{\top}_k (\Sigma^{-1}+\Phi^{-1}) \tilde{m}_k \right] \right\} \\ \tilde{m}_k &= \Sigma^{-1}_k x + \Phi^{-1}_k m_k \\ \E_{q(\phi_k)}
[\log p(x_n \mid \phi_k)] &= -\frac D 2 \log 2\pi - \frac 1 2 \log |\Sigma_k| - \frac 1 2 \tr\left[\Sigma_k^{-1} (x_n - m_k) (x_n - m_k)^{\top} + \Sigma_k^{-1} \Phi_k \right] \\ \E_{q(\phi_k)} [\log p(\phi_k \mid H)] &= -\frac D 2 \log 2 \pi - \frac 1 2 \log |\Phi_0| - \frac 1 2 \tr\left[ \Phi_0^{-1} (m_k - m_0) (m_k - m_0)^{\top} + \Phi_0^{-1} \Phi_0 \right] \\ \E_{q(\phi_k)} [\log q(\phi_k \mid \tau_k)] &= - \frac D 2 \log 2\pi - \frac 1 2 \log |\Phi_k| - \frac D 2 \end{align*} \section{ELBO and RELBO} In this section, we specify the calculation of the ELBO and the RELBO. Let $\mathbb{B}$ be the beta function and $\tr(\cdot)$ the trace function. \subsection{Model Parameters} \begin{align*} \E_q [\log p (X \mid \bm{\phi})] &= - \frac {DK} 2 \log {2 \pi} - \frac K 2 \log |\Phi_k| - \sum_k \frac 1 2 \log |\Sigma_k| \\ &\quad - \frac 1 2 x_k^{\top}\Sigma^{-1}_k x_k + m_k^{\top}\Phi_k^{-1}m_k \\ &\quad - (\Sigma^{-1}_k x_k + \Phi_k^{-1}m_k)^{\top} (\Sigma_k+\Phi_k)(\Sigma_k^{-1} x_k + \Phi_k^{-1} m_k) \, . \end{align*} \begin{align*} \E_q [\log p (\bm{\phi} \mid H)] &= -\frac {DK} 2 \log {2 \pi} - \frac K 2 \log |\Phi_0| - \frac 1 2 \sum_k (m_k - m_0)^{\top}\Phi_0^{-1}(m_k - m_0) \, .
\end{align*} \begin{align*} \E_q[\log p(C \mid B, V)] &= \sum_{n} \sum_{\bm{z}} q({v}_{n \bm{z}}) \sum_k q(c_{n z k}) \int \log p(c_{n z k} \mid \bm{\beta}_{z}) d \bm{\beta}_{z} \\ &= \sum_n \sum_{\bm{z}} \lambda_{n \bm{z}} \sum_k \rho_{n \bm{z} k} \left[\Psi(\omega_{\bm{z} k}) - \Psi\left(\sumvec{\omega}_{\bm{z}}\right) \right] \end{align*} \begin{align*} \E_q[\log p(V \mid U)] &= \sum_n \sum_{\bm{z}} q(v_{n \bm{z}}) \int \log p(v_{n \bm{z}} \mid U) dU \\ &= \sum_n \mathds{1}_{\{v_n = \bm{z}\}} \sum_{\ell=1}^{|\bm{z}|-1} \left[ \Psi \left(a_{z^{\ell} 1}\right) - \Psi\left(a_{z^{\ell} 1} + a_{z^{\ell} 2}\right) + \sum_{z \prec z^{\ell}} \Psi(a_{z 2}) - \Psi(a_{z 1} + a_{z 2}) \right] \end{align*} \begin{align*} \E_q [\log p (U \mid 1, \alpha)] &= \sum_u \E_q [ \log p(u \mid 1, \alpha) ] \\ &= -|U| \log \mathbb{B}(1, \alpha) + (\alpha - 1) \sum_u \left[ \Psi(a_{u 2}) - \Psi(a_{u 1} + a_{u 2}) \right] \end{align*} \begin{align*} \E_q [\log p(\bm{\beta}_{z_0} \mid \gamma_0)] &\equiv \sum_{k} \E_q [\log p(o_k \mid 1, \gamma_0)] \\ &= (\gamma_0 - 1) \sum_k \left[ \Psi(\eta_{k 2}) - \Psi(\eta_{k 1} + \eta_{k 2}) \right] - K \log \mathbb{B}(1, \gamma_0) \end{align*} To the best of our knowledge, there is no trivial solution to $\E_q [\mathbb{B}(\cdot)]$.
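The bound of~\cite{Hughes2015} invoked next can be checked numerically. In the sketch below, \texttt{neg\_log\_beta\_fn} evaluates $-\log\mathbb{B}(\gamma\bm{\beta})$ exactly via \texttt{math.lgamma}, and the surrogate is $K\log\gamma + \sum_k \log\beta_k$ with $\bm{\beta}$ on the $(K+1)$-simplex; the concrete values are illustrative only.

```python
import math

def neg_log_beta_fn(conc):
    """-log B(a) = log Gamma(sum a) - sum_k log Gamma(a_k)."""
    return math.lgamma(sum(conc)) - sum(math.lgamma(a) for a in conc)

def surrogate(gamma, beta):
    """K log(gamma) + sum_k log(beta_k), with K = len(beta) - 1."""
    return (len(beta) - 1) * math.log(gamma) + sum(math.log(b) for b in beta)
```

Sweeping a few concentrations and simplex points confirms that the surrogate never exceeds the exact value.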
According to the bound derived in~\cite{Hughes2015}, we can write \begin{align*} - \log \mathbb{B}(\gamma \bm{\beta}) &\ge K \log \gamma + \sum_{k=1}^{K+1} \log \beta_k \\ - \E_q[\log \mathbb{B}(\gamma \bm{\beta})] &\ge K \log \gamma + \sum_{k=1}^{K+1} [\Psi(\omega_{k}) - \Psi(\sumvec{\omega})] \, . \end{align*} It follows that \begin{align*} &\E_q [ \log p(B^{-} \mid \gamma) ] \\ &= \sum_{z: (z^*, z) \in Z} \E_q [\log p(\bm{\beta}_{z}\mid \bm{\beta}_{z^*}, \gamma)] \\ &= \sum_{z: (z^*, z)} \left[ K \log \gamma + \sum_{k=1}^{K+1} [\Psi(\omega_{z^* k}) - \Psi(\sumvec{\omega}_{z^*})] + \sum_k \left(\gamma \tilde{\omega}_{z^* k} - 1 \right) \left[ \Psi(\omega_{z k}) - \Psi \left(\sumvec{\omega}_{z}\right) \right] \right] \end{align*} \begin{align*} \E[\log p(\bm{\phi} \mid H)] &= -\frac {DK} 2 \log 2 \pi - \frac K 2 \log |\Phi_0| - \frac {1} {2} \sum_k (m_k - m_0)^{\top}\Phi_0^{-1}(m_k - m_0) \end{align*} \subsubsection{For Variational Parameters} \begin{align*} \E_q[\log q(C \mid \mathcal{P})] &= \sum_{n} \sum_k q(c_{n \bm{z} k}) \log q(c_{n \bm{z} k}) = \sum_n \lambda_{n \bm{z}} \sum_k \rho_{n \bm{z} k} \log \rho_{n \bm{z} k} \end{align*} \begin{align*} \E_q[\log q(V \mid \Lambda)] &= \sum_n \sum_{\bm{z}} q(v_{n\bm{z}}) \log q(v_{n\bm{z}} \mid \lambda_{n \bm{z}}) = \sum_n \sum_{\bm{z}} \lambda_{n \bm{z}} \log \lambda_{n \bm{z}} \end{align*} \begin{align*} \E_q [\log q (U \mid A)] &= \sum_u \E_q [ \log q(u \mid a_{u 1}, a_{u 2}) ] = \sum_u \left\{ (a_{u 1} - 1) \E_q[\log u] + (a_{u 2} - 1) \E_q[\log (1-u)] - \log \mathbb{B}(a_{u 1}, a_{u 2}) \right\} \end{align*} \begin{align*} \E_q [\log q(\bm{\beta}_{z_0} )] &\equiv \sum_{k} \E_q [\log q(o_k \mid \eta_{k 1}, \eta_{k 2})] = \sum_k \left\{ (\eta_{k 1} - 1) \E_q[\log o_k] + (\eta_{k 2} - 1) \E_q[\log (1 - o_k)] - \log \mathbb{B}(\eta_{k 1}, \eta_{k 2}) \right\} \end{align*} \begin{align*} \E_q [ \log q(B^{-} \mid \Omega) ] &= \sum_{z: (z^*, z) \in Z} \E_q [\log q(\bm{\beta}_z \mid \bm{\omega}_z)] \\ &= \sum_{z: (z^*, z) \in Z} -\log \mathbb{B}\left( \bm{\omega}_z \right) + \sum_k \left(\tilde{\omega}_{z k} - 1 \right) \left[ \Psi(\omega_{z k}) - \Psi
\left(\sumvec{\omega}_{z}\right) \right] \end{align*} \begin{align*} \E_q [ \log q(\bm{\phi} \mid \bm{T}) ] &= - \frac {DK} 2 \log 2\pi - \frac 1 2 \sum_k \log |\Phi_k| - \frac {DK} 2 \end{align*} \subsection{Analysing path assignments} In the rest of the material, we use $\sum_n \sum_{\bm{z}} v_{n \bm{z}} \equiv \sum_n v_n$. Here, we analyse $\m{V}$ and can thus focus only on the parts of the RELBO that are associated with $\m{V}$, as the other parts vanish when taking the derivatives. With respect to $\tilde{\mathcal{L}}({v}_{n \bm{z}})$, \begin{align*} \tilde{\mathcal{L}}({v}_{n \bm{z}}) &= \E_{q} \left[\log p(x_n, {c}_{n \bm{z}} \mid v_{n \bm{z}}, \bm{\beta}_{z}, \bm{\phi}) + \log p({v}_{n \bm{z}} \mid \m{U})\right] - \E_q[\log q({v}_{n \bm{z}})] \\ &\quad - v_{n \bm{z}} \varrho \sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_q [ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} ] \\ &= \E_q [v_{n \bm{z}}] \left(\sum_k \E_q [\log p(x_n, c_{n \bm{z} k} \mid v_{n \bm{z}}, \phi_k)] + \E_q[\log p(v_{n \bm{z}})] \right) - \E_q [\log q(v_{n \bm{z}})] \\ &\quad - v_{n \bm{z}} \varrho \sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_q [ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} ] \\ &= \E_q [v_{n \bm{z}}] \left(\sum_k \E_q [c_{n z k}] \{ \E_q [\log f(x_n; \phi_k)] + \E_q [\log \beta_{z k}] \} + \E_q [\log p(v_{nz})] \right) - \log q(v_{n z}) \\ &\quad - v_{n \bm{z}} \varrho \sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_q [ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} ] \, .
\end{align*} Taking the variational derivative $\nabla_{q(v_{n \bm{z}})} \tilde{\mathcal{L}}$ and setting it to zero, we have \begin{align*} \log q(v_n) &= v_{n \bm{z}} \left(\sum_k \E_q [c_{n z k}] \{ \E_q [\log p(x_n \mid \phi_k)] + \E_q [\log \beta_{z k}] \} + \E_q [\log p(v_{nz})] - \mathcal{R}(v_{n \bm{z}}) \right) \\ q(v_{n}) &\propto \prod_{\bm{z}} ( \exp \{ \E_{-q(v)} [p(v_{n \bm{z}} \mid \m{U})] + \E_{-q(v)}[\log p(x_n, {c}_{n \bm{z}} \mid v_{n \bm{z}}, \bm{\beta}_{z}, \bm{\phi})] - \mathcal{R}(v_{n \bm{z}}) \} )^{v_{n \bm{z}}} \, , \end{align*} where \begin{align*} \mathcal{R}(v_{n \bm{z}}) = \varrho \sum_{z \in \bm{z}} \sum_{z' \in sibs(z)} \E_{-q(v)}[ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+}] \, . \end{align*} The specifications of $\E_q[c_{n z k}]$, $\E_q[p(x_n \mid \phi_k)]$, and $\E_q [\beta_{z k}]$ are listed in~\cref{sec:expec}. Considering the tree-based stick-breaking formulation in~\cref{eq:treestk}, we get \begin{align*} \E_{-q(v)}[\log p(v_{n \bm{z}})] &= \E_{q(U)} \left[ \log \prod_{\ell=1}^{L} u_{\bm{z}^{\ell}}\prod_{z \prec \bm{z}^{\ell}_n} (1 - u_z) \right] \\ &= \sum_{\ell=1}^{L} \E_{q(\bm{u})}\left[\log u_{\bm{z}^{\ell}} \right] + \sum_{z \prec \bm{z}^{\ell}} \E_{q(\bm{u})} \left[\log(1-u_z) \right] \\ &= \sum_{\ell=1}^{L} \left [ \Psi\left(a_{\bm{z}^{\ell} 1}\right) - \Psi\left(a_{\bm{z}^{\ell} 1} + a_{\bm{z}^{\ell} 2}\right) + \sum_{z \prec \bm{z}^{\ell}} \Psi(a_{z 2}) - \Psi(a_{z 1} + a_{z 2}) \right ] \, . \end{align*} The results can also be found in~\cite{Wang2009variational}. As regards $-\E_q [\log \bm{\beta} \cdot \bm{\phi}^{+}]$, it can be specified as \begin{align*} -\E_{-q(v)}[ \log \bm{\beta}_{z'} \cdot \bm{\phi}_n^{+} ] &= -\E_q \left[ \log \sum_k \beta_{z k} p(x_n \mid \phi_k) \right] \\ &\ge -\log \sum_k \E_q [\beta_{z k}] \E_q [p(x_n \mid \phi_k)] \end{align*} where $\mathbf{B1}$ in~\cref{rmk:bounds} is applied. This does not invalidate the optimisation, as we optimise towards one of its lower bounds.
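The digamma expression for the expected log path prior above translates directly into code. In the sketch below (a hypothetical data layout), \texttt{a1}/\texttt{a2} map node ids to their Beta variational parameters and \texttt{left\_sibs} maps a node to the siblings preceding it on the same level:

```python
import math

def digamma(x):
    """Digamma via recurrence plus asymptotic series (standard approximation)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def expected_log_path_prior(path, a1, a2, left_sibs):
    """Sum over nodes z on the path of E[log u_z] plus, for every earlier
    sibling z', E[log(1 - u_{z'})]; each term is a digamma difference."""
    total = 0.0
    for z in path:
        total += digamma(a1[z]) - digamma(a1[z] + a2[z])
        for zp in left_sibs.get(z, []):
            total += digamma(a2[zp]) - digamma(a1[zp] + a2[zp])
    return total
```

As a sanity check, a one-node path with a Beta$(1,1)$ stick gives $\Psi(1) - \Psi(2) = -1$.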
Given that $q(v_{n \bm{z}} \mid \lambda_{n \bm{z}})$ follows a delta distribution, we have \begin{align} \label{eq:lambda-nv} \lambda_{n \bm{z}} = \mathds{1}\{\bm{z} = \argmax_{\bm{z}'} q(v_{n \bm{z}'})\} \, . \end{align} \subsection{Analysing cluster assignments} Considering the variable $C$ only, the RELBO is identical to the ELBO. Let us consider the RELBO with the variable $C$ only, such that \begin{align*} \tilde{\mathcal{L}}(c_{n \bm{z} k}) &= \E_{q} \left[\log p(x_n, c_{n \bm{z} k} \mid v_{n \bm{z}}, \phi_k)\right] - \E_q[\log q({c}_{n \bm{z} k})] \\ &= \E_q [\log p(x_n \mid {c}_{n \bm{z} k}, v_{n z}, \phi_k)] + \E_q [\log p({c}_{n \bm{z} k} \mid v_{n z}, \beta_{z k})] - \E_q [\log q({c}_{n \bm{z} k})] \\ &= \E_q [v_{n z}] \E_q[{c}_{n \bm{z} k}] \left(\E_q [\log p(x_n \mid \phi_k)] + \E_q [\log \beta_{z k}]\right) - \E_q [\log q({c}_{n \bm{z} k})] \, . \end{align*} Let the derivatives be $0$, so that \begin{align*} \nabla_{q(c_{n \bm{z} k})} \tilde{\mathcal{L}} &= \E_q [v_{n z}] c_{n z k} \{ \E_q [\log p(x_n \mid \phi_k)] + \E_q [\log \beta_{z k}] \} - \log q(c_{n z k}) = 0 \,. \end{align*} Setting $\nabla_{q(c_{n \bm{z} k})} \tilde{\mathcal{L}}$ with a Lagrange multiplier to zero, we get \begin{align*} q({c}_{n \bm{z} k}) & \propto \exp \{\E_q [\log p(x_n \mid \phi_{k})] + \E_{q} \left[\log \beta_{z k} \right] \} \, . \end{align*} As $q(c_{n \bm{z} k})$ is defined to be a discrete distribution, normalisation is immediate. Clearly, since the regularisation term contains no variational variables related to $C$, this RVI updating procedure with regard to the variable $C$ is identical to the VI updates. \subsection{Analysing nCRP} For this part, we can apply the details from \cite{Wang2009variational} directly. Let us assume a proposal distribution $A$ where a certain $u$ has two matching variational parameters $a_{u1}$ and $a_{u2}$ for a beta distribution such that $q(u \mid a_{u1}, a_{{u2}})$. Ignoring the terms which are unrelated to $\m{U}$, we consider the ELBO with $u$, i.e.
$\mathcal{L}(u)$, as follows: \begin{align*} \tilde{\mathcal{L}}(u_z) &= \sum_n \sum_{\bm{z}} \mathds{1}\{\bm{z} \in \mathbb{Z}_{\tree}(z)\} \E_q [v_{n \bm{z}}] \E_q[\log p({v} \mid \bm{u}_{v})] + \E_q[\log p(u_z)] - \E_q[\log q(u_z)] \\ &= \sum_{n \mid v_n \in \mathbb{Z}_{\tree}(z)} \E_q[\log p({v} \mid \bm{u}_{v})] + \E_q[\log p(u_z)] - \E_q[\log q(u_z)] \, \end{align*} where $\bm{u}$ is the set of $u_z$ associated with the path assignment $v$, and $\mathbb{Z}_{\tree}(u)$ denotes the path labels passing through $u$. Using the standard approach for considering $\nabla_{q(u)} \tilde{\mathcal{L}}$, we can have that \begin{align} \label{eq:q-u} q(u_z) &\propto p(u_z) \exp \left\{\sum_{n} \mathds{1}\{v_{n} \in \mathbb{Z}_{\tree}(u_z)\} \E_{q(v)}[\log p({v} \mid \m{U})] \right\} \, . \end{align} According to BHMC, the condition $p(u \mid \alpha) = \B(u; 1, \alpha)$ is fixed. We can specify more details and derive the estimation for $a_{u1}$ and $a_{u2}$ via conjugacy. One can write \begin{align*} q(u_z \mid a_{z 1}, a_{z 2}) &\propto u_z^{1 + \sum_{n}\mathds{1}\{v_{n} \in \mathbb{Z}_{\tree}(u_z)\}\E_q [v_{n}] - 1} (1-u_z)^{\alpha + \sum_{n}\mathds{1}\{v_{n} \in \underline{\mathbb{Z}_{\tree}}(u_z)\}\E_q [v_{n}] - 1} \, . \end{align*} Assuming the nodes at each level are ordered in a breadth-first manner, we write $\underline{\mathbb{Z}_{\tree}}(u_z)$ for the set of paths after $u_z$; these are all the paths that pass through the node's later siblings. Given the conjugacy, we can simply update the variational parameters as \begin{subequations} \begin{align} a_{u 1} &= 1 + \sum_{n}\mathds{1}\{v_{n} \in \mathbb{Z}_{\tree}(u_z)\}\E_q [v_{n}] \label{eq:ab-v1} \\ a_{u 2} &= \alpha + \sum_{n}\mathds{1}\{v_{n} \in \underline{\mathbb{Z}_{\tree}}(u_z)\}\E_q [v_{n}] \label{eq:ab-v2} \, . \end{align} \end{subequations} \subsection{Analysing mixing proportions} Let us start with a leaf node $z$.
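Because $q(v)$ is a delta distribution, the expectations $\E_q[v_n]$ in \cref{eq:ab-v1,eq:ab-v2} reduce to counting assigned paths. A sketch with a hypothetical layout, where \texttt{assign[n]} is the MAP path id of datum $n$ and the two path sets are assumed precomputed from the tree:

```python
def update_u_params(alpha, paths_through, paths_after, assign):
    """a_{u1} = 1 + #paths assigned through u;
    a_{u2} = alpha + #paths assigned after u (later siblings' subtrees)."""
    a1 = 1.0 + sum(1 for v in assign if v in paths_through)
    a2 = alpha + sum(1 for v in assign if v in paths_after)
    return a1, a2
```

With four data points assigned to paths $0, 1, 2, 2$ and a node whose subtree contains paths $\{0,1\}$ while path $2$ lies after it, the update gives $a_{u1} = 3$ and $a_{u2} = \alpha + 2$.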
With the same technique, the RELBO with $\bm{\beta}_z$ is \begin{align*} \tilde{\mathcal{L}} (\bm{\beta}_z) &= \sum_n \sum_{\bm{z}} v_{n \bm{z}} \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \E_q \left[\log q(\bm{\beta}_{z})\right] \\ &\quad - \varrho \sum_n \sum_{\bm{z}} v_{n \bm{z}} \sum_{z' \in \bm{z}} \mathds{1}\{l(\bm{z}) \in sibs(z)\} \E_q \left[ \log \sum_k p(x_n, c_{n z k} \mid \bm{\beta}_{z}, \phi_k) \right] \\ &\equiv \sum_n \mathds{1}\{l(v_{n}) = z\} \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \E_q \left[\log q(\bm{\beta}_{z})\right] \\ &\quad - \varrho \sum_n \mathds{1}\{l(v_{n}) \in sibs(z)\} \E_q \left[ \log \sum_k p(c_{n k} \mid \beta_{z k}) p(x_n \mid c_{n k}, \phi_k)\right] \, . \end{align*} There is no closed-form solution for updating the parameters in this direct form. The key step here is to extract the summation out of the log function in the last line. We consider $\mathbf{B2}$ in Remark~\ref{rmk:bounds} to arrange the terms for each $k$. Hence, the corresponding RELBO can be rewritten as \begin{align*} \tilde{\mathcal{L}} (\bm{\beta}_z) &\approx \sum_n \mathds{1}\{l(v_{n}) = z\} \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \E_q \left[\log q(\bm{\beta}_{z})\right] \\ &\quad + \varrho\sum_n \mathds{1}\{l(v_n) \in sibs(z)\} \left(1 - \log \nu_z - \frac {\sum_k \E_q[\beta_k f(x_n; \phi_k)]} {\nu_z}\right)\,. \end{align*} Since the term $1 - \log \nu_z$ (associated with some indicators) in the second line of the above equation is constant, we can ignore it when optimising the corresponding RELBO.
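The inequality $\mathbf{B2}$ used above is, at its core, the first-order bound $\log x \le \log\nu + x/\nu - 1$, which is tight at $x = \nu$; negating it yields the $1 - \log\nu_z - \sum_k(\cdot)/\nu_z$ term. A quick numerical check (illustrative values only):

```python
import math

def log_upper_bound(x, nu):
    """First-order (tangent-line) upper bound on log x around nu:
    log x <= log(nu) + x/nu - 1, with equality at x = nu."""
    return math.log(nu) + x / nu - 1.0
```

Sweeping a few $(x, \nu)$ pairs confirms the bound holds, with equality exactly at $x = \nu$.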
Then, \begin{align*} \frac {\partial \tilde{\mathcal{L}} (\bm{\beta}_z)} {\partial q(\bm{\beta}_{z})} &\approx \sum_n \mathds{1}\{l(v_{n}) = z\} \E_q \left[\log p\left(\bm{c}_{nz} \mid \bm{\beta}_{z}\right)\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \log q(\bm{\beta}_{z}) \\ &\quad - \varrho\sum_n \mathds{1}\{l(v_n) \in sibs(z)\} \frac 1 {\nu_z} \sum_k \E_q [p(c_{n k} \mid \beta_{z k}) p(x_n \mid c_{n k}, \phi_k)] \,. \end{align*} We further denote \[ \mathcal{R}(\beta_{z k}) \triangleq \sum_n \mathds{1}\{l(v_n) \in sibs(z)\} \frac 1 {\nu_z} {\E_q[\beta_k f(x_n; \phi_k)]} \, . \] It follows that \begin{align*} \log q(\bm{\beta}_z) &= \sum_n \mathds{1}\{l(v_{n}) = z\} \E_q \left[\log p(\bm{c}_{nz} \mid \bm{\beta}_{z})\right] + \E_q \left[\log p\left(\bm{\beta}_{z} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \sum_{k} \varrho \mathcal{R}(\beta_{z k}) \\ &= \sum_k \left(\sum_n \mathds{1}\{l(v_{n}) = z\} \E_q \left[\log p(\bm{c}_{nzk} \mid \bm{\beta}_{z})\right] + \E_q \left[\log p\left({\beta}_{zk} \mid \bm{\beta}_{z^*}, \gamma \right) \right] - \varrho \mathcal{R}(\beta_{z k}) \right) \\ &= \sum_k \left\{\sum_n \mathds{1}\{l(v_{n}) = z\} \E_q [c_{n z k}] + \gamma \frac {\omega_{z^* k}} {\sum_{k'} \omega_{z^* k'}} - 1 - \frac {\varrho \mathcal{R}(\beta_{z k})} {\log \beta_{z k}} \right\} \log \beta_{z k} \\ &= \sum_k \left\{\sum_n \mathds{1}\{l(v_{n}) = z\} \left(\rho_{n z k} - \frac {\varrho} {\log \beta_{z k}} \frac {\rho_{n z k} \E_q [\phi_k^{+}]} {\bm{\rho}_{n z} \cdot \E_q[\bm{\phi}^{+}]} \right) + \gamma \frac {\omega_{z^* k}} {\sum_{k'} \omega_{z^* k'}} - 1 \right\} \log \beta_{z k} \, , \end{align*} given \begin{align*} \E_{q(\beta_{z^* k})} [\log p(\beta_{z k} \mid \beta_{z^* k}, \gamma)] &= \E_{q(\beta_{z^* k})} [(\gamma \beta_{z^* k} - 1) \log \beta_{z k}] = \left(\gamma \frac {\omega_{z^* k}} {\sum_{k'} \omega_{z^* k'}} - 1 \right) \log \beta_{z k} \, .
\end{align*} and \[ \nu_z = \sum_k \E_q [\beta_{zk} f(x_n; \phi_k)] = \sum_k \E_{q(\bm{\beta}_z)} [\beta_{zk}] \E_{q(\phi_k)} [f(x_n; \phi_k)] \, \] which, as discussed in~\cite{paisley2010two}, reaches the tightest bound. That is, for the parameter updates, we obtain \begin{align*} \omega_{z k} &= \sum_n \mathds{1}\{l(v_{n}) = z\} \left(\rho_{n z k} - \frac {\varrho} {\log \beta_{z k}} \frac {\rho_{n z k} \E_q [\phi_k^{+}]} {\bm{\rho}_{n z} \cdot \E_q[\bm{\phi}^{+}]} \right) + \gamma \frac {\omega_{z^* k}} {\sum_{k'} \omega_{z^* k'}} \, . \end{align*} This handles the leaf nodes. We now proceed by induction and apply the results in~\cite{huang2019bayesian} to integrate out the leaves. That is, one level is removed and the model returns a tree with a reduced set of $\bm{\beta}$'s. We then employ the above techniques again to obtain the updates for the nodes at level $L-1$. This procedure iterates until all the sub-trees are collapsed, i.e., $L=0$. Finally, the procedure returns to the root $z_0$, which follows a truncated GEM. Notice that, for the root node, the regularisation term is absent. We can therefore apply the results from~\cite{blei2006variational}. \begin{align} \label{eq:eta} \begin{split} \eta_{k 1} &= 1 + \sum_{n} \rho_{n z_0 k} \\ \eta_{k 2} &= \gamma_0 + \sum_{n} \sum_{j=k+1}^K \rho_{n z_0 j} \end{split} \end{align} Also, it is trivial that \begin{align*} \E_q [\log o_{k}] &= \Psi(\eta_{k 1}) - \Psi(\eta_{k 1} + \eta_{k 2}) \\ \E_q [\log (1 - o_{k})] &= \Psi(\eta_{k 2}) - \Psi(\eta_{k 1} + \eta_{k 2}) \, . \end{align*} Thereafter, we can derive $\beta_{z_0 k}$ such that $\E_q [\log \beta_{z_0 k}] = \sum_{j < k} \E_q [\log(1-o_j)] + \E_q[\log o_k]$. \subsection{Analysing mixture kernels} We have set $\phi$ to be parameters of multivariate Normal distributions. Therefore, one can set $\tau_k$ to be a multivariate Normal as well for any $k$. Let $\phi = \mu$, with the covariance matrix $\Sigma$ assumed known.
Hence, we write $\tau = \N({m}, \Phi)$ and $\phi \sim \tau$. \begin{align*} \tilde{\mathcal{L}}(\phi_k) &= \E_q[\log p(\m{X} \mid \m{C}, \m{V}, {\phi}_k)] + \E_q [\log p(\phi_k \mid H)] - \E_q [\log q(\phi_k \mid T_k)] \\ &\quad - \varrho \sum_n \sum_{z \in v_{n}} \sum_{z' \in sibs(z)}\E_q \left[ \log \sum_{k'} \beta_{z' k'} f(x_n; \phi_{k'}) \right] \, . \end{align*} The updates for the kernel will be non-trivial given the regularised term. Thus, we apply $\mathbf{B1}$ to change the objective for optimising. \begin{align*} \tilde{\mathcal{L}}(\phi_k) &\approx \E_q[\log p(\m{X} \mid \m{C}, \m{V}, {\phi}_k)] + \E_q [\log p(\phi_k \mid H)] - \E_q [\log q(\phi_k \mid T_k)] \\ &\quad - \varrho \sum_n \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \log \sum_{k'} \E_q \left[ \beta_{z' k'} f(x_n; \phi_{k'}) \right] \\ &= \sum_n \sum_{\bm{z}} \E_q [v_{n \bm{z}} c_{n \bm{z} k} \log f(x_n; \phi_k)] + \E_q [\log p(\phi_k \mid H)] - \E_q [\log q(\phi_k \mid T_k)] \\ &\quad - \varrho \sum_n \sum_{z \in v_{n}} \sum_{z' \in sibs(z)} \log \sum_{k'} \E_q \left[ \beta_{z' k'} f(x_n; \phi_{k'}) \right] \, . \end{align*} Note that we take the ordinary derivative of $\tilde{\mathcal{L}}$ here rather than the variational derivative. In addition, we separate the RELBO into the following terms.
\begin{align*} \nabla_{m_k} \E_{q} [\log p(x_n \mid c_{nzk}, \m{V}, \phi_k)] &= v_{n \bm{z}} \rho_{n \bm{z} k} \left[ \Sigma^{-1}_k (x_n - m_k) \right] \\ \nabla_{m_k} \E_{q} [\log p(\phi_k)] &= - \Phi_0^{-1}(m_k - m_0) \\ \nabla_{m_k} \E_{q} [\log q(\phi_k)] &= 0 \\ \nabla_{m_k} \log \sum_{k'} \E_q \left[ \beta_{zk'} f(x_n; \phi_{k'}) \right] &= \mathcal{Q}_{nzk} \Sigma^{-1}_k({x}_n - m_k) \, \end{align*} where $\mathcal{Q}_{n z k}$ is defined as $\frac {\E_q[\beta_{k} f(x_n; \phi_k)]} {\E_{q}[\bm{\beta}_z \cdot \bm{\phi}_z^{+}]}$ since \begin{align*} \nabla_{\theta_k}\log \sum_{k'} \E_q \left[ \beta_{k'} f(x_n; \phi_{k'})\right] &= \frac {\nabla_{\theta} \E_q[\beta_{k} f(x_n; \phi_k)]} {\E_{q}[\sum_{k'} \beta_{k'} f(x_n; \phi_{k'})]} \\ &= \frac {\nabla_{\theta} \E_q[\beta_{k} f(x_n; \phi_k)]} {\E_{q}[\bm{\beta}_z \cdot \bm{\phi}_z^{+}]} \\ &= \frac {\E_q[\beta_{k} f(x_n; \phi_k)]} {\E_{q}[\bm{\beta}_z \cdot \bm{\phi}_z^{+}]} \nabla_{\theta} \log \E_q[\beta_{k} f(x_n; \phi_k)] \\ &= \mathcal{Q}_{n z k} \nabla_{\theta_k} \log \E_q[\beta_{k} f(x_n; \phi_k)] \\ &= \mathcal{Q}_{n z k} \nabla_{\theta_k} (\log \E_q[\beta_{k}] + \log \E_q[f(x_n; \phi_k)]) \\ &= \mathcal{Q}_{n z k} \nabla_{\theta_k} \log \E_q[f(x_n; \phi_k)]\, . \end{align*} Therefore, letting \[ \mathcal{R}_{n \bm{z} k} = v_{n \bm{z}} \rho_{n \bm{z} k} - \varrho\sum_{z' \in v_n}\sum_{z \in sibs(z')} \mathcal{Q}_{nzk} \, , \] we can summarise that \begin{align*} \nabla_{m_k} \tilde{\mathcal{L}} &= \Sigma^{-1}_k\sum_n \mathcal{R}_{n \bm{z} k} ({x}_n - m_k) - \Phi_0^{-1}(m_k - m_0) \, . \end{align*} However, the term $\mathcal{Q}_{nzk}$ still involves the variable $\Phi$, and $\E_q[f(x_n; \phi_k)]$ keeps $\Phi$ in the exponent.
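The weight $\mathcal{Q}_{nzk}$ and the resulting fixed-point update of the mean can be sketched numerically. For readability, the sketch below assumes $D = 1$ (all matrices reduce to scalars) and the mean-field factorisation $\E_q[\beta_k f] = \E_q[\beta_k]\E_q[f]$; the names \texttt{e\_beta}, \texttt{e\_f}, and \texttt{R} are hypothetical stand-ins for $\E_q[\beta_k]$, $\E_q[f(x_n;\phi_k)]$, and $\mathcal{R}_{n\bm{z}k}$.

```python
def q_weight(e_beta, e_f):
    """Q_{n z k} = E[beta_k] E[f(x_n; phi_k)] / sum_k' E[beta_k'] E[f(x_n; phi_k')]."""
    prods = [b * f for b, f in zip(e_beta, e_f)]
    total = sum(prods)
    return [p / total for p in prods]

def update_mean(x, R, sigma_inv, phi0_inv, m0):
    """Scalar (D = 1) version of the fixed-point mean update:
    m_k ≈ (sigma_inv * sum_n R_n + phi0_inv)^{-1}
          (sigma_inv * sum_n R_n x_n + phi0_inv * m0)."""
    num = sigma_inv * sum(r * xn for r, xn in zip(R, x)) + phi0_inv * m0
    den = sigma_inv * sum(R) + phi0_inv
    return num / den
```

With unit weights and a flat prior the update reduces to the sample mean, which is the expected degenerate behaviour.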
We assume that $\mathcal{Q}_{nzk}$ is not related to $m_k$ and set $\nabla_{m_k} \tilde{\mathcal{L}}=0$, so we can write \begin{align} \label{eq:update-m} m_k &\approx \left(\Sigma_k^{-1} \sum_n \mathcal{R}_{n \bm{z} k} + \Phi_0^{-1}\right)^{-1} \left(\Sigma_k^{-1} \sum_n \mathcal{R}_{n \bm{z} k} x_n + \Phi_0^{-1} m_0\right) \, \end{align} which is, in spirit, an empirical update. \section{The variational inference with ELBO} \subsection{Local updates} Then, let us discuss the logarithms. \begin{align*} \log p(x_n, \bm{c}_{n z} \mid v_{n z}, \bm{\phi}) &= \log \left(\prod_k p(x_n, c_{n z k} \mid v_{n z}, \phi_k) \right)^{v_{n z}} \\ &= \log \left\{\prod_k [\beta_{z k}f(x_n; \phi_k)]^{c_{n z k}} \right\}^{v_{n z}} \\ &= v_{nz} \sum_k c_{n z k} [\log \beta_{z k} + \log f(x_n; \phi_k)] \end{align*} In the following equations, for simplicity, we write $\E_q$ for the expectation excluding the variable stated in $\partial q(\cdot)$. \begin{align*} \mathcal{L}(v_{n z}) &= \E_q [v_{n z}] \left(\sum_k \E_q [\log p(x_n, c_{n z k} \mid v_{n z}, \phi_k)] + \E_q[\log p(v_{nz})] \right) - \E_q [\log q(v_{n z})] \\ \frac {\partial \mathcal{L}(v_{n z})} {\partial q(v_{n z})} &= v_{n z} \left(\sum_k \E_q [c_{n z k}] \{ \E_q [\log f(x_n; \phi_k)] + \E_q [\log \beta_{z k}] \} + \E_q [\log p(v_{nz})] \right) - \log q(v_{n z}) = 0 \\ q(v_{n z}) &= \begin{cases} 1 & \argmax_{z} \sum_k \E_q [c_{n z k}] \{\E_q [\log \beta_{z k}] + \E_q [\log f(x_n; \phi_k)] \} + \E_q [\log p(v_{nz})] \\ 0 & otherwise \end{cases} \end{align*} as $q(v)$ is a delta distribution, and $\argmax_{z} \log q(v_{n z}) \equiv \argmax_z q(v_{n z})$.
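Because $q(v)$ is a delta distribution, the local update for $v_n$ reduces to a hard argmax over paths. A minimal sketch, where \texttt{scores} holds the per-path objective $\sum_k \E_q[c_{nzk}]\{\E_q[\log\beta_{zk}] + \E_q[\log f(x_n;\phi_k)]\} + \E_q[\log p(v_{nz})]$, assumed precomputed:

```python
def select_path(scores):
    """Delta q(v): put mass 1 on the path with the highest score."""
    best = max(range(len(scores)), key=lambda z: scores[z])
    return [1.0 if z == best else 0.0 for z in range(len(scores))]
```

Ties are broken towards the first maximiser, which is an arbitrary but deterministic convention.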
\begin{align*} \mathcal{L}(c_{n z k}) &= \E_q [\log p(x_n, c_{n z k} \mid v_{n z}, \phi_k)] - \E_q [\log q(c_{n z k})] \\ &= \E_q [\log p(x_n \mid c_{n z k}, v_{n z}, \phi_k)] + \E_q [\log p(c_{n z k} \mid v_{n z}, \beta_{z k})] - \E_q [\log q(c_{n z k})] \\ &= \E_q [v_{n z}] \E_q[c_{n z k}] \E_q [\log f(x_n; \phi_k)] + \E_q [v_{n z}] \E_q [c_{n z k}] \E_q [\log \beta_{z k}] - \E_q [\log q(c_{n z k})] \\ \frac {\partial \mathcal{L}(c_{n z k})} {\partial q(c_{n z k})} &= \E_q [v_{n z}] c_{n z k} \{ \E_q [\log f(x_n; \phi_k)] + \E_q [\log \beta_{z k}] \} - \log q(c_{n z k}) = 0 \\ q(c_{n z k}) &\propto \exp \{ \mathds{1}\{v_{n} = z, c_{n} = k\} (\E_q [\log \beta_{z k}] + \E_q [\log f(x_n; \phi_k)])\} \end{align*} given that $\E_q[v_{n \bm{z}}] = \mathds{1}\{v_n = \bm{z}\}$. \subsection{Global updates} For $\m{B}^{-}$, we first look at the leaf nodes. Assuming $z$ is a leaf node, we see \begin{align*} \mathcal{L}(\bm{\beta}_{z}) &= \sum_n \sum_k \E_q [\log p(c_{n z k} \mid v_{n z}, {\beta}_{z k})] + \E_q [\log p({\beta}_{z k} \mid {\beta}_{z^* k}, \gamma)] - \E_q [\log q(\beta_{z k})] \\ &= \sum_n \E_q [v_{n z}] \sum_k \E_q [c_{n z k} \log {\beta}_{z k}] + \E_q [(\gamma {\beta}_{z^* k} - 1)\log {\beta}_{z k}] - \E_q [\log q(\beta_{z k})] \\ \frac {\partial \mathcal{L}(\bm{\beta}_{z k})} {\partial q(\bm{\beta}_{z k})} &= \left \{\sum_n \E_q [v_{n z} c_{n z k}] + \gamma \E_q [\beta_{z^* k}] - 1 \right \} \log{\beta}_{z k} - \log q(\beta_{z k}) = 0 \\ q(\beta_{z k}) &\propto \beta_{z k}^{\sum_n v_{n z} \rho_{n z k} + \gamma \E_q [\beta_{z^* k}] - 1} \end{align*} where $\E_q [\beta_{z^* k}] = {\omega_{z^* k}} / ({\sum_{k'} {\omega}_{z^*k'}})$.
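Matching exponents in the last proportionality gives the leaf Dirichlet parameters $\omega_{zk} = \sum_n v_{nz}\rho_{nzk} + \gamma\,\E_q[\beta_{z^*k}]$. A sketch with a hypothetical layout (\texttt{assign[n]} is the leaf assigned to datum $n$, \texttt{rho[n][k]} its responsibilities):

```python
def update_leaf_omega(rho, assign, leaf, parent_omega, gamma):
    """omega_{z k} = sum_{n: v_n = z} rho_{n z k}
                     + gamma * omega_{z* k} / sum(omega_{z*})."""
    K = len(parent_omega)
    total = sum(parent_omega)
    omega = [gamma * parent_omega[k] / total for k in range(K)]
    for n, z in enumerate(assign):
        if z == leaf:            # only data routed through this leaf contribute
            for k in range(K):
                omega[k] += rho[n][k]
    return omega
```

The parent proportions enter only through their normalised expectation, so the update needs just the parent's Dirichlet parameters, not samples of $\bm{\beta}_{z^*}$.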
\begin{align*} \mathcal{L}(\bm{\phi}) &= \sum_k \sum_n \E_q [\log p(x_n \mid c_{n z k}, v_{n z}, {\phi}_{k})] + \E_q [\log p({\phi}_{k})] - \E_q [\log q(\phi_{k})] \\ &= \sum_k \sum_n \E_q [v_{n z}] \E_q[c_{n z k}] \E_q [\log f(x_n; \phi_k)] + \E_q [\log p(\phi_k)] - \E_q [\log q(\phi_{k})] \\ \frac {\partial \mathcal{L}({\phi}_k)} {\partial q(\phi_k)} &= \sum_n \E_q [\log p(x_n \mid c_{n z k}, v_{n z}, {\phi}_{k})] + \E_q [\log p({\phi}_{k})] - \log q(\phi_{k}) = 0 \\ q(\phi_k) &\propto \exp \left \{ \sum_n \E_q [v_{n z} c_{n z k}] \E_q [\log f(x_n; \phi_k)] + \E_q [\log p(\phi_k)] \right \} \\ &= \exp \left \{ \sum_n v_{n z} \rho_{n z k} \E_q [\log f(x_n; \phi_k)] + \E_q [\log p(\phi_k)] \right \} \end{align*} \begin{align*} \mathcal{L}(u_z) &= \sum_n \E_q [\log p(v_n \mid \m{U})] + \E_q [\log p(u_z)] - \E_q [\log q(u_z)] \\ \frac {\partial \mathcal{L}(u_z)} {\partial q(u_z)} &= \sum_{n} \sum_{\bm{z}} \E_q [v_{n \bm{z}}] \E_q [\log p(v_{n \bm{z}})] + \E_q [\log p(u_z)] - \log q(u_z) \\ &= \left( \sum_{n} v_{n \bm{z}} \right) \log u + \left( \sum_{n} \mathds{1}\{v_n = \bm{z}, z \in \underline{V}(z)\} + \alpha - 1 \right) \log (1 - u) - \log q(u_z) = 0 \\ q(u_z) &\propto u^{\sum_{n} \mathds{1}\{v_n = \bm{z}\}} (1 - u)^{\sum_{n} \mathds{1}\{v_n = \bm{z}, z \in \underline{V}(z)\} + \alpha - 1} \end{align*} where $\underline{V}(z)$ is the set of paths that follow after the node $z$, such that the paths pass through the parent of $z$ and are sampled after $z$.
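The posterior $q(u_z)$ derived above is a Beta density: $u^{s}(1-u)^{t+\alpha-1}$ corresponds to $\mathrm{Beta}(s+1,\, t+\alpha)$, where $s$ counts walks that stop exactly at $z$ and $t$ counts walks that pass through it. A minimal numerical sketch, with all counts hypothetical:

```python
import math

# q(u_z) ∝ u^s (1 - u)^(t + alpha - 1)  is  Beta(s + 1, t + alpha).
# Hypothetical counts: s walks stop exactly at node z, t walks pass through it.
s, t, alpha = 7, 12, 1.0
a, b = s + 1.0, t + alpha

E_u = a / (a + b)                                 # posterior mean of u_z
var_u = a * b / ((a + b) ** 2 * (a + b + 1.0))    # posterior variance

# Log normalizing constant: B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
```

The expectation $\E_q[\log u_z]$ needed in the other updates is $\psi(a)-\psi(a+b)$; it is omitted here only because the digamma function $\psi$ is not in the Python standard library.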
\section{Introduction} There has been much interest in the study of the asymptotic shape of growing clusters. One of the earliest such studies was by Richardson, who showed that the asymptotic shape of the infected region in an epidemic model has linear segments~\cite{Richardson73}. In the Eden model \cite{Eden}, which models an epidemic without recovery, it has been shown that the asymptotic shape of the growing cluster is not a perfect circle~\cite{DDhar}. The shape of growing clusters has also been studied in sandpile models. In the abelian sandpile model in two dimensions, with an initial background of $h$ particles at each lattice site \cite{Boer08, sadhu09}, it was found that the cluster in general has a convex asymptotic shape, which becomes more circular as $h$ is decreased, and tends to a perfect circle as $h\rightarrow -\infty$. In this paper, we study the Eulerian walker (EW) model on the square lattice. This model is related to the sandpile model and was initially introduced by Priezzhev {\it et al.}~\cite{Priezzhev96,Priezzhev98} as a simpler variant of the sandpile model of self-organized criticality (SOC)~\cite{BTW}. It has subsequently found applications in the design of derandomized simulations of Markov chains~\cite{Propp09}, efficient information transfer protocols in computer networks~\cite{Panagiotouk09}, and modelling the coevolution of viruses and immune systems~\cite{Izmailian07}. We study the model, starting from a disordered background, by Monte Carlo simulations. We have a single walker that moves on the lattice, and we study the shape of the region visited by it, which grows with the length $N$ of the walk. Interestingly, we find evidence that this region is asymptotically a perfect circle.
The circular shape is reminiscent of the circular shape in the rotor-router aggregation model studied by Propp (see \cite{Levine02,Kleber05}), where, for a special initial configuration, the region is almost a perfect circle~\cite{Levine05}, with departures from the circle being of order 1. The circular shape of the EW cluster is not very evident for small walks. For example, in Fig.~\ref{fig:1}, we have shown clusters formed by the EW of $N=10^5$ and $10^7$ steps. Clearly, only for large $N$ does the circular shape start to emerge, and it requires careful statistical analysis to see this when $N$ is not so large. \figOne The EW model can also be looked upon as a particular limit of a growing self-repelling walk, in which the walker preferentially jumps along a bond that has been visited the least number of times so far. This model was studied in $1d$ by Toth and Veto~\cite{Toth08}. In the zero-temperature limit, in one specific variant, this becomes the EW model, and a finite temperature corresponds to noise. It was found that the number of visits at a distance $y$ from the origin satisfies the scaling function $F(y) = 1 - y, \ \text{for} \ 0 \le y \le 1$. We find that the same scaling function holds even in $2d$. We also study the model in the presence of noise, where there is a small probability $\epsilon$ that the walker moves in a direction not given by the EW rule. We find that a small noise changes the asymptotic behaviour. The diameter of the growing region scales as $A(\epsilon)N^{1/2}$ in the presence of noise and as $N^{1/3}$ in its absence. The paper is organized as follows. In Sec.~\ref{sec:model}, we define our model. In Sec.~\ref{sec:case1}, we give details of the simulation without noise (i.e. $\epsilon=0$). We find that the asymptotic shapes of lines of constant average number of visits by an EW are perfectly circular, within statistical errors.
We will argue that the variance of the number of visits at any fixed distance from the origin tends to a finite number for large $N$, and we also obtain the scaling function for the average number of visits. In Sec.~\ref{sec:case2} we discuss the case with noise, and finally we summarize our results in Sec.~\ref{sec:conc}. \section{Model} \label{sec:model} \figTwo The Eulerian walker is defined as follows: We consider a square lattice. We associate with each site an arrow which can point along one of the four directions, denoted by N, E, S and W (Fig.~\ref{fig:2}). In the initial configuration, the direction of the arrow at each site is chosen independently and with equal probability. We put a walker at the origin which moves on the lattice. The motion of the walker is determined by the configuration of arrows on the lattice, which the walker in turn modifies. The walker obeys the following rule: at each time step, the walker, after arriving at a site, rotates the arrow at that site in the clockwise direction by $90^{\circ}$, and then moves one step along the new arrow direction. It was shown in Ref. \cite{Priezzhev96} that on any finite graph, using the above rules, the walker eventually visits all sites and settles into a limit cycle which is an Eulerian circuit visiting each directed bond exactly once in a cycle. This is not the case on an infinite lattice, where the walker always finds new bonds which have not been visited earlier, and the number of visited sites keeps on growing. It was already noted that in $2d$ the diameter of the region visited by the walker grows as $N^{1/3}$, but the asymptotic shape was not investigated. The EW with noise is defined as follows: at each time step the walker rotates the arrow at its location by $0^{\circ}$, $90^{\circ}$, $180^{\circ}$ or $270^{\circ}$ with probability $\epsilon/3$, $1-\epsilon$, $\epsilon/3$ or $\epsilon/3$ respectively.
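The rules above are straightforward to simulate directly. The sketch below is a minimal implementation of the stated dynamics (it is not the authors' code): arrow directions are drawn lazily on first visit, and the noise strength $\epsilon$ enters exactly as in the preceding paragraph, with $\epsilon=0$ recovering the noiseless walker.

```python
import random
from collections import defaultdict

# Directions N, E, S, W as (dx, dy); a clockwise 90-degree rotation is index + 1 (mod 4).
STEPS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def eulerian_walk(n_steps, eps=0.0, seed=0):
    """Run an Eulerian walker for n_steps; return (visit counts, final position).

    eps is the noise strength: with probability 1 - eps the arrow is rotated
    clockwise by 90 degrees, and by 0, 180 or 270 degrees with probability
    eps/3 each. Initial arrow directions are drawn uniformly on first visit.
    """
    rng = random.Random(seed)
    arrows = {}                       # site -> arrow index
    visits = defaultdict(int)
    x, y = 0, 0
    for _ in range(n_steps):
        visits[(x, y)] += 1
        a = arrows.get((x, y))
        if a is None:
            a = rng.randrange(4)      # random initial arrow at a fresh site
        if eps == 0.0 or rng.random() < 1.0 - eps:
            a = (a + 1) % 4           # the deterministic EW rule
        else:
            a = (a + rng.choice((0, 2, 3))) % 4   # noisy rotation
        arrows[(x, y)] = a
        dx, dy = STEPS[a]
        x, y = x + dx, y + dy
    return visits, (x, y)
```

Even at modest $N$ one can check that the visited region stays far more compact than the number of steps taken, in line with the slow $N^{1/3}$ growth of the diameter discussed below.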
We will show that the diameter of the region visited grows as $N^{1/2}$ for nonzero $\epsilon$ and as $N^{1/3}$ when $\epsilon=0$. \section{Numerical simulations for evolution without noise} \label{sec:case1} First we discuss the case $\epsilon=0$. We denote the number of times the walker has visited site ${\bf x}$ after $N$ steps by $n_{N}({\bf x})$, and the walker's square displacement from the origin by $R_{N}^2$. We evaluate $\overline{n_{N}}({\bf x})$ (the overline represents averaging over initial conditions), the variance of $n_N({\bf x})$, denoted by $Var\left[n_{N}({\bf x})\right]$, and the mean square displacement $\overline{R_N^2}$ by averaging over $10^6$ different initial configurations. \figThree \subsection{Mean square displacement} According to the heuristic argument given in Ref. \cite{Priezzhev96}, if at time $t$ the number of sites visited by the walker is $S(t)$, then in the previous $4S(t)$ time steps, most of these sites have been visited exactly $4$ times, except a small fraction at the boundary. As the cluster is seen to have few holes, it is nearly compact, and $S(t) \sim D^{2}(t)$, where $D(t)$ is the diameter of the cluster at time $t$. Thus we get \begin{equation} \frac{dD(t)}{dt} \sim \frac{1}{D^2}, \end{equation} which implies that after $N$ steps, \begin{equation} \label{eq:rrms} D_{N} \sim N^{\nu} \quad \text{with}\ \nu = \frac{1}{3}. \end{equation} Figure~\ref{fig:3} shows the mean square displacement of the EW as a function of its length $N$. The averaging is done over $10^6$ realizations. The straight line, which is the best fit to the data, has a slope $0.33 \pm 0.01$, consistent with Eq.~(\ref{eq:rrms}). \subsection{Average number of visits} As seen in Fig.~\ref{fig:1}, the cluster of visited sites is quite irregular in shape. Also, sites that have been visited at least $n$ times have rough boundaries, with several islands of sites that have been visited fewer times than all the surrounding sites.
However, if we average over different realizations of the initial arrow configuration, some interesting regularities are seen. In Fig.~\ref{fig:4}, we have plotted lines of $\overline{n_{N}}({\bf x}) = \zeta$, for different $\zeta$ as indicated in the figure, for an EW of length $N = 10^6$ averaged over $10^7$ realizations. To obtain these lines, we add a diagonal bond between $(x,y)$ and $(x+1,y+1)$ for each $(x,y)$, and extend the definition of $\overline{n_N}({\bf x})$ to all real ${\bf x}$ by linear interpolation within each small triangle. The plot shows that these lines are very nearly perfectly circular. \figFour The shape of the rings, for large $N$, can be characterized by a function \begin{equation} f(\theta) = \lim_{N \rightarrow \infty} \frac{r_N(\theta)}{ N^{1/3}}, \end{equation} where $r_{N}(\theta) \ (0 \le \theta < 2\pi)$ is the angle-dependent radius. If the shape is a perfect circle, $f(\theta) = constant$; otherwise $f(\theta)$ is a periodic function of $\theta$ that can be expressed as a Fourier cosine series. Since the shape has fourfold symmetry, only harmonics whose wavenumber is a multiple of $4$ can appear, and we may write \begin{equation} \label{eq:fseries} f(\theta) = \sum_{m = 0}^{\infty} a_{4 m} \cos( 4 m \theta). \end{equation} The vanishing of the coefficients $a_{4m}$ for all $m \neq 0$ then implies a circular shape. We define \begin{equation} A_{4}(r) = \frac{\sum_j \overline{n_N}({\bf x}_j) \cos(4\theta_j) }{ \sum_j \overline{n_N}({\bf x}_j)} \end{equation} as the normalized amplitude of the fourth Fourier mode. The summation over $j$ runs over all the lattice points whose Euclidean distance from the origin lies between $r$ and $r+1$. Here $\theta_j$ is the angle that the vector ${\bf x}_j$ makes with the $x$-axis. This function, for a fixed $r$, has a well defined limit for $N \rightarrow \infty$. In Fig.~\ref{fig:5}(a), we have shown $A_4(r)$ as a function of $r$ for $N=10^6$ steps.
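The amplitude $A_4(r)$ can be computed directly from any visit-count field. The sketch below uses a hypothetical, exactly radially symmetric test profile (mimicking $F(y)=1-y$), rather than simulation data; for such a field the shell values of $A_4$ vanish only up to the number-theoretic lattice-point fluctuations inherent in such shell sums.

```python
import math

def fourier_amplitude_4(n_bar, r):
    """Normalized 4th Fourier mode of the field n_bar(x, y) over the lattice
    sites whose distance from the origin lies in [r, r + 1)."""
    num = den = 0.0
    R = int(math.ceil(r + 1))
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            d = math.hypot(x, y)
            if 0 < d and r <= d < r + 1:
                w = n_bar(x, y)
                num += w * math.cos(4.0 * math.atan2(y, x))
                den += w
    return num / den if den else 0.0

# Hypothetical radially symmetric test field mimicking the profile F(y) = 1 - y.
def profile(x, y):
    return max(0.0, 1.0 - math.hypot(x, y) / 50.0)
```

For the symmetric test field, individual shells give nonzero values of the order of the lattice-sum fluctuations, but the average over many shells tends to zero; a field with an explicit $\cos 4\theta$ distortion instead gives a clearly nonzero average.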
The plot shows that for large $r$, $A_{4}(r)$ approaches zero with fairly large, irregular-looking fluctuations. These fluctuations, for a fixed $r$, \emph{do not become smaller with statistical averaging or larger $N$}. These fluctuations occur because the lattice points lying between radii $r$ and $r+1$ are not distributed perfectly evenly along the ring. They are of number-theoretic origin, and have been studied in the mathematics literature under the name of the `Gauss circle problem'~\cite{Grosswald84}. The analysis of moments of $A_4$ is therefore not very useful for estimating the shape, and we have to adopt a different procedure. \figFive The simplest way to obtain a quantitative estimate of the shape of the rings is to fit the data and obtain the mean radius for various rings of constant $\overline{n_N}({\bf x})$. In Fig.~\ref{fig:5}(b), we have shown such a ring and its fit for $\overline{n_N}({\bf x}) = 1$. The best fit gives the mean radius $\langle R \rangle = 145.436 \pm 0.003$ for $N=10^6$. For the other rings we also find error bars of the same order, showing that the lines of constant average number of visits are circular in shape within an error of $0.002\%$. The inset shows a close-up of a particular region of the ring. We also calculate the root mean square deviation, $\Delta r(\zeta)$, of the distance to the origin of points on the line of constant $\overline{n_N}({\bf x})=\zeta$ with mean radius $\langle R \rangle_{\zeta}$. This is shown in Fig.~\ref{fig:5}(c) as a function of $\theta$ for various $\zeta$. The plot shows that $\Delta r(\zeta)/\langle R \rangle_{\zeta}$ is of the order $10^{-4}$ and decreases as $\langle R \rangle_{\zeta}$ increases. Another way to estimate the shape of the cluster formed by the visited sites is to compute various moments of the data. Since all four directions are equivalent for the walker, we expect that $\overline{n_N}({\bf x})$ has a fourfold symmetry.
For a given length $N$, we calculate the moments $\langle x^4 \rangle$, $\langle y^4 \rangle$ and $\langle x^2 y^2 \rangle$. If the shape of the cluster were perfectly circular we would have \begin{equation} \label{Eq:8} \frac{ \langle x^4 \rangle}{ \langle x^2 y^2 \rangle} = \frac{\langle y^4 \rangle}{ \langle x^2 y^2 \rangle} = 3. \end{equation} For $N=10^6$ steps averaged over $10^6$ initial realizations, we find that $\langle x^4 \rangle / \langle x^2 y^2 \rangle = \langle y^4 \rangle / \langle x^2 y^2 \rangle = 3.007$, which is consistent with the asymptotic value $3$, the deviation being only about $0.2\%$. \subsection{Scaling of $\overline{ n_{N}}({\bf x})$} For large $N$, $\overline{ n_{N}}({\bf x})$, the average number of visits to the site ${\bf x}$, depends only on $|{\bf x}|$. Therefore, we expect that $\overline{ n_{N}}({\bf x})$ satisfies the scaling form \begin{equation} \label{eq:FSS} \overline{ n_{N}}(|{\bf x}|) = a N^{1/3} F \left( \frac{ |{\bf x}|} {b N^{1/3}} \right), \end{equation} where $F(y)$ is the scaling function. The scaling function can be determined as follows: Let ${\bf x}_1$ and ${\bf x}_2$ (with $|{\bf x}_2| > |{\bf x}_1|$) be the positions of two different sites. The walker will have made several visits to ${\bf x}_1$ before it first reaches ${\bf x}_2$. Afterwards, because of the local Euler-like organization of the arrows, both sites are visited equally often. Therefore, the difference between the number of times the sites ${\bf x}_1$ and ${\bf x}_2$ are visited remains bounded as $N\rightarrow \infty$, i.e., \begin{equation} \overline{n_N}({\bf x}_1) - \overline{n_N}({\bf x}_2) = a N^{1/3} \left[ F(y_1) - F(y_2) \right] = constant. \end{equation} This implies that $F(y)$ must be a linear function of $y$. Using the freedom in the choice of the constants $a$ and $b$, we can set $F(0) = 1$ and $F(1) = 0$.
Therefore, we have \begin{equation} \label{eq:fx} F(y) = \begin{cases} 1 - y \quad {\rm for} \ 0 \le y \le 1, \cr 0 \quad {\rm otherwise}. \end{cases} \end{equation} This simple form of $F(y)$ was already noted by Toth and Veto for the problem in one dimension~\cite{Toth95,Toth08}. The normalization condition $\int \overline{n_N}({\bf x}) d{\bf x} = N$ gives $ab^2 = 3/\pi$. \figSix In Fig.~\ref{fig:6}, we have plotted the finite-size scaling of $\overline{n_N}(x{\bf e}_x)$ for $N=10^6, 10^7$ and $10^8$ steps with $a=0.5$ and $b=1.38$. In the same plot, we have also shown the scaling form given by Eq.~(\ref{eq:fx}). The plot shows that as $N$ is increased, the scaled data approach the scaling form rather slowly. In particular, for smaller $|{\bf x}|$ the approach to the asymptotic curve seems to be slow. \subsection{Variance of $n_{N}({\bf x})$} \figSeven We also monitored the variance of $n_{N}({\bf x})$ as a function of $N$ for different $|{\bf x}|$. This is plotted in Fig.~\ref{fig:7}. The graph shows that $Var\left[n_{N}({\bf x})\right]$ increases only slowly with $N$, and suggests that $Var[n_N({\bf x})]$ remains finite for all fixed ${\bf x}$. This can be understood as follows: the variance of $n_{N}({\bf x})$ arises only from the randomness in the initial visits of the walker to ${\bf x}$. Once the local bonds have been organized into a near Euler circuit, the subsequent increments in $n_N({\bf x})$ are nearly deterministic. \subsection{Roughness exponent} We define the surface of the set of visited sites as all visited sites that have at least one unvisited neighbor. Let $W_N ^2$ denote the variance of the distance from the origin of a randomly picked surface site of the cluster formed by the EW of length $N$.
Then, $W_N^2$ is defined by \begin{equation} \label{eq:width} W_N ^2 = \overline{ \frac{1}{M} \sum_{i=1}^{M} \left( |{\bf x}_i| - \langle R \rangle \right)^2 }, \end{equation} where $M$ is the number of surface points ${\bf x}_i$ of the cluster, $\langle R \rangle$ is the average distance of a perimeter site, and the overbar denotes averaging over different clusters. We define the width of the surface as the square root of $W_N^2$. In Fig.~\ref{fig:8}, we have plotted $W_N$ for various $N$, on a log-log scale, averaged over $10^4$ different clusters. We observe that $W_N \sim N^{\delta} \sim R^{3 \delta} \sim R^{\alpha}$ with $\delta = 0.136 \pm 0.02$, which gives the roughness exponent $\alpha = 0.40 \pm 0.06$. The effective value of $\alpha$ seems to decrease with $N$, and it is difficult to estimate its limiting value. It is consistent with the asymptotic value $1/3$ expected for the Kardar-Parisi-Zhang (KPZ) surface growth process~\cite{HHZ95}. Note that in order to think of the growth of the cluster of visited sites as a local growth process, we have to redefine time so that the radius of the cluster grows linearly in the new variable. \figEight \section{Numerical simulations for evolution with noise}\label{sec:case2} For the evolution with noise, we also monitored $\overline{ R_{N}^2 }$ and $\overline{n_{N}}({\bf x})$ for various $N$ by averaging over $10^6$ different initial configurations. \subsection{Mean square displacement} For zero noise, for $N < 10^3$, $\overline{ R_{N}^2 }$ increases roughly linearly with $N$, and then as $N^{2/3}$. In Fig.~\ref{fig:3}, we have also shown $\overline{ R_{N}^{2} }$ as a function of $N$ for noise strengths $\epsilon = 0.001$, $0.01$, and $0.1$. For $\epsilon=0.001$, there is only a small change from the Eulerian-like behaviour. However, when the noise strength is increased, there is a clear crossover in $\overline{ R_{N}^{2} }$ from the Eulerian-like to a simple random-walk behaviour, i.e. $ \overline{ R_{N}^{2} } \sim N$.
This crossover can be observed even for a noise strength as small as $\epsilon=0.01$. That the presence of noise changes the critical behaviour is not so unexpected. In equilibrium critical phenomena, the well known Harris criterion~\cite{Harris74} characterizes a large class of systems in which the critical behavior is substantially altered by the presence of disorder. In SOC models, the Manna model~\cite{Manna91} with stochastic toppling rules is in a different universality class than the model with deterministic toppling rules (e.g., the Bak-Tang-Wiesenfeld model)~\cite{BenHur96}. In fact, different types of stochasticity can yield different universality classes. For example, in the directed sandpile with stickiness one gets a different behaviour than in the stochastic Manna model (for the undirected case the situation is less clear \cite{Mohanty,Munoz}). It is known that in some sandpile models, adding noise can change the transition from first order to continuous \cite{Lee-Lee}. \figNine Let $m= N \epsilon$ be the number of `mistakes' the walker makes in a walk of $N$ steps, and let $\overline{ R^2_N} (m)$ represent the mean square displacement of a walk with $m$ mistakes. We would like to know how $\overline{ R^2_N} (m)$ increases with $m$. Consider the effect of a single mistake. The rules of the Eulerian walk are such that as the walk evolves, the initial random arrow directions are gradually rearranged into an Euler circuit. A wrong step disrupts an evolving local Eulerian circuit. This local defect can be repaired, and a possibly different locally Euler-like circuit formed, when the walker revisits the site. Therefore, there is not much change in $\overline{ R^2_N} (m)$ for small $m$. However, if $m$ is larger (i.e., of the order of $N$), the walker keeps on making new mistakes before the old mistakes can be corrected, and the self-organization into Eulerian circuits is lost.
This suggests that there should be a value $m^{*}$ such that walks are nearly Eulerian for $m < m^{*}$, and random walk-like for $m > m^{*}$. We find that $m^{*} \sim N^{\Delta}$ with $\Delta < 1$. Furthermore, we find that $\overline{ R^2_N} (m)$ satisfies a scaling relation \begin{equation} \label{eq:R2m} \overline{ R^2_N} (m) = \overline{ R^2_N} (0) G \left( \frac{m}{N^{\Delta}} \right), \end{equation} where $G(x)$ is the scaling function. In Fig.~\ref{fig:9}, we have plotted $\overline{ R^2_{N}}(m) / \overline{ R^2_N} (0)$ vs $m/N^{\Delta}$ for $N = 1 \times 10^5$, $5 \times 10^5$ and $1 \times 10^6$ steps. A good collapse is obtained for $\Delta = 0.74 \pm 0.01$. Equation~(\ref{eq:R2m}) can also be written as (using $\epsilon N = m$) $\overline{ R^2_N}(\epsilon) = \overline{ R^2_N}(0) G \left( \epsilon N^{1-\Delta} \right)$. We have $\overline{ R^2_N}(0) \sim N^{2/3}$ and $\overline{ R^2_N}(\epsilon) \sim N$, which implies that $G(x)$ should increase as $x^{1/[3(1-\Delta)]}$ for large $x$, with the crossover length $N^*(\epsilon) \sim \epsilon^{-1/(1-\Delta)}$ set by the argument of $G$ becoming of order unity. A direct reliable estimate of $N^*(\epsilon)$ is not possible from our data, and this scaling prediction is difficult to check from our simulations. \subsection{Average number of visits} \figTen In Fig.~\ref{fig:10}, we have shown the contour of $\overline{n_N}({\bf x}) = 1$ for an EW of length $N=10^6$ steps in the presence of noise with strength $\epsilon = 0.01$. The averaging is done over $10^6$ initial conditions. The best fit to the data gives the mean radius $\langle R \rangle = 296.401 \pm 0.015$, i.e., circular within an error of $0.005\%$. The inset shows a close-up of a particular region of the ring. Comparing this with the $\epsilon = 0$ case, we see that the mean radius of an EW with $1 \%$ noise strength is about twice the mean radius without noise.
In Fig.~\ref{fig:11}, we have plotted $\overline{n_N}(x {\bf e}_x)$ vs $x/N^{1/2}$, where ${\bf e}_x$ is the unit vector along the $x$-axis, for an EW in the presence of noise of strength $\epsilon = 0.01$ for $N =1\times 10^5$, $1\times 10^6$ and $5\times 10^6$ steps. The plot shows that, except near the origin where there is a $\log(N)$ dependence, the data for various $N$ collapse on top of each other. \figEleven We also obtained the width, $W_N$, of the surface formed by the EW in the presence of noise. We find that $W_N \sim N^{1/2} \sim R$, as expected for random walks. Hence for nonzero $\epsilon$, the asymptotic shape of the cluster is not circular in individual realizations. Circular symmetry is seen only in the ensemble averages (i.e., in lines of constant $\overline{n}({\bf x})$). \section{Conclusions}\label{sec:conc} We have studied an Eulerian walker on a two-dimensional square lattice using Monte Carlo simulations. In the absence of noise, the mean square displacement $\langle R_N^2 \rangle \sim N^{2/3}$. We find that lines of constant average number of visits appear to be perfectly circular. This result is not entirely unexpected, given the fact that Eulerian walkers can be considered as a particular type of derandomized random walkers \cite{Propp09}. However, note that the result does depend on the fact that Eulerian walkers in two dimensions return to the origin infinitely often, and while the {\it averaged} cluster shape would be expected to show circular symmetry, one does not expect {\it each} individual cluster to show a spherical shape in dimensions $d>2$. We also estimated the roughness exponent for the boundary of the visited region. This has a slow convergence to its asymptotic value, but our data are consistent with it belonging to the KPZ universality class.
We also find that even a small randomness in the rule for the walker's next step changes the asymptotic properties: the mean square displacement shows a crossover from the Eulerian-like $\langle R_N^2 \rangle \sim N^{2/3}$ to a simple random-walk behaviour $\langle R_N^2 \rangle \sim N$. In higher dimensions $d > 2$, an Eulerian walker does not return to previously visited sites often, and one would expect random-walk-like behavior even in the zero-noise case. DD thanks B. Toth and B. Veto for discussions. We thank J. Propp for a critical reading of an earlier version of the paper.
\section{Introduction} \label{sec:intro} \subsection{Definitions and results}\label{subsec:def} A random-step Markov process, introduced by Kalikow in~\cite{Kalikow90} under the name `random Markov process', is a natural generalization of the classical $n$-step Markov chain. In this type of stationary process, rather than the current state determining the probability distribution of the state in the next time step, as in a Markov chain, there is a random `look-back time' which determines how much of the state history is used to find the distribution of the random variable at the next time step. If the random look-back time is bounded above by some $n \in {\mathbb N}$, then the random Markov process is trivially an $n$-step Markov chain. In this paper, we present new results characterizing those stationary processes, on both countable and uncountable alphabets, that are random-step Markov processes, as well as a number of examples that show the sharpness of these results. The following notation is used throughout the paper: For any set $A$ and $\Omega = A^{\mathbb{Z}}$, the set $A$ is called the \emph{alphabet} for $\Omega$ and $\Omega$ is represented as the set of doubly infinite words on $A$. For a given word $\mathbf{\omega} = (\omega_i)_{i \in \mathbb{Z}} \in\Omega$, the infinite sequence $\bracks{\omega_i}_{i\in{\mathbb Z}_{-}}\in A^{{\mathbb Z}_{-}}$ is called the \emph{past}. Similarly, for any $m \geq 1$, the sequence $\bracks{\omega_{i}}_{i=-m}^{-1}$ is called the \emph{$m$-past} of $\mathbf{\omega}$. To condense notation, we will write, for example, ${\mathbb P}\bracks{X_0=\omega_0\mid\bracks{X_i}_{-m}^{-1}=\bracks{\omega_i}_{-m}^{-1}}$ as shorthand for \[ {\mathbb P}\bracks{X_0=\omega_0\mid X_{-1}=\omega_{-1},\ldots,X_{-m}=\omega_{-m}}.
\] \begin{defi}[Random-step Markov Process]\label{def:randomMarkov} A stationary process $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ on an alphabet $A$, with measurable sets $\mathcal{A}$, is called a \emph{random-step Markov process} (or \emph{random Markov process}) if{f} there exists an independent stationary process on the positive integers, $\left(L_i\right)_{i \in \mathbb{Z}}$, and a stationary coupling $\hat{\mathbb{P}}$ of $\left(X_i\right)_{i \in \mathbb{Z}}$ and $\left(L_i\right)_{i \in \mathbb{Z}}$ such that $L_0$ is independent of $\left\{X_i:i <0\right\}$, and so that for every $n \in \mathbb{Z}^+$, $\mathbf{\omega} \in A^{\mathbb{Z}}$ and measurable set $E_0 \subseteq A$, \begin{multline}\label{eq:rm-def} \hat{{\mathbb P}}\bracks{X_0 \in E_0 \mid \bracks{X_i}_{i=-n}^{-1} = (\omega_i)_{i=-n}^{-1} \wedge L_0=n}\\ =\hat{{\mathbb P}}\bracks{X_0 \in E_0\mid \bracks{X_i}_{i<0} = \bracks{\omega_i}_{i<0} \wedge L_0=n}. \end{multline} The stationary coupling $\left(\left(X_i, L_i\right)_{i \in \mathbb{Z}}, \hat{\mathbb{P}}\right)$ is called a \emph{complete random-step Markov process} (or \emph{complete random Markov process}). \end{defi} For every $i \in \mathbb{Z}$, the variable $L_i$ is called the \emph{look-back time (or distance) for $X_i$}, as one need only know $L_0$ and then look at the $L_0$-past of $(X_i)_{i \in \mathbb{Z}}$ to determine the law of $X_0$ exactly. Note that in Definition \ref{def:randomMarkov}, since conditional probabilities are only defined up to sets of measure $0$, it does not matter whether the condition \eqref{eq:rm-def} holds for all $\mathbf{\omega}$ or only almost all. In the case that the alphabet $A$ is either finite or countably infinite, it suffices to verify condition \eqref{eq:rm-def} in the special case that the set $E_0$ is a singleton. Previously, the types of stationary processes given by Definition \ref{def:randomMarkov} have been simply called `random Markov processes'.
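A toy sampler illustrates the definition: the look-back time $L_0$ is drawn first, independently of the past, and the next symbol then depends on the history only through its last $L_0$ symbols. All distributions and table values below are hypothetical and chosen purely for illustration.

```python
import random

def sample_next(history, rng):
    """Draw (L0, X0): the look-back time L0 is drawn independently of the past,
    and the law of X0 depends on the history only through its last L0 symbols.
    All distributions are hypothetical toy values."""
    L0 = 1 if rng.random() < 0.7 else 2     # P(L0 = 1) = 0.7, P(L0 = 2) = 0.3
    context = tuple(history[-L0:])
    # Hypothetical table values: probability that X0 = 1 given the context.
    p_one = {(0,): 0.2, (1,): 0.8,
             (0, 0): 0.5, (0, 1): 0.9, (1, 0): 0.1, (1, 1): 0.5}[context]
    return L0, (1 if rng.random() < p_one else 0)

rng = random.Random(42)
word = [0, 1]                               # an initial 2-past
for _ in range(1000):
    _, x0 = sample_next(word, rng)
    word.append(x0)
```

Since $L_0$ is bounded above by $2$ here, this toy process is trivially a $2$-step Markov chain, exactly as remarked above for bounded look-back times.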
In this paper, we shall continue to use this terminology, but specify `random-step Markov process' in the definition to clarify the source of the additional randomness, compared to a usual $n$-step Markov process. A notion similar to that of a random Markov process is a `uniform martingale', also introduced by Kalikow~\cite{Kalikow90}, for which the martingale convergence theorem applies in a uniform way. That is, the distribution of $X_0$ given the past can be approximated uniformly and arbitrarily well by a distribution which depends only on the $m$-past, for some $m$ large enough. For a measure $\mu$, let $||\mu||_{\text{TV}}$ denote the total variation norm, so that for probability measures $\mu$ and $\nu$, the total variation distance is given by $||\mu - \nu||_{\text{TV}} = \sup_E \left\{|\mu\left(E\right) - \nu\left(E\right)|\right\}$. Recall also that if $\mu$ and $\nu$ are both measures on a countable set $A$, then $||\mu - \nu||_{\text{TV}} = \frac{1}{2} \sum_{a \in A}|\mu(a) - \nu(a)|$. \begin{defi}\label{def:um} A stationary process $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ on alphabet $A$ is called a \emph{uniform martingale} if{f} for every $\varepsilon > 0$ there exists $n_{\varepsilon}$ so that for every $n \geq n_\varepsilon$ and every $\left(\omega_i\right)_{i<0} \in \prod_{i<0} A$, \begin{equation}\label{eq:um-def} \left\|\mathbb{P}\left(\cdot \mid \left(X_i\right)_{-\infty}^{-1} = \left(\omega_i\right)_{-\infty}^{-1}\right) - \mathbb{P}\left(\cdot \mid \left(X_i\right)_{-n}^{-1} = \left(\omega_i\right)_{-n}^{-1}\right)\right\|_{\text{TV}} < \varepsilon.
\end{equation} In the case that $A$ is countable, for every $k \geq 0$, define the \emph{$k$-th variation of $\mathbb{P}$} to be \begin{align} \operatorname{var}_k &= \operatorname{var}_k(\mathbb{P}) \notag\\ &= \sup\bigg\{|\mathbb{P}\left(X_0 = a_0 \mid (X_i)_{i=-k}^{-1} = (\omega_i)_{i = -k}^{-1} \right) \notag\\ & \qquad - \mathbb{P}\left(X_0 = a_0 \mid (X_i)_{i=-\infty}^{-1} = (\omega_i)_{i = -\infty}^{-1} \right) | \ :\ (\omega_i)_{i \leq -1} \in A^{\mathbb{Z}^-}\bigg\} \label{def:nvar} \end{align} Note that if $A$ is finite, this is equivalent to the condition that for every $\varepsilon>0$, there exists $n_{\varepsilon}$ so that for every $n \geq n_{\varepsilon}$, $a \in A$, and $\left(\omega_i\right)_{i<0} \in \prod_{i<0} A$, \begin{multline}\label{eq:UM_finite} \bigg|\mathbb{P}\left(X_0 = a \mid \left(X_i\right)_{i=-\infty}^{-1} = \left(\omega_i\right)_{i=-\infty}^{-1}\right)\\ - \mathbb{P}\left(X_0 = a \mid \left(X_i\right)_{i=-n}^{-1} =\left(\omega_i\right)_{i=-n}^{-1}\right)\bigg| < \varepsilon. \end{multline} \end{defi} The definition of the $k$-th variation in Equation \eqref{def:nvar} is chosen to agree with the definition of $k$-th variations of $g$-functions in Equation \eqref{def:nvar-g-fn} to come. If $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a random Markov process with look-back distances $(L_i)_{i \in \mathbb{Z}}$, then for every $n \geq 1$, and $(\omega_i)_{i < 0}$, \begin{equation}\label{eq:tv-lookback} \| \mathbb{P}\left( \cdot \mid (X_i)_{-\infty}^{-1} = (\omega_i)_{-\infty}^{-1}\right) - \mathbb{P}\left(\cdot \mid (X_i)_{-n}^{-1} = (\omega_i)_{-n}^{-1}\right) \|_{TV} \leq \mathbb{P}(L_0 > n), \end{equation} which tends to $0$ as $n$ tends to infinity, uniformly in the choice of $(\omega_i)_{i < 0}$. In the original paper by Kalikow~\cite{Kalikow90}, the weaker condition in Equation \eqref{eq:UM_finite} was used as the definition of a uniform martingale. 
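For countable alphabets, the two characterizations of the total variation distance recalled above (the sup over events and the half-sum of pointwise differences) agree, and both are easy to compute directly. A minimal sketch with hypothetical measures:

```python
def tv_distance(mu, nu):
    """Total variation distance between two probability measures on a countable
    alphabet, given as dicts mapping symbols to probabilities."""
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(a, 0.0) - nu.get(a, 0.0)) for a in support)

def tv_via_sup(mu, nu):
    """Same distance via the sup over events, attained on E = {a : mu(a) > nu(a)}."""
    support = set(mu) | set(nu)
    E = [a for a in support if mu.get(a, 0.0) > nu.get(a, 0.0)]
    return sum(mu.get(a, 0.0) for a in E) - sum(nu.get(a, 0.0) for a in E)
```

The second function makes explicit where the supremum in the definition is attained, which is the standard argument behind the half-sum identity.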
In the case of processes on infinite alphabets, the stronger condition is necessary, as was noted in~\cite{Kalikow12}. The reason for the name `uniform martingale' is that for any stationary process $((X_i)_{i \in \mathbb{Z}}, \mathbb{P})$ and every $a$, the sequence $\left\{\mathbb{P}\left(X_0 = a \mid \left(X_i\right)_{i=-m}^{-1} = \left(\omega_i\right)_{i=-m}^{-1}\right)\right\}_{m \geq 1}$ is a martingale and converges, pointwise, to \begin{equation}\label{eq:martingale} \mathbb{P}\left(X_0 = a \mid \left(X_i\right)_{i=-\infty}^{-1} = \left(\omega\right)_{i=-\infty}^{-1}\right). \end{equation} A stationary process is a uniform martingale if this convergence is uniform in $\left(\omega_i\right)_{i<0}$. Kalikow~\cite[Theorem 1.7]{Kalikow12} showed that, for any Lebesgue probability space, every aperiodic measure-preserving transformation is isomorphic to a random Markov process on a countable alphabet. While the statement of Kalikow's Theorem 1.7~\cite{Kalikow12} concludes only that such transformations are isomorphic to uniform martingales, the proof, in fact, constructs a random Markov process. In~\cite{Kalikow90}, Kalikow showed that a uniform martingale on a binary alphabet is also a random Markov process. The purpose of this paper is to both strengthen this result for finite alphabets and to examine some extensions of this result to processes on both countable and uncountable alphabets. In Section \ref{sec:ctble}, it is shown that uniform martingales on a countable alphabet that satisfy an additional condition, discussed below, are random Markov processes. \begin{defi}[Dominating Measure] \label{def:dommeasure} Let $\bracks{\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}}$ be a stationary process on an alphabet $A$ and $\mu$ a measure on $A$. 
Then, $\mu$ is called a \emph{dominating measure} for $\bracks{\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}}$ if{f} for every event $E$ in the $\sigma$-algebra generated by $\left(X_{i}\right)_{i<0}$ and every measurable set $A_0 \subseteq A$, $\mathbb{P}\bracks{X_0 \in A_0 \mid E} \leq \mu\bracks{A_0}$. If the measure $\mu$ is finite ($\mu\left(A\right)< \infty$), then $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is said to have a \emph{finite dominating measure}. \end{defi} It was shown by Parry~\cite{Parry66, Parry69} and Rohlin~\cite{Rohlin65} that every measure-preserving transformation is isomorphic to a stationary process on a countable alphabet for which the past determines the present. These results were strengthened by Kalikow~\cite{Kalikow12}. In the case of random Markov processes, the notion of the past determining the present can sometimes be expressed precisely in terms of the look-back distances. In particular, we shall consider a particular class of random Markov processes, for which the look-back distance $L_0$ and the previous states uniquely determine the state $X_0$. \begin{defi} A random Markov process $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ on a countable alphabet $A$ is called a \emph{deterministic random-step Markov process} if{f} there exists a representation as a complete random-step Markov process $\left(\left(X_i, L_i\right)_{i \in \mathbb{Z}}, \mathbb{P}'\right)$ so that for every sequence $\left(\omega_i\right)_{i \leq 0}$, \begin{multline}\label{eq:def-det-rm} \mathbb{P}'\left(X_0 = \omega_0 \mid \left(X_i\right)_{i <0} = \left(\omega_i\right)_{i<0} \wedge L_0 = n\right)\\ =\mathbb{P}\left(X_0 = \omega_0 \mid \left(X_i\right)_{-n}^{-1} = \left(\omega_i\right)_{-n}^{-1} \wedge L_0 = n\right)\in \left\{0,1\right\}. \end{multline} \end{defi} In this paper, only deterministic random Markov processes on countable alphabets are considered.
In particular, since the condition in equation \eqref{eq:def-det-rm} is trivially satisfied for many stationary processes on uncountable alphabets, a generalization of the notion of a deterministic random Markov process to an uncountable alphabet would require a careful choice of definition, which is not addressed here. In the study of random Markov processes on a countable alphabet, for any integer $n$ and sequence $\left(\omega_i\right)_{-n\leq i <0} \in A^n$, the values \begin{equation}\label{eq:table-vals} \left\{\mathbb{P}\left(X_0 = \omega_0 \mid \left(X_i\right)_{i=-n}^{-1} = \left(\omega_i\right)_{i=-n}^{-1} \wedge L_0 = n\right) \mid \omega_0 \in A\right\} \end{equation} are sometimes called the \emph{table values} for the event \[\left\{\left(X_i\right)_{i=-n}^{-1} = \left(\omega_i\right)_{i=-n}^{-1} \wedge L_0 = n\right\}\] (see~\cite{Kalikow90}). A deterministic random Markov process is then one with a representation as a complete random Markov process for which all table values are either $0$ or $1$. While the property of being a deterministic random Markov process might appear to be quite strong, the following theorem shows that, in fact, all uniform martingales on finite alphabets can be expressed in this form. \begin{thm}\label{thm:countable} Every uniform martingale, $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$, on a countable alphabet with a finite dominating measure, $\mu$, is a deterministic random Markov process. In the case that the alphabet $A$ is finite, $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a deterministic random Markov process with finite expected look-back distance if{f} the $n$-th variations are summable: \[ \sum_{n \geq 1} \operatorname{var}_n (\mathbb{P}) < \infty.
\] \end{thm} As a corollary, note that if $A$ is a finite set, every stationary process on $A^{\mathbb{Z}}$ has a finite dominating measure: for example, for each $a \in A$, set $\mu\left(a\right) = 1$ so that $\mu\left(A\right) = |A| < \infty$. Thus, Theorem \ref{thm:countable} simplifies for processes on a finite alphabet. \begin{cor}[Finite Alphabets] \label{thm:ergfinlet} Let $A$ be a finite set and $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ be a uniform martingale. Then $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a deterministic random Markov process and furthermore has a representation as a complete random Markov process with finite expected look-back distance if{f} $\sum_n \operatorname{var}_n (\mathbb{P}) < \infty$. \end{cor} Theorem \ref{thm:countable} is best-possible in the sense that there are uniform martingales on countable alphabets without a dominating measure that are not random Markov processes. Such an example is given in Section \ref{sec:ctble}. For uncountable alphabets, there are examples of uniform martingales on an uncountable alphabet with a finite dominating measure that are not random Markov processes. Such an example is given in Section \ref{sec:unctble}, where a stronger condition (Berbee's ratio condition, Definition \ref{cond:four} below) is considered that implies that a process is a uniform martingale and also a random Markov process. Berbee~\cite{Berbee87} considered Markov representations of stationary processes, based on the properties of their associated $g$-functions. The condition on $g$-functions that was considered by Berbee in \cite{Berbee87} was different than that for uniform martingales (Definition \ref{def:um}).
Given a stationary process $((X_i)_{i \in \mathbb{Z}}, \mathbb{P})$ and $n \geq 1$, define \begin{multline}\label{eq:Berbee-cond} r_n = r_n(\mathbb{P}) = \log\bigg(\sup\bigg\{\frac{\mathbb{P}(X_0 = x_0 \mid X_{-1} = x_{-1}, \ldots)}{\mathbb{P}(X_0 = y_0 \mid X_{-1} = y_{-1}, \ldots)} \bigg\rvert \\ x_0 = y_0, x_{-1} = y_{-1}, \ldots, x_{-n} = y_{-n} \bigg\} \bigg). \end{multline} Reframing the results in the language of random Markov processes, Berbee showed that if $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a uniform martingale on a finite alphabet with $\sum_n r_n < \infty$, then $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a random Markov process. Theorem \ref{thm:countable} provides a stronger result when applied to a process on a finite alphabet and Theorem \ref{thm:ratio} below shows that a closely related condition is sufficient for any process, even on an uncountable alphabet, to be a random Markov process. The following definition (Definition \ref{cond:four} below) is equivalent in the case of a countable alphabet to the underlying stationary process having $\{r_n\}_{n \geq 1}$ as in Equation \eqref{eq:Berbee-cond} satisfying $\lim_{n \to \infty} r_n = 0$. A stationary process satisfying the condition in Definition \ref{cond:four} is sometimes also called \emph{log-continuous}, due to the formulation from Equation \eqref{eq:Berbee-cond}.
\begin{defi}[Berbee's ratio condition] \label{cond:four} A stationary process on an alphabet $A$ is said to satisfy \emph{Berbee's ratio condition} (or simply the \emph{ratio condition}) if{f} for every $\varepsilon >0$ there exists $n = n(\varepsilon)$ so that for every $m \geq n$, every $(\omega_i)_{i<0} \in A^{\mathbb{Z^-}}$, and every measurable set $E_0 \subseteq A$, \begin{equation}\label{eq:ratiodef} \left| \frac{{\mathbb P}\bracks{X_0 \in E_0 \mid\bracks{X_i}_{i\in{\mathbb Z}_{-}}=\bracks{\omega_i}_{i\in{\mathbb Z}_{-}}}}{{\mathbb P}\bracks{X_0 \in E_0\mid \bracks{X_{i}}_{-m}^{-1}=\bracks{\omega_{i}}_{-m}^{-1}}} - 1\right| < \varepsilon. \end{equation} Note that in the expression in equation \eqref{eq:ratiodef}, a $0$ only appears in the denominator when there is also a $0$ in the numerator. The convention adopted for this possible scenario is that $\frac{0}{0} = 1$. \end{defi} Note that, since conditional probabilities are at most $1$, the condition in equation \eqref{eq:ratiodef} implies that $\left|{\mathbb P}\bracks{X_0 \in E_0 \mid\bracks{X_i}_{i\in{\mathbb Z}_{-}}=\bracks{\omega_i}_{i\in{\mathbb Z}_{-}}} - {\mathbb P}\bracks{X_0 \in E_0\mid \bracks{X_{i}}_{-m}^{-1}=\bracks{\omega_{i}}_{-m}^{-1}}\right| < \varepsilon$; in particular, every stationary process satisfying the ratio condition is a uniform martingale. As in the definition of a random Markov process, Definition \ref{def:randomMarkov}, it suffices that the condition in \eqref{eq:ratiodef} hold for merely almost all $\mathbf{\omega}$. Examples are given in Section \ref{sec:unctble} to show that not every uniform martingale satisfies the ratio condition and not every stationary process that satisfies the ratio condition has a dominating measure. However, even in the case where the alphabet is uncountable, every stationary process that satisfies Berbee's ratio condition is a random Markov process; this is stated formally in the following theorem. \begin{thm}\label{thm:ratio} Let $A$ be any set and $\Omega = A^{\mathbb{Z}}$. Let $\mathbb{P}$ be a stationary probability measure on $\Omega$ such that $\left(\Omega, \mathbb{P}\right)$ satisfies the ratio condition. Then $\left(\Omega, \mathbb{P}\right)$ is a random Markov process. \end{thm} Finally, some open questions and conjectures are given in Section \ref{sec:open}.
\subsection{Background} \label{sec:gfunctions} Random Markov processes and uniform martingales, both introduced by Kalikow~\cite{Kalikow90}, are related to the study of `$g$-functions'. Roughly, a $g$-function determines the probability distribution on the alphabet given a possibly infinite word. These are also known as transition probabilities. This section gives a short background on both uniform martingales and (continuous) $g$-functions, including, for completeness, some sketches of known constructions of measures for certain $g$-functions, which are used throughout. It should be noted that this is merely an overview; we do not attempt to survey this area in its entirety. For further information, we point the reader towards the many references given within this section. The notion of a probability distribution on the present state of a process depending on infinitely many past states was introduced by Onicescu and Mihoc \cite{OM35} under the name `cha\^{i}nes \`{a} liaisons compl\`{e}tes' (now called `chains with complete connections') and subsequently developed by Doeblin and Fortet~\cite{DF37}. Connections between a number of closely related definitions are given in lecture notes by Maillard \cite{Maillard07}. The terminology and notation used in this paper are as follows. \begin{defi} For a countable set $A$, a function $f : A \times \prod_{i<0} A \to [0,1]$ is called a \emph{$g$-function} if{f} for every $\left(\omega_i\right)_{i<0} \in A^{\mathbb{Z}^-}$, \begin{equation}\label{eq:g-function} \sum_{a \in A} f\left(a, \omega_{-1}, \omega_{-2}, \ldots\right) = 1. \end{equation} \end{defi} Every stationary process $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ on a countable alphabet $A$ corresponds to a $g$-function defined by \begin{equation}\label{eq:g-measure} f\left(a, \omega_{-1}, \omega_{-2}, \ldots\right) = \mathbb{P}\left(X_0 = a \mid X_{-1} = \omega_{-1}, X_{-2} = \omega_{-2}, \ldots\right).
\end{equation} For a particular $g$-function $f$, any probability measure that satisfies equation \eqref{eq:g-measure} is called a $g$-measure for $f$. For every $k$, define the \emph{$k$-th variation of $f$} by \begin{multline}\label{def:nvar-g-fn} \operatorname{var}_k\left(f\right) = \sup\left\{\left|f\left(x_0, x_{-1}, x_{-2}, \ldots\right) - f\left(y_0, y_{-1}, y_{-2}, \ldots\right)\right|\right.\\ \left.\lvert x_0 =y_0, x_{-1} = y_{-1}, \ldots, x_{-k} = y_{-k}\right\}. \end{multline} If $f$ satisfies equation \eqref{eq:g-measure} for a stationary process $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$, write $\operatorname{var}_k (\mathbb{P})$ for $\operatorname{var}_k\left(f\right)$. When the $g$-function in question is clear from context, we shall sometimes write $\operatorname{var}(k)$. The notion of a $g$-function is reserved here for processes on a countable alphabet. Keane~\cite{Keane72} looked at those $g$-functions that are continuous as linear forms on the space of probability measures for a compact metric space. For stationary distributions on a finite alphabet, a uniform martingale corresponds precisely to a continuous $g$-function. In the same paper, Keane proved that if $A$ is finite and $f$ is a continuous $g$-function, there exists at least one $g$-measure for $f$ and gave certain sufficient conditions for uniqueness of these measures. Since then, there have been a number of results related to the existence and uniqueness of $g$-measures for a particular $g$-function~\cite{Berbee87, BM93, Hulse06, JOP07, Keane72, MU01, Sarig99}. In the remainder of this section, we review some results related to $g$-functions. 
Given an alphabet $A$ and a stochastic process $\left(X_i\right)_{i \in \mathbb{Z}} \in A^{\mathbb{Z}}$ with probability measure $\mathbb{P}$, the function $f: \prod_{i \leq 0} A~\to~[0,1]$ given by \begin{equation}\label{eq:marginal_fn} f\left(a_0, a_{-1}, a_{-2}, \ldots\right) = \mathbb{P}\left(X_0 = a_0 \mid X_{-1}=a_{-1}, X_{-2} = a_{-2}, \ldots\right) \end{equation} is uniquely determined up to sets of $\mathbb{P}$-measure $0$. By definition, for any $\mathbf{a} \in \prod_{i <0} A$, the function $f\left(\cdot\,, \mathbf{a}\right): A \to [0,1]$ determines a probability measure on $A$. If $\left(X_i\right)_{i \in \mathbb{Z}}$ is stationary, then also, for any $i \in \mathbb{Z}$, \begin{equation}\label{eq:stationary} \mathbb{P}\left(X_i = a_0 \mid X_{i-1}=a_{-1}, X_{i-2} = a_{-2}, \ldots\right) = f\left(a_0, a_{-1}, a_{-2}, \ldots\right). \end{equation} Further, when $|A| < \infty$, the set $\prod_{i \leq 0} A$ can be endowed with a metric and a compact topology. Doeblin and Fortet~\cite{DF37} examined these types of functions \[ \mathbf{a} \mapsto {\mathbb P}\left(X_0 = a_0 \mid X_{-1} = a_{-1}, X_{-2} = a_{-2}, \ldots\right), \] which they called \emph{chains with complete connections}, and gave results on properties of limits of these functions under certain conditions. A function $f$, as in equations \eqref{eq:marginal_fn} and \eqref{eq:stationary}, need not necessarily arise from a fixed probability measure. Keane~\cite{Keane72} looked at the question of determining the collection of probability measures $\mu$ on $A^{\mathbb{Z}}$ for which \[ \mu\left(X_0 =a_0 \mid X_{-1} = a_{-1}, \ldots\right) = f\left(a_0, a_{-1}, a_{-2}, \ldots\right), \] and in particular, under which conditions there is exactly one such measure. The use of the letter $g$ in the terms `$g$-function' and `$g$-measure' seems to originate in a paper of Keane~\cite{Keane72}.
There, the letter $g$ was used for the functions on $\prod_{i<0} A$, and a `$g$-measure' was one that was associated with precisely the function $g$. Later, the term `$g$-function' came into use, to describe this class of functions. As previously noted, if $f$ is a continuous $g$-function and $\mu$ is a $g$-measure for $f$, then by definition, $\mu$ is a uniform martingale. Markov processes give rise to a particular class of continuous $g$-functions. Given a transition matrix $P = \left(p_{a, b}\right)_{a, b \in A}$, for a finite state space $A$, the function $f$ given, for $a, b \in A$ and $\mathbf{x} \in A^{\mathbb{Z}^-}$, by $f\left(b, a, \mathbf{x}\right) = p_{a, b}$ is a continuous $g$-function for $A^{\mathbb{Z}^-}$. Note that, even in the case where $A = \left\{0,1\right\}$, if a $g$-function $f$ is not continuous, there need not be any $g$-measures for $f$, as the following example shows. \begin{example} Let $A=\left\{0,1\right\}$ and define $f\left(a, \mathbf{x}\right)$ as follows: If $\mathbf{x}$ contains an infinite string of consecutive $0$s, set $f\left(1, \mathbf{x}\right) = 1$ and $f\left(0, \mathbf{x}\right) = 0$. If $\mathbf{x}$ does not contain an infinite string of $0$s, set $f\left(1, \mathbf{x}\right)=0$ and $f\left(0, \mathbf{x}\right)=1$. The function $f$ is not continuous and one can check that there is no stationary $g$-measure for this $g$-function.\qed \end{example} Using fixed-point theorems, Keane~\cite{Keane72} showed that every continuous $g$-function on a finite alphabet has at least one $g$-measure. Such probability measures can also be constructed directly by taking limits of `nearly-stationary' measures on finite words constructed from a $g$-function. Such a method for constructing stationary processes can be found, for example, in the book of Kalikow and McCutcheon~\cite[Section 2.11]{KMcC10}.
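As a concrete instance of the remark that a transition matrix yields a continuous $g$-function, the following sketch checks that such a $g$-function has vanishing $k$-th variations for every $k \geq 1$, since it reads only the most recent symbol. The matrix below is an arbitrary toy example, not taken from the text.

```python
import itertools

# Toy 2-state transition matrix (rows sum to 1); the g-function
# f(b, a, ...) = p_{a,b} depends only on the most recent symbol a.
P = {0: {0: 0.9, 1: 0.1},
     1: {0: 0.4, 1: 0.6}}

def f(b, past):
    return P[past[0]][b]               # past[0] is omega_{-1}

def var_k(k, depth=5):
    """k-th variation of f, computed over all pairs of pasts of a fixed
    finite depth (sufficient here, since f only reads one coordinate)."""
    A = list(P)
    worst = 0.0
    for x in itertools.product(A, repeat=depth):
        for y in itertools.product(A, repeat=depth):
            if x[:k] == y[:k]:         # pasts agreeing in coordinates -1, ..., -k
                worst = max(worst, max(abs(f(b, x) - f(b, y)) for b in A))
    return worst

assert var_k(1) == 0.0 and var_k(2) == 0.0   # a 1-step Markov g-function
assert var_k(0) > 0.4                         # but the present does depend on omega_{-1}
```

Once $\operatorname{var}_1(f) = 0$, all later variations vanish as well, which is the extreme case of the uniform-martingale condition.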
\begin{prop}[Keane~\cite{Keane72}]\label{Prop:existence} Let $(A, d)$ be a compact metric space and let $f: A \times A^{\mathbb{Z}^-} \to [0,1]$ be a $g$-function that is continuous in the product topology. Then there is at least one stationary $g$-measure for $f$. \end{prop} If the alphabet $A$ is finite, then the set $A$ with the trivial metric is compact and $g$-measures for continuous $g$-functions correspond exactly to uniform martingales. In general, this type of result need not be possible in the case of infinite alphabets: the same technique need not yield a probability measure, nor even a non-zero measure. For example, one can define a Markov process on $\mathbb{Z}$ in terms of a $g$-function that has no stationary measure. In this paper, a number of examples are described in terms of their $g$-functions and these types of existence results are needed to verify that corresponding $g$-measures exist. As some of the properties of these $g$-measures are used in the proofs, in practice, most examples of stationary processes in this paper are constructed explicitly by defining measures on all cylinder sets. One of the methods used repeatedly in this paper to construct stationary processes is one that the third author learned from Ornstein in about 1978. This construction, which we shall describe shortly, was used by Alexander and Kalikow \cite{AK92} to study randomly generated stationary processes. This method recursively constructs stationary measures on finite words in the alphabet as follows. Given an alphabet $A$, for each $n \geq 1$, a stationary probability measure $\mu_n$ is defined on $A^n$ so that the sequence of measures $\{\mu_n\}_{n \geq 1}$ is consistent. The recursive construction begins by choosing a probability measure $\mu_1$ on $A$. For every $n \geq 1$, given a stationary probability measure, $\mu_n$, on $A^n$, a consistent stationary measure on $A^{n+1}$ is defined in the following manner.
For every word of length $n-1$, $a_1 a_2 \ldots a_{n-1} \in A^{n-1}$, the measure $\mu_n$ gives conditional measures on all words of the form $x_0 a_1 a_2 \ldots a_{n-1}$ and all words of the form $a_1 a_2 \ldots a_{n-1} x_n$. Coupling these two conditional measures in any way, and even differently for different words $a_1 a_2 \ldots a_{n-1}$ yields a measure $\mu_{n+1}$ on $A^{n+1}$ with the property that if $\mathbf{a} \in A^n$, then \begin{equation}\label{eq:measure-constr} \mu_{n+1}(\cdot \mathbf{a}) = \mu_{n+1}(\mathbf{a} \cdot) = \mu_n(\mathbf{a}). \end{equation} Thus, the measure $\mu_{n+1}$ is stationary since $\mu_n$ is stationary and these measures are consistent. Note that in the step $n =1$ of the recursion, the unique word of length $n-1$ is the empty word. This method is as general as possible since any stationary process can be constructed in this way. Furthermore, in many cases, properties of the resulting measure can be incorporated into the recursion and proved along the way. In the case that $A$ is either finite or countable, the construction of the measures on finite words is straightforward and can be given by defining the probability of each word individually. For uncountable alphabets, constructing the measures on finite words may require some care, but in either case, once the collection of stationary probability measures on all finite words is given, these extend to a stationary probability measure on $A^{\mathbb{Z}}$ with all probabilities of cylinder sets given by the appropriate measure on finite words. In much of the literature, the focus has been on uniqueness of $g$-measures, rather than simply existence. In his paper introducing the notion of $g$-functions, Keane~\cite{Keane72} showed that if $A$ is finite and a function $f$ is both continuous and satisfies certain differentiability conditions, then there is a unique measure for $f$ that is also strongly mixing. 
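The recursive coupling step described above can be sketched in code. In the sketch below, the two conditional measures are coupled independently (one admissible choice among many), and the starting measures come from a toy two-state Markov chain; both choices are illustrative assumptions, not part of the general method.

```python
from itertools import product

A = (0, 1)
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}   # toy transition matrix
pi = {0: 0.8, 1: 0.2}                             # its stationary distribution

# mu_0, mu_1, mu_2: consistent stationary measures on words of length 0, 1, 2.
mu = [{(): 1.0},
      {(a,): pi[a] for a in A},
      {(a, b): pi[a] * P[a][b] for a in A for b in A}]

def extend(mu_prev, mu_n):
    """One recursion step: given stationary mu_{n-1} and mu_n, couple, for each
    middle word, the conditional laws of the preceding and following letters.
    Independent coupling gives mu_{n+1}(x0 a xn) = mu_n(x0 a) mu_n(a xn) / mu_{n-1}(a)."""
    n = len(next(iter(mu_n)))
    out = {}
    for mid in product(A, repeat=n - 1):
        m = mu_prev[mid]
        if m == 0:
            continue
        for x0 in A:
            for xn in A:
                out[(x0,) + mid + (xn,)] = mu_n[(x0,) + mid] * mu_n[mid + (xn,)] / m
    return out

for n in (2, 3):
    mu.append(extend(mu[n - 1], mu[n]))

# Consistency and stationarity: both one-letter marginals of mu_{n+1}
# recover mu_n, i.e. mu_{n+1}(. a) = mu_{n+1}(a .) = mu_n(a).
for n in (1, 2, 3):
    for w in product(A, repeat=n):
        left = sum(mu[n + 1].get((x,) + w, 0.0) for x in A)
        right = sum(mu[n + 1].get(w + (x,), 0.0) for x in A)
        assert abs(left - mu[n][w]) < 1e-12 and abs(right - mu[n][w]) < 1e-12
```

Choosing a different coupling at each stage, and even a different coupling for different middle words, produces different stationary processes with the same finite-dimensional starting data, which is what makes the method fully general.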
Further conditions for uniqueness were given by Berbee~\cite{Berbee87} and Hulse~\cite{Hulse06} for $g$-functions on finite alphabets. Results for existence and uniqueness on countable alphabets were considered by Johansson, \"{O}berg, and Pollicott~\cite{JOP07}, Sarig~\cite{Sarig99}, and Mauldin and Urba\'{n}ski~\cite{MU01}. Examples of $g$-functions with more than one $g$-measure were given by Bramson and Kalikow~\cite{BM93}, by Hulse~\cite{Hulse06}, and by Berger, Hoffman and Sidoravicius~\cite{BHS08}. A random Markov process on a countable alphabet can be expressed in terms of $g$-functions. Let $\left(\left(X_n, L_n\right)_{n \in \mathbb{Z}}, \mathbb{P}\right)$ be a random Markov process on alphabet $A$. For each $n \geq 1$, and $a_0, a_{-1}, \ldots, a_{-n} \in A$, define \begin{multline*} f_n\left(a_0, a_{-1}, \ldots, a_{-n}\right) =\\ \mathbb{P}\left(X_0 = a_0 \mid X_{-1} = a_{-1}, \ldots, X_{-n} = a_{-n}\ \wedge L_0 = n\right ). \end{multline*} Such a function $f_n$ is naturally extended to a continuous $g$-function on $A^{\mathbb{Z}^-}$ and by the definition of a random Markov process (Definition \ref{def:randomMarkov}), \begin{align*} \mathbb{P}&\left(X_0=a_0 \mid X_{-1}=a_{-1}, X_{-2} = a_{-2}, \ldots\right) \\ &=\sum_{n=1}^{\infty} \mathbb{P}\left(L_0 = n\right)\cdot f_n\left(a_0, a_{-1}, \ldots, a_{-n}\right)\\ &= f\left(a_0, a_{-1}, a_{-2}, \ldots\right). \end{align*} In this way, a random Markov process is a random mixture of $n$-step Markov processes. To see that this $g$-function $f$ is continuous, for every $\varepsilon >0$, let $N$ be such that for $M \geq N$, $\sum_{n \geq M} \mathbb{P}\left(L_0 =n\right) < \varepsilon$. Then, \[ \left|f\left(a_0, a_{-1}, \ldots\right) - \sum_{n=1}^{M-1}\mathbb{P}\left(L_0=n\right)\cdot f_n\left(a_0, a_{-1}, \ldots, a_{-n}\right)\right|< \varepsilon. \] In particular, given a random Markov process, any other stationary process with the same $g$-function will also be a random Markov process.
To see that the notion of random Markov processes is genuinely different from $n$-step Markov processes, consider the following example. \begin{example} Define a $g$-function on the alphabet $\{0,1\} \times \mathbb{Z}^+$ as follows. For each $a_0, a_{-1}, a_{-2}, \ldots \in \{0,1\}$ and $\ell_0, \ell_{-1}, \ell_{-2}, \ldots \in \mathbb{Z}^+$, define \begin{equation}\label{eq:rm-notMarkov} f((a_0, \ell_0), (a_{-1}, \ell_{-1}), (a_{-2}, \ell_{-2}), \ldots) = \begin{cases} \frac{1}{4} \cdot \frac{1}{2^{\ell_0} } &\text{if } a_0 = a_{-\ell_0}\\ \frac{3}{4} \cdot \frac{1}{2^{\ell_0} } &\text{if } a_0 = 1 - a_{-\ell_0}. \end{cases} \end{equation} One can show that there is a complete random Markov process $\left(\left(X_i, L_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ that is a $g$-measure for the $g$-function in equation \eqref{eq:rm-notMarkov} so that for every $k \geq 1$, $\mathbb{P}(L_0 = k) = \frac{1}{2^k}$ and so that given $L_0 = k$ and $X_{-k}$, $X_0$ is equal to $X_{-k}$ with probability $1/4$ and different with probability $3/4$. It is left as an exercise to the interested reader to show that this can be done with $\mathbb{P}\left(X_0 = 0\right) = \mathbb{P}\left(X_0=1\right)=\frac{1}{2}$. Such a stationary process is a random Markov process, but not an $n$-step Markov chain for any $n$. \qed \end{example} Further consideration of the properties of random Markov processes was given by Kalikow, Katznelson and Weiss~\cite{KKW92}, who showed that zero entropy systems can be extended to a random Markov process. Later, Rahe~\cite{Rahe93, Rahe94} gave extensions of some results for $n$-step Markov chains to random Markov chains for which the expected look-back distance is finite.
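The example above can be spot-checked by simulation. The sketch below estimates the marginal of $X_0$ for the process with $\mathbb{P}(L_0 = k) = 2^{-k}$, copy probability $1/4$, and flip probability $3/4$; the i.i.d. seeding of the initial past, the burn-in, and the tolerance are conveniences of the sketch, not part of the construction.

```python
import random

rng = random.Random(1)

def sample_L():
    # P(L_0 = k) = 2^{-k}: count fair-coin tosses until the first head.
    k = 1
    while rng.random() < 0.5:
        k += 1
    return k

T, burn = 200_000, 1_000
history = [rng.randint(0, 1) for _ in range(64)]   # seed bits standing in for the past
xs = []
for t in range(T + burn):
    L = sample_L()
    prev = history[-L] if L <= len(history) else rng.randint(0, 1)
    x = prev if rng.random() < 0.25 else 1 - prev  # copy w.p. 1/4, flip w.p. 3/4
    history.append(x)
    if t >= burn:
        xs.append(x)

freq = sum(xs) / len(xs)
assert abs(freq - 0.5) < 0.02    # the marginal should be uniform, by symmetry
```

The dynamics are invariant under flipping every bit, so the uniform marginal is preserved, consistent with the exercise that $\mathbb{P}(X_0 = 0) = \mathbb{P}(X_0 = 1) = \frac{1}{2}$ is achievable.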
\section{Countable alphabets} \label{sec:ctble} \subsection{Proof of Theorem \ref{thm:countable}} In this section, the proof of Theorem \ref{thm:countable} is given, showing that if a uniform martingale on a countable alphabet has a finite dominating measure, then it is also a deterministic random Markov process. Recall that for the $n$-th variation (see equation \eqref{def:nvar}) of a $g$-function associated with a particular stationary process, we shall write $\operatorname{var}_n \mathbb{P}$ or $\operatorname{var}(n)$ interchangeably. We now recall Theorem \ref{thm:countable} and give its proof in full. \begin{thm1} Let $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ be a uniform martingale on a countable alphabet $A$ with a finite dominating measure $\mu$. Then \begin{enumerate}[(a)] \item \label{part:count-determ} $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a deterministic random Markov process, and further \item \label{part:finite-look-back} if the alphabet $A$ is finite, then the process is a deterministic random Markov process with finite expected look-back distance if{f} $\sum_{n \geq 1} \operatorname{var}_n \mathbb{P} < \infty$. \end{enumerate} \end{thm1} \begin{proof} Let $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ be a uniform martingale on a countable alphabet $A$. Without loss of generality, assume that $A = \mathbb{Z}^+$. First, to prove part \eqref{part:count-determ}, the random variables $\left(L_i\right)_{i \in \mathbb{Z}}$ are constructed recursively together with a stationary coupling that is a deterministic random Markov process.
Indeed, the table values, as in equation \eqref{eq:table-vals}, for the random Markov process are constructed in terms of a sequence of functions $(T_k)_{k \geq 0}$ together with a sequence of look-back distances $\{n_k\}_{k \geq 0}$ and a sequence of look-back probabilities $\{p_k\}_{k \geq 0}$, constructed recursively with the following properties: \begin{enumerate}[(i)] \item \label{rec:unique} For every $k \geq 0$, $T_k : A^{n_k+1} \to [0,1]$ has the property that for every $\mathbf{\omega} \in A^{n_k}$, there is $a(\mathbf{\omega}) \in A$ with $T_k(a(\mathbf{\omega}), \mathbf{\omega}) = p_k$ and if $b \neq a(\mathbf{\omega})$, then $T_k(b, \mathbf{\omega}) = 0$. \item \label{rec:leq} For every $(\omega_i)_{i \leq -1} \in A^{\mathbb{Z}^-}$ and $b \in A$, \begin{equation}\label{eq:tableValues} \sum_{j \leq k} T_j(b; (\omega_i)_{i=-n_j}^{-1}) \leq \mathbb{P}\left(X_0 = b \mid (X_i)_{i=-\infty}^{-1} = (\omega_i)_{i=-\infty}^{-1}\right). \end{equation} \item \label{rec:converge} For every $k \geq 1$, there exists $(\omega_{k, i})_{i = -n_k}^{-1} \in A^{n_k}$ so that for every $b \in A$, \begin{equation}\label{eq:table-conv} \mathbb{P}\left(X_0 = b \mid (X_i)_{i=-n_{k}}^{-1} = (\omega_{k, i})_{i = -n_k}^{-1}\right) - \sum_{j = 1}^{k-1} T_j(b, (\omega_{k,i})_{i=-n_j}^{-1}) \leq p_k + \operatorname{var}(n_k) + \frac{1}{k}. \end{equation} \end{enumerate} Note that since the process is a uniform martingale, the sequence of $n$-th variations $(\operatorname{var}(n))_{n \geq 1}$ is non-increasing and satisfies $\lim_{n \to \infty} \operatorname{var}(n) = 0$. It is possible that for some $N \geq 1$, $\operatorname{var}(N) = 0$ in which case the process is an $N$-step Markov process. The existence of such an $N$ has no effect on the following construction. To begin the recursive construction for $k = 0$, set $p_0 = n_0 = 0$ and let $T_0 \equiv 0$ (the $0$-function).
Note that the conditions in parts \eqref{rec:unique} and \eqref{rec:leq} are trivially satisfied and the condition in part \eqref{rec:converge} does not apply to $k = 0$. For the recursion step, fix $k \geq 1$ and suppose that $T_0, T_1, \ldots, T_{k-1}$, $p_0, p_1, \ldots, p_{k-1}$ and $n_0, n_1, \ldots, n_{k-1}$ have been defined and satisfy conditions \eqref{rec:unique}, \eqref{rec:leq}, and \eqref{rec:converge} above. For ease of notation, for any $n \geq n_{k-1}$ and $(\omega_i)_{i = -n}^{-1} \in A^n$ or $(\omega_i)_{i=-\infty}^{-1} \in A^{\mathbb{Z}^-}$ and $b \in A$, define \begin{equation}\label{eq:left-over-measure} P_{k-1}(b; (\omega_i)_{i \leq -1}) = \mathbb{P}\left(X_0 = b \mid (X_i)_{i\leq -1} = (\omega_i)_{i\leq -1}\right) - \sum_{j \leq k-1} T_j(b; (\omega_i)_{i=-n_j}^{-1}). \end{equation} By the condition \eqref{rec:leq}, for any $b$ and $(\omega_i)_{i \leq -1}$, $P_{k-1}(b; (\omega_i)_{i \leq -1}) \geq 0$. For any $(\omega_i)_{i \leq -1} \in A^{\mathbb{Z}^-}$ or $(\omega_i)_{i=-n}^{-1} \in A^n$, the function defined by $P_{k-1}(\cdot\,; (\omega_i)_{i \leq -1})$ is a positive measure on $A$ that is dominated by the measure $\mu$. Also, for any $n \geq n_{k-1}$ and $\mathbf{\omega}, \mathbf{\omega}' \in A^{\mathbb{Z}^-}$ with $\omega_{-1} = \omega_{-1}', \ldots, \omega_{-n} = \omega_{-n}'$, \begin{equation}\label{eq:left-over-variation} |P_{k-1}(b; \mathbf{\omega}) - P_{k-1}(b; \mathbf{\omega}')| \leq \operatorname{var}(n). \end{equation} Set $\gamma = 1 - \sum_{j = 0}^{k-1}p_j$ and let $M > 0$ be sufficiently large so that the dominating measure satisfies $\sum_{a > M} \mu\left(a\right) < \frac{\gamma}{10}$.
Then, for any $n \geq n_{k-1}$ and any $\left(\omega_i\right)_{i=-n}^{-1} \in A^{n}$, \[\sum_{a =1}^{M}P_{k-1}(a; \left(\omega_i\right)_{-n}^{-1}) \geq \frac{9\gamma}{10}.\] In particular, for each $n \geq n_{k-1}$ and $\left(\omega_i\right)_{-n}^{-1}$, there is an $a \in A$, depending on $(\omega_i)_{i=-n}^{-1}$, so that \[ P_{k-1}\left(a; \left(\omega_i\right)_{-n}^{-1}\right) \geq \frac{9\gamma}{10M}. \] Choose $n_k > n_{k-1}$ to be large enough so that $2\operatorname{var}(n_k) \leq \frac{9\gamma}{10M}$. Hence, for every $n \geq n_{k-1}$ and every $\left(\omega_i\right)_{-n}^{-1}$, there is an $a \in A$ with \begin{equation}\label{eq:likely-element} P_{k-1}\left(a; \left(\omega_i\right)_{-n}^{-1}\right) \geq 2 \operatorname{var}(n_k). \end{equation} For every $\mathbf{\omega} = \left(\omega_i\right)_{-n_k}^{-1} \in A^{n_k}$, define \[ s_k\left(\mathbf{\omega}\right) = \max_{a \in A} \left\{P_{k-1}(a; \left(\omega_i\right)_{-n_k}^{-1})\right\} \] and let $a\left(\mathbf{\omega}\right) \in A$ achieve this maximum (the maximum is attained since $P_{k-1}(\cdot\,; \mathbf{\omega})$ is a finite measure on the countable set $A$). Set \begin{equation}\label{eq:rk} r_k = \inf \left\{ s_k\left(\mathbf{\omega}\right) \mid \mathbf{\omega} \in A^{n_k}\right\} \end{equation} and define $p_k = r_k - \operatorname{var}\left(n_k\right)$. By the choice of $n_k$ and inequality \eqref{eq:likely-element}, $p_k \geq 0$. For any $a \in A$ and $\mathbf{\omega} \in A^{n_k}$, define \[ T_k\left(a; \omega_{-1}, \ldots, \omega_{-n_k}\right) = \begin{cases} p_k &\text{if $a = a\left(\mathbf{\omega}\right)$}\\ 0 &\text{otherwise}. \end{cases} \] For any $\left(\omega_i\right)_{-\infty}^{-1} \in A^{\mathbb{Z}^-}$ and $a \in A$, by the choice of $n_k$, inequality \eqref{eq:likely-element}, and the definition of $p_k$, \[ T_k\left(a; \omega_{-1}, \ldots, \omega_{-n_k}\right) \leq P_{k-1}\left(a; \left(\omega_i\right)_{i=-\infty}^{-1}\right). \] Thus, by the definition of $P_{k-1}$ (equation \eqref{eq:left-over-measure}), condition \eqref{rec:leq} is satisfied for this value of $k$.
Further, by the definition of $r_k$ in equation \eqref{eq:rk} and the definition of $p_k = r_k - \operatorname{var}(n_k)$, there exists some $\mathbf{\omega}_k$ so that for every $b \in A$, \[ P_{k-1}(b; \mathbf{\omega}_k) \leq s_k(\mathbf{\omega}_k) \leq r_k + \frac{1}{k} = p_k + \operatorname{var}(n_k) + \frac{1}{k}. \] Thus, condition \eqref{rec:converge} is satisfied for this value of $k$ also. This completes the recursive construction. Given $(T_i)_{i \geq 1}$, a representation of the random Markov process is defined by setting, for every $k \geq 1$, $\hat{\mathbb{P}}\left(L_0 = n_k\right) = p_k$ and for every $a \in A$ and $\mathbf{\omega} \in A^{n_k}$, \[ \hat{\mathbb{P}}\left(X_0 = a \mid \left(X_i\right)_{i=-n_k}^{-1} = \left(\omega_i\right)_{i=-n_k}^{-1}\wedge L_0 = n_k\right) = \frac{1}{p_k}T_{k}\left(a; \omega_{-1}, \ldots, \omega_{-n_{k}}\right). \] Note that the functions given by $\frac{1}{p_k} T_k$ are the table values for the random Markov process being defined. It remains to show that $\sum_{k \geq 1} p_k = 1$ and that \begin{multline}\label{eq:same-measure} \sum_k \hat{\mathbb{P}}\left(X_0 = a \wedge L_0 = n_k \mid \left(X_i\right)_{-n_k}^{-1} = \left(\omega_i\right)_{-n_k}^{-1}\right)\\ = \mathbb{P}\left(X_0 = a \mid \left(X_i\right)_{-\infty}^{-1} = \left(\omega_i\right)_{-\infty}^{-1}\right). \end{multline} Note that, by construction, $\sum_{k \geq 1} p_k \leq 1$, which implies that $\lim_{k \to \infty} p_k = 0$. For every $k \geq 1$, let $\mathbf{\omega}_k \in A^{n_k}$ be defined so that $s_k\left(\mathbf{\omega}_k\right) \leq r_k + \frac{1}{k} = p_k+\operatorname{var}\left(n_k\right) + \frac{1}{k}$. Then, for every $a \in A$, \[ P_k\left(a; \mathbf{\omega}_k\right) \leq s_k\left(\mathbf{\omega}_k\right) \leq p_k + \operatorname{var}\left(n_k\right) + \frac{1}{k}.
\] Since $\lim_{k \to \infty} \left(p_k + \operatorname{var}\left(n_k\right) + \frac{1}{k}\right) = 0$, the sequence $\left(P_k\left(\cdot; \mathbf{\omega}_k\right)\right)_{k \geq 1}$ consists of positive measures on the countable set $A$, each dominated by $\mu$, converging to $0$ pointwise. By the Lebesgue dominated convergence theorem, \[ \lim_{k \to \infty} 1 - \sum_{i=1}^k p_i = \lim_{k \to \infty}\sum_{a \in A} P_k\left(a; \mathbf{\omega}_k\right) = 0. \] Thus, $\sum_{k \geq 1} p_k = 1$ and hence $\sum_{k \geq 1} \hat{\mathbb{P}}\left(X_0 = \cdot \wedge L_0 = n_k \mid \left(X_i\right)_{-n_k}^{-1} = \left(\omega_i\right)_{-n_k}^{-1}\right)$ is a probability measure on $A$ and so equation \eqref{eq:same-measure} holds. Consider now the expected value of the look-back distance for part \eqref{part:finite-look-back} of the theorem. First, let $\left(\left(X_i, \hat{L}_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ be a random Markov process with $\mathbb{E}\left(\hat{L}_0\right) < \infty$. For every $n$, $\operatorname{var}\left(n\right) \leq 2\cdot\mathbb{P}\left(\hat{L}_0 > n\right)$ and so \[ \sum_{n \geq 1} \operatorname{var}\left(n\right) \leq 2 \sum_{n \geq 1} \mathbb{P}\left(\hat{L}_0 > n\right) \leq 2\cdot\mathbb{E}\left(\hat{L}_0\right) < \infty. \] In the case that $A$ is finite, let $M \in \mathbb{Z}^+$ be such that $|A| = M < \infty$ and suppose that the process satisfies $\sum_{n \geq 1} \operatorname{var}(n) < \infty$. The recursive construction given can be repeated as above, with the following changes. For every $k \geq 1$, choose $n_k> n_{k-1}$ to be the smallest integer $n$ with \begin{equation}\label{eq:careful_nk} \min \left\{ \inf_{\mathbf{\omega} \in A^n} \left\{\max_{a \in A} P_{k-1}(a; \mathbf{\omega})\right\}, \left(1- \frac{1}{M^2}\right)r_{k-1} \right\} \geq 2\operatorname{var}(n).
\end{equation} Such an $n$ is well-defined since the left-hand side of inequality \eqref{eq:careful_nk} is increasing in $n$ and bounded away from $0$ while the right-hand side is decreasing to $0$. Define \[ r_k = \min \left\{ \inf_{\mathbf{\omega} \in A^{n_k}} \left\{\max_{a \in A} P_{k-1}(a; \mathbf{\omega})\right\}, \left(1- \frac{1}{M^2}\right)r_{k-1} \right\} \] and set $p_k = r_k - \operatorname{var}(n_k)$. By inequality \eqref{eq:careful_nk}, $p_k > 0$ and also $r_k \leq r_{k-1} \left(1 - \frac{1}{M^2}\right)$. With this choice of $(p_k)_{k \geq 1}$ and $(n_k)_{k \geq 1}$, consider $\hat{\mathbb{E}}(L_0) = \sum_k p_k n_k$. Note that for each $n \in [n_{k-1}, n_k-1]$, we have $\operatorname{var}(n) \geq r_k/2$. Thus, setting $n_0 =r_0= 1$, \begin{align*} \sum_{n \geq 1} \operatorname{var}(n) &\geq \sum_{k \geq 1} (n_k - n_{k-1}) \frac{r_k}{2}\\ & = \frac{1}{2}\sum_{k \geq 1} n_k r_k - \frac{1}{2}\sum_{k \geq 1} n_{k-1} r_k\\ &\geq \frac{1}{2} \sum_{k \geq 1} n_k r_k - \frac{1}{2}\left(1 - \frac{1}{M^2}\right)\sum_{k \geq 1} n_{k-1} r_{k-1}\\ & = \frac{1}{2M^2} \sum_{k \geq 1} n_k r_k - \frac{1}{2} \left(1 - \frac{1}{M^2}\right)\\ &\geq \frac{1}{2M^2} \sum_{k \geq 1} n_k p_k - \frac{1}{2}. \end{align*} Thus, using the fact that $\sum \operatorname{var}(n) < \infty$, \[ \hat{\mathbb{E}}(L_0) = \sum_{k \geq 1} n_k p_k \leq 2M^2\left(1 + \sum_{n \geq 1} \operatorname{var}(n) \right) < \infty. \] This completes the proof of the theorem. \end{proof} \subsection{Another construction for finite alphabets}\label{subsec:determ_construction} This section contains a different construction showing that every random Markov process on a finite alphabet is a deterministic random Markov process. The result is not as strong as Theorem \ref{thm:countable}; it is independent of later results and can be skipped, but it is presented as an alternative approach.
In the construction given in the proof of Proposition \ref{prop:determ} below, the digits of the binary expansion of probabilities of the `present state' given the `past' are used to define a new deterministic random Markov process from any random Markov process on a finite alphabet. A key tool in this proof is the use of an injective function $F: (\mathbb{Z}^+)^{n+1} \to \mathbb{Z}^+$ with the property that for any $i_0, i_1, \ldots, i_n$, $F(i_0, i_1, \ldots, i_n) \geq i_0$. Roughly, conditioned on a particular past, the event $\{L_0 =i_0\}$ is decomposed into infinitely many parts $\{\hat{L}_0 = F(i_0, i_1, \ldots, i_n) \mid i_1, \ldots, i_n \in \mathbb{Z}^+\}$. After the proof of Proposition \ref{prop:determ}, a method of constructing such functions is described along with some examples. \begin{prop}\label{prop:determ} Let $\left(\left(X_i, L_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ be a random Markov process on a finite alphabet $A$, let $n \in \mathbb{Z}^+$ be such that $|A| \leq 2^n$, and let $F: (\mathbb{Z}^+)^{n+1} \to \mathbb{Z}^+$ be an injective function with the property that for every $i_0, i_1, \ldots, i_n \in \mathbb{Z}^+$, $F(i_0, i_1, \ldots, i_n) \geq i_0$. There is an independent process $\left(\hat{L}_i\right)_{i\in{\mathbb Z}}$ with \[ \hat{\mathbb{P}}\left(\hat{L}_0 = F\left(i_0, i_1, \ldots, i_n\right)\right) = \mathbb{P}\left(L_0 = i_0\right) \frac{1}{2^{i_1}}\cdots \frac{1}{2^{i_n}} \] and a stationary coupling so that $\left(\left(X_i, \hat{L}_i\right)_{i \in \mathbb{Z}}, \hat{\mathbb{P}}\right)$ is a deterministic complete random Markov process. \end{prop} \begin{proof} Assume, without loss of generality, that $A = \{0,1\}^n$ and write $X_i = \left(Y_i^1, Y_i^2, \ldots, Y_i^n\right)$. 
Given an integer $i$, a fixed sequence of past states $\omega_{-1}, \ldots, \omega_{-i}$, and $d_1, \ldots, d_n \in \left\{0,1\right\}$, let $\left\{\varepsilon_k^j \mid k \geq 1,\ j \in \left[1,n\right]\right\}$ be such that for every $k, j$, $\varepsilon_k^j \in \left\{0,1\right\}$ and \begin{align*} \mathbb{P}&(Y_0^1 = d_1 \mid (X_j)_{-i}^{-1} = (\omega_j)_{-i}^{-1} \wedge L_0 = i) = \sum_{k \geq 1} \frac{\varepsilon_k^1}{2^k},\\ \mathbb{P}&(Y_0^2 = d_2 \mid (X_j)_{-i}^{-1} = (\omega_j)_{-i}^{-1} \wedge Y_0^1 = d_1 \wedge L_0 = i) = \sum_{k \geq 1} \frac{\varepsilon_k^2}{2^k},\\ \vdots\\ \mathbb{P}&(Y_0^n = d_n \mid (X_j)_{-i}^{-1} = (\omega_j)_{-i}^{-1} \wedge Y_0^1 = d_1, Y_0^2 = d_2, \ldots,\\\ & \qquad Y_0^{n-1} = d_{n-1} \wedge L_0 = i) \\ &= \sum_{k \geq 1} \frac{\varepsilon_k^n}{2^k}. \end{align*} Thus, \begin{multline} \mathbb{P}(Y_0^1 = d_1, \ldots, Y_0^n = d_n \mid (X_j)_{-i}^{-1} = (\omega_j)_{-i}^{-1} \wedge L_0 = i)\\ = \prod_{j=1}^n \left(\sum_{k \geq 1} \frac{\varepsilon_k^j}{2^k} \right) = \sum_{i_1, i_2, \ldots, i_n} \frac{\prod_{j=1}^n \varepsilon_{i_j}^j}{2^{i_1+i_2 + \cdots+i_n}}. \end{multline} For any $i_1, i_2, \ldots, i_n \geq 1$, define \[ \hat{\mathbb{P}}\left(\hat{L}_0 = F\left(i, i_1, i_2, \ldots, i_n\right)\right) = \mathbb{P}\left(L_0 = i\right) \frac{1}{2^{i_1}}\cdot \frac{1}{2^{i_2}}\cdots \frac{1}{2^{i_n}} \] and set \begin{multline} \hat{\mathbb{P}}\left((Y_0^\ell)_{\ell=1}^n = (d_\ell)_{\ell=1}^n \mid \left(X_j\right)_{j=-i}^{-1} = \left(\omega_j\right)_{j=-i}^{-1} \wedge \hat{L}_0 = F\left(i, i_1, i_2, \ldots, i_n\right)\right)\\ = \prod_{k=1}^n \varepsilon_{i_k}^k \in \left\{0,1\right\}. \end{multline} Since the function $F$ is injective, $F\left(i, i_1, i_2, \ldots, i_n\right) \geq i$, and \[ \sum_{i_1, \ldots, i_n} \hat{\mathbb{P}}\left(\hat{L}_0 = F\left(i, i_1, \ldots, i_n\right)\right) = \mathbb{P}\left(L_0 = i\right), \] this defines a deterministic complete random Markov process.
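To illustrate the digit-splitting step, the following sketch (ours, not part of the formal proof) takes a single conditional probability $p$ for a two-letter alphabet, extracts its binary digits $\varepsilon_k$, and checks that the pieces of conditional probability $2^{-k}$ on which the outcome is the deterministic digit $\varepsilon_k$ recover $p$; the numerical value $p = 3/10$ is a hypothetical example.

```python
# Illustration (ours, not part of the formal proof): a conditional
# probability p is split along its binary digits eps_k; the piece of the
# event {L_0 = i} with conditional probability 2^{-k} is assigned the
# deterministic outcome eps_k.
from fractions import Fraction

def binary_digits(p, length):
    """Return the first `length` binary digits (eps_1, eps_2, ...) of p in (0, 1]."""
    digits = []
    for _ in range(length):
        p *= 2
        if p >= 1:
            digits.append(1)
            p -= 1
        else:
            digits.append(0)
    return digits

# Hypothetical value P(Y_0 = 1 | past, L_0 = i) = 3/10.
p = Fraction(3, 10)
eps = binary_digits(p, 60)
# Total mass of the pieces on which the outcome is deterministically 1:
recovered = sum(Fraction(e, 2 ** k) for k, e in enumerate(eps, start=1))
assert Fraction(0) <= p - recovered < Fraction(1, 2 ** 59)
```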
\end{proof} The proof of Proposition \ref{prop:determ} depends on the existence of functions $F:(\mathbb{Z}^+)^{n+1} \to \mathbb{Z}^+$ that are both injective and have the property that for every choice of $i_0, i_1, \ldots, i_n \in \mathbb{Z}^+$, $F(i_0, i_1, \ldots, i_n) \geq i_0$. Here, we describe one possible method of constructing such functions. Let $\{B_i \mid i \in \mathbb{Z}^+\}$ be a collection of disjoint sets of integers with the property that for every $i$, $\min B_i \geq i$. For example, if $\{q_1 < q_2 < \cdots\}$ is the set of primes, then choosing the sets $B_i = \{q_i^k \mid k \geq 1\}$ will have the desired property. For any such collection $\{B_i\}_{i \in \mathbb{Z}^+}$, a sequence of functions $\{F_n\}_{n\geq 1}$ is defined so that for each $n \geq 1$, the function $F_n: (\mathbb{Z}^+)^{n+1} \to \mathbb{Z}^+$ is injective and has the property that for every $i_0, i_1, \ldots, i_n$, $F_n(i_0, i_1, \ldots, i_n) \geq i_0$. To begin the recursive construction, for $n =1$ and $i, j \in \mathbb{Z}^+$, define $F_1(i,j)$ to be the $j$-th smallest element of $B_i$. The function $F_1$ is injective since the sets $\{B_i\}_{i \in \mathbb{Z}^+}$ are all disjoint and by construction, $F_1(i, j) \geq \min B_i \geq i$. For $n \geq 2$, given $F_{n-1}$ with the desired properties, define $F_n : (\mathbb{Z}^+)^{n+1} \to \mathbb{Z}^+$ as follows. For $i_0, i_1, \ldots, i_n \in \mathbb{Z}^+$, let $F_n(i_0, i_1, \ldots, i_n)$ be the $i_n$-th smallest element of $B_{F_{n-1}(i_0, i_1, \ldots, i_{n-1})}$. The function $F_n$ is injective since $F_{n-1}$ is injective and the sets $\{B_i\}_{i \geq 1}$ are disjoint. Further, \[ F_n(i_0, i_1, \ldots, i_n) \geq \min B_{F_{n-1}(i_0, i_1, \ldots, i_{n-1})} \geq F_{n-1}(i_0, i_1, \ldots, i_{n-1}) \geq i_0. \] This completes the recursive construction.
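The recursive construction is easy to check computationally. The following sketch (ours; the helper names are not from the text) implements the prime-power choice $B_i = \{q_i^k \mid k \geq 1\}$ and verifies, on a small range, that the resulting functions are injective and satisfy $F_n(i_0, \ldots, i_n) \geq i_0$:

```python
# Sketch (ours): the prime-power construction of the injective functions F_n.
# Here B_i = {q_i^k : k >= 1}, so the j-th smallest element of B_i is q_i^j.

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10000)  # enough primes for the small indices below

def B_element(i, j):
    """The j-th smallest element of B_i = {q_i^k : k >= 1}, namely q_i^j."""
    return PRIMES[i - 1] ** j

def F(args):
    """F_n(i_0, ..., i_n): recursively, the i_n-th smallest element of
    B indexed by F_{n-1}(i_0, ..., i_{n-1}); base case F_1(i, j)."""
    if len(args) == 2:
        return B_element(args[0], args[1])
    return B_element(F(args[:-1]), args[-1])

assert F((2, 3)) == 27       # q_2^3 = 3^3
assert F((2, 3, 1)) == 103   # the smallest element of B_27 is q_27 = 103
```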
Note that using a function $F$ defined in terms of powers of primes, as above, will lead to a deterministic random Markov process from the proof of Proposition \ref{prop:determ} with $\mathbb{E}(\hat{L}_0) = \infty$ even if $\mathbb{E}(L_0) < \infty$. However, if $((X_i, L_i)_i, \mathbb{P})$ is a random Markov process on a finite alphabet with $\mathbb{E}(L_0) < \infty$, then by Theorem \ref{thm:countable}, it is also a deterministic random Markov process with a finite expected look-back distance. It turns out that a suitably chosen function $F$ can be used with Proposition \ref{prop:determ} to give an alternate proof of this fact. After a version of the proof of Proposition \ref{prop:determ} for two-letter alphabets, together with the construction of functions above, was described to Paul Balister, he gave the following construction of sets of integers, showing that a complete random Markov process on a two-letter alphabet with finite expected look-back distance is also a deterministic random Markov process with the same property. When the proof was generalized to arbitrary finite alphabets, it was realized that these sets, together with the method of constructing injective functions described above, could be used to prove the corresponding result for any finite alphabet. For each $i \geq 1$, set $B_i^0 = \left\{4i-1\right\}$ and for each $n \geq 1$, recursively define \[ B_i^n = \left\{4t+1 \mid t \in B_i^{n-1}\right\} \cup \left\{4t+2 \mid t \in B_i^{n-1}\right\}. \] Set $B_i = \cup_{n \geq 0} B_i^n$. For each $i\geq 1$, $\min B_i = 4i-1 > i$. Arguing by congruence modulo $4$, for each $i \neq j$, $B_i \cap B_j = \emptyset$. Further, it can be shown, by induction, that for each $n \geq 0$, $B_i^n \subseteq [4^{n+1}\left(i-1\right), 4^{n+1}i]$ and also $|B_i^n| = 2^n$.
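The stated properties of these sets are easy to confirm for small parameters. The sketch below (ours, not from the text) builds the levels $B_i^n$ and checks the minimum, the sizes $|B_i^n| = 2^n$, the containment $B_i^n \subseteq [4^{n+1}(i-1), 4^{n+1}i]$, disjointness on finite truncations, and a truncated form of the sum bound $\sum_{j \geq 1} F_1(i,j)/2^j \leq 35i$ used below:

```python
# Sketch (ours): verify properties of the sets B_i built from
# B_i^0 = {4i-1} via B_i^n = {4t+1 : t in B_i^{n-1}} u {4t+2 : t in B_i^{n-1}}.
from fractions import Fraction

def level_sets(i, depth):
    """Return the list [B_i^0, B_i^1, ..., B_i^depth]."""
    levels = [{4 * i - 1}]
    for _ in range(depth):
        prev = levels[-1]
        levels.append({4 * t + 1 for t in prev} | {4 * t + 2 for t in prev})
    return levels

def B(i, depth):
    """The union of the first depth+1 levels of B_i, sorted increasingly."""
    return sorted(set().union(*level_sets(i, depth)))

for i in range(1, 5):
    levels = level_sets(i, 6)
    assert min(levels[0]) == 4 * i - 1 > i           # min B_i = 4i - 1 > i
    for n, lvl in enumerate(levels):
        assert len(lvl) == 2 ** n                    # |B_i^n| = 2^n
        assert all(4 ** (n + 1) * (i - 1) <= t <= 4 ** (n + 1) * i for t in lvl)

# Disjointness of the B_i, checked on finite truncations.
for i in range(1, 5):
    for i2 in range(i + 1, 5):
        assert not set(B(i, 5)) & set(B(i2, 5))

# Truncated check of sum_{j >= 1} F_1(i, j)/2^j <= 35 i, where F_1(i, j)
# is the j-th smallest element of B_i (truncation only underestimates).
for i in range(1, 4):
    s = sum(Fraction(ell, 2 ** j) for j, ell in enumerate(B(i, 6), start=1))
    assert s <= 35 * i
```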
Thus, the function $F_1: (\mathbb{Z}^+)^2 \to \mathbb{Z}^+$ defined, as above, in terms of these sets $\{B_i\}_{i \geq 1}$ has the property that \[ \sum_{j \geq 1} \frac{F_1\left(i,j\right)}{2^j} \leq \sum_{n \geq 0} \sum_{\ell \in B_i^n} \frac{\ell}{2^{2^n}} \leq \sum_{n \geq 0} \frac{2^n 4^{n+1}i}{2^{2^n}} \leq 35 i. \] Using the recursive definition of the functions $F_n$, one can show that for every $n \geq 1$ and $i_0 \in \mathbb{Z}^+$, \[ \sum_{i_1, i_2, \ldots, i_n \in \mathbb{Z}^+} \frac{F_n(i_0, i_1, \ldots, i_n)}{2^{i_1+i_2+\cdots+i_n}} \leq 35^n i_0. \] Therefore, this construction generalizes to arbitrary finite alphabets: if $|A| \leq 2^n$, then the sets $\left\{B_i \mid i \geq 1\right\}$ yield a function $F = F_n$ that can be used in Proposition \ref{prop:determ} to construct $\hat{L}$ with the property that \[ \mathbb{E}\left(\hat{L}_0\right) \leq 35^n \cdot \mathbb{E}\left(L_0\right). \] \subsection{Examples} In this section, some examples are given to show the limits of possible extensions of Theorem \ref{thm:countable}. One can verify that all of the examples given in this section on finite and countable alphabets have finite entropy. This is of interest because it is often the case that results that hold for processes on finite alphabets also hold for processes with finite entropy on countable alphabets. The following example shows that without the assumption of a finite dominating measure, not all random Markov processes are deterministic random Markov processes. \begin{example}\label{ex:not-determ} Consider the Markov process $\left(X_i, Y_i\right)_{i \in \mathbb{Z}}$ defined on the countable state space $\mathbb{N} \times \mathbb{Z}^+$, where the first coordinate is itself a Markov chain and, for $n \geq 1$, given $X_0 = n$, the second coordinate is chosen independently and uniformly from the integers in $[1,n]$.
Precisely, the $g$-function is defined by \begin{equation}\label{eq:markov-not-determ1} \mathbb{P}\left(X_0 = a, Y_0 = 1 \mid X_{-1} = b\right)= \begin{cases} 1, &\text{if $a=1$, $b=0$}\\ \frac{2}{3} &\text{if $a = 1$, $b=2$}\\ \frac{2}{3} &\text{if $a = 0$, $b=1$} \end{cases} \end{equation} and for all $n \geq 2$ and $k \in [1,n]$, \begin{align} \mathbb{P}\left(X_0 = n, Y_0 = k \mid X_{-1} = n-1\right) &=\frac{1}{3n}, \notag\\ \mathbb{P}\left(X_0 = n, Y_0 = k \mid X_{-1} = n+1\right) &=\frac{2}{3n}. \label{eq:markov-not-determ2} \end{align} The stationary distribution for this process, denoted $\{\pi(n,k) \mid n \geq 0,\ k \in [1, \max\{1,n\}]\}$, is given by \begin{equation}\label{eq:not-determ-stat-dist} \pi(n,k) = \begin{cases} \frac{1}{4} &\text{if } (n,k) = (0,1)\\ \frac{3}{n 2^{n+2}} &\text{if } n \geq 1 \text{ and } k \in [1,n].\\ \end{cases} \end{equation} One can construct a stationary process with a $g$-function given by \eqref{eq:markov-not-determ1} and \eqref{eq:markov-not-determ2} directly by defining measures on cylinder sets. Suppose that $\left(X_i, Y_i\right)_{i \in \mathbb{Z}}$ were a deterministic random Markov process and let $\left(L_i\right)_{i \in \mathbb{Z}}$ be a process of look-back distances. Let $k$ be a positive integer so that $\mathbb{P}\left(L_0 = k\right) = p > 0$ and let $n \geq \lceil 1/p \rceil +1$. Let $\left(a_i, b_i\right)_{i=-k}^{-1}$ be such that $a_{-1} = n$ and $\mathbb{P}\left(\left(X_i, Y_i\right)_{-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1}\right)>0$. 
Then, given that the process is a deterministic random Markov process, there exist $a_0, b_0$ such that \[ \mathbb{P}\left(X_0 = a_0, Y_0 = b_0 \mid \left(X_i, Y_i\right)_{-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1} \wedge L_0 = k\right) = 1 \] and so \begin{align*} \mathbb{P}&\left(X_0 = a_0, Y_0 = b_0 \mid \left(X_i, Y_i\right)_{i=-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1}\right) \\ &\geq \mathbb{P}\left(X_0 = a_0, Y_0 = b_0 \wedge L_0 = k \mid \left(X_i, Y_i\right)_{i=-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1}\right)\\ &= p \cdot \mathbb{P}\left(X_0 = a_0, Y_0 = b_0 \mid \left(X_i, Y_i\right)_{i=-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1} \wedge L_0 = k\right) = p. \end{align*} However, since $a_{-1} = n \geq \lceil 1/p \rceil + 1$, \[ \mathbb{P}\left(X_0 = a_0, Y_0 = b_0 \mid \left(X_i, Y_i\right)_{i=-k}^{-1} = \left(a_i, b_i\right)_{i=-k}^{-1}\right) \leq \frac{2}{3\left(n-1\right)} < p, \] a contradiction. Thus, $\left(\left(X_i, Y_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a usual Markov process on a countable alphabet, and hence a random Markov process with a finite expected look-back distance, that is not a deterministic random Markov process. One can easily check that $\left(\left(X_i, Y_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ has no finite dominating measure. \qed \end{example} In Theorem \ref{thm:countable}, it was shown that a uniform martingale on a finite alphabet has a representation as a deterministic random Markov process with finite expected look-back distance if{f} $\sum \operatorname{var}_n (\mathbb{P}) < \infty$. As shown in the proof, even if the alphabet is countable, any deterministic random Markov process with finite expected look-back distance satisfies $\sum \operatorname{var}_n (\mathbb{P}) < \infty$. As the following example shows, the reverse implication is not, in general, true, even for a process with a finite dominating measure.
\begin{example}\label{ex:inf-look-back} Let $\left\{Z_i\right\}_{i \geq 1}$ be a collection of disjoint sets so that for every $i \geq 1$, $|Z_i| = 4^i$. Let $A = \cup_{i=1}^{\infty} Z_i$ be the countable alphabet and define an independent process $\left\{\left(X_i\right)_{i \in \mathbb{Z}}\right\}$ with the following stationary measure. For every $i \geq 1$ and $a \in Z_i$, set \[ \mathbb{P}\left(X_0 = a \right)= \frac{1}{2^i|Z_i|}. \] As this is an independent process, the stationary measure on $X_0$ is a finite dominating measure and for every $n \geq 1$, $\operatorname{var}_n (\mathbb{P}) = 0$ and hence $\sum_n \operatorname{var}_n (\mathbb{P}) = 0 < \infty$. By Theorem \ref{thm:countable}, this process has a representation as a deterministic random Markov process. Note that any independent process is a random Markov process with a finite expected look-back distance. This process cannot, however, be represented as a deterministic random Markov process with finite expected look-back distance. To see this, let $(L_i)_{i \in \mathbb{Z}}$ be any sequence of look-back distances with which $\left(X_i\right)_{i \in \mathbb{Z}}$ is a deterministic random Markov process. Fix $\ell \geq 1$ and let $k \geq \ell$ be the smallest integer such that $\mathbb{P}\left(L_0 > k\right) < \frac{1}{100 \cdot 2^\ell}$. Fix any $a_1, a_2, \ldots, a_k \in A$ and consider the probability that $X_0 \in Z_\ell$ conditioned on the event that for every $i \in [1,k]$, $X_{-i} = a_i$. 
Then, as the process is independent, \begin{align*} \frac{1}{2^\ell} = \mathbb{P}(X_0 \in Z_{\ell}) &= \mathbb{P}\left(X_0 \in Z_\ell \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right)\\ &= \mathbb{P}\left(X_0 \in Z_{\ell}, L_0 \leq k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right) \\ & \qquad + \mathbb{P}\left(X_0 \in Z_{\ell}, L_0 > k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right)\\ &\leq \mathbb{P}\left(X_0 \in Z_{\ell}, L_0 \leq k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right)\\ &\qquad + \frac{1}{2^{\ell} 100}. \end{align*} Thus, \[ \mathbb{P}\left(X_0 \in Z_\ell, L_0 \leq k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right) \geq \frac{0.99}{2^\ell}. \] As this is a deterministic random Markov process, for each $i \leq k$, there is at most one element $a \in Z_\ell$ for which the probability that $X_0 = a$ conditioned on the fixed past and the event $L_0 = i$ is positive. Define the set \begin{multline*} A_{\ell, k} = \big\{z \in Z_\ell \mid \exists\ i \leq k \text{ s.t. }\\ \mathbb{P}\left(X_0 = z \mid L_0 = i, X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-i} = a_i\right) = 1\big\}. \end{multline*} Since $a_1, a_2, \ldots, a_k$ are fixed and $\mathbb{P}$ is a probability measure, $|A_{\ell, k}| \leq k$ and \begin{align} \frac{0.99}{2^{\ell}} &\leq \mathbb{P}\left(X_0 \in Z_\ell, L_0 \leq k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right) \notag\\ &= \mathbb{P}\left(X_0 \in A_{\ell, k}, L_0 \leq k \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right) \notag\\ &\leq \mathbb{P}\left(X_0 \in A_{\ell, k} \mid X_{-1} = a_1, X_{-2} = a_2, \ldots, X_{-k} = a_k\right) \notag\\ &= \mathbb{P}\left(X_0 \in A_{\ell, k}\right) \notag\\ &=\frac{|A_{\ell,k}|}{2^\ell |Z_\ell|} \leq \frac{k}{2^{\ell}|Z_\ell|}.
\label{eq:z-prob} \end{align} Thus, by the inequalities in \eqref{eq:z-prob}, $k \geq 0.99|Z_\ell|$ and by the choice of $k$, \[ \mathbb{P}\left(L_0 > k-1\right) = \mathbb{P}\left(L_0 \geq k\right) \geq \frac{1}{100\cdot 2^{\ell}}. \] In particular, \[ \mathbb{E}\left(L_0\right) \geq k \mathbb{P}\left(L_0 \geq k\right) \geq \frac{0.99 \cdot |Z_\ell|}{100\cdot 2^\ell} = \frac{0.99 \cdot 4^\ell}{100\cdot 2^\ell} =\frac{99\cdot 2^\ell}{10^4}. \] As $\ell$ was arbitrary, $\mathbb{E}\left(L_0\right) = \infty$. \qed \end{example} One of the key questions that Theorem \ref{thm:countable} seeks to answer is which uniform martingales on countably infinite alphabets are random Markov processes. The following key example shows that, in this respect, Theorem \ref{thm:countable} is, in a strong sense, best possible: Example \ref{sec:rwbins} is a uniform martingale on a countable alphabet that does not have a finite dominating measure and is not a random Markov process. Thus, the properties of uniform martingales alone are not strong enough to guarantee that a uniform martingale is a random Markov process, without the additional assumption of a finite dominating measure. \begin{example}\label{sec:rwbins} The stochastic process presented here is similar to Example \ref{ex:not-determ} and is constructed as a joint distribution on pairs, where the first coordinate forms a stationary random walk on $\mathbb{N}$. The distribution of the second coordinate is given in terms of the past values of the first coordinate. Let $\left(B_n\right)_{n \in \mathbb{Z}}$ be the biased random walk on $\mathbb{N}$ given, for $i \geq 2$ and any $n \in \mathbb{Z}$, by \begin{align*} \mathbb{P}&\left(B_n = i+1 \mid B_{n-1}=i\right) =1/3,\\ \mathbb{P}&\left(B_n = i-1 \mid B_{n-1} = i\right) =2/3, \text{ and}\\ \mathbb{P}&\left(B_n = 2 \mid B_{n-1}=1\right) =1.
\end{align*} Let $\left\{\pi\left(i\right)\right\}_{i \in \mathbb{N}}$ be the stationary distribution for the process $\left(B_n\right)_{n \in \mathbb{Z}}$. Then, similarly to Example \ref{ex:not-determ}, \[ \pi\left(i\right) = \begin{cases} \frac{1}{4} &\text{ if } i=1,\\ \frac{3}{2^{i+1}} &\text{ if } i \geq 2. \end{cases} \] The process $\left(B_n\right)_{n \in \mathbb{Z}}$ is used to define another process $\left(Y_n\right)_{n \in \mathbb{Z}}$ such that, conditioned on the event $B_n =i$, $Y_n \in [1, i+1]$. One can think of the random variable $B_n$ as indicating a `bin' and $Y_n$ as the position chosen within the $B_n$-th bin. The process given in this example is defined so that, for every $k \geq 1$, \begin{equation}\label{eq:bins_cond} \mathbb{P}(Y_0 = i \mid B_0 = B_{-2k} = k, Y_{-2k} = j) = \begin{cases} \frac{1}{k} &\text{if $i \neq j$,}\\ 0 &\text{otherwise}.\\ \end{cases} \end{equation} In all other cases, the state for $Y_0$ is chosen uniformly at random in $[1, B_0 + 1]$, independently of the past. Note that the state of $B_0$ does not depend on the sequence $(Y_i)_{i< 0}$. To be precise, the $g$-function in question is defined by the following conditional probabilities.
For $\{k_n\}_{n \leq 0}$ and $\{j_n\}_{n \leq 0}$ such that for every $n \leq 0$, $j_n \in \{1, 2, \ldots, k_n+1\}$, \begin{multline}\label{eq:bins-g-function} \mathbb{P}\left(B_0 = k_0, Y_0 = j_0 \mid (B_n)_{n < 0} = (k_n)_{n < 0}, (Y_n)_{n < 0} = (j_n)_{n < 0} \right)\\ = \begin{cases} \frac{1}{3(k_0 + 1)} &\text{if } k_{-1} \geq 2, k_{-1} = k_0 - 1, \text{ and, } k_{0} \neq k_{-2k_0}\\ \frac{1}{3k_0} &\text{if } k_{-1} \geq 2, k_{-1} = k_0 - 1, k_{0} = k_{-2k_0}, \text{ and, } j_{0} \neq j_{-2k_0}\\ \frac{2}{3(k_0+1)} &\text{if } k_{-1} \geq 2, k_{-1} = k_0 + 1, \text{ and, } k_{0} \neq k_{-2k_0}\\ \frac{2}{3k_0} &\text{if } k_{-1} \geq 2, k_{-1} = k_0 + 1, k_{0} = k_{-2k_0}, \text{ and, } j_{0} \neq j_{-2k_0}\\ \frac{1}{3} &\text{if } k_0 = 2, k_{-1} = 1, \text{ and, } k_{-4} \neq 2\\ \frac{1}{2} &\text{if } k_0 = 2, k_{-1} = 1, k_{-4} = 2, \text{ and, } j_0 \neq j_{-4}\\ 0 &\text{otherwise}.\\ \end{cases} \end{multline} Note that the first four cases carry the factors $1/3$ for an up-step and $2/3$ for a down-step of the walk, matching the transition probabilities of $\left(B_n\right)_{n \in \mathbb{Z}}$. The measure is constructed by recursively defining measures on cylinder sets, using the recursive technique described in the introduction. Let $\mathbb{P}_0$ be the measure for the stationary process given by the random walk $\left(B_n\right)_{n \in \mathbb{Z}}$. For every $m \geq 1$, $\mu_m$ will be a measure on $(\mathbb{N} \times \mathbb{N})^m$ that satisfies the $g$-function \eqref{eq:bins-g-function} and has the property that, for every $k \geq 0$, if $\min \{a_0, a_1, \ldots, a_{2k}\} > k$ and $j_0, j_1, \ldots, j_{2k}$ are such that for each $\ell \in [0, 2k]$, $j_\ell \in [1, a_{\ell} + 1]$, then \begin{multline}\label{eq:bins-pos-measure} \mu_{2k+1}\left(B_0 = a_0, Y_0 = j_0, B_{1} = a_1, Y_{1} = j_1, \ldots, B_{2k} = a_{2k}, Y_{2k} = j_{2k} \right)\\ = \mathbb{P}_0\left(B_0 = a_0, B_{1} = a_1, \ldots, B_{2k} = a_{2k} \right) \prod_{i=0}^{2k} \frac{1}{a_i + 1}.
\end{multline} In particular, if for each $\ell \in [1, 2k]$, $|a_{\ell} - a_{\ell-1}| = 1$, then \[ \mu_{2k+1}\left(B_0 = a_0, Y_0 = j_0, B_{1} = a_1, Y_{1} = j_1, \ldots, B_{2k} = a_{2k}, Y_{2k} = j_{2k} \right)>0. \] Define the measure $\mu_1$ so that for any $i \geq 1$ and $j \in [1, i+1]$, \[ \mu_1(B_1 = i, Y_1 = j) = \mathbb{P}_0(B_1 = i) \frac{1}{i+1}. \] Define the measure $\mu_2$ for $i_1, i_2 \geq 1$, $j_1 \in [1, i_1+1]$ and $j_2 \in [1, i_2+1]$ by \[ \mu_2(B_1 = i_1, Y_1 = j_1, B_2 = i_2, Y_2 = j_2) = \mathbb{P}_0(B_1 = i_1, B_2 = i_2) \frac{1}{i_1+1}\cdot \frac{1}{i_2 + 1}. \] Note that the condition \eqref{eq:bins-pos-measure} is trivially satisfied for $\mu_1$. For each $k \geq 1$, given $\mu_{2k}$ and $\mu_{2k-1}$, define $\mu_{2k+1}$ as follows. For each $\ell \in [1, 2k+1]$, let $i_{\ell} \geq 1$ and $j_{\ell} \in [1, i_{\ell}+1]$. Define the following events: \begin{align*} E_1 &=\{B_1 = i_1, Y_1 = j_1\},\\ E_2 &=\{B_2 = i_2, Y_2 = j_2, \ldots, B_{2k} = i_{2k}, Y_{2k} = j_{2k}\},\\ E_3 &=\{B_{2k+1} = i_{2k+1}, Y_{2k+1} = j_{2k+1}\}. \end{align*} Define the measure of the cylinder set $E_1 \cap E_2 \cap E_3$ as follows. If either $i_1 \neq k$ or $i_{2k+1} \neq k$, then define \[ \mu_{2k+1}\left(E_1 \cap E_2 \cap E_3\right) = \mu_{2k}(E_1 \mid E_2) \mu_{2k-1}(E_2) \mu_{2k}(E_3 \mid E_2). \] Note that if \eqref{eq:bins-pos-measure} is satisfied for $\mu_{2k-1}$, then it is also satisfied for $\mu_{2k+1}$. If $i_1 = i_{2k+1} = k$ and $j_1 \neq j_{2k+1}$, then define \begin{multline*} \mu_{2k+1}\left(E_1 \cap E_2 \cap E_3\right)\\ = \mathbb{P}_0(B_1 = i_1 \mid B_2 = i_2) \mu_{2k-1}(E_2) \mathbb{P}_0(B_{2k+1} = i_{2k+1} \mid B_{2k} = i_{2k}) \frac{1}{k^2+k}. \end{multline*} Otherwise, if $i_1 = i_{2k+1} = k$ and $j_1 = j_{2k+1}$, then define $\mu_{2k+1}\left(E_1 \cap E_2 \cap E_3\right) = 0$. The measure $\mu_{2k+2}$ is defined by a single case.
Define the events \begin{align*} E_1 &=\{B_1 = i_1, Y_1 = j_1\},\\ E_2 &=\{B_2 = i_2, Y_2 = j_2, \ldots, B_{2k+1} = i_{2k+1}, Y_{2k+1} = j_{2k+1}\},\\ E_3 &=\{B_{2k+2} = i_{2k+2}, Y_{2k+2} = j_{2k+2}\} \end{align*} and define the measure $\mu_{2k+2}$ on the cylinder set $E_1 \cap E_2 \cap E_3$ by \[ \mu_{2k+2}(E_1 \cap E_2 \cap E_3) = \mu_{2k+1}(E_1 \mid E_2)\mu_{2k}(E_2) \mu_{2k+1}(E_3 \mid E_2). \] The measures $(\mu_k)_{k \geq 1}$ are a consistent sequence of measures that define a stationary process on $(\mathbb{N} \times \mathbb{N})^{\mathbb{Z}}$ with the property given by equation \eqref{eq:bins_cond}. The process is a uniform martingale with $n$-th variation satisfying \[ \operatorname{var}(n) \leq \frac{2}{n}. \] Denote the measure of this process $\mathbb{P}$ and suppose, in the hope of a contradiction, that $((\mathbb{N} \times \mathbb{N})^{\mathbb{Z}}, \mathbb{P})$ were a random Markov process with look-back distance $(L_i)_{i \in \mathbb{Z}}$. Fix $k$ with the property that $\mathbb{P}(L_0 = k) > 0$. Let $n \geq 2k$ and consider the event \[ E_n = \begin{cases} \{B_{-1} = n+1, B_{-2} = n+2, \ldots, B_{-k} = n+1\} &\text{$k$ odd}\\ \{B_{-1} = n+1, B_{-2} = n+2, \ldots, B_{-k} = n+2\} &\text{$k$ even}. \end{cases} \] Then, the event $F_n = E_n \cap \{Y_{-1} = \cdots = Y_{-k} = 1\}$ has positive measure by equation \eqref{eq:bins-pos-measure}. Conditioned on the event $F_n$, either $B_0 = n$ or $B_0 = n+2$ with probability $1$. For every $j \in [1,n+1]$, there is a past contained in $F_n$ with $B_{-2n} = n$ and $Y_{-2n} = j$. Thus, \begin{multline*} \mathbb{P}(B_{0} = n, Y_{0} = j, L_0 = k \mid F_n)\\ = \mathbb{P}(B_{0} = n, Y_{0} = j, L_0 = k \mid F_n, B_{-2n} = n, Y_{-2n} = j) = 0. \end{multline*} Similarly, for every $j \in [1,n+3]$, \begin{multline*} \mathbb{P}(B_{0} = n+2, Y_{0} = j, L_0 = k \mid F_n)\\ = \mathbb{P}(B_{0} = n+2, Y_{0} = j, L_0 = k \mid F_n, B_{-2n-2} = n+2, Y_{-2n-2} = j) = 0.
\end{multline*} Then, \begin{align*} \mathbb{P}(L_0 = k \mid F_n) &=\sum_{j=1}^{n+1} \mathbb{P}(B_{0} = n, Y_{0} = j, L_0 = k \mid F_n) +\\ & \qquad \sum_{j =1}^{n+3} \mathbb{P}(B_{0} = n+2, Y_{0} = j, L_0 = k \mid F_n)\\ &=0, \end{align*} contradicting the assumption that $\mathbb{P}(L_0 = k) > 0$, since $\mathbb{P}(F_n) > 0$ and the events $F_n$ and $\{L_0 = k\}$ are independent. Therefore, this process is not a random Markov process. The details are not given here, but one could prove that the measure constructed in this example is reversible. \qed \end{example} Example \ref{sec:rwbins} shows that something stronger than the assumptions in the definition of a uniform martingale is needed to guarantee that a stationary process is a random Markov process. However, random Markov processes do not obey the dominating measure condition (Definition \ref{def:dommeasure}) in general; while every random Markov chain on a finite state space has a finite dominating measure, this need not be the case when the alphabet is infinite. We illustrate this by the following much simpler example. \begin{example} \label{example:rmnodom} Consider the following Markov chain on the state space $\mathbb{Z}^+$. Define the transition probabilities, for each $n \geq 1$, by \begin{align*} \mathbb{P}\left(X_0 = 1 \mid X_{-1} = n\right) &=\frac{1}{2}\\ \mathbb{P}\left(X_0 = n+1 \mid X_{-1} = n\right) &=\frac{1}{2}. \end{align*} Then, $\left(X_i\right)_{i \in \mathbb{Z}}$ has a stationary probability measure with $\mathbb{P}\left(X_0 = n\right) = \frac{1}{2^n}$. As $\left(\left(X_i\right)_{i \in \mathbb{Z}}, \mathbb{P}\right)$ is a Markov chain with a stationary probability measure, this process is also a random Markov chain. However, there can be no finite dominating measure: since $\mathbb{P}\left(X_0 = n+1 \mid X_{-1} = n\right) = \frac{1}{2}$ for every $n \geq 1$, any dominating measure $\nu$ must satisfy $\nu\left(\{n+1\}\right) \geq \frac{1}{2}$ for every $n$, and so $\nu$ cannot be finite.
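As a quick sanity check (ours, not part of the example), the stationary measure and the obstruction to a finite dominating measure can be verified directly:

```python
# Sketch: check stationarity of pi(n) = 2^{-n} for the chain with
# P(X_0 = 1 | X_{-1} = n) = P(X_0 = n+1 | X_{-1} = n) = 1/2, and note that
# any dominating measure gives every singleton {n+1} mass at least 1/2.
from fractions import Fraction

def P(a, b):
    """Transition probability P(X_0 = a | X_{-1} = b), for a, b >= 1."""
    return Fraction(1, 2) if a == 1 or a == b + 1 else Fraction(0)

N = 40  # truncation level for the infinite state space
pi = {n: Fraction(1, 2 ** n) for n in range(1, N + 1)}

# pi(n+1) = pi(n) * P(n+1 | n) for every n >= 1.
for n in range(1, N):
    assert pi[n + 1] == pi[n] * P(n + 1, n)
# pi(1) = sum_b pi(b) * P(1 | b); the truncated sum is (1/2)(1 - 2^{-N}).
assert sum(pi[b] * P(1, b) for b in pi) == Fraction(1, 2) * (1 - Fraction(1, 2 ** N))
# Any dominating measure must bound P(n+1 | n) = 1/2 for all n, so it is infinite.
assert all(P(n + 1, n) == Fraction(1, 2) for n in range(1, N))
```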
\qed \end{example} \section{Uncountable alphabets}\label{sec:unctble} Example \ref{sec:rwbins} in the previous section demonstrates that the notions of uniform martingale and random Markov process are not equivalent when the alphabet for the process is infinite. In this section, another condition is considered which implies that a process is a random Markov process, even for uncountable alphabets. Recall that a stationary process $\left(\left(X_n\right)_{n \in \mathbb{Z}}, \mathbb{P}\right)$, on alphabet $A$, satisfies \emph{Berbee's ratio condition} (see Definition \ref{cond:four}) if{f} for every $\varepsilon >0$, there is an $n$ so that for every $\mathbf{\omega} \in A^{\mathbb{Z}^-}$ and $E_0$, a measurable subset of $A$, \begin{equation}\label{eq:ratio} \left|\frac{\mathbb{P}\left(X_0 \in E_0 \mid X_{-1} = \omega_{-1}, \ldots\right)}{\mathbb{P}\left(X_0 \in E_0 \mid X_{-1} = \omega_{-1}, \ldots, X_{-n} = \omega_{-n}\right)} - 1 \right| < \varepsilon. \end{equation} The ratio in \eqref{eq:ratio} is arranged so that if the denominator is $0$, then the numerator is as well. Thus, adopting the convention that $\frac{0}{0} = 1$, the fraction in \eqref{eq:ratio} is well-defined for all $E_0$ and $\mathbf{\omega}$. While any stationary process that satisfies the ratio condition is certainly a uniform martingale, the converse need not be true, in general. Indeed, we now give a series of examples illustrating the differences between these notions. First, the idea behind Example \ref{sec:rwbins} is modified to explicitly construct an example of a uniform martingale on an uncountable alphabet that has a finite dominating measure but is not a random Markov process. Note that, by Theorem \ref{thm:ratio}, any such stationary process necessarily does not satisfy Berbee's ratio condition. \begin{example}\label{ex:unctble-notrm} Let $A = \left(0,1\right]$ be the half-open unit interval and let $\lambda$ be the usual Lebesgue measure on $A$.
The example given here is a uniform martingale on $A$, with stationary measure $\lambda$ and finite dominating measure $2\cdot \lambda$, that is not a random Markov process. For every $x \in (0,1]$, there is a unique pair $(i, r)$ with $i \geq 1$ and $r \in (0,1]$ so that $x = \frac{1}{2^i}(1+r)$. The alphabet $A$ is treated equivalently as $(0,1]$ and as $\mathbb{Z}^+ \times (0,1]$, and the values of $i$ and $r$ are used below to simplify the definition of the process and $g$-function. On $\mathbb{Z}^+ \times (0,1]$, Lebesgue measure $\lambda$ can be given by choosing $i$ and $r$ independently, with $\mathbb{P}(i = i_0) = \frac{1}{2^{i_0}}$ and $r$ chosen according to $\lambda$. For every $k \geq 1$ and $j \in \{1, 2, \ldots, k\}$, define \begin{equation}\label{eq:small-ints} I_{k,j} = \bigg(\frac{j-1}{k}, \frac{j}{k} \bigg]. \end{equation} These sets will play the role of the `bins' as in Example \ref{sec:rwbins}. The stationary process $((X_n, R_n)_{n \in \mathbb{Z}}, \mathbb{P})$ constructed in this section has conditional probabilities given as follows. For every $(i_n, r_n)_{n < 0} \in (\mathbb{Z}^+ \times (0,1])^{\mathbb{Z}^-}$, $i_0 \in \mathbb{Z}^+$, and measurable set $E_0 \subseteq (0,1]$, \begin{multline}\label{eq:unctble-g-fn} \mathbb{P}\left(X_0 = i_0, R_0 \in E_0 \mid X_{-1} = i_{-1}, R_{-1} = r_{-1}, X_{-2} = i_{-2}, R_{-2} = r_{-2}, \ldots \right)\\ = \begin{cases} \frac{1}{2^{i_0}} \frac{\lambda(E_0 \cap I_{k, \ell}^c)}{1 - 1/k} &\text{if $i_{-1} = \cdots = i_{-k+1} = 1$, $i_{-k} = k$, and $r_{-k} \in I_{k, \ell}$}\\ \frac{1}{2^{i_0}} \lambda(E_0) &\text{otherwise}. \end{cases} \end{multline} That is, the value of $i_0$ is $j$ with probability $2^{-j}$ for every $j \geq 1$, and $r_0$ is chosen according to Lebesgue measure unless $i_{-1} = \cdots = i_{-k+1} = 1$ and $i_{-k} = k$, in which case $r_0$ is chosen in a bin different from the bin containing $r_{-k}$.
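The conditional law \eqref{eq:unctble-g-fn} can be simulated directly. The sketch below (helper names and the rejection-sampling step are of our own choosing) draws the next symbol $(i_0, r_0)$: $i_0$ is geometric, and $r_0$ is uniform except after a pattern $k, 1, \ldots, 1$ in the $i$-coordinates, when $r_0$ is drawn uniformly outside the bin containing $r_{-k}$.

```python
import random

rng = random.Random(0)

def geometric():
    """Sample i with P(i = j) = 2^{-j}, j >= 1."""
    i = 1
    while rng.random() < 0.5:
        i += 1
    return i

def next_symbol(past):
    """Draw (i0, r0) given the past, a list of (i, r) pairs, newest last."""
    # look for the pattern i_{-1} = ... = i_{-k+1} = 1 and i_{-k} = k
    forbidden = None
    for k in range(2, len(past) + 1):
        i_k, r_k = past[-k]
        if i_k == k and all(past[-j][0] == 1 for j in range(1, k)):
            ell = int(r_k * k - 1e-12)          # r_{-k} lies in I_{k, ell+1}
            forbidden = (ell / k, (ell + 1) / k)
            break
    r0 = rng.random()
    if forbidden is not None:
        while forbidden[0] < r0 <= forbidden[1]:
            r0 = rng.random()                   # uniform on the complement
    return geometric(), r0

# after the past (3, 0.5), (1, .), (1, .), the new r0 must avoid I_{3,2}
past = [(3, 0.5), (1, 0.9), (1, 0.2)]
samples = [next_symbol(past)[1] for _ in range(2000)]
```

Every sampled $r_0$ avoids the bin $I_{3,2} = (1/3, 2/3]$ containing $r_{-3} = 0.5$, as the special case of \eqref{eq:unctble-g-fn} requires.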
To see that any stationary process that satisfies equation \eqref{eq:unctble-g-fn} is a uniform martingale (Definition \ref{def:um}), fix $(i_j, r_j)_{j < 0} \in (\mathbb{Z}^+ \times (0,1])^{\mathbb{Z}^-}$, $i_0 \in \mathbb{Z}^+$, $n \geq 1$ and let $E_0 \subseteq (0,1]$ be a measurable set. If for some $m \geq n+1$, $i_{-1} = \cdots = i_{-m+1} = 1$ and $i_{-m} = m$, then, with $\ell \in \{1, 2, \ldots, m\}$ chosen so that $r_{-m} \in I_{m, \ell}$, \begin{align*} \vert \mathbb{P}&\left(X_0 = i_0, R_0 \in E_0 \mid X_{-1} = i_{-1}, R_{-1} = r_{-1}, \ldots \right)\\ & - \mathbb{P}\left(X_0 = i_0, R_0 \in E_0 \mid X_{-1} = i_{-1}, R_{-1} = r_{-1}, \ldots, X_{-n} = i_{-n}, R_{-n} = r_{-n} \right) \vert\\ &\leq \frac{1}{2^{i_0}}\bigg\vert \frac{\lambda(E_0 \cap I_{m, \ell}^c)}{1 - 1/m} - \lambda(E_0)\bigg\vert\\ & = \frac{1}{2^{i_0}}\bigg\vert \frac{\lambda(E_0 \cap I_{m, \ell}^c)}{m-1} - \lambda(E_0 \cap I_{m, \ell})\bigg\vert\\ &\leq \frac{1}{2}\left(\frac{1 - 1/m}{m-1} + \frac{1}{m} \right)\\ & = \frac{1}{m} \leq \frac{1}{n+1}. \end{align*} Otherwise, \begin{multline*} \mathbb{P}\left(X_0 = i_0, R_0 \in E_0 \mid X_{-1} = i_{-1}, R_{-1} = r_{-1}, \ldots \right) =\\ \mathbb{P}\left(X_0 = i_0, R_0 \in E_0 \mid X_{-1} = i_{-1}, R_{-1} = r_{-1}, \ldots, X_{-n} = i_{-n}, R_{-n} = r_{-n} \right). \end{multline*} Further, to see that any stationary process satisfying equation \eqref{eq:unctble-g-fn} has $2 \cdot \lambda$ as a finite dominating measure, note that for any $m \geq 2$ and $\ell \in \{1, 2, \ldots, m\}$, \[ \frac{\lambda(E \cap I_{m, \ell}^c)}{1 - 1/m} \leq 2 \lambda(E). \] Rather than appealing to fixed-point theorems to show that there is at least one stationary process satisfying equation \eqref{eq:unctble-g-fn}, such a measure can be explicitly constructed by recursion in order to guarantee that certain natural sets have positive measure. Some details of this construction are given here and closely follow the construction method used in Example \ref{sec:rwbins}.
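The key quantity in the uniform-martingale estimate can be stress-tested numerically. In the sketch below (the grid discretization is of our own choosing), $E$ ranges over random unions of grid cells refining the bins, $\lambda$ is computed exactly as a cell count, and $\big|\lambda(E \cap I_{m,\ell}^c)/(1-1/m) - \lambda(E)\big|$ is verified to be at most $2/m$; multiplying by the factor $2^{-i_0} \leq 1/2$ then gives the bound $1/m$ used above.

```python
import random
from fractions import Fraction

rng = random.Random(1)
m, cells_per_bin = 10, 6        # bins I_{m,1}, ..., I_{m,m}, each split into 6 cells
N = m * cells_per_bin           # the grid refines the bins, so lambda is exact

for _ in range(200):
    E = [rng.random() < 0.5 for _ in range(N)]      # a random union of cells
    lam_E = Fraction(sum(E), N)
    for ell in range(m):                            # the bin I_{m, ell+1}
        cells = range(ell * cells_per_bin, (ell + 1) * cells_per_bin)
        lam_E_bin = Fraction(sum(E[c] for c in cells), N)
        # lambda(E intersect I^c) = lambda(E) - lambda(E intersect I)
        lhs = abs((lam_E - lam_E_bin) / (1 - Fraction(1, m)) - lam_E)
        assert lhs <= Fraction(2, m)
```

The assertion never fails, in line with the identity $\lambda(E \cap I^c)/(1-1/m) - \lambda(E) = \lambda(E \cap I^c)/(m-1) - \lambda(E \cap I)$, a difference of two quantities each at most $1/m$.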
A sequence of consistent measures $(\mu_k)_{k \geq 1}$ is constructed so that for each $k \geq 1$, $\mu_k$ is a measure on $(\mathbb{Z}^+ \times (0,1])^k$, with the coordinates of an element of $(\mathbb{Z}^+ \times (0,1])^{k}$ denoted by $X_1, \ldots, X_k \in \mathbb{Z}^+$ and $R_1, \ldots, R_k \in (0,1]$. The measures constructed will have the property that if, for every $\ell \leq k-1$, the sequence $i_1, i_2, \ldots, i_k$ does not contain a consecutive subsequence of length $\ell$ of the form $\ell, 1, 1, \ldots, 1$, then \begin{multline}\label{eq:no-bad-event} \mu_k(X_1 = i_1, R_1 \in E_1, X_2 = i_2, R_2 \in E_2, \ldots, X_k = i_k, R_k \in E_k)\\ = \frac{1}{2^{i_1 + i_2 + \cdots + i_k}} \prod_{j = 1}^{k} \lambda(E_j). \end{multline} That is, except for certain special choices of $i_1, i_2, \ldots, i_k$, the measure $\mu_k$ makes the coordinates independent. To start the recursion, let $\mu_1 \sim \lambda$ be the usual Lebesgue measure and let $\mu_2 \sim \lambda \times \lambda$ be the usual product measure with $\lambda$. Both measures $\mu_1$ and $\mu_2$ satisfy condition \eqref{eq:no-bad-event} trivially. For the recursion step, fix $k \geq 3$ and suppose that $\mu_{k-1}$ and $\mu_{k-2}$ have been defined and satisfy the condition in equation \eqref{eq:no-bad-event}. Fix $i_1, i_2, \ldots, i_k \in \mathbb{Z}^+$ and let $E_1, E_2, \ldots, E_k \subseteq (0,1]$ be measurable sets. Without loss of generality, assume that for every $j \in [1,k]$, there exists $\ell \in [1,k-1]$ so that $E_j \subseteq I_{k-1, \ell}$. The measure can subsequently be defined on arbitrary sets by taking finite disjoint unions.
Define the events \begin{align*} F_1 &= \left\{X_1= i_1, R_1 \in E_1 \right\}\\ F_{k-2} &=\bigg\{X_2 = i_2, R_2 \in E_2, X_3 = i_3, R_3 \in E_3, \ldots, X_{k-1} = i_{k-1}, R_{k-1} \in E_{k-1}\bigg\}\\ F_k &= \left\{ X_k = i_k, R_{k} \in E_k\right\}. \end{align*} In order to define $\mu_k(F_1 \cap F_{k-2} \cap F_k)$, the measures $\mu_{k-2}, \mu_{k-1}$ are used to define a coupling of $\mu_{k-1}(F_1 \cap F_{k-2})$ and $\mu_{k-1}(F_{k-2} \cap F_k)$ in a way that satisfies equation \eqref{eq:no-bad-event} and condition \eqref{eq:unctble-g-fn}. If $i_1 = k-1$ and $i_2 = \cdots = i_{k-1} = 1$, then let $\ell_1, \ell_2$ be such that $E_1 \subseteq I_{k-1, \ell_1}$ and $E_k \subseteq I_{k-1, \ell_2}$. Define \begin{equation}\label{eq:muk1} \mu_k\left(F_1 \cap F_{k-2} \cap F_k \right) = \mu_{k-2}(F_{k-2}) \cdot \begin{cases} \frac{1}{2^{i_k}} \frac{1}{2^{i_1}} \lambda(E_1) \lambda(E_k) \cdot \frac{k-1}{k-2} &\text{if $\ell_1 \neq \ell_2$}\\ 0 &\text{if $\ell_1 = \ell_2$}. \end{cases} \end{equation} Otherwise, define \begin{equation}\label{eq:muk2} \mu_k\left(F_1 \cap F_{k-2} \cap F_k \right) = \mu_{k-2}\left(F_{k-2}\right) \cdot \mu_{k-1}\left(F_{1} \mid F_{k-2} \right) \cdot \mu_{k-1}\left(F_{k} \mid F_{k-2} \right). \end{equation} Using the Carath\'{e}odory extension theorem, the measure $\mu_k$ is extended from cylinder sets to all measurable sets. One can show that the measure $\mu_k$ defined by equations \eqref{eq:muk1} and \eqref{eq:muk2} satisfies conditions \eqref{eq:unctble-g-fn} and \eqref{eq:no-bad-event} and, in particular, that for every $k, \ell$, \[ \mathbb{P}\left(X_{-k} = k, X_{-k+1} = 1, \cdots, X_{-1} = 1, X_0 = \ell \right) > 0. \] This completes the construction of the sequence of consistent measures $(\mu_k)_{k \geq 1}$, which are stationary by construction.
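The role of the factor $(k-1)/(k-2)$ in \eqref{eq:muk1} is to renormalize for the excluded bin. A short exact-arithmetic check (in notation of our own choosing) confirms that, after summing the special case over the last coordinate, the total mass is again $1$, as consistency of the $\mu_k$ requires.

```python
from fractions import Fraction

for k in range(3, 12):
    bins = k - 1                      # bins I_{k-1, 1}, ..., I_{k-1, k-1}
    lam_bin = Fraction(1, bins)       # lambda(I_{k-1, l})
    factor = Fraction(k - 1, k - 2)   # the renormalization in (eq:muk1)
    # summing 2^{-i_k} over i_k >= 1 gives 1, so marginalizing the
    # special case over (X_k, R_k) leaves the sum over bins l2 != l1:
    mass = sum(lam_bin * factor for l2 in range(1, bins + 1) if l2 != 1)
    assert mass == 1
```

With one of the $k-1$ bins excluded, the remaining mass is $(k-2)/(k-1)$, exactly cancelled by the factor $(k-1)/(k-2)$.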
Thus, by the Kolmogorov extension theorem, there is a measure $\mathbb{P}$ on $(\mathbb{Z}^+ \times (0,1])^\mathbb{Z}$ with marginals given by the measures $(\mu_k)_{k\geq 1}$; this measure is stationary and satisfies conditions \eqref{eq:unctble-g-fn} and \eqref{eq:no-bad-event}. To see that this uniform martingale is not a random Markov process, suppose, for the sake of contradiction, that it were, and let $k$ be such that $\mathbb{P}(L_0 = k) > 0$. Then, for all $r_{-1}, r_{-2}, \ldots, r_{-k+1} \in (0,1]$, every $j \geq 1$, every $\ell \in \{1, 2, \ldots, k\}$, every $r_{-k} \in I_{k, \ell}$, and every measurable $E_0 \subseteq I_{k, \ell}$, \begin{align*} \mathbb{P}&\bigg(X_0 = j, R_0 \in E_0 \wedge L_0 = k \mid X_{-1} = 1, R_{-1} = r_{-1}, \ldots,\\ &\qquad X_{-k+1} = 1, R_{-k+1} = r_{-k+1} \bigg)\\ &=\mathbb{P}\bigg(X_0 = j, R_0 \in E_0 \wedge L_0 = k \mid X_{-1} = 1, R_{-1} = r_{-1}, \ldots,\\ &\qquad X_{-k+1} = 1, R_{-k+1} = r_{-k+1}, X_{-k} = k, R_{-k} = r_{-k} \bigg)\\ & = 0. \end{align*} As $j$, $E_0$, and $r_{-k}, r_{-k+1}, \ldots, r_{-1}$ were arbitrary, \[ \mathbb{P}(L_0 = k \mid X_{-1} = X_{-2} = \cdots = X_{-k+1} = 1) = 0, \] contradicting the assumption that these events are independent and occur with positive probability. Therefore, the process is not a random Markov process. \qed \end{example} While a stationary process that satisfies the ratio condition is a uniform martingale, it need not have a dominating measure, as the following examples show. \begin{example} Two examples are given of processes without dominating measures that satisfy the ratio condition. For the first, consider the measure on $\mathbb{N}^{\mathbb{Z}}$, given by setting, for each $n \geq 1$, \[ \mathbb{P}\left(\ldots, X_{-1}=n, X_0=n, X_{1}=n, X_2 = n, \ldots\right) = \frac{1}{2^n}. \] This certainly satisfies the ratio condition, and it has no finite dominating measure: conditioned on any past, $X_0$ is constant, so a dominating measure would have to give measure at least $1$ to each integer. \qed \end{example} The next example is a process that is ergodic and mixing, satisfies the ratio condition, and yet has no finite dominating measure.
\begin{example} Consider again a measure on doubly infinite words on $\mathbb{N}$, with the measure on $X_0$ given in terms of the values of $X_{-1}$ and $X_{-2}$. Define \begin{equation}\label{eq:2-step-Markov} \mathbb{P}\left(X_0=c \mid X_{-1} = a, X_{-2}=b\right)= \begin{cases} \frac{1}{2} &\text{ if } a \neq b \text{ and } c=a,\\ \frac{2^{-c-1}}{1-2^{-a}} &\text{ if } a \neq b \text{ and } c \neq a,\\ 0 &\text{ if } a=b=c, \text{ and }\\ \frac{2^{-c}}{1-2^{-b}} &\text{ if } a=b \text{ and } c \neq b. \end{cases} \end{equation} A stationary process satisfying equation \eqref{eq:2-step-Markov} is a 2-step Markov process with stationary measure on pairs satisfying \begin{equation} \mathbb{P}\left(X_0 = a, X_1 = b \right) = \begin{cases} \frac{3}{4} \cdot \frac{1}{2^{a+b}} &\text{ if } a \neq b\\ \frac{3}{4} \cdot \frac{1}{2^a}\left(1 - \frac{1}{2^a}\right) &\text{ if } a = b. \end{cases} \end{equation} Thus, the stationary distribution on any particular coordinate is \[ \mathbb{P}\left(X_0 = a\right) = \frac{3}{2}\frac{1}{2^a}\left(1-\frac{1}{2^a}\right). \] This process satisfies the ratio condition, Definition \ref{cond:four}, as it is a two-step Markov chain. While $\left(X_i\right)_{i \in \mathbb{Z}}$ is a uniform martingale, this process does not have a finite dominating measure as, for example, a dominating measure would have to give measure at least $\frac{1}{2}$ to each integer. \qed \end{example} These examples show that, in terms of the conditions considered here, the implication in the following theorem is as strong as possible. This proof is closely related to that given by Kalikow~\cite{Kalikow90} for $2$-letter alphabets. Recall Theorem \ref{thm:ratio}: \begin{thmrat} Let $A$ be any set and $\Omega = A^{\mathbb{Z}}$.
Let $\mathbb{P}$ be a stationary probability measure on $\Omega$ so that $\left(\Omega, \mathbb{P}\right)$ satisfies the ratio condition as in Definition \ref{cond:four}, then $\left(\Omega, \mathbb{P}\right)$ is a random Markov process. \end{thmrat} \begin{proof} Let $\left(p_i\right)_{i\geq 1} \subseteq \left(0,1\right)$ with $\sum_{i=1}^{\infty} p_i = 1$. It is shown in this proof that there is a sequence $\left\{n_i\right\}_{i \geq 1}$ and a process $\left(L_n\right)_{n \in \mathbb{Z}}$ with a coupling $\hat{\mathbb{P}}$ such that $\left(\left(X_n, L_n\right)_{n \in \mathbb{Z}}, \hat{\mathbb{P}}\right)$ is a random Markov process and for each $i \geq 1$, $\hat{\mathbb{P}}\left(L_0 = n_i\right) = p_i$. The sequence $n_i$ is chosen recursively. Using the ratio condition, as in Definition \ref{cond:four}, choose $n_1$ large enough so that for all $n \geq n_1$, and for all $\mathbf{\omega}$, $E$, \[ \left|\frac{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots\right)}{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n} = \omega_{-n}\right)} - 1\right| < \frac{p_1}{2}. \] For all $i \geq 1$, given $n_i$, choose $n_{i+1} \geq n_i$ to be such that for all $n \geq n_{i+1}$, for all $\mathbf{\omega}$, $E$ \[ \left|\frac{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots\right)}{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n} = \omega_{-n}\right)} - 1\right| < \frac{p_{i+1}}{2\sum_{j \leq i+1} p_j}. \] For every $i \geq 1$, $\mathbf{\omega}$, define a measure, $\mu_{\mathbf{\omega}, i}$ on $A$ by \[ \mu_{\mathbf{\omega}, i}\left(E\right) = \mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_i}= \omega_{-n_i}\right). 
\] As in the proof of Theorem \ref{thm:countable}, the goal is to show that there is a joint distribution with a process $\left(L_n\right)_{n \in \mathbb{Z}}$ so that \[ \mu_{\mathbf{\omega}, i}\left(E\right) = \hat{\mathbb{P}}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_i} = \omega_{-n_i} \wedge L_0 \leq n_i\right). \] For each $i \geq 1$, define another measure $\tau_{\mathbf{\omega}, i}$ as follows. For $i=1$, set $\tau_{\mathbf{\omega}, 1} = \mu_{\mathbf{\omega}, 1}$ and for $i \geq 1$ define \begin{equation}\label{eq:uncountable_table} \tau_{\mathbf{\omega}, i+1}\left(E\right) = \frac{1}{p_{i+1}}\left(\left(\sum_{j \leq i+1} p_j\right)\mu_{\mathbf{\omega}, i+1}\left(E\right) - \left(\sum_{j \leq i} p_j\right)\mu_{\mathbf{\omega}, i}\left(E\right) \right). \end{equation} Since $\mu_{\mathbf{\omega}, i}$ and $\mu_{\mathbf{\omega}, i+1}$ are both probability measures on $A$, $\tau_{\mathbf{\omega}, i+1}$ is a signed measure on $A$ with $\tau_{\mathbf{\omega}, i+1}\left(A\right) = 1$. Note also that, by definition, for every measurable set $E$ and every $i$, the function $\mathbf{\omega} \mapsto \mu_{\mathbf{\omega}, i}(E)$ is a $\mathbb{P}$-measurable function of $\mathbf{\omega}$. Thus, the function $\mathbf{\omega} \mapsto \tau_{\mathbf{\omega}, i}(E)$ is also $\mathbb{P}$-measurable.
To show that $\tau_{\mathbf{\omega}, i+1}$ is also a positive measure, note that for every event $E$, \begin{align*} \frac{\mu_{\mathbf{\omega}, i+1}\left(E\right)}{\mu_{\mathbf{\omega}, i}\left(E\right)} &=\frac{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_{i+1}} = \omega_{-n_{i+1}}\right)}{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_i} = \omega_{-n_i}\right)}\\ &=\frac{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_{i+1}} = \omega_{-n_{i+1}}\right)}{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots\right)} \\ &\qquad \cdot \frac{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots\right)}{\mathbb{P}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_i} = \omega_{-n_i}\right)}\\ &\geq \left(1-\frac{p_{i+1}}{2\sum_{j \leq i+1} p_j}\right)\left(1+\frac{p_{i+1}}{2\sum_{j \leq i+1}p_j}\right)^{-1}\\ &\geq 1- \frac{p_{i+1}}{\sum_{j \leq i+1}p_j} = \frac{\sum_{j \leq i} p_j}{\sum_{j \leq i+1} p_j}. \end{align*} Thus, $\left(\sum_{j \leq i+1} p_j\right)\mu_{\mathbf{\omega}, i+1}\left(E\right) \geq \left(\sum_{j \leq i} p_j\right)\mu_{\mathbf{\omega}, i}\left(E\right)$, which implies that for every event $E$, $\tau_{\mathbf{\omega}, i+1}\left(E\right) \geq 0$. The measure $\hat{\mathbb{P}}$ is defined so that for each $i$, $\mathbf{\omega}$, and $E$, \begin{equation}\label{eq:unctbl_measure} \hat{\mathbb{P}}\left(X_0 \in E \mid X_{-1} = \omega_{-1}, \ldots, X_{-n_i} = \omega_{-n_i} \wedge L_0 = n_i\right) = \tau_{\mathbf{\omega}, i}\left(E\right). \end{equation} Equation \eqref{eq:unctbl_measure} can be used together with the original measure $\mathbb{P}$ and the fact that $\tau_{\mathbf{\omega}, i}(E)$ is a $\mathbb{P}$-measurable function of $\mathbf{\omega}$, to define a probability measure $\hat{\mathbb{P}}$ on the doubly infinite sequences $\left(X_n, L_n\right)_{n \in \mathbb{Z}}$.
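The construction of the $\tau_{\omega,i}$ can be illustrated on a toy alphabet. In the sketch below (an illustration with data of our own choosing, not the process of the theorem), $\mu_i$ is a sequence of distributions on two letters converging fast enough that every $\tau_{i+1}$ in \eqref{eq:uncountable_table} is a genuine probability vector, and the telescoping identity $\sum_{j \leq N} p_j \tau_j = \big(\sum_{j \leq N} p_j\big)\mu_N$ is verified.

```python
p = [2.0 ** -i for i in range(1, 25)]            # p_i = 2^{-i}
S = [sum(p[:i + 1]) for i in range(len(p))]      # partial sums of the p_i
mu = [[0.5 + 4.0 ** -(i + 1), 0.5 - 4.0 ** -(i + 1)] for i in range(len(p))]

# tau_1 = mu_1, and tau_{i+1} = (S_{i+1} mu_{i+1} - S_i mu_i) / p_{i+1}
tau = [mu[0]]
for i in range(len(p) - 1):
    tau.append([(S[i + 1] * mu[i + 1][a] - S[i] * mu[i][a]) / p[i + 1]
                for a in range(2)])

# each tau_i is a probability vector ...
ok = all(min(t) >= 0 and abs(sum(t) - 1) < 1e-9 for t in tau)

# ... and the mixture sum_{j <= N} p_j tau_j telescopes back to S_N mu_N
N = len(p) - 1
mix = [sum(p[j] * tau[j][a] for j in range(N + 1)) for a in range(2)]
err = max(abs(mix[a] - S[N] * mu[N][a]) for a in range(2))
```

The nonnegativity of the $\tau_i$ here depends on the $\mu_i$ converging quickly relative to the $p_i$, exactly as guaranteed in the proof by the choice of the $n_i$.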
\end{proof} \section{Conclusion}\label{sec:open} Many new problems have been raised about extending results about random Markov processes on finite alphabets to processes on infinite alphabets. The question of whether or not a uniform martingale has a representation as a random Markov process with finite expected look-back distance is of interest in light of the result of Kalikow~\cite{Kalikow90} regarding sufficient conditions for a process to be weak-Bernoulli. Kalikow~\cite{Kalikow90} showed that a finite state random Markov process with finite expected look-back distance and satisfying some other conditions, such as being weak-mixing or having some state whose probability given any past is bounded away from $0$, is weak-Bernoulli. While this result~\cite[Theorem 7]{Kalikow90} was stated only for processes on finite alphabets, the proof remains valid for countable alphabets. In general, it remains unknown whether every uniform martingale on a countable alphabet with a finite dominating measure and $\sum_n \operatorname{var}_n\left(\mathbb{P}\right) < \infty$ has a representation as a random Markov process with finite expected look-back distance. If this is not true, there are some special cases raised by previous work on random Markov processes that remain of interest. One area in which open questions remain is the connections between uniform martingales, random Markov processes and extensions of processes, in the sense used in isomorphism ergodic theory. Recall that the \emph{shift map} $T$ on doubly infinite words is the function $(\omega_i)_{i \in \mathbb{Z}} \mapsto (\omega_{i+1})_{i \in \mathbb{Z}}$. A stationary process $(X_i)_{i \in \mathbb{Z}}$ on an alphabet $A$ is said to \emph{extend to} a stationary process $(Y_i)_{i \in \mathbb{Z}}$ on an alphabet $A'$ if{f} there is a function $f: A^{\mathbb{Z}} \to A'^{\mathbb{Z}}$ that is measurable, measure preserving and commutes with the shift operator.
Kalikow, Katznelson, and Weiss~\cite{KKW92} showed that every zero-entropy process can be extended to a uniform martingale on a finite alphabet. One could ask whether every zero-entropy process can be extended to a uniform martingale on a countable alphabet with $\sum_n \operatorname{var}_n < \infty$. In~\cite{Kalikow12}, Kalikow proved that every process can be extended to a uniform martingale, and in fact to a random Markov process. In this paper, we have established that it is of particular interest when such a process has a finite dominating measure for the present given the past. A further direction would be to establish necessary and sufficient conditions for a process to be extendable to a random Markov process with a finite dominating measure. In particular, since a process has finite entropy if{f} it can be displayed as a process with a finite alphabet, one interesting question is whether every process with finite entropy can be extended to a random Markov process with a finite alphabet. This question was previously posed by Kalikow in~\cite{Kalikow90}. In this paper and in a previous paper by Kalikow~\cite{Kalikow12}, processes with a countable alphabet have been studied in terms of the categories in which such processes can be displayed. Here, we have also displayed finite processes as deterministic random Markov processes and looked at categories of processes with an uncountable alphabet. Isomorphism ergodic theory was launched by Don Ornstein with his isomorphism theorem, and since then a great many theorems about isomorphism classes have been proved and extended to processes with countable and uncountable alphabets. Now that we have displayed these additional ways of looking at such processes, it is our hope that future work will extend isomorphism ergodic theory to study these classifications also.
\section*{Acknowledgements} The authors wish to thank Paul Balister for sharing his construction for the sets of integers in the special case of two-letter alphabets presented in Section \ref{subsec:determ_construction}. It was his construction and our generalization of it to arbitrary finite alphabets that set us in the direction of considering deterministic random-step Markov processes with finite expected look-back distance. \bibliographystyle{amsplain}
\section{Introduction} The purpose of this paper is to determine the density of monic integer polynomials of given degree whose discriminant is squarefree. For polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$, the term $(-1)^ia_i$ represents the sum of the $i$-fold products of the roots of $f$. It is thus natural to order monic polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ by the height $H(f):={\rm max}\{|a_i|^{1/i}\}$ (see, e.g., \cite{BG2}, \cite{PS2}, \cite{SW}). We determine the density of monic integer polynomials having squarefree discriminant with respect to the ordering by this height, and show that the density is positive. The existence of infinitely many monic integer polynomials of each degree having squarefree discriminant was first demonstrated by Kedlaya~\cite{Kedlaya}. However, it has not previously been known whether the density exists or even that the lower density is positive. To state the theorem, define the constants $\lambda_n(p)$ by \begin{equation}\label{jos} \lambda_n(p)=\left\{ \begin{array}{cl} 1 & \mbox{if $n =1$,}\\[.075in] 1-\displaystyle\frac1{p^2} & \mbox{if $n= 2$,}\\[.135in] 1-\displaystyle\frac2{p^2}+\frac1{p^3} & \mbox{if $n= 3$,}\\[.185in] 1-\displaystyle\frac1{p}+\frac{(p-1)^2(1-(-p)^{2-n})}{p^2(p+1)} & \mbox{if $n\geq 4$} \end{array}\right. \end{equation} for $p\neq 2$; also, let $\lambda_1(2)=1$ and $\lambda_n(2)=1/2$ for $n\geq2$. Then a result of Brakenhoff~\cite[Theorem~6.9]{ABZ} states that $\lambda_n(p)$ is the density of monic polynomials over~${\mathbb Z}_p$ having discriminant indivisible by~$p^2$. Let~$\lambda_n:=\prod_p\lambda_n(p)$, where the product is over all primes $p$. We prove: \begin{theorem}\label{polydisc2} Let $n\geq1$ be an integer. Then when monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n$ are ordered by $H(f):= {\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, the density having squarefree discriminant $\Delta(f)$ exists and is equal to $\lambda_n>0$.
\end{theorem} Our method of proof implies that the theorem remains true even if we restrict only to those polynomials of a given degree $n$ having a given number of real roots. It is easy to see from the definition of the $\lambda_n(p)$ that the $\lambda_n$ rapidly approach a limit $\lambda$ as $n\to\infty$, namely, \begin{equation} \lambda=\lim_{n\to\infty} \lambda_n = \frac12\prod_{p>2} \left(1-\displaystyle\frac1{p}+\frac{(p-1)^2}{p^2(p+1)}\right) \approx 30.7\%, \end{equation} where the factor $\frac12$ is the common value $\lambda_n(2)=1/2$ for all $n\geq2$. Therefore, as the degree tends to infinity, the probability that a random monic integer polynomial has squarefree discriminant tends to $\lambda\approx 30.7\%$. In algebraic number theory, one often considers number fields that are defined as a quotient ring $K_f:={\mathbb Q}[x]/(f(x))$ for some irreducible integer polynomial $f(x)$. The question naturally arises as to whether $R_f:={\mathbb Z}[x]/(f(x))$ gives the ring of integers of $K_f$. Our second main theorem states that this is in fact the case for \emph{most} polynomials $f(x)$. We prove: \begin{theorem}\label{polydiscmax2} The density of irreducible monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n>1$, when ordered by $H(f):={\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, such that ${\mathbb Z}[x]/(f(x))$ is the ring of integers in its fraction field is $\prod_p(1-1/p^2)=\zeta(2)^{-1}$. \end{theorem} Note that $\zeta(2)^{-1}\approx\, 60.7927\%$. Since a density of 100\% of monic integer polynomials are irreducible (and indeed have associated Galois group $S_n$) by Hilbert's irreducibility theorem, it follows that $\approx 60.7927\%$ of monic integer polynomials $f$ of any given degree $n>1$ have the property that $f$ is irreducible and ${\mathbb Z}[x]/(f(x))$ is the maximal order in its fraction field. The quantity $\rho_n(p):=1-1/p^2$ represents the density of monic polynomials of degree $n>1$ over ${\mathbb Z}_p$ such that ${\mathbb Z}_p[x]/(f(x))$ is the maximal order in ${\mathbb Q}_p[x]/(f(x))$.
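The local densities entering these theorems can be confirmed by brute force in the quadratic case, where the discriminant $a_1^2 - 4a_2$ depends only on $(a_1, a_2)$ modulo $p^2$; the sketch below (the function name is ours) also checks the numerical value of $\zeta(2)^{-1}$.

```python
import math
from fractions import Fraction

def lambda_2(p):
    """Density of monic quadratics x^2 + a*x + b over Z_p whose discriminant
    a^2 - 4b is not divisible by p^2 (it depends only on a, b mod p^2)."""
    m = p * p
    good = sum(1 for a in range(m) for b in range(m) if (a * a - 4 * b) % m != 0)
    return Fraction(good, m * m)

assert lambda_2(3) == Fraction(8, 9)     # the generic value 1 - 1/p^2 at p = 3
assert lambda_2(5) == Fraction(24, 25)   # ... and at p = 5
assert lambda_2(2) == Fraction(1, 2)     # the special value lambda_2(2) = 1/2

# zeta(2)^{-1} = prod_p (1 - 1/p^2) ~ 60.7927%, truncated over p < 10^4
limit = 10**4
is_prime = [True] * limit
prod = 1.0
for q in range(2, limit):
    if is_prime[q]:
        for r in range(q * q, limit, q):
            is_prime[r] = False
        prod *= 1 - 1 / q**2
assert abs(prod - 6 / math.pi**2) < 1e-4
```

The exhaustive count reproduces Brakenhoff's densities for $n = 2$, including the special behavior at $p = 2$.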
The determination of this beautiful $p$-adic density, and its independence of $n$, is due to Hendrik Lenstra (see~\cite[Proposition~3.5]{ABZ}). Theorem~\ref{polydiscmax2} again holds even if we restrict to polynomials of degree $n$ having a fixed number of real roots. If the discriminant of an order in a number field is squarefree, then that order must be maximal. Thus the irreducible polynomials counted in Theorem~\ref{polydisc2} are a subset of those counted in Theorem~\ref{polydiscmax2}. The additional usefulness of Theorem~\ref{polydisc2} in some arithmetic applications is that if $f(x)$ is a monic irreducible integer polynomial of degree $n$ with squarefree discriminant, then not only is ${\mathbb Z}[x]/(f(x))$ maximal in the number field ${\mathbb Q}[x]/(f(x))$ but the associated Galois group is necessarily the symmetric group $S_n$ (see, e.g., \cite{Yamamura}, \cite{Kondo} for further details and applications). We prove both Theorems~\ref{polydisc2} and \ref{polydiscmax2} with power-saving error terms. More precisely, let ${V_n^{\textrm{mon}}}({\mathbb Z})$ denote the subset of ${\mathbb Z}[x]$ consisting of all monic integer polynomials of degree $n$. Then it is easy to see that \begin{equation*} {\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}): H(f)<X \}} = 2^nX^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{ 1}}). 
\end{equation*} We prove \begin{equation}\label{errorterms} {\begin{array}{ccl} \displaystyle {\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}) : H(f)<X \mbox{ and $\Delta(f)$ squarefree}\}} &\!\!=\!\!& \lambda_n\cdot 2^n{X^\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O_\varepsilon(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{\textstyle \frac15}+\varepsilon});\\[.25in] \displaystyle {\#\{f\in {V_n^{\textrm{mon}}}({\mathbb Z}) : H(f)<X \mbox{ and ${\mathbb Z}[x]/(f(x))$ maximal}\}}&\!\!=\!\!& {\displaystyle\frac{6}{\pi^2}}\cdot 2^nX^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}} + O_\varepsilon(X^{\frac{\scriptstyle n(n+1)}{\scriptstyle2}-{\textstyle \frac15}+\varepsilon}) \end{array}} \end{equation} for $n>1$. These asymptotics imply Theorems~\ref{polydisc2} and \ref{polydiscmax2}. Since it is known that the number of reducible monic polynomials of a given degree~$n$ is of a strictly smaller order of magnitude than the error terms above (see Proposition \ref{propredboundall}), it does not matter whether we require $f$ to be irreducible in the above asymptotic formulae. Recall that a number field $K$ is called {\it monogenic} if its ring of integers is generated over~${\mathbb Z}$ by one element, i.e., if ${\mathbb Z}[\theta]$ gives the maximal order of $K$ for some $\theta\in K$. As a further application of our methods, we obtain the following corollary to Theorem~\ref{polydisc2}: \begin{corollary}\label{monogenic} Let $n>1$. The number of isomorphism classes of number fields of degree~$n$ and absolute discriminant less than $X$ that are monogenic and have associated Galois group $S_n$ is $\gg X^{1/2+1/n}$. \end{corollary} We note that our lower bound for the number of monogenic $S_n$-number fields of degree $n$ improves slightly the best-known lower bounds for the number of $S_n$-number fields of degree $n$, due to Ellenberg and Venkatesh~\cite[Theorem 1.1]{EV}, by simply forgetting the monogenicity condition in Corollary~\ref{monogenic}. 
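The count of polynomials below a given height is elementary: $H(f) < X$ means $|a_i| < X^i$ for each $i$, so for integer $X$ there are exactly $\prod_{i=1}^n (2X^i - 1)$ such polynomials, which matches the main term $2^n X^{n(n+1)/2}$ up to lower-order terms. A quick check (in notation of our own choosing):

```python
# exact count of monic f = x^n + a_1 x^{n-1} + ... + a_n with H(f) < X,
# i.e. |a_i| < X^i; for integer X each a_i takes 2*X^i - 1 values
def count_below_height(n, X):
    total = 1
    for i in range(1, n + 1):
        total *= 2 * X**i - 1
    return total

n, X = 3, 20
exact = count_below_height(n, X)
main_term = 2**n * X ** (n * (n + 1) // 2)   # 2^n X^{n(n+1)/2}
ratio = exact / main_term
```

Here `ratio` is $\prod_i (1 - 1/(2X^i)) \approx 0.974$, consistent with an $O(X^{n(n+1)/2 - 1})$ error term.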
We conjecture that the exponent in our lower bound in Corollary~\ref{monogenic} for monogenic number fields of degree $n$ is optimal. As is illustrated by Corollary~\ref{monogenic}, Theorems~\ref{polydisc2} and \ref{polydiscmax2} give a powerful method to produce number fields of a given degree having given properties or invariants. We give one further example of interest. Given a number field $K$ of degree $n$ with $r$ real embeddings $\xi_1,\dots,\xi_r$ and $s$ complex conjugate pairs of complex embeddings $\xi_{r+1},\bar\xi_{r+1},\ldots,\xi_{r+s},\bar\xi_{r+s}$, the ring of integers $\mathcal O_K$ may naturally be viewed as a lattice in ${\mathbb R}^n$ via the map $x\mapsto (\xi_1(x),\ldots,\xi_{r+s}(x))\in {\mathbb R}^r\times{\mathbb C}^s\cong {\mathbb R}^n$. We may thus ask about the length of the shortest vector in this lattice generating $K$. In their final remark~\cite[Remark~3.3]{EV}, Ellenberg and Venkatesh conjecture that the number of number fields $K$ of degree $n$ whose shortest vector in ${\mathcal O}_K$ generating $K$ is of length less than~$Y$ is $\,\asymp Y^{(n-1)(n+2)/2}$. They prove an upper bound of this order of magnitude. We use Theorem~\ref{polydiscmax2} to prove also a lower bound of this size, thereby proving their conjecture: \begin{corollary}\label{shortvector} Let $n>1$. The number of isomorphism classes of number fields $K$ of degree~$n$ whose shortest vector in ${\mathcal O}_K$ generating $K$ has length less than $Y$ is $\,\asymp$ $Y^{(n-1)(n+2)/2}$. \end{corollary} Again, Corollary~\ref{shortvector} remains true even if we impose the condition that the associated Galois group is~$S_n$ (by using Theorem~\ref{polydisc2} instead of Theorem~\ref{polydiscmax2}). 
Finally, we remark that our methods allow the analogues of all of the above results to be proven with any finite set of local conditions imposed at finitely many places (including at infinity); the orders of magnitudes in these theorems are then seen to remain the same---with different (but easily computable in the cases of Theorems~\ref{polydisc2} and \ref{polydiscmax2}) positive constants---provided that no local conditions are imposed that force the set being counted to be empty (i.e., no local conditions are imposed at~$p$ in Theorem~\ref{polydisc2} that force $p^2$ to divide the discriminant, no local conditions are imposed at~$p$ in Theorem~\ref{polydiscmax2} that cause ${\mathbb Z}_p[x]/(f(x))$ to be non-maximal over~${\mathbb Z}_p$, and no local conditions are imposed at $p$ in Corollary~\ref{monogenic} that cause such number fields to be non-monogenic locally). \vspace{.1in} We now briefly describe our methods. It is easily seen that the desired densities in Theorems~\ref{polydisc2} and \ref{polydiscmax2}, if they exist, must be bounded above by the Euler products $\prod_p \lambda_n(p)$ and $\prod_p (1-1/p^2)$, respectively. The difficulty is to show that these Euler products are also the correct lower bounds. As is standard in sieve theory, to demonstrate the lower bound, a ``tail estimate'' is required to show that not too many discriminants of polynomials $f$ are divisible by $p^2$ when $p$ is large relative to the discriminant $\Delta(f)$ of $f$ (here, large means larger than $\Delta(f)^{1/(n-1)}$, say). For any prime $p$, and a monic integer polynomial $f$ of degree $n$ such that $p^2\mid \Delta(f)$, we say that $p^2$ {\it strongly divides} $\Delta(f)$ if $p^2\mid \Delta(f + pg)$ for any integer polynomial $g$ of degree $n$; otherwise, we say that $p^2$ {\it weakly divides} $\Delta(f)$. 
Then $p^2$ strongly divides $\Delta(f)$ if and only if $f$ modulo $p$ has at least two distinct multiple roots in $\bar{{\mathbb F}}_p$, or has a root in ${\mathbb F}_p$ of multiplicity at least 3; and $p^2$ weakly divides $\Delta(f)$ if $p^2\mid \Delta(f)$ but $f$ modulo $p$ has only one multiple root in ${\mathbb F}_p$ and this root is a simple double root. For any squarefree positive integer $m$, let ${\mathcal W}_m^{\rm {(1)}}$ (resp.\ ${\mathcal W}_m^{\rm {(2)}}$) denote the set of monic integer polynomials in $V^{\textrm{mon}}_n({\mathbb Z})$ whose discriminant is strongly divisible (resp.\ weakly divisible) by $p^2$ for every prime factor $p$ of $m$. Then we prove tail estimates for ${\mathcal W}_m^{\rm {(1)}}$ and ${\mathcal W}_m^{\rm {(2)}}$ separately, as follows. \begin{theorem}\label{thm:mainestimate} For any positive real number $M$ and any $\epsilon>0$, we have \vspace{-5pt}\begin{eqnarray*} \label{eq:equs} {\rm (a)}\quad \#\bigcup_{\substack{m>M\\ m\;\mathrm{ squarefree} }}\{f\in{\mathcal W}_m^{\rm {(1)}}:H(f)<X\}&=& O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O(X^{n(n-1)/2});\\[.075in] \label{equ1} {\rm (b)}\quad \#\bigcup_{\substack{m>M\\ m\;\mathrm{ squarefree} }}\{f\in{\mathcal W}_m^{\rm {(2)}}:H(f)<X\}&=& O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}), \end{eqnarray*} where the implied constants are independent of $M$ and $X$. \end{theorem} The power savings in the error terms above also have applications towards determining the distributions of low-lying zeros in families of Dedekind zeta functions of monogenic degree-$n$ fields; see \cite[\S5.2]{SST}. We prove the estimate in the strongly divisible case (a) of Theorem~\ref{thm:mainestimate} by geometric techniques, namely, a quantitative version of the Ekedahl sieve (\cite{Ek}, \cite[Theorem 3.3]{geosieve}). 
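For cubics, the strong/weak distinction can be made concrete with the classical discriminant formula. In the sketch below (the two sample polynomials are of our own choosing), $x^3$ has a triple root mod $3$, so $9$ divides the discriminant of every translate by $3g$, while $x^3 - x^2 + 9$ has a single simple double root mod $3$, and some translate has discriminant not divisible by $9$; since $\Delta(f + 3g) \bmod 9$ depends only on $g \bmod 3$, checking the $27$ residue classes of $g$ suffices.

```python
from itertools import product

def disc_cubic(a, b, c):
    """Discriminant of x^3 + a*x^2 + b*x + c."""
    return 18*a*b*c - 4*a**3*c + a*a*b*b - 4*b**3 - 27*c*c

# f(x) = x^3 has a triple root mod 3, so 9 divides the discriminant of
# every translate f + 3g: the divisibility by 9 is strong.
strong = all(disc_cubic(3*u, 3*v, 3*w) % 9 == 0
             for u, v, w in product(range(3), repeat=3))

# f(x) = x^3 - x^2 + 9 reduces to x^2(x - 1) mod 3, a single simple double
# root; here 9 | disc(f), yet 9 does not divide disc(f + 3g) for some g:
# the divisibility by 9 is weak.
base = disc_cubic(-1, 0, 9)
weak = base % 9 == 0 and any(
    disc_cubic(-1 + 3*u, 3*v, 9 + 3*w) % 9 != 0
    for u, v, w in product(range(3), repeat=3))
```

For instance, $\Delta(x^3 - x^2 + 9) = -2151 = -9 \cdot 239$, while $\Delta(x^3 - x^2 + 12) = -3840$ is not divisible by $9$.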
While the proof of \cite[Theorem~3.3]{geosieve} uses homogeneous heights, and considers the union over all primes $p>M$, the same proof also applies in our case of weighted homogeneous heights, and a union over all squarefree $m>M$. Since the last coefficient $a_n$ is in a larger range than the other coefficients, we in fact obtain a smaller error term than in \cite[Theorem~3.3]{geosieve}. The estimate in the weakly divisible case (b) of Theorem~\ref{thm:mainestimate} is considerably more difficult. Our main idea is to embed polynomials $f$, whose discriminant is {\it weakly} divisible by $p^2$, into a larger space that has more symmetry, such that the invariants under this symmetry are given exactly by the coefficients of $f$; moreover, we arrange for the image of $f$ in the bigger space to have discriminant {\it strongly} divisible by $p^2$. We then count in the bigger space. More precisely, we make use of the representation of $G={\rm SO}_n$ on the space $W=W_n$ of symmetric $n\times n$ matrices, as studied in \cite{BG2,SW}. We fix $A_0$ to be the $n\times n$ symmetric matrix with $1$'s on the anti-diagonal and $0$'s elsewhere. The group $G={\rm SO}(A_0)$ acts on $W$ via the action $g\cdot B=gBg^t$ for $g\in G$ and $B\in W$. Define the {\it invariant polynomial} of an element $B\in W$ by $$f_B(x) = (-1)^{n(n-1)/2}\det(A_0x - B).$$ Then $f_B$ is a monic polynomial of degree~$n$. It is known that the ring of polynomial invariants for the action of $G$ on $W$ is freely generated by the coefficients of the invariant polynomial. Define the {\it discriminant} $\Delta(B)$ and {\it height} $H(B)$ of an element $B\in W$ by $\Delta(B)=\Delta(f_B)$ and $H(B)=H(f_B)$. This representation of $G$ on $W$ was used in \cite{BG2,SW} to study 2-descent on the hyperelliptic curves $C:y^2=f_B(x)$. 
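As a small consistency check (ours, not part of the argument), one can verify symbolically for $n=3$ that the coefficients of $f_B$ are unchanged under a one-parameter unipotent subgroup of ${\rm SO}(A_0)$, in accordance with the coefficients of the invariant polynomial generating the ring of invariants.

```python
# Symbolic sanity check (ours) for n = 3 that the coefficients of the
# invariant polynomial f_B are unchanged along a unipotent subgroup of
# SO(A0), consistent with their being polynomial invariants.
import sympy as sp

x, a = sp.symbols('x a')
A0 = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
B = sp.Matrix(3, 3, lambda i, j: sp.Symbol('b%d%d' % (min(i, j) + 1, max(i, j) + 1)))

def f_inv(M):
    # f_M(x) = (-1)^{n(n-1)/2} det(A0*x - M), with n = 3
    return sp.expand((-1)**3 * (A0*x - M).det())

# u(a) preserves the anti-diagonal form: u * A0 * u^T = A0, det(u) = 1
u = sp.Matrix([[1, a, -a**2/2], [0, 1, -a], [0, 0, 1]])
in_group = (u*A0*u.T - A0).expand() == sp.zeros(3, 3)
invariant = sp.expand(f_inv(u*B*u.T) - f_inv(B)) == 0
```

Since $u A_0 u^t = A_0$, one has $A_0x - uBu^t = u(A_0x - B)u^t$, so the determinant (and hence $f_B$) is preserved; the code confirms this identity entry by entry.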
A key step of our proof of Theorem~\ref{thm:mainestimate}(b) is the construction, for every positive squarefree integer $m$, of a map \begin{equation*} \sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac14W({\mathbb Z}), \end{equation*} such that $f_{\sigma_m(f)}=f$ for every $f\in {\mathcal W}_m^{\rm {(2)}}$; here $\frac14W({\mathbb Z})\subset W({\mathbb Q})$ is the lattice of elements $B$ whose coefficients have denominators dividing $4$. In our construction, the image of $\sigma_m$ in fact lies in a special subspace $W_0$ of $W$; namely, if $n=2g+1$ is odd, then $W_0$ consists of symmetric matrices $B\in W$ whose top left $g\times g$ block is 0, and if $n=2g+2$ is even, then $W_0$ consists of symmetric matrices $B\in W$ whose top left $g\times (g+1)$ block is 0. We associate to any element of $W_0$ a further polynomial invariant which we call the $Q$-{\it invariant} (which is a relative invariant for the subgroup of ${\rm SO}(A_0)$ that preserves $W_0$). The significance of the $Q$-invariant is that, if the discriminant polynomial $\Delta$ is restricted to $W_0$, then it is not irreducible as a polynomial in the coordinates of $W_0$, but rather is divisible by the polynomial $Q^2$. Moreover, we show that for elements $B$ in the image of $\sigma_m$, we have $|Q(B)|=m$. Finally, {even though the discriminant of $f\in{\mathcal W}_m^{\rm {(2)}}$ is {\it weakly} divisible by $p^2$, the discriminant of its image $\sigma_m(f)$, when the discriminant is viewed as a polynomial on $W_0\cap \frac14W({\mathbb Z})$, is {\it strongly} divisible by $p^2$.} This is the key point of our construction. To obtain Theorem~\ref{thm:mainestimate}(b), it thus suffices to estimate the number of $G({\mathbb Z})$-equivalence classes of elements $B\in W_0\cap \frac14W({\mathbb Z})$ of height less than $X$ having $Q$-invariant larger than $M$.
This can be reduced to a geometry-of-numbers argument in the spirit of \cite{BG2,SW}, although the current count is more subtle in that we are counting certain elements in a cuspidal region of a fundamental domain for the action of $G({\mathbb Z})$ on $W({\mathbb R})$. The $G({\mathbb Q})$-orbits of elements $B\in W_0\cap W({\mathbb Q})$ are called {\it distinguished orbits} in \cite{BG2,SW}, as they correspond to the identity 2-Selmer elements of the Jacobians of the corresponding hyperelliptic curves $y^2=f_B(x)$ over ${\mathbb Q}$; these were not counted separately by the geometry-of-numbers methods of \cite{BG2,SW}, as these elements lie deeper in the cusps of the fundamental domains. We develop a method to count the desired elements in the cusp, following the arguments of \cite{BG2,SW} while using the invariance and algebraic properties of the $Q$-invariant polynomial. This yields Theorem~\ref{thm:mainestimate}(b), which then allows us to carry out the sieves required to obtain Theorems~\ref{polydisc2} and \ref{polydiscmax2}. Corollary~\ref{monogenic} can be deduced from Theorem~\ref{polydisc2} roughly as follows. Let $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ be a monic real polynomial of degree~$n$ and nonzero discriminant having $r$ real roots and $2s$ complex roots. Then ${\mathbb R}[x]/(g(x))$ is isomorphic to ${\mathbb R}^n\cong {\mathbb R}^r\times {\mathbb C}^s$ via its real and complex embeddings. Let $\theta$ denote the image of $x$ in ${\mathbb R}[x]/(g(x))$ and let $R_g$ denote the lattice formed by taking the ${\mathbb Z}$-span of $1,\theta,\ldots,\theta^{n-1}$. Suppose further that there exist monic integer polynomials $h_i$ of degree $i$ for $i=1,\ldots,n-1$ such that $1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ is the unique Minkowski-reduced basis of $R_g$; in this case, we say that the polynomial $g(x)$ is {\it strongly quasi-reduced}.
Note that if $g$ is an integer polynomial, then the lattice $R_g$ is simply the image of the ring ${\mathbb Z}[x]/(g(x))\subset {\mathbb R}[x]/(g(x))$ in ${\mathbb R}^n$ via its archimedean embeddings. We prove that, when ordered by height, 100\% of monic integer polynomials $g(x)$ are strongly quasi-reduced. We furthermore prove that two distinct strongly quasi-reduced integer polynomials $g(x)$ and $g^\ast(x)$ of degree~$n$ with vanishing $x^{n-1}$-term necessarily yield non-isomorphic rings $R_g$ and $R_{g^\ast}$. The proof of the positive density result of Theorem~\ref{polydisc2} then produces $\gg X^{1/2+1/n}$ strongly quasi-reduced monic integer polynomials $g(x)$ of degree~$n$ having vanishing $x^{n-1}$-term, squarefree discriminant, and height less than $X^{1/(n(n-1))}$. These therefore correspond to $\gg X^{1/2+1/n}$ non-isomorphic monogenic rings of integers in $S_n$-number fields of degree $n$ having absolute discriminant less than~$X$, and Corollary~\ref{monogenic} follows. A similar argument proves Corollary~\ref{shortvector}. Suppose $f(x)$ is a strongly quasi-reduced irreducible monic integer polynomial of degree $n$ with squarefree discriminant $\Delta(f)$. Elementary estimates show that if $H(f)<Y$, then $\|\theta\|\ll Y$, and so the shortest vector in the ring of integers generating the field also has length bounded by $O(Y)$. The above-mentioned result on the number of strongly quasi-reduced irreducible monic integer polynomials of degree $n$ with squarefree discriminant, vanishing $x^{n-1}$-coefficient, and height bounded by $Y$ then gives the desired lower bound of $\gg Y^{(n-1)(n+2)/2}.$ \vspace{.1in} This paper is organized as follows. In Section~\ref{sQ}, we collect some algebraic facts about the representation $2\otimes g\otimes(g+1)$ of ${\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ and we define the $Q$-invariant, which generates the ring of polynomial invariants for this action.
In Sections~\ref{sec:monicodd} and~\ref{sec:moniceven}, we then apply geometry-of-numbers techniques as described above to prove the critical estimates of Theorem~\ref{thm:mainestimate}. In Section~\ref{sec:sieve}, we show how our main theorems, Theorems \ref{polydisc2} and \ref{polydiscmax2}, can be deduced from Theorem~\ref{thm:mainestimate}. Finally, in Section~\ref{latticearg}, we prove Corollary~\ref{monogenic} on the number of monogenic $S_n$-number fields of degree~$n$ having bounded absolute discriminant, as well as Corollary~\ref{shortvector} on the number of rings of integers in number fields of degree $n$ whose shortest vector generating the number field is of bounded length. \section{The representation $V_g=2\otimes g\otimes(g+1)$ of $H_g={\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ and the $Q$-invariant}\label{sQ} In this section, we collect some algebraic facts about the representation $V_g=2\otimes g\otimes(g+1)$ of the group $H_g={\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$. This representation will also play an important role in the sequel. First, we claim that this representation is {\it prehomogeneous}, i.e., the action of ${\mathbb G}_m\times H_g$ on $V_g$ has a single Zariski open orbit. We prove this by induction on $g$. The assertion is clear for $g=1$, where the representation is that of ${\mathbb G}_m\times {\rm SL}_2\times {\rm SL}_2$ on $2\times 2$ matrices; the single relative invariant in this case is the determinant, and the open orbit consists of nonsingular matrices.
For higher $g$, we note that $V_g$ is a {\it castling transform} of $V_{g-1}$ in the sense of Sato and Kimura~\cite{SK}; namely, the orbits of ${\mathbb G}_m\times {\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g-1}$ on $2\times g \times (g-1)$ are in natural one-to-one correspondence with the orbits of ${\mathbb G}_m\times{\rm SL}_2\times{\rm SL}_g\times{\rm SL}_{g+1}$ on $2\times g\times (2g-(g-1))=2\times g\times(g+1)$, and under this correspondence, the open orbit maps to an open orbit (cf. \cite{SK}). Thus all the representations $V_g$ for the action of ${\mathbb G}_m\times H_g$ are prehomogeneous. Next, we may construct an invariant for the action of $H_g$ on $V_g$ (and thus a relative invariant for the action of ${\mathbb G}_m\times H_g$ on $V_g$) as follows. We write any $2\times g\times (g+1)$ matrix $v$ in $V_g$ as a pair $(A,B)$ of $g\times(g+1)$ matrices. Let $M_v(x,y)$ denote the vector of $g\times g$ minors of $Ax-By$, where $x$ and $y$ are indeterminates; in other words, the $i$-th coordinate of the vector $M_v(x,y)$ is given by $(-1)^{i-1}$ times the determinant of the matrix obtained by removing the $i$-th column of $Ax-By$. Then $M_v(x,y)$ is a vector of length $g+1$ consisting of binary forms of degree $g$ in $x$ and~$y$. Each binary form thus consists of $g+1$ coefficients. Taking the determinant of the resulting $(g+1)\times(g+1)$ matrix of coefficients of these $g+1$ binary forms in $M_v(x,y)$ then yields a polynomial $Q=Q(v)$ in the coordinates of $V_g$, invariant under the action of $H_g$. We call this polynomial the $Q$-{\it invariant}. It is irreducible and homogeneous of degree $g(g+1)$ in the coordinates of $V_g$, and generates the ring of polynomial invariants for the action of $H_g$ on $V_g$. The $Q$-invariant is also the {\it hyperdeterminant} of the $2\times g \times (g+1)$ matrix (cf.\ \cite[Theorem 3.18]{GKZ}). Note that castling transforms preserve stabilizers over any field. 
Since, for any field $k$, the generic stabilizer for the action of $H_1(k)$ on $V_1(k)$ is isomorphic to ${\rm SL}_2(k)$, it follows that this remains the generic stabilizer for the action of $H_g(k)$ on $V_g(k)$ for all $g\geq 1$. \section{A uniformity estimate for odd degree monic polynomials}\label{sec:monicodd} In this section, we prove the estimate of Theorem~\ref{thm:mainestimate}(b) when $n=2g+1$ is odd, for any $g\geq 1$. \subsection{Invariant theory for the fundamental representation: ${\rm SO}_n$ on the space $W$ of symmetric $n\times n$ matrices} Let $A_0$ denote the $n\times n$ symmetric matrix with $1$'s on the anti-diagonal and $0$'s elsewhere. The group $G={\rm SO}(A_0)$ acts on $W$ via the action \begin{equation*} \gamma\cdot B=\gamma B\gamma^t. \end{equation*} We recall some of the arithmetic invariant theory for the representation $W$ of $n\times n$ symmetric matrices of the split orthogonal group $G$; see \cite{BG2} for more details. The ring of polynomial invariants for the action of $G({\mathbb C})$ on $W({\mathbb C})$ is freely generated by the coefficients of the {\it invariant polynomial $f_B(x)$ of $B$}, defined by $$f_B(x):=(-1)^{g}\det(A_0x-B).$$ We define the {\it discriminant} $\Delta$ on $W$ by $\Delta(B)=\Delta(f_B)$, and the $G({\mathbb R})$-invariant {\it height} of elements in $W({\mathbb R})$ by $H(B)=H(f_B).$ Let $k$ be any field of characteristic not $2$. For a monic polynomial $f(x)\in k[x]$ of degree~$n$ such that $\Delta(f)\neq0$, let $C_f$ denote the smooth hyperelliptic curve $y^2=f(x)$ of genus $g$ and let $J_f$ denote the Jacobian of $C_f$. Then $C_f$ has a rational Weierstrass point at infinity. The stabilizer of an element $B\in W(k)$ with invariant polynomial $f(x)$ is naturally isomorphic to $J_f[2](k)$ by~\cite[Proposition~5.1]{BG2}, and hence has cardinality at most $\#J_f[2](\bar k)=2^{2g}$, where $\bar k$ denotes a separable closure of $k$. 
We say that the element (or $G(k)$-orbit of) $B\in W(k)$ is {\it distinguished} over $k$ if there exists a $g$-dimensional subspace defined over $k$ that is isotropic with respect to both $A_0$ and $B$. If $B$ is distinguished, then the set of these $g$-dimensional subspaces over $k$ is again in bijection with $J_f[2](k)$ by~\cite[Proposition~4.1]{BG2}, and so it too has cardinality at most $2^{2g}$. In fact, it is known (see \cite[Proposition~5.1]{BG2}) that the elements of $J_f[2](k)$ are in natural bijection with the even-degree factors of $f$ defined over $k$. (Note that the number of even-degree factors of $f$ over $\bar k$ is indeed $2^{2g}$.) In particular, if $f$ is irreducible over $k$, then the group $J_f[2](k)$ is trivial. Now let $W_0$ be the subspace of $W$ consisting of matrices whose top left $g\times g$ block is zero. Then elements $B$ in $W_0(k)$ with nonzero discriminant are all evidently distinguished since the $g$-dimensional subspace $Y_g$ spanned by the first $g$ basis vectors is isotropic with respect to both $A_0$ and $B$. Let $G_0$ denote the subgroup of $G$ consisting of elements $\gamma$ such that $\gamma^t$ preserves $Y_g$. Then $G_0$ acts on $W_0$. An element $\gamma\in G_0$ has the block matrix form \begin{equation}\label{eq:G_0} \gamma=\Bigl(\begin{array}{cc}\gamma_1 & 0\\ \delta & \gamma_2 \end{array}\Bigr)\in\Bigl(\begin{array}{cc}M_{g\times g} & 0\\ M_{(g+1)\times g} & M_{(g+1)\times (g+1)} \end{array}\Bigr), \end{equation} and so $\gamma\in G_0$ transforms the top right $g\times (g+1)$ block of an element $B\in W_0$ as follows: $$(\gamma\cdot B)^{\textrm{top}} = \gamma_1B^{\textrm{top}}\gamma_2^t,$$ where we use the superscript ``top'' to denote the top right $g\times (g+1)$ block of any given element in $W_0$. We may thus view $(A_0^{\textrm{top}},B^{\textrm{top}})$ as an element of the representation $V_g=2\times g\times (g+1)$ considered in Section \ref{sQ}. 
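The displayed block action can be confirmed symbolically; the following sketch (ours, for $g=1$, $n=3$) checks that any matrix of the stated block shape preserves $W_0$ and transforms the top right block as claimed. (Membership in ${\rm SO}(A_0)$ imposes further relations on the blocks, but those are not needed for this identity.)

```python
# Symbolic check (ours, g = 1, n = 3) of the block action: gamma of the
# displayed block-triangular shape preserves W_0 and satisfies
# (gamma . B)^top = gamma_1 * B^top * gamma_2^t.
import sympy as sp

g1, d1, d2, c1, c2 = sp.symbols('g1 d1 d2 c1 c2')
h11, h12, h21, h22 = sp.symbols('h11 h12 h21 h22')
e11, e12, e22 = sp.symbols('e11 e12 e22')

gamma = sp.Matrix([[g1, 0, 0],
                   [d1, h11, h12],
                   [d2, h21, h22]])
gamma2 = sp.Matrix([[h11, h12], [h21, h22]])
B = sp.Matrix([[0, c1, c2],
               [c1, e11, e12],
               [c2, e12, e22]])          # top left 1 x 1 block is zero

gB = (gamma*B*gamma.T).expand()
stays_in_W0 = sp.expand(gB[0, 0]) == 0
Btop = sp.Matrix([[c1, c2]])
top_rule = (gB[0, 1:] - g1*Btop*gamma2.T).expand() == sp.zeros(1, 2)
```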
In particular, we may define the $Q$-{\it invariant} of $B\in W_0$ to be the $Q$-invariant of $(A_0^{\textrm{top}},B^{\textrm{top}})$: \begin{equation}\label{eqQB} Q(B):=Q(A_0^{\textrm{top}},B^{\textrm{top}}). \end{equation} Then the $Q$-invariant is also a relative invariant for the action of $G_0$ on $W_0$, since for any $\gamma\in G_0$ expressed in the form \eqref{eq:G_0}, we have \begin{equation}\label{eq:weightG_0} Q(\gamma\cdot B) = \det(\gamma_1)Q(B). \end{equation} In fact, we may extend the definition of the $Q$-invariant to an even larger subset of $W({\mathbb Q})$ than $W_0({\mathbb Q})$. We have the following proposition. \begin{proposition}\label{prop:extendQ} Let $B\in W_0({\mathbb Q})$ be an element whose invariant polynomial $f(x)$ is irreducible over~${\mathbb Q}$. Then for every $B'\in W_0({\mathbb Q})$ such that $B'$ is $G({\mathbb Z})$-equivalent to $B$, we have $Q(B')=\pm Q(B)$. \end{proposition} \begin{proof} Suppose $B'=\gamma\cdot B$ with $\gamma\in G({\mathbb Z})$ and $B,B'\in W_0({\mathbb Q})$. Then $Y_g$ and $\gamma^t Y_g$ are both $g$-dimensional subspaces over ${\mathbb Q}$ isotropic with respect to both $A_0$ and $B$. Since $f$ is irreducible over ${\mathbb Q}$, we have that $J_f[2]({\mathbb Q})$ is trivial, and so these two subspaces must be the same. We conclude that $\gamma\in G_0({\mathbb Z})$, and thus $Q(\gamma\cdot B)=\pm Q(B)$ by~\eqref{eq:weightG_0}. \end{proof} We may thus define the $|Q|$-{\it invariant} for any element $B\in W({\mathbb Q})$ that is $G({\mathbb Z})$-equivalent to some element $B'\in W_0({\mathbb Q})$ and whose invariant polynomial is irreducible over ${\mathbb Q}$; indeed, we set $|Q|(B):=|Q(B')|$. By Proposition~\ref{prop:extendQ}, this definition of $|Q|(B)$ is independent of the choice of $B'$. Note that all such elements $B\in W({\mathbb Q})$ are {distinguished}. 
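The minor construction of the $Q$-invariant from Section~\ref{sQ} is concrete enough to implement directly; the following sketch (ours, using sympy) does so and performs two sanity checks: for $g=1$ the construction reduces to a $2\times2$ determinant (the relative invariant of the base case of the castling induction), and the result is unchanged under the ${\rm SL}_2$ substitution $(A,B)\mapsto(A+B,B)$.

```python
# A sketch (ours) of the minor construction of the Q-invariant: form the
# signed g x g minors of A*x - B*y, then take the determinant of the
# (g+1) x (g+1) matrix of binary-form coefficients.
import sympy as sp

x, y = sp.symbols('x y')

def Q_invariant(A, B):
    g = A.rows
    L = A*x - B*y
    forms = []
    for i in range(g + 1):
        cols = [c for c in range(g + 1) if c != i]
        forms.append(sp.expand((-1)**i * L[:, cols].det()))
    # row i: coefficients of x^(g-j) y^j in the i-th minor
    M = sp.Matrix(g + 1, g + 1,
                  lambda i, j: forms[i].coeff(x, g - j).coeff(y, j))
    return sp.expand(M.det())

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
Q1 = Q_invariant(sp.Matrix([[a1, a2]]), sp.Matrix([[b1, b2]]))   # g = 1

# g = 2: invariance under the SL_2 element (A, B) -> (A + B, B)
A2 = sp.Matrix(2, 3, lambda i, j: sp.Symbol('A%d%d' % (i, j)))
B2 = sp.Matrix(2, 3, lambda i, j: sp.Symbol('B%d%d' % (i, j)))
sl2_ok = sp.expand(Q_invariant(A2 + B2, B2) - Q_invariant(A2, B2)) == 0
```

For $g=1$ one finds $Q_1 = a_2b_1 - a_1b_2$, i.e., up to sign the determinant of the $2\times2$ matrix with rows $A$ and $B$.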
\subsection{Embedding ${\mathcal W}_m^{\rm {(2)}}$ into $\frac12W({\mathbb Z})$}\label{sembedodd} We begin by describing those monic integer polynomials in ${V_n^{\textrm{mon}}}({\mathbb Z})$ that lie in ${\mathcal W}_m^{\rm {(2)}}$, i.e., the monic integer polynomials that have discriminant weakly divisible by $p^2$ for all $p\mid m$. \begin{proposition} Let $m$ be a positive squarefree integer, and let $f$ be a monic integer polynomial whose discriminant is weakly divisible by $p^2$ for all $p\mid m$. Then there exists an integer $\ell$ such that $f(x+\ell )$ has the form \begin{equation} f(x+\ell ) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x + m^2c_n \end{equation} for some integers $c_1,\ldots,c_n$. \end{proposition} \begin{proof} Since $m$ is squarefree, by the Chinese Remainder Theorem it suffices to prove the assertion in the case that $m=p$ is prime. Since $p$ divides the discriminant of~$f$, the reduction of $f$ modulo~$p$ must have a repeated factor $h(x)^e$ for some polynomial $h\in {\mathbb F}_p[x]$ and some integer $e\geq2$. As the discriminant of $f$ is not strongly divisible by $p^2$, we see that $h$ is linear and $e=2$. By replacing $f(x)$ by $f(x+\ell )$ for some integer $\ell$, if necessary, we may assume that the repeated factor is $x^2$, i.e., we may assume that the constant coefficient $c_n$ as well as the coefficient $c_{n-1}$ of $x$ are both multiples of~$p$. Since the discriminant of a monic polynomial is, up to sign, the resultant---$\Delta(f)=\pm\,{\textrm{Res}}(f(x),f'(x))$---it follows that there exist polynomials $\Delta_1,\Delta_2,\Delta_3\in{\mathbb Z}[c_1,\ldots,c_n]$ such that \begin{equation}\Delta(f) = c_n\Delta_1 + c_{n-1}^2\Delta_2 + c_{n-1}c_n\Delta_3. \end{equation} Since $p$ divides $c_{n-1}$ and $c_n$, we see that $p^2\mid \Delta(f)$ if and only if $p^2\mid c_n\Delta_1$.
If $p^2$ does not divide $c_n$, then $p$ divides $\Delta_1$, and continues to divide it even if one modifies each $c_i$ by a multiple of $p$, and so $p^2$ strongly divides $\Delta(f)$ in that case, a contradiction. Therefore, we must have that $p^2\mid c_n$. \end{proof} Having identified the monic integer polynomials whose discriminants are weakly divisible by $p^2$ for all $p\mid m$, our aim now is to map these polynomials into a larger space, so that: 1) there is a discriminant polynomial defined on the larger space; 2) the map is discriminant-preserving; and 3) the images of these polynomials have discriminant {\it strongly divisible by $p^2$} for all $p\mid m$. To this end, consider the matrix \begin{equation}\label{mat1} B_m(c_1,\ldots,c_n) = \left(\!\begin{array}{ccccccccc}&&&&&&&m&\!0\\[.1in]&&&&&&\iddots&\iddots& \\[.15in]&&&&&\,1\,&\,0\,&&\\[.125in] &&&&\,1\,&0&&&\\[.125in] &&&1&-c_1&\!\!-c_2/2\!\!&&& \\[.15in] &&1&\;\,0\;\,&\!\!-c_2/2\!\!&-c_3&\!\!-c_4/2\!\!&&\\[.045in]&\iddots&\;\,\,0\;\,\,&&&\!\!-c_4/2\!\!&-c_5&\ddots&\\[.025in] \;m\;&\,\,\,\iddots\,\,\,&&&&&\ddots&\ddots&\!\!\!-c_{n-1}/2\!\!\! \\[.105in] \,0\,&&&&&&&\!\!\!-c_{n-1}/2\!\!\!&-c_n \end{array}\right) \end{equation} in $\frac12 W_0({\mathbb Z})$. It follows from a direct computation that $$f_{B_m(c_1,\ldots,c_n)}(x) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x + m^2c_n.$$ We set $\sigma_m(f) := B_m(c_1,\ldots,c_n) + \ell A_0\in \frac12 W_0({\mathbb Z})$. Then we have $f_{\sigma_m(f)}=f.$ Another direct computation shows that $|Q(B_m(c_1,\ldots,c_n))|=m$. Since the $Q$-invariant on $2\otimes g\otimes (g+1)$ is ${\rm SL}_2$-invariant, we conclude that $$|Q({\sigma_m(f)})|=m.$$ Finally, we note that for all odd primes $p\mid m$, we have that $p^2$ weakly divides $\Delta(f)$, and $p^2$ weakly divides $\Delta(\sigma_m(f))$ as an element of $\frac12W({\mathbb Z})$, {\it but $p^2$ strongly divides $\Delta(\sigma_m(f))$ when $\sigma_m(f)$ is viewed as an element of $\frac12W_0({\mathbb Z})$}!
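The direct computations just invoked can be reproduced symbolically in the smallest case. The following sketch (ours, for $n=3$, $g=1$, using sympy) verifies the invariant polynomial of $B_m$, its $Q$-invariant, and the $m^2$-divisibility of the discriminant of polynomials of this shape.

```python
# Symbolic verification (ours, n = 3) of the two direct computations for
# B_m, plus the m^2-divisibility of disc(x^3 + c1 x^2 + m c2 x + m^2 c3).
import sympy as sp

x, m, c1, c2, c3 = sp.symbols('x m c1 c2 c3')
A0 = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
half = sp.Rational(1, 2)
Bm = sp.Matrix([[0,        m,        0],
                [m,      -c1, -half*c2],
                [0, -half*c2,      -c3]])

# invariant polynomial: f_B(x) = (-1)^{n(n-1)/2} det(A0*x - B), n = 3
fB = sp.expand((-1)**3 * (A0*x - Bm).det())
target = x**3 + c1*x**2 + m*c2*x + m**2*c3

# For g = 1 the Q-invariant of (A0^top, B^top) is the 2 x 2 determinant
# of minor coefficients; with A0^top = [0, 1] and B^top = [m, 0] it is m.
Atop, Btop = A0[0, 1:], Bm[0, 1:]
Q = Atop[0, 1]*Btop[0, 0] - Atop[0, 0]*Btop[0, 1]

# disc(target), as a polynomial in m, has no constant or linear term,
# hence is divisible by m^2.
disc_expr = sp.expand(sp.discriminant(target, x))
m2_divides = disc_expr.coeff(m, 0) == 0 and disc_expr.coeff(m, 1) == 0
```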
We have proven the following theorem. \begin{theorem}\label{keymap} Let $m$ be a positive squarefree integer. There exists a map $\sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac12W_0({\mathbb Z})$ such that $f_{\sigma_m(f)}=f$ for every $f\in {\mathcal W}_m^{\rm {(2)}}$ and, furthermore, $p^2$ strongly divides $\Delta(\sigma_m(f))$ for all $p\mid m$. In~addition, elements in the image of $\sigma_m$ have $|Q|$-invariant equal to $m$. \end{theorem} Let ${\mathcal L}$ be the set of elements $v\in \frac12W({\mathbb Z})$ satisfying the following conditions: $v$ is $G({\mathbb Z})$-equivalent to some element in $\frac12W_0({\mathbb Z})$ and the invariant polynomial of $v$ is irreducible over ${\mathbb Q}$. Then by the remark following Proposition~\ref{prop:extendQ}, we may view $|Q|$ as a function also on ${\mathcal L}$. Using ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ to denote the set of irreducible polynomials in ${\mathcal W}_m^{\rm {(2)}}$, we then have the following immediate consequence of Theorem \ref{keymap}: \begin{theorem}\label{keymaporbit} Let $m$ be a positive squarefree integer. There exists a map $$\bar{\sigma}_m:{\mathcal W}_m^{{\rm {(2)}},{\rm irr}}\to G({\mathbb Z})\backslash{\mathcal L}$$ such that $f_{\bar{\sigma}_m(f)}=f$ for every $f\in {\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$. Moreover, for every element $B$ in the $G({\mathbb Z})$-orbit of an element in the image of $\bar{\sigma}_m$, we have $|Q|(B)=m$. \end{theorem} It is well known that the number of reducible monic integer polynomials having height less than $X$ is of a strictly smaller order of magnitude than the total number of such polynomials (see, e.g., Proposition~\ref{propredboundall}). 
Thus, for our purposes of proving Theorem~\ref{thm:mainestimate}(b), it will suffice to count elements in ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ of height less than $X$ over all $m>M$, which by Theorem~\ref{keymaporbit} we may do by counting these special $G({\mathbb Z})$-orbits on ${\mathcal L}\subset \frac12W({\mathbb Z})$ having height less than $X$ and $|Q|$-invariant greater than $M$. More precisely, let $N({\mathcal L};M;X)$ denote the number of $G({\mathbb Z})$-equivalence classes of elements in ${\mathcal L}$ whose $|Q|$-invariant is greater than $M$ and whose height is less than $X$. Then, by Theorem~\ref{keymaporbit}, to obtain an upper bound for the left hand side in Theorem~\ref{thm:mainestimate}(b), it suffices to obtain the same upper bound for $N({\mathcal L};M;X)$. On the other hand, we may estimate the number of orbits counted by $N({\mathcal L}; M;X)$ using the averaging method as utilized in~\cite[\S3.1]{BG2}. Namely, we construct fundamental domains for the action of $G({\mathbb Z})$ on $W({\mathbb R})$ using {\it Siegel sets}, and then count the number of points in these fundamental domains that are contained in ${\mathcal L}$. We describe the coordinates on $W({\mathbb R})$ and $G({\mathbb R})$ needed to describe these fundamental domains explicitly in Section~\ref{scoeffodd}. In Section~\ref{sgomodd}, we then describe the integral that must be evaluated in order to estimate $N({\mathcal L};M;X)$, as per the counting method of \cite[\S3.1]{BG2}, and finally we evaluate this integral. This will complete the proof of Theorem~\ref{thm:mainestimate}(b) in the case of odd integers~$n$. \subsection{Coordinate systems on $G({\mathbb R})$}\label{scoeffodd} In this subsection, we describe a coordinate system on the group $G({\mathbb R})$. 
Let us write the Iwasawa decomposition of $G({\mathbb R})$ as $$ G({\mathbb R})=N({\mathbb R})TK, $$ where $N$ is a unipotent group, $K$ is compact, and $T$ is the split torus of $G$ given by \begin{equation*} T= \left\{\left(\begin{array}{ccccccc} t_1^{-1}&&&&&&\\ &\ddots &&&&& \\ && t_{g}^{-1} &&&&\\ &&& 1 &&&\\ &&&& t_g &&\\ &&&&&\ddots & \\ & &&&&& t_{1} \end{array}\right):t_1,\ldots,t_g\in{\mathbb R}_{>0}\right\}. \end{equation*} We may also make the following change of variables. For $1\leq i\leq g-1$, set $s_i$ to be $$ s_i=t_i/t_{i+1}, $$ and set $s_g=t_g$. It follows that for $1\leq i\leq g$, we have \begin{equation*} t_i=\prod_{j=i}^g s_j. \end{equation*} We denote an element of $T$ with coordinates $t_i$ (resp.\ $s_i$) by $(t)$ (resp.\ $(s)$). The Haar measure on $G({\mathbb R})$ is given by $$ dg=dn\,H(s)d^\times s\,dk, $$ where $dn$ is Haar measure on the unipotent group $N({\mathbb R})$, $dk$ is Haar measure on the compact group~$K$, $d^\times s$ is given by $$ d^\times s:=\prod_{i=1}^g\frac{ds_i}{s_i}, $$ and \begin{equation}\label{eqhaarodd} H(s)=\prod_{k=1}^g s_k^{k^2-2kg}; \end{equation} see \cite[(10.7)]{BG2}. We denote the coordinates on $W$ by $b_{ij}$, for $1\leq i\leq j\leq n$. These coordinates are eigenvectors for the action of $T$ on $W^*$, the dual of $W$. Denote the $T$-weight of a coordinate $\alpha$ on $W$, or more generally a product $\alpha$ of powers of such coordinates, by $w(\alpha)$. An elementary computation shows that \begin{equation}\label{wbij} w(b_{ij})=\left\{ \begin{array}{rcl} t_i^{-1}t_j^{-1} &\mbox{ if }& i,j\leq g\\ t_i^{-1} &\mbox{ if }& i\leq g,\;j=g+1\\ t_i^{-1}t_{n-j+1} &\mbox{ if }& i\leq g,\; j>g+1\\ 1 &\mbox{ if }& i=j=g+1\\ t_{n-j+1} &\mbox{ if }& i=g+1,\;j>g+1\\ t_{n-i+1}t_{n-j+1} &\mbox{ if }& i,j>g+1. \end{array} \right. \end{equation} We may also compute the weight of the invariant $Q$. The polynomial $Q$ is homogeneous of degree $g(g+1)/2$ in the coefficients of $W_0$.
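The weight table \eqref{wbij} can be checked mechanically in a small case. The following sketch (ours, for $n=5$, $g=2$, using sympy) conjugates a symbolic symmetric matrix by the torus element $\mathrm{diag}(t_1^{-1},t_2^{-1},1,t_2,t_1)$ and compares the scaling of each entry against the table.

```python
# Check (ours, n = 5, g = 2) that conjugation by the torus element
# diag(t1^-1, t2^-1, 1, t2, t1) scales b_ij by the tabulated weight.
import sympy as sp

g, n = 2, 5
t1, t2 = sp.symbols('t1 t2', positive=True)
t = [t1, t2]
d = [1/t1, 1/t2, sp.Integer(1), t2, t1]
B = sp.Matrix(n, n, lambda i, j: sp.Symbol('b%d%d' % (min(i, j) + 1, max(i, j) + 1)))
T = sp.diag(*d)
TB = T*B*T.T

def w(i, j):
    # the weight w(b_ij) from the displayed case analysis (1-indexed)
    if i <= g and j <= g:
        return 1/(t[i-1]*t[j-1])
    if i <= g and j == g + 1:
        return 1/t[i-1]
    if i <= g and j > g + 1:
        return t[n-j]/t[i-1]          # t_{n-j+1}/t_i
    if i == g + 1 and j == g + 1:
        return sp.Integer(1)
    if i == g + 1 and j > g + 1:
        return t[n-j]
    return t[n-i]*t[n-j]

table_ok = all(sp.simplify(TB[i-1, j-1] - w(i, j)*B[i-1, j-1]) == 0
               for i in range(1, n + 1) for j in range(i, n + 1))
```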
We view the torus $T$ as sitting inside $G_0$. Then by \eqref{eq:weightG_0}, the polynomial $Q$ has a well-defined weight, given by \begin{equation}\label{eqQweight} w(Q)=\prod_{k=1}^gt_k^{-1}=\prod_{k=1}^gs_k^{-k}. \end{equation} \subsection{Proof of Theorem~\ref{thm:mainestimate}(b) for odd $n$}\label{sgomodd} Let ${\mathcal F}$ be a fundamental set for the action of $G({\mathbb Z})$ on $G({\mathbb R})$ that is contained in a {\it Siegel set}, i.e., contained in $N'T'K$, where $N'$ consists of elements in $N({\mathbb R})$ whose coefficients are absolutely bounded and $T'\subset T$ consists of elements $(s)\in T$ with $s_i\geq c$ for some positive constant $c$. Let~$R$ be a bounded fundamental set for the action of $G({\mathbb R})$ on the set of elements in $W({\mathbb R})$ having nonzero discriminant and height less than $1$; such a set $R$ was constructed in \cite[\S9.1]{BG2}. Then for every $h\in G({\mathbb R})$, we have \begin{equation}\label{eqoddfundv} N({\mathcal L};M;X)\ll \#\{B\in (({\mathcal F} h)\cdot (XR))\cap{\mathcal L}:|Q|(B)>M\}, \end{equation} since ${\mathcal F} h$ remains a fundamental domain for the action of $G({\mathbb Z})$ on $G({\mathbb R})$, and so $({\mathcal F} h)\cdot (XR)$ (when viewed as a multiset) is the union of a bounded number (namely, between $1$ and $2^{2g}$) of fundamental domains for the action of $G({\mathbb Z})$ on the elements in $W({\mathbb R})$ having height bounded by $X$. Let $G_0$ be a compact left $K$-invariant set in $G({\mathbb R})$ which is the closure of a nonempty open set. Averaging \eqref{eqoddfundv} over $h\in G_0$ and exchanging the order of integration as in \cite[\S10.1]{BG2}, we obtain \begin{equation} N({\mathcal L};M;X)\ll \int_{\gamma\in{\mathcal F}} \#\{B\in((\gamma G_0)\cdot (XR))\cap{\mathcal L}:|Q|(B)>M\} d\gamma, \end{equation} where the implied constant depends only on $G_0$ and $R$. Let $W_{00}\subset W$ denote the space of symmetric matrices $B$ whose $(i,j)$-entries are 0 for $i+j<n$.
It was shown in \cite[Propositions~10.5 and 10.7]{BG2} that most lattice points in the fundamental domains $({\mathcal F} h)\cdot (XR)$ that are distinguished lie in $W_0$ and in fact lie in $W_{00}$. The reason for this is that in the main bodies of these fundamental domains, we expect a negligible number of distinguished elements (e.g., because each distinguished element will be distinguished $p$-adically as well, which happens with $p$-adic density bounded above by some constant $\kappa_n$ strictly less than $1$, and $\prod_p \kappa_n=0$). Meanwhile, in the cuspidal regions of these fundamental domains, the values of the $s_i$ become very large, yielding many integral points lying in $W_0$ and in fact in $W_{00}$ (the top left entries of $B$ must vanish for integral points in $(\gamma G_0)\cdot (XR)$ when the $s_i$ are large, as these top left entries have negative powers of $s_i$ as weights). These arguments from \cite{BG2} can be used in an identical manner to show that the number of points in our fundamental domains lying in ${\mathcal L}$ but not in $\frac12W_{00}({\mathbb Z})$ is negligible. \begin{proposition}\label{propodd} We have \begin{equation*} \displaystyle\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\mathcal L}\setminus {\textstyle\frac12} W_{00}({\mathbb Z}))\}d\gamma=O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}). \end{equation*} \end{proposition} \begin{proof} First, we consider elements $B\in {\mathcal L}$ such that $b_{11}\neq 0$. Since $B$ is distinguished in $W({\mathbb Q})$, it is also distinguished as an element of $W({\mathbb Q}_p)$, which occurs in $W({\mathbb Z})$ with $p$-adic density at most $1-\frac{n}{2n+1}+O(\frac{1}{p})$. Since $\prod_p(1-\frac{n}{2n+1}+O(\frac{1}{p}))=0$, we thus obtain as in \cite[Proof of Proposition~10.7]{BG2} that \begin{equation}\label{selapp} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}:b_{11}\neq 0\}d\gamma=o(X^{n(n+1)/2}).
\end{equation} An application of the Selberg sieve exactly as in \cite{ShTs} can be used to improve the right hand side of~(\ref{selapp}) to $O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$. Meanwhile, as already mentioned above, \cite[Proof of Proposition~10.5]{BG2} immediately gives \begin{equation*} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\textstyle\frac12} W({\mathbb Z})\setminus {\textstyle\frac12} W_{00}({\mathbb Z})):b_{11}=0\}d\gamma=O_\epsilon(X^{n(n+1)/2-1+\epsilon}). \end{equation*} Since ${\mathcal L}$ is contained in $\frac12W({\mathbb Z})$, this completes the proof of the proposition. \end{proof} Proposition~\ref{propodd} shows that the number of points in ${\mathcal L}$ in our fundamental domains outside $W_{00}$ is negligible (even without any condition on the $Q$-invariant!). It remains to estimate the number of points in our fundamental domains that lie in ${\mathcal L}\cap \frac12W_{00}({\mathbb Z})$ and which have $Q$-invariant larger than $M$. By \cite[Proof of Proposition 10.5]{BG2}, the total number of such points without any condition on the $Q$-invariant is $O(X^{n(n+1)/2})$. Thus, to obtain a saving, we must use the condition that the $Q$-invariant is larger than $M$. We accomplish this via two observations. First, as already noted above, if $\gamma\in{\mathcal F}$ has Iwasawa coordinates $(n,(s_i)_i,k)$, then the integral points in $((\gamma G_0)\cdot (XR))\cap \frac12W_{00}({\mathbb Z})$ with irreducible invariant polynomial occur predominantly when the coordinates $s_i$ are large. On the other hand, since the weight of the $Q$-invariant is a product of negative powers of $s_i$, the $Q$-invariants of such points in $((\gamma G_0)\cdot (XR))\cap \frac12W_{00}({\mathbb Z})$ become large when the coordinates $s_i$ are small. The tension between these two requirements on integral points in $((\gamma G_0)\cdot (XR))\cap {\mathcal L}$ will yield the desired saving. 
\begin{proposition}\label{proplargeQbound} We have \begin{equation*} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma=O(\frac{1}{M}X^{n(n+1)/2}\log X). \end{equation*} \end{proposition} \begin{proof} Since $s_i\geq c$ for every $i$, there exists a compact subset $N''$ of $N({\mathbb R})$ containing $(t)^{-1}N'\,(t)$ for all $t\in T'$. Let $E$ be the pre-compact set $N''KG_0R$. Then we have \begin{eqnarray} &&\int_{\mathcal F} \#\{B\in((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma \nonumber \\ &\ll&\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s, \label{inttoest} \end{eqnarray} where $H(s)$ is defined in \eqref{eqhaarodd}. To estimate the integral in (\ref{inttoest}), we note first that the $(i,j)$-entry of any element of $(s)\cdot (XE)$ is bounded by $Xw(b_{ij}).$ Now, by \cite[Lemma~10.3]{BG2}, if an element in $\frac12 W_{00}({\mathbb Z})$ has $(i,j)$-coordinate $0$ for some $i+j=n$, then the element has discriminant $0$ and hence is not in ${\mathcal L}$. Since the weight of $b_{i,n-i}$ is $s_i^{-1}$, to count points in ${\mathcal L}$ it suffices to integrate only in the region where $s_i\ll X$ for all $i$, so that it is possible for an element of ${\mathcal L}\cap (s)\cdot(XE)$ to have nonzero $(i,n-i)$-entry. Furthermore, it suffices to integrate only in the region where $X^{g(g+1)/2}w(Q) \gg M$, since the $Q$-invariant has weight $w(Q)$ and is homogeneous of degree $g(g+1)/2$. Let $S$ denote the set of coordinates of $W_{00}$, i.e., $S=\{b_{ij}:i+j\geq n\}$. For $(s)$ in the range $1\ll s_i\ll X$, we have $Xw(\alpha)\gg 1$ for all $\alpha\in S$; thus the number of lattice points in $(s)\cdot (XE)$ for $(s)$ in this range is $\ll \prod_{\alpha\in S}(Xw(\alpha))$. 
Therefore, we have \begin{eqnarray*} &&\displaystyle\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac12} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s \\ &\ll&\displaystyle\int_{1\ll s_i\ll X,\,\,X^{g(g+1)/2}w(Q) \gg M} \prod_{\alpha\in S}\bigl(Xw(\alpha)\bigr)H(s)d^\times s \\ &\ll&\displaystyle\int_{1\ll s_i\ll X,\,\,X^{g(g+1)/2}w(Q) \gg M} X^{n(n+1)/2-g^2}\prod_{k=1}^gs_k^{2k-1}d^\times s \\ &\ll&\displaystyle\frac{1}{M}\int_{s_i=1}^X X^{n(n+1)/2-g^2+g(g+1)/2}w(Q)\prod_{k=1}^gs_k^{2k-1}d^\times s \\ &\ll&\displaystyle\frac{1}{M}\int_{s_i=1}^X X^{n(n+1)/2-g(g-1)/2}\prod_{k=1}^gs_k^{k-1}d^\times s \\ &\ll&\displaystyle\frac{1}{M}X^{n(n+1)/2}\log(X), \end{eqnarray*} where the second inequality follows from the definition \eqref{eqhaarodd} of $H(s)$ and the computation (\ref{wbij}) of the weights of the coordinates $b_{ij}$, the third inequality follows from the fact that $X^{g(g+1)/2}w(Q) \gg M$, the fourth inequality follows from the computation of the weight of $Q$ in \eqref{eqQweight}, and the $\log X$ factor comes from the integral over $s_1$. \end{proof} The estimate in Theorem \ref{thm:mainestimate}(b) for odd $n$ now follows from Theorem \ref{keymaporbit} and Propositions~\ref{propodd} and~\ref{proplargeQbound}, in conjunction with the bound on the number of reducible polynomials proved in Proposition~\ref{propredboundall}. \section{A uniformity estimate for even degree monic polynomials}\label{sec:moniceven} In this section, which is structured similarly to Section~\ref{sec:monicodd}, we prove the estimate of Theorem~\ref{thm:mainestimate}(b) when $n=2g+2$ is even, for any $g\geq 1$. \subsection{Invariant theory for the fundamental representation: ${\rm SO}_n$ on the space $W$ of symmetric $n\times n$ matrices} We recall some of the arithmetic invariant theory of the representation $W$ of $n\times n$ symmetric matrices of the (projective) split orthogonal group $G={\rm PSO}_n.$ See \cite{SW} for more details. 
Let $A_0$ denote the $n\times n$ symmetric matrix with $1$'s on the anti-diagonal and $0$'s elsewhere. The group ${\rm SO}(A_0)$ acts on $W$ via the action \begin{equation*} \gamma\cdot B=\gamma B\gamma^t. \end{equation*} The central $\mu_2$ acts trivially and so the action descends to an action of $G={\rm SO}(A_0)/\mu_2$. The ring of polynomial invariants over ${\mathbb C}$ is freely generated by the coefficients of the {\it invariant polynomial} $$f_B(x):=(-1)^{g+1}\det(A_0x-B).$$ We define the {\it discriminant} $\Delta$ and {\it height} $H$ on $W$ as the discriminant and height of the invariant polynomial. Let $k$ be a field of characteristic not $2$. For any monic polynomial $f(x)\in k[x]$ of degree~$n$ such that $\Delta(f)\neq0$, let $C_f$ denote the smooth hyperelliptic curve $y^2=f(x)$ of genus $g$ and let $J_f$ denote its Jacobian. Then $C_f$ has two rational non-Weierstrass points at infinity that are conjugate by the hyperelliptic involution. The stabilizer of an element $B\in W(k)$ with invariant polynomial $f(x)$ is isomorphic to $J_f[2](k)$ by~\cite[Proposition~2.33]{W}, and hence has cardinality at most $\#J_f[2](\bar k)=2^{2g}$, where $\bar k$ denotes a separable closure of $k$. We say that the element (or the $G(k)$-orbit of) $B\in W(k)$ is {\it distinguished} if there exists a flag $Y'\subset Y$ defined over $k$ where $Y$ is $(g+1)$-dimensional isotropic with respect to $A_0$ and $Y'$ is $g$-dimensional isotropic with respect to $B$. If $B$ is distinguished, then the set of these flags is in bijection with $J_f[2](k)$ by \cite[Proposition~2.32]{W}, and so it too has cardinality at most $2^{2g}$. In fact, it is known (see~\cite[Proposition~22]{BGWhyper}) that the elements of $J_f[2](k)$ are in natural bijection with the even degree factorizations of $f$ defined over $k$. (Note that the number of such factorizations of $f$ over $\bar k$ is indeed $2^{2g}$.) 
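As a concrete sanity check on these definitions (an illustrative aside, not part of the argument; all helper names below are ours), one can verify numerically that $f_B$ is monic of degree $n$, and that the number of even-size subsets of an $n$-element set of roots, taken up to complementation, is indeed $2^{2g}$:

```python
from itertools import combinations, permutations
from random import randint, seed

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (lowest degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def perm_sign(perm):
    # sign of a permutation via inversion count
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def invariant_poly(B):
    # f_B(x) = (-1)^(g+1) det(A_0 x - B) with n = 2g + 2, A_0 the anti-diagonal matrix
    n = len(B)
    g = (n - 2) // 2
    det = [0] * (n + 1)
    for perm in permutations(range(n)):
        term = [1]
        for i in range(n):
            a0 = 1 if i + perm[i] == n - 1 else 0   # (i, perm(i)) entry of A_0
            term = poly_mul(term, [-B[i][perm[i]], a0])
        term += [0] * (n + 1 - len(term))
        s = perm_sign(perm)
        det = [c + s * t for c, t in zip(det, term)]
    return [(-1) ** (g + 1) * c for c in det]

seed(0)
for n in (4, 6):  # f_B is monic of degree n for random symmetric integer B
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            B[i][j] = B[j][i] = randint(-5, 5)
    assert invariant_poly(B)[n] == 1

for g in (1, 2):  # even-size subsets of the n roots, up to complementation: 2^(2g) classes
    n = 2 * g + 2
    evens = [frozenset(c) for r in range(0, n + 1, 2) for c in combinations(range(n), r)]
    classes = {frozenset((S, frozenset(range(n)) - S)) for S in evens}
    assert len(classes) == 2 ** (2 * g)
```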
In particular, if $f$ is irreducible over $k$ and does not factor as $g(x)\bar g(x)$ over some quadratic extension of $k$, then the group $J_f[2](k)$ is trivial. Let $W_0$ be the subspace of $W$ consisting of matrices whose top left $g\times (g+1)$ block is zero. Then elements $B$ in $W_0(k)$ with nonzero discriminant are all distinguished since the $(g+1)$-dimensional subspace $Y_{g+1}$ spanned by the first $g+1$ basis vectors is isotropic with respect to $A_0$ and the $g$-dimensional subspace $Y_g\subset Y_{g+1}$ spanned by the first $g$ basis vectors is isotropic with respect to $B$. Let $G_0$ be the parabolic subgroup of $G$ consisting of elements $\gamma$ such that $\gamma^t$ preserves the flag $Y_g\subset Y_{g+1}$. Then $G_0$ acts on $W_0$. An element $\gamma\in G_0$ has the block matrix form \begin{equation}\label{eq:G_0even} \gamma=\left(\begin{array}{ccc}\gamma_1 & 0 & 0\\ \delta_1 & \alpha & 0\\ \delta_2 & \delta_3 & \gamma_2 \end{array}\right)\in\left(\begin{array}{ccc}M_{g\times g} & 0 & 0\\ M_{1\times g} & M_{1\times 1} & M_{1\times (g+1)}\\ M_{(g+1)\times g} & M_{(g+1)\times 1} & M_{(g+1)\times (g+1)} \end{array}\right), \end{equation} and so $\gamma\in G_0$ acts on the top right $g\times (g+1)$ block of an element $B\in W_0$ by $$\gamma.B^{\textrm{top}} = \gamma_1B^{\textrm{top}}\gamma_2^t,$$ where we use the superscript ``top'' to denote the top right $g\times (g+1)$ block of any given element of $W_0$. We may thus view $(A_0^{\textrm{top}},B^{\textrm{top}})$ as an element of the representation $V_g=2\times g\times (g+1)$ considered in Section \ref{sQ}. In particular, we may define the $Q$-{\it invariant} of $B\in W_0$ as the $Q$-invariant of $(A_0^{\textrm{top}},B^{\textrm{top}})$: \begin{equation}\label{eqQBeven} Q(B):=Q(A_0^{\textrm{top}},B^{\textrm{top}}). 
\end{equation} Then the $Q$-invariant is a relative invariant for the action of $G_0$ on $W_0$, i.e., for any $\gamma\in G_0$ in the form \eqref{eq:G_0even}, we have \begin{equation}\label{eq:weightG_0even} Q(\gamma.B) = \det(\gamma_1)^{g+1}\det(\gamma_2)^gQ(B) = \det(\gamma_1)\alpha^{-g}Q(B). \end{equation} In fact, we may extend the definition of the $Q$-invariant to an even larger subset of $W({\mathbb Q})$ than $W_0({\mathbb Q})$. We have the following proposition. \begin{proposition}\label{prop:extendQeven} Let $B\in W_0({\mathbb Q})$ be an element whose invariant polynomial $f(x)$ is irreducible over ${\mathbb Q}$ and, when $n\geq 4$, does not factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$. Then for every $B'\in W_0({\mathbb Q})$ such that $B'$ is $G({\mathbb Z})$-equivalent to $B$, we have $Q(B')=\pm Q(B)$. \end{proposition} \begin{proof} The assumption on the factorization property of $f(x)$ implies that $J_f[2]({\mathbb Q})$ is trivial. The proof is now identical to that of Proposition~\ref{prop:extendQ}. \end{proof} We may thus define the $|Q|$-{\it invariant} for any element $B\in W({\mathbb Q})$ that is $G({\mathbb Z})$-equivalent to some $B'\in W_0({\mathbb Q})$ and whose invariant polynomial is irreducible over ${\mathbb Q}$ and does not factor as $g(x)\bar{g}(x)$ over any quadratic extension of ${\mathbb Q}$; indeed, we set $|Q|(B):=|Q(B')|$. By Proposition~\ref{prop:extendQeven}, this definition of $|Q|(B)$ is independent of the choice of $B'$. We note again that all such elements $B\in W({\mathbb Q})$ are distinguished. \subsection{Embedding ${\mathcal W}_m^{\rm {(2)}}$ into $\frac14W({\mathbb Z})$}\label{sembedeven} Let $m$ be a positive squarefree integer and let $f$ be a monic integer polynomial whose discriminant is weakly divisible by $m^2$.
Then as proved in \S\ref{sembedodd}, there exists an integer $\ell$ such that $f(x+\ell)$ has the form $$f(x+\ell) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x + m^2c_n.$$ Consider the following matrix: \begin{equation}\label{mat2} B_m(c_1,\ldots,c_n) = \left(\!\!\!\!\!\begin{array}{cccccccccc}&&&&&&&&m&0\\[.065in]&&&&&&&\iddots&\:\;\iddots& \\[.025in]&&&&&&1&\;\;\iddots&&\\[.185in] &&&&&\,1&0&&&\\[.185in] &&&&\:\;1\;&-c_1/2&&&& \\[.175in]&&&1&\!\!-c_1/2\,&\!\!c_1^2/4\!-\!c_2\!\!&-c_3/2&&&\\[.175in]&&1&\;\;\;0\;\;\;&&-c_3/2&-c_4&\!\!\!-c_5/2\!\!\!&&\\[.085in]&\iddots&\;\;\:\iddots\:\;\;&&&&-c_5/2&-c_6&\ddots&\\[.0125in] \;\;\;m\;\;\:&\;\,\,\iddots&&&&&&\ddots&\ddots&\!\!-c_{n-1}/2\!\! \\[.125in] 0&&&&&&&&\!\!\!\!-c_{n-1}/2\!\!\!\!&-c_n \end{array}\!\right). \end{equation} It follows from a direct computation that $$f_{B_m(c_1,\ldots,c_n)}(x) = x^n + c_1x^{n-1} + \cdots + c_{n-2}x^2 + mc_{n-1}x + m^2c_n.$$ We set $\sigma_m(f) := B_m(c_1,\ldots,c_n) + \ell A_0\in \frac14 W({\mathbb Z})$. Then evidently $f_{\sigma_m(f)}=f.$ A direct computation again shows that $|Q(B_m(c_1,\ldots,c_n))|=m$. Since the $Q$-invariant on $2\otimes g\otimes (g+1)$ is ${\rm SL}_2$-invariant, we conclude that $$|Q(\sigma_m(f))|=m.$$ Finally, we note that for all odd primes $p\mid m$, we have that $p^2$ weakly divides $\Delta(f)$, and $p^2$ weakly divides $\Delta(\sigma_m(f))$ as an element of $\frac14W({\mathbb Z})$, {\it but $p^2$ strongly divides $\Delta(\sigma_m(f))$ as an element of $\frac14W_0({\mathbb Z})$}. We have proven the following theorem. \begin{theorem}\label{th:mapeven} Let $m$ be a positive squarefree integer. There exists a map $\sigma_m:{\mathcal W}_m^{\rm {(2)}}\to \frac14W({\mathbb Z})$ such that $f_{\sigma_m(f)}=f$ for every $f\in {\mathcal W}_m^{\rm {(2)}}$. Furthermore, $p^2$ strongly divides $\Delta(\sigma_m(f))$ for all $p\mid m$. In~addition, elements in the image of $\sigma_m$ have $|Q|$-invariant $m$.
\end{theorem} Let ${\mathcal L}$ be the set of elements $v\in \frac14W({\mathbb Z})$ that are $G({\mathbb Z})$-equivalent to some elements of $\frac14 W_0({\mathbb Z})$ and such that the invariant polynomial of $v$ is irreducible over ${\mathbb Q}$ and does not factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$. Then by the remark following Proposition~\ref{prop:extendQeven}, we may view $|Q|$ as a function also on ${\mathcal L}$. Let ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ denote the set of polynomials in ${\mathcal W}_m^{\rm {(2)}}$ that are irreducible over ${\mathbb Q}$ and do not factor as $g(x)\bar{g}(x)$ over any quadratic extension of ${\mathbb Q}$. Then we have the following immediate consequence of Theorem~\ref{th:mapeven}: \begin{theorem}\label{keymaporbiteven} Let $m$ be a positive squarefree integer. There exists a map $$\bar{\sigma}_m:{\mathcal W}_m^{{\rm {(2)}},{\rm irr}}\to G({\mathbb Z})\backslash{\mathcal L}$$ such that $f_{\bar{\sigma}_m(f)}=f$ for every $f\in {\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$. Furthermore, every element in every orbit in the image of $\bar{\sigma}_m$ has $|Q|$-invariant $m$. \end{theorem} It is known that the number of monic integer polynomials having height less than $X$ that are reducible or factor as $g(x)\bar{g}(x)$ over some quadratic extension of ${\mathbb Q}$ is of a strictly smaller order of magnitude than the total number of such polynomials (see, e.g., Proposition~\ref{propredboundall}). Thus to prove Theorem~\ref{thm:mainestimate}(b), it suffices to count the number of elements in ${\mathcal W}_m^{{\rm {(2)}},{\rm irr}}$ having height less than $X$, summed over all $m>M$, which, by Theorem~\ref{keymaporbiteven}, we may do by counting $G({\mathbb Z})$-orbits on ${\mathcal L}\subset \frac14W({\mathbb Z})$ having height less than $X$ and $|Q|$-invariant greater than $M$.
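As an illustrative consistency check on this family (not part of the proof; the helper functions below are our own), the coefficient shape $x^n+c_1x^{n-1}+\cdots+c_{n-2}x^2+mc_{n-1}x+m^2c_n$ forces $m^2\mid\Delta$. The following sketch verifies this for $n=4$ by computing discriminants exactly via the Sylvester resultant:

```python
from fractions import Fraction
from itertools import product

def det(M):
    # exact determinant of an integer matrix via Fraction-based elimination
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            sign = -sign
        d *= M[k][k]
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= factor * M[k][c]
    return int(sign * d)

def discriminant(f):
    # f: coefficients of a monic integer polynomial, highest degree first
    n = len(f) - 1
    fp = [(n - i) * f[i] for i in range(n)]     # coefficients of f'
    size = 2 * n - 1                            # Sylvester matrix of (f, f')
    S = [[0] * size for _ in range(size)]
    for i in range(n - 1):                      # n-1 shifted copies of f
        for j, c in enumerate(f):
            S[i][i + j] = c
    for i in range(n):                          # n shifted copies of f'
        for j, c in enumerate(fp):
            S[n - 1 + i][i + j] = c
    return (-1) ** (n * (n - 1) // 2) * det(S)  # Res(f, f') up to sign; lc = 1

for m, c1, c2, c3, c4 in product((3, 5), (-2, 1), (0, 2), (1, 3), (-1, 2)):
    D = discriminant([1, c1, c2, m * c3, m * m * c4])
    assert D % (m * m) == 0
```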
More precisely, let $N({\mathcal L};M;X)$ denote the number of $G({\mathbb Z})$-equivalence classes of elements in ${\mathcal L}$ whose $Q$-invariant is greater than $M$ and whose height is less than $X$. We obtain a bound for $N({\mathcal L};M;X)$ using the averaging method utilized in~\cite{SW}. The rest of this section is structured exactly as the last two subsections of Section 3: we describe coordinate systems for $W({\mathbb R})$ and $G({\mathbb R})$ in Section 4.3, and then bound the quantity $N({\mathcal L};M;X)$ in Section 4.4. This will complete the proof of Theorem~\ref{thm:mainestimate}(b) in the case of even integers~$n$. \subsection{Coordinate systems on $G({\mathbb R})$}\label{scoeffeven} In this subsection we describe a coordinate system on the group $G({\mathbb R})$. Let us write the Iwasawa decomposition of $G({\mathbb R})$ as $$ G({\mathbb R})=N({\mathbb R})TK, $$ where $N$ is a unipotent group, $K$ is compact, and $T$ is a split torus of $G$: \begin{equation*} T= \left\{\left(\begin{array}{ccccccc} t_1^{-1}&&&&&&\\ &\ddots &&&&& \\ && t_{g+1}^{-1} &&&&\\ &&&& \!\!\!\!t_{g+1} &&\\ &&&&&\!\!\!\!\ddots & \\ & &&&&& \!\!t_{1} \end{array}\right)\right\}. \end{equation*} We may also make the following change of variables. For $1\leq i\leq g$, define $s_i$ to be $$ s_i=t_i/t_{i+1}, $$ and let $s_{g+1}=t_gt_{g+1}$. We denote an element of $T$ with coordinates $t_i$ (resp.\ $s_i$) by $(t)$ (resp.\ $(s)$). The Haar measure on $G({\mathbb R})$ is given by $$ dg=dn\,H(s)d^\times s\,dk, $$ where $dn$ is Haar measure on the unipotent group $N({\mathbb R})$, $dk$ is Haar measure on the compact group~$K$, $d^\times s$ is given by $$ d^\times s:=\prod_{i=1}^{g+1}\frac{ds_i}{s_i}, $$ and $H(s)$ is given by \begin{equation}\label{eqhaareven} H(s)=\prod_{k=1}^{g-1} s_k^{k^2-2kg-k}\cdot (s_gs_{g+1})^{-g(g+1)/2}; \end{equation} see~\cite[(26)]{SW}.
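For concreteness (an aside, not part of the argument; the function names are ours), the change of variables $s_i=t_i/t_{i+1}$ for $1\leq i\leq g$ together with $s_{g+1}=t_gt_{g+1}$ is invertible on positive tuples, and products of the $t_i$ telescope into products of the $s_k$; for instance $(t_1\cdots t_g)^{-1}t_{g+1}^{g}=\prod_{k=1}^{g}s_k^{-k}$. A quick numerical check:

```python
from math import isclose, prod, sqrt
from random import random, seed

def to_s(t):
    # t = (t_1, ..., t_{g+1});  s_i = t_i / t_{i+1} for i <= g,  s_{g+1} = t_g * t_{g+1}
    g = len(t) - 1
    return [t[i] / t[i + 1] for i in range(g)] + [t[g - 1] * t[g]]

def to_t(s):
    # invert the change of variables on positive tuples
    g = len(s) - 1
    t = [0.0] * (g + 1)
    t[g] = sqrt(s[g] / s[g - 1])          # t_{g+1}
    t[g - 1] = sqrt(s[g - 1] * s[g])      # t_g
    for i in range(g - 2, -1, -1):
        t[i] = s[i] * t[i + 1]
    return t

seed(1)
for g in (1, 2, 3):
    t = [0.5 + random() for _ in range(g + 1)]
    s = to_s(t)
    assert all(isclose(a, b) for a, b in zip(t, to_t(s)))
    # telescoping identity: (t_1 ... t_g)^(-1) t_{g+1}^g = prod_k s_k^(-k)
    lhs = t[g] ** g / prod(t[:g])
    rhs = prod(s[k] ** (-(k + 1)) for k in range(g))
    assert isclose(lhs, rhs)
```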
As before, we denote the coordinates of $W$ by $b_{ij}$, for $1\leq i\leq j\leq n$, and we denote the $T$-weight of a coordinate $\alpha$ on $W$, or a product $\alpha$ of powers of such coordinates, by $w(\alpha)$. We compute the weights of the coefficients $b_{ij}$ to be \begin{equation}\label{wbij2} w(b_{ij})=\left\{ \begin{array}{rcl} t_i^{-1}t_j^{-1} &\mbox{ if }& i,j\leq g+1\\ t_i^{-1}t_{n-j+1} &\mbox{ if }& i\leq g+1,\; j>g+1\\ t_{n-i+1}t_{n-j+1} &\mbox{ if }& i,j>g+1. \end{array} \right. \end{equation} We end by computing the weight of $Q$. The polynomial $Q$ is homogeneous of degree $g(g+1)/2$ in the coefficients of $W_0$. We view the torus $T$ as sitting inside $G_0$. Then by \eqref{eq:weightG_0even}, the polynomial $Q$ has a well-defined weight and this weight is given by \begin{equation}\label{eqQweighteven} w(Q)=(t_1\cdots t_g)^{-1}t_{g+1}^g=\prod_{k=1}^g s_k^{-k}. \end{equation} \subsection{Proof of Theorem~\ref{thm:mainestimate} for even $n$}\label{sgomeven} Let ${\mathcal F}$ be a fundamental set for the left action of $G({\mathbb Z})$ on $G({\mathbb R})$ that is contained in a Siegel set, i.e., contained in $N'T'K$, where $N'$ consists of elements in $N({\mathbb R})$ whose coefficients are absolutely bounded and $T'\subset T$ consists of elements $(s)\in T$ with $s_i\geq c$ for some positive constant $c$. Let $R$ be a bounded fundamental set for the action of $G({\mathbb R})$ on elements of $W({\mathbb R})$ with nonzero discriminant and height less than $1$. Such a set $R\subset W({\mathbb R})$ was constructed in \cite[\S4.2]{SW}. As in \S3.4, we see that for every $h\in G({\mathbb R})$, we have \begin{equation}\label{eqoddfundveven} N({\mathcal L};M;X)\ll \#\{B\in (({\mathcal F} h)\cdot (XR))\cap{\mathcal L}:Q(B)>M\}. \end{equation} Let $G_0$ be a compact left $K$-invariant set in $G({\mathbb R})$ which is the closure of a nonempty open set.
Averaging \eqref{eqoddfundveven} over $h\in G_0$ as before, and exchanging the order of integration, we obtain \begin{equation}\label{eqevenfundavg} N({\mathcal L};M;X)\ll \int_{\gamma\in{\mathcal F}} \#\{B\in((\gamma G_0)\cdot (XR))\cap{\mathcal L}:Q(B)>M\} d\gamma, \end{equation} where the implied constant depends only on $G_0$ and $R$. Let $W_{00}\subset W$ denote the space of symmetric matrices $B$ such that $b_{ij}=0$ for $i+j < n$. It was shown in \cite[Propositions~4.5 and 4.7]{SW} (analogous to \cite[Propositions~10.5 and 10.7]{BG2} used in the odd case) that most lattice points in the fundamental domains $({\mathcal F} h)\cdot (XR)$ that are distinguished lie in $W_0$ and in fact lie in $W_{00}$. These arguments from \cite{SW} can be used in an identical manner to show that the number of points in our fundamental domains lying in ${\mathcal L}$ but not in $\frac14W_{00}({\mathbb Z})$ is negligible. \begin{proposition}\label{propeven} We have \begin{equation*} \displaystyle\int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\mathcal L}\setminus {\textstyle\frac14} W_{00}({\mathbb Z}))\}d\gamma=O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}). \end{equation*} \end{proposition} \begin{proof} We proceed exactly as in the proof of Proposition \ref{propodd}. First we consider elements $B\in {\mathcal L}$ such that $b_{11}\neq 0$. Since $B$ is distinguished in $W({\mathbb Q})$, it is also distinguished as an element of $W({\mathbb Q}_p)$ for every prime $p$, a condition that occurs in $W({\mathbb Z})$ with $p$-adic density bounded by some constant $c_n$ strictly less than~$1$. Since $\prod_p c_n=0$, we obtain as in \cite[Proof of Proposition~4.7]{SW} that \begin{equation}\label{selapp2} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}:b_{11}\neq 0\}d\gamma=o(X^{n(n+1)/2}). \end{equation} An application of the Selberg sieve exactly as in \cite{ShTs} again improves this estimate to \linebreak $O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$.
Meanwhile, \cite[Proof of Proposition 4.5]{SW} immediately gives \begin{equation*} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap ({\textstyle\frac14} W({\mathbb Z})\setminus {\textstyle\frac14} W_{00}({\mathbb Z})):b_{11}=0\}d\gamma=O_\epsilon(X^{n(n+1)/2-1+\epsilon}). \end{equation*} Since ${\mathcal L}$ is contained in $\frac14W({\mathbb Z})$, this completes the proof. \end{proof} Next, we estimate the contribution to the right hand side of \eqref{eqevenfundavg} from elements in ${\mathcal L}\cap \frac14 W_{00}({\mathbb Z})$. As in Proposition \ref{proplargeQbound}, the desired saving is obtained via the following two observations. Firstly, integral points in $((\gamma G_0)\cdot (XR))\cap \frac14 W_{00}({\mathbb Z})$ occur predominantly when the Iwasawa coordinates $s_i$ of $\gamma$ are large. Secondly, since the weight of the $Q$-invariant is a product of negative powers of the $s_i$, the $Q$-invariants of elements in $((\gamma G_0)\cdot (XR))\cap {\mathcal L}$ are large when the values of the $s_i$ are small. \begin{proposition}\label{proplargeQboundeven} We have \begin{equation*} \int_{\gamma\in{\mathcal F}} \#\{B\in ((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma=O(X^{n(n+1)/2}\log^2 X/M). \end{equation*} \end{proposition} \begin{proof} Since $s_i\geq c$ for every $i$, there exists a compact subset $N''$ of $N({\mathbb R})$ containing $(t)^{-1}N'\,(t)$ for all $t\in T'$. Let $E$ be the pre-compact set $N''KG_0R$. Then \begin{eqnarray*} &&\int_{\mathcal F} \#\{B\in((\gamma G_0)\cdot(XR))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}d\gamma\\ &\ll&\int_{s_i\gg 1} \#\{B\in(s)\cdot XE\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s \end{eqnarray*} where $H(s)$ is defined in \eqref{eqhaareven}. 
Analogous to the proof of Proposition \ref{proplargeQbound}, in order for the set $\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap \frac14 W_{00}({\mathbb Z}):|Q(B)|>M\}$ to be nonempty, the following conditions must be satisfied: \begin{equation}\label{eqscond} \begin{array}{rcl} Xs_i^{-1}&\gg& 1,\\[.1in] Xs_gs_{g+1}^{-1}&\gg& 1,\\[.1in] X^{g(g+1)/2}w(Q)&\gg& M. \end{array} \end{equation} Let $S$ denote the set of coordinates of $W_{00}$, i.e., $S=\{b_{ij}:i+j\geq n\}$. Let $T_{X,M}$ denote the set of~$(s)$ satisfying $s_i\gg 1$ and the conditions of \eqref{eqscond}. Then we have \begin{eqnarray*} &&\displaystyle\int_{s_i\gg 1} \#\{B\in((s)\cdot (XE))\cap {\mathcal L}\cap {\textstyle\frac14} W_{00}({\mathbb Z}):|Q(B)|>M\}H(s)d^\times s \\ &\ll&\displaystyle\int_{(s)\in T_{X,M}} \prod_{\alpha\in S}(Xw(\alpha))H(s)d^\times s \\ &\ll&\displaystyle\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)}\prod_{k=1}^{g-1}s_k^{2k-1}\cdot s_g^{g-1}s_{g+1}^{g}d^\times s \\ &\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2}w(Q)\prod_{k=1}^{g-1}s_k^{2k-1}\cdot s_g^{g-1}s_{g+1}^{g}d^\times s \\ &\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2}\prod_{k=1}^{g-1}s_k^{k-1}\cdot s_g^{-1}s_{g+1}^{g}d^\times s \\ &\ll&\displaystyle\frac{1}{M}\int_{(s)\in T_{X,M}} X^{n(n+1)/2-g(g+1)/2+g}\prod_{k=1}^{g-1}s_k^{k-1}\cdot s_g^{g-1}d^\times s \\ &\ll&\displaystyle\frac{1}{M}X^{n(n+1)/2}\log^2(X), \end{eqnarray*} where the first inequality follows from the fact that $Xw(b_{ij})\gg1$ for all $b_{ij}\in S$ when $(s)$ is in the range $1\ll s_i\ll X$, the second inequality follows from the definition \eqref{eqhaareven} of $H(s)$ and the computation (\ref{wbij2}) of the weights of the coordinates $b_{ij}$, the third inequality follows from the fact that $X^{g(g+1)/2}w(Q) \gg M$, the fourth inequality follows from the computation of the weight of $Q$ in \eqref{eqQweighteven}, the fifth inequality comes from multiplying by the factor $(Xs_gs_{g+1}^{-1})^g\gg1$, and the
$\log^2 X$ factor in the last inequality comes from the integrals over $s_1$ and $s_{g+1}$. \end{proof} The estimate in Theorem \ref{thm:mainestimate}(b) for even $n$ now follows from Theorem \ref{keymaporbiteven} and Propositions~\ref{propeven} and \ref{proplargeQboundeven}, in conjunction with the bound on the number of reducible polynomials proved in Proposition~\ref{propredboundall}. \section{Proof of the main theorems}\label{sec:sieve} Let ${V_n^{\textrm{mon}}}({\mathbb Z})$ denote the set of monic integer polynomials of degree~$n$. Let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}$ denote the subset of polynomials that are reducible or, when $n\geq 4$, factor as $g(x)\bar{g}(x)$ over some quadratic extension of~${\mathbb Q}$. For a set $S\subset {V_n^{\textrm{mon}}}({\mathbb Z})$, let $S_X$ denote the set of elements in $S$ with height bounded by~$X$. We first give a power-saving bound for the number of polynomials in ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}$ having bounded height. We start with the following lemma. \begin{lemma}\label{lemlin} The number of elements in ${V_n^{\textrm{mon}}}({\mathbb Z})_X$ that have a rational linear factor is bounded by $O(X^{n(n+1)/2-n+1}\log X)$. \end{lemma} \begin{proof} Consider the polynomial \begin{equation*} f(x)=x^n+a_{1}x^{n-1}+\cdots +a_n\in {V_n^{\textrm{mon}}}({\mathbb Z})_X. \end{equation*} First, note that the number of such polynomials with $a_n=0$ is bounded by $O(X^{n(n+1)/2-n})$. Next, we assume that $a_n\neq 0$. There are $O(X^{n(n+1)/2-2n+1})$ possibilities for the $(n-2)$-tuple $(a_1,a_2,\ldots,a_{n-2})$. If $x-r$ is a linear factor of $f(x)$, then $r\mid a_n$, and the number of pairs $(a_n,r)$ with $0<|a_n|\leq X^n$ and $r\mid a_n$ is $O(X^n\log X)$. By setting $f(r)=0$, we see that the values of $a_1,a_2,\ldots,a_{n-2},a_n$, and $r$ determine $a_{n-1}$ uniquely. The lemma follows.
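The $\log X$ factor here reflects the average size of the divisor function: $\sum_{0<a\leq N}d(a)=\sum_{r\leq N}\lfloor N/r\rfloor\sim N\log N$. A quick numerical illustration (the cutoff $N$ and the tolerance are arbitrary choices of ours):

```python
from math import log

N = 10_000
d = [0] * (N + 1)
for r in range(1, N + 1):          # sieve out divisor counts
    for a in range(r, N + 1, r):
        d[a] += 1
total = sum(d[1:])
assert total == sum(N // r for r in range(1, N + 1))   # count divisors by their value
assert abs(total / N - log(N)) < 1                     # average divisor count ~ log N
```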
\end{proof} Following arguments of Dietmann~\cite{Dit}, we now prove that the number of reducible monic integer polynomials of bounded height is negligible, with a power-saving error term. \begin{proposition}\label{propredboundall} We have \begin{equation*} \#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}_X=O(X^{n(n+1)/2-n+1}\log X). \end{equation*} \end{proposition} \begin{proof} First, by \cite[Lemma~2]{Dit}, we have that \begin{equation}\label{eqtemppoly} x^n+a_1x^{n-1}+\cdots +a_{n-1}x+t \end{equation} has Galois group $S_n$ over ${\mathbb Q}(t)$ for all $(n-1)$-tuples $(a_1,\ldots,a_{n-1})$ aside from a set $S$ of cardinality $O(X^{(n-1)(n-2)/2})$. Hence, the number of $n$-tuples $(a_1,\ldots,a_n)$ with height bounded by $X$ such that the Galois group of $x^n+a_1x^{n-1}+\cdots +a_{n-1}x+t$ over ${\mathbb Q}(t)$ is not $S_n$ is $O(X^{(n-1)(n-2)/2}X^n) = O(X^{n(n+1)/2-n+1})$. Next, let $H$ be a subgroup of $S_n$ that arises as the Galois group of the splitting field of a polynomial in ${V_n^{\textrm{mon}}}({\mathbb Z})$ with no rational root. For reducible polynomials, we have from \cite[Lemma~4]{Dit} that $H$ has index at least $n(n-1)/2$ in $S_n$. When $n\geq4$ is even and the polynomial factors as $g(x)\bar{g}(x)$ over a quadratic extension, the splitting field has degree at most $2(n/2)!$ and so the index of the corresponding Galois group in $S_n$ is again at least $n(n-1)/2$. For fixed $a_1,\ldots,a_{n-1}$ such that the polynomial \eqref{eqtemppoly} has Galois group $S_n$ over ${\mathbb Q}(t)$, an argument identical to the proof of \cite[Theorem~1]{Dit} implies that the number of $a_n$ with $|a_n|\leq X^n$ such that the Galois group of the splitting field of $x^n+a_1x^{n-1}+\cdots +a_n$ over ${\mathbb Q}$ is $H$ is bounded by $$ O_\epsilon\Bigl(X^\epsilon\exp\bigl(\frac{n}{[S_n:H]}\log X +O(1)\bigr)\Bigr) =O_\epsilon(X^{2/(n-1)+\epsilon}).
$$ In conjunction with Lemma~\ref{lemlin}, we thus obtain the estimate \begin{equation*} \#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm red}_X=O(X^{n(n+1)/2-n+1}\log X)+O(X^{n(n+1)/2-n+1})+ O_\epsilon(X^{n(n+1)/2-n+2/(n-1)+\epsilon}), \end{equation*} and the proposition follows. \end{proof} For any positive squarefree integer $m$, let ${\mathcal W}_m$ denote the set of all elements in ${V_n^{\textrm{mon}}}({\mathbb Z})$ whose discriminants are divisible by $m^2$. Also, let ${\mathcal V}_m$ denote the set of all elements $f$ in ${V_n^{\textrm{mon}}}({\mathbb Z})$ such that the corresponding ring ${\mathbb Z}[x]/(f(x))$ is nonmaximal at all primes dividing $m$. \begin{theorem}\label{unif} Let ${\mathcal W}_{m,X}$ denote the set of elements in ${\mathcal W}_m$ having height bounded by $X$. For any positive real number $M$, we have \begin{equation}\label{equ2} \sum_{\substack{m>M\\ m\;\mathrm{ squarefree}}} \#{\mathcal W}_{m,X}=O_\epsilon(X^{n(n+1)/2+\epsilon}/\sqrt{M})+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}). \end{equation} \end{theorem} \begin{proof} Every element of ${\mathcal W}_m$ belongs to ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ for some positive squarefree integers $m_1,m_2$ with $m_1m_2=m$. One of $m_1$ and $m_2$ must be at least $\sqrt{m}$, and hence every element of ${\mathcal W}_m$ belongs to some ${\mathcal W}_k^{\rm {(1)}}$ or ${\mathcal W}_k^{\rm {(2)}}$ where $k\geq \sqrt{m}$. An element $f\in {V_n^{\textrm{mon}}}({\mathbb Z})$ belongs to $O_\epsilon(X^\epsilon)$ different sets ${\mathcal W}_m$, since $m$ is a divisor of $\Delta(f)$. The theorem now follows from Parts (a) and (b) of Theorem \ref{thm:mainestimate}. \end{proof} We remark that the $\sqrt{M}$ in the denominator in \eqref{equ2} can be improved to $M$. However, we will apply Theorem \ref{unif} with $M=X^{1/2}$, in which case the second term $O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$ dominates even $O_\epsilon(X^{n(n+1)/2+\epsilon}/M)$.
We outline here how to improve the denominator to $M$ for the sake of completeness. Break up ${\mathcal W}_m$ into sets ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ for positive squarefree integers $m_1,m_2$ with $m_1m_2=m$ as above. Break the ranges of $m_1$ and $m_2$ into dyadic ranges. For each range, we count the number of elements in ${\mathcal W}_{m_1}^{\rm {(1)}}\cap{\mathcal W}_{m_2}^{\rm {(2)}}$ by embedding each ${\mathcal W}_{m_2}^{\rm {(2)}}$ into $\frac14W({\mathbb Z})$ as in Sections \ref{sec:monicodd} and \ref{sec:moniceven}. Earlier, we bounded the cardinality of the image of ${\mathcal W}_{m_2,X}^{\rm {(2)}}$ by splitting $\frac14W({\mathbb Z})$ up into two pieces: $\frac14W_{00}({\mathbb Z})$ and $\frac14W({\mathbb Z})\setminus\frac14W_{00}({\mathbb Z})$. The bound on the second piece does not depend on $m_2$ and continues to be $O_\epsilon(X^{n(n+1)/2-1/5+\epsilon})$. However, for the first piece, we now impose the further condition that the discriminants of elements in $\frac14 W_{00}({\mathbb Z})$ are strongly divisible by $m_1^2$ and apply the quantitative version of the Ekedahl sieve as in~\cite{geosieve}. This gives the desired additional $1/m_1$ saving, improving the bound to \begin{equation*} O_\epsilon(X^{n(n+1)/2+\epsilon}/M)+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}). \end{equation*} The reason for counting in dyadic ranges of $m_1$ and $m_2$ is that, in both the strongly and weakly divisible cases, we do not count for a single fixed $m$ but rather sum over all $m>M$. Let $\bar{\lambda}_n(p)=1-\lambda_n(p)$ and $\bar{\rho}_n(p)=1-\rho_n(p)$ denote the $p$-adic densities of ${\mathcal W}_p$ and ${\mathcal V}_p$, respectively; the values of $\lambda_n(p)$ and $\rho_n(p)$ were determined by Brakenhoff and Lenstra~\cite{ABZ}, respectively, and are presented explicitly in the introduction.
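The first step of the computation below rests on the standard squarefree sieve identity $\sum_{m\,:\,m^2\mid D}\mu(m)=1$ if $D$ is squarefree and $0$ otherwise. A quick check for small $D$ (the helper functions are ours):

```python
def mobius(m):
    # Möbius function via trial division
    mu, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if m > 1 else mu

def is_squarefree(D):
    p = 2
    while p * p <= D:
        if D % (p * p) == 0:
            return False
        p += 1
    return True

for D in range(1, 500):
    sieve = sum(mobius(m) for m in range(1, D + 1) if D % (m * m) == 0)
    assert sieve == (1 if is_squarefree(D) else 0)
```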
We define $\bar{\lambda}_n(m)$ and $\bar{\rho}_n(m)$ for squarefree integers $m$ to be \begin{equation} \begin{array}{rcl} \bar{\lambda}_n(m)&=&\displaystyle\prod_{p\mid m}\bar{\lambda}_n(p),\\[.15in] \bar{\rho}_n(m)&=&\displaystyle\prod_{p\mid m}\bar{\rho}_n(p). \end{array} \end{equation} For a set $S\subset {V_n^{\textrm{mon}}}({\mathbb Z})$, let $S_X$ denote the set of elements in $S$ having height less than~$X$. Let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm sf}$ denote the set of elements in ${V_n^{\textrm{mon}}}({\mathbb Z})$ having squarefree discriminant and let ${V_n^{\textrm{mon}}}({\mathbb Z})^{\rm max}$ denote the set of elements in ${V_n^{\textrm{mon}}}({\mathbb Z})$ that correspond to maximal rings. Let $\mu$ denote the M\"obius function. We have \begin{equation}\label{eqth1} \begin{array}{rcl} \#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm sf}_X&=&\displaystyle\sum_{m\geq 1}\mu(m)\#{\mathcal W}_{m,X}\\[.2in] &=&\displaystyle\sum_{m=1}^{\sqrt{X}}\mu(m)\bar{\lambda}_n(m)\#{V_n^{\textrm{mon}}}({\mathbb Z})_X +O\Bigl(\sum_{m=1}^{\sqrt{X}}X^{n(n+1)/2-n}\Bigr) +O\Bigl(\sum_{m>\sqrt{X}}\#{\mathcal W}_{m,X}\Bigr)\\[.15in] &=&\displaystyle\Bigl(\prod_p\lambda_n(p)\Bigr)\cdot\#{V_n^{\textrm{mon}}}({\mathbb Z})_X+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}), \end{array} \end{equation} where the final equality follows from Theorem \ref{unif}. Since ${\mathcal V}_m\subset {\mathcal W}_m$, we also obtain \begin{equation}\label{eqth2} \#{V_n^{\textrm{mon}}}({\mathbb Z})^{\rm max}_X=\Bigl(\prod_p\rho_n(p)\Bigr)\cdot\#{V_n^{\textrm{mon}}}({\mathbb Z})_X+O_\epsilon(X^{n(n+1)/2-1/5+\epsilon}) \end{equation} by the identical argument. Finally, note that we have \begin{equation*} \#{V_n^{\textrm{mon}}}({\mathbb Z})_X= 2^nX^{n(n+1)/2}+O(X^{n(n-1)/2}).
\end{equation*} Therefore, Theorems \ref{polydisc2} and \ref{polydiscmax2} now follow from \eqref{eqth1} and \eqref{eqth2}, respectively, since the constants $\lambda_n$ and $\zeta(2)^{-1}$ appearing in these theorems are equal simply to $\prod_p\lambda_n(p)$ and $\prod_p\rho_n(p)$, respectively. \section{A lower bound on the number of degree-$n$ number fields that are monogenic / have a short generating vector}\label{latticearg} Let $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ be a monic real polynomial of degree $n$ and nonzero discriminant with $r$ real roots and $2s$ complex roots. Then ${\mathbb R}[x]/(g(x))$ is naturally isomorphic to ${\mathbb R}^n\cong {\mathbb R}^r\times {\mathbb C}^s$ as ${\mathbb R}$-vector spaces via its real and complex embeddings (where we view ${\mathbb C}$ as ${\mathbb R}+{\mathbb R}\sqrt{-1}$). The ${\mathbb R}$-vector space ${\mathbb R}[x]/(g(x))$ also comes equipped with a natural basis, namely $1,\theta,\theta^2,\ldots,\theta^{n-1}$, where $\theta$ denotes the image of $x$ in ${\mathbb R}[x]/(g(x))$. Let $R_g$ denote the lattice spanned by $1,\theta,\ldots,\theta^{n-1}.$ In the case that $g$ is an integral polynomial in ${V_n^{\textrm{mon}}}({\mathbb Z})$, the lattice $R_g$ may be identified with the ring ${\mathbb Z}[x]/(g(x))\subset{\mathbb R}[x]/(g(x))\subset {\mathbb R}^n$. Since $g(x)$ gives a lattice in ${\mathbb R}^n$ in this way, we may ask whether this basis is reduced in the sense of Minkowski, with respect to the usual inner product on ${\mathbb R}^n$.\footnote{Recall that a ${\mathbb Z}$-basis $\alpha_1,\ldots,\alpha_n$ of a lattice $L$ is called {\it Minkowski-reduced} if successively for $i=1,\ldots, n$ the vector $\alpha_i$ is the shortest vector in $L$ such that $\alpha_1,\ldots,\alpha_i$ can be extended to a ${\mathbb Z}$-basis of $L$. 
Most lattices have a unique Minkowski-reduced basis.} More generally, for any monic real polynomial $g(x)$ of degree $n$ and nonzero discriminant, we may ask whether the basis $1,\theta,\theta^2,\ldots,\theta^{n-1}$ is Minkowski-reduced for the lattice $R_g$, up to a unipotent upper-triangular transformation over~${\mathbb Z}$ (i.e., when the basis $[1\;\;\theta\;\;\theta^2\;\cdots\; \theta^{n-1}]$ is replaced by $[1\;\;\theta\;\;\theta^2\;\cdots\; \theta^{n-1}]A$ for some upper triangular $n\times n$ integer matrix $A$ with $1$'s on the diagonal). More precisely, given $g\in {V_n^{\textrm{mon}}}({\mathbb R})$ of nonzero discriminant, let us say that the corresponding basis $1,\theta,\theta^2,\ldots,\theta^{n-1}$ of ${\mathbb R}^n$ is {\it quasi-reduced} if there exist monic integer polynomials $h_i$ of degree~$i$, for $i=1,\ldots,n-1$, such that the basis $1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ of $R_g$ is Minkowski-reduced (so that the basis $1,\theta,\theta^2,\ldots,\theta^{n-1}$ is Minkowski-reduced up to a unipotent upper-triangular transformation over~${\mathbb Z}$). By abuse of language, we then call the polynomial $g$ {\it quasi-reduced} as well. We say that $g$ is {\it strongly quasi-reduced} if in addition ${\mathbb Z}[x]/(g(x))$ has a unique Minkowski-reduced basis. The relevance of being strongly quasi-reduced is contained in the following lemma. \begin{lemma}\label{reducedg} Let $g(x)$ and $g^*(x)$ be distinct monic integer polynomials of degree $n$ and nonzero discriminant that are strongly quasi-reduced and whose $x^{n-1}$-coefficients vanish. Then ${\mathbb Z}[x]/(g(x))$ and ${\mathbb Z}[x]/(g^*(x))$ are non-isomorphic rings. \end{lemma} \begin{proof} Let $\theta$ and $\theta^*$ denote the images of $x$ in ${\mathbb Z}[x]/(g(x))$ and ${\mathbb Z}[x]/(g^*(x))$, respectively. 
By the assumption that $g$ and $g^\ast$ are strongly quasi-reduced, we have that $1,h_1(\theta),h_2(\theta),\ldots,h_{n-1}(\theta)$ and $1,h_1^*(\theta^*),h_2^*(\theta^*),\ldots,h^*_{n-1}(\theta^*)$ are the unique Minkowski-reduced bases of ${\mathbb Z}[x]/(g(x))$ and ${\mathbb Z}[x]/(g^*(x))$, respectively, for some monic integer polynomials $h_i$ and $h_i^*$ of degree $i$ for $i=1,\ldots,n-1$. If $\phi:{\mathbb Z}[x]/(g(x))\to{\mathbb Z}[x]/(g^*(x))$ is a ring isomorphism, then by the uniqueness of Minkowski-reduced bases for these rings, $\phi$ must map Minkowski basis elements to Minkowski basis elements, i.e., $\phi(h_i(\theta))=h_i^*(\theta^*)$ for all $i$. In particular, this is true for $i=1$, so $\phi(\theta)=\theta^*+c$ for some $c\in{\mathbb Z}$, since $h_1$ and $h_1^*$ are monic integer linear polynomials. Therefore $\theta$ and $\theta^*+c$ must have the same minimal polynomial, i.e., $g(x)=g^*(x-c)$; the assumption that $\theta$ and $\theta^*$ both have trace 0 then implies that $c=0$. It follows that $g(x)=g^*(x)$, a contradiction. We conclude that ${\mathbb Z}[x]/(g(x))$ and ${\mathbb Z}[x]/(g^*(x))$ must be non-isomorphic rings, as desired. \end{proof} The condition of being quasi-reduced is fairly easy to attain: \begin{lemma}\label{quasilemma} If $g(x)$ is a monic real polynomial of nonzero discriminant, then $g(\rho x)$ is quasi-reduced for any sufficiently large $\rho>0$. \end{lemma} \begin{proof} This is easily seen from the Iwasawa-decomposition description of Minkowski reduction. 
Given an $n$-ary positive definite integer-valued quadratic form $Q$, viewed as a symmetric $n\times n$ matrix, the condition that $Q$ is Minkowski reduced is equivalent to $Q=\gamma I_n \gamma^T$, where $I_n$ is the sum-of-$n$-squares diagonal quadratic form and $\gamma=\nu \tau \kappa$, where $\nu\in N'$, $\tau\in T'$, and $\kappa\in K$; here $N'$ as before denotes a compact subset (depending on $\tau$) of the group $N$ of lower-triangular matrices, $T'$ is the group of diagonal matrices $(t_1,\ldots,t_n)$ with $t_i\leq c\,t_{i+1}$ for all $i$ and some absolute constant $c=c_n>0$, and $K$ is the orthogonal group stabilizing the quadratic form $I_n$. The condition that $Q$ be quasi-reduced is simply then that $t_i\leq c\,t_{i+1}$ (with no condition on $\nu$). Consider the natural isomorphism ${\mathbb R}[x]/(g(x))\to {\mathbb R}[x]/(g(\rho x))$ of \'etale ${\mathbb R}$-algebras defined by $x\to\rho x$. If $\theta$ denotes the image of $x$ in ${\mathbb R}[x]/(g(x))$, then $\rho\theta$ is the image of $x$ in ${\mathbb R}[x]/(g(\rho x))$ under this isomorphism. Let $Q_\rho$ be the Gram matrix of the lattice basis $1,\rho\theta,\rho^2\theta^2,\ldots,\rho^{n-1}\theta^{n-1}$ in ${\mathbb R}^{n}$ associated to $g(\rho x)$. If the element $\tau \in T$ corresponding to $g(x)$ is $(t_1,\ldots,t_n)$, then the element $\tau_\rho\in T$ corresponding to $g(\rho x)$ is $(t_1,\rho t_2,\rho^2 t_3,\ldots,\rho^{n-1}t_n)$. This is because $Q_\rho=\Lambda Q\Lambda^T$, where $\Lambda$ is the diagonal matrix $(1,\rho,\rho^2,\ldots,\rho^{n-1})$; therefore, if $Q=(\nu\tau\kappa)I_n(\nu\tau\kappa)^T$, then $Q_\rho= (\Lambda\nu\tau\kappa)I_n(\Lambda\nu\tau\kappa)^T=(\nu'(\Lambda\tau)\kappa)I_n(\nu'(\Lambda\tau)\kappa)^T$ for some $\nu'\in N$ depending on $\Lambda$, so $\tau_\rho=\Lambda\tau$. For sufficiently large $\rho$, we then have $\rho^{i-1}t_i\leq c\rho^{i}t_{i+1}$ for all $i=1,\ldots,n-1$, as desired. 
\end{proof} \noindent Lemma~\ref{quasilemma} implies that most monic irreducible integer polynomials are strongly quasi-reduced: \begin{lemma}\label{mostqr} A density of $100\%$ of irreducible monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n$, when ordered by height $H(f):={\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, are strongly quasi-reduced. \end{lemma} \begin{proof} Let $\epsilon>0$, and let $B$ be the closure of an open region in ${\mathbb R}^n\cong {V_n^{\textrm{mon}}}({\mathbb R})$ consisting of monic real polynomials of nonzero discriminant and height less than $1$ such that $${\rm Vol}(B)>(1-\epsilon){\rm Vol}(\{f\in {V_n^{\textrm{mon}}}({\mathbb R}):H(f)<1\}).$$ For each $f\in B$, by Lemma~\ref{quasilemma} there exists a minimal finite constant $\rho_f>0$ such that $f(\rho x)$ is quasi-reduced for any $\rho>\rho_f$. The function $\rho_f$ is continuous in $f$, and thus by the compactness of $B$ there exists a finite constant $\rho_B>0$ such that $f(\rho x)$ is quasi-reduced for any $f\in B$ and $\rho>\rho_B$. Now consider the weighted homogeneously expanding region $\rho\cdot B$ in ${\mathbb R}^n\cong {V_n^{\textrm{mon}}}({\mathbb R})$, where a real number $\rho>0$ acts on $f\in B$ by $(\rho\cdot f)(x)=f(\rho x)$. Note that $H(\rho\cdot f)=\rho H(f)$. For $\rho>\rho_B$, we have that all polynomials in $\rho\cdot B$ are quasi-reduced, and $${\rm Vol}(\rho\cdot B)>(1-\epsilon){\rm Vol}(\{f\in {V_n^{\textrm{mon}}}({\mathbb R}):H(f)<\rho\}).$$ Letting $\rho$ tend to infinity shows that the density of monic integer polynomials $f$ of degree $n$, when ordered by height, that have nonzero discriminant and are strongly quasi-reduced is greater than $1-\epsilon$ (since ``discriminant nonzero'' and ``strongly quasi-reduced'' are both open conditions on the coefficients of $f$). Since $\epsilon$ was arbitrary, and $100\%$ of integer polynomials are irreducible, the lemma follows. 
\end{proof} We have the following variation of Theorem~\ref{polydisc2}. \begin{theorem}\label{vanishinga1} Let $n\geq1$ be an integer. Then when monic integer polynomials $f(x)=x^n+a_1x^{n-1}+\cdots+a_n$ of degree~$n$ with $a_1=0$ are ordered by $H(f):= {\rm max}\{|a_1|,|a_2|^{1/2},\ldots,|a_n|^{1/n}\}$, the density having squarefree discriminant $\Delta(f)$ exists and is equal to $\kappa_n=\prod_p\kappa_n(p)>0$, where $\kappa_n(p)$ is the density of monic polynomials $f(x)$ over ${\mathbb Z}_p$ with vanishing $x^{n-1}$-coefficient having discriminant indivisible by $p^2$. \end{theorem} Indeed, the proof of Theorem~\ref{polydisc2} applies also to those monic integer polynomials having vanishing $x^{n-1}$-coefficient without any essential change; one simply replaces the representation $W$ (along with $W_0$ and $W_{00}$) by the codimension-1 linear subspace consisting of symmetric matrices with anti-trace $0$, but otherwise the proof carries through in the identical manner. The analogue of Theorem~\ref{vanishinga1} holds also if the condition $a_1=0$ is replaced by the condition $0\leq a_1<n$; in this case, $\kappa_n=\prod_p\kappa_n(p)>0$ is replaced by the same constant $\lambda_n=\prod_p\lambda_n(p)>0$ of Theorem~\ref{polydisc2}, since for any monic degree-$n$ polynomial $f(x)$ there is a unique constant $c\in{\mathbb Z}$ such that $f(x+c)$ has $x^{n-1}$-coefficient $a_1$ satisfying $0\leq a_1<n$. Lemmas \ref{reducedg} and \ref{mostqr} and Theorem~\ref{vanishinga1} imply that 100\% of monic integer irreducible polynomials having squarefree discriminant and vanishing $x^{n-1}$-coefficient (or those having $x^{n-1}$-coefficient non-negative and less than $n$), when ordered by height, yield {\it distinct} degree-$n$ fields. 
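The squared-divisor inclusion-exclusion underlying \eqref{eqth1} can be checked directly in a toy case. The following Python sketch (an illustration only; the degree $n=2$ and the coefficient box are arbitrary choices) verifies that the direct count of monic quadratics with squarefree discriminant equals the M\"obius-sieved count $\sum_m \mu(m)\,\#\{f : m^2 \mid \Delta(f),\ \Delta(f)\neq 0\}$:

```python
import math

def mobius(m):
    # naive Moebius function via trial division
    mu, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            mu = -mu
        p += 1
    if m > 1:
        mu = -mu
    return mu

def is_squarefree(d):
    d = abs(d)
    if d == 0:
        return False
    p = 2
    while p * p <= d:
        if d % (p * p) == 0:
            return False
        p += 1
    return True

# monic quadratics x^2 + bx + c with discriminant b^2 - 4c, in a small box
polys = [(b, c) for b in range(-20, 21) for c in range(-20, 21)]
direct = sum(1 for b, c in polys if is_squarefree(b * b - 4 * c))

# Moebius-sieved count over all m up to sqrt(max |disc|)
M = math.isqrt(max(abs(b * b - 4 * c) for b, c in polys))
sieved = sum(
    mobius(m) * sum(1 for b, c in polys
                    if (b * b - 4 * c) != 0 and (b * b - 4 * c) % (m * m) == 0)
    for m in range(1, M + 1))
assert direct == sieved
```

The two counts agree exactly on any finite box once $m$ ranges up to $\sqrt{\max|\Delta|}$; the analytic content of \eqref{eqth1} lies entirely in controlling the tail $m>\sqrt{X}$.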
Since polynomials of height less than $X^{1/(n(n-1))}$ have absolute discriminant $\ll X$, and since number fields of degree $n$ and squarefree discriminant always have associated Galois group $S_n$, we see that the number of $S_n$-number fields of degree $n$ and absolute discriminant less than $X$ is $\gg X^{(2+3+\cdots+n)/(n(n-1))}=X^{1/2+1/n}$. We have proven Corollary~\ref{monogenic}. \begin{remark}{\em The statement of Corollary~\ref{monogenic} holds even if one specifies the real signatures of the monogenic $S_n$-number fields of degree $n$, with the identical proof. It holds also if one imposes any desired set of local conditions on the degree-$n$ number fields at a finite set of primes, so long as these local conditions do not contradict local monogeneity. }\end{remark} \begin{remark}{\em We conjecture that a positive proportion of monic integer polynomials of degree~$n$ with $x^{n-1}$-coefficient non-negative and less than $n$ and absolute discriminant less than $X$ have height $O( X^{1/(n(n-1))})$, where the implied $O$-constant depends only on $n$. That is why we conjecture that the lower bound in Corollary~\ref{monogenic} also gives the correct order of magnitude for the upper bound. In fact, let $C_n$ denote the $(n-1)$-dimensional Euclidean volume of the $(n-1)$-dimensional region $R_0$ in $V_n^{\textrm{mon}}({\mathbb R})\cong{\mathbb R}^n$ consisting of all polynomials $f(x)$ with vanishing $x^{n-1}$-coefficient and absolute discriminant less than 1. Then the region $R_z$ in $V^{\textrm{mon}}_n({\mathbb R})\cong{\mathbb R}^n$ of all polynomials $f(x)$ with $x^{n-1}$-coefficient equal to $z$ and absolute discriminant less than 1 also has volume $C_n$, since $R_z$ is obtained from $R_0$ via the volume-preserving transformation $x\mapsto x+z/n$. 
Since we expect that 100\% of monogenic number fields of degree $n$ can be expressed as ${\mathbb Z}[\theta]$ in exactly one way (up to transformations of the form $\theta\mapsto \pm \theta + c$ for $c\in {\mathbb Z}$), in view of Theorem~\ref{polydiscmax2} we conjecture that the number of monogenic number fields of degree $n$ and absolute discriminant less than $X$ is asymptotic to \begin{equation} \frac{nC_n}{2\zeta(2)} X^{1/2+1/n}. \end{equation} When $n=3$, a Mathematica computation shows that we have $C_3= \frac{2^{1/3}(3+\sqrt{3})}{45} \frac{\Gamma(1/2)\Gamma(1/6)}{\Gamma(2/3)}$}. \end{remark} Finally, we turn to the proof of Corollary~\ref{shortvector}. Following \cite{EV}, for any algebraic number $x$, we write $\|x\|$ for the maximum of the archimedean absolute values of~$x$. Given a number field~$K$, write $s(K)=\inf\{\|x\|: x\in{\mathcal O}_K,\; {\mathbb Q}(x)=K\}$. We consider the number of number fields $K$ of degree~$n$ such that $s(K)\leq Y$. As already pointed out in \cite[Remark~3.3]{EV}, an upper bound of $\ll Y^{(n-1)(n+2)/2}$ is easy to obtain. Namely, a bound on the archimedean absolute values of an algebraic number~$x$ gives a bound on the archimedean absolute values of all the conjugates of $x$, which then gives a bound on the coefficients of the minimal polynomial of $x$. Counting the number of possible minimal polynomials satisfying these coefficient bounds gives the desired upper bound. To obtain a lower bound of $\gg Y^{(n-1)(n+2)/2}$, we use Lemmas~\ref{reducedg} and \ref{mostqr} and Theorem~\ref{vanishinga1}. Suppose $f(x)=x^n + a_2x^{n-2} + \cdots + a_n$ is an irreducible monic integer polynomial of degree $n$. Let~$\theta$ denote a root of $f(x)$. 
If $H(f)\leq Y$, then $\|\theta\|\ll Y$; this follows, e.g., from Fujiwara's bound~\cite{Fujiwara}: $$ \|\theta\|\leq 2\,{\rm max}\{ |a_1|,|a_2|^{1/2},\ldots,|a_{n-1}|^{1/(n-1)}, |a_n/2|^{1/n}\}.$$ Therefore, if $H(f)\leq Y$, then \begin{equation}\label{eq:est} s({\mathbb Q}[x]/(f(x))) \leq \|\theta\| \ll Y. \end{equation} Now Lemma~\ref{mostqr} and Theorem \ref{vanishinga1} imply that there are $\gg Y^{(n-1)(n+2)/2}$ such polynomials $f(x)$ of height less than $Y$ that have squarefree discriminant and are also strongly quasi-reduced. Lemma~\ref{reducedg} and \eqref{eq:est} then imply that these polynomials define distinct $S_n$-number fields $K$ of degree $n$ with $s(K)\ll Y$. This completes the proof of Corollary~\ref{shortvector}. \subsection*{Acknowledgments} We thank Levent Alpoge, Benedict Gross, Wei Ho, Kiran Kedlaya, Hendrik Lenstra, Barry Mazur, Bjorn Poonen, Peter Sarnak, and Ila Varma for their kind interest and many helpful conversations. The first and third authors were supported by a Simons Investigator Grant and NSF Grant DMS-1001828. \bigskip
\section{Introduction} The ability of the Italian-Dutch BeppoSAX satellite \cite{psb:95} to accurately locate gamma-ray bursts (GRBs) with its Wide Field Cameras \cite{jhzb:95} has led to the discovery of two optical transients, associated with GRB\,970228 \cite{pggks:97} and GRB\,970508 \cite{bond:97}. The detection of redshifted absorption lines in the optical transient associated with GRB\,970508 \cite{mdska:97} has established that it lies at cosmological distance, and here we assume they all do. A typical dim burst ($1\un{ph}{}\un{cm}{-2}\un{s}{-1}$ lasting 10\,s) at $z=1$ releases $3.5\times10^{49}\un{erg}{}\un{sr}{-1}$ in gamma rays, hence the engine behind it must provide about $4.4\times10^{52}f\sub{b}/\epsilon_{-2}\un{erg}{}$. (Where needed, we assume $H_0=70\un{km}{}\un{s}{-1}\un{Mpc}{-1}$ and $\Omega_0=1$.) Here $f\sub{b}$ is the fraction of the sky illuminated by the gamma-ray emission, and the efficiency of conversion of the initially available energy into gamma rays is $\epsilon_{-2}$ percent. This is the natural energy scale of supernova explosions, in which of order $10^{53}$\,ergs is released suddenly from the rest mass energy of a solar mass of material. The bulk is carried away in neutrinos, and about 1\% becomes kinetic energy of the ejecta. The proposed GRB models related to end stages of massive stars are (i) merger of two neutron stars \cite{paczy:86,gdn:87,elps:89}; (ii) merger of a neutron star and a black hole \cite{paczy:91,mhim:93}; (iii) `failed supernova': the collapse of a massive star to a black hole surrounded by a dense torus of material that might result in a relativistic jet \cite{woosl2:93}; (iv) a `hypernova': the collapse of a rapidly rotating massive star in a binary \cite{paczy:97}; (v) collapse of a Chandrasekhar-mass white dwarf \cite{usov:92}. 
Whether these are efficient enough at converting a fraction of the available energy to kinetic energy and then eventually to gamma rays (see below) is an open question, and the major unsolved issue in this class of burst models. In this paper we assume that somehow a variety of such a model manages this. The important point is that they all arise from massive stars which evolve into remnants within about 100\,Myr. The binary mergers then usually take place within about 100\,Myr of remnant formation \cite{ps:96}, as does the white-dwarf collapse, because the favoured route has a high mass transfer rate \cite{hbnr:92}. Since the expansion age of the Universe is already 1\,Gyr at $z=4.4$, it is safe to neglect the delay between (binary) stellar birth and the GRB it eventually yields in the present context. {\it The gamma-ray burst rate therefore traces the massive star formation rate.\/} The star formation rate as a function of redshift has recently been studied extensively, and is determined observationally with some confidence \cite{llhc:96,mfdgs:96,madau:96}: the luminosity density in the rest frame $U$ and $B$ band is combined with an IMF to deduce the star formation rate. The assumption of an IMF introduces an uncertainty in the deduced total star formation rate, but the basic data ($U$ and $B$ light density) are dominated by massive stars. Since GRBs come from massive stars, they may trace the UV light density in the Universe better than the total star formation rate, and our results are therefore less sensitive to the assumed IMF. A further potential source of uncertainty is dust extinction, which would cause a relative underestimate of the high-redshift star formation rate. 
Recent interpretations of the afterglows \cite{mr:97,wrm:97,waxma:97} support the notion that the energy release is initially in the form of an ultrarelativistic explosion or `fireball' \cite{cr:78,goodm:86} whose energy is largely converted to a blast of gamma rays via hydrodynamic collisions within it \cite{px:94,rm:94} or with the ambient medium \cite{rm:92}. Since the kinetic energy comes from a fairly standardised event, it is likely that the gamma-ray luminosity distribution of bursts is not too wide, so we shall treat them as standard candles. Combined with the fact that GRBs trace star formation, this has the important testable consequence that the redshift dependence of the GRB rate has no free parameters. Only two normalisations need to be fitted, namely the local GRB rate density $\rho_0$ and the standard-candle 30--2000\,keV luminosity $L_0$. \section{Results} For a standard cosmology ($\Omega_0=1$, $\Lambda=0$), the predicted number of standard-candle bursts in some flux range (${\rm P_1~to~P_2}$) is \begin{equation} \Delta N({\rm P_1~to~P_2}) = 4\pi \int_{R(P_1)}^{R(P_2)} k_\rho\rho(z) r^2 dr~~, \end{equation} where $\rho(z)$ is the observed star formation rate and $k_\rho$ the constant of proportionality. We follow the method of Fenimore and Bloom (1995) to account for the influence of the diversity of spectral shapes of bursts on the observed flux distribution (similar to $K$ corrections in optical photometry). The fit is done by $\chi^2$ minimisation for the same 11 flux bins of combined PVO and BATSE data used by Fenimore and Bloom (1995). The best-fit model of this type to the GRB flux distribution is shown in Fig.~\ref{fi:fit}. Note that the fit was done only for $P>1\un{ph}{}\un{cm}{-2}\un{s}{-1}$, for which the BATSE catalogue is 99\% complete. \begin{figure} \epsfxsize=\columnwidth\epsfbox{fig1.ps} \caption[]{The flux distribution of GRBs from PVO and BATSE and the best-fit model proportional to the star formation rate.
The $y$ axis is $P^{5/2}$ times the rate, which is period-independent at high fluxes, to emphasise this asymptotic behaviour and the turnover at low fluxes. The inset shows the 1-$\sigma$ confidence region of the fitted parameters. The apparent mismatch at the PVO-BATSE transition is just due to the differences in bin sizes. \label{fi:fit} } \end{figure} The fit gives $L_0=8.3^{+0.9}_{-1.5}\times10^{51}\un{erg}{}\un{s}{-1}$ and $\rho(z=0)\equiv\rho_0=0.14\pm0.02\un{Gpc}{-3}\un{yr}{-1}$. Assuming a local galaxy number density of 0.0048$\un{Mpc}{-3}$ \cite{lpem:92} this density translates into a rate of 0.025\,GEM (Galactic Events per Myr). The median redshift of bursts with $P=1\un{ph}{}\un{cm}{-2}\un{s}{-1}$ is 2.6. The fit is just acceptable, with $\chi^2=17.3$ for 9 d.o.f. If we omit the two lowest-flux bins, in which the star formation rate is most uncertain, the fitted parameters hardly change but the fit quality improves somewhat to $\chi^2/{\rm d.o.f.}=11.7/7$. As noted earlier, inclusion of dust extinction would increase the inferred star formation rate at high redshift, which in turn would improve the fit. However, the magnitude of this correction is quite uncertain. For comparison, we also fitted the same data with a non-evolving rate density. In that case, we recover the previously known result that $L_0=0.44\times10^{51}\un{erg}{}\un{s}{-1}$ and $\rho_0=3.7$\,GEM \cite{fb:95}. The redshift at $P=1\un{ph}{}\un{cm}{-2}\un{s}{-1}$ is then 0.68, and $\chi^2/{\rm d.o.f.}=9.1/9$. This means that our assumption about the evolution of the GRB rate has quite drastic consequences: it increases the GRB luminosity by a factor 19, and the local rate is decreased by a factor 150. This large factor is a combination of the distance increase due to the larger $L_0$ and the fact that the local density in evolving models is much lower than the mean, because the star formation rate in the Universe was much higher at $z=1$. 
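The structure of the count integral in equation (1) can be sketched numerically. The Python fragment below is only an illustration, not the fit itself: it uses a bolometric energy flux for the standard candle rather than the banded photon flux, omits the spectral-shape corrections, and adopts a crude toy star formation history (rising as $(1+z)^3$ to $z=1$, flat to $z=2.5$, zero beyond; an assumed shape for illustration only). It recovers the Euclidean slope of $-3/2$ in the cumulative counts of bright, hence nearby, bursts:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_H0 = 4283.0        # c/H0 in Mpc for H0 = 70 km/s/Mpc
MPC = 3.086e24       # cm per Mpc
L0 = 8.3e51          # erg/s, best-fit standard-candle luminosity

def rho(z):          # toy star formation rate (assumed shape, arbitrary units)
    if z < 1.0:
        return (1.0 + z) ** 3
    return 8.0 if z <= 2.5 else 0.0

def r_com(z):        # Einstein-de Sitter comoving distance, Mpc
    return 2.0 * C_H0 * (1.0 - 1.0 / np.sqrt(1.0 + z))

def flux(z):         # bolometric energy flux of the standard candle, erg/cm^2/s
    d_lum = (1.0 + z) * r_com(z) * MPC
    return L0 / (4.0 * np.pi * d_lum ** 2)

def n_above(P):      # eq. (1): N(>P) = 4*pi * int rho(z) r^2 dr, with k_rho = 1
    zmax = brentq(lambda z: flux(z) - P, 1e-8, 30.0)
    # dr/dz = (c/H0) * (1+z)^{-3/2} in the Einstein-de Sitter model
    integrand = lambda z: rho(z) * r_com(z) ** 2 * C_H0 * (1.0 + z) ** -1.5
    return 4.0 * np.pi * quad(integrand, 0.0, zmax)[0]

# bright (nearby) bursts: the counts approach the Euclidean slope of -3/2
P1, P2 = 1e-2, 2e-2
slope = np.log(n_above(P2) / n_above(P1)) / np.log(P2 / P1)
```

At faint fluxes the same integral turns over, because $\rho(z)$ stops rising and the flux-distance relation steepens; that turnover is what the fit in Fig.~\ref{fi:fit} exploits.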
Various indirect methods, such as time dilation \cite{nbnsk:95,fb:95} and the change of break energy with flux \cite{mppbp:95}, have been used to statistically derive the ratio of redshifts between bright and dim bursts. The result was found to disagree with the low redshifts implied by fits to the flux distribution that assumed no evolution of the rate density \cite{fb:95,brain:97}. For our new value of $L_0$, the predicted time dilation factor is 1.9 \cite{fb:95}, consistent with the measured value \cite{nbnsk:95}. Note that the slope of the cumulative flux distribution of $-1.5$ at moderate fluxes is not due to Euclidean geometry: it is a conspiracy between the curvature of space which tends to give a flatter slope and the strong evolution which gives a steeper one. Direct support of the significant redshift of bursts on the `Euclidean' part of the flux distribution comes from the work of Dezalay et~al. (1997a,b). From a detected hardness-intensity correlation in bright bursts seen by both ULYSSES and PHEBUS they infer $z\simeq3$ for bursts that roughly correspond to $P=2\un{ph}{}\un{cm}{-2}\un{s}{-1}$ in BATSE terms. This is even slightly higher than our range of $z=1.5-2.5$ at that flux level. \section{Implications} The twenty-fold increase in luminosity of the bursts has important implications. First, the total energy released in gamma rays in a 10-s burst goes up to $6.6\times10^{51}\un{erg}{}\un{sr}{-1}$, requiring an initial supply of energy of $8.3\times10^{54}f\sub{b}/\epsilon_{-2}\un{erg}{}$, which among the above mechanisms only the hypernova \cite{paczy:97} can provide if the emission is isotropic. We therefore conclude that gamma-ray beams of bursts probably illuminate no more than a few percent of the sky, hence most gamma-ray bursters escape detection in gamma rays. 
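The energy bookkeeping behind the implications that follow is elementary; a sketch, using the fitted $L_0$ and the fiducial 10-s duration:

```python
import math

L0 = 8.3e51            # erg/s, best-fit standard-candle luminosity
duration = 10.0        # s, fiducial burst duration
eps_percent = 1.0      # gamma-ray efficiency in percent (epsilon_-2 = 1)
f_b = 1.0              # fraction of sky illuminated (isotropic case)

E_per_sr = L0 * duration / (4.0 * math.pi)        # gamma-ray energy per steradian
E_gamma = 4.0 * math.pi * E_per_sr                # isotropic-equivalent energy
E_supply = E_gamma * f_b / (eps_percent / 100.0)  # required initial energy supply

assert abs(E_per_sr / 6.6e51 - 1) < 0.01          # 6.6e51 erg/sr, as quoted below
assert abs(E_supply / 8.3e54 - 1) < 0.01          # 8.3e54 f_b / eps_-2 erg
```

The $8.3\times10^{54}$ erg requirement for isotropic emission at 1\% efficiency is what forces either a very energetic progenitor or $f\sub{b}\ll1$.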
Since all the models of interest entail the collapse of an already rotating system and a non-vanishing angular momentum implies cylindrical symmetry, such beaming is quite plausible in the context of these models. The higher amount of energy alleviates the baryon pollution problem: for the outflow to reach a Lorentz factor above 100, as required to produce gamma rays, it should contain at most $10^{-5}\mbox{M$_\odot$}$ of baryons, which is not easy to get; but now we can allow twenty times more. The classical GRB rate is 1\,GEM, but we now find a rate of only 0.025\,GEM. Since the events could well be beamed, this does not necessarily exclude neutron star mergers as their source. The theoretical rate of NS-NS mergers is about 10\,GEM \cite{ps:96}, implying $f\sub{b}=0.3\%$ if all such mergers produce a GRB. (See also \cite{lpp:97,plp:97}, where similar conclusions are drawn using theoretical rather than observed star formation rates.) But it does mean that rarer types of event merit consideration as well. NS-BH mergers are probably about ten times rarer than NS-NS mergers, so they could be significantly beamed and still cause the observed GRB rate. The formation rate of super-soft X-ray sources is about 20 GEM \cite{hbnr:92}, but since it is not known what fraction, if any, of these lead to the accretion-induced collapse of a white dwarf and a possible GRB therefrom \cite{usov:92} we cannot calculate a rate for this burst model. The increased distance scale also removes the `no-host problem' for GRBs: deep searches of GRB error boxes \cite{vhj:95,fehkl:93,schl:97} have been used to set limits on the absolute brightness of host galaxies of 3.5 to 5.5 {\it mag\/} fainter than $L_\star$ \cite{schl:97}, suggesting that GRB do not come from galaxies. 
But these estimates depend on $L_0$ and now have to be adjusted by 3.2 {\it mag\/} just from the increased distance, and by another 1--2 {\it mag}, depending on galaxy type, due to increased $K$ corrections \cite{lthcl:95}. This changes the limits on host galaxy luminosities to between 1.5 {\it mag\/} above and 1.5 {\it mag\/} below $L_\star$, so they are no longer inconsistent with the assumption that GRB are in normal galaxies. Using our fit for $L_0$ and the known gamma-ray spectra we can derive redshifts for the two GRB with detected optical afterglows. We find $z=2$ for GRB\,970228, consistent with the magnitudes of candidate hosts \cite{btw:97}. But $z=3.7$ for GRB\,970508, which exceeds the maximum redshift of 2.3 allowed by the observed spectrum \cite{mdska:97} and shows that this burst must have been somewhat less luminous. This means that its host is 2.5 to 5 magnitudes fainter than $L_\star$ \cite{nbsjt:97}, and that the luminosity distribution of GRB must have some width. Fortunately, the shape of the GRB flux distribution is fairly insensitive to a modest broadening of the luminosity function (e.g.\ Ulmer, Wijers \& Fenimore 1995) so this should not significantly influence our conclusions. The best-fit redshift distribution of GRBs is shown in Fig.~\ref{fi:nz}. The median redshift is similar to that of quasars; only 10\% of bursts have $z<1$, and 5\% are beyond redshift 4. The dim end of the distribution is at $z=5.3$ for the average spectrum, but goes up to $z=6.2$ due to varying spectral shape. \begin{figure} \epsfxsize=\columnwidth\epsfbox{fig2a.ps} \epsfxsize=\columnwidth\epsfbox{fig2b.ps} \caption[]{(top) The fraction of GRBs below redshift $z$ according to the best-fit evolving model, for bursts with the median spectral shape down to $0.2\un{ph}{}\un{cm}{-2}\un{s}{-1}$ in the evolving (solid) and non-evolving (dashed) case. (bottom) The redshift as a function of flux for the evolving model fit.
The solid curve gives the average over spectral shapes. For three flux values, the individual redshifts for each of 48 measured spectra \cite{bmfsp:93} are shown to indicate the considerable variation due to spectral shape. \label{fi:nz} } \end{figure} Sahu et~al.\ (1997) briefly discuss the possibility of GRBs following the star formation rate. Whilst they do not account for the variety of spectral shapes and do not perform a formal fit, they conclude from visual inspection of their graphs that $L_\gamma=10^{51}\un{erg}{}\un{s}{-1}$. They interpret this as meaning that the accepted standard luminosity fits the data. However, since they count $L_\gamma$ from 100 to 500\,keV instead of the range 30--2000\,keV used in this and other works, their $L_\gamma$ implies $L_0\simeq3L_\gamma$. This is closer to our new value than to the no-evolution result, and in reasonable agreement with our $L_0$ given the differences between the methods. Totani (1997) tried different power-law spectral shapes, but did not allow for variation between bursts and only considered the case of NS-NS mergers. He did study the issue of NS-NS merger times in more detail and found that the tail of late mergers can flatten the flux distribution between redshifts 0 and 1: if a substantial fraction of mergers have a long delay, then because 20 times more binaries formed at $z=1$ their contribution to the present merger rate could exceed that due to current star formation. Whilst the bulk of NS-NS binaries merge within 100\,Myr according to all studies, the fraction merging after more than 5\,Gyr varies greatly. Tutukov \& Yungelson (1994) do find a large fraction of delayed mergers, whereas in the study of Lipunov et~al.\ (1995) it is negligible. Using this long delay, he finds a redshift of 2--2.5, for bursts with $P=0.4\un{ph}{}\un{cm}{-2}\un{s}{-1}$, where we get a median of 3.8. 
Whether this effect is indeed important can only be decided when better estimates of the merger time distribution become available. If GRB are not due to NS-NS mergers (we present some evidence for this below) then the long delay times are not an issue in any case, of course. \section{Observational tests} An important difference between the various compact stellar remnant models is the distance that a gamma-ray burster travels between where it was born as a massive (binary) star and where it produces the burst. A direct supernova origin \cite{woosl2:93} or hypernova \cite{paczy:97} would occur in short-lived objects with low space velocity, which would therefore still be in the star forming regions where they were born. In this case, an optical counterpart to a GRB should always be embedded in a galaxy or star-forming region. A NS-NS or NS-BH merger occurs in a system that has obtained a moderate to high (100--300$\un{km}{}\un{s}{-1}$) space velocity from the two supernova explosions that have taken place in the binary some 100\,Myr before the merger. This means that such GRB should often occur up to 30\,kpc away from any star forming region (corresponding to 6'' at $z=3$) depending on whether the host galaxy has a deep enough potential to hold it. The optical counterpart to GRB\,970228 is embedded in an extended object. That of GRB\,970508 is at least 25\,kpc away from any host so far detected \cite{nbsjt:97}, but the [OII] emission line seen in its spectrum (Metzger et~al.\ 1997) suggests that it lies in a star-forming region. The recent detection of a large absorption column in the X-ray spectrum of GRB\,970828 (Murakami 1997) suggests that it, too, may lie close to a star-forming region. There may thus be some tentative evidence favouring progenitors with low space velocities and very short delays between formation and burst. 
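The angular scale quoted above follows directly from the adopted cosmology ($H_0=70\un{km}{}\un{s}{-1}\un{Mpc}{-1}$, $\Omega_0=1$); a sketch:

```python
import math

C_H0 = 4283.0  # c/H0 in Mpc for H0 = 70 km/s/Mpc

def d_ang(z):
    # Einstein-de Sitter angular diameter distance, Mpc
    return (2.0 * C_H0 / (1.0 + z)) * (1.0 - 1.0 / math.sqrt(1.0 + z))

theta = 0.030 / d_ang(3.0)                  # 30 kpc = 0.030 Mpc, in radians
arcsec = theta * 180.0 / math.pi * 3600.0
assert 5.0 < arcsec < 7.0                   # ~6 arcsec, as quoted in the text
```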
Another consequence of beamed gamma-ray emission in the context of blast wave models is that the optical afterglow can come from less relativistic material which has a greater opening angle \cite{wrm:97,paczy:97}. Consequently, the population of bursters that we can see only by their optical afterglow could be many times larger than that of GRBs. A limit to this is set by high-redshift supernova searches, which have not found any GRB afterglows \cite{rhoad:97}. Since one of them \cite{phdgg:96} has now surveyed close to 10 square degree years, a rate of afterglows in excess of 0.3 per square degree per year is unlikely. With a GRB rate of 0.01/sq.deg./yr, this implies that $f\sub{b,optical}/f\sub{b,\gamma}\lsim30$. Since $f\sub{b,\gamma}\lsim0.03$ from energy constraints, even the optical afterglows may have to be beamed. Since with the evolving rate density we sample most of the star forming Universe, we predict that more sensitive instruments than BATSE would find few bursts with $P<0.2\un{ph}{}\un{cm}{-2}\un{s}{-1}$, unlike in the non-evolving case (see Fig.~\ref{fi:nz}). Kommers et~al.\ (1997) recently studied untriggered bursts below the BATSE threshold and deduced that the flux distribution flattens considerably, perhaps supporting the paucity of faint bursts. At very low fluxes, there may again be an increase due to the first episode of star formation in the early Universe associated with the first metal production \cite{mr2:97} or with the formation of the first stars in elliptical galaxies \cite{plp:97}. With a higher mean redshift than quasars, GRB should be lensed at least as often as quasars, of which 0.5\% are multiply imaged. Therefore, the dim end of the BATSE catalogue may already contain a few examples. From the absence of lensed bursts among bright BATSE bursts Marani et~al.\ (1997) deduce that bursts with $P>1\un{ph}{}\un{cm}{-2}\un{s}{-1}$ should have $z<3$; our fit is consistent with this limit.
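The beaming-fraction limit above is simple rate bookkeeping; a sketch with the survey numbers just quoted:

```python
afterglow_limit = 0.3   # optical transients per sq. deg. per yr (survey limit)
grb_rate = 0.01         # detected GRBs per sq. deg. per yr

# if afterglows are less narrowly beamed than the gamma rays, the observable
# afterglow rate exceeds the GRB rate by the ratio of the beaming fractions
ratio_limit = afterglow_limit / grb_rate           # f_b,optical / f_b,gamma < 30

f_b_gamma = 0.03        # gamma-ray beaming fraction (a few percent, from energy)
f_b_optical_max = ratio_limit * f_b_gamma          # ~0.9: afterglows beamed too
```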
\section{Conclusions} Our results depend only on two key features of the star formation rate: rapid evolution up to $z\approx 1$ and a more gentle variation beyond that up to $z \approx 2.5$. Although little is known about the star formation rate between these redshifts, many indirect arguments suggest that the star formation rate did not evolve strongly between $z=1$ and $z=2.5$. Uncertainties are introduced by our lack of knowledge of the correction due to dust extinction and to a lesser extent by having to assume an IMF. However, this does not change the qualitative nature of the results presented here. Furthermore, the distance corrections due to the wide range in spectral shapes \cite{fb:95} that are the result of assuming that GRBs are standard candles in the 30--2000\,keV (rest frame) range are very important, and neglect of the spectral shape variation can lead to considerably different results. In view of this, and of the fact that we have not explored a range of parameters for the geometry of the Universe, our results obviously have a somewhat exploratory character. Other observational tests, such as the determination of redshifts from afterglow spectra and constraints from the lensing rate of GRBs, will provide additional constraints on parameters. The logical consequence of assuming that GRBs are related to remnants of the most massive stars in any of the ways hitherto proposed is that the GRB rate is proportional to the formation rate of massive stars in the Universe (with the possible exception of NS-NS mergers if those simulations which indicate significant fractions of late mergers are correct). We show that this assumption is consistent with the GRB flux distribution. Compared to previous, non-evolving, models of cosmological bursts we find a twenty-fold increase of the required GRB luminosity, which suggests that the gamma-ray emission is significantly beamed in order that the emitted energy can be supplied by merger/collapse models.
The redshifts of GRBs are also greatly increased, and the very dimmest known ones are at $z\gsim6$, beyond the farthest known quasars. This makes them the most distant known objects, and their optical counterparts very valuable probes of the early evolution of stars and interstellar gas. \section*{ACKNOWLEDGEMENTS} We thank S. McGaugh, R. McMahon, J. Miralda, M. Rees, and J. Wambsganss for helpful discussions and comments.
\section{Alternating Turing machines and proof of~\Cref{theorem:kaexp-alternation-complexity}} \label{appendix:ATMs} This section contains the auxiliary technical tools needed to prove~\Cref{theorem:kaexp-alternation-complexity}, as well as the proof of this theorem and of~\Cref{theorem:kaexp-j-prenex-complexity}. The section does not assume any prior knowledge of alternating Turing machines (ATM), which we define adapting the presentation of~\cite{ChandraKS81}. \subparagraph*{Alternating Turing machines.} An~ATM $\ATuring = (\Alphabet,\States_{\exists},\States_{\forall},\qinit,\qacc,\qrej,\trans)$ is a single-tape machine where $\Alphabet$ is a finite \defstyle{alphabet} containing at least two symbols, one of which is the \defstyle{blank symbol}~$\blanksymb$, the sets $\States_{\exists}$ and $\States_{\forall}$ are two disjoint sets of \defstyle{states}, respectively called \defstyle{existential} and \defstyle{universal} states, $\qinit \in \States_{\exists}$ is the \defstyle{initial state}, and $\qacc$ and $\qrej$ are two auxiliary states that do not belong to $\States_{\exists} \cup \States_{\forall}$. The state~$\qacc$ is the \defstyle{accepting state} and the state~$\qrej$ is the \defstyle{rejecting state}. Every \defstyle{cell} of the tape contains a symbol from $\Alphabet$ and is indexed with a number from $\ensuremath{\mathbb{N}}$ (i.e.~the tape is infinite only in one direction). Let $\States \egdef \States_{\exists} \cup \States_{\forall} \cup \{\qacc,\qrej\}$. The \defstyle{transition function} is a function of the form~$\trans \colon \States \times \Alphabet \to \powerset{(\States \times \Alphabet \times \{-1,+1\})}$, where for every~$\asymbol \in \Alphabet$, $\trans(\qacc,\asymbol) = \{(\qacc,\asymbol,+1)\}$, $\trans(\qrej,\asymbol) = \{(\qrej,\asymbol,-1)\}$, and for every $\astate \in \States_{\forall} \cup \States_{\exists}$, $\trans(\astate,\asymbol) \neq \emptyset$.
We write $\card{\ATuring}$ for the size of~$\ATuring$, i.e.~the number of symbols needed in order to describe~$\ATuring$. A \defstyle{configuration} of~$\ATuring$ is given by a triple $(\mathfrak{w},\astate,k)$. The finite word~$\mathfrak{w} \in \Alphabet^*(\Alphabet \setminus \{\blanksymb\})$ specifies the content of the first $\card{\mathfrak{w}}$ cells of the tape, after which the tape only contains the blank symbol~$\blanksymb$. Notice that $\mathfrak{w}$ does not end with~$\blanksymb$, which means that distinct words do not describe the same content of the tape. The~\emph{positional state}~$(\astate,k)$ describes the current state~$\astate$ of the machine together with the position of the read/write head on the tape, here corresponding to the $k$-th cell. Let~$c = (\mathfrak{w},\astate,k)$ be a configuration of~$\ATuring$ and consider the symbol $\asymbol \in \Alphabet$ read by the read/write head. We write $\Delta(c)$ for the set of configurations reachable in exactly one step from~$c$. In particular, $(\mathfrak{w}',\astate',k') \in \Delta(c)$ if and only if there is $(\astate'',\asymbolbis,i) \in \trans(\astate,\asymbol)$ such that \begin{itemize} \item $k+i \in \ensuremath{\mathbb{N}}$ (i.e.~$k \neq 0$ or $i \neq -1$), $k' = k+i$, $\astate'' = \astate'$ and $\mathfrak{w}'$ describes the tape after the read/write head modifies the content of the $k$-th cell to~$\asymbolbis$, or \item $k + i \not\in \ensuremath{\mathbb{N}}$, $k' = 0$, $\mathfrak{w}' = \mathfrak{w}$ and $\astate' = \qrej$. \end{itemize} A~\emph{computation path} of~$\ATuring$ is a sequence $(c_0,\dots,c_d)$ of configurations such that $c_{i} \in \Delta(c_{i-1})$ holds for every $i \in [1,d]$. If the state~$\qacc$ or~$\qrej$ appears in the last configuration~$c_d$, the path is called \emph{terminating}. Notice that if $\qacc$ (or~$\qrej$) belongs to some $c_i$ ($i \in [1,d]$), then by definition of $\trans$ it also belongs to every $c_j$ with $j > i$, and thus the path is terminating.
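The one-step relation~$\Delta$ is easy to prototype on the triple representation of configurations. The following Python sketch is purely illustrative: the blank marker `_` and the state name `q_rej` are placeholders of the sketch, not part of the formal development.

```python
# Illustrative sketch of the one-step relation Delta on configurations
# (w, q, k): tape word w without trailing blanks, state q, head position k.

BLANK = "_"

def delta_step(trans, config):
    """Set of configurations reachable from `config` in exactly one step."""
    w, q, k = config
    symbol = w[k] if k < len(w) else BLANK      # blank beyond the written part
    successors = set()
    for (q2, b, move) in trans[(q, symbol)]:
        if k + move < 0:                        # head would fall off the tape:
            successors.add((w, "q_rej", 0))     # move to the rejecting state
            continue
        tape = list(w) + [BLANK] * max(0, k + 1 - len(w))
        tape[k] = b                             # write the new symbol
        # strip trailing blanks so distinct words encode distinct tapes
        successors.add(("".join(tape).rstrip(BLANK), q2, k + move))
    return successors
```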
To describe the notion of acceptance for~$\ATuring$, we introduce a \emph{partial} labelling function $\gamma_{\ATuring} \colon \Alphabet^* \times \States \times \ensuremath{\mathbb{N}} \to \{\tacc,\trej\}$ (we drop the prefix~$\ATuring$ from~$\gamma_{\ATuring}$ when clear from the context). Given $\mathfrak{w} \in \Alphabet^*$ and $k \in \ensuremath{\mathbb{N}}$, the function~$\gamma$ is defined as follows: \begin{itemize} \item $\gamma(\mathfrak{w},\qacc,k) \egdef \tacc$ and $\gamma(\mathfrak{w},\qrej,k) \egdef \trej$, \item for every $\astate \in \States_{\exists}$, \begin{itemize} \item $\gamma(\mathfrak{w},\astate,k) = \tacc$ if $\gamma(\mathfrak{w}',\astate',k') = \tacc$ holds for some~$(\mathfrak{w}',\astate',k') \in \Delta(\mathfrak{w},\astate,k)$, \item $\gamma(\mathfrak{w},\astate,k) = \trej$ if $\gamma(\mathfrak{w}',\astate',k') = \trej$ holds for every~$(\mathfrak{w}',\astate',k') \in \Delta(\mathfrak{w},\astate,k)$, \item otherwise, $\gamma(\mathfrak{w},\astate,k)$ is undefined, \end{itemize} \item for every $\astate \in \States_{\forall}$, \begin{itemize} \item $\gamma(\mathfrak{w},\astate,k) = \tacc$ if $\gamma(\mathfrak{w}',\astate',k') = \tacc$ holds for every~$(\mathfrak{w}',\astate',k') \in \Delta(\mathfrak{w},\astate,k)$, \item $\gamma(\mathfrak{w},\astate,k) = \trej$ if $\gamma(\mathfrak{w}',\astate',k') = \trej$ holds for some~$(\mathfrak{w}',\astate',k') \in \Delta(\mathfrak{w},\astate,k)$, \item otherwise, $\gamma(\mathfrak{w},\astate,k)$ is undefined. \end{itemize} \end{itemize} The language described by $\ATuring$ is $\alang(\ATuring) \egdef \{\mathfrak{w} \in \Alphabet^* \mid \gamma(\mathfrak{w},\qinit,0) = \tacc\}$. For our purposes, we are only interested in the notions of time-bounded and alternation-bounded acceptance.
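The case analysis defining~$\gamma$ corresponds to a bounded recursive evaluation of the computation tree. In the sketch below configurations are abstract values, `succ` plays the role of~$\Delta$, and the `fuel` parameter bounds the recursion depth (only time-bounded acceptance is used later); all names are assumptions of the sketch.

```python
# Bounded evaluation of the partial labelling gamma. Configurations are
# abstract triples whose second component is the state; `succ` plays the
# role of Delta; `fuel` bounds the recursion depth.

def gamma(succ, exists_states, config, fuel):
    """Return 'acc', 'rej', or None when undefined within `fuel` steps."""
    q = config[1]
    if q == "q_acc":
        return "acc"
    if q == "q_rej":
        return "rej"
    if fuel == 0:
        return None
    labels = [gamma(succ, exists_states, c, fuel - 1) for c in succ(config)]
    if q in exists_states:
        if "acc" in labels:                        # some child accepts
            return "acc"
        if labels and all(l == "rej" for l in labels):
            return "rej"
    else:                                          # universal state
        if labels and all(l == "acc" for l in labels):
            return "acc"
        if "rej" in labels:                        # some child rejects
            return "rej"
    return None
```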
We say that~$\ATuring$ \emph{accepts} (resp.~\emph{rejects}) a word~$\mathfrak{w} \in \Alphabet^*$ in time~$t \in \ensuremath{\mathbb{N}}$ whenever examining all the terminating computation paths~$(c_0,\dots,c_d)$, where $c_0 = (\mathfrak{w},\qinit,0)$ and $d \in [0,t]$, is sufficient to conclude whether $\gamma(\mathfrak{w},\qinit,0) = \tacc$ (resp.~$\gamma(\mathfrak{w},\qinit,0) = \trej$). The machine~$\ATuring$ \emph{halts} on the word~$\mathfrak{w}$ in time~$t$ if it either accepts or rejects~$\mathfrak{w}$ in time~$t$. Consider functions~$f,g \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$. The \ensuremath{\normalfont{\mathsf{ATM}}}\xspace~$\ATuring$ is $f$-time bounded and~$g$-alternation bounded if and only if for every $\mathfrak{w} \in \Alphabet^*$, $\ATuring$ halts on $\mathfrak{w}$ in time~$f(\card{\mathfrak{w}})$ and, along each terminating computation path~$(c_0,\dots,c_d)$, where $c_0 = (\mathfrak{w},\qinit,0)$ and~$d \in [0,f(\card{\mathfrak{w}})]$, the positional state of the machine alternates between existential and universal states at most~$g(\card{\mathfrak{w}})$ times. \begin{proposition}[\cite{ChandraKS81}] \label{proposition:kaexppol-membership} Consider $k \in \ensuremath{\mathbb{N}}_+$, polynomials~$f,g \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$, and~$h(\avar) \egdef \tetra(k,f(\avar))$. Let $\ATuring$ be an $h$-time bounded and $g$-alternation bounded \ensuremath{\normalfont{\mathsf{ATM}}}\xspace on alphabet~$\Alphabet$. Let~${\mathfrak{w} \in \Alphabet^*}$. The problem of deciding whether $\mathfrak{w} \in \alang(\ATuring)$ is $k$\textsc{AExp}$_{\textsc{Pol}}$-complete. \end{proposition} \subparagraph*{From $\ensuremath{\normalfont{\mathsf{ATM}}}\xspace$ to $\nTM$.} Working towards a proof of~\Cref{theorem:kaexp-alternation-complexity}, we now aim to define a~\nTM that checks whether its input words represent computation paths of an alternating Turing machine.
Throughout this section, we fix~$k \in \ensuremath{\mathbb{N}}_+$ (encoded in unary), two polynomials~$f,g \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ and~$h(\avar) \egdef \tetra(k,f(\avar))$. We consider an~$h$-time bounded and $g$-alternation bounded \ensuremath{\normalfont{\mathsf{ATM}}}\xspace $\ATuring = (\Alphabet,\States_{\exists},\States_{\forall},\qinit,\qacc,\qrej,\trans)$. Let $\mathfrak{w} \in \Alphabet^*$. Since $\ATuring$ is \mbox{$h$-time} bounded, we can represent the configurations of~$\ATuring$ as words of the finite language $C = \{ \widehat{\mathfrak{w}} \in \Alphabet^* \cdot \States \cdot \Alphabet^+ \mid \card{\widehat{\mathfrak{w}}} = h(\card{\mathfrak{w}}) \}$. More precisely, we say that a word $\widehat{\mathfrak{w}} \in C$ \emph{encodes the configuration} $(\mathfrak{w}',\astate,k) \in \Alphabet^* \times \States \times \ensuremath{\mathbb{N}}$ whenever there are words~$\mathfrak{w}'' \in \Alphabet^*$, $\mathfrak{w}''' \in \Alphabet^+$ and $\mathfrak{w}_{\blanksymb} \in \{\blanksymb\}^*$ such that $\mathfrak{w}' \cdot \mathfrak{w}_{\blanksymb} = \mathfrak{w}'' \cdot \mathfrak{w}'''$, $\card{\mathfrak{w}''} = k$ and $\widehat{\mathfrak{w}} = \mathfrak{w}'' \cdot \astate \cdot \mathfrak{w}'''$. For instance, the configuration $(\asymbol\asymbolbis, \astate, 3)$ is encoded by the word $\asymbol\asymbolbis\blanksymb\astate\blanksymb\dots\blanksymb$ of length $h(\card{\mathfrak{w}})$. Each word~$\widetilde{\mathfrak{w}} \in C$ encodes exactly one configuration of~$\ATuring$, which we denote by~$c(\widetilde{\mathfrak{w}})$. We extend our encoding to computation paths of~$\ATuring$, with the aim of defining a~\nTM that checks whether its input words represent a computation path of~$\ATuring$. To this end, we introduce a new symbol~$\trail$ that does not appear in~$\Alphabet \cup \States$ and that is used to delimit the end of a computation path.
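The fixed-length encoding of single configurations can be prototyped directly. In this Python sketch the blank is rendered as `_` and all helper names are assumptions; the round-trip reproduces the example above.

```python
# Sketch of the fixed-length configuration encoding: (w, q, k) becomes a
# word w'' . q . w''' of length h, where w'' has length k and trailing
# blanks pad the tape content.

BLANK = "_"

def encode(config, h):
    """Encode (w, q, k) as a word of length h."""
    w, q, k = config
    padded = w + BLANK * (h - 1 - len(w))       # tape content, blank-padded
    return padded[:k] + q + padded[k:]

def decode(word, states):
    """Inverse of `encode`: recover (w, q, k) from a word in A* . S . A+."""
    positions = [i for i, s in enumerate(word) if s in states]
    if len(positions) != 1 or positions[0] == len(word) - 1:
        return None                             # not of the shape A* . S . A+
    k = positions[0]
    return ((word[:k] + word[k + 1:]).rstrip(BLANK), word[k], k)
```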
Let $\overline{\Alphabet} = \Alphabet \cup \States \cup \{\trail\}$ and consider a word~$\overline{\mathfrak{w}} \in \overline{\Alphabet}^*$. Again, since $\ATuring$ is $h$-time bounded, its computation paths can be encoded by a sequence of at most $h(\card{\mathfrak{w}})$ words from $C$ (so,~a word of length at most~$h(\card{\mathfrak{w}})^2$). Hence, let~$\widetilde{\mathfrak{w}} \in (\Alphabet \cup \States)^*$ be the only prefix of the word $\overline{\mathfrak{w}} \cdot \trail$ such that either $\widetilde{\mathfrak{w}} \cdot \trail$ is a prefix of $\overline{\mathfrak{w}} \cdot \trail$ or $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. We say that $\overline{\mathfrak{w}}$ encodes a computation path of~$\ATuring$ if and only if $\widetilde{\mathfrak{w}} = \mathfrak{w}_1 \cdot {\dots} \cdot \mathfrak{w}_{d}$, for some words $\mathfrak{w}_1,\dots,\mathfrak{w}_d \in (\Alphabet \cup \States)^*$ such that \begin{itemize} \item for every $i \in [1,d]$, $\mathfrak{w}_i \in C$, \item for every $i \in [1,d-1]$, $c(\mathfrak{w}_{i+1}) \in \Delta(c(\mathfrak{w}_i))$. \end{itemize} Each word in $\overline{\Alphabet}^*$ encodes at most one computation path of~$\ATuring$. Moreover, notice that given two words~$\overline{\mathfrak{w}}_1,\overline{\mathfrak{w}}_2 \in \overline{\Alphabet}^*$, if $\overline{\mathfrak{w}}_1$ has length at least $h(\card{\mathfrak{w}})^2$, then $\overline{\mathfrak{w}}_1$ and $\overline{\mathfrak{w}}_1\cdot\overline{\mathfrak{w}}_2$ encode the same computation path (if they encode one). We remind the reader that given a \nTM~$\Turing$ working on an alphabet~$\Alphabet$, we write $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ for the run of the \nTM on input $(\mathsf{w}_1,\dots,\mathsf{w}_n) \in \Alphabet^n$. 
We extend this notation and, given $m \leq n$, write $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ for~the run of~$\Turing$ on input~${(\mathfrak{w}_1,\dots,\mathfrak{w}_m,\epsilon,\dots,\epsilon) \in \Alphabet^n}$, i.e.~an input where the last $n - m$ tapes are empty. Given two inputs $(\mathsf{w}_1,\dots,\mathsf{w}_n) \in \Alphabet^n$ and $(\mathsf{w}_1',\dots,\mathsf{w}_n') \in \Alphabet^n$ we say that $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ and $\Turing(\mathsf{w}_1',\dots,\mathsf{w}_n')$ \defstyle{perform the same computational steps} if the sequence of states produced during the runs $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ and $\Turing(\mathsf{w}_1',\dots,\mathsf{w}_n')$ is the same. In particular, given~$t \in \ensuremath{\mathbb{N}}$, $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ accepts (resp.~rejects) in time~$t$ if and only if $\Turing(\mathsf{w}_1',\dots,\mathsf{w}_n')$ accepts (resp.~rejects) in time~$t$. One last notion is needed in order to prove~\Cref{theorem:kaexp-alternation-complexity}. We say that a computation path $\pi = (c_0,\dots,c_d)$ of $\ATuring$ is an (existential or universal) \emph{hop} if one of the following holds: \begin{itemize}[align=left] \item[\textit{(existential hop)}:] all the states in the configurations $c_0,\dots,c_{d-1}$ belong to~$\States_{\exists}$, and the state of $c_d$ belongs to $\States_{\forall} \cup \{\qacc,\qrej\}$, \item[\textit{(universal hop)}:] all the states in the configurations $c_0,\dots,c_{d-1}$ belong to~$\States_{\forall}$, and the state of $c_d$ belongs to $\States_{\exists} \cup \{\qacc,\qrej\}$. \end{itemize} Intuitively, in a hop $\pi = (c_0,\dots,c_d)$, no alternation occurs in the computation path $(c_0,\dots,c_{d-1})$ and either $\pi$ is terminating or an alternation occurs between $c_{d-1}$ and $c_d$. We are now ready to state the essential technical lemma (\Cref{lemma:almost-theorem-kaexp-alternation}) that allows us to prove~\Cref{theorem:kaexp-alternation-complexity}.
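The hop classification can be rendered as a small decision procedure; in this sketch a path is a list of configurations whose second component is the state, and the state and label names are illustrative only.

```python
# Classify a computation path as an existential hop, a universal hop, or
# neither: the prefix c_0 ... c_{d-1} must stay within one kind of state,
# and the last configuration must switch kind or be terminating.

def hop_kind(path, exists_states, forall_states):
    *prefix, last = path
    q_last = last[1]
    states_prefix = {c[1] for c in prefix}
    terminal = {"q_acc", "q_rej"}
    if states_prefix <= exists_states and q_last in forall_states | terminal:
        return "existential"
    if states_prefix <= forall_states and q_last in exists_states | terminal:
        return "universal"
    return None
```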
For simplicity, the lemma is stated referring to~$k$,~$f$,~$g$,~$h$,~$\ATuring$, $\mathfrak{w}$ and $\overline{\Alphabet}$ as defined above. \begin{lemma} \label{lemma:almost-theorem-kaexp-alternation} Let $m = g(\card{\mathfrak{w}})$ and $n \geq m + 3$. One can construct in polynomial time in $n$, $k$,~$\card{\ATuring}$ and $\card{\mathfrak{w}}$ a~\nTM~$\Turing$ working on alphabet~$\overline{\Alphabet}$ with blank symbol~$\trail$, and such that \begin{enumerate}[label=\rm{\textbf{\Roman*}}] \setlength{\itemsep}{3pt} \item\label{lem:almost-theorem-prop-1} $\Turing$ runs in polynomial time on the length of its inputs, \item\label{lem:almost-theorem-prop-2} given $i \in [1,m]$ and $\mathfrak{w}_1,\dots,\mathfrak{w}_i,\dots,\mathfrak{w}_m,\mathfrak{w}_i' \in \overline{\Alphabet}^*$, such that~$\card{\mathfrak{w}_i} \geq h(\card{\mathfrak{w}})^2$, $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ and $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_i \cdot \mathfrak{w}_i', \dots, \mathfrak{w}_m)$ perform the same computational steps, \item\label{lem:almost-theorem-prop-2b} for all $\mathfrak{w}_1,\dots,\mathfrak{w}_m \in \overline{\Alphabet}^*$ and $\mathfrak{w}_{m+1},\mathfrak{w}_{m+1}',\dots,\mathfrak{w}_n,\mathfrak{w}_n' \in \overline{\Alphabet}^*$, $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m,\mathfrak{w}_{m+1},\dots,\mathfrak{w}_n)$ and $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m,\mathfrak{w}_{m+1}',\dots,\mathfrak{w}_n')$ perform the same computational steps, \item\label{lem:almost-theorem-prop-3} $\mathfrak{w} \in \alang(\ATuring)$ \ iff \ $Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts, \end{enumerate} where for every $i \in [1,m]$, $Q_i = \exists$ if $i$ is odd, and otherwise $Q_i = \forall$.
\end{lemma} \begin{proof} Roughly speaking, the machine~$\Turing$ we shall define checks if the input~$(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ encodes an accepting computation path of~$\ATuring$, where each $\mathfrak{w}_i$ represents a hop. Notice that the lemma imposes~$n \geq m + 3$, with $m = g(\card{\mathfrak{w}})$. This is done purely for technical convenience, as it allows us to rely on three additional tapes, i.e.~the $n$-th, $(n-1)$-th and $(n-2)$-th tapes, whose initial content does not affect the run of~$\Turing$ (see property~\eqref{lem:almost-theorem-prop-2b}). These three tapes are used by~$\Turing$ as auxiliary tapes. To simplify further the presentation of the proof without loss of generality, we assume that $\Turing$ features \emph{pseudo-oracles} that implement the standard unary functions on natural numbers~$(\,. + 1)$, $\ceil{\log(.)}$, $\floor{\sqrt{.}}$, as well as the function~$(\,. > f(\card{\mathfrak{w}}))$. As in the case of oracles, the machine~$\Turing$ can call a pseudo-oracle, which reads the \defstyle{initial content} of the current tape of~$\Turing$ (i.e.~the word in $(\overline{\Alphabet}\setminus \{\trail\})^*$ that occurs at the beginning of the tape currently scanned by the read/write head, and delimited by the first occurrence of~$\trail$) and produces the result of the function it implements at the beginning of the (auxiliary) $n$-th tape. However, all the functions implemented by pseudo-oracles can be computed in polynomial time, and one can construct in polynomial time Turing machines that compute them. More precisely, $\BigO{1}$ states are needed to implement~$(\,. + 1)$,~$\ceil{\log(.)}$ and $\floor{\sqrt{.}}$, and $\BigO{f(\card{\mathfrak{w}})}$ states are needed in order to implement $(\,. > f(\card{\mathfrak{w}}))$.% \footnote{Given an input written in unary, the only non-standard function~$(\,.
> f(\card{\mathfrak{w}}))$ can be implemented with a Turing machine that runs in linear time on the size of its input and relies on a chain of~$f(\card{\mathfrak{w}})+1$ states, plus the accepting state and rejecting state. At the $i$-th step of the computation, with~$i \leq f(\card{\mathfrak{w}})$, the read/write head is on the $i$-th symbol~$c$ of the input tape and the machine is in its $i$-th state. If~$c$ is non-blank, the machine moves the read/write head to the right and switches to the $(i+1)$-th state. Otherwise, the machine rejects the input. On the last state (reached at step $i = f(\card{\mathfrak{w}})+1$, if the machine did not previously reject), if the head reads a non-blank symbol, then the machine accepts, otherwise it rejects. } So, the pseudo-oracles \emph{can be effectively removed} by incorporating their equivalent Turing machines directly inside~$\Turing$, only growing its size polynomially, and resulting in~$\Turing$ being a standard~\nTM. \Cref{proofT8:pseudo-oracles} formalises the semantics of the pseudo-oracles, given the initial content $c \in (\overline{\Alphabet} \setminus \{\trail\})^*$ of the current tape. Without loss of generality, we also assume~$\card{\mathfrak{w}}$ and $f(\card{\mathfrak{w}})$ to be at least~$1$. Lastly, again in order to simplify the presentation and without loss of generality, we assume that~$\Turing$ is able to reposition the read/write head to the first position of a given tape. This is a standard assumption, which can be removed by simply adding a new symbol to the alphabet (historically,~\textschwa), which shall precede the inputs of the machine. Then, the machine can retrieve the first (writeable) position on the tape by moving the read/write head to the left until it reads the new symbol~\textschwa, to then move right once, without overwriting~\textschwa.
\begin{figure} \flushright \begin{minipage}{0.87\linewidth} \begin{itemize}[align=right,nosep,itemsep=5pt,before=\vspace{2pt},after=\vspace{2pt}] \item[$(\,. +1)$ :] write~$c \cdot \asymbol \cdot \trail$ at the beginning of the $n$-th tape, where $\asymbol \in \overline{\Alphabet} \setminus \{\trail\}$ is a fixed symbol. So, the length of the initial content on the $n$-th tape becomes $\card{c}+1$. \item[$\ceil{\log(.)}$ :] write $\asymbol^{\ceil{\log(\card{c})}}\cdot\trail$ at the beginning of the $n$-th tape, where $\asymbol \in \overline{\Alphabet} \setminus \{\trail\}$ is fixed. So, the length of the initial content of the $n$-th tape becomes~$\ceil{\log(\card{c})}$. \item[$\floor{\sqrt{.}}$ :] write $\asymbol^{\floor{\sqrt{\card{c}}}}\cdot\trail$ at the beginning of the $n$-th tape, where $\asymbol \in \overline{\Alphabet} \setminus \{\trail\}$ is fixed. So, the length of the initial content of the $n$-th tape becomes~$\floor{\sqrt{\card{c}}}$. \item[$(\,. > f(\card{\mathfrak{w}}))$ :] if $\card{c} > f(\card{\mathfrak{w}})$, write $\asymbol \cdot \trail$ at the beginning of the $n$-th tape, where $\asymbol \in \overline{\Alphabet} \setminus \{\trail\}$ is fixed. Else, write $\trail$ at the beginning of the $n$-th tape. So, the initial content of the $n$-th tape becomes empty if and only if $\card{c} \leq f(\card{\mathfrak{w}})$. \end{itemize} \end{minipage} \caption{Pseudo-oracles; $c \in (\overline{\Alphabet} \setminus \{\trail\})^*$ is the initial content of the current tape.} \label{proofT8:pseudo-oracles} \end{figure} Before describing~$\Turing$, we notice that property~\eqref{lem:almost-theorem-prop-2} requires checking whether~$\card{\mathfrak{w}_i} \geq h(\card{\mathfrak{w}})^2$. We rely on~\Cref{lemma:tetra-property-3,lemma:isqrt} in order to perform this test in polynomial time on $\card{\mathfrak{w}_i}$, without computing $h(\card{\mathfrak{w}})$ explicitly.
We have, \begin{align*} \card{\mathfrak{w}_i} \geq h(\card{\mathfrak{w}})^2 & \text{ iff } \ \isqrt{\card{\mathfrak{w}_i}} \geq h(\card{\mathfrak{w}}), & \text{ by~\Cref{lemma:isqrt}}\\[3pt] & \text{ iff } \ \isqrt{\card{\mathfrak{w}_i}} + 1 > \tetra(k,f(\card{\mathfrak{w}})),\\[3pt] & \text{ iff } \ \klog{k}(\isqrt{\card{\mathfrak{w}_i}} + 1) > f(\card{\mathfrak{w}}). & \text{ by~\Cref{lemma:tetra-property-3}} \end{align*} To check whether $\klog{k}(\isqrt{\card{\mathfrak{w}_i}} + 1) > f(\card{\mathfrak{w}})$ holds, it is sufficient to write $\mathfrak{w}_i$ on the $n$-th tape, and then call first the pseudo-oracle for $\floor{\sqrt{.}}$, followed by the pseudo-oracle for $(\,. + 1)$ and by $k$ calls to the pseudo-oracle for~$\ceil{\log{(.)}}$ (recall that $k$ is written in unary), and lastly one call to the pseudo-oracle for $(\, . > f(\card{\mathfrak{w}}))$. When taking into account the number of states needed in order to implement these pseudo-oracles, we conclude that the function $\klog{k}(\isqrt{.} + 1) > f(\card{\mathfrak{w}})$ can be implemented by a Turing machine with~$\BigO{k + f(\card{\mathfrak{w}})}$ states. Hence, below we assume without loss of generality that~$\Turing$ also features pseudo-oracles for the function~$\klog{k}(\isqrt{.} + 1) > f(\card{\mathfrak{w}})$ as well as the (similar) functions~$\klog{k}(\,. + 1) > f(\card{\mathfrak{w}})$ and~$\klog{k}(\,. \,) > f(\card{\mathfrak{w}})$, both requiring~$\BigO{k + f(\card{\mathfrak{w}})}$ states to be implemented. Ultimately, because of the polynomial bounds on the number of states required to implement these functions,~$\Turing$ (without pseudo-oracles) can be constructed in \textsc{PTime}\xspace on~$n$,~$k$,~$\card{\mathfrak{w}}$ and~$\card{\ATuring}$. We are now ready to construct the~\nTM~$\Turing$ that works on alphabet~$\overline{\Alphabet}$. 
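The chain of equivalences above is easy to validate numerically for small parameters. This Python sketch uses exact integer arithmetic (`(n - 1).bit_length()` computes the ceiling of the base-2 logarithm); all helper names are assumptions of the sketch.

```python
import math

def tetra(k, n):
    """Tower of exponentials: tetra(0, n) = n, tetra(k, n) = 2^tetra(k-1, n)."""
    return n if k == 0 else 2 ** tetra(k - 1, n)

def ceil_log2(n):
    """ceil(log2(n)) for n >= 1, computed exactly on integers."""
    return (n - 1).bit_length() if n >= 1 else 0

def klog(k, n):
    """k-fold iteration of ceil_log2."""
    for _ in range(k):
        n = ceil_log2(n)
    return n

def size_test(wi_len, k, f_val):
    """|w_i| >= h(|w|)^2 with h = tetra(k, f(|w|)), without computing h."""
    return klog(k, math.isqrt(wi_len) + 1) > f_val
```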
For an input $(\mathfrak{w}_1,\dots,\mathfrak{w}_n) \in (\overline{\Alphabet}^*)^n$, $\Turing$ operates following $m$ ``macrosteps'' on the first $m$ tapes, disregarding the input of the last $n-m \geq 1$ tapes. At step $i \in [1,m]$, the machine operates as follows: \begin{description} \setlength{\itemsep}{3pt} \item[macrostep $i = 1$.] Recall that~$Q_1 = \exists$. $\Turing$ works on the first tape. \begin{enumerate} \setlength{\itemsep}{3pt} \item\label{turing:step1:substep1} $\Turing$ reads the prefix~$\widetilde{\mathfrak{w}}$ of~$\mathfrak{w}_1$, such that if $\card{\mathfrak{w}_1} \leq h(\card{\mathfrak{w}})^2$ then $\widetilde{\mathfrak{w}} = \mathfrak{w}_1$, else $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. (\textit{Note: $\Turing$ does not consider the part of the input after the $h(\card{\mathfrak{w}})^2$-th symbol}) \item\label{turing:step1:substep2} The \nTM~$\Turing$ then checks whether $\widetilde{\mathfrak{w}}$ corresponds to an \emph{existential hop} in~$\ATuring$ starting from the state~$\qinit$. It does so by analysing~$\widetilde{\mathfrak{w}}$ from left to right in chunks of length $h(\card{\mathfrak{w}})$ (i.e.~the size of a word in the language~$C$ of the configurations of~$\ATuring$), possibly with the exception of the last chunk, which can be of smaller size. Let $d$ be the number of chunks, so that $\widetilde{\mathfrak{w}} = \widetilde{\mathfrak{w}}_1 \cdot {\dots} \cdot \widetilde{\mathfrak{w}}_d$, where $\widetilde{\mathfrak{w}}_j$ is the $j$-th chunk analysed by~$\Turing$.
\begin{enumerate}[label=(\alph*)] \setlength{\itemsep}{3pt} \item If $\widetilde{\mathfrak{w}}_j \not\in C$, $\Turing$ \textbf{rejects}, (\textit{Note: else, $\widetilde{\mathfrak{w}}_j$ encodes the configuration $c(\widetilde{\mathfrak{w}}_j)$~of~$\ATuring$}) \item if the symbol~$\qinit$ does not occur in the first chunk $\widetilde{\mathfrak{w}}_1$, then $\Turing$ \textbf{rejects}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and contains a symbol from~$\States_\forall$, then $\Turing$ \textbf{rejects}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and $c(\widetilde{\mathfrak{w}}_{j+1}) \not \in \Delta(c(\widetilde{\mathfrak{w}}_j))$, then $\Turing$ \textbf{rejects}. \item If the length of the last chunk~$\widetilde{\mathfrak{w}}_d$ is not $h(\card{\mathfrak{w}})$ (i.e.~the length of~$\widetilde{\mathfrak{w}}$ is not a multiple of~$h(\card{\mathfrak{w}})$), then $\Turing$ \textbf{rejects}, (\textit{Note: this case subsumes the case where~$\mathfrak{w}_1$ is empty}) \item if the last chunk $\widetilde{\mathfrak{w}}_d$ contains a symbol from $\States_{\exists}$, then $\Turing$ \textbf{rejects}. \end{enumerate} \item\label{turing:step1:substep3} $\Turing$ analyses the last chunk $\widetilde{\mathfrak{w}}_d$ of the previous step. If~$\qacc$ occurs in $\widetilde{\mathfrak{w}}_d$, then $\Turing$ \textbf{accepts}. Otherwise, if either~$m = i = 1$ or $\widetilde{\mathfrak{w}}_d$ contains the symbol~$\qrej$, then $\Turing$ \textbf{rejects}. Else,~$\Turing$ writes down $\widetilde{\mathfrak{w}}_d \cdot \trail$ at the beginning of the first tape, and \textbf{moves to macrostep~$2$}. Let $\mathfrak{w}^{(1)} = \widetilde{\mathfrak{w}}_d$, i.e.~the word (currently stored at the beginning of the first tape) encoding the last configuration of the computation path encoded by~$\widetilde{\mathfrak{w}}$. \end{enumerate} From Step~\eqref{turing:step1:substep1}, we establish the following property of macrostep~1.
\begin{claim} \label{claim:turing:macrostep-1-input} Macrostep~1 reads at most the first $h(\card{\mathfrak{w}})^2$ characters of the input of the first tape, and does not depend on the input written on the other tapes. \end{claim} We analyse in detail the complexity of macrostep~1. \begin{claim} \label{claim:turing:macrostep-1} Macrostep~$1$ runs in polynomial time in~$\card{\mathfrak{w}_1}$,~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires only polynomially many states to be implemented, with respect to~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$. \end{claim} \begin{claimproof} We give a more fine-grained description of the various steps, and analyse their complexity. Recall that the tapes $n$, $n-1$ and $n-2$ are auxiliary. \defstyle{Step~\eqref{turing:step1:substep1}} can be performed in polynomial time in~$\card{\mathfrak{w}_1}$, $k$, $\card{\mathfrak{w}}$ and $\card{\overline{\Alphabet}} \leq \card{\ATuring}$, as shown below. \begin{MyCode} write ${\trail}^{\card{\mathfrak{w}_1}+1}$ at the beginning of the $(n-1)$-th tape while true //Invariant: The $(n-1)$-th tape contains a prefix of $\mathfrak{w}_1$ // of length less than $\min(\card{\mathfrak{w}_1},h(\card{\mathfrak{w}})^2)$ let $i$ be the length of the initial content of the $(n-1)$-th tape read the $i$-th symbol $\asymbol$ of the first tape if $\asymbol = \trail$, break write $\asymbol$ in the $i$-th position of the $(n-1)$-th tape call the pseudo-oracle for $\smash{\klog{k}(\isqrt{.} + 1) > f(\card{\mathfrak{w}})}$ on the $(n-1)$-th tape if the initial content of the $n$-th tape is not empty, break \end{MyCode} At the end of this procedure, the $(n-1)$-th tape contains the word~$\widetilde{\mathfrak{w}}$. Indeed, lines 3, 4 and 6 copy the $i$-th character of $\mathfrak{w}_1$ to the $(n-1)$-th tape.
If $\card{\mathfrak{w}_1} < h(\card{\mathfrak{w}})^2$, then $\widetilde{\mathfrak{w}} = \mathfrak{w}_1$ and after copying $\mathfrak{w}_1$ on the $(n-1)$-th tape, the test in line 5 becomes true. Otherwise, after copying the first $h(\card{\mathfrak{w}})^2$ characters of $\mathfrak{w}_1$ to the $(n-1)$-th tape, the pseudo-oracle call of line 7 will write a non-blank symbol on the $n$-th tape, making the test in line 8 true. Time-wise, the complexity of the procedure above is polynomial in~$\card{\mathfrak{w}_1}$, $k$ (line 9), $\card{\mathfrak{w}}$ (line 9) and $\card{\overline{\Alphabet}} \leq \card{\ATuring}$ (lines 5--8). Space-wise, performing line 1 of the procedure requires a constant number of states. Performing lines 5--8 requires $\BigO{\card{\overline{\Alphabet}}}$ states, as~$\Turing$ needs to keep track of which symbol was read on the first tape, in order to write it on the $(n-1)$-th tape. For line 9 instead, we must take into account the number of states required to implement the computation done by the pseudo-oracle directly in~$\Turing$. As stated at the beginning of the proof, these are $\BigO{k + f(\card{\mathfrak{w}})}$ many states. Hence, $\Turing$ can implement the procedure above with $\BigO{k + f(\card{\mathfrak{w}}) + \card{\ATuring}}$ many states. Let us now move to \defstyle{Step~\eqref{turing:step1:substep2}}. Recall that, after \defstyle{Step~\eqref{turing:step1:substep1}},~$\widetilde{\mathfrak{w}}$ is stored at the beginning of tape $(n-1)$. The machine~$\Turing$ continues as follows, where the lines marked with symbols of the form \ $\pgftextcircled{i}$ \ are later expanded and analysed in detail.
\begin{MyCode} write $\widetilde{\mathfrak{w}} \cdot \trail$ at the beginning of the first tape if the initial content of the first tape is empty, $\text{\rm \textbf{reject}}$ $\pgftextcircled{1}$ if $\qinit$ does not appear in the first $h(\card{\mathfrak{w}})$ characters of $\widetilde{\mathfrak{w}}$, $\text{\rm \textbf{reject}}$ while true //Invariant: The 1st tape starts with a non-empty suffix of $\widetilde{\mathfrak{w}}$. let $i$ be the length of the initial content of the first tape write $\trail^{i+1}$ on both the $(n-1)$-th and $(n-2)$-th tapes call the pseudo-oracle for $\smash{\klog{k}(\, . + 1) > f(\card{\mathfrak{w}})}$ on the first tape if the initial content of the $n$-th tape is empty, $\text{\rm \textbf{reject}}$ $\pgftextcircled{2}$ copy the first $h(\card{\mathfrak{w}})$ characters of the first tape to the $(n-1)$-th tape //The tape $(n-1)$ now contains a chunk $\widetilde{\mathfrak{w}}_j$. if the initial contents of the $(n-1)$-th tape and the first tape are equal, break $\pgftextcircled{3}$ if the initial content of the $(n-1)$-th tape does not belong to $C$, $\text{\rm \textbf{reject}}$ $\pgftextcircled{4}$ if a symbol from $\States_\forall$ occurs in the initial content of the $(n-1)$-th tape, $\text{\rm \textbf{reject}}$ $\pgftextcircled{5}$ shift the initial content of the first tape $h(\card{\mathfrak{w}})$ positions to the left, effectively removing the first $h(\card{\mathfrak{w}})$ characters from the tape //The first tape now starts with the chunk $\widetilde{\mathfrak{w}}_{j+1}$. $\pgftextcircled{6}$ copy the first $h(\card{\mathfrak{w}})$ characters of the first tape to the $(n-2)$-th tape //The tape $(n-2)$ now contains the chunk $\widetilde{\mathfrak{w}}_{j+1}$. $\pgftextcircled{7}$ perform step (2d) //$\widetilde{\mathfrak{w}}_j$ and $\widetilde{\mathfrak{w}}_{j+1}$ on tapes $(n-1)$ and $(n-2)$, respectively //Post: The tape $(n-1)$ contains the last chunk $\widetilde{\mathfrak{w}}_d$.
$\pgftextcircled{8}$ if the initial content of the $(n-1)$-th tape does not belong to $C$, $\text{\rm \textbf{reject}}$ $\pgftextcircled{9}$ if a symbol from $\States_\exists$ occurs in the initial content of the $(n-1)$-th tape, $\text{\rm \textbf{reject}}$ \end{MyCode} Provided that the lines marked with symbols of the form \ $\pgftextcircled{i}$ \ can be performed in polynomial time and can be implemented with a polynomial number of states, it is straightforward to see that this procedure also runs in polynomial time and only uses polynomially many states. Indeed, at the $j$-th iteration of the while loop, the $j$-th chunk of $\widetilde{\mathfrak{w}}$ is analysed. Therefore, the body of the while loop is executed at most $d = \ceil{\frac{\card{\widetilde{\mathfrak{w}}}}{h(\card{\mathfrak{w}})}}$ times, where $d$ is the number of chunks (which is polynomial in~$\card{\mathfrak{w}_1}$). Moreover, it is easy to see that lines 1, 2, 6--9 and 12 run in polynomial time on $\card{\mathfrak{w}_1}$, $k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and only require $\BigO{k+f(\card{\mathfrak{w}}) + \card{\ATuring}}$ many states to be implemented. Indeed, line 1 can be implemented in linear time on $\card{\widetilde{\mathfrak{w}}} \leq \card{\mathfrak{w}_1}$ and $\card{\overline{\Alphabet}}$, and requires $\BigO{\card{\overline{\Alphabet}}}$ states to be implemented, in order to track the symbols read on the $(n-1)$-th tape, and copy them on the first tape. A similar analysis can be done for line 12. Line~2 runs in time $\BigO{1}$ and requires $\BigO{1}$ states. Line~7 runs in time $\BigO{\card{\widetilde{\mathfrak{w}}}}$ and requires $\BigO{1}$ states. Line~8 is analogous to line~9 of Step~\eqref{turing:step1:substep1}, and checks whether the current suffix of~$\widetilde{\mathfrak{w}}$ stored in the first tape contains at least $h(\card{\mathfrak{w}})$ characters.
Together with line \ $\pgftextcircled{5}$ \,, this line assures that $\card{\widetilde{\mathfrak{w}}}$ is a multiple of~$h(\card{\mathfrak{w}})$, effectively implementing the step (2e). Let us now focus on the lines marked with \ $\pgftextcircled{i}$ \,. Line \ $\pgftextcircled{1}$ \, implements the step~(2b). To test whether $h(\card{\mathfrak{w}})$ characters have been read, we can rely on the pseudo-oracle for $\smash{\klog{k}(\, . + 1) > f(\card{\mathfrak{w}})}$, similarly to what is done in line 7. Here is the resulting procedure. \begin{MyCode} write $\trail^{\card{\widetilde{\mathfrak{w}}}+1}$ on the $(n-1)$-th tape while true //Invariant: The $(n-1)$-th tape contains a prefix of $\widetilde{\mathfrak{w}}$ // of length less than $\min(\card{\widetilde{\mathfrak{w}}},h(\card{\mathfrak{w}}))$ let $i$ be the length of the initial content of the $(n-1)$-th tape read the $i$-th symbol $\asymbol$ of the first tape if $\asymbol = \qinit$, break if $\asymbol = \trail$, $\text{\rm\textbf{reject}}$ write $\asymbol$ in the $i$-th position of the $(n-1)$-th tape call the pseudo-oracle for $\smash{\klog{k}(\,. + 1) > f(\card{\mathfrak{w}})}$ on the $(n-1)$-th tape if the initial content of the $n$-th tape is not empty, $\text{\rm\textbf{reject}}$ //Post: $\qinit$ appears in the first $h(\card{\mathfrak{w}})$ characters of $\widetilde{\mathfrak{w}}$ \end{MyCode} With a similar analysis to what is done in Step~\eqref{turing:step1:substep1}, we conclude that \ $\pgftextcircled{1}$ \, runs in polynomial time on $\card{\mathfrak{w}_1}$, $k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires $\BigO{k+f(\card{\mathfrak{w}}) + \card{\ATuring}}$ states to be implemented. Line~~$\pgftextcircled{2}$ \ is very similar to line \ $\pgftextcircled{1}$\,. Notice that from line 6 of Step~\eqref{turing:step1:substep2}, the initial content of the tape $(n-1)$ is empty. Moreover, from line 7, the initial content of the first tape has at least $h(\card{\mathfrak{w}})$ characters.
Here is the procedure for~~$\pgftextcircled{2}$\,. \begin{MyCode} while true //Invariant: The $(n-1)$-th tape contains a prefix of the initial content // of the first tape, of length less than $h(\card{\mathfrak{w}})$ let $i$ be the length of the initial content of the $(n-1)$-th tape read the $i$-th symbol $\asymbol$ of the first tape write $\asymbol$ in the $i$-th position of the $(n-1)$-th tape call the pseudo-oracle for $\smash{\klog{k}(\,. + 1) > f(\card{\mathfrak{w}})}$ on the $(n-1)$-th tape if the initial content of the $n$-th tape is not empty, break //Post: the initial content of the $(n-1)$-th tape is a prefix of the // initial content of the first tape, of length $h(\card{\mathfrak{w}})$ \end{MyCode} As in the case of line~~$\pgftextcircled{1}$\,, this procedure runs in polynomial time on $\card{\mathfrak{w}_1}$, $k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires $\BigO{k+f(\card{\mathfrak{w}}) + \card{\ATuring}}$ states to be implemented. After executing line~~$\pgftextcircled{2}$\,, the initial content of the tape $(n-1)$ is a chunk of $\widetilde{\mathfrak{w}}$, say $\widetilde{\mathfrak{w}}_j$. Lines \ $\pgftextcircled{3}$ \ and \ $\pgftextcircled{4}$ \ implement the steps (2a) and (2c). The machine~$\Turing$ can implement both lines simultaneously, by iterating through the initial content of the $(n-1)$-th tape as shown below.
\begin{MyCode} move the read/write head to the first position of the $(n-1)$-th tape while true let $\asymbol$ be the symbol corresponding to the read/write head if $\asymbol = \trail$, $\text{\rm{\textbf{reject}}}$ if $\asymbol \in \States_{\forall}$, $\text{\rm{\textbf{reject}}}$ if $\asymbol \in \States \setminus \States_{\forall}$, break move the read/write head to the right //Post: a symbol from $\States \setminus \States_{\forall}$ was found move the read/write head to the right if the read/write head reads $\trail$, $\text{\rm{\textbf{reject}}}$ while the read/write head does not read $\trail$ if the read/write head reads a symbol from $\States$, $\text{\rm{\textbf{reject}}}$ move the read/write head to the right //Post: the initial content of the $(n-1)$-th tape belongs to $C$ and does not // contain symbols from $\States_{\forall}$ \end{MyCode} The correctness of this procedure with respect to the description of~lines~~$\pgftextcircled{3}$ \, and~~$\pgftextcircled{4}$ \, is straightforward as we recall that $C = \{ \widehat{\mathfrak{w}} \in \Alphabet^* \cdot \States \cdot \Alphabet^+ \mid \card{\widehat{\mathfrak{w}}} = h(\card{\mathfrak{w}}) \}$, and by line~~$\pgftextcircled{2}$ \, the initial content of the tape $(n-1)$ has length $h(\card{\mathfrak{w}})$. The running time of the procedure above is polynomial in the length of the initial content of the $(n-1)$-th tape, and thus polynomial on $\card{\mathfrak{w}_1}$. The number of states required to implement this procedure is in~$\BigO{1}$. Let us move to line~~$\pgftextcircled{5}$\,. Essentially, in this line~$\Turing$ removes from the initial content of the first tape the chunk that is currently analysed and stored in the tape $(n-1)$, so that the initial content of the first tape starts with the next chunk. A possible implementation of this line is given below.
Recall that, from line 6 of Step~\eqref{turing:step1:substep2}, the $(n-2)$-th tape starts with the word $\trail^{i+1}$, where $i$ is the length of the initial content of the first tape. Moreover, from line 12 of Step~\eqref{turing:step1:substep2}, the length of the initial content of the first tape is greater than the length of the initial content of the tape $(n-1)$. \begin{MyCode} move the read/write head to the first occurrence of $\trail$ on the $(n-1)$-th tape //The read/write head is now on the position $\card{\widetilde{\mathfrak{w}}_j}$ of the $(n-1)$-th tape move the read/write head to the first tape //Read/write head currently in position $\card{\widetilde{\mathfrak{w}}_j}$ of the first tape //From line 12 of Step (2), the head reads a symbol different from $\trail$ while true let $\asymbol$ be the symbol corresponding to the read/write head write $\trail$ move the read/write head to the right if the read/write head reads $\trail$, move the read/write head to the first occurrence of $\trail$ on the $(n-2)$-th tape write $\asymbol$ break else move the read/write head to the first occurrence of $\trail$ on the $(n-2)$-th tape write $\asymbol$ move the read/write head to the first occurrence of $\trail$ on the first tape //read/write head in the position where~$\asymbol$ was previously stored move the read/write head right //Post: The initial content of the tape $(n-2)$ is the word required by line $\pgftextcircled{5}$ let $\widetilde{\mathfrak{w}}'$ be the initial content of the $(n-2)$-th tape write $\widetilde{\mathfrak{w}}' \cdot \trail$ at the beginning of the first tape \end{MyCode} Again, this procedure runs in polynomial time on~$\card{\widetilde{\mathfrak{w}}}$ and $\card{\ATuring}$. Since~$\Turing$ needs to keep track of the symbol read in line 7, as well as implementing lines~21--22, the number of states required to implement the procedure is $\BigO{\card{\overline{\Alphabet}}}$, and thus polynomial in $\card{\ATuring}$.
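As a sanity check on the bookkeeping, the net effect of line \ $\pgftextcircled{5}$ \, can be mimicked outside the machine model by treating the content of a tape as a plain string. The following Python sketch is ours and purely illustrative (the names \texttt{BLANK} and \texttt{drop\_first\_chunk} do not appear in the construction); the machine itself achieves the same effect by bouncing symbols off the $(n-2)$-th tape, as detailed above.

```python
# Illustrative sketch only: reproduces the *effect* of line (5) with string
# slicing; the Turing machine instead moves symbols through the (n-2)-th tape.
BLANK = '#'  # stands in for the trailing blank symbol

def drop_first_chunk(tape, chunk_len):
    """Remove the first chunk_len characters of the tape's initial content."""
    content = tape.split(BLANK, 1)[0]  # initial content: everything before a blank
    # Guaranteed by line 12 of Step (2): the content is longer than one chunk.
    assert len(content) > chunk_len
    return content[chunk_len:] + BLANK
```

For instance, `drop_first_chunk('abcdef#', 3)` returns `'def#'`: the tape now starts with the next chunk.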
Line~~$\pgftextcircled{6}$ \ copies the chunk $\widetilde{\mathfrak{w}}_{j+1}$ to the tape $(n-2)$, so that $\Turing$ can then perform the step (2d) (line~~$\pgftextcircled{7}$\,). Line~~$\pgftextcircled{6}$ \ is implemented analogously to line~~$\pgftextcircled{2}$\,, i.e. the line that copied the chunk $\widetilde{\mathfrak{w}}_j$ to the $(n-1)$-th tape. Compared to the code for line~~$\pgftextcircled{2}$\,, it is sufficient to replace the steps done on the tape $(n-1)$ with equivalent steps done on the $(n-2)$-th tape in order to obtain the code for line~~$\pgftextcircled{6}$\,, which therefore runs in polynomial time, and requires $\BigO{k + f(\card{\mathfrak{w}}) + \card{\ATuring}}$ many states in order to be implemented. Let us now assume that the initial content of the $(n-1)$-th tape is the chunk $\widetilde{\mathfrak{w}}_j$, and that the initial content of tape $(n-2)$ is the chunk $\widetilde{\mathfrak{w}}_{j+1}$. According to line~~$\pgftextcircled{7}$\,, we must check whether $c(\widetilde{\mathfrak{w}}_{j+1}) \in \Delta(c(\widetilde{\mathfrak{w}}_j))$ (step (2d)). Notice that we do not know whether $\widetilde{\mathfrak{w}}_{j+1} \in C$, as this test will be performed at the next iteration of the while loop of Step~\eqref{turing:step1:substep2}. However, we can still easily implement step (2d) as follows: we find the pair $(\astate,\asymbol) \in \States \times \Alphabet$ such that $\mathfrak{w}' \cdot \astate \cdot \asymbol \cdot \mathfrak{w}'' = \widetilde{\mathfrak{w}}_j$ for some words $\mathfrak{w}'$ and $\mathfrak{w}''$. This decomposition is guaranteed from $\widetilde{\mathfrak{w}}_j \in C$, which holds from line~~$\pgftextcircled{3}$\,. The machine then iterates over all elements of $\delta(\astate,\asymbol)$, updating $\widetilde{\mathfrak{w}}_j$ accordingly, on the $n$-th tape. After each update, $\Turing$~checks whether the initial content of the $n$-th tape corresponds to $\widetilde{\mathfrak{w}}_{j+1}$. Here is the procedure.
\begin{MyCode} let $\widetilde{\mathfrak{w}}_j$ be the initial content of the $(n-1)$-th tape let $\widetilde{\mathfrak{w}}_{j+1}$ be the initial content of the $(n-2)$-th tape move the read/write head to the beginning of the tape $(n-1)$ while the read/write head reads a character not from $\States$ move the read/write head to the right //Since $\widetilde{\mathfrak{w}}_j \in C$, the loop above terminates //Post: the read/write head reads a character from $\States$ let $\astate$ be the symbol corresponding to the read/write head move the read/write head to the right let $\asymbol$ be the symbol corresponding to the read/write head //Since $\widetilde{\mathfrak{w}}_j \in C$, $\asymbol$ belongs to $\Alphabet$ for $(\astate',\asymbol',i) \in \States \times \Alphabet \times \{-1,+1\}$ belonging to $\delta(\astate,\asymbol)$ write $\widetilde{\mathfrak{w}}_j \cdot \trail$ at the beginning of the $n$-th tape move the read/write head to the (only) occurrence of $\astate$ on the $n$-th tape if $i = 1$, write $\asymbol'$ //overwrites $\astate$ move the read/write head to the right write $\astate'$ //overwrites $\asymbol$ else if $\astate$ occurs at the beginning of the $n$-th tape, write $\qrej$ else move the read/write head left let $\asymbolbis$ be the symbol corresponding to the read/write head write $\astate'$ //overwrites $\asymbolbis$ move the read/write head to the right write $\asymbolbis$ //overwrites $\astate$ move the read/write head to the right write $\asymbol'$ // overwrites $\asymbol$ if the initial content of the $n$-th tape equals $\widetilde{\mathfrak{w}}_{j+1}$, goto line 32 //Post: $c(\widetilde{\mathfrak{w}}_{j+1}) \not\in \Delta(c(\widetilde{\mathfrak{w}}_j))$ $\text{\rm{\textbf{reject}}}$ //If this line is reached, $c(\widetilde{\mathfrak{w}}_{j+1}) \in \Delta(c(\widetilde{\mathfrak{w}}_j))$ \end{MyCode} Briefly, if the test in line 14 holds, then $\widetilde{\mathfrak{w}}_j = \mathfrak{w}' \cdot \astate \cdot \asymbol \cdot \mathfrak{w}''$ is copied on the
$n$-th tape and updated to $\mathfrak{w}' \cdot \asymbol' \cdot \astate' \cdot \mathfrak{w}''$, according to the semantics of the ATM~$\ATuring$ for the case $(\astate',\asymbol',1) \in \delta(\astate,\asymbol)$. If instead the test in line 19 holds, then $\widetilde{\mathfrak{w}}_j$ is of the form $\astate \cdot \asymbol \cdot \mathfrak{w}''$ and we are considering a triple $(\astate',\asymbol',-1) \in \delta(\astate,\asymbol)$. According to the semantics of the $\ensuremath{\normalfont{\mathsf{ATM}}}\xspace$, $\widetilde{\mathfrak{w}}_j$ must be updated to $\qrej \cdot \asymbol \cdot \mathfrak{w}''$, as done in line 20. Lastly, if the else branch in line 21 is reached, then $\widetilde{\mathfrak{w}}_j$ is of the form $\mathfrak{w}' \cdot \asymbolbis \cdot \astate \cdot \asymbol \cdot \mathfrak{w}''$, and since we are considering a triple $(\astate',\asymbol',-1) \in \delta(\astate,\asymbol)$, it must be updated to $\mathfrak{w}' \cdot \astate' \cdot \asymbolbis \cdot \asymbol' \cdot \mathfrak{w}''$, as done in lines 22--28. This procedure runs in polynomial time on~$\card{\widetilde{\mathfrak{w}}_j} \leq \card{\mathfrak{w}_1}$ and $\card{\ATuring}$, and because of the \defstyle{for} loop in line 12 together with the necessity to copy $\widetilde{\mathfrak{w}}_j$ to the $n$-th tape and keeping track of the symbols $\astate$, $\asymbol$ and $\asymbolbis$, it requires a number of states that is polynomial in $\card{\ATuring}$. Lastly, the lines~~$\pgftextcircled{8}$ \ and~~$\pgftextcircled{9}$ \ are analogous to the lines~~$\pgftextcircled{3}$ \ and~~$\pgftextcircled{4}$\, (the only difference being that~$\States_{\forall}$ is replaced by~$\States_{\exists}$ in order to correctly implement~~$\pgftextcircled{9}$ \ and so (2f)). We conclude that Step~\eqref{turing:step1:substep2} of the procedure runs in polynomial time on $\card{\mathfrak{w}_1}$, $k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$,
and requires a polynomial number of states with respect to $k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$. At the end of Step~\eqref{turing:step1:substep2}, the initial content of the tape $(n-1)$ corresponds to the last chunk~$\widetilde{\mathfrak{w}}_d$ of $\widetilde{\mathfrak{w}}$. Then, the step~\eqref{turing:step1:substep3} performed by~$\Turing$ is implemented as follows. \begin{MyCode} let $\widetilde{\mathfrak{w}}_d$ be the initial content of the $(n-1)$-th tape if $\qacc$ appears in $\widetilde{\mathfrak{w}}_d$, $\text{\rm\textbf{accept}}$ if $\qrej$ appears in $\widetilde{\mathfrak{w}}_d$, $\text{\rm\textbf{reject}}$ if $m = 1$, $\text{\rm \textbf{reject}}$ write $\widetilde{\mathfrak{w}}_d \cdot \trail$ at the beginning of the first tape //see $\mathfrak{w}^{(1)}$ move to macrostep $2$ \end{MyCode} Clearly, this step runs in polynomial time on $\card{\widetilde{\mathfrak{w}}_d} \leq \card{\mathfrak{w}_1}$ and $\card{\overline{\Alphabet}} \leq \card{\ATuring}$, and requires $\BigO{\card{\overline{\Alphabet}}}$ many states to be implemented (see line~5). Considering all the steps, we conclude that macrostep~$1$ runs in polynomial time on~$\card{\mathfrak{w}_1}$,~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires polynomially many states to be implemented, w.r.t.~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$. \end{claimproof} The semantics of~$\Turing$ during macrostep~$1$ is summarised in the following claim. \begin{claim} \label{claim:turing:macrostep-1-semantics} Let $\widetilde{\mathfrak{w}}$ be a prefix of~$\mathfrak{w}_1 \cdot \trail$, s.t.~either $\widetilde{\mathfrak{w}} \cdot \trail$ is a prefix of $\mathfrak{w}_1 \cdot \trail$ or $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$.
\begin{itemize} \item if $\widetilde{\mathfrak{w}}$ encodes an existential hop of $\ATuring$ starting on state $\qinit$ and ending in~$\qacc$, $\Turing$ accepts, \item else, if $1 < m$ and $\widetilde{\mathfrak{w}}$ encodes an existential hop of $\ATuring$, starting on state $\qinit$ and ending in a state from~$\States_{\forall}$, then $\Turing$ writes the last configuration of this path at the beginning of the first tape (i.e.~the word~$\mathfrak{w}^{(1)}$) and moves to macrostep 2, \item otherwise, $\Turing$ rejects. \end{itemize} \end{claim} \item[macrostep: $i > 1$, $i$ odd.] Recall that~$Q_i = \exists$. $\Turing$ works on the $i$-th tape and on the word $\mathfrak{w}^{(i-1)}$ written on the $(i-1)$-th tape at the end of step $i-1$. This macrostep resembles the first one. \begin{enumerate} \setlength{\itemsep}{3pt} \item\label{turing:step2:substep1} $\Turing$ reads the prefix~$\widetilde{\mathfrak{w}}$ of~$\mathfrak{w}_i$, such that if $\card{\mathfrak{w}_i} \leq h(\card{\mathfrak{w}})^2$ then $\widetilde{\mathfrak{w}} = \mathfrak{w}_i$, else $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. (\textit{Note: $\Turing$ does not consider the part of the input after the $h(\card{\mathfrak{w}})^2$-th symbol}) \item\label{turing:step2:substep2} The \nTM~$\Turing$ then checks whether $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes an existential hop in~$\ATuring$. It does so by analysing (from left to right)~$\widetilde{\mathfrak{w}}$ in chunks of length $h(\card{\mathfrak{w}})$, possibly with the exception of the last chunk, which can be of smaller size. Let $d$ be the number of chunks, and let $\widetilde{\mathfrak{w}}_j$ be the $j$-th chunk analysed by $\Turing$.
\begin{enumerate}[label=(\alph*)] \setlength{\itemsep}{3pt} \item If $\widetilde{\mathfrak{w}}_j \not\in C$, $\Turing$ \textbf{rejects}, \item if the first chunk $\widetilde{\mathfrak{w}}_1$ is such that $c(\widetilde{\mathfrak{w}}_1) \not\in \Delta(c(\mathfrak{w}^{(i-1)}))$, then $\Turing$ \textbf{rejects}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and contains a symbol from~$\States_\forall$, then $\Turing$ \textbf{rejects}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and $c(\widetilde{\mathfrak{w}}_{j+1}) \not \in \Delta(c(\widetilde{\mathfrak{w}}_j))$, then $\Turing$ \textbf{rejects}, \item if the length of the last chunk~$\widetilde{\mathfrak{w}}_d$ is not $h(\card{\mathfrak{w}})$ (i.e.~the length of~$\widetilde{\mathfrak{w}}$ is not a multiple of~$h(\card{\mathfrak{w}})$), then $\Turing$ \textbf{rejects}, \item if the last chunk $\widetilde{\mathfrak{w}}_d$ contains a symbol from~$\States_\exists$, then $\Turing$ \textbf{rejects}. \end{enumerate} \item\label{turing:step2:substep3} $\Turing$ analyses the last chunk $\widetilde{\mathfrak{w}}_d$ of the previous step. If~$\qacc$ occurs in $\widetilde{\mathfrak{w}}_d$, then $\Turing$ \textbf{accepts}. Otherwise, if $i = m$ or $\qrej$ occurs in~$\widetilde{\mathfrak{w}}_d$, then $\Turing$ \textbf{rejects}. Else,~$\Turing$ writes down $\widetilde{\mathfrak{w}}_d \cdot \trail$ at the beginning of the $i$-th tape, and \textbf{moves to macrostep~$(i+1)$}. Let $\mathfrak{w}^{(i)} = \widetilde{\mathfrak{w}}_d$. \end{enumerate} From macrostep~$(i-1)$, the word~$\mathfrak{w}^{(i-1)}$ corresponds to the initial content of the $(i-1)$-th tape, and its length is bounded by $\card{\mathfrak{w}_{i-1}}$. Moreover, $\mathfrak{w}^{(i-1)} \in C$. The step (2b) can be implemented as line~~$\pgftextcircled{7}$ \, in the proof of~\Cref{claim:turing:macrostep-1}. 
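For intuition, the chunk-by-chunk checks (a)--(f) above can be prototyped directly. The Python sketch below is ours and purely illustrative: symbols are single characters, a chunk of length $h$ must lie in $C$ (exactly one state symbol, not in the last position), and the function \texttt{successors} is a stand-in for the abstract one-step relation $\Delta$; the actual machine performs these checks with the tape-level procedures analysed in the proof of~\Cref{claim:turing:macrostep-1}.

```python
def in_C(chunk, h, states):
    """Membership in C: length h, exactly one state symbol, not in last position."""
    pos = [i for i, s in enumerate(chunk) if s in states]
    return len(chunk) == h and len(pos) == 1 and pos[0] < h - 1

def check_existential_hop(prev, w, h, states, forall_states, exists_states, successors):
    """Mirror checks (a)-(f): return 'reject', or the last chunk of w."""
    if not w or len(w) % h != 0:                 # check (e)
        return 'reject'
    chunks = [w[i:i + h] for i in range(0, len(w), h)]
    for j, chunk in enumerate(chunks):
        if not in_C(chunk, h, states):           # check (a)
            return 'reject'
        before = prev if j == 0 else chunks[j - 1]
        if chunk not in successors(before):      # checks (b) and (d)
            return 'reject'
        last = j == len(chunks) - 1
        if not last and any(s in forall_states for s in chunk):  # check (c)
            return 'reject'
        if last and any(s in exists_states for s in chunk):      # check (f)
            return 'reject'
    return chunks[-1]
```

On a toy instance with $h = 3$, states $\{E,A\}$ ($E$ existential, $A$ universal) and a successor map sending \texttt{'Eab'} to \texttt{'aEb'} and \texttt{'aEb'} to \texttt{'aAb'}, calling the function with \texttt{prev='Eab'} and \texttt{w='aEbaAb'} returns the last chunk \texttt{'aAb'}, matching the intended outcome of step (2).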
The analysis of this macrostep follows essentially the same arguments given for the macrostep~$1$, allowing us to establish the following claims. \begin{claim} \label{claim:turing:macrostep-i-odd-input} Let $i > 1$ be odd. Macrostep~$i$ only reads at most the first $h(\card{\mathfrak{w}})^2$ characters of the input of the $i$-th tape, and at most the first $h(\card{\mathfrak{w}})$ characters of the $(i-1)$-th tape. It does not depend on the content written on the other tapes. \end{claim} \begin{claim} \label{claim:turing:macrostep-i-odd} Let $i > 1$ be odd. The macrostep~$i$ runs in polynomial time on~$\card{\mathfrak{w}_1}$,~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires polynomially many states to be implemented, w.r.t.~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$. \end{claim} \begin{claim} \label{claim:turing:macrostep-i-odd-semantics} Let $i > 1$ be odd, and let $\widetilde{\mathfrak{w}}$ be a prefix of~$\mathfrak{w}_i \cdot \trail$, such that~either $\widetilde{\mathfrak{w}} \cdot \trail$ is a prefix of $\mathfrak{w}_i \cdot \trail$ or $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. Suppose that~$\Turing$ did not halt on the macrosteps~$1,\dots,i-1$. \begin{itemize} \item if $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes an existential hop of $\ATuring$ ending in state $\qacc$, then $\Turing$ accepts, \item else, if $i < m$ and $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes an existential hop of $\ATuring$, ending in a state from~$\States_\forall$, then $\Turing$ writes the last configuration of this path (i.e.~$\mathfrak{w}^{(i)}$) at the beginning of the $i$-th tape and moves to macrostep $(i+1)$, \item otherwise, $\Turing$ rejects. \end{itemize} \end{claim} \item[macrostep: $i > 1$, $i$ even.] Recall that~$Q_i = \forall$. $\Turing$ works on the $i$-th tape and on the word $\mathfrak{w}^{(i-1)}$ written on the $(i-1)$-th tape at the end of step $i-1$.
\begin{enumerate} \setlength{\itemsep}{3pt} \item\label{turing:step3:substep1} $\Turing$ reads the prefix~$\widetilde{\mathfrak{w}}$ of~$\mathfrak{w}_i$, such that if $\card{\mathfrak{w}_i} \leq h(\card{\mathfrak{w}})^2$ then $\widetilde{\mathfrak{w}} = \mathfrak{w}_i$, else $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. (\textit{Note: $\Turing$ does not consider the part of the input after the $h(\card{\mathfrak{w}})^2$-th symbol}) \item\label{turing:step3:substep2} The \nTM~$\Turing$ works ``dually'' with respect to the macrosteps where $i$ is odd. It checks whether $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ corresponds to a universal hop of~$\ATuring$. However, differently from the case where $i$ is odd, if $\widetilde{\mathfrak{w}}$ does not encode such a computation path, then $\Turing$ accepts (as in this case the input is not ``well-formed'' and should not be considered for the satisfaction of the quantifier~$Q_i = \forall$). As in the other macrosteps, $\Turing$ performs this step by analysing (from left to right)~$\widetilde{\mathfrak{w}}$ in chunks of length $h(\card{\mathfrak{w}})$, possibly with the exception of the last chunk, which can be of smaller size. Let $d$ be the number of chunks, and let $\widetilde{\mathfrak{w}}_j$ be the $j$-th chunk analysed by~$\Turing$.
\begin{enumerate}[label=(\alph*)] \setlength{\itemsep}{3pt} \item If $\widetilde{\mathfrak{w}}_j \not\in C$, $\Turing$ \textbf{accepts}, \item if the first chunk $\widetilde{\mathfrak{w}}_1$ is such that $c(\widetilde{\mathfrak{w}}_1) \not\in \Delta(c(\mathfrak{w}^{(i-1)}))$, then $\Turing$ \textbf{accepts}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and contains a symbol from~$\States_\exists$, then $\Turing$ \textbf{accepts}, \item if $\widetilde{\mathfrak{w}}_j$ is not the last chunk and $c(\widetilde{\mathfrak{w}}_{j+1}) \not \in \Delta(c(\widetilde{\mathfrak{w}}_j))$, then $\Turing$ \textbf{accepts}, \item if the length of the last chunk~$\widetilde{\mathfrak{w}}_d$ is not $h(\card{\mathfrak{w}})$ (i.e.~the length of~$\widetilde{\mathfrak{w}}$ is not a multiple of~$h(\card{\mathfrak{w}})$), then $\Turing$ \textbf{accepts}, \item if the last chunk $\widetilde{\mathfrak{w}}_d$ contains a symbol from~$\States_\forall$, then $\Turing$ \textbf{accepts}. \end{enumerate} \item\label{turing:step3:substep3} $\Turing$ analyses the last chunk $\widetilde{\mathfrak{w}}_d$ of the previous step. If~$\qacc$ occurs in $\widetilde{\mathfrak{w}}_d$, then $\Turing$ \textbf{accepts}. If~$\qrej$ occurs in~$\widetilde{\mathfrak{w}}_d$ or $i = m$, then $\Turing$ \textbf{rejects}. Else,~$\Turing$ writes down $\widetilde{\mathfrak{w}}_d \cdot \trail$ at the beginning of the $i$-th tape, and \textbf{moves to macrostep~$(i+1)$}. Let $\mathfrak{w}^{(i)} = \widetilde{\mathfrak{w}}_d$. \end{enumerate} Following the same arguments of the first macrostep, the following claims are established. \begin{claim} \label{claim:turing:macrostep-i-even-input} Let $i > 1$ even. Macrostep~$i$ only reads at most the first $h(\card{\mathfrak{w}})^2$ characters of the input of the $i$-th tape, and at most the first $h(\card{\mathfrak{w}})$ characters of the $(i-1)$-th tape. It does not depend on the content written on the other tapes. 
\end{claim} \begin{claim} \label{claim:turing:macrostep-i-even} Let $i > 1$ be even. The macrostep~$i$ runs in polynomial time on~$\card{\mathfrak{w}_1}$,~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$, and requires polynomially many states to be implemented, w.r.t.~$k$, $\card{\mathfrak{w}}$ and $\card{\ATuring}$. \end{claim} \begin{claim} \label{claim:turing:macrostep-i-even-semantics} Let $i > 1$ be even, and let $\widetilde{\mathfrak{w}}$ be a prefix of~$\mathfrak{w}_i \cdot \trail$, such that~either $\widetilde{\mathfrak{w}} \cdot \trail$ is a prefix of $\mathfrak{w}_i \cdot \trail$ or $\card{\widetilde{\mathfrak{w}}} = h(\card{\mathfrak{w}})^2$. Suppose that~$\Turing$ did not halt on the macrosteps~$1,\dots,i-1$. \begin{itemize} \item if $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes a universal hop of $\ATuring$ ending in state $\qrej$, then $\Turing$ rejects, \item else, if $i = m$ and $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes a universal hop of $\ATuring$, ending in a state from~$\States_\exists$, then $\Turing$ rejects, \item else, if $i < m$ and $\mathfrak{w}^{(i-1)} \cdot \widetilde{\mathfrak{w}}$ encodes a universal hop of $\ATuring$, ending in a state from~$\States_\exists$, then $\Turing$ writes the last configuration of this path (i.e.~$\mathfrak{w}^{(i)}$) at the beginning of the $i$-th tape and moves to macrostep $(i+1)$, \item otherwise, $\Turing$ accepts. \end{itemize} \end{claim} \end{description} This ends the definition of~$\Turing$. Since $\Turing$ performs $m = g(\card{\mathfrak{w}})$ macrosteps, where $g$ is a polynomial, from~\Cref{claim:turing:macrostep-1}, \Cref{claim:turing:macrostep-i-odd} and~\Cref{claim:turing:macrostep-i-even} we conclude that $\Turing$ can be constructed in polynomial time in $n > m$ (number of tapes), $k$, $\card{\ATuring}$ and $\card{\mathfrak{w}}$. Moreover, $\Turing$ runs in polynomial time, as required by property~\eqref{lem:almost-theorem-prop-1}.
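Before moving on, it may help to spell out how the alternating quantification over input words (used in property~\eqref{lem:almost-theorem-prop-3} and in the claim below) is evaluated. The following brute-force Python sketch is ours and purely illustrative; it enumerates all words, which is exponential, whereas nothing in the construction does so.

```python
from itertools import product

def evaluate(T, m, alphabet, length, prefix=()):
    """Evaluate Q1 w1 ... Qm wm : T(w1,...,wm), where Qi is 'exists' for odd i
    and 'forall' for even i, and each wi ranges over words of the given length."""
    if len(prefix) == m:
        return T(prefix)
    i = len(prefix) + 1  # index of the next quantified word
    words = [''.join(p) for p in product(alphabet, repeat=length)]
    outcomes = (evaluate(T, m, alphabet, length, prefix + (w,)) for w in words)
    return any(outcomes) if i % 2 == 1 else all(outcomes)
```

For instance, with \texttt{T = lambda ws: ws[0] <= ws[1]}, $m = 2$, alphabet $\{a,b\}$ and words of length $1$, the statement $\exists \mathfrak{w}_1 \forall \mathfrak{w}_2 : T(\mathfrak{w}_1,\mathfrak{w}_2)$ holds (pick $\mathfrak{w}_1 = a$), while with equality in place of $\leq$ it fails.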
By~\Cref{claim:turing:macrostep-1-input},~\Cref{claim:turing:macrostep-i-odd-input} and~\Cref{claim:turing:macrostep-i-even-input}, $\Turing$ only looks at a prefix of its inputs of length at most $h(\card{\mathfrak{w}})^2$, and its runs are independent of the input of the last $n-m$ tapes. Therefore, both the properties~\eqref{lem:almost-theorem-prop-2} and~\eqref{lem:almost-theorem-prop-2b} are satisfied. To conclude the proof, it remains to show that~$\Turing$ satisfies property~\eqref{lem:almost-theorem-prop-3}, which follows directly from the following claim, proved thanks to~\Cref{claim:turing:macrostep-1-semantics},~\Cref{claim:turing:macrostep-i-odd-semantics} and~\Cref{claim:turing:macrostep-i-even-semantics}. \phantom\qedhere \end{proof} \begin{claim} Let $\ell \in [1,m]$ and $(\pi_1,\dots,\pi_\ell)$ be an $\ell$-tuple such that, \begin{itemize}[nosep] \item $\pi_1 \cdot {\dots} \cdot \pi_\ell$ is a computation path of~$\ATuring$, \item for every $i \in [1,\ell]$ odd, $\pi_i = (c_0^i,\dots,c_{d_i}^i)$ is an existential hop, \item for every $i \in [1,\ell]$ even, $\pi_i = (c_0^i,\dots,c_{d_i}^i)$ is a universal hop, \item $c_0^1 = (\mathfrak{w},\qinit,0)$. \end{itemize} For every $i \in [1,\ell]$, let $\mathfrak{w}_i \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ be a word encoding the computation path $\pi_i$. Then, \begin{equation} \label{LastATMClaim} \gamma(c^\ell_{d_\ell}) = \tacc \text{ iff } Q_{\ell+1}\mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathfrak{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} \text{ : } \Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m) \text{ accepts}, \end{equation} where for every $i \in [\ell+1,m]$, $Q_i = \exists$ if $i$ is odd, and otherwise $Q_i = \forall$.
\end{claim} \begin{claimproof} \let\oldqedsymbol\qedsymbol \renewcommand{\claimqedhere}{\renewcommand\qedsymbol{\textcolor{lipicsGray}{\ensuremath{\vartriangleleft}}\oldqedsymbol}% \qedhere% \renewcommand\qedsymbol{\textcolor{lipicsGray}{\ensuremath{\blacktriangleleft}}}} The proof is by induction on $m-\ell$. \begin{description} \item[base case: $m = \ell$.] Since $\ATuring$ is $g$-alternation bounded and $m = g(\card{\mathfrak{w}})$, in this case the state of $c^m_{d_m}$ is either $\qacc$ or $\qrej$. If $\gamma(c^m_{d_m}) = \tacc$, then the state of the last configuration $c_{d_m}^m$ is the accepting state~$\qacc$, and from~\Cref{claim:turing:macrostep-1-semantics},~\Cref{claim:turing:macrostep-i-odd-semantics} and~\Cref{claim:turing:macrostep-i-even-semantics}, we conclude that $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ accepts. Else, suppose that $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ accepts. Then, since ${\mathfrak{w}_1\cdot{\dots}\cdot\mathfrak{w}_m}$ encodes a computation path of $\ATuring$, by~\Cref{claim:turing:macrostep-1-semantics},~\Cref{claim:turing:macrostep-i-odd-semantics} and~\Cref{claim:turing:macrostep-i-even-semantics}, the state of the last configuration $c_{d_m}^m$ cannot be~$\qrej$. Hence, it is $\qacc$, and $\gamma(c^m_{d_m}) = \tacc$. \item[induction step: $m - \ell \geq 1$.] First of all, if the path $\pi_1\cdot{\dots}\cdot\pi_\ell$ is terminating, then~\eqref{LastATMClaim} follows exactly as in the base case. Below, let us assume that $\pi_1\cdot{\dots}\cdot\pi_\ell$ is not terminating. Notice that this means that the length of the computation path is less than $h(\card{\mathfrak{w}})$, since the \ensuremath{\normalfont{\mathsf{ATM}}}\xspace $\ATuring$ is $h$-time bounded. Below, $\overline{c}$ is short for $c^\ell_{d_\ell}$ and we write~$\overline{\mathfrak{w}}$ for the word from $C$ that encodes $\overline{c}$. We split the proof depending on the parity of~$\ell$. \begin{description} \item[case: $\ell$ even.]
In this case, $Q_{\ell+1} = \exists$, $\pi_\ell$ is a universal hop, and $\overline{c}$ contains a state in~$\States_\exists$. \ProofRightarrow Suppose $\gamma(\overline{c}) = \tacc$. Since $\overline{c}$ is an existential configuration, this implies that there is a computation path $\pi_{\ell+1} = (c_0,\dots,c_d)$ such that $\overline{c} \cdot \pi_{\ell+1}$ is an existential hop, and $\gamma(c_d) = \tacc$. Let $\mathfrak{w}_{\ell+1}$ be a word encoding $\pi_{\ell+1}$, of minimal length. Since $\ATuring$ is $h$-time bounded, $\mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$. By induction hypothesis, \begin{center} $Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} Hence, \begin{center} $\exists \mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} \ProofLeftarrow Conversely, suppose that the right-hand side of~\eqref{LastATMClaim} holds. Since $Q_{\ell+1} = \exists$, from~\Cref{claim:turing:macrostep-i-odd-semantics} there is a word $\mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ that encodes a computation path $\pi_{\ell+1} = (c_0,\dots,c_d)$ of $\ATuring$, such that $\overline{c} \cdot \pi_{\ell+1}$ is an existential hop and \begin{center} $Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} By induction hypothesis, $\gamma(c_d) = \tacc$, which yields $\gamma(\overline{c}) = \tacc$, by definition of~$\gamma$.
\item[case: $\ell$ odd.] In this case, $Q_{\ell+1} = \forall$, $\pi_\ell$ is an existential hop, and $\overline{c}$ contains a state in~$\States_\forall$. \ProofRightarrow Suppose $\gamma(\overline{c}) = \tacc$. Since $\overline{c}$ is a universal configuration, $\gamma(c_d) = \tacc$ holds for every computation path $\pi = (c_0,\dots,c_d)$ such that $\overline{c} \cdot \pi$ is a universal hop. Let $\mathfrak{w}_{\ell+1}$ be a word in $\overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$. If $\overline{\mathfrak{w}} \cdot \mathfrak{w}_{\ell+1}$ does not encode a universal hop, then from~\Cref{claim:turing:macrostep-i-even-semantics}, $\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_{\ell+1},x_{\ell+2},\dots,x_m)$ accepts for all values of~$x_{\ell+2},\dots,x_m$. Suppose instead that $\overline{\mathfrak{w}} \cdot \mathfrak{w}_{\ell+1}$ encodes the universal hop $\pi_{\ell+1} = (c_0,\dots,c_d)$. From $\gamma(c_d) = \tacc$ and by induction hypothesis, \begin{center} $Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} We conclude that \begin{center} $\forall \mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} \ProofLeftarrow Conversely, suppose that the right-hand side of~\eqref{LastATMClaim} holds.
Since $Q_{\ell+1} = \forall$, this means that for every $\mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ \begin{center} $Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts. \end{center} Since $\overline{c}$ is a universal configuration, in order to conclude that $\gamma(\overline{c}) = \tacc$ holds, it is sufficient to check that, for every computation path $\pi_{\ell+1} = (c_0,\dots,c_d)$ of $\ATuring$, if $\overline{c} \cdot \pi_{\ell+1}$ is a universal hop, then $\gamma(c_d) = \tacc$. So, consider a computation path $\pi_{\ell+1} = (c_0,\dots,c_d)$ of $\ATuring$, such that $\overline{c} \cdot \pi_{\ell+1}$ is a universal hop. Since $\ATuring$ is $h$-time bounded, there is $\mathfrak{w}_{\ell+1} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ that encodes $\pi_{\ell+1}$. We have \begin{center} $Q_{\ell+2}\mathfrak{w}_{\ell+2} \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots,Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts, \end{center} and by induction hypothesis we obtain $\gamma(c_d) = \tacc$, concluding the proof. \claimqedhere \end{description} \end{description} \end{claimproof} The proof of~\Cref{theorem:kaexp-alternation-complexity} stems directly from~\Cref{lemma:almost-theorem-kaexp-alternation}. \TheoremKAEXPAltProblem* \begin{proof} Fix~$k \geq 1$. Since the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex problem is solvable in $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$ directly from its definition, we focus on the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-hardness. We aim at a reduction from the membership problem described in~\Cref{proposition:kaexppol-membership}, relying on~\Cref{lemma:almost-theorem-kaexp-alternation}.
Let~$f,g \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ be two polynomials, and define~$h(\avar) \egdef \tetra(k,f(\avar))$. Consider an~$h$-time bounded, $g$-alternation bounded~\ensuremath{\normalfont{\mathsf{ATM}}}\xspace~$\ATuring = (\Alphabet,\States_{\exists},\States_{\forall},\qinit,\qacc,\qrej,\trans)$, as well as a word~$\mathfrak{w} \in \Alphabet^*$. We want to check whether $\mathfrak{w} \in \alang(\ATuring)$. To simplify the proof without loss of generality, we assume $\card{\mathfrak{w}} \geq 1$ and that~$f(\avar) \geq \avar$ for every $\avar \in \ensuremath{\mathbb{N}}$. Then,~$f(\card{\mathfrak{w}}) \geq \card{\mathfrak{w}} \geq 1$. Let $m = \max(f(\card{\mathfrak{w}}),g(\card{\mathfrak{w}}))+3$, and let~$\overline{\Alphabet} \egdef \Alphabet \cup \States \cup \{\qacc,\qrej,\trail\}$. We consider the $m\TM$~${\Turing = (m,\overline{\Alphabet},\States',\qinit',\qacc',\qrej',\trans')}$ derived from~$\ATuring$ by applying~\Cref{lemma:almost-theorem-kaexp-alternation}. From the property~\eqref{lem:almost-theorem-prop-1} of~$\Turing$, there is a polynomial~$p \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ such that, for every~$(\mathfrak{w}_1,\dots,\mathfrak{w}_m) \in (\overline{\Alphabet}^*)^m$,~$\Turing(\mathfrak{w}_1,\dots,\mathfrak{w}_m)$ halts in time $p(\max(\card{\mathfrak{w}_1},\dots,\card{\mathfrak{w}_m}))$. W.l.o.g., we can assume $p(\avar) = \alpha \avar^d + \beta$ for some~${\alpha,d,\beta \in \PNat}$. So, for all~$\avar \in \ensuremath{\mathbb{N}}$, $p(\avar) \geq \avar$. We rely on~$\Turing$ to build a polynomial instance of the~$k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex problem that is satisfied if and only if $\mathfrak{w} \in \alang(\ATuring)$. Let~${n \egdef p(2 \cdot m)}$, and consider the \nTM~${\Turing' = (n,\overline{\Alphabet},\States',\qinit',\qacc',\qrej',\trans')}$. Notice that $\Turing'$ is defined exactly as~$\Turing$, but has $n \geq m$ tapes, where $n$ is polynomial in~$\card{\mathfrak{w}}$.
However, since the two machines share the same transition function~$\trans'$, only the first $m$ tapes are effectively used by~$\Turing'$, and for all~$(\mathsf{w}_1,\dots,\mathsf{w}_m,\dots,\mathsf{w}_n) \in (\overline{\Alphabet}^*)^n$, $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ and $\Turing'(\mathsf{w}_1,\dots,\mathsf{w}_n)$ perform the same computational steps. Let $\vec{Q} = (Q_1,\dots,Q_n) \in \{\exists,\forall\}^n$ be such that for all $i \in [1,n]$, $Q_i = \exists$ iff $i$~is~odd. We consider the instance of the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex problem given by~$(n,\vec{Q},\Turing')$. This instance is polynomial in~$\card{\mathfrak{w}}$ and~$\card{\ATuring}$, and asks whether \begin{equation} \label{proofT8:eq1} Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{\tetra(k,n)}, \dots, Q_n \mathsf{w}_n \in \overline{\Alphabet}^{\tetra(k,n)} \text{ : } \Turing'(\mathsf{w}_1,\dots,\mathsf{w}_n) \text{ accepts in time }\tetra(k,n). \end{equation} To conclude the proof, we show that~\Cref{proofT8:eq1} holds if and only if \begin{equation} \label{proofT8:eq2} Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} \text{ : } \Turing(\mathsf{w}_1,\dots,\mathsf{w}_m) \text{ accepts}. \end{equation} Indeed, directly from the property~\eqref{lem:almost-theorem-prop-3} of~$\Turing$, this equivalence implies that \Cref{proofT8:eq1} is satisfied if and only if $\mathfrak{w} \in \alang(\ATuring)$, leading to the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-hardness of the~$k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex problem directly by~\Cref{proposition:kaexppol-membership}. By definition of the polynomial $p \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$, if~$\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts, then it does so in at most $p(\max(\card{\mathsf{w}_1},\dots,\card{\mathsf{w}_m}))$ steps.
Therefore,~\Cref{proofT8:eq2} holds if and only if \begin{equation} \label{proofT8:eq3} Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} \text{ : } \Turing(\mathsf{w}_1,\dots,\mathsf{w}_m) \text{ accepts in time}~p(h(\card{\mathfrak{w}})^2). \end{equation} By~\Cref{lemma:tetra-property-1,lemma:tetra-property-2}, we have \begin{equation} \label{proofT8:eq4} \tetra(k,n) \ = \ \tetra(k,p(2 \cdot m)) \ \geq \ p(\tetra(k,2 \cdot f(\card{\mathfrak{w}}))) \ \geq \ p(\tetra(k,f(\card{\mathfrak{w}}))^2) \ \geq \ p(h(\card{\mathfrak{w}})^2). \end{equation} This chain of inequalities implies that~\Cref{proofT8:eq3} holds if and only if \begin{equation} \label{proofT8:eq5} Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2} \text{ : } \Turing(\mathsf{w}_1,\dots,\mathsf{w}_m) \text{ accepts in time}~\tetra(k,n), \end{equation} where, again, the right-to-left direction holds since, if $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts, then it does so in at most~$p(h(\card{\mathfrak{w}})^2)$ steps. Now, from the property~\eqref{lem:almost-theorem-prop-2} of $\Turing$, and again by~\Cref{proofT8:eq4} together with the assumption $p(\avar) \geq \avar$ (for all~$\avar \in \ensuremath{\mathbb{N}}$), we can extend the bounds on the words $\mathfrak{w}_1,\dots,\mathfrak{w}_m$ we consider, and conclude that~\Cref{proofT8:eq5} is equivalent to \begin{equation} \label{proofT8:eq6} Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{\tetra(k,n)},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{\tetra(k,n)} \text{ : } \Turing(\mathsf{w}_1,\dots,\mathsf{w}_m) \text{ accepts in time}~\tetra(k,n).
\end{equation} Lastly, since for all~$(\mathsf{w}_1,\dots,\mathsf{w}_m,\dots,\mathsf{w}_n) \in (\overline{\Alphabet}^*)^n$, $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ and $\Turing'(\mathsf{w}_1,\dots,\mathsf{w}_n)$ perform the same computational steps, we conclude that~\Cref{proofT8:eq6} holds iff~\Cref{proofT8:eq1} holds. \end{proof} \CorollarySIGMAKProblem* \begin{proof}(\textit{sketch}) Let~$f \colon \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ be a polynomial, and define~$h(\avar) \egdef \tetra(k,f(\avar))$. Consider an~$h$-time, $j$-alternation bounded~\ensuremath{\normalfont{\mathsf{ATM}}}\xspace~$\ATuring$ (i.e.~$\ATuring$ alternates between existential and universal states at most $j \in \PNat$ times), as well as a word~$\mathfrak{w} \in \Alphabet^*$. We want to check whether $\mathfrak{w} \in \alang(\ATuring)$. Let $m = \max(f(\card{\mathfrak{w}}),j)+3$. One can easily revisit~\Cref{lemma:almost-theorem-kaexp-alternation} with respect to~$\ATuring$ as above, so that the property~\eqref{lem:almost-theorem-prop-3} of the $m\TM$ $\Turing$ is updated as follows. \begin{center} $\mathfrak{w} \in \alang(\ATuring)$ \ iff \ $Q_1 \mathsf{w}_1 \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2},\dots, Q_m \mathsf{w}_m \in \overline{\Alphabet}^{h(\card{\mathfrak{w}})^2}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_m)$ accepts, \end{center} where $Q_1 = \exists$, for every $i \in [1,j]$, $Q_{i} \neq Q_{i+1}$, and for every $i \in [j+1,m]$, $Q_{i} = Q_{j+1}$. In particular, now $\vec{Q}= (Q_1,\dots,Q_m)$ is such that $\altern{\vec{Q}} = j$. The proof then proceeds analogously to the proof of~\Cref{theorem:kaexp-alternation-complexity}, the only difference being that for the \nTM $\Turing'$, we consider the quantifier prefix $(Q_1,\dots,Q_n)$ such that, for every $i \in [m+1,n]$, $Q_i = Q_m$, in order to keep the number of alternations bounded by $j$.
\end{proof} \section{Some properties of~$\tetra(k,n)$.} Before moving to the proofs of~\Cref{theorem:kaexp-alternation-complexity} and~\Cref{theorem:kaexp-j-prenex-complexity}, we discuss some simple properties of the tetration function~$\tetra$, which we will need later. In~\Cref{lemma:tetra-property-1,lemma:tetra-property-2,lemma:tetra-property-3} below, let~$k,n \in \PNat$. \begin{lemma} \label{lemma:tetra-property-1} $p(\tetra(k,n)) \leq \tetra(k,p(n))$, for every polynomial~$p(\avar) = \alpha \avar^d + \beta$ with~$\alpha,d,\beta \in \PNat$. \end{lemma} \begin{proof} Straightforward induction on~$k \in \PNat$: \begin{description}[nosep,leftmargin=*,itemsep=3pt] \item[base case: $k = 1$.] Trivially, $\alpha (2^n)^d + \beta \leq 2^{\alpha n^d + \beta}$. \item[induction step: $k > 1$.] By induction hypothesis, $\alpha\tetra(k-1,n)^d + \beta \leq \tetra(k-1, \alpha n^d + \beta)$, so,\\[3pt] $\alpha \tetra(k,n)^d + \beta = \alpha (2^{\tetra(k-1,n)})^d + \beta \leq 2^{\alpha\tetra(k-1,n)^d + \beta} \leq 2^{\tetra(k-1, \alpha n^d + \beta)} = \tetra(k,\alpha n^d + \beta)$. \qedhere \end{description} \end{proof} \begin{lemma} \label{lemma:tetra-property-2} $\tetra(k,n)^2 \leq \tetra(k,2n)$. \end{lemma} \begin{proof} For $k = 1$, we have $\tetra(1,n)^2 = (2^n)^2 = 2^{2n} = \tetra(1,2n)$. For $k > 1$, by~\Cref{lemma:tetra-property-1} $2\tetra(k-1,n) \leq \tetra(k-1,2n)$. So, ${\tetra(k,n)^2 = (2^{\tetra(k-1,n)})^2 = 2^{2\tetra(k-1,n)} \leq 2^{\tetra(k-1,2n)} = \tetra(k,2n)}$. \end{proof} We inductively define the function $\klog{k} : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ as follows: \begin{center} $\klog{1}(\avar) \egdef \ceil{\log_2(\avar)}$ \quad and \quad $\klog{k+1}(\avar) \egdef \klog{k}(\ceil{\log_2(\avar)})$. \end{center} \begin{lemma} \label{lemma:tetra-property-3} For every $m \in \ensuremath{\mathbb{N}}$, \ $m > \tetra(k,n)$ if and only if $\klog{k}(m) > n$.
\end{lemma} \begin{proof} Straightforward induction on~$k \in \PNat$: \begin{description}[nosep,leftmargin=*,itemsep=3pt] \item[base case: $k = 1$.] \ProofRightarrow Suppose $m > 2^n$. Then, $\klog{1}(m) = \ceil{\log_2(m)} \geq \log_2(m) > n$. \ProofLeftarrow Conversely, suppose $m \leq 2^n$. Then $\log_2(m) \leq n$. As $n \in \ensuremath{\mathbb{N}}$, $\ceil{\log_2(m)} \leq n$. \item[induction step: $k > 1$.] \ProofRightarrow Suppose $m > \tetra(k,n)$. So, $\ceil{\log_2(m)} \geq \log_2(m) > \tetra(k-1,n)$. By induction hypothesis, $\klog{k-1}(\ceil{\log_2(m)}) > n$, i.e.~$\klog{k}(m) > n$. \ProofLeftarrow Conversely, suppose $m \leq \tetra(k,n)$. Therefore, $\log_2(m) \leq \tetra(k-1,n)$. Since $n \in \ensuremath{\mathbb{N}}$, $\ceil{\log_2(m)} \leq \tetra(k-1,n)$. By induction hypothesis, $\klog{k}(m) = \klog{k-1}(\ceil{\log_2(m)}) \leq n$. \qedhere \end{description} \end{proof} \Cref{lemma:tetra-property-3} allows us to check whether $m > \tetra(k,n)$ holds in time~$\BigO{k \cdot m + n}$, by first computing~$r = \klog{k}(m)$ and then testing whether $r > n$. Crucially, in this way we avoid computing~$\tetra(k,n)$ explicitly. This fact is later used in~\Cref{lemma:almost-theorem-kaexp-alternation}, where we introduce a \nTM that tests whether the lengths of its inputs~$\mathfrak{w}_1,\dots,\mathfrak{w}_n$ are greater than~$\tetra(k,n)$, while running in polynomial time in $\card{\mathfrak{w}_1},\dots,\card{\mathfrak{w}_n}$, $k$ and $n$. For similar reasons, this machine also needs to compute the \defstyle{integer square root}, defined as~$\isqrt{\avar} \egdef \floor{\sqrt{\avar}\,}$. \begin{lemma} \label{lemma:isqrt} Given $n, m \in \ensuremath{\mathbb{N}}$, \ $m \geq n^2$ if and only if $\isqrt{m} \geq n$. \end{lemma} \begin{proof} \ProofRightarrow Suppose $m \geq n^2$. Then, $\sqrt{m} \geq n$. Since $n \in \ensuremath{\mathbb{N}}$, we conclude $\floor{\sqrt{m}} \geq n$. \ProofLeftarrow Suppose $\isqrt{m} \geq n$. Trivially, $\sqrt{m} \geq n$ and so $m \geq n^2$.
\end{proof} \section{Random stuff below} \begin{itemize} \item for every $i \in \ENat$, the logic $\QML{i}$ is a fragment of second-order modal logic, so not more expressive than graded modal logic (GML). \item let $i,j \in \ENat$ with $i < j$. $\existsL{i}$ can be expressed in $\QML{j}$ as $\aexistsL{j}{\aprop}\bigvee_{k \leq i}\Diamond^{k}(\aprop \land \varphi)$, where $\aprop$ is an atomic proposition not occurring in~$\varphi$. \item $\QML{1}$ is as expressive as GML. Note: \begin{center} $\aexistsL{1}{\aprop_1}\dots\aexistsL{1}{\aprop_k}\bigwedge_{i \in \interval{1}{k}}\Diamond(\varphi \land \aprop_i \land \bigwedge_{j \in [i+1,k]} \lnot \aprop_j)$ \end{center} is equivalent to $\Diamond_{\geq k} \varphi$ ($\aprop_1,\dots,\aprop_k$ not occurring in~$\varphi$). This means that, for every $i \in \ENat$, $\QML{i}$ is as expressive as GML. \item $\exists^i \avar \varphi$ can be translated into hybrid logic as $\downarrow \avarbis.\ \Diamond^i (\downarrow \avar.\ @_\avarbis \varphi)$, with $\avarbis$ a fresh variable. \item the satisfiability problem of $\QML{\ast}$ and the one of Sabotage Modal Logic are equireducible. \item In $\QML{\ast}$, every formula can be put into prenex normal form: \begin{align*} \Diamond \aexistsL{i}{\aprop} \varphi \ \iff \ \aexistsL{i+1}{\aprop} (\lnot \aprop \land \Diamond \varphi)\\ \Diamond \aforallL{i}{\aprop} \varphi \ \iff \ \aexistsL{1}{\apropbis} \aforallL{i+1}\aprop \Diamond (\apropbis \land \varphi) \end{align*} where $\apropbis$ does not appear in $\varphi$. \end{itemize} \begin{lemma} Let $\varphi$ in $\QML{i}$ be a formula of modal depth~$m$, with $d$ occurrences of the first-order quantification. There is a prenex formula $\psi$ in $\QML{i+m}$ with modal depth $m$ and at most $d \cdot m$ occurrences of the first-order quantification such that $\varphi \equiv \psi$. \end{lemma} \begin{lemma} Let $\varphi$ be a formula in $\fQML{\ast}$ and let $\mathcal{K}$ be a (finite) Kripke forest.
There is a deterministic algorithm that checks whether $\mathcal{K} \models \varphi$ in time $\BigO{\card{\varphi} \cdot \card{\mathcal{K}}^{\card{\varphi}}}$ and polynomial space. \end{lemma} \begin{proof} Simply rely on a model checking algorithm for first-order logic. \end{proof} \begin{lemma} Let $\varphi$ be a formula in $\sQML{\ast}$ and let $\mathcal{K}$ be a (finite) Kripke forest. There is a deterministic algorithm that checks whether $\mathcal{K} \models \varphi$ in time $\BigO{\card{\varphi} \cdot 2^{\card{\mathcal{K}} \cdot \card{\varphi}}}$ and polynomial space. \end{lemma} \begin{proof} Simply rely on a model checking algorithm for second-order logic. \end{proof} Note: the model checking of second-order logic can instead be solved in $\BigO{\card{\varphi} \cdot 2^{\card{\mathcal{K}} \cdot {\card{\varphi}}}}$. In terms of the satisfiability problem, this means that, if first-order quantification is replaced with second-order quantification, the hierarchy should update from $\textsc{NExpTime}\xspace$, $2\textsc{NExpTime}\xspace$, $3\textsc{NExpTime}\xspace$ $\dots$, to $\textsc{AExp}$_{\textsc{Pol}}$\xspace$, $2\textsc{AExp}$_{\textsc{Pol}}$\xspace$, $3\textsc{AExp}$_{\textsc{Pol}}$\xspace$ $\dots$ \section{Alternation problems for deterministic Turing machines} \label{sec:alternationproblems} In this section, we introduce the \defstyle{$k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex \TM problem}, a decision problem for deterministic multi-tape Turing machines, which we prove to be $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-complete in~\Cref{appendix:ATMs} (see below for the definition of this complexity class). The $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex problem is a straightforward generalisation of the \defstyle{TM alternation problem} shown $\textsc{AExp}$_{\textsc{Pol}}$\xspace$-complete in~\cite{Bozzelli17} and in~\cite[page~292]{Molinari19}, upon which our (self-contained) proofs are based.
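Both the problem below and its hardness proof are parameterised by the tower function $\tetra(k,n)$ and, via~\Cref{lemma:tetra-property-3}, by the iterated logarithm $\klog{k}$. The following sketch (Python assumed; the function names \texttt{tetra} and \texttt{klog} are ours) illustrates the two definitions and the comparison trick of~\Cref{lemma:tetra-property-3}:

```python
import math

def tetra(k: int, n: int) -> int:
    # tetra(0, n) = n and tetra(k, n) = 2^tetra(k-1, n)
    return n if k == 0 else 2 ** tetra(k - 1, n)

def klog(k: int, m: int) -> int:
    # klog_1(m) = ceil(log2(m)) and klog_{k+1}(m) = klog_k(ceil(log2(m)))
    for _ in range(k):
        m = math.ceil(math.log2(m))
    return m

# Lemma: m > tetra(k, n) iff klog_k(m) > n.  Deciding the comparison via
# klog only takes k iterated logarithms of m, so the tower itself never
# has to be built (here it is built only to double-check the lemma).
k, n = 2, 3
for m in (tetra(k, n), tetra(k, n) + 1):
    assert (m > tetra(k, n)) == (klog(k, m) > n)
```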
\subparagraph*{Notation for regular languages.} Let~$A$ and $B$ be two regular languages. As usual, we write $A \cup B$ and $A \setminus B$ for the set-theoretic union and difference of languages, respectively. With~$A \cdot B$ we denote the language obtained by concatenating words in~$A$ with words in~$B$. Then,~$A^0 = \{\epsilon\}$ and for all $n \in \ensuremath{\mathbb{N}}$, $A^{n+1} = A^n \cdot A$. Lastly, $A^* = \bigcup_{n \in \ensuremath{\mathbb{N}}} A^n$ and $A^+ \egdef A^* \setminus A^0$. Given a finite~\emph{alphabet} $\Alphabet$, $\Alphabet^{n}$ is the set of words over~$\Alphabet$ of length~$n \in \ensuremath{\mathbb{N}}$ and, given a finite word~$\mathfrak{w}$, we write $\card{\mathfrak{w}}$ for its length. \subparagraph*{Complexity classes.} We recall some standard complexity classes that appear throughout these notes. Below and across the whole paper, given natural numbers $k,n \geq 1$, we write $\tetra$ for the tetration function inductively defined as $\tetra(0,n) \egdef n$ and ${\tetra(k, n) \egdef 2^{\tetra(k-1,n)}}$. Intuitively, $\tetra(k,n)$ defines a tower of exponentials of height~$k$. \begin{itemize}[nosep] \item $k\textsc{NExpTime}\xspace$ is the class of all problems decidable with a non-deterministic Turing machine running in time~$\tetra(k,f(n))$ for some polynomial~$f : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$, on each input of length $n$. \item $\Sigma^{k\textsc{ExpTime}\xspace}_j$ is the class of all problems decidable with an alternating Turing machine (\ensuremath{\normalfont{\mathsf{ATM}}}\xspace,~\cite{ChandraKS81}) in time $\tetra(k,f(n))$, starting on an existential state and performing at most $j-1$ alternations, for some polynomial $f : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$, on each input of length~$n$. By definition, $\Sigma^{k\textsc{ExpTime}\xspace}_1 = k\textsc{NExpTime}\xspace$.
\item $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$ is the class of all problems decidable with an \ensuremath{\normalfont{\mathsf{ATM}}}\xspace running in time~$\tetra(k,f(n))$ and performing at most $g(n)$ alternations, for some polynomials~$f,g$, on each input of length~$n$. The inclusion $\Sigma^{k\textsc{ExpTime}\xspace}_j \subseteq k\textsc{AExp}$_{\textsc{Pol}}$\xspace$ holds for every $j \in \PNat$. \end{itemize} \subparagraph*{Deterministic $n$-tape TM.} Let $n \in \PNat$. Following the presentation in~\cite{Molinari19}, we define the class of \textit{deterministic $n$-tape Turing machines} (\nTM). A~\nTM is a deterministic machine $\Turing = (n,\Alphabet,\States,\qinit,\qacc,\qrej,\trans)$, where $\Sigma$ is a finite~alphabet containing a \emph{blank symbol}~$\blanksymb$, and $\States$~is a finite set of \emph{states} including the initial state~$\qinit$, the accepting state~$\qacc$ and the rejecting state~$\qrej$. The \nTM operates on $n$ distinct \emph{tapes} numbered from $1$ to $n$, and it has one \emph{read/write head} shared by all tapes. Every \emph{cell} of each tape contains a symbol from $\Alphabet$ and is indexed with a number from $\ensuremath{\mathbb{N}}$ (i.e.~the tapes are infinite only in one direction). Lastly, the deterministic transition function~${\trans\colon \States \times \Sigma \to (\States \times \Sigma \times \{-1,+1\}) \cup (\States \times [1,n])}$ is such that for every $\asymbol \in \Alphabet$, $\trans(\qacc,\asymbol) \egdef (\qacc,1)$ and $\trans(\qrej,\asymbol) \egdef (\qrej,1)$. A configuration of~$\Turing$ is given by the content of the $n$ tapes together with a \emph{positional state}~$(\astate,j,k) \in \States \times [1,n] \times \ensuremath{\mathbb{N}}$.
The content of the tapes is represented by an $n$-tuple of finite words $(\mathfrak{w}_1,\dots,\mathfrak{w}_n) \in (\Alphabet^*)^n$, where each $\mathfrak{w}_i$ specifies the content of the first $\card{\mathfrak{w}_i}$ cells of the $i$-th tape, after which the tape only contains the blank symbol~$\blanksymb$. The positional state~$(\astate,j,k)$ describes the current state~$\astate$ of the machine together with the position~$(j,k)$ of the read/write head, placed on the $k$-th cell of the $j$-th tape. At each step, given the positional state~$(\astate,j,k)$ of $\Turing$ and the symbol~$\asymbol \in \Alphabet$ read by the read/write head in position~$(j,k)$, one of the following occurs: \begin{itemize}[leftmargin=*,align=left] \item[ \textit{case $\trans(\astate,\asymbol) = (\astate',\asymbolbis,i)$, where $\astate' \in \States$, $\asymbolbis \in \Alphabet$ and $i \in \{-1,1\}$ (ordinary moves)}:] If $k + i \in \ensuremath{\mathbb{N}}$, then $\Turing$ overwrites the symbol $\asymbol$ in position $(j,k)$ with the symbol $\asymbolbis$. Afterwards~$\Turing$ changes its positional state to $(\astate',j,k+i)$. Otherwise ($k + i \notin \ensuremath{\mathbb{N}}$, i.e.~$k = 0$ and $i = -1$), $\Turing$ does not modify the tapes and changes its positional state to $(\qrej,1,0)$. \item[\textit{case $\trans(\astate,\asymbol) = (\astate',j')$, where $\astate' \!\in\, \States$ and $j' \!\in\, [1,n]$ (jump moves)}:] The read/write head moves to the $k$-th cell of the $j'$-th tape, i.e.~$\Turing$ changes its positional state to $(\astate',j',k)$. Jump moves do not modify the content of the $n$ tapes. \end{itemize} Initially, each tape contains a word in $\Sigma^*$ written from left to right starting from position~$0$ of the tape. Hence, an input of~$\Turing$ can be described as a tuple $(\mathsf{w}_1,\dots,\mathsf{w}_n) \in (\Sigma^*)^n$ where, for all $j \in [1,n]$, $\mathsf{w}_j$ is the input word for the $j$-th tape.
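As an illustration only (not the encoding used in the proofs), the step relation just described can be sketched in a few lines of Python, with tapes stored as sparse dictionaries and the two kinds of moves distinguished by the shape of the value returned by the transition function:

```python
BLANK = "_"       # blank symbol of the alphabet
Q_REJ = "q_rej"   # rejecting state

def step(delta, tapes, pos):
    """One step of an n-tape DTM. `tapes` maps a tape index in [1, n] to a
    dict cell -> symbol; `pos` is the positional state (q, j, k)."""
    q, j, k = pos
    a = tapes[j].get(k, BLANK)        # symbol under the read/write head
    move = delta[(q, a)]
    if len(move) == 3:                # ordinary move (q', b, i), i in {-1, +1}
        q2, b, i = move
        if k + i < 0:                 # head would fall off the tape: reject
            return (Q_REJ, 1, 0)
        tapes[j][k] = b               # overwrite the scanned cell
        return (q2, j, k + i)
    q2, j2 = move                     # jump move (q', j'): same cell, tape j'
    return (q2, j2, k)

# read 'a' on tape 1, rewrite it to 'b' moving right, then jump to tape 2
delta = {("q0", "a"): ("q1", "b", +1), ("q1", BLANK): ("q2", 2)}
tapes = {1: {0: "a"}, 2: {}}
pos = step(delta, tapes, ("q0", 1, 0))   # ordinary move
pos = step(delta, tapes, pos)            # jump move
```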
We write $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ for the run of the \nTM on input $(\mathsf{w}_1,\dots,\mathsf{w}_n) \in (\Sigma^*)^n$, starting from the positional state $(\qinit,1,0)$. The run~$\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$~\emph{accepts} in time~$t \in \ensuremath{\mathbb{N}}$ if $\Turing$ reaches the accepting state~$\qacc$ in at most~$t$ steps. \begin{definition}[$k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex \TM problem] \label{definition:KEXP-alt-problem} Fix~$k \in \PNat$. \begin{tabular}{rl} {\rm\textbf{Input:}}&$(n,\vec{Q},\Turing)$, where~${n \in \PNat}$ is written in unary, ${\vec{Q} = (Q_1,\dots,Q_n) \in \{\exists,\forall\}^n}$ \\ &with $Q_1 = \exists$, and $\Turing$ is a \nTM~working on alphabet~$\Alphabet$.\\ {\rm\textbf{Question:}}&is it true that\\ &$Q_1 \mathsf{w}_1 \in \Sigma^{\tetra(k,n)}, \dots, Q_n \mathsf{w}_n \in \Sigma^{\tetra(k,n)}$ : $\Turing(\mathsf{w}_1,\dots,\mathsf{w}_n)$ accepts in time $\tetra(k,n)$ ? \end{tabular} \end{definition} We analyse the complexity of the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex \TM problem depending on the type of \emph{quantifier prefix}~$\vec{Q}$. For arbitrary quantifier prefixes, the problem is $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-complete. The membership in~$k\textsc{AExp}$_{\textsc{Pol}}$\xspace$ is straightforward, whereas the hardness follows by reduction from the acceptance problem for alternating Turing machines running in $k$-exponential time and performing a polynomial number of alternations, as we show in~\Cref{appendix:ATMs}. \begin{restatable}{theorem}{TheoremKAEXPAltProblem} \label{theorem:kaexp-alternation-complexity} The $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex \TM problem is $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-complete. \end{restatable} For bounded alternation, let $\altern{\vec{Q}}$ be the number of alternations between existential and universal quantifiers in~$\vec{Q} \in \{\exists,\forall\}^n$, plus one.
That is, $\altern{\vec{Q}} \egdef 1 + \card{\{i \in [2,n] : Q_i \neq Q_{i-1}\}}$. For $j \in \PNat$, the $\Sigma^{k\textsc{ExpTime}\xspace}_j$-prenex \TM problem is the $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex \TM problem restricted to instances~$(n,\vec{Q},\Turing)$ with $\altern{\vec{Q}} = j$. Refining~\Cref{theorem:kaexp-alternation-complexity}, this problem is~$\Sigma^{k\textsc{ExpTime}\xspace}_j$-complete. \begin{restatable}{theorem}{CorollarySIGMAKProblem} \label{theorem:kaexp-j-prenex-complexity} The $\Sigma^{k\textsc{ExpTime}\xspace}_j$-prenex \TM problem is $\Sigma^{k\textsc{ExpTime}\xspace}_j$-complete. \end{restatable} \Cref{theorem:kaexp-j-prenex-complexity} implies that the problem is $k\textsc{NExpTime}\xspace$-complete on instances where $\vec{Q} \in \{\exists\}^n$. \input{tiling} \section{Summary of the results, and roadmap} \label{sec:summary} \todo[inline]{write text.} \section{Alternation problems for multi-tiling systems} The $k\textsc{AExp}$_{\textsc{Pol}}$\xspace$-prenex TM problem can be easily recast as a problem on multi-tiling systems.
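The measure $\altern{\vec{Q}}$, which also bounds the restricted variants of the tiling problem below, is immediate to compute from the quantifier prefix; a minimal sketch (Python assumed, with $\exists$/$\forall$ encoded as the characters \texttt{E}/\texttt{A}, an encoding of ours):

```python
def altern(Q):
    # alt(Q) = 1 + |{ i in [2, n] : Q_i != Q_{i-1} }|, i.e. the number of
    # maximal blocks of identical quantifiers in the prefix Q
    return 1 + sum(1 for i in range(1, len(Q)) if Q[i] != Q[i - 1])

altern("EEAE")   # blocks EE | A | E, so the measure is 3
altern("E" * 5)  # a purely existential prefix has measure 1
```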
\subparagraph*{Multi-tiling.} A \emph{multi-tiling system} $\mathcal{P}$ is a tuple $(\mathsf{T},\mathsf{T}_0,\mathcal{T}_{\text{acc}},\mathsf{H},\mathsf{V},\mathcal{M},n)$ such that $\mathsf{T}$ is a finite set of \defstyle{tile types}, $\mathsf{T}_0,\mathcal{T}_{\text{acc}} \subseteq \mathsf{T}$ are sets of \defstyle{initial} and \defstyle{accepting} tiles, respectively, $n \in \PNat$ (written in unary) is the \emph{dimension} of the system, and $\mathsf{H},\mathsf{V},\mathcal{M} \subseteq \mathsf{T} \times \mathsf{T}$ represent the horizontal, vertical and multi-tiling matching relations, respectively. Fix $k \in \PNat$. We write $\widehat{\Alphabet}$ for the set of words of length $\tetra(k,n)$ over an alphabet~$\Alphabet$. The \defstyle{initial row} $\tilinginit{\mathfrak{f}}$ of a map $\mathfrak{f} \colon [0,\tetra(k,n)-1]^2 \to \mathsf{T}$ is the word $\mathfrak{f}(0,0),\mathfrak{f}(0,1),\dots,\mathfrak{f}(0,\tetra(k,n){-}1)$ from $\widehat{\mathsf{T}}$. A \emph{tiling} for the \emph{grid} $[0,\tetra(k,n)-1]^2$ is a tuple $(\mathfrak{f}_1,\mathfrak{f}_2,\dots,\mathfrak{f}_n)$ such that \begin{description}[nosep,itemsep=1pt,topsep=1.5pt,after=\vspace{1.5pt}] \item[maps.\namedlabel{multit-maps}{\textbf{maps}}] $\mathfrak{f}_\ell \colon [0,\tetra(k,n)-1]^2 \to \mathsf{T}$ assigns a tile type to each position of the grid, for all $\ell \in [1,n]$; \item[init \& acc.\namedlabel{multit-initacc}{\textbf{init \& acc}}] $\tilinginit{\mathfrak{f}_\ell} \in \widehat{\mathsf{T}_0}$ for all $\ell \in [1,n]$, and $\mathfrak{f}_n(\tetra(k,n)-1,j) \in \mathcal{T}_{\text{acc}}$ for some $0 \leq j < \tetra(k,n)$; \item[hori.\namedlabel{multit-hori}{\textbf{hori}}] $(\mathfrak{f}_\ell(i,j),\mathfrak{f}_\ell(i+1,j)) \in \mathsf{H}$, for every $\ell \in [1,n]$, $i \in [0,\tetra(k,n)-2]$ and $0 \leq j < \tetra(k,n)$; \item[vert.\namedlabel{multit-vert}{\textbf{vert}}] $(\mathfrak{f}_\ell(i,j),\mathfrak{f}_\ell(i,j+1)) \in \mathsf{V}$, for every $\ell \in [1,n]$, $j \in [0,\tetra(k,n)-2]$ and $0 \leq i
< \tetra(k,n)$; \item[multi.\namedlabel{multit-multi}{\textbf{multi}}] $(\mathfrak{f}_{\ell}(i,j),\mathfrak{f}_{\ell+1}(i,j)) \in \mathcal{M}$, for every $1 \leq \ell < n$ and $0 \leq i,j < \tetra(k,n)$. \end{description} \vspace{2pt} \begin{definition}[$k$-exp alternating multi-tiling problem] \label{definition:KEXP-alt-problem} Fix~$k \in \PNat$. \begin{tabular}{rl} {\rm\textbf{Input:}} & a multi-tiling system~$\mathcal{P} = (\mathsf{T},\mathsf{T}_0,\mathcal{T}_{\text{acc}},\mathsf{H},\mathsf{V},\mathcal{M},n)$\\ & and ${\vec{Q} = (Q_1,\dots,Q_n) \in \{\exists,\forall\}^n}$ with $Q_1 = \exists$.\\ {\rm\textbf{Question:}} & is it true that\\ &\begin{tabular}{rl} ${Q_1 \mathfrak{w}_1 \in \widehat{\mathsf{T}_0}} \dots {Q_n \mathfrak{w}_n \in \widehat{\mathsf{T}_0}}$ : & there is a tiling $(\mathfrak{f}_1,\dots,\mathfrak{f}_n)$ of $[0,\tetra(k,n)-1]^2$\\ &such that~$\tilinginit{\mathfrak{f}_\ell} = \mathfrak{w}_\ell$ for all $\ell \in \interval{1}{n}$ ? \end{tabular} \end{tabular} \end{definition} For the case where $k = 1$, deciding whether an instance of the $k$-exp alternating multi-tiling problem is accepted has been shown to be $\textsc{AExp}_{\textsc{Pol}}$-complete in~\cite[App.~E.7]{Molinari19}, whereas $k\textsc{AExp}_{\textsc{Pol}}$-completeness for arbitrary~$k$ can be shown thanks to~\Cref{theorem:kaexp-alternation-complexity}. Moreover, the problem becomes~$\Sigma^{k\textsc{ExpTime}\xspace}_j$-complete whenever $\altern{\vec{Q}} = j$, by~\Cref{theorem:kaexp-j-prenex-complexity}. \begin{restatable}{theorem}{TheoremKTILINGProblem} \label{theorem:kaexp-TILING-complexity} The $k$-exp alternating multi-tiling problem is $k\textsc{AExp}_{\textsc{Pol}}$-complete.
\end{restatable} \begin{restatable}{theorem}{CorollarySIGMAKTILINGProblem} \label{theorem:kaexp-j-prenex-TILING-complexity} The $k$-exp alternating multi-tiling problem is $\Sigma^{k\textsc{ExpTime}\xspace}_j$-complete when restricted to inputs~$(\mathcal{P}, \vec Q)$ such that $\altern{\vec{Q}} = j$. \end{restatable} \input{appendix-alternating-tiling}
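The conditions maps, init \& acc, hori, vert and multi above can be unpacked with a small Python sketch. This checker is purely illustrative and not part of the formal reduction: the encoding of each layer as a dictionary from grid positions to tile types, and the use of a plain integer N in place of tetra(k, n), are assumptions made for the example.

```python
def is_tiling(fs, T0, Tacc, H, V, M, N):
    """Check the maps/init&acc/hori/vert/multi conditions on an N x N grid.

    fs is a list of n dicts, each mapping a position (i, j) -> tile type;
    N plays the role of tetra(k, n)."""
    n = len(fs)
    # init: the initial row of every layer uses initial tiles only
    if any(fs[l][(0, j)] not in T0 for l in range(n) for j in range(N)):
        return False
    # acc: some tile in the last row of the last layer is accepting
    if not any(fs[n - 1][(N - 1, j)] in Tacc for j in range(N)):
        return False
    for l in range(n):
        for i in range(N):
            for j in range(N):
                if i + 1 < N and (fs[l][(i, j)], fs[l][(i + 1, j)]) not in H:
                    return False  # horizontal matching violated
                if j + 1 < N and (fs[l][(i, j)], fs[l][(i, j + 1)]) not in V:
                    return False  # vertical matching violated
                if l + 1 < n and (fs[l][(i, j)], fs[l + 1][(i, j)]) not in M:
                    return False  # multi-tiling matching violated
    return True
```

A single-tile system whose relations contain the pair of that tile admits the constant tiling; emptying the accepting set makes the same maps fail the acc condition.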
\section{Introduction} Efficient coding theories have been extremely influential in understanding early sensory processing in general \cite{attneave1954some,barlow1961possible} and visual processing in the retina in particular \cite{srinivasan1982predictive,huang2011predictive,atick1992does,atick1992could,van1992theory,haft1998theory,doi2012efficient}. Many efficient coding theories predict that the retina should whiten sensory stimuli \cite{graham2006can}. For instance, whitening is optimal from the predictive coding theory perspective, which postulates that only the unpredicted part of a signal should be transmitted. An optimal linear predictive coding filter is a whitening filter, and when applied to the retina this theory predicts that the receptive fields of retinal ganglion cells whiten visual stimuli \cite{srinivasan1982predictive,huang2011predictive}. Redundancy reduction theory proposes that redundant information transmission at the retinal output should be minimized by decorrelated firing of ganglion cells \cite{atick1992does}. Information theory, too, favors whitening. A linear filter that maximizes mutual information between noisy inputs and noisy outputs is a whitening filter \cite{van1992theory,haft1998theory,linsker1988self}. Finally, whitening was proposed by another normative theory \cite{pehlevan2015normative}, whose starting point was efficient dimensionality reduction through similarity matching \cite{pehlevan2015hebbian}. Here, using electrophysiological recordings from 152 ganglion cells in the salamander retina responding to natural movies, we test three key predictions of the whitening theory. 1) Retinal ganglion cells should be uncorrelated. We confirm previous tests of this prediction, finding that retinal ganglion cells are indeed decorrelated compared to the strong spatial correlations of natural scenes \cite{pitkow2012decorrelation}, but significant redundancy remains \cite{puchalla2005redundancy}. 2) The output power spectrum should be flat.
Whereas we observe flattening of the output compared to the input, it is not completely flat. 3) If the number of channels is reduced from the input to the output, the top input principal components are transmitted. We start by detailing the dataset we used for this work. Then, we present our results, which test the three predictions. Our results show that perfect whitening is not the whole story, consistent with findings of \cite{puchalla2005redundancy,pitkow2012decorrelation}. \section{Experimental Dataset and Notation} The dataset was collected by Michael Berry's laboratory at Princeton University. It consists of recorded responses from 152 larval tiger salamander retinal ganglion cells to a 7-minute natural movie stimulus. The stimulus consists of a gray-scale movie of leaves and branches blowing in the wind (figure \ref{fig:decorr}-A) and is projected onto the array from a CRT monitor at 60 Hz. The frame size of the stimulus is $512 \times 512$. Details of the recording procedure, including the spike-sorting algorithm, are described in \cite{prentice2015error}. In this paper, $\bf X$ and $\bf Y$ denote the stimulus and the firing rate matrices, respectively. $\bf X$ is an $n\times T$ matrix, where $n$ is the total number of pixels in each frame (reshaped to form a vector) and $T$ is the number of time samples. In our setting, for computational purposes, stimulus frames are down-sampled to $n=128\times128 = 16,384$ and the 7-minute movie shown at 60 Hz corresponds to $T=25,200$. Moreover, $\bf Y$ is the $k\times T$ firing rate matrix, where $k=152$ is the number of ganglion cells from which data were recorded. The firing rates are reported based on spike counts in sliding 80 ms time windows. Both $\bf X$ and $\bf Y$ are centered in time.
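The dimensions quoted above are internally consistent, and the centering step can be written out in a few lines. The sketch below uses only the numbers stated in the text; the helper name is a hypothetical convenience, and no actual recordings are loaded.

```python
import numpy as np

# Dimensions from the text: 128 x 128 down-sampled frames, 152 cells,
# and a 7-minute movie shown at 60 Hz.
n = 128 * 128       # pixels per frame (16,384)
k = 152             # number of recorded ganglion cells
T = 7 * 60 * 60     # time samples: 7 min x 60 s/min x 60 frames/s (25,200)

def center_in_time(A):
    """Subtract each row's temporal mean; rows index pixels or cells."""
    return A - A.mean(axis=1, keepdims=True)
```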
\section{Results} \subsection{Do retinal ganglion cells decorrelate natural scenes?} \begin{figure} \centering \includegraphics[scale=.31,clip,trim=0in 1.2in 0in 0in]{decorr5.pdf} \caption{Outputs of ganglion cells are decorrelated compared to natural scenes. A. A sample frame from the stimulus. B. Estimated receptive field for an On and an Off ganglion cell. C. Pearson correlation coefficient of firing rates for all ganglion cell-type pairs as a function of the distance between their receptive field centers. Each curve corresponds to the median correlation coefficient in a distance window of 1 pixel. The corresponding correlation coefficients and curve are shown for the stimulus.} \label{fig:decorr} \end{figure} We start by testing whether the outputs of ganglion cells are indeed decorrelated. This question was to a large extent answered by previous studies \cite{puchalla2005redundancy,pitkow2012decorrelation}, and here we confirm their findings. Our goal is to study the correlation between firing rates of ganglion cells as a function of the distance between their receptive field centers; therefore, it is necessary to first estimate the receptive field of each cell. We use the regularized spike-triggered average, also known as the regularized reverse correlation technique \cite{chichilnisky2001simple}, to estimate the spatial receptive field of each cell. In this technique, the firing rate matrix is assumed to be a linear function of the stimulus matrix, $\bf Y = RX $. Here, $\bf R$ is the $k \times n$ receptive field matrix for ganglion cells. $\bf R$ can be estimated using ridge regression, which has the following closed form: $$\bf R = YX^T(XX^T+\alpha I)^{-1}.$$ Here, $\alpha$ is the regularization parameter, chosen by 10-fold cross-validation, and $\bf I$ is the identity matrix. Examples of the estimated On and Off receptive fields are shown in figure \ref{fig:decorr}-B.
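The closed-form ridge estimate above can be sketched on synthetic data. The toy dimensions, the made-up linear ground truth, and the fixed $\alpha$ are assumptions for illustration; the real analysis uses the full-size matrices and cross-validated $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 64, 5, 2000      # toy sizes; the real data has n = 16384, k = 152
X = rng.standard_normal((n, T))                       # centered stimulus
R_true = rng.standard_normal((k, n)) / np.sqrt(n)     # made-up ground truth
Y = R_true @ X + 0.1 * rng.standard_normal((k, T))    # linear response + noise

def estimate_rf(X, Y, alpha):
    """Regularized reverse correlation: R = Y X^T (X X^T + alpha I)^{-1}."""
    G = X @ X.T + alpha * np.eye(X.shape[0])
    # G is symmetric, so solve(G, X Y^T)^T equals Y X^T G^{-1}
    # without forming the explicit inverse.
    return np.linalg.solve(G, X @ Y.T).T

R_hat = estimate_rf(X, Y, alpha=10.0)
```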
The center of the receptive field for each cell is identified as the location with the maximum (for On cells) or minimum (for Off cells) value of the estimated receptive field. Figure \ref{fig:decorr}-C shows the Pearson correlation coefficient (CC) of firing rates for all ganglion cell-type pairs as a function of the distance between their receptive field centers. This is depicted for On-On, Off-Off, and On-Off pairs. To show the trend of each group, the median correlation coefficient in a distance window of 1 pixel is also shown. We compare this correlation with the spatial correlation in the stimulus. Consistent with Pitkow and Meister's results \cite{pitkow2012decorrelation}, firing rates of ganglion cells are less correlated than the pixels in the naturalistic stimulus. However, significant correlations remain, questioning the validity of the whitening theory (see also \cite{puchalla2005redundancy}). \subsection{Is the power spectrum of ganglion cells flattened?} \begin{figure*}[t!] \centering \includegraphics[scale=.3,clip,trim=0in 3.8in 0in 0in]{psd_decay3.pdf} \caption{The power spectrum of ganglion cells is not flattened. A. Eigenvalues of the stimulus and ganglion cell firing rate covariance matrices. Eigenvalues are normalized by the largest eigenvalue and are shown in log scale. The index is also normalized by the largest value. Lines with slopes -4.7 and -5.3, respectively, can be fitted to the two curves. B. Scatter plot of the top 152 eigenvalue pairs. A line can be fitted to this plot with slope 1.09. } \label{fig:psd_decay} \end{figure*} Next, we test whether the power spectrum of ganglion cells is flattened. In Figure \ref{fig:psd_decay}-A, we show the eigenvalues of the ganglion cell firing rate covariance matrix, normalized by the highest eigenvalue. We also show a similar plot for the visual stimuli covariance matrix for comparison. We observe a decay in the power spectrum of the firing rates.
To compare the rate of power spectrum decay, we plot the sorted eigenvalues of the firing rate covariance matrix against the top 152 eigenvalues of the natural stimuli covariance matrix, Figure \ref{fig:psd_decay}-B, and observe that a line can be fitted to this plot with slope 1.09, suggesting again that the decay is similar (a perfectly equalized output spectrum would lead to a slope of 0). This analysis assumed that in the scatter plot of Figure \ref{fig:psd_decay}-B, the correct correspondence between stimulus and firing-rate covariance eigenvalues is in order of their magnitude. It is possible that due to noise, we misestimate the rank-order of eigenvalues. We explore this possibility in more detail below. \subsection{Do retinal ganglion cells project natural scenes to their principal subspace?} Finally, we test whether retinal ganglion cell outputs represent a projection of natural scenes to their principal subspace. To be mathematically precise, we want to test if ${\bf Y} = {\bf O}{\bf D}{\bf P}{\bf X}$, where ${\bf P}$ is a $k\times n$ matrix ($k<n$) whose rows are the principal eigenvectors of the stimulus covariance matrix, ${\bf D}$ is a diagonal, nonnegative, $k\times k$ matrix, which could rescale the power in each component, e.g., for whitening, and ${\bf O}$ is an arbitrary $k$-dimensional orthogonal matrix. Since we do not know ${\bf O}{\bf D}$, we do not have a direct prediction for what ${\bf Y}$ should be. However, a prediction can be made for the right-singular vectors of ${\bf Y}$. Consider a singular-value decomposition (SVD) ${\bf X}={\bf U}_{X}{\bf \Lambda}_{X}{\bf V}_{X}^T$, with singular values in decreasing order. Then, ${\bf Y} = {\bf O}{\bf D}{\bf P}{\bf U}_{X}{\bf \Lambda}_{X}{\bf V}_{X}^T$. But the rows of ${\bf P}$ are the (transposed) first $k$ columns of ${\bf U}_X$, and hence ${\bf \Lambda}_{Y} := {\bf D}{\bf P}{\bf U}_{X}{\bf \Lambda}_{X}$ is a $k\times T$ rectangular diagonal matrix.
Then, ${\bf Y}={\bf O}{\bf \Lambda}_{Y}{\bf V}_{X}^T$ is an SVD, and we can claim that the top $k$ right singular vectors of ${\bf X}$ and ${\bf Y}$ (columns of ${\bf V}_{ X}$) should match. Of course the match will not be perfect due to noise, which may affect our analysis in two ways. First, our estimate of the ordering of singular vectors will be imperfect. Second, the singular vector estimates will be imperfect. \begin{figure*}[t!] \centering \includegraphics[scale=.2,clip,trim=.7in .5in .7in .1in]{eigmatch8.pdf} \caption{Input-output singular vector pair identification. A. The red curve shows the correlation coefficient for the input-output right-singular-vector pair with the highest peak value among all pairs. Other curves show correlation coefficients between right singular vectors for 100 repeats of randomly time-permuted data (see text for more details). In this plot, the output singular vector is shifted by -30 to +30 time samples. B. Heatmap of peak correlation coefficients for all of the possible input-output pairs. The singular vectors of input and output belonging to the top 50 matches are reordered to maximize the matched-pair correlation coefficients. For details of the sorting procedure see text. C. Same as B for randomly permuted data. D. Red: Histogram of correlation coefficients for 152 identified pairs from original data. Blue: Histogram of highest peak correlation coefficient for 100 randomly permuted trials. A Gaussian function can be fitted to this histogram. The threshold is set to be three standard deviations from the mean of these 100 points. This leads to the selection of 31 pairs with significant peak correlation coefficient.} \label{fig:eigmatch} \end{figure*} \begin{figure*} \centering \includegraphics[scale=.25,clip,trim=.5in 3.65in .2in 0in]{TF8.pdf} \caption{A. Scatterplot of the top selected eigenvalues and the fitted linear function. The slope of the fitted line is 0.08. B.
Probability of selecting each right singular vector by the selection process (see text for more details) for the stimulus matrix. Singular vectors are ordered by values of the corresponding singular values (high-to-low). C. Same as B for the ganglion cell firing rate matrix.} \label{fig:tr} \end{figure*} We therefore use the following procedure. We calculate the SVD of ${\bf X}={\bf U}_{X}{\bf \Lambda}_{X}{\bf V}_{X}^T$ and ${\bf Y}={\bf U}_{Y}{\bf \Lambda}_{Y}{\bf V}_{Y}^T$ from the data. We match right singular vectors of $\bf X$ and $\bf Y$, columns of ${\bf V}_{ X}$ and ${\bf V}_{ Y}$, using CC as a similarity measure between these vectors. We compute CC for all possible pairs of input-output singular vectors and rank each pair based on this similarity measure. To account for the delay of the neural response to the stimulus, CC is computed for several time shifts between stimuli and firing rates and the peak value is taken as the similarity. Fig. \ref{fig:eigmatch}-A (red curve) shows the CC between the pair of singular vectors with the highest peak value. There is a considerable correlation between the two vectors when compared to peak CC values for randomly permuted data (see also below). Finally, we use a greedy sorting procedure to identify top pairs. We select the pair with the highest rank as the first pick. This pair fixes one particular input index and one output index of singular vectors. We then remove all other pairs involving these indices and repeat a similar process for the remaining pairs. Fig. \ref{fig:eigmatch}-B shows the heatmap of similarity measures for sorted pairs. The diagonal structure implies that there is an approximate one-to-one mapping between the top sorted singular vectors. To test the significance of our matching between singular vectors, we provide a reference by breaking the temporal relationship between stimuli and firing rates and redoing the similarity analysis.
We randomly permute columns in $\bf Y$ (neural response matrix) while we keep $\bf X$ (stimulus matrix) the same. This dataset is called the \textit{randomly permuted} data in this paper. We redo the similarity analysis for 100 trials of this random permutation. Fig. \ref{fig:eigmatch}-A shows, for each of these 100 randomly permuted datasets, the CC curve with the highest peak. The red curve, which corresponds to the original data, has a higher peak CC than all of these curves. Fig. \ref{fig:eigmatch}-C shows the heatmap of peak CC values for one repeat of the randomly permuted data. Comparing this figure to Fig. \ref{fig:eigmatch}-B, we observe that correlations in the original data are higher compared to the randomly permuted case. To identify pairs with significantly higher peak CC in the original data, it is necessary to find a threshold CC based on the randomly permuted data. Figure \ref{fig:eigmatch}-D shows the process to find this threshold. We first take the highest peak CC from the 100 randomly permuted trials. Then, the threshold is set to be three standard deviations from the mean of these 100 points. This leads to the selection of 31 pairs with significant peak CC. Out of these 31 input-output pairs, 12 were among the top 31 input singular values and 20 were among the top 31 output singular values. To summarize our findings in this section up until now, 1) we found that there is a significant correlation between some pairs of stimulus and firing rate right singular vectors, 2) we can use these correlations to come up with a one-to-one mapping between the singular vectors, and 3) the one-to-one mapping we obtained suggests that the top principal components of the input are represented at the output, consistent with the whitening theory.
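The greedy sorting procedure above can be sketched on synthetic matrices built to share temporal modes, so that their top right singular vectors pair up by construction. The generative model, the weights, and the use of plain cosine similarity (with no time shifts) are assumptions made for the sketch, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, T = 40, 8, 300
# Synthetic stand-ins for X and Y sharing k temporal modes.
V = np.linalg.qr(rng.standard_normal((T, k)))[0]   # shared temporal modes
A = np.linalg.qr(rng.standard_normal((n, k)))[0]   # spatial mixing for "X"
B = np.linalg.qr(rng.standard_normal((k, k)))[0]   # mixing for "Y"
w = 2.0 ** np.arange(k, 0, -1)                     # well-separated weights
X = A @ (w[:, None] * V.T) + 0.01 * rng.standard_normal((n, T))
Y = B @ (w[:, None] * V.T) + 0.01 * rng.standard_normal((k, T))

_, _, VtX = np.linalg.svd(X, full_matrices=False)  # rows: right sing. vectors
_, _, VtY = np.linalg.svd(Y, full_matrices=False)

# Similarity of every output-input singular-vector pair (|cosine| as a
# proxy for the peak CC; the real analysis also scans over time shifts).
C = np.abs(VtY @ VtX[:k].T)

# Greedy sorting: repeatedly take the best remaining pair and retire
# its input and output indices.
pairs, C_work = [], C.copy()
for _ in range(k):
    i, j = np.unravel_index(np.argmax(C_work), C_work.shape)
    pairs.append((i, j, C[i, j]))
    C_work[i, :] = -1.0
    C_work[:, j] = -1.0
```

By construction the matching is one-to-one, and with well-separated weights the greedy picks line up along the diagonal, mirroring the diagonal structure of Fig. 3-B.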
After identifying the top input-output pairs of singular vectors, which also gives a mapping between covariance matrix eigenvalues, we go back to the question we asked in the previous section: does the power spectrum of retinal ganglion cells decay less steeply than the power spectrum of natural scenes? In Fig. \ref{fig:tr}-A, we show the scatterplot of the normalized covariance eigenvalues with our new matching and the fitted linear function. This should be compared to Figure \ref{fig:psd_decay}-B. We found that the slope is lower in this case, 0.08, suggesting a flattening effect at the output. To test the validity of our results, we perform further analyses. We estimate the probability of a singular vector being selected by our selection process (described above). First, we divide the data into 20 non-overlapping segments and repeat the pair identification process for these subsets of data. The probability of selecting each singular vector is estimated by dividing the total number of selections for that singular vector by 20. To provide a reference, we randomly permute columns in each of the neural response subset matrices while keeping the stimulus subset matrix the same and re-estimate the probabilities. Figures \ref{fig:tr}-B and \ref{fig:tr}-C show the probability of selection for the top 150 right singular vectors of the stimulus and firing rate matrices, respectively. We find that the top singular vectors, both for stimuli and firing rates, have a significantly higher-than-chance probability of selection. These figures suggest that the top right singular vectors of the stimulus matrix are highly likely to have a significant CC with the top right singular vectors of the ganglion cell firing rate matrix. This observation is consistent with the theory that ganglion cell activity represents a projection of the visual stimulus onto its principal subspace. \section{Conclusion} In this paper, we put three key predictions of the whitening theory to the test.
We found that 1) retinal ganglion cell outputs are not perfectly uncorrelated (see also \cite{puchalla2005redundancy,pitkow2012decorrelation}), 2) the power spectrum of ganglion cell firing is not completely flat, but rather slopes downwards, and 3) there are significant correlations between the top right singular vectors of visual stimuli and retinal ganglion cell firing rates, suggesting that retinal output transmits dynamics in the principal subspace of visual scenes. Overall, our findings suggest that whitening alone cannot fully explain the response properties of retinal ganglion cells. \section*{Acknowledgment} The authors would like to thank Michael Berry, Jason Prentice and Mark Ioffe for providing the dataset and related instructions. Bin Yu acknowledges partial research support from the National Science Foundation (NSF) Grant CDS/E-MSS 1228246, and the Center for Science of Information, an NSF Science and Technology Center, under Grant Agreement CCF-0939370.
\section{Introduction} The Cosmic Neutrino Background (C$\nu$B), analogous to the Cosmic Microwave Background (CMB), carries invaluable independent information on the early universe \cite{dolgov,quigg,long,linholder}. The primordial electron, muon, and tau neutrinos decoupled in helicity eigenstates at temperatures $\sim$ MeV, much greater than neutrino masses, and cooled in the expanding universe to a present temperature $\sim 1.7 \times 10^{-4}$ eV. Detection of the C$\nu$B, a major experimental challenge, remains an elusive goal. The PTOLEMY experiment \cite{ptolemy} proposes to use inverse tritium beta decay (ITBD) \cite{weinberg}, $\rm \nu_e + ^3H \to e^- +^3He$, to capture the relic neutrinos. As the ITBD detection rate depends on the helicity as well as the Dirac vs.\ Majorana nature of the relic neutrinos \cite{long,numag}, a key question is to investigate how the helicities of relic neutrinos evolve as they propagate through the Universe. As first noted in Ref.~\cite{duda}, a neutrino propagating in a gravitational field can develop an amplitude to have its helicity reversed; as the neutrino trajectory is bent by a gravitational field, the bending of its spin lags the bending of the momentum \cite{silenko,dvornikov}. A simple example is a finite-mass neutrino with negative helicity shot straight upward from Earth at less than escape velocity; the neutrino will at a certain point reverse course and fall back down, but its spin direction will not be affected by the Earth's gravity (neglecting the Lense-Thirring effect from the Earth's rotation). The result is that the neutrino returns with its momentum parallel to its spin, i.e., its helicity is flipped. As another example, the momentum of a non-relativistic neutrino in a circular orbit around a non-rotating gravitating point mass precesses by an angle $2\pi$ per orbit, while the spin precession is a relativistic correction \cite{schiff}.
Thus, non-relativistically, the neutrino helicity oscillates between negative and positive helicity in half an orbit. A second effect that can modify the helicity of Dirac, but not Majorana, neutrinos arises from their expected magnetic moment \cite{marciano,benlee,fujikawa,lynn,s-w,bell-Dirac,bell,dolgov,gs}, which is diagonal in the mass eigenstate basis. Majorana neutrinos can only have non-diagonal transition magnetic moments between different mass eigenstates. As a Dirac neutrino propagates through astrophysical magnetic fields, from cosmic to galactic to magnetic fields in supernovae and neutron stars, its spin precesses and its helicity is modified. As we discussed, the helicity modification is sensitive both to the neutrino magnetic moment and to the characteristics of the magnetic fields~\cite{numag}. In estimating the helicity flipping probability for relic neutrinos in both cosmic and galactic magnetic fields, we found that even a neutrino magnetic moment well below the value suggested by the XENON1T experiment could significantly affect the helicities of relic neutrinos, and their detection rate via the ITBD reaction~\cite{numag}. We focus here on the gravitational effect on the helicities of relic neutrinos as they propagate from the time of decoupling in the early universe, of order one second after the Big Bang, to the present. Owing to the charged current interaction for $\nu_e$ and $\bar \nu_e$, the reaction cross sections for electron neutrinos are larger than for muon and tau neutrinos. An immediate consequence is that electron neutrinos decouple from the plasma of the early universe at a later time and at a lower temperature than muon and tau neutrinos. As estimated in Ref.~\cite{dolgov}, $\nu_\tau$ and $\nu_\mu$ freeze out at temperature $T_\mu\sim 1.5$ MeV, while $\nu_e$ freeze out at temperature $T_e\sim 1.3$ MeV.
However, the temperature differences at freezeout do not affect the present temperature, $T_{\nu 0}= 1.945 \pm 0.001$ K = $(1.676\pm 0.001)\times 10^{-4}$ eV, of the various neutrino species (a factor $(11/4)^{1/3}$ smaller than that of the cosmic microwave background). Relic neutrinos are produced in flavor eigenstates, a coherent sum of neutrino mass eigenstates, and in wave packets whose structure is determined effectively by the electrons and positrons scattering with the $\nu$ and $\bar\nu$. The wave packets are limited in size by electron mean free paths at the time of decoupling; as calculated in Ref.~\cite{henning}, a characteristic electron mean free path is of order $1/\alpha^2 T$ to within logarithmic corrections, where $\alpha = e^2/4\pi$; thus at $T\sim$ 1 MeV, the electron mean free path is of order $10^6-10^7$ fm. The wave packets of flavor eigenstates quickly disperse into three effectively decoherent wavepackets, each with a given mass, owing to their velocity differences. The velocity dispersion of the mass eigenstates of a relativistic neutrino with momentum $p$ at decoupling is $\delta v/c \simeq \frac12 \Delta m^2 /p^2$, where $\Delta m^2$ is the characteristic neutrino mass-squared splitting \cite{masses}. With $\Delta m^2$ on the characteristic scale of $10^{-4}$ eV$^2$, the velocity dispersion for $p \sim$ 1 MeV is $\sim 1.5 \times 10^{-6}$ cm/sec; thus in the first second alone after the neutrinos decouple, dispersion would spread the mass components some $10^7$ fm, at least on the scale of the wave packets in which the neutrinos are produced. The decrease of $p$ in time only increases the velocity dispersion. By contrast, the velocity dispersion within a wave packet of definite mass, $ \sim (\delta p/p) m^2 /p^2$, is much smaller, since $\delta p$ within a wavepacket is small compared with the packet's mean momentum $p$.
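The dispersion estimates above are easy to reproduce numerically; the sketch below simply plugs in the numbers quoted in the text.

```python
# Numerical check of the velocity-dispersion estimates in the text.
dm2_eV2 = 1.0e-4        # characteristic mass-squared splitting, eV^2
p_eV = 1.0e6            # momentum at decoupling ~ 1 MeV, in eV
c_cm_s = 2.998e10       # speed of light, cm/s
cm_to_fm = 1.0e13       # 1 cm = 10^13 fm

dv_over_c = 0.5 * dm2_eV2 / p_eV**2     # dv/c = (1/2) dm^2 / p^2 ~ 5e-17
dv_cm_s = dv_over_c * c_cm_s            # ~ 1.5e-6 cm/s
spread_fm = dv_cm_s * 1.0 * cm_to_fm    # spread in the first second, ~ 10^7 fm
```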
At freezeout the neutrinos are left in a relativistic thermal distribution, \beq f(p) = \frac{1}{e^{p/T}+1}, \label{distribution} \eeq where $p$ is the neutrino momentum and $T$ the temperature; this distribution is maintained throughout the evolution of the universe, even though neutrinos in at least two of the three mass states are non-relativistic at present. In the following section, Sec.~\ref{spinrotation}, we lay out the basic physics of momentum and spin rotation by a weak gravitational potential, giving self-contained semiclassical derivations of the effects from general relativity in Appendix A. Then, in Sec.~\ref{expansion}, we calculate the net momentum rotation of primordial neutrinos propagating through the gravitational inhomogeneities of the expanding universe -- the gravitational lensing of the C$\nu$B -- and the net helicity changes the neutrinos undergo. As a related application, we estimate in Sec.~\ref{solarneutrinos} the expected helicity rotation of solar neutrinos caused by their gravitational interaction with the Sun itself. In the concluding section, Sec.~\ref{conclusion}, we compare the gravitational bending with the rotation of neutrino spins owing to a finite neutrino magnetic moment, estimated earlier \cite{numag}. Appendix B provides a detailed derivation of the bending of neutrinos emitted from compact spherical objects such as the Sun, neutron stars, and supernovae. We work in units with $\hbar = c =1$. \section{Spin rotation in a weak gravitational potential \label{spinrotation}} When a particle of mass $m$ and velocity $\vec v$ propagates through a weak gravitational potential $\Phi$, its direction of momentum, $\hat p$, bends at a rate \beq \frac{d\hat p}{dt}\Big|_\perp = -\left(v + \frac{1}{v}\right)\vec \nabla_\perp \Phi, \label{mombend} \eeq where the gradient is taken perpendicular to the direction of momentum.
We measure the spin precession in $\Phi$ in terms of the particle spin $\vec S$ in the particle's local Lorentz rest frame, reached by a Lorentz boost without rotation. The spin precesses at the slower rate~\cite{voronov,silenko}, \beq \frac{d\vec S}{dt}\Big|_\perp = -\frac{2\gamma+1}{\gamma+1}\vec S\cdot \vec v\,\, \vec\nabla_\perp\Phi, \label{spinbend} \eeq where $\gamma = 1/\sqrt{1-v^2}$ is the usual Lorentz factor. These results are derived in Appendix \ref{gr}, including the expansion of the universe. In a helicity eigenstate $\hat S\cdot\hat p = \hat S\cdot\hat v = h = \pm 1$, one has equivalently, \beq \left[h\frac{d\hat S}{dt} - \frac{d\hat p}{dt}\right]_\perp = \frac{m}{p} \vec\nabla_\perp\Phi. \label{deltaspinbend} \eeq As a consequence of the spin lagging the momentum, the helicity of the particle is rotated by gravitational fields. For total angular bend $\delta\theta_p$ of the momentum, determined by Eq.~(\ref{mombend}), the angular bend, $\delta \theta$, of the spin with respect to the momentum is thus \beq \delta\theta = \delta\theta_{s}-\delta\theta_p = - \frac{\delta\theta_p}{\gamma(1+v^2)}. \label{lag} \eeq where $\delta\theta_s$ is the bending angle of the spin, calculated from Eq.~(\ref{spinbend}). \subsection{Helicity change in passing a distant point mass} A simple application is the deflection of a relativistic spinning particle passing a distant point mass $M$. Integrating the transverse acceleration (\ref{mombend}) over the particle trajectory from $t= -\infty$ to $\infty$ one finds the expected deflection, \beq \Delta\theta_p = \frac{2MG}{bv^2}(1+v^2), \label{Deltatheta} \eeq where $G$ is the Newtonian gravitational constant, and $b$ is the impact parameter. (For $v=1$ this is the Einstein weak field light-bending result.) 
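As a quick numerical check of the deflection formula, setting $v=1$ and taking a ray grazing the Sun reproduces the classic 1.75 arcsec light-bending value. The solar mass and radius below are assumed standard constants, not values from the text.

```python
import math

# Assumed solar constants.
GM_sun = 1.327e20       # G * M_sun, m^3 s^-2
c = 2.998e8             # speed of light, m/s
b = 6.957e8             # impact parameter = solar radius, m

v = 1.0                                           # speed in units of c
MG = GM_sun / c**2                                # geometric mass, ~1.48 km
dtheta_p = 2.0 * MG * (1.0 + v**2) / (b * v**2)   # deflection, radians

arcsec = dtheta_p * (180.0 / math.pi) * 3600.0    # ~1.75 arcsec for light
```

For $v=1$ the spin tracks the momentum, so this bending produces no helicity change; the helicity rotation enters only through the $1/\gamma$ suppression discussed next.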
The spin axis precesses by the smaller amount, \beq \Delta \theta_{s} = \frac{2MG}{b}\frac{2\gamma+1}{\gamma+1}, \eeq and the angular change of the spin axis with respect to the momentum axis is \beq \Delta \theta= - \frac{2MG}{b \gamma v^2}. \label{thetaM} \eeq In the fully relativistic limit, the spin tracks the momentum, leading to no change in the particle helicity. On the other hand, in the non-relativistic limit the spin rotates negligibly compared with the bending of its momentum, and thus a change in direction of the momentum leads to a change in particle helicity. For spin rotation with respect to the momentum by angle $\theta$ from an initial helicity state, the helicity changes from $\pm 1 $ to $\pm \cos\theta $, and the probability of observing the spin flipped to the opposite direction, which is half the magnitude of the change in helicity, is then $P_f = \sin^2(\theta/2)$. \section{Integrating over the expansion of the universe \label{expansion}} We now calculate the momentum bendings, and then spin rotations, as neutrinos propagate past the density fluctuations in the early universe. To take into account the expansion of the universe, we work in terms of the standard Friedman-Robertson-Walker metric, \beq ds^2 = a(u)^2[-du^2 + d\vec x\,^2]. \label{ga} \eeq Here $u$ is the conformal time, related to coordinate time, $t$, by $dt = a(u)\,du$, with the metric in homogeneous space; and $\vec x$ are the comoving spatial coordinates, related to the usual spatial coordinates, $\vec r$, by $d\vec r = a(u) d\vec x$. We take $a(u)=1$ at present. 
In the presence of small energy density fluctuations, $\rho(x) = \bar \rho + \delta\rho(x)$, with $\bar \rho$ the spatially uniform average density, the metric (\ref{ga}) becomes \cite{hartle} \beq ds^2 = a(u)^2[-(1+2\Phi) du^2 + (1-2\Phi)d\vec x\,^2], \label{metricphi} \eeq where the scalar potential $\Phi$ is given in terms of the density fluctuations by \beq \nabla_x^2\Phi = 4\pi G\left(\delta\rho(\vec x\,)+3\delta P(\vec x\,)\right) a(u)^2, \label{phirho} \eeq with $\delta P$ the variation of the pressure from uniformity, and $a^{-1}\nabla_x$ the gradient with respect to $\vec r$. In the matter-dominated era (denoted by $\cal M$), the pressure term can be neglected, and (\ref{phirho}) becomes the familiar Newtonian equation. Furthermore, in this era linear perturbation theory \cite{dodelson} implies that \beq \delta(\vec x\,) \equiv \delta\rho(\vec x\,)/\bar\rho \eeq grows as $a$; thus, since $\bar\rho$ scales as $1/a^3$, we see immediately that $\delta \rho(\vec x\,)$ scales as $a^{-2}$ and thus $\nabla_x^2\Phi(\vec x)$ and $\Phi(\vec x)$ as functions of $\vec x$ are constant in time. In the radiation-dominated era (denoted by $\cal R$), $\Phi(\vec x)$ as a function of $x$ is also constant in time, since in this era linear perturbation theory implies that $\delta$ grows rather as $a^2$ at large scales, while $\bar\rho$ and $\bar P$ scale as $1/a^4$. Furthermore, the pressure fluctuations in this era are simply 1/3 of the density fluctuations, so that $\nabla_x^2\Phi = 8\pi G a^2 \bar\rho\, \delta(\vec x\,)$. To calculate the angular changes in the trajectory of a neutrino, we neglect the neutrino mass at this point for simplicity. Then Eq.~(\ref{mombend}) gives a total angular change $ - 2\int d\ell \, \nabla_{x\perp} \Phi(\vec x)$, where $\ell$ is the comoving length along the path.
To lowest order the integral is along the straight path of the neutrino, parametrized in the absence of density fluctuations by the coordinate $x_3$. The average of the square of the angular deflection of the particle trajectory is then \beq \langle (\Delta\theta_p)^2\rangle = 4\int dx_3 dx'_3 \vec\nabla_{x\perp}\cdot\vec\nabla_{x'\perp}\langle \Phi(x_3) \Phi(x_3')\rangle, \label{13a} \eeq where \beq \langle \Phi(\vec x) \Phi(\vec x')\rangle = \int \frac{d^3k}{(2\pi)^3} e^{i\vec k\cdot(\vec x-\vec x\,')} \Psi(k) \eeq is the spatially isotropic, (conformal) time-independent auto-correlation function of the gravitational perturbations; the vectors $\vec k$ are comoving. Then \beq \langle (\Delta\theta_p)^2\rangle = 4\int dx_3 dx'_3 \int \frac{d^3k}{(2\pi)^3} e^{i k_3(x_3-x_3')} k_\perp^2 \Psi(k). \label{13b} \eeq The integration over $x_3'$ essentially gives $2\pi\delta(k_3)$, so that \beq \langle (\Delta\theta_p)^2\rangle = \frac{2}{\pi} \int du \int dk_\perp k_\perp^3\Psi(k_\perp), \label{13c} \eeq where $x_3 = u$ along the trajectory of the neutrino. The spectral function $\Psi(k)$ is directly related to the spectral function of the density correlation function, \beq \langle \delta(\vec x\,)\delta(\vec x\,')\rangle = \int \frac{d^3k}{(2\pi)^3} e^{i\vec k\cdot(\vec x-\vec x\,')} P(k), \label{rhoB} \eeq by \beq \Psi(k) = (4\pi G \bar \rho a^2)^2 \zeta \frac{P(k)}{k^4}, \eeq with $\zeta $ = 1 in $\cal M$, and 4 in $\cal R$ where $\delta P= \delta\rho/3$. The spectral function $P(k)$ (with dimensions of volume) depends on the magnitude of $\vec k$ and the time. Its general structure \cite{Planck} is an approximately Harrison-Zel'dovich long-wavelength linear growth in $k$ below a maximum at wavevector $k_H$; for $k>k_H$, $P(k)$ falls roughly as $k^{-\nu}$ with $\nu>0$. For $k$ below $k_H$, $P(k)$ scales in $\cal M$ as $a^2$ (even beyond the peak at $k_H$), and as $a^4$ in $\cal R$.
In terms of $P(k)$ (with the subscript $\perp$ on the integration variable dropped), \beq \langle (\Delta\theta_p)^2\rangle = 32\pi \zeta \int du (G\bar \rho a^2)^2 \int\frac{dk}{k} P(k). \label{13} \eeq The angular bending of the neutrino trajectories and modification of the helicity are largest in the matter-dominated era, on which we now focus. \blueflag{ We include dark energy, which affects the cosmological expansion after redshifts of order 1/2. The relation between the scale factor and the conformal time is determined by \beq \frac{da}{du} = \sqrt{\frac{8\pi G \bar \rho(a) a^4}{3}} = H_0 \sqrt{\Omega_M a + \Omega_V a^4}, \label{dadude} \eeq where $\bar \rho(a) = \rho_M/a^3 + \rho_V$, with $\rho_M/\rho_c \equiv \Omega_M \simeq 0.32 $ the present average mass fraction (including dark matter) in the universe, $\rho_V/ \rho_c \equiv \Omega_V \simeq 0.68$ the dark energy fraction, and $\rho_c$ the present critical closure density; $H_0 = \sqrt{8\pi G \rho_c/3}$ is the present Hubble constant \cite{frieman,Planck6}. With $P_0(k) = P(k)/a^2$, the angular deviations produced in propagation from matter-radiation equality (where $a(t_{eq}) \equiv a_{eq}\sim 0.8\times 10^{-4}$) to now are given by \beq \langle (\Delta\theta_p)^2\rangle \simeq \frac{9}{2\pi}H_0^4 {\cal P} \int_{u_{eq}}^{u_0} du (\Omega_M+\Omega_Va^3)^2, \label{dtheta11a} \eeq where ${\cal P} \equiv \int_0^\infty (dk/k) P_0(k)$. Numerical integration of the Planck collaboration data (\cite{Planck}, Fig.~19) yields ${\cal P} \simeq 7.25 \times 10^4$ (Mpc/h)$^3$. Using $a$ as the independent integration variable in evaluating the rotation angles, we find \beq \langle (\Delta\theta_p)^2\rangle &=& \frac{9}{2\pi} {\cal P}H_0^3 \int_{a_{eq}}^1 \frac{da}{a^2}\left(\Omega_M a +\Omega_V a^4\right)^{3/2}. \label{detheta12a} \eeq The $a$ integral is approximately 0.56.
In addition, ${\cal P}H_0^3 \simeq 2.69 \times 10^{-6}$ (independent of the Hubble parameter $h$), and thus \beq \langle (\Delta\theta_p)^2\rangle \simeq 2.2 \times 10^{-6}. \label{thetamdea} \eeq } This result indicates that gravitational lensing of the CMB would be \blueflag{$\sim$ 5.1 arcmin,} within a factor of two of the value $\sim$ 2.7 arcmin from more precise calculations,\footnote{Owing to reionization of intergalactic H atoms below redshift $z\sim 10$ and subsequent photon-electron scattering, the lensing of the CMB is most efficient at lower redshift. (Neutrino lensing does not experience such restrictions; the weak electron-neutrino scattering after reionization is insignificant in comparison.) For example, integration over a sharply limited range of $z< 6$ in Eq.~(\ref{detheta12a}) reduces the mean bending angle to $\sim$ 3.9 arcmin.} e.g., \cite{tonytony}. We now consider the effect of the neutrino mass, which is significant only in $\cal M$. For finite mass, the integration over $u$ in Eq.~(\ref{13}) now becomes \beq \frac14\int_{u_{eq}}^{u_0}du \,v(u)\left(v(u)+\frac1{v(u)}\right)^2, \label{18} \eeq as one sees from Eq.~(\ref{mombend}), with $dx_3 = v(u) du$. \blueflag{This modification leads to \beq \langle (\Delta\theta_p)^2\rangle &=&\frac{9}{8\pi} {\cal P}H_0^3 \int_{a_{eq}}^1 \frac{da}{a^2}\left(\Omega_M a +\Omega_V a^4\right)^{3/2} \nonumber\\ && \hspace{48pt} \times v(a)\left(v(a)+\frac1{v(a)}\right)^2. \label{mombend10a} \eeq The velocity of a neutrino with present momentum $p_0$, and thus with physical momentum $p = p_0/a$ at scale factor $a$, is $v(a) = 1/\sqrt{1+m_\nu^2 a^2/p_0^2}$. } The root mean square bending angle, $\sqrt{\langle(\Delta \theta_p)^2\rangle}$, is shown in Fig.~\ref{nugrav_f} as a function of the neutrino mass.
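The numbers above are easy to reproduce by quadrature. A sketch (assuming the quoted inputs $\Omega_M=0.32$, $\Omega_V=0.68$, $a_{eq}=0.8\times 10^{-4}$, ${\cal P}H_0^3 = 2.69\times 10^{-6}$, and substituting $a=s^2$ to tame the $a^{-1/2}$ behavior of the massless-limit integrand of Eq.~(\ref{detheta12a}) near $a_{eq}$):

```python
import math

OMEGA_M, OMEGA_V = 0.32, 0.68   # matter and dark-energy fractions (text values)
A_EQ = 0.8e-4                   # scale factor at matter-radiation equality
P_H0_CUBED = 2.69e-6            # cal-P * H_0^3, as quoted in the text

def simpson(f, a, b, n=20000):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

def integrand(s):
    # integrand of Eq. (detheta12a) after the substitution a = s^2:
    # da/a^2 (Om_M a + Om_V a^4)^{3/2}  ->  2 ds (Om_M s^2 + Om_V s^8)^{3/2}/s^3
    a = s * s
    return 2.0 * (OMEGA_M * a + OMEGA_V * a**4) ** 1.5 / s**3

I = simpson(integrand, math.sqrt(A_EQ), 1.0)                 # the "0.56" integral
msq = (9.0 / (2.0 * math.pi)) * P_H0_CUBED * I               # <(Delta theta_p)^2>
rms_arcmin = math.sqrt(msq) * (180.0 / math.pi) * 60.0       # rms deflection
```

The integral comes out near 0.56 and the rms deflection near 5 arcmin, consistent with the values quoted in the text.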
In the limit of a very slow neutrino, $p_0/m_\nu \ll 1$, the integral in Eq.~(\ref{mombend10a}) is $\simeq 0.3 m_\nu/p_0$, and we find \blueflag{ \beq \langle (\Delta\theta_p)^2\rangle \simeq \frac{2.7}{8\pi} {\cal P}H_0^3 \frac{m_\nu}{p_0}; \label{thetamde} \eeq } the bending of a non-relativistic neutrino is larger, as one sees in Fig.~\ref{nugrav_f}, than the bending of a relativistic neutrino. \begin{figure} \includegraphics*[width=8.5cm]{nugrav_fig1.eps} \caption{The root mean square bending angles of the neutrino momentum $\sqrt{\langle (\Delta\theta_p)^2\rangle}$, spin $\sqrt{\langle (\Delta\theta_s)^2\rangle}$, and the bending of the spin with respect to the momentum $\sqrt{\langle \theta^2\rangle}$, Eq.~(\ref{spinbend10}), in the matter-dominated era, as functions of the neutrino mass. All curves are calculated for the neutrino momenta equal to the present neutrino temperature. The contribution to the bending angles from the radiation-dominated era is negligible. } \label{nugrav_f} \end{figure} In the radiation-dominated era, from the time of neutrino decoupling, $t_d \sim$ 1 s, to matter-radiation equality, the scale factor is linear in conformal time, $a(u) = (8\pi G\bar\rho a^4/3)^{1/2} u$, and thus from Eq.~(\ref{13}), \beq \langle (\Delta\theta_p)^2\rangle = \frac{18}{\pi} \frac{a_{eq}^4}{u_{eq}^4} \int^{u_{eq}}_{u_d} \frac{du}{a(u)^4} \int\frac{dk}{k}P(k,u). \label{thetardeu} \eeq Density fluctuations grow in $\cal R$ as $a^2$, and thus $P(k)$ grows as $a^4$ outside the horizon scale. The horizon grows as $t \sim a^2$ so that the physical wavevector of the horizon decreases as $1/a^2$ and the comoving wavevector decreases as $1/a$. This implies that the maximum, $P(k_H)$, of $P(k)$ for comoving $k$ grows as $a^3$, until matter-radiation equality, after which it grows as $a^2$.
Since $\int dk P(k)/k$ is essentially proportional to $P(k_H)$, we infer, \beq \int \frac{dk}{k}P(k,u) &\simeq& \frac{a(u)^3}{a_{eq}^3} \int \frac{dk}{k}P(k,u_{eq})\nonumber\\ &&\simeq \frac{a(u)^3}{a_{eq}} \int \frac{dk}{k}P_0(k). \eeq With (\ref{thetardeu}), \beq \langle (\Delta\theta_p)^2\rangle &\simeq& \frac{18}{\pi} \frac{a_{eq}^{2}}{u_{eq}^3} \ln\left(\frac{a_{eq}}{a_d}\right) \int \frac{dk}{k}P_0(k), \nonumber\\ &\sim&\blueflag{a_{eq}^{1/2} \ln\left(\frac{a_{eq}}{a_d}\right) {\cal P}H_0^3}, \eeq where $a(u_d) \equiv a_d \sim 2.3\times 10^{-10}$, and we scale to the present, \blueflag{writing $u_{eq} \sim a_{eq}^{1/2}/H_0$.} The squared angular bending of momentum in the radiation-dominated era is thus of order \blueflag{a few} percent of that in the matter-dominated era, \blueflag{ Eq.~(\ref{detheta12a})}. \blueflag{ The spin axis rotates away from the momentum axis only in the matter-dominated regime, where the finite neutrino mass can play a role. To estimate the rotation of the spin itself, we replace, according to Eq.~(\ref{spinbend}), the factor $(v+1/v)$ by $v(2\gamma+1)/(\gamma+1)$ in Eq.~(\ref{mombend10a}), so that \beq \langle (\Delta\theta_s)^2\rangle &=& \frac{9}{8\pi} {\cal P} H_0^3 \int_0^1 \frac{da}{a^2} \left(\Omega_M a +\Omega_V a^4\right)^{3/2} \nonumber\\ && \hspace{60pt} \times v^3\left(\frac{2\gamma+1}{\gamma+1}\right)^2 . \label{mombend12} \eeq Similarly, the probability of spin rotation away from a pure helicity state is, according to Eqs.~(\ref{mombend}) and (\ref{lag}), given by Eq.~(\ref{mombend10a}) with the factor $(v+1/v)$ replaced by $1/\gamma v =m_\nu/p$, \beq \langle \theta^2\rangle &=& \frac{9}{8\pi} {\cal P}H_0^3 \int_0^1 \frac{da}{a^2}\left(\Omega_M a +\Omega_V a^4\right)^{3/2} \left(\frac{1}{v}-v\right), \nonumber\\ \label{spinbend10} \eeq where \beq \left(\frac{1}{v}-v\right) = \frac{m^2a^2}{p_0\sqrt{p_0^2 + m^2 a^2}}.
\eeq \begin{figure} \includegraphics*[width=8.5cm]{nugrav_fig2a.eps} \caption{\blueflag{ The integrand $R(a)$ of the $a$ integral in Eq.~(\ref{spinbend10}), showing the dependence of the root mean square bending angle of the neutrino spin relative to the momentum as a function of the scale factor $a$, for two neutrino masses, and momentum equal to the present neutrino temperature. }} \label{nugrav_fig2} \end{figure} Figure~\ref{nugrav_f} shows the bending of the momentum, Eq.~(\ref{mombend10a}), the bending of the spin, calculated using Eq.~(\ref{mombend12}), and the bending of the spin axis with respect to the momentum axis, Eq.~(\ref{spinbend10}), as a function of the mass of the neutrino, for the neutrino momentum equal to the temperature. Similarly Fig.~\ref{nugrav_fig2} shows the root mean square bending angle of the spin with respect to the momentum as a function of the scale factor $a$, for two representative neutrino masses. As this figure shows, the onset of the role of dark energy in the expansion of the universe leads to a relative increase in the bending in recent epochs, $a\gtrsim 0.3$. } The equality of the spin rotation with respect to the momentum and the momentum rotation for a non-relativistic neutrino, seen in Fig.~\ref{nugrav_f}, is simply a consequence of the absence of spin rotation of a non-relativistic neutrino in a gravitational field; for a relativistic neutrino, $\langle \theta^2 \rangle$ is suppressed by a factor of order $(m_\nu/2 p_0)^2$ compared with the momentum bending (\ref{thetamdea}). To put the scale of bending in context, we note from Eq.~(\ref{thetaM}) that the spin rotation of a marginally non-relativistic neutrino ($p\sim m_\nu$) is of order that which a neutrino would experience in passing a solar mass neutron star at a distance $\lesssim 10^4$ km.
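The three curves of Fig.~\ref{nugrav_f} follow from direct quadrature of Eqs.~(\ref{mombend10a}), (\ref{mombend12}), and (\ref{spinbend10}). A sketch (assuming $p_0$ equal to the present neutrino temperature, taken here as $1.68\times 10^{-4}$ eV, the quoted ${\cal P}H_0^3$, and a lower limit $a_{eq}$ in all three integrals, which is immaterial):

```python
import math

OMEGA_M, OMEGA_V = 0.32, 0.68
A_EQ = 0.8e-4
P_H0_CUBED = 2.69e-6      # cal-P * H_0^3, as quoted in the text
T_NU = 1.68e-4            # present neutrino temperature in eV; p_0 = T assumed

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

def msq_angles(m_nu, p0=T_NU):
    """Mean square momentum, spin, and spin-vs-momentum rotation angles,
    Eqs. (mombend10a), (mombend12), (spinbend10); a = s^2 substitution,
    m_nu > 0 required (gamma is computed from x = m a/p0)."""
    def weights(s):
        a = s * s
        x = m_nu * a / p0
        v = 1.0 / math.sqrt(1.0 + x * x)
        gam = math.sqrt(1.0 + x * x) / x
        common = 2.0 * (OMEGA_M * a + OMEGA_V * a**4) ** 1.5 / s**3
        return (common * v * (v + 1.0 / v) ** 2,
                common * v**3 * ((2 * gam + 1) / (gam + 1)) ** 2,
                common * (1.0 / v - v))
    pref, s0 = 9.0 / (8.0 * math.pi) * P_H0_CUBED, math.sqrt(A_EQ)
    mom = pref * simpson(lambda s: weights(s)[0], s0, 1.0)
    spin = pref * simpson(lambda s: weights(s)[1], s0, 1.0)
    rel = pref * simpson(lambda s: weights(s)[2], s0, 1.0)
    return mom, spin, rel
```

The limits visible in the figure emerge automatically: for a non-relativistic neutrino the spin-vs-momentum angle tracks the momentum bending while the spin itself hardly rotates, and for a relativistic one the spin tracks the momentum while the relative angle is strongly suppressed.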
\section{Helicity changes of solar neutrinos \label{solarneutrinos}} A related application of helicity rotation by gravitational fields is the spin rotation of solar neutrinos in the gravitational fields of the Sun. To estimate the effects, we consider neutrinos emitted in the $z$-direction, focussing first on those emitted at a given transverse distance, $b$, from the $z$-axis, and distance $r_0$ from the center of the star. Since emission at $-b$ leads to the same helicity change as $b$, and there is no coherence between emission from the points $\pm |b|$, we may take $b>0$ throughout. Then the relative bending of the spin and momentum of these neutrinos is, from Eq.~(\ref{deltaspinbend}), given by \beq \gamma v^2 \theta(b,r_0) &=& \int_{z_0}^\infty dz \, \nabla_y \Phi(r) = -b\int_{z_0}^\infty dz \, \frac{G M(r)}{r^3}, \nonumber\\ \label{solbend} \eeq where $M(r)$ is the stellar mass interior to radius $r$, and $z_0 = \pm \sqrt{r_0^2-b^2}$, with $z$ measured from the center of the star. The dependence on the neutrino mass is entirely through the velocity dependent factor, $1/\gamma v^2$. Owing to the spherical symmetry of the Sun, the average bending of the neutrinos beginning at the two values of $z_0$ is just the same as if the neutrinos started from $z_0=0$. Thus, in calculating the average helicity bending angle, we can replace the lower limit in the integral by 0; the average is independent of $r_0$. Averaging as well over the solar volume, weighted by $p_\nu (r)$, the normalized distribution of neutrino production in the Sun, we derive, as detailed in Appendix B, the average bending angle \beq \langle\theta\rangle &=& - \frac{G}{\gamma v^2}\int_0^{R_\odot} 4\pi r_0 dr_0 p_\nu(r_0) \int_0^\infty dr\frac{M(r)}{r^2} f(r,r_0), \nonumber\\ \label{solbend3} \eeq where \beq f(r,r_0) = \Theta(r_0-r) r W(r_0/r) + \Theta(r-r_0) r_0 W(r/r_0) \nonumber\\ \eeq with the elliptic integral \beq W(\xi) = \int_0^1 dx\frac{\sqrt{1-x^2}}{\sqrt{\xi^2-1+x^2}}, \quad \xi>1. 
\eeq Equation~(\ref{solbend3}) is a convenient starting point for integrating numerically over the empirical mass distribution $M(r)$ and neutrino emissivity distribution $p_\nu(r)$ of the Sun; using solar model distributions \cite{Bahcall} we find \begin{equation} \langle \theta \rangle = -\frac{1.54}{\gamma v^2} \frac{GM}{R}. \label{numerical} \end{equation} For a uniform mass density $\rho(r)$ and uniform $p_\nu(r)$, the prefactor becomes 0.76. As seen in Fig.~\ref{solarnu}, the helicity bending angle $|\langle \theta\rangle|$ of non-relativistic solar neutrinos is sizable; however, only a tiny fraction of solar neutrinos are non-relativistic. On the other hand, heavy particles with non-zero spin, such as dark photons, emitted from the Sun would have their helicities significantly modified by the Sun's gravitational field. How such a helicity rotation of a dark photon could be observed remains an interesting question. \begin{figure}[hbp] \begin{center} \includegraphics[width=8.0cm]{nugrav_fig3.eps} \end{center} \caption[*]{\baselineskip 1pt The mean helicity rotation angle $|\langle \theta \rangle|$ for solar neutrinos as a function of the neutrino $\beta=v/c$.} \label{solarnu} \end{figure} To understand the magnitude of the helicity angle bending from the Sun, we note that the average emission radius of neutrinos, $\langle r_0 \rangle = \int d^3r\, r\, p_\nu(r)$, is $\simeq 0.11 R_\odot$, and thus $b\ll R_\odot$. Since $b=r_0\sin\omega$, where $\omega$ is the polar angle, the average value of $b$ is $\pi\langle r_0 \rangle/4$. We can thus replace the $z$ integral in Eq.~(\ref{solbend}) approximately by $\int_0^\infty dr GM(r)/r^3$, independent of $b$; a simple integration by parts using $dM(r)/dr = 4\pi\rho(r)r^2$, where $\rho(r)$ is the mass density, then gives \beq \langle \theta\rangle \sim -\frac{\pi^2\langle r_0\rangle G}{2\gamma v^2}\int_0^\infty \rho(r) dr.
\label{estimate} \eeq The density in the Sun falls very approximately as $\rho(r) = \rho_c(1-r/R^*)$, where $\rho_c$ is the central density, and $R^*\sim 0.3R_\odot $. From the solar model \cite{Bahcall}, $\int dr \rho(r) \simeq 3.6 M_\odot/R_\odot^2$, so that \beq \langle \theta\rangle \sim - \left\{\frac{3\pi}{16}\frac{\langle r_0\rangle}{R_\odot}\frac{R^*}{R_\odot}\frac{\rho_c}{\bar\rho} \right\}\frac{GM_\odot}{\gamma v^2 R_\odot} \simeq -\frac{2.0}{\gamma v^2}\frac{GM_\odot}{ R_\odot}, \label{estimate2} \eeq where $\bar \rho$ is the average solar mass density. This estimate is valid to leading order in $b$; the 20\% difference from the numerical result (\ref{numerical}) arises from negative corrections of relative order $-2(b/R^*)^2\ln (R^*/b)$. A similar calculation can be carried out for neutrinos emitted from a neutron star or supernova. The characteristic helicity rotation is $\sim GM/\gamma R$, which for 10 MeV scale neutrinos is negligible compared with the magnetic rotation produced even by a neutrino magnetic moment of order that estimated in the standard model \cite{numag}. \begin{figure}[t] \includegraphics*[width=9.0cm]{nugrav_fig4.eps} \caption{Comparison of the root mean square bending angle $\sqrt{\langle \theta^2\rangle}$ of the spin of a primordial neutrino with respect to its momentum from gravitational vs. magnetic effects, as a function of the neutrino mass. All curves are calculated for the neutrino momentum equal to the temperature. The middle curve shows the results of Eq.~(\ref{spinbend10}) for the gravitational bending, for both a Dirac and a Majorana neutrino.
The upper and lower curves are the bending expected from the interaction of a Dirac neutrino magnetic moment, $\mu_\nu$, with a characteristic galactic magnetic field, $\sim 10 \mu$G, for the standard model estimate \cite{fujikawa} of $\mu_\nu$ (lower curve) \blueflag{with $m_\nu = 10^{-2}$ eV}, and for a magnetic moment $10^{-14} \mu_B$, three orders of magnitude below that which would explain the XENON1T low energy electron events \cite{xenon1t} (upper curve).} \label{grav-mag} \end{figure} \section{Implications \label{conclusion}} Gravitational perturbations act equally on Dirac and Majorana neutrinos. As relic left-handed Dirac neutrinos are flipped to right-handed, an equal number of right-handed antineutrinos are flipped to left-handed, and since particles and antiparticles are distinguishable, one could in principle see the depletion experimentally. On the other hand, if neutrinos are Majorana, the reduction in left-handed neutrinos would not be observable, since the produced left-handed antineutrinos could not be distinguished experimentally from left-handed neutrinos. An initially negative helicity relic neutrino, after travelling past the gravitational inhomogeneities in the universe, would have a probability now of being measured with positive helicity, $P_f=\langle \sin^2(\theta/2)\rangle$. For a presently relativistic neutrino, with mass less than $10^{-4}$ eV, the flipping probability is $\sim 6\times 10^{-7}$. Since the heaviest neutrino has a mass of at least 50 meV \cite{masses}, scattering from density fluctuations should lead, as one sees from Fig.~\ref{nugrav_f}, to a population of right-handed relic neutrinos and left-handed relic antineutrinos approaching one in $10^5$. This effect is too small to be seen in planned experiments to detect relic neutrinos \cite{ptolemy,long} via the inverse tritium decay reaction \cite{weinberg}, but it is not beyond the range of eventual measurability.
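The one-in-$10^5$ scale for the heaviest neutrino can be reproduced from the slow-neutrino limit, Eq.~(\ref{thetamde}), together with $P_f \simeq \langle\theta^2\rangle/4$ and the equality of $\theta$ and $\Delta\theta_p$ for non-relativistic neutrinos noted above. A sketch (assuming $p_0$ equal to the present neutrino temperature, taken as $1.68\times 10^{-4}$ eV):

```python
import math

P_H0_CUBED = 2.69e-6      # cal-P * H_0^3, as quoted in the text
T_NU_EV = 1.68e-4         # present neutrino temperature (~1.95 K) in eV; p_0 = T assumed

def flip_fraction_slow(m_nu_ev, p0_ev=T_NU_EV):
    """P_f ~ <theta^2>/4, with <theta^2> from the slow-neutrino limit,
    Eq. (thetamde): (2.7/8 pi) P H0^3 (m_nu/p_0). Valid for m_nu >> p_0."""
    theta_sq = (2.7 / (8.0 * math.pi)) * P_H0_CUBED * m_nu_ev / p0_ev
    return theta_sq / 4.0
```

For $m_\nu = 50$ meV this gives a flipped population of order $10^{-5}$, the scale quoted in the text.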
Earlier \cite{numag}, we estimated that the bending of the spin of a Dirac neutrino with a diagonal magnetic moment $\mu_\nu$, as it travels through a galaxy, is of order \beq \langle \theta^2\rangle_{g} \simeq \left(\frac{\mu_\nu B_g }{v}\right)^2 \ell_g \Lambda_g, \label{galrot} \eeq where $B_g$ is the average galactic magnetic field, $\ell_g$ is a mean crossing distance of the galaxy, $\Lambda_g$ is the characteristic coherence length of the field, and $\mu_B$ is the Bohr magneton. Unlike gravitational spin bending, the spins of Majorana neutrinos would not be affected by magnetic fields since Majorana neutrinos can have only transition magnetic moments, and the interactions with slowly varying astrophysical magnetic fields cannot change the neutrino mass. Equations~(\ref{thetamde}) and (\ref{galrot}) indicate that the scale of spin bending of a non-relativistic thermal neutrino of mass $m_\nu = 10^{-2}$ eV by density fluctuations is comparable to that produced by a galactic magnetic field $\sim 10 \mu$G, with $\Lambda_g \sim$ 1 kpc and $\ell_g \sim$ 16 kpc, if the neutrino has a magnetic moment $\mu_\nu \sim 5\times10^{-18}\mu_B$. As we see in Fig.~\ref{grav-mag}, the scale of gravitational bending of a neutrino spin with respect to its momentum is well above the magnetic bending produced by the standard model estimate of the magnetic moment \cite{marciano,benlee,fujikawa}, $\sim 3\times 10^{-21} m_{-2}\mu_B$, \blueflag{where $m_{-2}$ is the neutrino mass in units of $10^{-2}$ eV}, but well below that produced by a magnetic moment $1.4-2.9 \times10^{-11}\mu_B$ that would explain the excess of low energy electron events in the XENON1T experiment \cite{xenon1t}. \blueflag{See the discussion in Ref.~\cite{numag}.} Quite generally, neutrino helicity modification, although not measurable by current experiment, is a potentially important probe of cosmic gravitational fields, as well as the interiors of compact objects including the sun, neutron stars, and supernovae.
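The comparison between Eqs.~(\ref{thetamde}) and (\ref{galrot}) can be sketched numerically in natural units ($\hbar=c=1$). The conversion factors and field values below are assumptions drawn from the text ($B_g \sim 10\,\mu$G, $\ell_g \sim 16$ kpc, $\Lambda_g \sim 1$ kpc, $m_\nu = 10^{-2}$ eV, and $p_0$ equal to the present neutrino temperature):

```python
import math

MU_B_EV_PER_T = 5.788e-5     # Bohr magneton in eV/Tesla
M_PER_INV_EV = 1.973e-7      # 1 eV^-1 in metres (hbar c)
KPC_IN_M = 3.086e19

kpc_inv_ev = KPC_IN_M / M_PER_INV_EV      # 1 kpc expressed in eV^-1

def theta_sq_galactic(mu_nu_in_mu_B, B_tesla, v, ell_kpc=16.0, lam_kpc=1.0):
    """Eq. (galrot): <theta^2>_g = (mu_nu B_g / v)^2 * ell_g * Lambda_g."""
    mu_B_energy = mu_nu_in_mu_B * MU_B_EV_PER_T * B_tesla    # mu_nu B_g in eV
    return (mu_B_energy / v) ** 2 * (ell_kpc * kpc_inv_ev) * (lam_kpc * kpc_inv_ev)

# thermal neutrino, m = 1e-2 eV, p0 = T_nu = 1.68e-4 eV -> v ~ p0/m
v = 1.68e-4 / 1e-2
mag = theta_sq_galactic(5e-18, 1e-9, v)    # 10 microgauss = 1e-9 Tesla

# gravitational spin bending for the same neutrino, Eq. (thetamde)
grav = (2.7 / (8.0 * math.pi)) * 2.69e-6 * (1e-2 / 1.68e-4)
```

With these inputs the two mean square angles come out within roughly an order of magnitude of each other, consistent with the "comparable scale" statement above.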
\acknowledgments This research was supported in part by NSF Grant PHY18-22502. We thank Jessie Shelton, Gil Holder, Stu Shapiro, and Michael Turner for helpful discussions. \begin{appendix} \section{Bending of momenta and spins in weak gravitational fields} \label{gr} In this Appendix we summarize the derivations of Eqs.~(\ref{mombend}) and (\ref{spinbend}) for the bending of the momentum and spin in a weak gravitational potential, including the expansion of the universe in the metric, Eq.~(\ref{metricphi}). The equation of motion of a particle with proper velocity $U^\mu\equiv dx^\mu/d\tau$, where $\tau$ is the proper time of the particle, propagating through a general gravitational field, is given by the geodesic equation, \beq \frac{dU^\mu}{d\tau} +\Gamma^{\mu}_{\alpha\beta}U^\alpha U^\beta = 0, \label{geodes} \eeq where $\Gamma^{\mu}_{\alpha\beta} = \frac12 g^{\mu\nu}\left(\partial_\beta g_{\nu\alpha}+ \partial_\alpha g_{\nu\beta} -\partial_\nu g_{\alpha\beta}\right)$ is the affine connection. Using the explicit components of the affine connection for the metric (\ref{metricphi}),\footnote{ The non-vanishing components of the affine connection are $\Gamma^i_{00} =\Gamma^0_{i0}= \nabla_i\Phi$, $\Gamma^i_{jk} =-\nabla_k\Phi \delta^i_j -\nabla_j\Phi \delta^i_k +\nabla_i\Phi\,\delta_{jk}$, $\Gamma^0_{00} = a^{-1}da/dx^0$, $\Gamma^i_{j0} =\Gamma^i_{0j} = \delta^i_j a^{-1}da/dx^0$, and $\Gamma^0_{ij} = \delta_{ij}(1-4\Phi)a^{-1}da/dx^0 $. } we see that the spatial velocity, $U^i$, obeys \beq \frac{dU^i}{d\tau} = -\nabla_i \Phi\left((U^0)^2 +(\vec U\,)^2\right) + 2U^i (\vec U\cdot\vec\nabla)\Phi \nonumber\\ -\frac{2}{a}\frac{da}{d\tau}U^iU^0. \label{dUdtau} \eeq For acceleration along $\vec U$, the second term on the first line changes the $(U^0)^2 +(\vec U\,)^2$ to $(U^0)^2 -(\vec U\,)^2$, which equals $1/a^2$ to zeroth order in $\Phi$; thus $d(a^2U^i)/d\tau = -\nabla^i\Phi$ along $\vec U$.
The four-momentum $p_\mu = mg_{\mu\nu}U^\nu$ in general obeys \beq \frac{dp_\mu}{d\tau}&=& m\frac{dg_{\mu\nu}}{d\tau}U^\nu+mg_{\mu\nu}\frac{dU^\nu}{d\tau} \nonumber\\ &=& \frac{m}2 \left(\partial_\mu g_{\alpha\beta}\right) U^\alpha U^\beta, \eeq where to find the second line we use $dA/d\tau = U^\mu \partial A/\partial x^\mu$, for a function $A$, as well as the geodesic equation combined with the definition of the affine connection. In the weak field metric with expansion (\ref{metricphi}), the spatial momentum $p_i$ thus obeys \beq \frac{dp_i}{d\tau}&=& \frac{m}{2}\left(\partial_i g_{\alpha\beta}\right) U^\alpha U^\beta = -ma^2\nabla_i\Phi ((U^0)^2 +(\vec U\,)^2). \nonumber\\ \eeq Since $dt/d\tau = \gamma$ to zeroth order in $\Phi$, we find, with expansion, \beq \frac{1}{|\vec p\,|}\frac{d\vec p\,}{dt} = -\left(\frac{1}{v} +v\right)\vec\nabla \Phi, \label{accel} \eeq where $\vec v=d\vec x/du$. Equation~(\ref{mombend}) follows immediately. Similarly, in the metric (\ref{metricphi}) [by definition, $p_0<0$], \beq \frac{dp_0}{d\tau}&=& \frac{m}2 \left(\partial_0 g_{\alpha\beta}\right) U^\alpha U^\beta = -\frac{m}{a} \frac{\partial a}{\partial x^0}, \eeq since $ g_{\alpha\beta} U^\alpha U^\beta = -1$. Thus $p_0a$ is conserved. We turn now to spin precession.\footnote{The spin motion was earlier analyzed for a general static metric in Ref.~\cite{voronov} in terms of the tetrad formalism, and for a Dirac particle in Ref.~\cite{silenko} using a Foldy-Wouthuysen transformation of the Dirac equation.} The helicity is defined in terms of the spin, $\vec S$, in the local Lorentz frame at rest with respect to the particle. In this frame $S^0 \equiv 0$. To determine the equation of motion for $\vec S$, we begin with the spin $\tilde S^\mu$ in the local Lorentz frame at rest in the ``lab," which obeys the normalization condition, ${\tilde S}_\mu{\tilde S}^\mu = \vec S\,^2$, and relate $\tilde S^\mu$ to the spin in the weak field metric, denoted here by $\Sigma^\mu$.
The normalization condition on ${\Sigma}^\mu$ is \beq {\Sigma}_\mu{\Sigma}^\mu &=& -a^2(1+2\Phi) ({\Sigma}^0)^2 +a^2 (1-2\Phi) \vec {\Sigma}^2=\vec S\,^2. \nonumber\\ \label{smusmu} \eeq Thus to first order in $\Phi$, \beq \tilde S^i = a(1-\Phi){\Sigma}^i,\quad \tilde S^0 =a (1+\Phi) {\Sigma}^0. \label{SSPhi} \eeq In addition, ${\Sigma}_\mu U^\mu = 0$, to guarantee that the spin in the particle rest frame has no time component. The particle spin in the weak field metric obeys the geodesic equation \beq \frac{d{\Sigma}^\mu}{d\tau} +\Gamma^{\mu}_{\alpha\beta}{\Sigma}^\alpha U^\beta = 0, \label{geodespin} \eeq and thus \beq \frac{d\vec{\Sigma}}{d\tau} &=& -2\vec\nabla\Phi (\vec{\Sigma}\cdot \vec U) + (\vec U\cdot\vec\nabla\Phi) \vec { \Sigma} + (\vec {\Sigma}\cdot\vec\nabla\Phi) \vec U \nonumber\\ && -\frac{1}{a}\frac{da}{dx^0}(U^0 \vec {\Sigma} + {\Sigma}^0 \vec U). \label{calspini} \eeq Equation~(\ref{SSPhi}) implies that to order $\Phi$ the component of the equation of motion of $\vec{\tilde S}$ transverse to $\vec U$ obeys \beq \frac{d\vec{\tilde S}}{d\tau}\Big|_\perp = \frac{d}{d\tau}\left(a(1-\Phi)\vec{\Sigma}\right)\Big|_\perp& =& -2\vec\nabla_\perp\Phi (\vec{\tilde S}\cdot \vec U). \nonumber\\ \label{spini} \eeq Equivalently, $d\vec{\tilde S}/dt |_\perp = -2\vec\nabla_\perp\Phi (\vec{\tilde S}\cdot \vec v)$, which combined with Eq.~(\ref{accel}) shows that for a massless particle, the spin direction in the lab Lorentz frame remains parallel (or anti-parallel) to the momentum. At this stage we transform back to the local Lorentz frame at rest with respect to the particle. Since $S^0\equiv 0$, the spins in the two Lorentz frames are related by, \beq \vec{\tilde S} = \vec S + (\tilde\gamma -1) \hat v (\hat v\cdot\vec S), \eeq where $\tilde\gamma = (1-\tilde v^2)^{-1/2}$, with the velocity difference of the two Lorentz frames given by $\vec{\tilde v} = [(1+\Phi)/(1-\Phi)] \vec v$. 
In components parallel and perpendicular to $\vec v$, $\tilde S_\perp = S_\perp$, and $\tilde S_\parallel = \gamma S_\parallel$. Thus \beq \frac{d\vec{S}}{d\tau}\Big|_\perp - \frac{d\vec {\tilde S}}{d\tau}\Big|_\perp &=& - (\tilde\gamma -1) (\hat v\cdot\vec S)\frac{d\hat v}{d\tau}\Big|_\perp. \eeq Since $d\hat v/d\tau$ is first order in $\Phi$, we can neglect the distinction between $\vec{\tilde v}$ and $\vec v$, and find \beq \frac{d\vec{S}}{d\tau}\Big|_\perp - \frac{d\vec {\tilde S}}{d\tau}\Big|_\perp &=& -\frac{\vec S\cdot \vec U}{(\gamma+1)} \frac{d\vec U}{d\tau}\Big|_\perp \nonumber\\ &=& \frac{1}{\gamma+1} \left(\vec S\times \left(\vec U \times \frac{d\vec U}{d\tau}\right)\right)_\perp. \nonumber\\ \eeq The latter term is simply the Thomas precession, at lab frequency $\omega_{\rm Th} = (\gamma^2/(\gamma +1)) \vec v \times \dot {\vec v}$, of an accelerated particle. With Eqs.~(\ref{spini}) and (\ref{dUdtau}) we then find \beq \frac{d\vec S}{d\tau}\Big|_\perp = -\frac{2\gamma+1}{\gamma+1} (\vec S\cdot \vec U) \vec\nabla_\perp \Phi, \eeq from which Eq.~(\ref{spinbend}) follows. Equivalently, \beq \frac{d\vec S}{dt}\Big|_\perp = \frac{2\gamma+1}{\gamma+1}\left(\vec S\times(\vec v \times \vec\nabla \Phi)\right)\Big|_\perp, \label{spinrot} \eeq indicating that the spin feels an effective velocity-dependent torque $(\mu \vec B)_{\rm eff} = [(2\gamma+1)/2(\gamma+1)]\, \vec v \times \vec\nabla \Phi$. The non-relativistic limit of this equation gives Schiff's result for precession of a spin in the Gravity Probe B experiment \cite{schiff} (see also Ref.~\cite{weinbergGR}), while in the fully relativistic limit, $\gamma\to\infty$, the spin remains at the same angle with respect to the momentum. \section{Gravitational spin rotation of neutrinos emitted from a spherical body} We detail here the calculation of the relative spin rotation of neutrinos emitted from a spherical star, applicable to solar neutrinos as well as neutrinos from supernovae and neutron stars.
We first convert the $z$ integral in Eq.~(\ref{solbend}), with $z_0$ set to 0, to an integral over $r$, so that \beq \gamma v^2 \theta(b) &=& -b\int_{b}^\infty dr \, \frac{G M(r)}{r^2\sqrt{r^2-b^2}}. \label{solbend11} \eeq Then we average the neutrino emission over the stellar volume with a spherically symmetric normalized spatial emission probability $p_\nu(r_0)d^3r_0$, in terms of cylindrical coordinates ($d^3r_0 = 2\pi b db \, dz$), \beq \langle \theta(b) \rangle &=&\int 2\pi b db \, dz\, \int dr_0 p_\nu(r_0)\delta\left(r_0-\sqrt{b^2+z^2}\right) \theta(b),\nonumber\\ &=& \int_0^{R_\odot} 4\pi r_0 dr_0 p_\nu(r_0) \int_0^{r_0} \frac{b\,db}{\sqrt{r_0^2-b^2}} \theta(b), \eeq where in the first line the ranges of the $b$ and $z$ integrals are constrained by the delta function. Thus \beq \gamma v^2 \langle\theta\rangle &=& - \int_0^{R_\odot} 4\pi r_0 dr_0 p_\nu(r_0) \int_0^{r_0} \frac{b^2db}{\sqrt{r_0^2-b^2}}\nonumber\\ &&\hspace{24pt}\times\int_{b}^\infty dr \, \frac{G M(r)}{r^2\sqrt{r^2-b^2}}. \label{solbend2} \eeq Interchanging the order of the $r$ and $b$ integrals, we see that the double integral is equivalent to \beq && \int_0^\infty dr\frac{GM(r)}{r^2} f(r,r_0), \eeq where \beq f(r,r_0) = \Theta(r_0-r) r W(r_0/r) + \Theta(r-r_0) r_0 W(r/r_0) \nonumber \eeq with \beq W(\xi) &= &\int_0^1 \frac{x^2\,dx}{\sqrt{1-x^2}\sqrt{\xi^2-x^2}} \nonumber\\ &=& \int_0^1 dx\frac{\sqrt{1-x^2}}{\sqrt{\xi^2-1+x^2}}, \quad \xi>1. \eeq Equation~(\ref{solbend3}) follows directly. \end{appendix}
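As a numerical check on the reduction above, the two forms of $W(\xi)$ agree (the first becomes regular under $x=\sin\phi$), and $W$ approaches its large-$\xi$ asymptote $\pi/4\xi$, which is not quoted in the text but is immediate from the definition. A sketch:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

def W(xi):
    # second (regular) form: int_0^1 sqrt(1-x^2)/sqrt(xi^2 - 1 + x^2) dx, xi > 1
    return simpson(lambda x: math.sqrt(1 - x * x) / math.sqrt(xi * xi - 1 + x * x),
                   0.0, 1.0)

def W_first_form(xi):
    # first form, with x = sin(phi) absorbing the 1/sqrt(1-x^2) endpoint:
    # int_0^{pi/2} sin^2(phi)/sqrt(xi^2 - sin^2(phi)) dphi
    return simpson(lambda p: math.sin(p) ** 2 / math.sqrt(xi * xi - math.sin(p) ** 2),
                   0.0, math.pi / 2)
```

Both parametrizations converge to the same value, and $W$ decreases monotonically with $\xi$, so distant mass shells contribute little to the helicity rotation.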
\section{Introduction} Rapid acceleration in urbanization and intensive population growth have elevated the global demand for resources, making trade networks vital to cope with changing markets \cite{chen2018global,wang2020mapping}. Recent advances in critical infrastructure, together with new trade policies, have facilitated the growth of the trade network and made it more interconnected globally \cite{porkka2013food, ismail2015impact,rehman2020does}. While the international trade network provides economic leverage to participating nations, it also poses an economic and financial threat to highly interconnected countries at times of economic shock due to external stresses, namely natural disasters and global pandemics \cite{bems2010demand,korniyenko2017assessing,min2018correlated,osberghaus2019effects,gomez2020fragility}. A few examples of such disruptions to the international trade network are the Thailand flood (2011) \cite{korniyenko2017assessing}, the Japan earthquake and nuclear disaster (2011) \cite{korniyenko2017assessing, hamano2020natural}, Hurricane Harvey in the United States (2017) \cite{botzen2019economic}, and the COVID-19 pandemic (2019) \cite{verschuur2021observed}. Moreover, the sensitivity of international trade to exchange rates \cite{oguro2008trade}, transportation costs and delays \cite{mancuso2020export}, culture \cite{fidrmuc2016foreign}, and geopolitics \cite{kumar2020india} makes the network fragile, which can imperil a nation's food security and impair its economic condition. The fragility of the international trade network highlights the importance of a localized (interstate) trade network within a country, which can reduce the impacts of external shocks and minimize the damage, especially in developing nations. The complex network approach has garnered considerable attention from the scientific community in understanding the structural and dynamic behavior of networks in disparate domains \cite{xiao2017complex}.
In the case of trade networks, complex network analysis helps us quantify the underlying complexity of trade \cite{maluck2015network, wang2020mapping}. Previous work on trade networks has mainly focused on the network at an international scale \cite{park2005recent,garlaschelli2005structure,shutters2012agricultural,lee2013applications,gao2015features,maluck2015network, xiao2017complex,wang2020mapping,herman2021modeling} or the interaction of a single commodity among and within nations \cite{gephart2015structure,du2017complex,ren2020spatiotemporal,wang2019evolution,wang2020mapping,li2021global,geng2014dynamic}. However, studies on the domestic exchange of resources in trade networks seldom exist, especially in developing countries. A significant challenge in quantifying and analyzing the domestic trade network in developing nations is the paucity of data on the trade network. Understanding the intertwined domestic trade networks helps to identify the decentralization of the supply chain system and increase efficiency in addressing the population's needs. Furthermore, analyzing the evolution of trade networks is crucial to capture the uncertainty and fluctuation associated with trading over temporal and spatial scales. In this study, we use the complex network lens to understand the evolution of the Domestic Interstate Trade Network (DITN) across India. For our analysis, we select India's domestic trade network, encompassing a vast array of commodities (both agricultural and non-agricultural) traded through railways. India is the world's 6\textsuperscript{th} largest economy, with a nominal Gross Domestic Product (GDP) of US\$ 2.94 trillion \cite{worldeco}. The nation is the nexus of the trade network among South Asian countries owing to its diverse resources and an ideal geopolitical location \cite{kumar2020india}.
India, being a land of diverse climatic, cultural, and socio-economic characteristics, strongly relies on interstate trade to meet the demand for raw materials for various industries and the requirements of residents. As a densely populated country, India needs food and economic security, which can be achieved through enhancing internal trade by understanding the trade evolution of agricultural and non-agricultural commodities \cite{martin2017agricultural,erokhin2019handbook,xi2019impact,hu2020characteristics}. We quantify the topological features of the network to understand the structural characteristics of trade across two major sectors comprising agricultural and non-agricultural commodities. We examine the role of different states in the trade and their contribution to imports and exports of these two types of commodities over the temporal window of 2010 to 2018. Finally, to quantify the nature of domestic trade and the dynamic relation of imports and exports in both agricultural and non-agricultural trade, we analyze the inward and outward movement of commodities over spatio-temporal scales. Our study offers new insights into the spatial and temporal interaction of commodities over a regional scale and into the hot spots of the trade network, which can further improve trade policies \cite{sajedianfard2021quantitative} and is crucial for devising resilience and recovery strategies. We organize the rest of the manuscript as follows: in Section 2, we discuss the data sets used in the study, present the construction of the DITN and the topological characteristics of the network, and describe how the evolution of the trade network is assessed over spatial and temporal scales for the two classes of commodities; Section 3 presents the results; finally, in Section 4, we discuss the spatio-temporal behavior of interstate trade networks.
\section{Methods} \label{section:Methods} \subsection{Data and Network construction} We obtain the data on interstate movement of resources for 2010-2018 from the Directorate General of Commercial Intelligence Statistics (DGCIS), Government of India \cite{data2021}. The DGCIS database provides highly disaggregated trade data, decomposed into seventy commodities across the twenty-nine states, traded through the rail network. We consider railways as the mode of transport for trade because it is the preferred mode for transporting bulk substances \cite{mukherjee2004trade}. The traded flow of resources between two states is given in quintals. This is the first study of its kind in which the commodity data are harmonized and classified into two main categories, agriculture and non-agriculture, to understand trade transfer at an aggregated level. Table \ref{tb:Table 1} shows the sorting of the different commodities into the two categories. \begin{table}[ht] \begin{center} \caption{Classification of commodities} \label{tb:Table 1} \begin{tabular}{|p{3cm}||p{12cm}|} \hline Category & Commodity \\ \hline Agriculture & rice, gram and gram products, pulses, wheat and wheat products, jowar, bajra, oilseeds, cotton, fruits, vegetables, sugarcane, fodder and husk, jute, rubber, tea, coffee, spices, livestock and products. \\ \hline Non-Agriculture & metal ores, coal, coke, cement, bricks, iron steel, soap, salt, paper, paint, varnish, chemicals, alcohols, metal products, electrical goods, caustic, potash, soda, transport equipment, machinery and tools, fertilizers, gases and other commodities. \\ \hline \end{tabular} \end{center} \end{table} We aggregated the harmonized commodity-specific trade matrices to obtain the total annual agricultural and non-agricultural trade volumes across the states for nine consecutive years (2010--2018). The harmonized data summarize the resource flows between states and can be represented as a directed and weighted DITN.
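The harmonization step described above can be sketched in Python with pandas. The record layout, state codes, and volumes below are illustrative placeholders rather than actual DGCIS data, and only a small subset of the Table 1 commodity list is shown.

```python
import pandas as pd

# Subset of the Table 1 agriculture list; everything else maps to non-agriculture.
AGRI = {"rice", "wheat and wheat products", "pulses", "oilseeds", "cotton"}

# Hypothetical record layout: one row per (year, origin, destination, commodity, quintals).
records = pd.DataFrame([
    ("2010", "PN", "TN", "rice", 1200.0),
    ("2010", "OD", "WB", "coal", 5400.0),
    ("2010", "HR", "MH", "wheat and wheat products", 800.0),
], columns=["year", "src", "dst", "commodity", "quintals"])

# Harmonize: map each commodity to one of the two categories of Table 1.
records["category"] = records["commodity"].map(
    lambda c: "agriculture" if c in AGRI else "non-agriculture")

# Aggregate to annual category-level trade matrices (state pairs summed).
flows = (records.groupby(["year", "category", "src", "dst"], as_index=False)
                ["quintals"].sum())
print(flows)
```

Each row of `flows` is then one weighted directed link of the annual agriculture or non-agriculture DITN.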
In the network, node $i$ represents a state in India. The link $l_{ij}$ represents the flow of resources between states $i$ and $j$. The direction of flow ($i \rightarrow j$) indicates the relationship between the exporter ($i$) and the importer ($j$). After constructing the links, a weight $w_{ij}$ (trade volume) is attributed to each link. In this study, the trade volume is considered in physical units (weight) to represent the flow between two states. Thus we construct a weighted directed trade network for each year with $N$=36 nodes, representing the 28 Indian states and 8 Union territories, which have pre-defined political boundaries. We analyze the DITN primarily through two aspects: the topology of the trade network and the evolutionary characteristics of interstate trade patterns. \subsection{Topological characteristics of the network} We consider multiple metrics to characterise the topological characteristics of the network. These are briefly described below. \subsubsection{Adjacency matrix}\hfill The adjacency matrix is a concise representation of a network \cite{noel2005understanding, barabasi2013network}. It is a matrix of size $N \times N$ with each element $A_{ij}$=$w$ representing the weight between nodes $i$ and $j$ if a link exists between them; otherwise $A_{ij}$=0. The adjacency matrix of an undirected network is symmetric. \begin{equation}\label{eq:1} \mathbf{A}_{i j}= \begin{cases}w, & \text { if }(i\neq j) \hspace{3em} \text {such that} \hspace{1em}w = \{ x:x\in \mathbb{R}^{+}\} \\ 0, & \text { if } (i= j)\end{cases} \end{equation} The directed and weighted interstate trade network is represented by the non-symmetric adjacency matrix $A$, with each element $A_{ij}$ depicting the trade transfer between a pair of states $i$ and $j$ in quintals, where in general $A_{ij} \neq A_{ji}$. \subsubsection{Degree of a node}\hfill The degree of a node ($k$) is a vital parameter expressing the structural property of a node in a complex network \cite{barabasi2013network}.
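As a sketch of how such a network can be built with NetworkX (the package used for our analysis, see Data and Code availability), the snippet below constructs a toy weighted directed trade network and extracts its non-symmetric adjacency matrix as in Equation 1. The state codes and volumes are illustrative only.

```python
import networkx as nx
import numpy as np

# Toy DITN links: (exporter, importer, traded volume in quintals).
edges = [("PN", "TN", 148.0), ("MP", "TN", 72.5),
         ("HR", "UP", 65.8), ("TN", "PN", 5.0)]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)  # the 'weight' attribute holds w_ij

# Non-symmetric adjacency matrix A: A_ij = w_ij if a link exists, else 0.
nodes = sorted(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes, weight="weight")

assert np.all(np.diag(A) == 0)   # no self-trade on the diagonal
assert not np.allclose(A, A.T)   # directed: A_ij != A_ji in general
```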
The node degree represents the total number of links connected to a given node. Directed networks have two degrees associated with each node, namely the in-degree and the out-degree, based on the inward and outward direction of links, respectively. The node's degree can be calculated from the adjacency matrix $A_{ij}$. The in-degree and out-degree of the states (nodes) in the interstate trade network are defined as (Equations~\ref{eq:2} and \ref{eq:3}): \begin{equation}\label{eq:2} k_i^{out} = \sum_{j=1}^n{A_{ij}}, \end{equation} \begin{equation}\label{eq:3} k_i^{in} = \sum_{j=1}^n{A_{ji}} \end{equation} where $k_i^{out}$ and $k_i^{in}$ represent the out-degree and in-degree of node $i$ (state) in the interstate trade network, i.e., the number of states importing from and exporting to that specific state. The average degree of a network depicts the average number of links per node. The network's average in-degree and average out-degree for directed interstate trade networks are equal (Equation~\ref{eq:4}): \begin{equation}\label{eq:4} <k_i^{in}> = \frac {1}{n}\sum_{i=1}^n{k_{i}^{in}}=<k_i^{out}>=\frac {1}{n}\sum_{i=1}^n{k_{i}^{out}} \end{equation} where $<k_i^{in}>$ and $<k_i^{out}>$ denote the average in-degree and average out-degree, respectively; they characterize, for example, the bulk export and import connectivity of the agriculture and non-agriculture interstate trade networks. \subsubsection{Strength centrality}\hfill A variation in trade volume on disparate links has a proportionate impact on the weighted trade network \cite{wang2020structure,wang2020mapping}. To capture this impact, the weighted degree is calculated, which is referred to as the node strength ($s_i$) \cite{barrat2004architecture}. In the weighted directed network, we use the in-strength and out-strength of a node to identify the major importers and exporters across the interstate trade network.
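A minimal illustration of Equations 2--4 on a toy network (the edges are illustrative, not real trade data); note that every directed link contributes exactly one in- and one out-connection, which is why the two averages coincide:

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([("PN", "TN", 148.0), ("MP", "TN", 72.5),
                           ("PN", "MH", 30.0), ("TN", "PN", 5.0)])

# Unweighted in-/out-degree: number of trading partners per state.
k_out = dict(G.out_degree())   # states a given state exports to
k_in = dict(G.in_degree())     # states a given state imports from

# Equation (4): average in-degree equals average out-degree.
n = G.number_of_nodes()
avg_in = sum(k_in.values()) / n
avg_out = sum(k_out.values()) / n
assert avg_in == avg_out
```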
It can be expressed as (Equations~\ref{eq:5} and \ref{eq:6}): \begin{equation}\label{eq:5} s_i^{in} = \sum_{j=1}^n{A_{ji}}{w_{ji}} \end{equation} \begin{equation}\label{eq:6} s_i^{out} = \sum_{j=1}^n{A_{ij}}{w_{ij}} \end{equation} where $s_i^{in}$ and $s_i^{out}$ denote the in-strength and out-strength of a state, and $w_{ij}$ is the traded volume of commodities from node $i$ to node $j$. \subsection{Analysis of interstate trade patterns} We use network theory to construct indicators that help analyze the overall characteristics, distribution patterns, and closeness among the trading states in the DITN. These are represented through the following metrics. \subsubsection{Network density}\hfill The network density ($D$) indicates how closely connected the nodes of a network are. In a weighted and directed network $G$ with $N$ nodes and $L$ links, the density of the network is calculated using Equation~\ref{eq:7} \cite{fischer1995national}. A higher network density depicts the richness of the DITN. \begin{equation}\label{eq:7} D=\frac{L}{N(N-1)} \end{equation} where $N(N-1)$ is the maximum possible number of connections in a network of size $N$. \subsubsection{Clustering coefficient}\hfill The clustering coefficient of the DITN is an indicator of the tightness among the participating states in the network {\cite{watts1998collective}}. It indicates the degree of connectedness between the neighbouring states connected to the same trading state. A large clustering coefficient reveals that the neighbours of a state are well connected through trading. The clustering coefficient in a directed network is calculated using Equation~\ref{eq:8}. \begin{equation}\label{eq:8} C_i=\frac{L_i}{k_i(k_i-1)} \end{equation} where $C_i$ is the clustering coefficient of the $i^{th}$ node, with a value between 0 and 1, and $L_i$ is the number of links between the $k_i$ neighbours of node $i$.
The degree of clustering of the whole network is captured by the average clustering coefficient $<C>$, calculated as: \begin{equation}\label{eq:9} <C>=\frac{1}{n}\sum_{i=1}^n{C_{i}} \end{equation} To analyse the evolution of the DITN for both agricultural and non-agricultural commodities, we use the above-mentioned topological properties of complex networks at both temporal and spatial scales. \section{Results} \subsection{Interstate trade network} In the DITN, we classify the network into agriculture and non-agriculture classes based on the types of commodities. Figure~\ref{fig:Fig.1} shows an overview of interstate trade in India. \begin{figure}[hbt!] \centering \includegraphics[width=0.93\textwidth,keepaspectratio]{Figure1.pdf} \caption{The chord diagrams show the average trade flows between different states in India. The links' width showcases the trade volume in quintals, and the links' colors correspond to the exporting regions. (a and b) depict the trade networks for Agricultural and Non-Agricultural commodities. (c and d) show the average exports and imports for Agricultural commodities, and (e and f) for Non-Agricultural commodities in the study area. Each state is represented by a two-letter code. All averages are calculated over the period 2010-2018.} \label{fig:Fig.1} \end{figure} \textbf {Agriculture trade:} Figure~\ref{fig:Fig.1}a depicts the average transfer of agricultural and animal products across the Indian states for 2010-2018. Each year, the aggregated volume of food flow is around 1.6--2.5 million quintals. The average exports and imports of agricultural commodities of the various states are presented in Figures~\ref{fig:Fig.1}c and \ref{fig:Fig.1}d. Table~\ref{tb:Table 2} lists the top five exporters and importers in the agriculture DITN along with the corresponding mean trade volumes (in quintals).
The North Indian states Punjab (PN), Haryana (HR), Uttar Pradesh (UP), and Madhya Pradesh (MP) are the key food suppliers in India, contributing more than 70\% of the agricultural product exports within India, whereas states like Tamil Nadu (TN), Maharashtra (MH), West Bengal (WB), Uttar Pradesh (UP), and Gujarat (GJ) receive 50\% of the annual agricultural product imports. \begin{table}[ht] \begin{center} \caption{Top five states with highest node strengths for overall agriculture trade and non-agriculture trade} \label{tb:Table 2} \begin{tabular}{ |p{2cm}|p{4cm}||p{2cm}|p{4cm}| } \hline \hfil $S_{(in)}$ & weight $(quintals \times 10^6 )$ & \hfil $S_{(out)}$ & \hfil weight $(quintals \times 10^6)$\\ \hline \multicolumn{4}{|c|}{Agriculture trade} \\ \hline \hfil TN & \hfil 60.73 & \hfil PN & \hfil 148\\ \hline \hfil MH & \hfil 56.66 & \hfil MP & \hfil 72.53\\ \hline \hfil WB & \hfil 46.77 & \hfil HR & \hfil 65.85\\ \hline \hfil UP & \hfil 45.15 & \hfil AP & \hfil 41.83\\ \hline \hfil GJ & \hfil 41.54 & \hfil UP & \hfil 34.84\\ \hline \multicolumn{4}{|c|}{Non-Agriculture trade} \\ \hline \hfil WB & \hfil 643.24 & \hfil OD & \hfil 1286.14\\ \hline \hfil MH & \hfil 612.04 & \hfil CH & \hfil 897.91\\ \hline \hfil AP & \hfil 477.45 & \hfil AP & \hfil 701.05\\ \hline \hfil KA & \hfil 468.13 & \hfil MP & \hfil 256.59\\ \hline \hfil UP & \hfil 440.24 & \hfil JH & \hfil 416.76\\ \hline \end{tabular} \end{center} \end{table} \textbf {Non-Agriculture trade:} The non-agricultural commodities mainly include infrastructure-supporting commodities like cement, mineral ores, and chemicals. The total trade volume of non-agricultural commodities in the DITN is around 45-60 million quintals, significantly larger than the trade in agricultural products. Figure~\ref{fig:Fig.1}b demonstrates the transfer of the major non-agricultural products across the Indian states for 2010-2018. Figures~\ref{fig:Fig.1}e and \ref{fig:Fig.1}f show the mean exports and imports of non-agricultural goods.
Chhattisgarh (CH) and Odisha (OD) contribute 35\% of the exports of total non-agriculture trade in the DITN, while WB and MH are the leading importers (Table \ref{tb:Table 2}). Summarizing the above, the northern Indian states Punjab (PN) and Haryana (HR) dominate the agriculture DITN, and the link between the south Indian states Tamil Nadu (TN) and Andhra Pradesh (AP) is the most prominent link for trade transfer, with an average trade volume of 2.03~$\times$~10$^8$~quintals~yr$^{-1}$, which accounts for 4.6\% of the total trade volume of the network (Figure~\ref{fig:Fig.1}a). In contrast to the agriculture trade pattern, the non-agriculture trade pattern does not exhibit a dominant state or link. The states considered for this analysis all participate significantly in transferring non-agricultural commodities, albeit unevenly. The flow of non-agricultural goods from CH to PN is the link with the highest trade transfer, with a mean annual trade volume of 4.07~$\times$~10$^7$~quintals~yr$^{-1}$, comprising 4.7\% of the total non-agriculture interstate trade (Figure~\ref{fig:Fig.1}b). \subsection{Temporal evolution of the interstate trade network} \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,keepaspectratio]{Figure2.pdf} \caption{Temporal structural evolution of total trade volume and network characteristics: (a) Total traded value (in $10^8$ quintals), (b) Average degree, (c) Network density, and (d) Average Clustering Coefficient for 2010-2018. Here, $\beta$ indicates the slope of a linear trend line fitted to the scattered points (values with * represent a significant trend at $p$-value $< 0.05$). The results are presented for the Agriculture (green) and Non-Agriculture (purple) interstate trade networks.} \label{fig:Fig.2} \end{figure} To understand the structural evolution of the DITN over the temporal window of 2010--2018, we use the topological parameters of a complex network as detailed above (see Section~\ref{section:Methods}).
Over the last decade, the number of nodes participating in agriculture (non-agriculture) trade increased from 26 (29) in 2010 to 28 (31) in 2018, suggesting that participation in interstate trading has moderately increased. In the case of links, the non-agriculture trade has a constant number of links, whereas the links in agriculture trade have substantially increased from 223 to 264 over the time frame 2010-2018. This signifies increasing connectivity for agricultural trade between participating states. Figure~\ref{fig:Fig.2}a describes the evolution of agriculture and non-agriculture interstate trade in India. The traded volumes of agriculture and non-agriculture trade increased rapidly, exhibiting a significant ($p$-value $< 0.05$; linear trend) evolution with time. The total traded volume increased by 86.6\% and 37.2\% for agriculture and non-agriculture trade, respectively. A strong trend ($\beta_{non-ag}= 2.8~\times~10^8$~quintals~yr$^{-1}$) is observed for the non-agriculture trade, whereas agricultural trade has gradually increased with a dip (16.7\%) in recent years. This decrease may mark the impact of the 2016-2018 droughts in India \cite{mishra2021unprecedented}. The topological properties of the network do not change significantly over the temporal evolution of trade. Figure~\ref{fig:Fig.2}b elaborates the change of the average degree over time for agriculture and non-agriculture trade. The average degree of non-agriculture trade remains unchanged, indicating that the network is stable despite an increase in overall trade volume. In agriculture trade, the slight decrease ($\beta_{ag}= -0.12$) in the average degree together with an increase in trade flow suggests that the network is moving towards self-reliance, under the assumption that the states have not opted for the exchange of goods by other modes of transport.
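The linear trends and significance criterion used throughout this section (the $\beta$ slopes and the $p$-value $< 0.05$ threshold of Figure 2) can be reproduced with \texttt{scipy.stats.linregress}. The annual totals below are hypothetical placeholders standing in for the real series obtained by summing the annual trade matrices.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical annual totals (10^8 quintals) for 2010-2018.
years = np.arange(2010, 2019)
total = np.array([20.1, 21.5, 23.8, 25.0, 27.9, 30.2, 31.1, 33.5, 34.8])

fit = linregress(years, total)
beta, p = fit.slope, fit.pvalue
trend_is_significant = p < 0.05   # criterion behind the '*' in Figure 2
```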
Note that in our analysis we consider only trade via railways, which is the preferred mode of transport for bulk substances, although neighboring states may prefer other transportation modes (e.g., roads). The network density (Figure~\ref{fig:Fig.2}c) displays a weak negative trend for agriculture trading ($\beta_{ag}= -0.003$), while in the case of non-agriculture trade it remains unchanged. Increasing total trade with no significant change in network degree depicts the diversification of commodities over the network's existing connections. Along with the increasing trend in non-agricultural commodities, the positive trend of the clustering coefficient shows that the connectedness of export lines is improving over time (Figure~\ref{fig:Fig.2}d). \subsection{Evolution of the core states' trade relations in the interstate trade network} To identify the influence and the changes of interstate trading for the different states, we analyse the spatial variation of the trend of imports and exports in the agriculture and non-agriculture trade networks. Figure~\ref{fig:Fig.3}a shows the spatial variation in exports of agricultural commodities. The major exporters of agricultural products, such as PN, HR, UP, MP and Delhi (DL), have shown a significant increase in exports. The states of Karnataka (KA), Nagaland (NL), Kerala (KL), and Assam (AS) displayed a strong negative trend in exporting agricultural products. In the case of imports of agricultural commodities, MP is the only state that reflects a negative trend together with increasing exports (Figure~\ref{fig:Fig.3}b). The number of states having a negative trend in the export of non-agricultural commodities is high compared to agricultural commodities (Figure~\ref{fig:Fig.3}c).
On the other hand, contrasting behavior is observed in the imports of non-agricultural commodities, as the number of states with positive trends in non-agriculture imports is larger than for imports of agricultural products (Figure~\ref{fig:Fig.3}d). Figures~\ref{fig:Fig.4}a and b present the evolution of the core states that have contributed significantly to the export of agricultural commodities. PN and HR show a steady evolution, as their trends for imports and exports have not changed significantly over time, making them more stable. The leading exporters of non-agricultural commodities, OD and CH, show an increase in both imports and exports (Figure~\ref{fig:Fig.4}c and d). This result can be attributed to these states being pioneers in processing metallurgy-dominated products, which requires high imports of raw non-agricultural products and allows the export of finished non-agricultural products \cite{industry2022}. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,keepaspectratio]{Figure3.pdf} \caption{The spatial variation in the trend of exports and imports of Agriculture (a \& b) and Non-agriculture (c \& d) commodities over the period of 2010-2018. All values are in million quintals.} \label{fig:Fig.3} \end{figure} \subsection{Dynamic relation of agriculture and non-agriculture trade in the interstate trade network} Finally, to understand the evolution and dynamic relation of agriculture and non-agriculture trade within the DITN, we analyze the in-degree and out-degree parameters with respect to their spatial and temporal variations. Figures~\ref{fig:Fig.5}a and b depict an increase in the number of outgoing and incoming connections over the analysed time frame. This is more prominent for the agriculture trade, which suggests that over this period the north-central belt of India has shown improvement in export trade connections.
The non-agriculture trade network, on the other hand, shows a clustering pattern in the export and import connections (Figure~\ref{fig:Fig.5}c and d), which suggests that non-agriculture trading is influenced by the states' geographic location. For example, Figure~\ref{fig:Fig.5}c shows that the central and east-central belt of India (including OD, CH, MP, and MH) exhibits an increase in export connections, while a cluster in the south-eastern part shows a positive trend in import connections. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,keepaspectratio]{Figure4.pdf} \caption{The temporal variation of exports and imports of the leading exporters of agriculture (Punjab and Haryana) and non-agriculture (Odisha and Chhattisgarh) commodities over the period 2010--2018. Also shown are the respective linear regression slopes ($\beta$).} \label{fig:Fig.4} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,keepaspectratio]{Figure5.pdf} \caption{Spatial and temporal variations of in-degree and out-degree. The study area showing the average out-degree and average in-degree for agriculture (a \& b) and non-agriculture (c \& d) commodities. All averages are calculated over the period 2010-2018.} \label{fig:Fig.5} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth,keepaspectratio]{Figure6.pdf} \caption{(A) Scatter plots of average in-degree and out-degree over nine years (2010-2018) for Indian states. $\beta$ indicates the slope of the linear regression line fitted to the scattered points, and $r$ indicates the Pearson correlation coefficient. (B) Scatter plots of the year-wise slope of in-degree versus out-degree. The size of and the numeral above each marker represent the respective correlation value. In all panels, the fitted regression line (dashed line) and the corresponding 95\% confidence band are shown.
The results in sub-panels (a) and (b) are for agriculture trade and non-agriculture trade, respectively.} \label{fig:Fig.6} \end{figure} Figure~\ref{fig:Fig.6}A shows the relation between imports and exports of agricultural and non-agricultural commodities over the 2010--2018 time frame, based on the average in-degree and out-degree. The distribution of the states' trade in agricultural products is sparse ($\beta=0.79$) and has a low correlation ($r=0.5$), indicating that there are no major hot-spot states governing the connections in the agriculture trade network (Figure~\ref{fig:Fig.6}A-a). In the non-agriculture trade, however, the states with many export connections also have many import connections ($\beta=1.16$ and $r=0.87$), making them strongly connected in the respective trade network and hot-spots for trading (Figure~\ref{fig:Fig.6}A-b). Figure~\ref{fig:Fig.6}B indicates the annual evolution of the network with respect to its import and export connections, represented here by the corresponding regression slope between the in-degree and out-degree of the network. The agriculture interstate trade network shows a negative trend ($\beta= -0.026$) in connections and also has low correlation values over the time frame. This indicates that the states do not depend on hot-spot states and are becoming more self-reliant over the years through diversification, possibly to reduce future food risk (Figure~\ref{fig:Fig.6}B-a). In contrast, the non-agriculture interstate trade network has maintained a slight positive trend in making more connections over the years. In general, with an overall high correlation value throughout the temporal window, the non-agriculture trade network appears to be highly connected and dependent on the hot-spots, making it more vulnerable to disruption (Figure~\ref{fig:Fig.6}B-b).
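The slope $\beta$ and Pearson correlation $r$ underlying Figure 6 can be computed as follows; the in-/out-degree values below are hypothetical stand-ins for the nine-year state averages, not the actual measurements.

```python
import numpy as np
from scipy.stats import linregress, pearsonr

# Hypothetical average out-degree (x) and in-degree (y) per state.
k_out = np.array([2.0, 4.0, 5.0, 7.0, 9.0, 11.0, 12.0, 15.0])
k_in = np.array([3.0, 3.5, 6.0, 8.0, 8.5, 12.5, 13.0, 16.5])

beta = linregress(k_out, k_in).slope   # slope of the fitted line
r, _ = pearsonr(k_out, k_in)           # Pearson correlation coefficient

# A slope near 1 with high r marks hot-spot states: states with many
# export connections also have many import connections, as observed
# for the non-agriculture DITN.
```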
\section{Discussion and Concluding Remarks} Advancements in transport infrastructure and liberal trading policies have enhanced spatio-temporal imports and exports at an international scale \cite{gani2017logistics,IMF2022}. Communities have access to resources they cannot produce and can procure commodities through trade, leading to an over-dependence on global trade. However, rising economic instability, disasters, and geopolitical situations infuse fragility into the international trade network \cite{puma2015assessing, kumar2020india}. Thus, maintaining a proper domestic interstate trade network is essential to reduce the impacts of such external shocks. This study evaluates the evolutionary characteristics of the agriculture and non-agriculture Domestic Interstate Trade Network from 2010 to 2018 through a complex network approach. The key contributions of the study, going beyond previous studies \cite{sharma1997environmental, kumar2011export, shutters2012agricultural, maluck2015network, nag2016emerging, thomas2019competitiveness, wang2020mapping,katyaini2021water}, are to explore the dynamics and nuances of the DITN and the spatio-temporal evolution of agriculture versus non-agriculture trade, which provides crucial insights for authorities and policymakers to manage interstate trading. While international trade networks and single-commodity global trade networks have received significant attention \cite{smith1992structure, gephart2015structure, del2017trends, chen2018global, wang2019evolution, pacini2021network}, single-commodity interstate trade networks have rarely been the focus \cite{rioux2019economic, tintelnot2018trade,dhyne2021trade,chen2022china}. Understanding the evolution of interstate trade networks of multiple commodities can help us gain insights into a nation's self-sufficiency and identify the hubs that are most critical to sustaining the supply chain.
Here, we have assessed how interstate trade networks of diverse resources have evolved across India over the last decade. Our results demonstrate that non-agriculture and agriculture trade behave differently. In the case of the non-agriculture DITN, no significant change in network density, together with an increase in the clustering coefficient, signifies an improvement in connectedness over time, and the major trading states are relatively close along the existing trading routes. In contrast, the agriculture DITN, with a modest decrease in network density and clustering coefficient alongside increased trade, indicates trade moving towards self-reliance. This finding can be attributed to the quick recovery of trade, enabled by the self-reliant agriculture trade network, during external shocks and pandemic situations when interstate movements were minimal. Our analysis enables us to quantify the contribution and evolution of core nodes and trade connections over a temporal window in the agriculture and non-agriculture DITN. We note that imports of non-agricultural commodities show more positive spatial growth than agricultural commodities, owing to economic growth in the sector, which enhanced the strength of the existing trade linkages \cite{Nitiayog2002, budget2017india,Nitiayog2017}. This signifies a growing dependence of the network on non-agriculture exporting states over time, whereas in agriculture trade, export growth is limited to the northern belt of India. For imports of non-agricultural commodities, the connections and their evolution show a clustering pattern that depicts the geographic influence on trading. Non-agriculture trade is moving towards over-dependency on the core states; on the contrary, agriculture trade is less dependent on core states. To strengthen the interpretation of our analysis, future work can address the following limitations. (a) We consider only the commodities transferred through railways.
Although most commodities are transferred through railways, considering commodities transported by air, water, and road would make the analysis more robust. (b) Here we have considered harmonized data of agricultural and non-agricultural commodities. However, studying individual commodity transfers (i.e., rice, wheat, coal, and metal products) and their evolution could bring new insights. Trade globalization has increased food resilience and water security, while it has also enhanced the likelihood of global crises due to interdependence. Overall, the topological quantification of Indian inland trade through a complex systems lens could help in understanding the trade network's resilience and recovery through the identification of the sections of the network most vulnerable to external disruptions. The study of interstate trade at a local scale can also be extended to understand virtual water trade. Analyzing the virtual water traded through agricultural and non-agricultural commodities, together with the topological properties of the network and their evolution, can help understand impacts on local water systems and aid informed decisions on the issue of water security \cite{konar2011water, kumar2011export,d2019global, graham2020future,nishad2022virtual}. \section*{Data and Code availability} The data sets used in this study are obtained from the Ministry of Commerce and Industry {\cite{data2021}}. For the network analysis, NetworkX (\url{https://networkx.org}), an open-source Python package, is used. \section*{Author contributions} SK, RK and UB designed the experiments. SK performed the analysis. RD, RK and UB wrote the manuscript with inputs from SK. SK and RD contributed equally. \section*{Acknowledgments} The authors acknowledge Angana Borah, Divya Upadhyay, Pravin Bhasme and Shekhar S Goyal for active discussions and constructive comments on the manuscript. UB acknowledges the IITGN Startup grant and MHRD/IISc STARS ID-367 for funding support.
\section*{Abbreviations} DITN: Domestic Interstate Trade Network, DGCIS: Directorate General of Commercial Intelligence Statistics, AP: Andhra Pradesh, AR: Arunachal Pradesh, AS: Assam, BR: Bihar, CH: Chhattisgarh, CN: Chandigarh, DD: Daman and Diu, DL: Delhi, GA: Goa, GJ: Gujarat, HP: Himachal Pradesh, HR: Haryana, JH: Jharkhand, JK: Jammu and Kashmir, KA: Karnataka, KL: Kerala, MH: Maharashtra, ML: Meghalaya, MN: Manipur, MP: Madhya Pradesh, MZ: Mizoram, NL: Nagaland, OD: Odisha, PN: Punjab, RJ: Rajasthan, SK: Sikkim, TN: Tamil Nadu, TR: Tripura, TS: Telangana, UP: Uttar Pradesh, UR: Uttarakhand, WB: West Bengal \clearpage \bibliographystyle{unsrt}
\section{Introduction} RPCs are powerful detectors used in many HEP experiments. Their good time resolution and efficiency, in addition to their simplicity and low cost, make them excellent candidates for very large area detectors. The high resistivity of the glass plates helps to prevent discharge damage in these detectors, but this feature becomes a weakness when it comes to their use in high-rate environments. A semi-conductive glass RPC (GRPC) is a solution to overcome this issue. The low resistivity of its doped glass accelerates the absorption of the avalanche charge created when a charged particle crosses the RPC. A recent beam test at DESY in January 2012 with a high-rate electron beam constitutes a validation of this new concept. The GRPC detector is based on the ionization produced by charged particles in a gas gap. A typical gas mixture is $93\%$ TFE ($\rm C_2F_4$), $5\%$ $\rm CO_2$ and $2\%$ $\rm SF_6$, contained in a $1.2~\rm mm$ gap between two glass plates. A high voltage between $6.5~\rm kV$ and $8~\rm kV$ was applied to the glass through a resistive coating, ensuring the charge multiplication of the initial ionizations in avalanche mode with a typical gain of $10^7$. \begin{figure}[!h] \centering \subfloat[]{\label{fig:design-a}\includegraphics[width=0.95\linewidth]{rpc.pdf}}\\ \subfloat[]{\label{fig:design-b}\includegraphics[width=0.45\linewidth]{scglass.jpg}} \vspace{0.5cm} \subfloat[]{\label{fig:design-c}\includegraphics[width=0.365\linewidth]{pad.png}} \caption{\footnotesize (a) Schematic drawing of a GRPC with electrodes made of silicate glass. (b) Photo of a $\rm 30~cm\times 30~cm$ GRPC with semi-conductive glass.
(c) Readout pad of size $1 \times 1~\rm cm^2$.} \label{fig:design.} \end{figure} The new aspect of this detector is the low resistivity of the doped silicate glass (less than $10^{10}$--$10^{11}~\rm \Omega.cm$, compared to the $10^{13}~\rm \Omega.cm$ typical of float glass), provided by Tsinghua University following a new process \cite{Wang}. The glass plate thickness is $1.1~\rm mm$ for the cathode and $0.7~\rm mm$ for the anode. The resistive coating is colloidal graphite of $1~M\Omega/\Box$ resistivity. The gas was uniformly distributed in the chamber using a channeling-based distribution system. Ceramic balls of $1.2~\rm mm$ diameter were used as spacers. The total GRPC thickness was $3~\rm mm$. The signal was collected by $1 \times 1~\rm cm^2$ copper pads (figure \ref{fig:design-c}) connected to a semi-digital readout system with 3 thresholds, identical to the one equipping the GRPC chambers of the SDHCAL prototype developed within the CALICE collaboration \cite{Imad1}\cite{Imad2}. \section{DESY test beam} Four $30\times 30~\rm cm^2$ area RPCs were built following the design shown in figure \ref{fig:design-a} and were tested at DESY in January 2012. The DESY II synchrotron provides an intense and continuous electron beam with an energy up to $6~\rm GeV$. The particle rate depends on the beam energy, with a maximum of $35~\rm kHz$. The beam size is a few $\rm cm^2$. Two scintillator detectors were placed upstream of the detectors; their role was to measure the beam rate. \begin{figure}[!h] \centering \label{fig:desy}\includegraphics[width=0.5\linewidth]{beamprof.pdf} \caption{\footnotesize Beam profile in the chambers with $e^{-}$ at $2~\rm GeV$.} \end{figure} One additional GRPC made with standard float glass was added to the setup. \section{Results \& discussion} \subsection{GRPC performances} The local efficiency and multiplicity were measured by using three of the chambers to reconstruct particle tracks and determining the expected hit position in the fourth.
The multiplicity $\mu$ is defined as the number of fired pads within $3~\rm cm$ of the expected position. The efficiency $\epsilon$ is the fraction of tracks with $\mu \geq 1$. The efficiency (figure \ref{fig:desy-a}) and multiplicity (figure \ref{fig:desy-b}) were measured as a function of the polarization high voltage. The same $50~\rm fC$ threshold was used for all voltages, and $7.2~\rm kV$ was chosen as the working point, giving $(\mu,\epsilon) = (1.4,~95\%)$. \begin{figure}[!h] \centering \subfloat[]{\label{fig:desy-a}\includegraphics[width=0.52\linewidth]{EffHV.pdf}} \subfloat[]{\label{fig:desy-b}\includegraphics[width=0.52\linewidth]{MulHV.pdf}} \caption{\footnotesize (a) Efficiency vs high voltage. (b) Multiplicity vs high voltage.} \end{figure} \subsection{Running in a high rate beam} The scintillator detectors were used to determine the total particle flux, which was then divided by the beam RMS area ($\approx 4~\rm cm^2$) to obtain the rate per unit area. The measured ($\mu$, $\epsilon$) for different beam rates are plotted in figure \ref{fig:EFFRate}. \begin{figure}[!h] \centering \includegraphics[width=0.75\linewidth]{EFFRate.pdf} \caption{\footnotesize Efficiency vs rate for the different RPCs. The orange line corresponds to the GRPC with float glass; the semi-conductive chambers are shown in different colors.} \label{fig:EFFRate} \end{figure} The chamber with standard float glass (GRPC 1) becomes inefficient at rates exceeding about one hundred $\rm Hz/cm^2$ (above $1~\rm kHz/cm^2$ the efficiency is below $10\%$), while the semi-conductive chambers (GRPC 2-5) maintain a high efficiency ($\geq 90\%$) up to at least $9~\rm kHz/cm^2$.
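The efficiency and multiplicity measurement described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical hit data, not the actual analysis code: the function names and the toy pad coordinates are invented, and the multiplicity is interpreted here as the mean number of nearby fired pads over tracks with at least one fired pad, consistent with the definitions in the text.

```python
# Sketch of the efficiency/multiplicity measurement: a track extrapolated
# from three chambers predicts a hit position in the fourth; fired pads
# within 3 cm of that position are counted (hypothetical data throughout).

def count_fired_pads(expected_xy, fired_pads, radius_cm=3.0):
    """Number of fired pads within `radius_cm` of the expected position."""
    ex, ey = expected_xy
    return sum(1 for (x, y) in fired_pads
               if ((x - ex) ** 2 + (y - ey) ** 2) ** 0.5 <= radius_cm)

def efficiency_and_multiplicity(tracks):
    """tracks: list of (expected_xy, fired_pads) per reconstructed track.

    efficiency   = fraction of tracks with at least one nearby fired pad
    multiplicity = mean number of nearby fired pads for those tracks
    """
    counts = [count_fired_pads(xy, pads) for xy, pads in tracks]
    hits = [n for n in counts if n >= 1]
    eff = len(hits) / len(counts)
    mul = sum(hits) / len(hits) if hits else 0.0
    return eff, mul

# Toy data: 4 tracks, pad centres in cm
tracks = [
    ((10.0, 10.0), [(10.5, 10.0), (11.5, 10.0)]),  # 2 pads fired
    ((15.0, 20.0), [(15.0, 20.5)]),                # 1 pad fired
    ((5.0, 5.0),   []),                            # no response
    ((25.0, 12.0), [(25.0, 12.0)]),                # 1 pad fired
]
eff, mul = efficiency_and_multiplicity(tracks)
print(f"efficiency = {eff:.2f}, multiplicity = {mul:.2f}")
# -> efficiency = 0.75, multiplicity = 1.33
```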
\section{Conclusion} Semi-conductive glass RPCs were tested at DESY in a high rate electron beam, producing very encouraging results: the main weakness of standard RPCs, namely the drop of efficiency at high rate, is clearly overcome, with efficiencies remaining around $90\%$ at a rate of $9~\rm kHz/cm^2$. This feature, combined with the GRPC's capability to provide precise time measurements, makes them excellent candidates for future LHC muon detector upgrades. Additional studies of their aging under high rate conditions are underway. A multi-gap version is also under investigation.